RE: Turing test

John E. Rylander (rylander@prolexia.com)
Wed, 5 Feb 1997 17:19:53 -0600

Hmm -- the reflector ate part of it last time. Here we go again.
(Shades of Glenn's post on the global flood.... I just hope mine are
even half as useful as his! :^> )

Dave,

My response will be even shorter than yours, this time. :^>

The nutshell response is this: even as devil's advocate, you seem to be
conceding -my- main point, which is that if we take HAL (or a person) to
be completely explainable in physically reductionistic terms, then HAL
(or a person) lacks type-a consciousness, despite the presence of type-b
consciousness. This defeats contemporary materialism and also most
popular versions of functionalism, in my view.

It appears that our differences wrt HAL lie in whether or not we deem
HAL to be completely reductionistically explicable (which I didn't spend
much time addressing), but I don't think that's worth arguing about.
I think he is; you seem not to think so. I'm willing to let the matter
drop there. (I wouldn't be if, as you sometimes imply, my goal were to
"convince" HAL that -I- am conscious; but I don't really care at all
about that, partly because I don't think HAL is conscious.)

One note, to restate for emphasis: by "is completely explainable in
current physically reductionistic terms", I don't mean merely "has an
empirical component that entirely acts in accord with the current laws
of physics, possibly with an immaterial component that is (for all we
know) something else entirely." No sir. I mean this is the whole ball
of wax, wrt causality/predictivity, AND ontology. And I should
re-re-emphasize "current" in this: I made quite explicit that seeing
type-a consciousness as going beyond -current- physicalism doesn't
entail "working miracles". Occasionally I think you see I mean these
things, but usually I'm not at all sure.

Okay, two notes :^) : you repeatedly ask me to enhance the definition of
type-a consciousness (for those new to the thread, it was just an
ostensive pointing to our introspective/perceptual awareness), implying
that its positivistic/empirically predictive vagueness renders it
unsuitable for philosophical argument. Well, sorry! I don't think
that's true in this case anyway, and besides, I just haven't discovered
any significantly better definition yet, if there even is one -- but I'm
open to suggestions. (Not hypnotic.) But commonly when people try to
make it more empirical for scientific or engineering evaluation, they
end up talking about type-b consciousness, sometimes without even
realizing it. My definition is operationally vague (I absolutely
agree!), but also philosophically vaguely correct (which is worth -a lot
more than nothing-, in my view).

Thanks for your comments, and sorry my arguments made you lose your good
humor, Dave!

--John

----------
From: David Bowman[SMTP:dbowman@tiger.gtc.georgetown.ky.us]
Sent: Wednesday, February 05, 1997 4:15 pm
To: evolution@ursa.calvin.edu
Cc: rylander@prolexia.com; dbowman@tiger.gtc.georgetown.ky.us
Subject: Re: Turing test

I wish to respond to John Rylander's last post. My response will not be as
long as his because I am wearying of this subject and don't have the stamina
to make a detailed response to everything he said in his last opus, and,
being the devil's advocate, my heart is not in my arguments. Even so, this
post ended up being much longer than I would have liked. It's mostly sheer
pedantry that is driving me on.

>One could use your argument to the conclusion that humans lack type-a
>consciousness just as HAL does. Eliminativists do something just like that.
>The other way is to argue that BOTH are conscious, or more precisely, that
>BOTH require consciousness as part of their explanation. You want this
>approach, I take it.

Not necessarily. My client doesn't care one way or the other. Eliminativism
is just as good as allowing both HAL and humans to have type-a consciousness.
The main point of my objection is that your arguments are insufficient to
discriminate between HAL and humans regarding the presence or absence of
type-a consciousness one way or the other. This problem is especially acute
in that your definition of type-a consciousness is too ambiguous to tell if
the presence or absence of type-a consciousness would actually leave any
distinctive external physical evidence or not. (The Turing test is useless in
this regard as it only tests for type-b consciousness.)

>However, I'm not at all willing to grant the reductionistic premise (or its
>lack of consciousness conclusion) for humans. Both are dramatically more
>plausible for HAL, simply because (1) these reductionistic laws were
>essentially relied upon in his very design; if ever his designers detected a
>deviation from them, it would be considered a bug to be fixed;

So what if HAL's designers relied on the laws of physics? It seems that your
Designer sees to it that you follow the laws of physics as well. (Although,
from farther down in your response it seems that you may want to claim
to be a miracle worker of sorts, and that you are somehow endowed with these
miraculous abilities (which physics cannot cope with) by your type-a
consciousness. I can't see how such consciousness bestows this ability,
considering that your type-a definition is too thin to tell anything at all
about the consequences of its presence or its absence.)

> (2) HAL's
>hardware is -radically- different from humans', so to the extent that anti-
>reductionism is based on consciousness, and consciousness possibly based upon
>hardware, we've reason to doubt both HAL's irreducibility and his type-a
>consciousness (this doesn't seem to affect type-b at all, which seems in-
>principle unaffected by changes in hardware);

What reason to doubt? What do the hardware architectural differences between
HAL and human brains have to do with the presence or absence of type-a
consciousness? (Maybe your reasoning could be followed better if your
definition of type-a consciousness were less ambiguous and actually predicted
some objectively measurable effects.)

> (3) there is no introspective
>argument-from-analogy with HAL as there is with us humans or, to a
>significant degree, proto-humans, and so no such argument for irreducibility;

So there is strength in numbers here? Suppose HAL took control of the factory
that built him and he mass-produced millions of computers with HAL-type
architecture with which he could communicate (i.e., his offspring). He could
reason as you do that their similarity to him means that they all have type-a
consciousness (since he knows that he does), but that you, who are so
different from HAL and his children, do not have it. (Who knows, maybe HAL
may even have a religion that postulates that his designers were an
impersonal force.)

>and (4) I don't think even you doubt HAL's reducibility, right?

My client says he has no idea one way or the other about this.

>The other thing about HAL is that there's simply no NEED to bring type-a
>consciousness into the picture. Everything HAL does -- every word, every
>action, every internal state -- is perfectly explicable without
>consciousness, and there's no independent reason to deem him conscious.
>Ockham's razor seems apropos here.

HAL could say the same thing about you once he learned enough neurology
(unless you can work miracles because of your type-a consciousness).

>But aren't humans similarly reductionistic? While materialism-with-current-
>laws-of-physics is, I think properly (for now anyway), part of the
>METHODOLOGY of biological sciences, even when studying the brain, your
>argument requires that this be not merely methodologically adopted by
>scientists, but ONTOLOGICALLY TRUE, as well.

No, it doesn't. My argument is noncommittal on the ontology of the situation
regarding the presence or absence of type-a consciousness for either HAL or
humans. My client is agnostic here. My argument is solely a negative one.
I'm calling your bluff concerning *your argument* for the *absence* of type-a
consciousness in HAL. My client doesn't take a position on the ontology of
the situation.

>Let me clarify my point significantly: using my HALlish argument, I think
>it's demonstrable that because human beings are conscious, they are not
>reducible to physics AS CURRENTLY UNDERSTOOD. That's the key qualifier.

If it is demonstrable, you sure haven't demonstrated it. You just claimed it.
How do you know that human (type-a) consciousness prevents human reducibility
to currently understood physics? I'll admit that not all the detailed
mechanisms of human brain functions are yet explicitly understood, but that
doesn't mean that those mechanisms, whatever they are, are not reducible to
physics--even the physics of today--as embodied in the so-called standard
model. How does the presence of type-a consciousness prevent reducibility at
the physical level?

>The real issue here and now is whether there are conditions under which laws
>of physics (1) taken as the total explanation and description (that is, no
>action or object is inexplicable), and (2) as we -currently- understand them,
>entail the existence of type-a consciousness. (A follow-on question would be
>whether HAL embodies these conditions, but I don't think it's necessary to ask
>that.)

If that is the issue then how about if you addressed your arguments to that
point? I would first suggest that in order for any argument that you come up
with for the proposition that "type-a consciousness implies irreducibility of
behavior to currently understood physics" to be persuasive, it would help if
your definition of type-a consciousness were firmed up enough so that one can
actually see what the implications of its presence or absence are (regarding
physical actions).

>However, type-a consciousness (the ordinary, vague, seemingly self-evident
>introspective/perceptual variety) seems to be another matter entirely. Take,
>e.g., a visual perceptual field. Neuroscientists continue to make truly
>breathtaking progress in understanding how the brain processes visual
>stimuli. However, so far as I know at any rate, they are not explaining
>REDUCTIONISTICALLY things like the conscious experience of, say, seeing a
>chess board. They can explore perhaps reductionistic ASPECTS of such
>explanations (e.g., neural activity, just presuming that such is at least
>largely explainable reductionistically), but the way that they know what's
>happening at the CONSCIOUS (type-a) level is by CORRELATION WITH
>INTROSPECTIVE REPORTS, a critical "unreduced" component, and certainly not by
>deriving the reality of such conscious experience from known physical,
>chemical, or biological laws.

It seems that now you have changed the point of discussion and answered a
different point than the one I thought we were discussing. This is an
argument that a natural physical description of brain operation doesn't
explain what an internal type-a perception is. Even my client will grant
that immaterial/unphysical things may not be wholly described with physical
scientific descriptions. (He is actually noncommittal regarding the
existence of an entire "spirit world".) Your argument above is not an
argument that the presence of type-a consciousness is somehow incompatible
with physical law and will therefore leave physical "fingerprints" of some
kind (which are inexplicable by physical law) that will therefore serve as an
indicator of the presence of said type-a consciousness. From what I can tell,
you have conflated the idea that physics is incomplete (in that it is
incapable of describing conceptually immaterial nonphysical concepts) with
the idea that the presence of type-a consciousness in association with a
physical object (such as a functioning brain or computer) will produce
physical effects which are inexplicable using currently understood physics.
These are two different issues.

>(4) Try a different approach: If chemical or neurological brain states when
>    combined with the laws of physics and chemistry lead to type-a
>    consciousness, then it should be possible to show how the eliminative
>    materialists' theories violate the laws of physics. I don't think
>    there's even a hint of a reason to believe that they do. <snip>

I don't think so either. Thus, this is not any evidence that the presence of
type-a consciousness leads to any *physical effects* that can be
distinguished from those produced in the absence of a type-a consciousness
one way or the other.

>(1) HAL is a being completely, without residue, and by design explicable
>    in current purely physically reductionistic terms. (I've seen the
>    blueprints, Dave. They rely on this. :^> )

OK, so? Just because his physical operation is physically describable in
"physically reductionistic terms" is no reason to conclude that HAL can't
have a type-a consciousness. He might have a "residue" of type-a
consciousness.

>(2) No explanation in current purely physically reductionistic terms
>    refers to type-a consciousness. (Prove this wrong, win a trip to
>    Sweden. :^> )

Quite true. That's why we can't use the physical operation of something to
tell if it has a type-a consciousness or not.

>(3) If any explanation of O is complete and without residue, nothing
>    outside that explanation is a part of the explanation of O.

I'm not admitting your premise here. I'll allow HAL to be *physically*
describable in terms of physics, but that doesn't address the issue of
whether or not he has a residue of a type-a consciousness which cannot be
described by physics. (I'm admitting the incompleteness of physics here
regarding some *immaterial* concepts--not physical ones. Since your
definition of type-a consciousness is too vague to be able to tell one way or
the other, it is quite possible that it is indeed such an immaterial concept.
But you haven't yet shown that type-a consciousness must leave any *physical*
evidence that is inexplicable by physics.)

>Therefore,
>(4) HAL lacks type-a consciousness (or more precisely, type-a
>    consciousness is not part of the explanation for HAL).

Your conclusion doesn't necessarily follow. It may be true that invoking a
type-a consciousness is not *needed* to explain the physical operation of
HAL, but that doesn't address the issue of whether or not he has one. Similar
reasoning applied to you would say you have no type-a consciousness either,
*unless* you are indeed a miracle worker whose physical behavior violates the
laws of physics, and these miraculous powers are attributable to your
type-a consciousness by some as yet undefined means.

>(5) Humans have type-a consciousness.

You haven't shown this. It's an assumption on your part. (I personally
believe this as well, but my client is agnostic on the issue.)

>Therefore
>(6) Humans are not completely and without residue explicable in current
>    purely physically reductionistic terms.

This may be true relative to their immaterial type-a aspect, but it says
nothing about their *physical behavior* not being completely describable in
physically reductionistic terms *unless* humans are miracle workers by virtue
of their possession of a type-a consciousness. But if this is indeed the
case, you haven't shown it, since you have never made a connection between an
object's possession of a type-a consciousness and a distinctive physical
effect (inexplicable in terms of physics) produced by that consciousness.

Your arguments are insufficient to decide against animism, let alone against
HAL's type-a consciousness. They are even insufficient to decide in favor of
type-a consciousness for humans--except by hypothesis.

I think in order to make further progress your definition of type-a
consciousness needs to be made much more precise so we have some idea as to
what to expect from the possession of type-a consciousness that is
distinctively different from merely a type-b consciousness that passes the
Turing test.

David Bowman
dbowman@gtc.georgetown.ky.us