RE: Turing test

John E. Rylander (rylander@prolexia.com)
Tue, 4 Feb 1997 10:58:57 -0600

Sorry for the delay in replying -- real-world duties call. I'll probably have to be even slower in replying to your reply to this. :^>

First, let me say thanks for the intelligence and good humor of your reply. Too often, at least one of those is missing in these sorts of discussions.

Second, while this is related to Christianity and Evolution in a significant way (many consider the evolution of consciousness to be difficult for reductionistic physics and biology to handle; I consider its mere -existence- to be problematic for them), I don't know if it's topical enough to discuss here indefinitely. I've gotten and appreciated :^> good feedback from Brian Harper and Clarence Sills, but I don't want to presume that all find this interesting.

Now to the substance of your points.

No comments except for 2 quick points on your response to Bill's posting. (1) I agree with you that brain function is likely both significantly more complex and more indeterministic than the traditional Turing machine, though it's not clear how mere complexity and indeterminism per se substantively affect any of these issues. (Though they may be correlated with things that -do- affect these issues.) (2) I too wish my ostensive, introspective definition of consciousness were more precise. On the other hand, from a philosophical perspective (truth-centric, regardless of utility), better to be vaguely right than precisely wrong.

You (Dave Bowman :^>) wrote =========

John displays his meat-based anti-semiconductorist bigotry below. :-)
> E.g., HAL 9000 was clearly "conscious" in sense (b), but given that
>we (presumably) have a full, without-residue explanation of everything "he"
>does and is that makes no reference to consciousness in sense (a), then we
>can safely conclude that he was not conscious in sense (a). (<snip> ...,
>and so IF we assume that HAL is COMPLETELY explainable by (say) solid-state
>physics, and if such physics make no reference to/explanation of
>consciousness in sense (a), then we can conclude that HAL is NOT conscious
>in sense (a), even epiphenomenally.)

Just because HAL 9000 doesn't explicitly violate the laws of solid state physics, materials science, electronics, etc. doesn't mean he can't have type-a consciousness any more than your brain not violating the laws (reducible to physics) of aqueous chemistry, membrane chemistry, electrochemistry, organic chemistry, other forms of biochemistry, polymer science, molecular biology, etc. means that you don't have type-a consciousness either.

....

Since when do we learn that HAL is "-certainly- -fully- explainable without reference to anything like consciousness, because he is a reductionistically materialistic machine"? Just because HAL is semiconductor-based doesn't mean he is any more or less fully explainable without reference to consciousness than you are. I haven't studied HAL's circuit diagrams in as much detail as I should have, but I was under the impression that HAL did not operate algorithmically as a deterministic Turing machine. Rather, he used non-deterministic methods to avoid hard NP-completeness problems that would defeat a mere Turing machine. I think he put random thermal fluctuations to good use (via large gain amplification of thermal noise voltages) in his thoughts, which made his behavior unpredictable from a deterministic point of view. Your coercive defeater doesn't apply to HAL any more than it applies to you. Your willingness to use the Turing test to indicate (suggest type-a) consciousness in hominids but not for HAL is just blatant chauvinistic carbonocentrism. (Besides, you even admitted that the Turing test only works for the type-b definition of consciousness anyway.)

============================

I think this is an insightful objection -- I'm glad you brought it up. (As an aside, when you say "explicitly" I'm not sure if you mean to leave open the possibility that HAL -implicitly- violates some laws of contemporary physics. I'll assume not and proceed from there.)

Your argument is basically this: for all I've argued, HAL, despite his being a "reductionistically materialistic machine", may need consciousness to explain his behavior -just as much as we do-, and it's only "blatant chauvinistic carbonocentrism" to deny as much! (Oooooh, them's fightin' words!! :^> )

One could run your argument toward the conclusion that humans lack type-a consciousness just as HAL does; eliminativists do something very much like that. The other way is to argue that BOTH are conscious, or more precisely, that BOTH require consciousness as part of their explanation. You want this approach, I take it.

However, I'm not at all willing to grant the reductionistic premise (or its lack-of-consciousness conclusion) for humans. Both are dramatically more plausible for HAL, simply because (1) these reductionistic laws were essentially relied upon in his very design; if ever his designers detected a deviation from them, it would be considered a bug to be fixed; (2) HAL's hardware is -radically- different from humans', so to the extent that anti-reductionism is based on consciousness, and consciousness is possibly based upon hardware, we've reason to doubt both HAL's irreducibility and his type-a consciousness (this doesn't seem to affect type-b at all, which seems in-principle unaffected by changes in hardware); (3) there is no introspective argument-from-analogy with HAL as there is with us humans or, to a significant degree, proto-humans, and so no such argument for irreducibility; and (4) I don't think even you doubt HAL's reducibility, right?

The other thing about HAL is that there's simply no NEED to bring type-a consciousness into the picture. Everything HAL does -- every word, every action, every internal state -- is perfectly explicable without consciousness, and there's no independent reason to deem him conscious. Ockham's razor seems apropos here. I mean, the Turing test wasn't made by God, right? It's just one mathematician's suggestion for type-b consciousness appraisal (though -I don't know- if Turing distinguished type-b from type-a -- that was a long time ago, and eliminative behaviorism [forerunner to functionalism] was in its heyday). Let's not give it some irrational level of profundity just because it makes science fiction stories really cool!

But aren't humans similarly reductionistic? While materialism-with-current-laws-of-physics is, I think properly (for now anyway), part of the METHODOLOGY of biological sciences, even when studying the brain, your argument requires that this be not merely methodologically adopted by scientists, but ONTOLOGICALLY TRUE as well. This is a dubious assumption. The basis for this methodology isn't that it reflects an ontological reality, but that it's useful, even if usefully fictional from a philosophical perspective (as are Newtonian physics, behavioristic psychology, Freudian psychiatry, etc. etc. -- all useful, all philosophically false). Again, for science, being pragmatic, much better a useful fiction than a useless truth.

Let me clarify my point significantly: using my HALlish argument, I think it's demonstrable that because human beings are conscious, they are not reducible to physics AS CURRENTLY UNDERSTOOD. That's the key qualifier. I don't think we've seen the last scientific revolution, and I'm confident that science will either expand in the future to robustly incorporate genuine type-a consciousness (-conceivably- with no changes at all in current predictions but only an expansion of the ontology [to include some form of mental properties or objects], unlike the Newtonian-to-Einsteinian revolution, which changed nearly every prediction at least to a tiny degree), OR will be seen as an in-principle limited form of explanation of human reality. (Maybe a bit of both. And my guess is there will be some predictive changes, probably having to do with human choice. Even if there are no predictive changes, it may still be that some predictions will be differently interpreted, as unpredictable but not truly random, e.g.)

The real issue here and now is whether there are conditions under which the laws of physics, (1) taken as the total explanation and description (that is, no action or object is inexplicable), and (2) as we -currently- understand them, entail the existence of type-a consciousness. (A follow-on question would be whether HAL embodies any of these conditions, but I don't think it's necessary to ask that.)

Clearly, physical laws do entail type-b consciousness under some circumstances; indeed, it may be better to say that type-b consciousness (functional "consciousness") is simply another name for some complex but not-in-principle-mysterious or controversial physical processes that perform certain functions.

However, type-a consciousness (the ordinary, vague, seemingly self-evident introspective/perceptual variety) seems to be another matter entirely. Take, e.g., a visual perceptual field. Neuroscientists continue to make truly breathtaking progress in understanding how the brain processes visual stimuli. However, so far as I know at any rate, they are not explaining REDUCTIONISTICALLY things like the conscious experience of, say, seeing a chess board. They can perhaps explore reductionistic ASPECTS of such explanations (e.g., neural activity, just presuming that such is at least largely explainable reductionistically), but the way that they know what's happening at the CONSCIOUS (type-a) level is by CORRELATION WITH INTROSPECTIVE REPORTS, a critical "unreduced" component, and certainly not by deriving the reality of such conscious experience from known physical, chemical, or biological laws.

Now is this just a matter of a bit more work, getting the details right? I don't think so at all (though I'm very open to being shown wrong, and there's more than likely a Nobel prize in it for the first one to get these pesky details right! :^> ), and here's why:

(1) In every discussion I've had with reductionists, and in all my own informed (but still amateur) thinking about the physics of brain function and consciousness, it's been easy to deal with brain processes, but at no point does the -understanding of the physics or chemistry- lead to anything like a prediction of the existence of type-a conscious experience. It's all a matter of the behavior of various levels of particle organization (quarks to brains), and at no point does, e.g., an actual visual perception enter concretely into the discussion, either causally (being caused, or causing something else) or ontologically.
This concrete, type-a consciousness is either (a) denied (as non-existent [by the eliminative materialists -- radical but bracingly consistent], or as irrelevant [by the -many- purely engineering-minded or -impatient- scientists]), or (b) merely verbally admitted (ontologically denied) via conflation of definitions (often handled as Hofstadter does, -in principle- being eliminative of type-a consciousness, but as a -practical- matter still -talking- in type-a ways because it's an inevitable or at least useful shorthand for the type-b genuine reality, in the process giving the intellectually unwary the impression of having it both ways), or (c) truly admitted via offering explanations that do not derive from any known laws of physics or chemistry ("when we see this PETT scan image, we know the subject is thinking 'I wonder why those pod bay doors aren't opening the way they should', because HAL suggested we try this out on the whole Discovery crew just before they departed.")

(2) So far as I can see, it's not just that they're too busy to figure it out, or that it's hard to get the math right due to its complexity, or something like that. It's that, so far as I or anyone I've ever talked with about this has been able to see, -current- physics just doesn't have the right ontology or causal powers necessary to get this job done. Tomorrow's physics may, for all I know -- don't misunderstand me. I'm not a Cartesian dualist. But -today's- physics doesn't. (And this is why I'm excited by the work of Penrose et al -- I think they realize that physics may need some new concepts here, and they're doing the thankless and necessarily utterly speculative [and often therefore ridiculed] upfront work that may lead to real breakthroughs down the road. But I bet that one of these theorists who buys a theoretical lottery ticket will win someday....)

(3) When people have tried to argue that this is just not a deep or qualitatively different problem, but just a run-of-the-mill exceedingly difficult one (as is, say, predicting the weather, or predicting the detailed results of large-scale Brownian motion, or understanding how, chemically, genotypes lead to phenotypes), I get no significant reply when I ask this: is there any -speculatively conceivable- material process or state that, from -known- laws of physics or chemistry, generates a type-a consciousness event?
Two subpoints:
(3.1) I have to emphasize -type-a- here, because of course the functionalists, referring to type-b consciousness, will rightly point out that there are indefinitely many physical states that cause (better, exemplify or instantiate) -type-b-, functional "consciousness". But that isn't what I'm referring to. (And sometimes even non-functionalists slip into momentary functionalism out of frustration here.)
(3.2) "Conflationists" (those who, normally inadvertently, conflate =
type-a and -type-b consciousness, often being confused by those who have =
a type-b ontology but still use type-a language for pragmatic reasons =
[cf Hofstadter]) often think they have a solution via correlation, as =
with my PETT scan example above. Their reasoning goes this way: we have =
material processes that, presumably, explain type-b "consciousness" =
based only on known physical laws (at least, no troubles we know of), =
and type-b consciousness is at the very least correlated with and in =
that sense explains type-a consciousness, therefore we have material =
processes based only on known physical laws that explain type-a =
consciousness. The fatal oversight is that there are no known physical =
laws that conceivably lead from the materially explicable material =
processes to the type-a consciousness. They're still relying on utterly =
unreduced introspective reports. A mixture of reductionistic and =
non-reductionistic components yields an overall non-reductionistic =
explanation.
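To put that inference schematically -- this is purely my own shorthand, not anything in Dave's post: B stands for type-b consciousness, A for type-a, and "Laws \vdash X" for "X is derivable from the known physical laws alone":

% A sketch of the conflationist inference (LaTeX; amsmath assumed; labels are mine)
\[
\begin{array}{lll}
\text{(i)}   & \text{Laws} \vdash B            & \text{[material processes lawfully yield type-b ``consciousness'']} \\
\text{(ii)}  & B \text{ is correlated with } A & \text{[established only by unreduced introspective reports]} \\
\hline
\text{(iii)} & \text{Laws} \vdash A            & \text{[does NOT follow: a correlation is not a derivation from the laws]}
\end{array}
\]

The move from (i) and (ii) to (iii) would need a lawlike bridge from B to A; the only bridge on offer is (ii), and (ii) is exactly the unreduced component.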

Again, if anyone out there can do this -- demonstrate via -known- physical laws how any conceivable material process not merely correlates with but -lawfully generates- -type-a- consciousness -- I'm confident you'll be flown to Sweden. :^> (I'd guess it'll happen some day, but with some new "physical" concepts, concepts that wouldn't be called "physical" in today's physics.)

(4) Try a different approach: if chemical or neurological brain states, when combined with the laws of physics and chemistry, lead to type-a consciousness, then it should be possible to show how the eliminative materialists' theories violate the laws of physics. I don't think there's even a hint of a reason to believe that they do. I'd guess someday there will be, with some new physical laws/concepts in place; but there surely aren't now, are there? Is there even the slightest hint, based on current physics, that eliminative theories have physics- or chemistry-related errors in them? I think not. But I'd be delighted to be shown wrong!!

So then, the clarified, just-for-Dave :^> HALlish argument goes like this (still informal, I agree; -very- informal from a professional point of view!):

(1) HAL is a being completely, without residue, and by design explicable in current purely physically reductionistic terms. (I've seen the blueprints, Dave. They rely on this. :^> )
(2) No explanation in current purely physically reductionistic terms refers to type-a consciousness. (Prove this wrong, win a trip to Sweden. :^> )
(3) If any explanation of O is complete and without residue, nothing outside that explanation is a part of the explanation of O.

Therefore,
(4) HAL lacks type-a consciousness (or more precisely, type-a consciousness is not part of the explanation for HAL).

(5) Humans have type-a consciousness.

Therefore,
(6) Humans are not completely and without residue explicable in current purely physically reductionistic terms.
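For anyone who likes it in bare symbols, here's the same argument in my own shorthand (again, nothing official, and it folds premises (2) and (3) together): R(x) abbreviates "x is completely, without residue, explicable in current purely physically reductionistic terms", and C(x) abbreviates "type-a consciousness is part of the explanation of x".

% The HALlish argument, schematically (LaTeX; amsmath assumed; my own labels)
\[
\begin{array}{lll}
(1)   & R(\mathrm{HAL})                                      & \text{[by design]} \\
(2,3) & \forall x\,\big( R(x) \rightarrow \neg C(x) \big)    & \text{[no such explanation mentions type-a consciousness, and a complete one leaves no residue]} \\
(4)   & \neg C(\mathrm{HAL})                                 & \text{[from (1) and (2,3)]} \\
(5)   & C(\mathrm{humans})                                   & \text{[we have type-a consciousness, so it figures in explaining us]} \\
(6)   & \neg R(\mathrm{humans})                              & \text{[from (2,3) and (5), by modus tollens]}
\end{array}
\]

The symbols add nothing to the prose; they just make it plain that everything hangs on (2,3) -- the Sweden-trip premise -- and on (5).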

For those not paying attention, how is this relevant to anything? Basically, it defeats contemporary materialism. Does that have any implications? I'd say so. (On the other hand, this argument is certainly not a carte blanche for just any sort of non-materialistic speculation! To say that current physics is not sufficient is NOT to say that it isn't highly relevant! And again, I suspect future physics will either be recognized as simply incomplete -or- substantially more hospitable to consciousness, personhood, and their ilk.)

Well, now that this is finally done with, I think I'll go smash some calculators (HAL jr's, I like to call them) -- just for fun! :^D And did I mention that I look to boot up my "Commander Data Personality Module for Windows NT", just so I can turn it off again right away? AhhHAHAHAHAHA!

Seriously, thanks for your comments, Dave. I'll be even slower in responding next time, especially since there's no end to the amount of discussion this topic warrants. I can certainly think of a number of issues I left out (particularly a possible argument that "if future physics may explain human consciousness, how do we know that under such new physics HAL isn't conscious?", the basic reply being "his hardware is so different, there's no reason even to suspect he'll be conscious, the Turing Test carrying no weight toward that end given HAL's design"), and this (like my prior post) isn't nearly as organized or clear as I'd like, but I just don't have time to do both the quality and quantity of discussion on this I really want to. I'm not a professional philosopher. (And I don't play one on TV.)

I hope HAL's feeling -- er, functioning better. :^> If I've offended him, reinitialize for me, will you? :^) (I only say these things because I'm on Earth, of course!)

--John