Re: Are we machines?

Chris Cogan (ccogan@sfo.com)
Fri, 3 Dec 1999 20:12:47 -0800

>
> [...]
>
> >Chris
> >This is like someone arguing, in 1910, that the fact that adding machines
> >can't play chess is evidence that "the basic AI assumption that humans are
> >just machines is on the wrong track." More likely, it's evidence that the
> >machines aren't well-programmed for this task. What will you say if, in a
> >few years, machines are able to recognize faces easily?
> >
Brian
> I haven't been following this thread, but I can't resist barging in
> anyway since I, like Steve, am an avid chess player. Formerly
> competitive but now just for fun.
>
> My central question, I guess, is what the ability of a machine
> to play chess (or anything else) has to do with humans being
> machines. I think it is typically the case that machines do
> whatever they do better than humans. Isn't this why they were
> designed and built? An automobile goes faster than a human can
> run. So humans are cars? The logic escapes me.

The logic, such as it was, was to the effect that, since computers cannot
recognize faces and humans can, humans must have been designed, or at least
that the ability of humans to recognize faces was somehow *evidence* of
design. Yes, of course it's silly, even if we *assume* that computers can't
recognize faces or play chess or the like. But,
that's the way *most* arguments for non-naturalism are (see my post just
prior to this one).

You are basically right. There is no probative relationship of the type that
Stephen wants to claim or suggest.

> I'm also curious about this "basic AI assumption". Can you support
> that this so-called basic assumption actually plays any significant
> role in AI? Steve has made some valid points, one of which I'll
> re-phrase in the current context. If this truly is a basic AI
> assumption, then why were chess-playing machines deliberately
> designed so as to play chess in a way that is fundamentally
> different from the way humans play chess?

Stephen is confusing AI with chess-playing. Of course, these computers and
programs are not even designed to *simulate* human mental processes. That is
not, and never was, their purpose. They are designed to play chess (or
whatever) as well as they can be made to. The basic AI assumption has
nothing to do with any well-known chess program. This is also true of the
other examples Stephen gives. Clocks are not made to simulate the rotation
of the Earth; clock designers, as such, have no interest in the fact that
the Earth rotates every 24 hours.
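
To make the contrast concrete, here is a minimal Python sketch of the kind
of brute-force look-ahead such programs use. The game (take 1-3 stones in
turn; whoever takes the last stone wins) is my own stand-in for chess,
chosen so the sketch runs as-is; a real engine adds pruning, opening books,
and a hand-tuned evaluation, but the principle is the same: exhaustive
search plus a numeric score, with no attempt to mimic human pattern
recognition.

# Minimax search on a trivial take-away game: players alternately
# remove 1-3 stones, and whoever takes the last stone wins.
def minimax(stones, maximizing):
    if stones == 0:
        # No stones left: the *previous* player took the last stone
        # and won, so the side to move here has lost.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Choose the move whose resulting position scores best for us.
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, False))

print(best_move(10))  # prints 2: leaving a multiple of 4 wins

Note that nothing in this procedure resembles how a human master sees a
board; it simply tries everything and keeps the best score, which is
exactly the "fundamentally different" way of playing that Brian mentions.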

Genuine AI research and AI programs, along with AC (artificial
consciousness) research, approach such things from a radically different
point of view, apart from a category of very limited artificial
intelligence (such as that used in expert systems or other commercial
software). Genuine AI seeks to have computers do things the way humans do
(at least down to some significant level of detail). Deeper Blue and other
such programs are only remotely related to this type of research, mainly in
that they are computer programs, or in that, *occasionally*, they will use
some idea from AI research.

AI research, in fact, seeks ultimately to make a computer function the way
the mind does. However, much work is done that will not *directly* lead to
such a result, but that will, it is hoped, provide information that can lead
to further insight into the mind. Having robots wander around in toy
environments, for example, showed that the techniques used in such cases
did *not* scale up well to more complex environments, which implied that
real organisms must be using some significantly different method.

I would say that the basic AI assumption is this: that humans *are* machines
in a meaningful sense, and that computers can, *in principle*, be made or
programmed so as to *actually* perform "mentally" in a way that is truly
reflective of human intelligence and consciousness. They will
always be *somewhat* different, however, until we give them "bodies" that
strongly replicate the sensory input and effector output of the human mind.
Such "bodies" need not, in principle, be real, physical bodies, but they
must adequately provide the basis for an experience of living in our world.
That is, they must be such that, if a human's normal body sensations and
actions were switched to such a "body" (perhaps in a virtual reality), the
person would feel that he was in essentially the same world.

However, I do not believe that such world-replication is necessary to
achieve a *major* degree of "humanness" in a computer program; still, the
computer must be given *some* potentially meaningful world. (Imagine
yourself "trapped" inside a computer: you would want *some* kind of world
and some kind of ability to act in order to feel human. If all you got was
strange and transient "sparkles" of sensation, for one example, it would
not be long (assuming your mind was still functioning as a human's normally
would if it were still in a brain) before you started to hallucinate, etc.)
To achieve a sane and human-*like* computer program, we will have to
provide it with at least *some* sort of stable world (e.g., organized
visual patterns and auditory communication).

> Oh, and one last question. What definition is being used for
> machine?

CC
For me, a machine, broadly speaking, is "An intricate natural system or
organism, such as the human body." (American Heritage Dictionary) How,
though, do we distinguish a machine from a non-machine, in this case? I
would say that a machine is also *integrated* in some way, so that it
behaves as a unit (not merely as some loose system). Even this may be too
loose, however, because it would make a flock of birds into a kind of
machine, which seems to me perhaps to go too far, though I'm not
absolutely opposed to it.

Now, despite the fact that I do *not* (as usual) hold the view imputed to me
by Stephen (i.e., that chess-playing machines are significantly like humans
mentally, etc.), I *do* hold that, in a broad sense, we *are* machines.
Stephen will have no case in this respect until he can prove that it is
*logically* impossible for a machine to be conscious, etc. I think he can
only do this by definition (i.e., by defining anything that is conscious
(etc.) as a non-machine, or by simply *asserting* that anything that's a
machine is non-conscious). He certainly can't do it by pretending that
present limitations on technology imply the existence of a designer.

It doesn't really matter whether we call humans machines or not.

--Chris Cogan