Re: Tattersall review of Wolpoff

Brian D. Harper (harper.10@osu.edu)
Fri, 31 Jan 1997 14:39:44 -0500

At 07:44 PM 1/30/97 -0600, Glenn wrote:

>At 09:42 AM 1/30/97 -0500, Brian D. Harper wrote:
>[snip]
>>Let's look at this from the point of view of algorithmic information
>>theory. Given that the spiritual has an effect on the physical, is
>>this effect compressible? In other words, can we have something
>>more than a set of (incompressible) observations on how the spiritual
>>has affected the physical? Can we compress these observations into
>>a set of rules that allow us to determine objectively when a spiritual
>>event has occurred based upon our past observations? If this truly
>>worked then wouldn't the truly objective person conclude that the
>>physical effects we observe are the result of a physical phenomena
>>and that we're just a bunch of buffoons fooling ourselves with all
>>this blather about spiritual stuff?
>
>I think it is precisely the incompressible set of "rules" that we associate
>with humanity that makes the reductionist view improbable. My wife may say
>the same thing to me 10 different times, with me reacting in the same way,
>but on the eleventh time, I might surprize her by getting angry or
>frustrated or whatever.
>
>If humanity can be reduced to a set of rules then they are not human.

This happens to me all the time. My wife tells me to take out the
trash 10 times. On the eleventh I get mad and tell her to take it
out herself. No no, just joking, I'm too young to die :-).

I think your example illustrates very well a basic feature of
randomness. It would be analogous say to the gambler who
thinks he's spotted a pattern on the Roulette wheel. He makes
a lot of money at first, but eventually loses his shirt.

>>
>>When it comes down to it, I think the criteria or comparisons
>>that we're talking about relate to the mind of man rather than
>>the spirit. Along these lines let me propose a question. I should
>>save this for Jim but hey, there's no rule against asking the
>>same question twice. Let's suppose that 10 years from now
>>some clever engineer makes a computer that passes Von Neumanns
>>test (for sake of time and space I'll assume people know what this
>>is, if not I can elaborate). If so would you conclude that the machine
>>is thinking? And if so, would you include it in humanity? If not,
>>why? It seems Von Neumanns criteria is similar to what you're
>>proposing. [BTW, for sake of clarity, I don't believe Von Neumann
>>proposed that such a machine would be human, only that it
>>would be thinking].
>
>
>First, I will explain a Von Neumann test. This is the test that must be
>passed in order for a computer scientist to say that they have created
>artificial intelligence. It is passed, if a person in a room, communicating
>via computer terminal with a human and a computer is unable to tell the
>difference between the two conversations. This has all sorts of interesting
>variation like the chinese translating box and others.
>

Wow, I really botched up, I mean this is almost as embarrassing as
my Foghorn Leghorn botch :-O. This is the Turing test not the Von
Neumann test. As Glenn says there are many variations on the
theme. Another is where a person gets to ask (via terminal) a
series of questions. Based on the answers she must figure out
if its a computer or a person. In this scheme it usually is a computer
at the other end whose objective it is to trick the human. Thus, the
computer programmer will need to program the computer to be deliberately
deceptive. For example, if asked what the square root of PI is out
to 35 significant figures it will have to pretend not to know. It will also
have to be programmed to give vague or surprising answers. For
example, it may question whether or not the human is really
conscious or not and whether they can prove it. If the human cannot, it
may then ask why the human thinks the Turing test is fair. :)

GM:==
>Now, I would suggest that humans over the past 4 centuries have been forced
>to engage in a von Neumann test on a large scale. During the age of
>discovery, Europeans met many other cultures. At first both sides did not
>view the other as a fellow human. Both sides felt that the other was not
>human. Eventually with lots of bumps and problems, everyone has been forced
>to conclude that those who do not look like us are indeed human. The
>biggest piece of evidence was that that other fellow over there, the one
>that looks so wierd, acts just like me. I am human, therefore he must be human.
>
>As to the eventual computer von Neuman test, I worry a lot about that.I have
>seen estimates that at current rates of increases of computer speed, by 2025
>we should have a computer with enough memory and speed to rival our brains.
>We will then know if AI is possible. But one other thing is certain. If a
>machine acts like me, it will most assuredly be used by antitheists as
>evidence that spirituality IS just so much blather.
>

I think you are right and that its important to plan ahead for this possibility.
I personally don't worry too much about a computer gaining consciousness
(boy, I can see it now, computer rights advocates, laws against turning off
computers :). Of course, I also don't think shouting "its impossible" with
fingers inserted snugly in ears is a particularly useful argument. In this
case its possible I think to avoid the negative approach of claiming its not
possible and then sitting passively on the sidelines waiting.

First let me illustrate. One of my favorite examples is chess playing
computers as I was, in my younger days, quite an avid chess player.
Once upon a time, in some far away land, I was rated as a national
master in correspondence play. I say this just to indicate that I'm aware
of the various factors involved in this controversy over whether these
chess playing computers are "thinking". I remember when the first
programs were being written and the programmers had to make a
decision about two alternative paths, should they try to teach the
computer chess strategy or should they emphasize brute strength
(speed). By and large, speed won out. I forget the exact number, but
computers can analyze on the order of millions of positions per second.
How to compete with this? I saw Deep Thought (I believe this to be
the forerunner of Deep Blue, which recently beat Kasparov) play
in tournament in Columbus and woop up on an International Master
rather handily.

Not long after, Kasparov (World Champ) played Deep Thought and beat
it like a baby, seemingly with no effort at all. A reporter asked Kasparov
what the computers biggest weakness was and Kasparov replied
"it doesn't know when to resign". This is somewhat of an inside joke.
A common fault of weak players is their inability to identify hopelessly
lost positions.

But then the unthinkable happened, Deep Blue actually beat Kasparov!
It beat him in a way that illustrates very well the difference between
human and computer chess players. Kasparov, based on his intuition,
offered a pawn sacrifice. Most human players wouldn't waste much time
analyzing whether they could take the pawn. Trusting the great tacticians
judgement they would decline and look for some safer route. The computer,
however, showed no fear and no respect. It had calculated that the seemingly
irresistible attack on its King was in fact resistable, so it took the
pawn, weathered the attack and won the end game with flawless technique.

Needless to say, a lot of hubbub followed this. There is no doubt that the
computers win is an humbling experience for chess players everywhere,
especially Kasparov. Should it be humbling for humanity in general?
Can we say that Deep Blue was "thinking" when it beat Kasparov?
Was this a Turing test? No way. Computers play with a characteristic style
that can be spotted rather easily.

OK, I've rambled long enough with this example. My more general point
is that strong AI is based on an underlying assumption that I don't think
too many are aware of. The assumption is that brains (or minds) think
algorithmically. This is far from being established, of course. Further
I think it is such a strong claim that the onus is on the strong AI'ers
to give some justification for it. The purpose of the chess example was
to illustrate how differences between algorithmic and non algorithmic
thinking might manifest themselves. The computer is blindly carrying out
an algorithm which relies mainly on speed. It calculates millions of positions
per second assigning some value to each position based on some other
algorithm. A human never does this. For example, Kasparov will eliminate
probabably in excess of 99.999% of all possible continuations almost
immediately based on some type of intuition. This is not subjective, one
can analyze the positions in great detail following the game and find that
he very seldom misses on identifying the best two or three candidate moves
almost immediately. The computer can't do this. It tries everything.
Is Kasparovs brain subconsciously carrying out some algorithm when
he does this magic? How could there be an algorithm for knowing a variation
is bad without actually calculating the variation?

Brian Harper
Associate Professor
Applied Mechanics
Ohio State University