Iain wrote:
>Not all people agree with this view [wkd: that it requires
>something more than algorithms to make consciousness]
>(though I think I do). Proponents
>of "Strong AI" (e.g. Douglas Hofstadter of "Gödel, Escher, Bach"
>fame, Prof. Igor Aleksander of Imperial College London, and Ray
>Kurzweil) contend that intelligence and consciousness will be
>obtainable via an algorithm, and hence it really is just a question of
>getting a powerful enough computer. Such people believe that
>consciousness would be an "emergent property" of a sufficiently large
>neural network, which is nothing more than an algorithm that can run
>on a computer.
>
>I've seen Aleksander give a lecture at a neural networks conference
>where he claimed that some neural net simulations he performed on his
>Mac were exhibiting behaviour akin to consciousness, but I can't say I
>was convinced by the writhing pixels on the screen.
>
>Others, such as Roger Penrose ("The Emperor's New Mind"), argue that an
>algorithm can never be conscious, and there is indeed something more
>fundamental (Penrose postulates something to do with quantum gravity).
> The other view is called "Weak AI" - we may build intelligent
>machines, but it will require more than conventional computers and
>software.
I would be wrong to say I _know_ the answer. But an algorithm
is clearly deterministic, and, as in I, Robot, one can piece
together what went wrong if that is so.
I suppose a compromise position is to say that perhaps, with
a large enough array of structures, there would be consciousness,
but it would be because the size of the system produced, say,
quantum effects that let the robot see beyond its narrow
world. Certainly as the linewidths get smaller, this is not
unreasonable.
>
>What I find curious is that both Hofstadter (Hard AI) and Penrose
>(Soft AI) bring Gödel's theorem in to assist their arguments.
I did read Penrose's argument; I have not read Hofstadter's.
What came to mind is that Gödel works with prime numbers
(see the sketch below), taking a subset and showing that some
theorems that are in fact true cannot be proven within that
subset. Penrose extends that to the claim that algorithms are
mathematical and therefore represent a limited subset of
whatever is true. He then projects that because the human mind
can discover these truths, the mind is not algorithmic. A
strong AI position would work from the view that all of the
behaviors of the mind are a product of evolution, and that one
only needs to add on more things to eventually arrive at "us".
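To make the prime-number device concrete, here is a minimal
Python sketch of Gödel numbering; the symbol codes in the
example are made up for illustration, and Gödel's actual coding
scheme differs in detail:

def first_primes(n):
    # The first n primes, by trial division against primes found so far.
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def godel_number(codes):
    # Pack a sequence of symbol codes into one integer:
    # 2^c1 * 3^c2 * 5^c3 * ...
    n = 1
    for p, c in zip(first_primes(len(codes)), codes):
        n *= p ** c
    return n

# The formula "0 = 0" with made-up symbol codes 6, 5, 6:
print(godel_number([6, 5, 6]))   # 2**6 * 3**5 * 5**6 = 243000000

The only point is that any finite string of symbols, and hence
any algorithm, maps to a unique integer, which is what lets
Gödel turn statements about proofs into statements about numbers.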
A fallacy in the evolution argument is to say that, just because
there is __sometimes__ intelligent life on this planet, it is
inevitable that systems go in that direction. I doubt
that is so. We are simply lucky and 100% dependent on Grace.
That's all. But that is only my opinion. Evolution would
permit attaining understanding that was otherwise impossible,
but luck is important, and I think the argument misconstrues
the direction things actually go.
Central to this seems to be whether we can know things that
are true (say, about God) but cannot prove them so. Again,
I surely don't _know_ the answer. It's an act of faith
to believe that Christ rose from the dead. Did God speak
to Elijah or not? I don't _know_. Is it actually wrong
to steal, or is it just an act that involves higher risks than
earning the money to buy the item? Again, I cannot prove
it is "wrong" to steal; only my faith says that it is so.
We surely cannot prove it by evolution, as crows make a good
living from using their power over other birds and from theft.
While we should not appeal to all sorts of hidden
things, we should also sometimes ask, "Am I missing
something?" In the case of consciousness, our
arrogance will not help much in understanding it.
by Grace alone we proceed,
Wayne