On 6/21/07, mrb22667@kansas.net <mrb22667@kansas.net> wrote:
>
> As someone who probably leans towards AI pessimism, I've always tended
> to see how far we have to go rather than how far we've come. Maybe I
> need to be updated on some of this progress.
>
> E.g. wasn't it just recently that the U.S. started attempting high-tech
> video security in airports, in an attempt to have known terrorist faces
> "recognized" by graphics software? And wasn't it discovered that things
> as simple as a band-aid or a change or removal of glasses were enough
> to fool the software easily? In short, programmers haven't yet come
> even remotely close to matching a three-year-old's ability to
> recognize his mother immediately and visually (at any angle, in
> variable lighting, and in real time with motion), have they? Or in the
> audio realm: computers can read to us now, but can they read a poem or
> prose with any passion? (Okay, not all of us can do that either.) But
> Data on Star Trek is portrayed as quite the aspiring artist in any
> number of pursuits. Can a computer give all the proper inflections on
> the fly, as many literate people can? These abilities may well be the
> easiest part of simulating consciousness. And if that were convincingly
> achieved, the religious questions would be interesting indeed!
>
> 2015 seems impossible beyond question, but maybe I'm just behind.
>
> --Merv
I think the 2015 that the lecturer I saw predicted was way on the optimistic
side. But maybe Gene Roddenberry's 24th-century prediction for a "Data" is
a bit on the pessimistic side (provided, of course, that the strong AI
postulate is true and consciousness is indeed an emergent property of a
sufficiently complex neural network - and the jury's still out on that one!).
"Data" was a very interesting sci-fi creation and highlighted the kind of
problems you mention. In one episode he tries to learn humour by watching
recordings of stand-up comics and emulating them. He is hopeless and just
doesn't "get" it - emphasizing his unfeeling "computer" nature. With music
he fares better and produces performances that please people, but he is
equally unable to "feel" it and can only imitate the masters.
Perhaps this mirrors real computers, which cannot read except in a rather
expressionless way (like the "Stephen Hawking" voice that I believe comes
as standard on the Mac). However, some music programs (such as "Sibelius")
have the ability to play "espressivo" - though I don't know how well they
do this, or what AI is built in to make it happen.
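For what it's worth, the basic idea behind such "espressivo" playback can be
sketched with a few simple rules - swell the dynamics towards the middle of a
phrase, stretch the tempo at its end - without anything resembling feeling.
Here is a toy sketch in Python; the phrase, the rules, and all the numbers
are my own invention for illustration, not how Sibelius actually does it:

import math

def espressivo(notes):
    """notes: list of (pitch, duration_sec); returns (pitch, velocity, duration_sec)."""
    n = len(notes)
    shaped = []
    for i, (pitch, dur) in enumerate(notes):
        pos = i / max(n - 1, 1)     # 0.0 at the start of the phrase, 1.0 at the end
        velocity = 64 + int(40 * math.sin(math.pi * pos))   # swell towards the middle
        stretch = 1.0 + 0.4 * max(0.0, pos - 0.75) / 0.25   # slow down over the last quarter
        shaped.append((pitch, velocity, dur * stretch))
    return shaped

phrase = [("C4", 0.5), ("E4", 0.5), ("G4", 0.5), ("E4", 0.5), ("C4", 1.0)]
for pitch, vel, dur in espressivo(phrase):
    print(pitch, vel, round(dur, 2))

Even rules this crude can make a line sound a little less mechanical, yet
there is no "feeling" anywhere in them - which is rather Data's problem in
a nutshell.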
But we should be aware of Moore's law and the exponential increase of
computing power. When I was 9, my dad took me to his office, and I was
awe-struck by the electro-mechanical calculating machines that could
multiply two 10-digit numbers together! They would chunk and clatter away
for several seconds, and all those digits would whizz up on the dials. Now,
40 years later, the computer I'm typing on will do 4 billion of those
multiplications a second! Give it another 40 years and another ten orders
of magnitude, and it's hard to predict what may be achievable (though I
still expect that Microsoft Word will be bloated and slow and require
top-of-the-range computers to run at a realistic speed :-). I'm sure,
though, that it'll talk intelligently to you while you type, and that the
paper clip will show a bit more intelligence!
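Just for fun, here's the back-of-envelope sum behind that guess, in a few
lines of Python. The doubling period is my assumption (roughly 14 months),
and the speeds are rough recollections rather than measurements:

# Rough Moore's-law extrapolation. The doubling period is an assumption,
# and the "then" and "now" speeds are rough figures from the anecdote above.
years = 40
doubling_months = 14
doublings = years * 12 / doubling_months   # about 34 doublings in 40 years
factor = 2 ** doublings                    # about 2e10, i.e. ten orders of magnitude
print("%.0f doublings -> a factor of %.1e" % (doublings, factor))

# Sanity check against the anecdote: several seconds per multiplication then,
# 4 billion multiplications per second now.
then_per_sec = 1.0 / 5.0    # one 10-digit multiplication every ~5 seconds
now_per_sec = 4e9           # multiplications per second on a 2007 desktop
print("observed speed-up: a factor of %.1e" % (now_per_sec / then_per_sec))

Both come out at around ten orders of magnitude, which is where my guess
for the next 40 years comes from.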
Iain