In the field of "artificial intelligence" the phrase "emergent property" was
quite a buzzword for a while. Some researchers in this field believe that,
for example, consciousness (one of the last great unexplained mysteries of
science) could be an emergent property of a sufficiently large neural
network.
I worked in the field of neural networks for a long time, and did my PhD in
a related field (probabilistic modelling). I think the area was divided into
two camps: those who used neural networks for solving practical statistical
pattern-recognition problems (my work at the moment is in using a
probabilistic technique for performing interval analysis on ECG signals),
and those who believe that such machines could possess "artificial
consciousness". An example of the latter is Prof. Igor Aleksander of
Imperial College London, who has done much work in attempting to build an
artificially conscious robot.
Those who espouse the "consciousness as emergent property" view would
presumably have to adopt the "Strong AI" hypothesis, i.e. that genuine
intelligence/consciousness can be achieved by software alone. Others, such
as the mathematician Roger Penrose and the philosopher John Searle, dispute
the Strong AI hypothesis strongly. Penrose believes that consciousness lies
in a previously unexplored area of physics, and may be tied up with quantum
gravity. Searle is famous for the "Chinese Room" counterexample, a thought
experiment in which non-Chinese speakers sit in a sealed room into which
messages in Chinese symbols are passed. Here is an explanation of the idea
from a website I found:
Against "strong AI," Searle (1980a) asks you to imagine yourself a
monolingual English speaker "locked in a room, and given a large batch of
Chinese writing" plus "a second batch of Chinese script" and "a set of
rules" in English "for correlating the second batch with the first batch."
The rules "correlate one set of formal symbols with another set of formal
symbols"; "formal" (or "syntactic") meaning you "can identify the symbols
entirely by their shapes." A third batch of Chinese symbols and more
instructions in English enable you "to correlate elements of this third
batch with elements of the first two batches" and instruct you, thereby, "to
give back certain sorts of Chinese symbols with certain sorts of shapes in
response." Those giving you the symbols "call the first batch 'a script'"
[a data structure with natural language processing applications], "they call
the second batch 'a story', and they call the third batch 'questions'"; the
symbols you give back "they call . . . 'answers to the questions'"; "the set
of rules in English . . . they call 'the program'": you yourself know none
of this. Nevertheless, you "get so good at following the instructions" that
"from the point of view of someone outside the room" your responses are
"absolutely indistinguishable from those of Chinese speakers." Just by
looking at your answers, nobody can tell you "don't speak a word of
Chinese." Producing answers "by manipulating uninterpreted formal symbols,"
it seems "[a]s far as the Chinese is concerned," you "simply behave like a
computer"; specifically, like a computer running Schank and Abelson's (1977)
"Script Applier Mechanism" story understanding program (SAM), which Searle
takes for his example. But in imagining *himself* to be the person in the
room, Searle thinks it's "quite obvious . . . I do not understand a word of
the Chinese stories. I have inputs and outputs that are indistinguishable
from those of the native Chinese speaker, and I can have any formal program
you like, but I still understand nothing." "For the same reasons," Searle
concludes, "Schank's computer understands nothing of any stories" since "the
computer has nothing more than I have in the case where I understand
nothing" (1980a, p. 418). Furthermore, since in the thought experiment
"nothing . . . depends on the details of Schank's programs," the same "would
apply to any [computer] simulation" of any "human mental phenomenon"
(1980a, p. 417); that's all it would be, simulation. Contrary to "strong
AI", then, no matter how intelligent-seeming a computer *behaves* and no
matter what *programming* makes it behave that way, since the symbols it
processes are meaningless (lack semantics) *to it*, it's not really
intelligent. It's not actually thinking. Its internal states and processes,
being purely *syntactic*, lack semantics (meaning); so, it doesn't really
have *intentional* (i.e., meaningful) *mental states*.
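The rule-following procedure Searle describes can be sketched as a purely
syntactic lookup: symbols in, symbols out, with no access to meaning at any
point. This is only a toy illustration of that idea; the rule table and the
particular Chinese sentences here are my own invented examples, not anything
from Searle or SAM.

```python
# Toy sketch of the Chinese Room: the "program" is a table correlating
# input symbol strings with output symbol strings, matched entirely by
# shape (here, by string equality). Nothing in the code interprets the
# symbols; the rules and sentences are invented for illustration only.

RULES = {
    "谁是主角?": "主角是小明。",      # "Who is the protagonist?" -> an "answer"
    "故事的主题是什么?": "主题是友谊。",  # "What is the story's theme?"
}

def room(question: str) -> str:
    """Return whatever symbols the rules dictate, understanding nothing."""
    # Unrecognized shapes get a default string of symbols back.
    return RULES.get(question, "我不明白。")

# From outside the room the responses may look like those of a Chinese
# speaker, yet the function attaches no meaning to any symbol it handles.
print(room("谁是主角?"))
```

Whether such a system, however large its rule table, could ever amount to
understanding is exactly the point at issue between Searle and the Strong AI
camp.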
Searle's argument has been challenged and remains the subject of endless
debate. The concept of the "emergence" of a conscious entity from a machine
that is programmed to manipulate symbols and to simulate, say, a brain
raises many deep philosophical questions for religious believers: does it
have a soul, for instance?
Iain
On 5/5/06, Randy Isaac <randyisaac@adelphia.net> wrote:
>
> Perhaps this phrase warrants a little more discussion and clarification.
> I think there's a lot of good insight here but also a lot of confusion.
> I would certainly like to learn more about it from all of you.
>
> On one hand, if emergent properties are properties that are "...more than
> simply the sum of the parts..." then the concept is ubiquitous and almost
> trivial. Trivial in the sense of no major philosophical consequences.
> This definition of emergent can be easily illustrated by considering the
> hydrogen atom. The proton and the electron can be studied in great detail
> as separate entities, but not until they interact with each other and are
> studied as a two-body system do you get the beauty of the various energy
> states and electron position/momentum distributions. Add more components
> and many other properties emerge. Combine enough atoms and you get solid
> state behavior such as conductivity, semiconductivity, superconductivity,
> and countless other properties that derive only from a collection of
> atoms. Virtually every system in the world displays this kind of emergent
> property. But I don't see reductionism as meaning a system is merely the
> sum of its parts in this sense, nor do I see this kind of emergent
> property as offsetting reductionism.
>
> On the other hand, a more interesting approach is to break a system down
> into the relevant forces governing the interaction of the parts. The
> forces of gravity, electromagnetism, and the weak and strong forces are
> the fundamental forces. A myriad of diverse properties emerges upon
> applying these forces to a set of elementary particles (or fields, if you
> prefer). Reductionism, it seems, would indicate that all properties can
> be reduced to a proper application of these forces (which scientists hope
> to unify some day into a grand unified theory). Are there properties in
> this world that cannot be reduced to these forces? In cosmology, the
> latest data indicate that an incredible 95% of the universe is based on
> non-baryonic (dark) matter and on unknown (dark) energy. It remains to be
> seen where they come from. The other major question of emergent
> properties is life and consciousness. Are these emergent from the basic
> forces, or is there also a (dark) unknown (supernatural?) parameter?
> This appears to be the next big thrust of reductionism: the reduction of
> our thoughts, emotions, behavior, religion, and ultimately our life, our
> free will, to properties that emerge from the fundamental forces of
> nature governing the constituent atoms in our brains.
>
> Several decades ago, I enjoyed reading Donald MacKay's books and his
> concepts of hierarchical, complementary levels of meaning and the fallacy
> of "nothing-buttery". There's a lot of good insight there, but it's not
> clear that he's precluded reductionism in the second sense above.
> Emergent properties of complex systems may indeed require several
> complementary levels of explanation, each of which is complete in its own
> realm, yet none of which is a complete explanation of the system. This
> would not deny the reductionist view of underlying forces being the sole
> origin of all the levels. MacKay also gives examples where human
> intelligence has imposed meaning. One of his examples was a sentence
> written in chalk on a blackboard. It can be described "completely"
> chemically and physically at various levels, but must also be explained
> at the level of meaning of the alphabet, the vocabulary, and the sentence
> structure. This is an example where the meaning is imposed from outside
> the system and has nothing to do with the inherent system itself.
>
> Net: I'm not a reductionist, but making a clear argument against
> reductionism isn't so easy either.
>
> Randy
>
>
> ----- Original Message -----
> From: "Mervin Bitikofer" <mrb22667@kansas.net>
> To: <asa@calvin.edu>
> Sent: Thursday, May 04, 2006 10:38 PM
> Subject: Re: Evolutionary Psychology and Free Will
>
>
> ...... I had the privilege of hearing that doctor in person at
> > K-State (again -- courtesies of Keith), and the phrase "emergent
> > properties" was one of the answers given to reductionist thought. Yet I
> > still have only a vague notion of its meaning. A property of a whole may
> > "emerge" that is more than simply the sum of the parts. -- or at least I
> > can parrot this explanation. .......
> >
> > --merv
>
>
--
----------- After the game, the King and the pawn go back in the same box. - Italian Proverb -----------

Received on Sat May 6 11:43:49 2006
This archive was generated by hypermail 2.1.8 : Sat May 06 2006 - 11:43:49 EDT