Re: God...Sort Of

Brian D Harper (bharper@postbox.acs.ohio-state.edu)
Thu, 22 Jul 1999 16:27:38 -0700

At 11:40 PM 7/21/99 EDT, Kevin wrote:
>In a message dated 7/21/99 8:09:18 PM Mountain Daylight Time,
>bharper@postbox.acs.ohio-state.edu writes:
>
>>
>> Let's not jump to conclusions.
>>
>
>No need to. Experimental results demonstrate that Davies is wrong; the only
>question is why he is wrong. I believe it is most likely that he either
>didn't know about it or didn't really understand it, because in his
>single-page discussion of proteinoids, he described them as being the result
>of random polymerization. The research clearly demonstrates that thermal
>copolymerization of amino acids is not random, so he either didn't know about
>the research or he didn't understand it.
>

Perhaps I should let this slide since I haven't read the book
in question. Nevertheless :), I will make a guess. My guess is
that this situation is analogous to the one in which an evolutionary
biologist says that mutations are random even though they are
not random in the technical sense (from, say, probability
theory).

Whether Davies is using the word random correctly or not is
not really very interesting, IMHO. The real point is that
the nonrandomness (technical sense) of the process is what
guarantees small information content. To see this, consider the
extreme case of a completely nonrandom (completely deterministic)
process wherein there is a single outcome. In Shannon information
theory the information content is reflected by the number of messages
in an ensemble of messages. If there is only one outcome, then the
number of messages in the ensemble is one and the information
content is log2(1) = 0 bits. Yes, it's counter-intuitive, but this is why random
sequences have the highest information content.
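
To make that arithmetic concrete, here is a little sketch (my
own illustration, nothing from Davies; it just evaluates the
standard Shannon entropy H = -sum p_i log2 p_i, in Python):

    import math

    def shannon_entropy(probs):
        # Entropy in bits: H = -sum(p * log2(p)), zero terms skipped.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Completely deterministic source: one outcome, probability 1.
    print(shannon_entropy([1.0]))             # 0.0 bits

    # Fair coin: two equally likely outcomes.
    print(shannon_entropy([0.5, 0.5]))        # 1.0 bit per symbol

    # Uniform choice among 20 symbols (say, 20 amino acids).
    print(shannon_entropy([1.0 / 20] * 20))   # log2(20) ~ 4.32 bits, the max

    # Any bias (nonrandomness) pulls the entropy below that maximum.
    print(shannon_entropy([0.7, 0.1, 0.05, 0.05, 0.05, 0.05]))  # ~ 1.56 bits

The biased six-symbol source comes in well under log2(6) ~ 2.58
bits; the more deterministic the source, the closer it gets to
zero.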

We also arrive at the same general conclusion within the context
of algorithmic information theory (AIT). Here, the information
content of an object is the length of the shortest algorithm
that generates it. Since natural laws are generally short and
concise, they can be described by short algorithms and thus
have small information content.
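
One can even get a rough feel for this at the keyboard. The
shortest-algorithm length itself is uncomputable, but an
ordinary compressor gives a crude upper bound on it, so (purely
as an analogy, my own sketch, not anything from Chaitin) compare
a sequence generated by a short deterministic rule with a random
sequence of the same length:

    import random
    import zlib

    # "Lawlike" data: 10,000 bytes produced by a tiny deterministic rule.
    lawlike = ("ABCD" * 2500).encode()

    # Random data of the same length.
    rng = random.Random(42)
    noise = bytes(rng.randrange(256) for _ in range(10000))

    # Compressed size is a crude stand-in for algorithmic information.
    print(len(zlib.compress(lawlike)))   # a few dozen bytes: highly regular
    print(len(zlib.compress(noise)))     # ~10,000 bytes: incompressible

The lawlike sequence is, in effect, just its generating rule
plus a repeat count; the random sequence can only be quoted
verbatim.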

BH:==
>>
>> I'm still not sure exactly what
>> "specified complexity" is in terms of information theory (i.e.
>> its precise definition) but the statement:
>>
>> #'He explicitly says that laws cannot contain the recipe for
>> #life because laws are "information-poor" while life is
>> #"information-rich."'
>>
>

Kevin:==
>The problem with this claim is that it assumes life has always been
>"information-rich"; it is more likely it started out "information-poor"
>and evolved to its present level of "richness", which is what the
>experimental results in fact demonstrate.
>

The only "complaint" I might have about this statement is the
degree to which experimental results demonstrate the evolution
from poor--->rich.

One way of looking at this is that the results of info-theory
may give an indication of the appropriate direction for further
research. Let's suppose that one is convinced that the first
stages of the origin of life are deterministic. Not everyone
believes this, but Fox certainly did. Let's suppose also that
one thinks that information theory is telling us something
important: that deterministic laws are information-poor.
This would then lead one in the direction that you indicate:
a search for evolutionary mechanisms that would increase the
information content. Eigen's hypercycles are an example of
such a search, though I believe that search fell short of the
mark :).

But this general notion in no way negates what Davies said,
unless, of course, one views the process of evolution itself
as being deterministic.

BH:==
>>
>> is right on the mark. This is a fundamental result from
>> information theory proven first, if I remember correctly,
>> by Chaitin, one of the founders of algorithmic information
>> theory.
>>
>

Kevin:
>The way thermal copolymerization works is that the shape and chemical nature
>of the amino acids determines which amino acids will bind together. This is
>controlled by the known physicochemical laws. I tend to doubt that
>information theory really proves that the physicochemical laws cannot create
>life, but if it does then there is research that refutes this proof.
>

Ah yes, very good. There is a lot of misunderstanding about what
this result from information theory really means. To use info
theory terminology, it means that life is undecidable given only
the natural laws. Creationists have misunderstood the result to
mean that life requires input from an intelligent agent. The
information cannot arise, so the argument goes, from the action
of natural laws, and therefore must be added from an external source.
Actually, this seems to assume some type of conservation law
for information. Anyway, what some fail to understand is that
undecidable means undecidable :).

Since information theory deals with information directly
and only indirectly with life, let me rephrase what you
say above. What we should say is that it is uncertain whether
the physicochemical laws, acting alone, can create information.

"It is not certain that all is uncertain, to the glory of
skepticism." -- Pascal ;-)

Undecidability is not nearly so mysterious as it sounds.
Chaitin has shown, for example, that there is undecidability
even in pure mathematics. If there, why not in biology? :)

Brian Harper
Associate Professor
Applied Mechanics
The Ohio State University

"I'm tryin' to think, but nuthins happenin'"
-- Curly