RE: pure chance

John E. Rylander (rylander@prolexia.com)
Mon, 13 Jan 1997 09:24:59 -0600

Brian,

Too bad they didn't keep the word "random" for the process, and use
some other term for the result. I think that'd be more in keeping with
such ordinary usage as the term has. (So that if you flip a fair coin
fairly ten times and get 10 heads, that would still be a random process
[or say if one used its quantum equivalent, and assuming something like
the Copenhagen interpretation], even if the result is very
compressible.)

It's a technical field, so they can use terms in any way they wish to,
but unnecessary redefinition often leads to confusion when they
interact with non-specialists.

When they use "stochastic" do they mean "truly indeterministic", or
just "unpredictable in detail" (like deterministic chaos, or even a
hidden variable interpretation of q.m.)?

--John

----------
From: Brian D. Harper[SMTP:harper.10@osu.edu]
Sent: Monday, January 13, 1997 7:38 am
To: evolution@calvin.edu
Cc: Glenn Morton
Subject: Re: pure chance

At 10:55 PM 1/9/97 +0000, Glenn wrote:
>Brian,
>
>I am hoping that I have you softened up (which is highly unlikely; you
>are probably laying a trap for me), cause I want to return to a
>discussion we had about a year and a half ago. I think the most
>important statement in Yockey's book concerning the creation/evolution
>issue is this:
>
>"Thus both random sequences and highly organized sequences are complex
>because a long algorithm is needed to describe each one. Information
>theory shows that it is fundamentally undecidable whether a given
>sequence has been generated by a stochastic process or by a highly
>organized process. This is in contrast with the classical law of the
>excluded middle (tertium non datur), that is, the doctrine that a
>statement or theorem must be either true or false. Algorithmic
>information theory shows that truth or validity may also be
>indeterminate or fundamentally undecidable."~Hubert Yockey,
>Information Theory and Molecular Biology, (Cambridge: Cambridge
>University Press, 1992), p. 82.
>
>As I recall we disagreed about this. We appealed to Yockey personally
>but as I recall I didn't think his answer was responsive to the
>question. Given that life is highly organized not ordered, and it is
>believed that life came from a random process, I interpreted this to
>mean that even if God had created life, by creating highly organized
>sequences of DNA and creating a cell to put it in, we would NOT be
>able to tell the difference between creation by God and evolution by
>random processes.
>

I do remember that we disagreed on a number of things. I seem to
remember the gist of some of our disagreements but am having a hard
time remembering all the gory details.

My ideas have changed quite a bit over the last couple of years. For
example I argued at length with Bill Hamilton (if I remember correctly)
that determining that a particular sequence is highly compressible is
tantamount to showing that it is the result of a highly ordered process
(e.g. some simple natural law), the reason being that a highly
compressible sequence is so terribly unlikely to occur by a random
process. But here I was making the horrendous mistake that I thought
only others were capable of, i.e. I was assuming said random process
had outcomes that were all equally probable (like tossing a fair coin).
Actually, one of the examples I gave in my lengthy posts on AIT was
worked out specifically to convince me of the error of my ways. In this
example I rolled a 10-faced die many times, recording 1 if the die read
between 3 and 10 and 0 if it read 1 or 2 (this is analogous to flipping
an unfair coin). The result turns out to be highly compressible due
simply to the preponderance of 1's in the sequence. So we have a highly
compressible sequence resulting from a purely stochastic process. The
fundamental point (which I could state but didn't really appreciate
yet) is that the algorithmic complexity is an *intrinsic* measure: it
is determined purely by the structure of the sequence and says nothing
about the process that generated it.
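
The die example above is easy to reproduce. The sketch below is an
editorial illustration, not part of the original post: it uses zlib's
compressed size as a crude stand-in for algorithmic complexity (which
is not computable in general), and the function names are made up. It
shows the biased 0/1 sequence compressing well below a fair-coin
sequence of the same length, even though both come from stochastic
processes:

```python
import random
import zlib

def die_sequence(n, seed=1):
    """Roll a 10-faced die n times; record 1 for a read of 3-10, 0 for 1 or 2."""
    rng = random.Random(seed)
    return "".join("1" if rng.randint(1, 10) >= 3 else "0" for _ in range(n))

def fair_sequence(n, seed=1):
    """Flip a fair coin n times; record 1 for heads, 0 for tails."""
    rng = random.Random(seed)
    return "".join(rng.choice("01") for _ in range(n))

def compressed_size(bits):
    """Length of the zlib-compressed sequence: a rough proxy for its
    algorithmic complexity."""
    return len(zlib.compress(bits.encode()))

n = 10_000
biased = die_sequence(n)   # ~80% ones, from a purely stochastic process
fair = fair_sequence(n)    # ~50% ones

# The biased sequence is highly compressible simply because of the
# preponderance of 1's; the fair sequence compresses far less.
print(compressed_size(biased), compressed_size(fair))
```

Note that the compressed size measures only the structure of the
resulting string; it tells you nothing by itself about which process
produced it.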

I think the interpretation of Yockey's quote is related to this. From an
intrinsic measure of the complexity one can determine that the sequence
is highly complex but not how it was generated. So, I agree that your
interpretation is correct, although I'm not sure if Yockey would say it
that way :).

Now, lest I be misunderstood, let me emphasize that it is possible
to combine algorithmic information theory with probability theory
to draw some conclusions. For example, I can reasonably conclude
that it is very unlikely to toss a fair coin 1000 times and get 750
heads. But to reach this conclusion I have to know _a-priori_ something
about the process, i.e. that it is a _fair coin_. In the real world,
very few processes have outcomes that are equiprobable.
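
For the record, that conclusion can be checked exactly. A minimal
sketch (again an editorial addition; `prob_at_least` is a made-up
name) computes the binomial tail probability of 750 or more heads in
1000 fair tosses:

```python
from math import comb

def prob_at_least(n, k, p=0.5):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Sanity check on a small case: P(at least 2 heads in 4 fair tosses) = 11/16.
small = prob_at_least(4, 2)

# 750 heads is roughly 15.8 standard deviations above the mean of 500
# (sd = sqrt(1000 * 0.25) ~ 15.81), so the tail is astronomically small.
tail = prob_at_least(1000, 750)
print(tail)
```

The point stands: this calculation is only possible because the
fairness of the coin was assumed up front.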

I think there is a good parallel here with recent discussions about
words and their meanings. Words as used in technical fields quite
often mean something entirely different than they do in everyday
usage. A good example of this is the word random. In algorithmic
information theory (AIT) this word means something entirely different
than it does in common usage, and it means something entirely
different than it does even in classical probability theory. This may
or may not have been apparent from the above discussion. In algorithmic
info-theory, random means only that a sequence is incompressible:
there is no algorithm for producing the sequence shorter than the
sequence itself. Solomonoff developed the basic ideas of AIT
independently from Kolmogorov and Chaitin (who was in high school at
the time!), expressing the results in the language of data and
theories. Short algorithms capable of compressing a huge set of data
were said to be "theories" or even laws. This is a good way of
picturing things I think. A set of data is said to be random if no
"theory" can be found to compress the data. But here we run into an
interesting difficulty. It's entirely possible for a random process
(with random taking its meaning from probability theory) to produce a
non-random result. This is just what happened in my dice throwing
example above. To avoid this confusion, many have started replacing
"random process" with "stochastic process", reserving "random" for its
AIT interpretation.
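
Solomonoff's data-and-theories picture can be illustrated the same way
(an editorial sketch, with zlib again standing in for the shortest
"theory", which is not computable): a patterned digit string has a
short generator and compresses to a small fraction of its length,
while pseudo-random digits barely compress at all.

```python
import random
import zlib

# Data with a short "theory": the digits 0123456789 repeated.
# The theory *is* the one-line generator below.
patterned = "".join(str(i % 10) for i in range(5000))

# Data with no apparent theory: digits from a pseudo-random source.
rng = random.Random(0)
patternless = "".join(str(rng.randrange(10)) for _ in range(5000))

def compression_ratio(s):
    """Compressed size over raw size: small means a short 'theory' was found."""
    return len(zlib.compress(s.encode())) / len(s)

print(compression_ratio(patterned), compression_ratio(patternless))
```

Strictly speaking the "patternless" digits also have a short theory
(the seed plus the generator that produced them), but zlib cannot find
it, which is itself a reminder that any practical compressor only
bounds the algorithmic complexity from above.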

There are many interesting implications of AIT. Glenn mentions one
of these in his quote above. Closely related to this is Yockey's
conclusion that life must be accepted as an axiom, that it cannot be
reduced (compressed) to physics and chemistry. What does he mean by
this?

Well, unfortunately, it seems also very easy to misunderstand the
implications of AIT. As an example, there is a short advertisement
for Yockey's book in Ross's _Facts and Faith_ [Fourth Quarter 1996,
volume 10(4)]. The last sentence of a short description of the book
reads:

"It demonstrates in a rigorous fashion the impossibility of life
arising by strictly natural processes"

I was really floored by this since Yockey demonstrates no such thing,
and in fact I have to wonder whether the person who wrote this actually
read the book. I've been trying to give them the benefit of the doubt
of having made an honest mistake as opposed to a deliberate distortion.
So, perhaps they got this idea from Yockey's statement that life must
be accepted as an axiom. But this is not what Yockey meant at all.
What he meant is along the lines of the quote Glenn gave. Life is
undecidable given only the laws of physics and chemistry. This
doesn't mean that something else (something supernatural, say)
is required, it just means undecidable. The question cannot be
decided. To my way of thinking, this conclusion is earth-shattering
enough without having to distort it into the above quote from
_F&F_.

Brian Harper
Associate Professor
Applied Mechanics
Ohio State University