RE: pure chance

Brian D. Harper (harper.10@osu.edu)
Mon, 13 Jan 1997 12:32:47 -0500

At 09:24 AM 1/13/97 -0600, John Rylander wrote:
>Brian,
>
>Too bad they didn't keep the word "random" for the process, and use some other term for the result. I think that'd be more in keeping with such ordinary usage as the term has. (So that if you flip a fair coin fairly ten times and get 10 heads, that would still be a random process [or say if one used its quantum equivalent, and assuming something like the Copenhagen interpretation], even if the result is very compressible.)
>
>It's a technical field, so they can use terms in any way they wish to, but unnecessary redefinition often leads to confusion when they interact with non-specialists.
>

These are good points. Needless to say, there is much more to
the story than I've gotten into. I mentioned Solomonoff's motivation
for getting into the basic ideas of AIT. Chaitin, who published his
first work in this area at about the same time and who has done
more than anyone else to develop the theory, had as one of
his primary motivations the development of a rational, objective
definition of what is meant by "random" that would avoid some
peculiarities arising from classical probability theory. Yockey goes
into this some in his letter to _Nature_, "When is random random?",
v. 344 (26 April 1990), p. 823. The letter begins with the usual Yockey
flair:

"Sir-- The intelligentsia is not on speaking terms with
itself". ;-)

Here I want to illustrate a point made by Chaitin in one of his papers,
since it bears on issues that commonly come up in discussions of
probability. Suppose we toss a fair coin 64 times, recording 0 for
tails and 1 for heads. Would either of the following two sequences
be more surprising than the other?

(A) 0101010101010101010101010101010101010101010101010101010101010101

(B) 1101111001110101101101101001101110101101111000101110010100011011

Actually, I asked this question on talk.origins several years ago, and many
people said something to the effect that while they probably would be
more surprised to get (A), they shouldn't be, because both sequences
have exactly the same probability of occurring. Unsettling as this answer
is, it is correct from the point of view of classical probability
theory. Chaitin has said that one of his motivations was to rescue our
common sense and intuition from this rather absurd conclusion.
This is not to say that our intuition is always correct; in this case it
turns out to be correct, and we really should be surprised to get (A).
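
If you want to see this intuition in action, here's a quick
back-of-the-envelope sketch in Python. A real compressor like zlib is
only a crude, computable stand-in for algorithmic complexity (my choice
purely for illustration), but it already separates the two sequences:

import zlib

A = "01" * 32
B = "1101111001110101101101101001101110101101111000101110010100011011"

# zlib carries a few bytes of fixed overhead and is only a rough,
# computable surrogate for algorithmic complexity, but the highly
# regular sequence (A) should still compress to fewer bytes than (B).
for name, s in (("A", A), ("B", B)):
    compressed = zlib.compress(s.encode("ascii"), 9)
    print(name, len(s), "chars ->", len(compressed), "compressed bytes")

The gap widens rapidly if you extend a periodic sequence like (A) and
a patternless one like (B) to greater lengths.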

AIT succeeds in this by grouping sequences according to their
compressibility, so that one no longer has to deal with individual
sequences that all have the same probability of occurring. One is then
able to show that *any* ordered (i.e. compressible) sequence, not just
the specific one in (A), is unlikely to occur by tossing a fair coin.
This is a very satisfying result for me, enough so that I'm willing
to risk some temporary confusion about terminology in attempting to
establish algorithmic randomness as the fundamental definition of the term.
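
The heart of the argument is just counting. Here's a minimal sketch
(my own numbers for 64-bit strings, not Chaitin's exact formulation):
there are fewer than 2^(n-k+1) binary descriptions shorter than n-k+1
bits, so at most that many n-bit strings can be compressed by k or
more bits.

n = 64  # length of the coin-toss sequence

# There are at most 2^0 + 2^1 + ... + 2^(n-k) = 2^(n-k+1) - 1 binary
# descriptions shorter than n-k+1 bits, so no more than that many
# n-bit strings can be compressed by k or more bits.  Dividing by 2^n
# bounds the chance that a fair coin produces any such string.
for k in (5, 10, 20):
    bound = (2 ** (n - k + 1) - 1) / 2 ** n
    print(f"compressible by {k}+ bits: probability < {bound:.1e}")

Fewer than one sequence in 2^(k-1) can be compressed by k bits, no
matter which particular ordered sequence you have in mind.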

Now let's go back to undecidability for a fundamental result that is
likely to really surprise you: it is undecidable whether any specific
sequence is random, yet it is possible to prove that practically
every sequence is random.

JR:==
>When they use "stochastic" do they mean "truly indeterministic", or just "unpredictable in detail" (like deterministic chaos, or even a hidden variable interp of q.m.)?
>

I'm not sure about hidden variable quantum mechanics (since I know
nothing about it), but deterministic chaos is definitely not stochastic.
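
A toy example (mine, not from the original exchange) may make the
distinction concrete: the logistic map x -> 4x(1-x) produces output
that looks random, yet rerunning it from the same starting value
reproduces the sequence exactly, which is just what "deterministic"
means here. The helper below, logistic_bits, is only an illustrative
name.

def logistic_bits(x0, count):
    """Iterate the logistic map x -> 4x(1-x), thresholding to 0/1 bits."""
    x, bits = x0, []
    for _ in range(count):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

# Two runs from the same initial condition agree exactly: deterministic.
print(logistic_bits(0.123456, 40) == logistic_bits(0.123456, 40))  # True

# A tiny change in the initial condition tracks the original at first
# and then diverges: chaos.  The output looks random even though no
# chance is involved anywhere.
print(logistic_bits(0.123456, 40))
print(logistic_bits(0.123457, 40))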

Brian Harper
Associate Professor
Applied Mechanics
Ohio State University