Re: design: purposeful or random?

Brian D Harper (harper.10@osu.edu)
Fri, 28 Feb 1997 08:34:48 -0500

At 09:22 AM 2/27/97 -0500, Gordie wrote:

>As a statistician, I am dismayed by some of the recent discussion on
>randomness.
>
>Does a random sequence contain information? I certainly hope so:
>

I have appreciated this discussion and think that Gordie, Glenn and
Art have all made very valuable contributions.

First of all, I think Art captured very well the meaning of information
in its typical everyday usage. Nevertheless, it is common for
words to have a different meaning when used in some technical
field.

I also appreciated what Gordie wrote as it gives me a different
perspective on how the word random is used. It also illustrates
how words can mean different things in different fields. For
example, in algorithmic information theory, random has a very
precise meaning, namely incompressibility. This can be translated
as lacking any pattern or regularity since any pattern or
regularity could be used as a means of compression. Also, random
defined in this way does not have the usual negative connotations
normally attached to the word. For example, the compressed form
of the "message" is by definition random. Think of the
multitudinous observations of the motions of the planets, which
Newton somehow managed to compress into an extremely simple
law. In the AIT sense of the word, Newton's laws are random.
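
One can get a rough feel for this using an off-the-shelf compressor
as a crude (and of course imperfect) stand-in for the ideal AIT
measure. The little Python sketch below is only an illustration of
the idea, not anything rigorous: a highly patterned string compresses
enormously, while the compressed output itself has no remaining
pattern and so resists further compression.

import zlib

# A highly patterned string: an obvious regularity for a compressor to exploit.
patterned = b"AB" * 500                  # 1000 bytes
packed = zlib.compress(patterned, 9)
print(len(patterned), "->", len(packed), "bytes")

# Compressing the already-compressed data gains essentially nothing (it may
# even grow slightly from bookkeeping overhead): the compressed form has no
# leftover pattern to exploit, which is what "random" means in the AIT sense,
# at least as approximated by an everyday compressor.
repacked = zlib.compress(packed, 9)
print(len(packed), "->", len(repacked), "bytes")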

As I said, Art did a good job of describing the typical
interpretation of "information". I would like to argue that
the algorithmic information content (loosely, the descriptive
length) also captures the intuitive notion of information.
Think of some object, any object. It doesn't matter if you
know its function or even if it has a function. Now imagine
describing this object in writing to someone who doesn't
know anything about it. Further imagine that this person
lives in the new space station orbiting one of Jupiter's
moons. You have to write the description so that the object
is described fully yet in the fewest possible words since
sending the message will be expensive. I believe that the
length of this shortest description captures very well the
idea of both information and complexity. The more complex
an object, the more information required to specify the
object.

Now, about information, meaning, and randomness. I've tried
to think of examples that do not involve messages written in
English or some other language, since we are generally wedded to
the idea that the information in such messages is related
to their meaning.

Here's an example I started working on last night. Following
are three sets of instructions:

(A) FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

(B) LRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFF

(C) FLLRLRLRLRRLFRLRLRLRRLRLRRLLFRRLLLLRRLFFLRRLLRRLLFLRRFRLLRLR

A person is given a set of instructions before entering a maze. As
they go through the maze they will continually have to choose
between going left, right or forward. The set of instructions tells
them how to get through the maze successfully. One can further
offer some reward for success weighted according to the number
of mistakes and the time required to get through.

What would be the meaning of these instructions for (a) the participants,
(b) the person conducting the test, and (c) a third party observing the
test?

Whatever the answers, it seems reasonable to me to say that the amount
of information contained in the set of instructions is proportional to
some measure of the difficulty of correctly carrying out the instructions.
This is very close to the idea of descriptive length. Any pattern in the
instructions will help the person follow them quickly and accurately.

I would further argue that algorithmic information theory would rank them
in this order while knowing absolutely nothing about meaning, mazes, left,
right, etc. For example, I saved the three sequences above as
ASCII files and then compressed them with gzip. Their compressed
lengths are 32, 38 and 56 bytes respectively for sequences A, B and
C.
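
For anyone who wants to repeat the experiment, something along the
lines of the Python sketch below will do; the exact byte counts depend
on the compressor and its settings, so don't expect to reproduce my
32, 38 and 56 exactly.

import gzip

# The three instruction sequences given above.
sequences = {
    "A": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
    "B": "LRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFFLRRFFF",
    "C": "FLLRLRLRLRRLFRLRLRLRRLRLRRLLFRRLLLLRRLFFLRRLLRRLLFLRRFRLLRLR",
}

# Compressed length as a rough stand-in for descriptive length: the more
# pattern a sequence has, the shorter its compressed form.
for name, seq in sequences.items():
    packed = gzip.compress(seq.encode("ascii"))
    print(name, len(seq), "->", len(packed), "bytes")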

In summary, I would argue that algorithmic information theory provides
an excellent, objective, intrinsic measure of the amount of information.
In getting such a measure one has to sacrifice something, namely
the "meaning" of the sequence. This does not imply in any way that
the sequence being measured has no meaning, or that a sequence's
meaning, if it has one, cannot be determined.

Brian Harper
Associate Professor
Applied Mechanics
The Ohio State University

"Should I refuse a good dinner simply because I
do not understand the process of digestion?"
-- Oliver Heaviside