Re: Information: Brad's reply (was Information: a very

Greg Billock (billgr@cco.caltech.edu)
Sun, 28 Jun 1998 11:03:49 -0700 (PDT)

Glenn Morton replying here to Brad:

[...]

> >Furthermore you have not refuted my calculations with anything but your
> >personal opinion which does not seem to be based on any knowledge of the
> >material you are discussing.
> >
> >You misunderstand the basics of information theory if you think that random
> >noise consists of or can create information in any form whatsoever.
>
> You are equivocating on the word 'information' as meaning semantics. My son
> is an EE and you guys talk about fidelity of the MESSAGE being conveyed.
> But that is not the informational content of a sequence. Yockey states

This is exactly right, and we will have made significant progress here
when everyone has finally understood that

>>>>> MEANING HAS NOTHING TO DO WITH THE INFORMATION CONTENT <<<<<<
>>>>> OF A MESSAGE AS APPROACHED BY INFORMATION THEORY <<<<<<

Please copy-paste this however many times it takes before it finally
sinks in.

Information theory is >>>>> ONLY <<<<< about resolving uncertainty in
the message you get. To say that randomly generated sequences of messages
do not generate information betrays a lack of comprehension of information
theory so profound as to defy explanation when coupled with advice to
others to learn it. A randomly generated sequence of messages produces
>>> MAXIMAL <<< information.
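
For anyone who wants to check the arithmetic, here is a quick sketch
(Python, assuming a discrete memoryless source with known symbol
probabilities -- the assumptions and numbers are mine, purely for
illustration) of the standard entropy calculation. The uniform, i.e.
maximally random, source carries the most bits per symbol; a perfectly
predictable one carries none:

    import math

    def entropy_bits(probs):
        # Shannon entropy H = sum(p * log2(1/p)), in bits per symbol
        return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

    print(entropy_bits([0.5, 0.5]))    # fair coin: 1.0 bit/symbol, the max for two symbols
    print(entropy_bits([0.99, 0.01]))  # heavily biased source: ~0.08 bits/symbol
    print(entropy_bits([1.0]))         # always the same symbol: 0.0 bits/symbol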

Quiz: How much information is conveyed by the following string:

"0"

*****

This is basically a ridiculous problem (before anyone writes another
underinformed post about it). Since there is no way to tell what a
one-shot information source could have done, there is no way to model
the distribution of its possible messages, and consequently no way to
figure out how much information was conveyed.
Was I restricting myself to {0, 1}? to numerals? to one-digit
keyboard taps? with some weird probability distribution on them?
unrestricted length?
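
A rough sketch of the same quiz under a few arbitrary (and equally
defensible) assumptions about the source -- the alphabets and the
uniform distribution are mine, picked purely for illustration:

    import math

    # With a single received message "0", the information depends entirely
    # on what we assume it could have been instead (uniform assumption here).
    for description, n_possible in [("binary digits {0, 1}", 2),
                                    ("decimal numerals 0-9", 10),
                                    ("printable keyboard characters", 95)]:
        print(f"{description}: {math.log2(n_possible):.2f} bits")
    # -> 1.00, 3.32, and 6.57 bits for the very same string; allow
    #    unrestricted length and the answer is not well defined at all.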

Applying information theory to biology is a bit more realistic, because
we have lots of examples of 'messages' and can at least make progress
in detecting their relationships (although the 'code' doesn't seem very
clear as of yet). A main problem is that nobody really knows how much
information content is in DNA. If someone tells you there are 2*length
bits there, you can feel free to say they are full of it (a sketch of
that arithmetic is below); there is no way all possible DNA sequences
can be associated with living organisms. Only a fraction so small as to
be barely detectable on the total scale has any chance of being
associated with a live organism.
Nobody knows what that fraction is, or even where exactly the most
crucial parts are. So, as with the "0" message, it is a hard problem
to figure out what information theory might do for us in biology.
Even in more behavioral biology, it is hard. Is the
brain trying to maximize information somehow? If so, where? Do
species try to maximize genetic information somehow? The problem with
these scenarios is that they invariably predict utter randomness, since
that generates the most information, so people are still puzzling about
how to restrict the channel models so as to better apply it. Who knows,
perhaps someday a lot will come of it.
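
A rough sketch of where the 2*length figure comes from, and why it can
only be an upper bound -- the sequence length and the 'viable fraction'
here are made-up numbers, just to show the shape of the calculation:

    import math

    L = 1000                          # hypothetical sequence length, in bases
    upper_bound = L * math.log2(4)    # 2 bits per base if every base were an
                                      # independent, uniform draw from {A,C,G,T}

    # If only a fraction f of the 4**L possible sequences could belong to a
    # living organism (nobody knows f), then even a uniform distribution over
    # that viable subset gives log2(f * 4**L) = 2*L + log2(f) bits, strictly
    # less than 2*L, and a non-uniform distribution shrinks it further.
    f = 1e-30                         # arbitrary illustrative value
    print(upper_bound, upper_bound + math.log2(f))   # 2000.0 vs. ~1900.3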

-Greg