Re: Entropy

From: David_Bowman@georgetowncollege.edu
Date: Wed Oct 25 2000 - 10:13:58 EDT

    Regarding Richard's latest comments concerning my last post:

    >> .... Random sequences (i.e. sequences generated by fully
    >>random processes) are both incompressible and maximally complex.
    >
    >Your last sentence should strictly have said that sequences formed by random
    >processes *tend* to be incompressible and maximally complex, but are not
    >necessarily so. They may, by chance, turn out to have a regular pattern, and
    >so be compressible. (Sorry to be pedantic, especially as you've effectively
    >made this point below, but I didn't want anyone to be misled by this
    >sentence.)

    Good point. If we were to be even more pedantic we could say that
    sequences formed by random processes tend to be incompressible and
    maximally complex in the following sense: the probability that a
    sequence's maximal compression ratio exceeds 1 by some fixed
    positive delta (no matter how small that delta is) asymptotically
    approaches 0 in the limit of the length of the sequences becoming
    arbitrarily long.
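
    To put some rough numbers behind that, here is a small sketch (my
    own illustration, not anything from Richard's post) in Python,
    using zlib as a crude stand-in for an ideal compressor; the
    lengths and sequences chosen below are hypothetical:

        import os, zlib

        def compression_ratio(data):
            # ratio of original length to compressed length: near 1
            # means essentially incompressible, well above 1 means
            # the data is compressible
            return len(data) / len(zlib.compress(data, 9))

        n = 100000
        random_bytes = os.urandom(n)     # output of a random process
        patterned = b"HT" * (n // 2)     # a highly regular sequence

        print(compression_ratio(random_bytes))  # very close to 1
        print(compression_ratio(patterned))     # much greater than 1

    Running something along these lines shows the tendency directly:
    the random bytes come out essentially incompressible, while the
    regular pattern compresses enormously.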
       
    ...
    >Note also that Kolmogorov complexity is sometimes referred to as
    >"algorithmic randomness" by Chaitin and others, and this sometimes gets
    >reduced to "randomness" for short. In this special sense, one can say that
    >the second sequence above is "more random" than the first sequence. But it's
    >important to note that this is a non-probabilistic sense of the word
    >"random", and I feel that using the word in this sense causes a lot of
    >confusion, as in my recent discussion with Brian.

    Yes, alas. This is another good example of how the meanings of
    important technical terms have diverged within the field (in some
    cases at the hands of the field's very founders). Such multiplicity
    of meaning often makes for a lot of unnecessary confusion.

    >Algorithmic randomness is a property of a sequence or state, while
    >statistical (or probabilistic) randomness is a property of a process. In
    >statistical terms, we cannot say that one sequence is more random than
    >another. A sequence either comes from a random process or it doesn't.

    Another good point.
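
    To make the contrast concrete, here is a short sketch (my own
    construction; the sequence length is a hypothetical choice). Under
    a fair-coin process every length-n sequence has exactly the same
    probability, 2^-n, so neither sequence below is "more random" than
    the other in the statistical sense; yet their compressed lengths,
    a crude proxy for Kolmogorov complexity, differ greatly:

        import random, zlib

        n = 4096
        # regular sequence; probability 2^-n under fair tosses
        patterned = "HT" * (n // 2)
        # "typical" sequence; probability is also exactly 2^-n
        tossed = "".join(random.choice("HT") for _ in range(n))

        for label, seq in (("patterned", patterned), ("tossed", tossed)):
            comp = len(zlib.compress(seq.encode(), 9))
            print(label, "-log2 P =", n, "bits; compressed to",
                  comp, "bytes")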

    >It's also interesting to note that Kolmogorov complexity can give a result
    >opposite to that of Dembski's "specified complexity". The first sequence of
    >coin tosses above has lower Kolmogorov complexity than the second, but
    >higher Dembski specified complexity (with respect to the chance hypothesis
    >that the coins were tossed fairly).

    Yeah. I have not read Dembski's writings extensively, but from what
    I have read he does seem to use this notion in opposite ways. The
    best understanding I have gotten of what he is claiming about such
    "specified complexity" is that in order for something (such as a
    symbol sequence) to have this property it needs to be *both* highly
    compressible *and* still highly complex. The only way these two
    opposing properties can be simultaneously satisfied is if the
    original sequence is so *very* long that even after it has been
    compressed down to the length of its complexity (which is much
    smaller than the original length because of its high compression
    ratio), it *still* is very long as a most-compressed sequence.
    Of course this is not the same as his other criterion of taking the
    negative logarithm of the probability of a given sequence, since
    this latter value depends on knowing just what the probability
    distribution for generating the possible sequences is, whereas the
    former notion requires only a copy of the actual sequence(s) in
    question.
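
    As a rough sketch of how those two requirements could both be met
    (my own construction, not anything from Dembski; the bias and
    length are hypothetical choices), consider tosses of a biased coin
    with P(H) = 0.9. The output is highly compressible, since the
    entropy rate is well below 1 bit per toss, yet the compressed form
    still grows in proportion to n, so for large n it remains very
    long. The sketch also shows the contrast just mentioned: the
    negative log probability requires knowing the bias p, while the
    compressed length requires only the sequence itself:

        import math, random, zlib

        random.seed(0)
        n = 200000
        p = 0.9    # the generating distribution: P(H) = 0.9

        seq = "".join("H" if random.random() < p else "T"
                      for _ in range(n))

        # compressibility: needs only the sequence itself
        comp = len(zlib.compress(seq.encode(), 9))
        print("compressed", n, "symbols to", comp, "bytes")

        # -log2(probability): needs the distribution p as well
        k = seq.count("H")
        bits = -(k * math.log2(p) + (n - k) * math.log2(1 - p))
        print("-log2 P =", round(bits), "bits")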

    David Bowman
    David_Bowman@georgetowncollege.edu


