RE: ABCDEFGHIJKMNOPQRSTUVWXYZ

Glenn Morton (grmorton@waymark.net)
Mon, 29 Dec 1997 21:50:30 -0600

At 07:14 PM 12/29/97 -0600, John E. Rylander wrote:
>Glenn,
>
>Occasionally one compression algorithm will compress information A to be
>smaller than B, while another makes B smaller than A, so in some cases it's
>tricky to use practical compressibility even for comparisons.

In the comparison it doesn't matter which algorithm is used; only the
shortest result counts. If one method compresses a sequence further than
another, you simply use the algorithm that compresses it most.
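
For concreteness, here is a minimal sketch in Python (with the
standard-library compressors standing in for whichever algorithms happen
to be on hand) of keeping only the shortest result:

    import bz2, lzma, zlib

    def shortest_compression(data: bytes) -> bytes:
        # Try each available compressor and keep whichever output
        # is shortest; which algorithm "won" is irrelevant, since
        # only the minimum length matters for the comparison.
        candidates = [zlib.compress(data, 9),
                      bz2.compress(data, 9),
                      lzma.compress(data)]
        return min(candidates, key=len)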

>
>Also, it -is- easy to come up with an algorithm that compresses the
>Britannica -- and only the Britannica -- down to a 1 bit file. (In my
>example, everything but the EB is compressed via PKZIP [a ubiquitous
>PC-based compression program].)
>
>The problem, as Yockey would point out, is that my compression/decompression
>algorithm simply contains the entire EB within it (to recognize it during
>compression, and to insert it during decompression of the unique 1-bit
>file), and so is presumably larger than the -normally- compressed EB itself,
>and so (by agreement) doesn't count as a success.
>
>But then I wonder -- if information is compressible IFF a smaller program
>can generate it, then are small quantities of information incompressible per
>se? E.g., "111111" seems pretty compressible, but I bet it'd be hard to
>write a 5-instruction program (especially using Turing machine instructions)
>to generate it.

Put into a compressed machine code, you could conceivably have something like

6451

which is shorter than the original six-character string.
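
As an illustration, a minimal run-length sketch in Python (the scheme is a
stand-in of my own, not any particular machine code) shows how "111111"
collapses to a shorter count-plus-symbol form:

    def run_length_encode(s: str) -> str:
        # Collapse each run of a repeated character into a count
        # followed by the character, e.g. "111111" -> "61".
        out = []
        i = 0
        while i < len(s):
            j = i
            while j < len(s) and s[j] == s[i]:
                j += 1
            out.append(f"{j - i}{s[i]}")
            i = j
        return "".join(out)

    print(run_length_encode("111111"))   # prints "61"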
>
>Also, who gets to specify which program instructions are available?
>
>But this is a tangent. I'll agree to say no more if you do. :^>

Then I too will shut up. :-)

glenn

Adam, Apes, and Anthropology: Finding the Soul of Fossil Man

and

Foundation, Fall and Flood
http://www.isource.net/~grmorton/dmd.htm