>In my mind I relate the information that Meyer is talking about to the
functional content of a working program of, say, a couple of hundred thousand
lines of Fortran code. Let's assume that we have another really smart program
that compresses the Fortran source as much as possible. This means that dead
code is eliminated, comments go, variable names are shortened, and so on,
applying language rules until all the redundancy in the program is gone and
the program is totally unreadable by humans.
Surely it is XML. ;)
And in that case it would be bloatware. I remember ganglia sending a
megabyte-sized packet whose data content was only about 30 bytes. XML is
now being used as a data transport, and that goes to show software engineers
are idiots. XML is great as a declarative specification/configuration
language, but it's not for humans when it's in a data stream.
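To make that concrete, here is a toy sketch in Python. The XML envelope below
is hypothetical (it is not ganglia's actual schema, just the general shape of a
monitoring packet), but it shows how a few dozen bytes of real data pick up a
couple of hundred bytes of element names and attributes before they ever hit
the wire:

import xml.etree.ElementTree as ET

# The useful payload: one host metric, a few dozen bytes of actual data.
payload = "host=node17 metric=load_one value=0.42"

# A hypothetical envelope of the kind a monitoring daemon might emit.
root = ET.Element("CLUSTER", NAME="grid", LOCALTIME="1258848991")
host = ET.SubElement(root, "HOST", NAME="node17", IP="10.0.0.17",
                     REPORTED="1258848991", TN="12", TMAX="20", DMAX="0")
ET.SubElement(host, "METRIC", NAME="load_one", VAL="0.42", TYPE="float",
              TN="12", TMAX="70", DMAX="0", SLOPE="both")

xml_bytes = ET.tostring(root)
print(len(payload), "bytes of data")        # ~38
print(len(xml_bytes), "bytes on the wire")  # a couple of hundred, before repetition

And that is before the envelope gets repeated for every host and every metric
in the cluster.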
>Now we compress what is left using the normal compression tools, although I
wonder how much compression one would get.
Well, Mr. Huffman says it will get larger. Been there, done that, seen it
in action.
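For anyone who wants to see it, here is a quick Python sketch (zlib here, but
any DEFLATE-style compressor behaves the same way): once the redundancy is
gone the input is effectively random bytes, and the compressor can do nothing
but add its own framing overhead.

import os, zlib

# Stand-in for input whose redundancy has already been squeezed out.
data = os.urandom(100000)

packed = zlib.compress(data, 9)

print(len(data))    # 100000
print(len(packed))  # slightly larger: header, checksum and stored-block overhead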
The rest of your post is about specification, not complexity. Anti-ID
people know one word: complexity. They don't _seem_ to talk about
specification. Am I wrong? And nobody talks about partition functions. Why
partition functions? Because chemical engineering is based on statistics.
In my view a single mutation isn't sufficient. It needs other parts, parts
that are produced by previous mutations but which may not be used until a
later mutation comes along. So it's the concatenation of mutations that
counts. That spells a rewriting of the partition functions in the equations
that govern the physics. A cell isn't just biology; it's really chemical
engineering. And that mathematics is what governs populations of molecules
everywhere, organic or inorganic. Do you disagree?
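For anyone who hasn't run into the term: the canonical partition function of
statistical mechanics is just the weighted sum over a system's states,

    Z = sum over states i of exp(-E_i / (k_B T)),

and equilibrium quantities such as free energies, equilibrium constants and
reaction yields all fall out of it. That is the statistical machinery I have
in mind when I say a cell is chemical engineering as much as biology.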
On Sat, Nov 21, 2009 at 5:56 PM, Dave Wallace <wmdavid.wallace@gmail.com> wrote:
> Rich
>
> I am about halfway through reading Signature in the Cell by Dr. Stephen
> Meyer. He takes great pains to indicate that what he is talking about
> is not communications channel capacity, and he does not mention compression
> or other aspects of Information Theory at all, at least as far as I have
> read. He specifically says that what he is talking about is not Shannon
> information. As best I understand him, he is talking about a functional
> design specification at the most basic reduced level.
>
> From your online discussion with Del Tackett
> Rich Blinne on May 22nd, 2008 10:16 pm
>
>> It is also presumed that DNA is a design specification. A more complex
>> design requires a more complex specification. But, there are two paradoxes
>> in biology known as the c-value and g-value paradoxes. More complex life
>> does not necessarily have a higher DNA weight nor higher numbers of genes.
>> The Human Genome Project overestimated the number of genes going in, with
>> estimates of 80,000-140,000 genes when the real number is around 30,000. What’s
>> going on is that there is a lot of “random” alternative splicing producing
>> multiple proteins from the same gene. Before you say “aha,” note that humans
>> do not have the record for alternative splicing. The fruit fly does with
>> 38,000 splice variants. Thus, evolution does not need to generate new
>> “information” because ID’s concept of information is simply flawed.
>>
>
> In my mind I relate the information that Meyer is talking about to the
> functional content of a working program of, say, a couple of hundred thousand
> lines of Fortran code. Let's assume that we have another really smart program
> that compresses the Fortran source as much as possible. This means that dead
> code is eliminated, comments go, variable names are shortened, and so on,
> applying language rules until all the redundancy in the program is gone and
> the program is totally unreadable by humans. Now we compress what is left
> using the normal compression tools, although I wonder how much compression
> one would get. As I see it, the information the ID folks are talking about
> would be proportional to the final compressed program size, plus the
> restriction that the results computed by the code must be correct, i.e. the
> chemical strings/proteins in the cell must be able to function, reproduce,
> whatever. Now the older programmers here know that replacing
>
> DO 3 I = 1,3
> by
> DO 3 I = 1.3
>
> could well result in a broken, non-functional program, as a loop is turned
> into an assignment statement. Thus the compressed size is only part of the
> issue, as the program must also function correctly. (Note to non-programmers:
> what I illustrate reflects extremely bad programming language design, but
> when Fortran was designed back in the 1950s people did not know any better.
> It is doubtful that we have come very much further, but I won't get started
> on that issue.)
>
> AFAIK this kind of information is NOT what people talk about in Information
> Theory. Sure, the compressed size and transmission characteristics are covered
> in Info Theory, but not the requirement to execute properly.
>
> It seems to me that the kind of process I have described would get rid of
> the equivalent of spliced variants, non-coding regions, and so on. We also know
> that a totally different algorithm might also produce a properly functioning
> program and have a much shorter length. Thus Meyer is trying to estimate
> the approximate minimal amount of information/machinery needed to make the
> first working cell(s). The complexity is staggering since the dependencies
> are circular. Maybe you don't think so, but I do. I think you are being a
> little unfair to people who are not information theorists; you should assume
> the most favorable interpretation of their meaning.
>
> Dave W
>
To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.