Re: design: purposeful or random? 1/2

Brian D Harper (harper.10@osu.edu)
Mon, 17 Feb 1997 23:32:51 -0500

At 06:02 AM 2/17/97 +0800, Steve Jones wrote:
>Group
>
>On Mon, 10 Feb 1997 21:42:27 -0500, Brian D. Harper wrote:
>
>>BH>I don't mean this to be negative, my suggestion is that you just
>>let slide stuff more than a week or two old and get caught up.
>>You'll be much more effective this way.
>
>>SJ>OK. I might do that.
>
>I haven't unsubscribed, but I am catching up by only responding to
>posts (with one or two exceptions) that have my name in them. After a
>bit of a lag, this is now working. I hope to post every couple of
>days.
>
>BH>To help you out, I've decided not to reply to any of your most
>recent group of posts except for this one.
>
>SJ>That's fine by me. Neither Brian, nor anyone else need feel that they
>have to reply to my posts or even read them! They are addressed to
>the Group and I know from private messages I get that some lurkers
>find them useful.

If you reply to one of my posts, then I take it as a reply to me
regardless of how it's addressed.

[...]

>
>SJ>...I will stick to laymen's definitions like "specified
>>complexity".
>
>>SJ>"Information in this context means the precise determination, or
>>specification, of a sequence of letters. We said above that a
>>message represents `specified complexity.' We are now able to
>>understand what specified means. The more highly specified a thing
>>is, the fewer choices there are about fulfilling each instruction.
>>In a random situation, options are unlimited and each option is
>>equally probable." (Bradley W.L. & Thaxton C.B., in Moreland J.P.
>>ed., "The Creation Hypothesis", 1994, p207)
>
>[...]
>
>>B&T's statement: "In a random situation, options are unlimited and
>>each option is equally probable."
>>
>>I followed up on this oops later in this thread in an attempt to
>>clarify, you probably haven't seen it yet.
>
>SJ>There is nothing wrong with this definition. It is *exactly* what
>"random" means:
>
>"Perhaps the most important sample type is the random sample. A
>random sample is one that allows for equal probability that each
>elementary unit will be chosen...Random numbers are digits generated
>by a process which allows for equal probability that each possible
>number will be the next." (Lapin L., "Statistics for Modern Business
>Decisions", 1973, pp194-195)
>

I'm curious whether you took a look at the follow-up post that I
mentioned above. There you will find that the six-volume <Encyclopedia
of Mathematics> disagrees with <Statistics for Modern Business
Decisions>: the equal-probability situation is only a special case.
They give a very good example (also used by Yockey
and by myself in another thread) of tossing a pair of fair dice
and recording the sum. Surely you would agree that this is a
"random situation". Work it out and you will see that the various
random events do not all occur with equal probability.
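
To make this concrete, here is a little Python sketch (my own, purely
for illustration) that enumerates the 36 outcomes and tallies the
probability of each sum:

==========begin Python sketch==========
from collections import Counter
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of tossing two fair dice
# and tally the probability of each recorded sum.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))

for total in sorted(counts):
    prob = Fraction(counts[total], 36)
    print(f"sum = {total:2d}   P = {prob}   ({float(prob):.4f})")
=======================================

The outcomes of the individual dice are equally probable, but the
recorded sums are not: P(2) = 1/36 while P(7) = 6/36. A perfectly
good random situation, with no equal probabilities in sight.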

In the real world it seems to me that very few processes involve
events that occur with equal probability. The usual examples cited
involve games of chance, and even here great care must be exercised
to ensure that the probabilities really are all equal.

But the real issue here is not a definition of randomness but
rather whether the real physical process of interest involves
a random selection from several possibilities, all of which occur
with equal probability.

>SJ>Brian appears to be getting mixed up with the *Darwinist-biological*
>definition of random as in "random mutation" (which indeed does not
>mean that "each option is equally probable"), but B&T are not talking
>about "random" as in mutation.
>

OK, Steve,

(a) specifically what process are they talking about?

(b) do all the events in this process occur with equal probability?

Suppose we take the protein-first scenario for the origin of life.
I seem to remember that Bradley does a probability calculation on
this somewhere. Anyway, in this situation we have a protein forming
by chance in the hypothetical primordial soup. Is it reasonable to
assume that each amino acid adds to a growing chain with equal
probability? If not, is the scenario still a "random situation"?
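
To see how much rides on that assumption, here is a toy Python
calculation. The skewed frequencies below are made-up numbers, not
measured prebiotic abundances, and this is not Bradley's calculation;
the point is only the arithmetic:

==========begin Python sketch==========
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Case 1: every amino acid adds with equal probability 1/20.
uniform = {aa: 1 / 20 for aa in AMINO_ACIDS}

# Case 2: hypothetical unequal probabilities (made-up numbers),
# renormalized so they sum to 1.
skewed = dict(uniform)
skewed.update({"G": 0.15, "A": 0.12, "W": 0.005, "C": 0.005})
norm = sum(skewed.values())
skewed = {aa: p / norm for aa, p in skewed.items()}

def chain_probability(seq, probs):
    """P(one specific chain) if residues add independently."""
    p = 1.0
    for aa in seq:
        p *= probs[aa]
    return p

def entropy_per_residue(probs):
    """Shannon entropy in bits per residue."""
    return -sum(p * math.log2(p) for p in probs.values())

seq = "GAWC"  # an arbitrary 4-residue chain
print(chain_probability(seq, uniform))   # (1/20)^4 = 6.25e-06
print(chain_probability(seq, skewed))    # no longer (1/20)^4
print(entropy_per_residue(uniform))      # log2(20), about 4.32 bits
print(entropy_per_residue(skewed))       # strictly less than 4.32
=======================================

Once the options are not equally probable, the familiar
1/20-per-residue bookkeeping no longer applies, which is exactly my
worry about calling such a scenario a "random situation" in B&T's
sense.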

>SJ>BTW, Brian just skipped over the main point which was:
>
>"Information in this context means the precise determination, or
>specification, of a sequence of letters. We said above that a
>message represents `specified complexity.' We are now able to
>understand what specified means. The more highly specified a thing
>is, the fewer choices there are about fulfilling each instruction."
>(Bradley W.L. & Thaxton C.B., in Moreland J.P. ed., "The Creation
>Hypothesis", 1994, p207)
>
>Perhaps he would care to comment on Bradley & Thaxton's
>definition of "specified" = "fewer choices"?
>

I would really like to, but it's tough. I'm still slowly reading
through Dembski's paper; maybe that will help. I want to measure
something. Given three or four objects, can we measure their
specified complexity accurately enough to rank them in order
of increasing specified complexity? We have to be able to do
this, or we will not be able to tell whether specified complexity
increases or decreases, is conserved, or what.
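
The closest thing to a measurement that I know how to attempt is a
crude one: use compressed length as a computable stand-in for the
Kolmogorov-Chaitin algorithmic complexity (which is uncomputable in
general, so a compressor only gives an upper bound). This ranks
objects by complexity; it says nothing yet about the "specified"
part, which is the hard part. A Python sketch:

==========begin Python sketch==========
import random
import zlib

def complexity_proxy(data: bytes) -> float:
    """Compressed bytes per input byte: a crude, compressor-dependent
    upper bound on algorithmic complexity per symbol."""
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(0)
samples = {
    "ordered": b"AB" * 400,
    "english": (b"But the real issue here is not a definition of "
                b"randomness but rather whether the real physical "
                b"process of interest involves a random selection "
                b"from several possibilities."),
    "random":  bytes(rng.randrange(256) for _ in range(800)),
}

# Rank the three objects by the proxy, lowest complexity first.
for name in sorted(samples, key=lambda n: complexity_proxy(samples[n])):
    print(f"{name:8s} {complexity_proxy(samples[name]):.3f}")
=======================================

The ranking comes out ordered < english < random, as one would
expect for complexity alone. But a random string is not specified,
so whatever measures specified complexity cannot be this simple.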

[...]

>SJ>Indeed. The word "information" as defined by information theory does
>not deal with meaning:
>
>"Some have attempted to develop a theory of meaning from these ideas.
>Shannon (1949) warned against this at the outset of his paper. The
>assignment of meaning or in biology, specificity, to certain
>particular member of the ensemble lies outside information theory."
>(Yockey H.P., "An Application of Information Theory to the Central
>Dogma and the Sequence Hypothesis", Journal of Theoretical Biology,
>46, 1974, pp371-372)
>
>which is the whole point of "information" in biology and human
>languages. That's why I don't accept Brian's "information theory"
>definition of "information" and prefer to use "specified complexity"
>instead.
>

Again, I am wondering why you are quoting Yockey when he disagrees
with you completely. You claim that information theory is concerned
with the engineering problem of communicating over a channel. It
is true that this was the application that Shannon had in mind.
This does not mean that it is the only application of information
theory. Nevertheless, you make a very good point that the application
of information theory to other situations requires some justification.
Yockey provides this justification by showing that electronic
communication systems are isomorphic with the genetic information
system.
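
The isomorphism is easy to caricature in code: the genetic code is
literally a decoding table, a many-to-one map from 64 codons onto
20 amino acids plus stop, just as a channel decoder maps received
words onto source symbols. A fragment of the standard table (only a
handful of codons shown) is enough to see the shape of it:

==========begin Python sketch==========
# A fragment of the standard genetic code: codons (received words)
# decoded to amino acids (source symbols). Note the many-to-one
# redundancy, familiar from engineered codes.
CODON_TABLE = {
    "AUG": "Met",                     # also the start signal
    "UUU": "Phe", "UUC": "Phe",
    "AAA": "Lys", "AAG": "Lys",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Decode an mRNA message codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("AUGUUUGGAAAGUAA"))   # ['Met', 'Phe', 'Gly', 'Lys']
=======================================

This is a cartoon, of course; Yockey's argument is the rigorous
version of this correspondence.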

Is it possible to justify, in a similarly rigorous fashion, the
appropriateness of natural language as an analogy to biological
information? Or the notion that specified complexity describes
biological information?

For clarification, I am in no way, shape, or form requiring Steve
to do this.
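
Shannon's warning (quoted again below) is easy to demonstrate:
shuffle a meaningful sentence and its single-symbol Shannon entropy
is unchanged, since the entropy depends only on symbol frequencies
and not at all on meaning. A Python sketch:

==========begin Python sketch==========
import math
import random
from collections import Counter

def shannon_entropy(text):
    """Shannon entropy in bits/symbol from character frequencies."""
    n = len(text)
    return -sum(c / n * math.log2(c / n)
                for c in Counter(text).values())

message = "the precise determination of a sequence of letters"
letters = list(message)
random.Random(42).shuffle(letters)
scrambled = "".join(letters)

print(shannon_entropy(message))     # identical to ...
print(shannon_entropy(scrambled))   # ... this, but the meaning is gone
=======================================

Both strings carry the same Shannon information; only one of them
means anything. That is the sense in which semantics lies outside
the theory.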

>BH>The pioneers of information theory warned of this trap from the
>>beginning:
>
>> ================================
>The fundamental problem of communication is that of reproducing at
>one point either exactly or approximately a message selected at
>another point. Frequently the messages have MEANING; that is they
>refer to or are correlated according to some system with certain
>physical or conceptual entities. These semantic aspects of
>communication are irrelevant to the engineering problem. --Shannon,
><Bell System Technical Journal> v27 p379 (1948).
>================================
>
>SJ>Yes. See above. Information theory is concerned with "the engineering
>problem" of "reproducing at one point either exactly or approximately
>a message selected at another point". It has nothing to do with the
>creation of the meaning in the first place.
>
>BH>To tie this in with biology we can observe that the genetic
>>information processing system can process the information for
>>forming a non-functional protein as easily as it can for a
>>functional protein.
>
>I am not so sure that this is completely true. Some "non-functional
>proteins" (eg. D-amino acids) may be unable to be processed by "the
>genetic information processing system."
>

Perhaps someone knowledgeable in molecular biology can help us
out here. Whether the genetic information system can process
D-amino acids came up in a dispute between Yockey and Avshalom
Elitzur in the <Journal of Theoretical Biology>. Here I'll quote
briefly from Yockey's reply to Elitzur. All quotations are
from Elitzur's paper (JTB 168:429-459, 1994).

==========begin quote==========

There are a number of other mistakes and blunders in Elitzur's
paper that I shall deal with briefly:

... "For most scientists, as has been noted, the preference
exhibited by all living forms for L-amino acids and D-sugars
is _due to a mere coincidence_ in the appearance of the ancestor
of all organisms (Eigen, 1992; Shapiro, 1986)." Elitzur is
unaware of the fact that the genetic information system is
capable of forming D amino acids and placing them in specified
locations in antibiotics. This can hardly be "due to a mere
coincidence".
-- Hubert Yockey, JTB 176:349-355, 1995.
======================================

[...]

>>Yockey==============================================
>>The entropy that is applicable to the case of the evolution of the
>>genetic message is, as I believe the reader should now be convinced,
>>the Shannon entropy of information theory or the Kolmogorov-Chaitin
>>algorithmic entropy. ...
>>
>>The Kolmogorov-Chaitin genetic algorithmic entropy is increased
>>in evolution due to the duplications that occurred in DNA. [...]
>>Thus the genetic algorithmic entropy increases with time just as
>>the Maxwell-Boltzmann-Gibbs entropy does. Therefore creationists,
>>who are fond of citing evolution as being in violation of the
>>second law of thermodynamics (Wilder-Smith, 1981; Gish, 1989), are
>>hoist by their own petard: evolution is not based on increasing
>>_order_, it is based on increasing _complexity_.
>
>SJ>I am not aware that "Wilder-Smith" or "Gish" actually say that
>"evolution" is "in violation of the second law of thermodynamics".
>I would invite Brian to post where he or Yockey claims they do. In
>his chapter "Creationist Theory: Popular Evolutionist
>Misunderstandings", Ratzsch says:
>
>"Perhaps the most prevalent of the misconstruals of creationism
>involves the Second Law of Thermodynamics." (Ratzsch D.L., "The
>Battle of Beginnings", 1996, p91)
>

The references in parentheses above are:

Wilder-Smith (1981). <The Natural Sciences Know Nothing of Evolution>

Gish, D. T. (1989). In a discussion of the origin of life on radio
station KKLA, Los Angeles, CA, on 29 June, 1989 with Dr. H.P.
Yockey, Dr. Gish repeatedly insisted that evolution was in
contradiction of the Second Law of Thermodynamics in spite of
my explanation to the contrary.

Brian Harper
Associate Professor
Applied Mechanics
Ohio State University
"Aw, Wilbur" -- Mr. Ed