Randomness and complex organization via evolution

From: Chris Cogan (ccogan@telepath.com)
Date: Tue Jul 11 2000 - 02:18:20 EDT


    The following little essay was written for another list, but it is relevant
    to the topic of this list, so I reproduce it here. It was originally
    written in response to the old creationist canard that complex organization
    cannot be obtained from random processes. The idea is that this would be
    like the parts of a BMW miraculously assembling themselves. Of course,
    though the evolutionary process involves a kind of randomness, it is not
    even *remotely* like such a miraculous occurrence, as should be clear from
    this essay, if not from ten seconds' thought.
    ------------------------

    Why is it that replication-with-variation processes are able to generate
    complex organization from randomness?

    One reason is that randomness *is* complex organization.

    But, more importantly, these processes work by *accumulating* small changes
    in organization over a number of generations of replication and
    near-replication. Selection is not necessary. In fact, the process works
    *better* if there is no selection at all, and every copy is saved for
    further replication.

    To see how such an unintelligent process can (and does) work, imagine a
    machine that copies strings of information, with a generally high degree of
    accuracy. However, it is connected to a cosmic ray detector that sends
    signals to the machine, and the machine occasionally uses these signals as
    a basis for making modifications to the strings of information being
    replicated. Sometimes an extra bit or two is added or deleted per million
    bits (or more) of information. Sometimes a bit or two is simply inverted in
    value (a zero is replaced with a one, or a one is replaced with a zero).

    The machine is started up and fed a single string of information consisting
    of one bit set to zero. Replication occurs (for convenience of discussion)
    at fixed intervals, and each replication consists of doubling the
    population by making one copy of every existing string of information, so
    that, after thirty-two replications, there are some four billion strings of
    information, some few of which are no longer simply a single bit set to
    zero. Some are a single bit set to one, and a few are two or even three-bit
    strings set to various combinations of bit-values.
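
    To make the thought experiment concrete, here is a minimal sketch of it in
    Python. The per-bit variation rate and the number of doublings are my own
    choices (the rate is exaggerated far above the "per million bits" figure so
    that visible variation appears in a run small enough to execute); nothing
    else is added to the machine described above.

        import random

        def copy_with_variation(s, rate=0.01):
            """Copy a bit string; rarely insert, delete, or invert a bit."""
            out = []
            for bit in s:
                r = random.random()
                if r < rate / 3:
                    continue                                # delete this bit
                elif r < 2 * rate / 3:
                    out.append(bit + random.choice("01"))   # insert an extra bit
                elif r < rate:
                    out.append("1" if bit == "0" else "0")  # invert the bit
                else:
                    out.append(bit)                         # faithful copy
            return "".join(out) or "0"          # never return an empty string

        population = ["0"]              # a single string: one bit set to zero
        for generation in range(20):    # 20 doublings -> about a million strings
            population += [copy_with_variation(s) for s in population]

        variants = set(population)
        print(len(population), "strings,", len(variants), "distinct variants")
        print("longest so far:", max(variants, key=len))

    Even with no selection at all, the set of distinct variants keeps growing
    from one doubling to the next, which is the point being made above.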

    Thus, even though all we have done is copy the original string with
    occasionally random changes, we *already* have some complexity that was not
    present in the original string.

    After sixty-four generations, those strings that were different from the
    original string after generation 32 now *each* have some four billion
    descendants, *many* of which are *significantly* more complex than the
    original single-bit string. Some of the strings, at this point, are many
    bits long.

    After a sufficiently large number of generations, we will have strings that
    are even *more* complex than the human genome, because they will
    effectively be *completely* random. At some point this process will have
    generated enough strings with enough variations that a simple uniform
    encoding will map some string onto virtually every genome that exists or
    ever has existed, with none left out.
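
    One "simple uniform encoding" of the kind gestured at here (my own choice,
    purely for illustration) is two bits per nucleotide, under which any
    sufficiently long bit string decodes to some DNA sequence, and every DNA
    sequence is the image of some bit string:

        # Two bits per base: every even-length bit string decodes to exactly
        # one DNA sequence, and every DNA sequence is reachable from some bits.
        BASES = {"00": "A", "01": "C", "10": "G", "11": "T"}

        def bits_to_dna(bits):
            bits = bits[:len(bits) - len(bits) % 2]   # drop a trailing odd bit
            return "".join(BASES[bits[i:i + 2]] for i in range(0, len(bits), 2))

        print(bits_to_dna("0011100100"))              # -> ATGCA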

    If this is not complex enough, consider that the same kind of generation
    process will work with any "building block" medium of storing information,
    and that the information need not be stored in strings (though this is
    convenient because human genetic information is stored string-wise). Any
    kind of structure that can be replicated with variations, and that allows
    variations to be accumulated over time, will work. (This can be proved by
    the simple expedient of showing that any such structure can be mapped to
    the case of strings of information, which has already been examined.)
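
    As a small illustration of that mapping (the structure and the
    serialization below are my own, chosen only for concreteness), a replicable
    tree-like structure can be written out as a string and read back without
    loss, so variation and accumulation on the structure correspond to
    variation and accumulation on its string form:

        import json

        # A tree-like "building block" structure: nested lists of bits.
        tree = [0, [1, 0], [[1], 0]]

        # Serializing it gives an ordinary string; any variation to the
        # structure (adding, deleting, or changing a node) shows up as a
        # variation in the string, so the string-based argument carries over.
        as_string = json.dumps(tree)
        print(as_string)                        # [0, [1, 0], [[1], 0]]
        print(json.loads(as_string) == tree)    # True: the mapping is reversible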

    In the real world, of course, such a process generates *too much*
    complexity, much more than the environment can support, so *some* kind of
    selection mechanism will always eventually come into play.

    But since, in the real world, variations *do* occur and *are*
    (empirically) cumulative, the process works despite being severely
    limited by selective processes. Selection does not eliminate all complex
    organization. It eliminates only complex organizations that do not have
    locally-needed attributes for being perpetuated. Those that *are*
    perpetuated are free to be varied yet more (and they are) in subsequent
    generations.

    I described the randomizer as a cosmic-ray detector above because I wanted
    to make it clear even to the most rockheaded that the process was not
    intelligent, and that the complexity that ended up in the strings resulted from
    the random input from the cosmic ray detector.

    In fact, if the input were *not* random, the process quite possibly would
    not work as well. Consider what would happen if every input were simply an
    instruction to set the existing bit to zero in making any copies. This
    would ensure perfect uniformity and simplicity in the entire population,
    even after billions of generations. Or suppose the only change ever made
    were the addition of a zero bit to the string; then all strings would
    simply be strings of zeroes.
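
    In the toy simulation above, this degenerate case is easy to see: replace
    the randomizer with the fixed rule "set every bit to zero" and no
    variation can ever accumulate, no matter how many generations pass.

        def copy_always_zero(s):
            return "0" * len(s)     # the "mutation" rule: every bit becomes zero

        population = ["0"]
        for generation in range(20):
            population += [copy_always_zero(s) for s in population]

        print(set(population))      # {'0'}: perfect uniformity, zero complexity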

    Complexity can only come from complexity, *in a sense*. But randomness *is*
    complexity, though not all complexity is what we would call randomness.
    However, we can encode complexity in a way that makes it all *appear*
    random to all the known tests for randomness. I have even occasionally
    toyed with the idea that the universe (or all of Existence) is,
    effectively, one giant blob of randomness, and that what we see as
    occasional *simplicity* (i.e., laws of physics that apply to a wide range
    of phenomena, etc.) is really the result of *decoding* the randomness into
    a form that introduces stretches of relative simplicity. This would be the
    reverse of encoding simplicity into apparent randomness. Interestingly, it
    should be possible to devise a decoding (or decompression) algorithm that
    would work with *any* random string of information and produce something
    with considerable order in it. Such products would nearly always be larger
    than the original random string, but that would be the "price" paid for
    gaining intelligible simplicity.
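
    A crude decompression scheme along these lines (a toy of my own, not
    something worked out in the essay): read a random bit string a few bits at
    a time, treat each group as a repeat count, and emit long uniform runs.
    The output shows obvious order, and it is nearly always longer than the
    random input.

        import random

        def decode_to_order(bits, chunk=4):
            """Expand a random bit string into a longer, visibly ordered string."""
            out, symbol = [], "a"
            for i in range(0, len(bits), chunk):
                count = int(bits[i:i + chunk].ljust(chunk, "0"), 2) + 1  # 1..16
                out.append(symbol * count)               # one long uniform run
                symbol = "b" if symbol == "a" else "a"
            return "".join(out)

        random_bits = "".join(random.choice("01") for _ in range(32))
        print(random_bits)
        print(decode_to_order(random_bits))   # e.g. aaaaaaabbbaaaaaaaaaaaabb...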

    What is actually occurring in evolution is *simplification* of complexity
    locally (at the expense of increasing it elsewhere). Natural selection
    kills off complexity that is *too* complex. The original complexity of the
    Earth (with all of its different elements, etc.) and the randomness of the
    flow of energy through Earth's biosphere provide the complexity, the
    "stirring" that keeps "feeding" complexity into genomes. Most of this
    complexity is dissipated by natural selection, but a little of it "sticks"
    and provides the basis for accumulating further complexity in future
    generations.

    Consider a world in which everything was perfectly uniform, with no
    variations at all from one point in space to another, and no flow of
    energy. It would be in conditions such as *these* that it would be
    miraculous to find life somehow evolving. The rich variability (i.e.,
    "randomness") of conditions and energy flows is the *source* of the
    complexity of life, not a *hindrance* to achieving such complexity.


