Re: Evolutionary computation (was: Where's the Evolution?)

Rich Daniel (rwdaniel@dnaco.net)
Thu, 8 Apr 1999 12:15:24 -0400 (EDT)

> > First of all, you're using two different meanings of "indefinite".
> > When you say that nature cannot produce complexity that increases
> > "indefinitely", you mean "without limit".
>
> Actually, I don't mean "without limit."

Then your choice of words was unfortunate. You did not succeed in
communicating the idea that you had in mind.

> Anyway, I looked into that example
> (linked above) a little closer and found that I couldn't be more right. The
> circuit is always going to respond to frequency.

Note that the circuit (http://www.newscientist.com/ns/971115/features.html)
responds to the particular words "go" and "stop", not to particular
frequencies. Pure frequency discrimination was, however, used as an
intermediate goal.

> All he did was tune the circuit. Tuning (optimization) is not the
> creation of complexity by any definition I know of.

Two points: First, he did more than tune a circuit. Field programmable
gate arrays (if I understand them correctly -- I didn't know anything about
them before, but I glanced through a book on them yesterday) allow you to
actually build a circuit on the fly, specifying which gates should be
connected to which other gates, and what kind of gate should be used at
each position.
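
To make that concrete, here's a toy sketch (my own illustration in
Python, not Thompson's actual tool chain or the real XC6216
configuration format) of how a flat bit string can specify a circuit:
each cell gets a gate type and two input sources, so flipping bits
literally rewires the hardware rather than merely adjusting a parameter.

    import random

    GATES = {
        0: lambda a, b: a and b,         # AND
        1: lambda a, b: a or b,          # OR
        2: lambda a, b: a != b,          # XOR
        3: lambda a, b: not (a and b),   # NAND
    }
    N_CELLS = 8              # toy size; the real experiment used 100 cells
    BITS_PER_CELL = 8        # 2 bits of gate type + 3 bits per input source

    def evaluate(genome, inputs):
        # Decode the bit string into gates and wiring, then propagate.
        signals = list(inputs) + [False] * N_CELLS
        for cell in range(N_CELLS):
            c = genome[cell * BITS_PER_CELL:(cell + 1) * BITS_PER_CELL]
            gate = GATES[c[0] * 2 + c[1]]
            src_a = (c[2] * 4 + c[3] * 2 + c[4]) % len(signals)
            src_b = (c[5] * 4 + c[6] * 2 + c[7]) % len(signals)
            signals[len(inputs) + cell] = gate(signals[src_a], signals[src_b])
        return signals[-1]   # treat the last cell as the circuit's output

    genome = [random.randint(0, 1) for _ in range(N_CELLS * BITS_PER_CELL)]
    print(evaluate(genome, [True, False]))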

Secondly, even if you do consider it to be just tuning, it still created
complexity. Before the FPGA was programmed, it was just 100 randomly
initialized parts sitting there doing nothing in particular, just like a
snowflake is some huge number of atoms that don't interact in any
functional way. After programming, the 100 different cells co-operated
to produce the desired result when given the proper input. If this is
not new complexity, then you'll have to give me a better explanation of
what the word means.

> He could just as well have fed the IC a sequence from 0 to whatever
> works best. No selection, no randomness, and achieved the same thing...

He could indeed. But it would have taken very much longer. Even if you
only try two different settings for each of the 100 cells, that's 2**100
(about 10**30) combinations. Even if each trial only took a microsecond,
it would have taken about 10**17 years.
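
Spelling out the arithmetic (a quick sanity check in Python):

    trials = 2 ** 100                       # about 1.27 * 10**30 settings
    seconds = trials * 1e-6                 # one microsecond per trial
    years = seconds / (60 * 60 * 24 * 365)
    print("%.1e trials, %.1e years" % (trials, years))  # ~1.3e30, ~4.0e16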

You can say that this is a toy problem with no practical application. You
can say that the wrong tools are being used to solve it. But you can't get
away with claiming it's easy.

> > If you meant "having no goal", then snowflake
> > formation would be a counterexample, because the water vapor doesn't know
> > what the exact form of the snowflake is going to be before it freezes.
>
> Yes, the water vapor knows how it's going to form a snowflake. The
> differences between snowflakes are entropy, not complexity. Besides, a
> snowflake is really an example of order, not complexity, in the first place.

You've lost me. Let's try a different angle. Earlier, you implied that you
would accept a computer simulation of evolution as evidence that you might be
at least partly wrong in some respect. Please describe just the part of the
program that does the selection of which simulated organisms will survive.
Then explain in what sense the selection "has no goal". In what sense is
the increased complexity (if it occurs) "indefinite"? And how does that
differ from the FPGA example?
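
To show the kind of answer I'm looking for, here is roughly what the
selection step looks like in a typical evolutionary simulation (a
generic sketch of my own, not any particular published program):

    def select_survivors(population, fitness, n_survivors):
        # Truncation selection: keep the highest-scoring individuals.
        # Note that this code knows nothing about "complexity", and
        # nothing about what the final organism will look like; it only
        # compares the fitness scores of the variants that happen to exist.
        ranked = sorted(population, key=fitness, reverse=True)
        return ranked[:n_survivors]

My questions above apply to a fragment like this just as well as to
the FPGA experiment.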

> ...Nature doesn't technically have goals. Plants have no goal of using light
> in the most efficient way possible. It's simply that the plants that use
> light most efficiently live and the ones that don't get choked out.
> (hypothetically)...

I agree.

> But, there's still a difference between a formal and definite goal (as
> in the newscientist link above) vs. a preference for a better tuned
> variation (optimization).

I don't understand what you're trying to say here.

> [...]
> > Fifthly, evolutionary computation is useful in a wide variety of practical
> > applications, especially non-linear multi-dimensional problems where there
> > are no known analytical solutions. See for example _Evolutionary
> > algorithms in engineering applications_, by Dasgupta and Michalewicz.
>
> Random variation and selection by a pre-determined goal should work well in
> solving some complex problems, ...

Thank you. Perhaps this renders moot much of the debate about the FPGA
circuit.

> but it's not going to create complexity.

Addressed above.

> But, non-random variation will work better.
> It's like guessing a number between 1 and 10. A random guess and selection
> by "higher" and "lower" will get you to the solution. But, equally dividing
> the portions (e.g. start with 5) will get you to the solution faster...

This is only because you know something about the shape of the function you're
trying to optimize. You are correct that when analytic methods are known,
they're usually better. But this is a side issue. The question is not,
"What's the most efficient way to create complexity?" It's, "Can complexity
be created by random variation plus non-random selection?"
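
For concreteness, here are the two strategies side by side (a toy
sketch; the guessing game stands in for optimization generally). Both
converge; binary search wins only because it exploits the known shape
of the feedback:

    import random

    def random_plus_selection(target, lo=1, hi=10):
        # Random variation, with the "higher"/"lower" feedback acting
        # as selection: failed guesses narrow the surviving interval.
        guesses = 0
        while True:
            guesses += 1
            g = random.randint(lo, hi)
            if g == target:
                return guesses
            elif g < target:
                lo = g + 1
            else:
                hi = g - 1

    def binary_search(target, lo=1, hi=10):
        # The analytic method: always split the remaining interval in half.
        guesses = 0
        while True:
            guesses += 1
            mid = (lo + hi) // 2
            if mid == target:
                return guesses
            elif mid < target:
                lo = mid + 1
            else:
                hi = mid - 1

    print(random_plus_selection(7), binary_search(7))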

> > ...As for complexity, it's an accidental and only occasional byproduct of
> > evolution. If a parasite can have more offspring by jettisoning its
> > ability to live outside the host, it will do so.
>
> That doesn't sound like an increase of complexity; "jettisoning" implies a
> loss of complexity.

Absolutely correct. That was my point. Evolution does not always increase
complexity. Parasites are an example of this. Later I gave a hypothetical
example of when evolution *would* increase complexity.

> Which, BTW, most evolutionists' examples of "good mutations" are nothing but
> losses of genetic information which have some positive effect in a given
> environment.

I doubt if "most" is true -- I think that in most cases we don't even know
which genes were mutated -- but I will grant that *some* examples involve
loss of complexity.

> Seedless oranges have helped some orange trees to survive by promoting
> a symbiotic relationship with humans. But, the loss of seeds is not an
> increase in complexity.

I completely agree.

[Thought experiment involving H. sapiens deleted.]

> ...Superior doesn't mean more complex. "3" is a better approximation of Pi
> than "7" is, but "3" is not a more complex point on the number line than "7".
> A mutation can fix a genetic typo, but it's not going to create complexity.
> It's just tuning and optimization.

Perhaps I did not explain my example clearly enough. Let me try again.

Consider a working organism that is well-adapted to its environment. It
has a certain number of genes that are all making proteins, and each protein
performs a function that helps the organism survive and reproduce.

Now take one of those genes and cripple it with a single-point base
substitution that causes a useless protein to be produced, one which has no
function, but only gets in the way.

The complexity of the organism has been decreased, right? (This is not a
rhetorical question. Please answer it.) The number of meaningfully
interacting parts has been decreased by one. You said yourself that a
seedless orange is less complex than a wild orange.

Often, the organism will continue to function and reproduce, though at a
lower efficiency. If we start with a population of such organisms, do you
agree that it's possible for mutation and natural selection to re-create
the original genome? Why would that not be an increase in complexity?
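
If it helps, here is the whole thought experiment as a toy simulation
(my own sketch: the "gene" is a short string, the working sequence is
made up, and similarity to the functional sequence crudely stands in
for how well the protein works):

    import random

    WORKING = "GATTACA"                # the functional gene (hypothetical)
    BASES = "ACGT"

    def fitness(gene):
        # Proxy only: in nature the measure would be how well the protein
        # functions (survival and reproduction), not sequence comparison.
        return sum(a == b for a, b in zip(gene, WORKING))

    def mutate(gene):
        i = random.randrange(len(gene))
        return gene[:i] + random.choice(BASES) + gene[i + 1:]

    gene = "GATTAGA"                   # crippled by one base substitution
    while fitness(gene) < len(WORKING):
        child = mutate(gene)
        if fitness(child) >= fitness(gene):   # selection keeps the winner
            gene = child
    print(gene)                        # eventually "GATTACA" again

No step in the loop requires foresight; mutation is random, and
selection only compares the variants that actually exist.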

Please note that this particular argument is not about whether all the
complexity of life could have evolved from a single simple ancestor. It's
about whether complexity can ever increase naturally.

> > I hope by now you can see that where we disagree is in the idea that God
> > created every species at a local maximum in its fitness landscape. Please
> > tell me that *something* I've said makes sense to you. It *is* at least
> > theoretically possible for mutation and selection to increase complexity.
>
> First, you're assuming that any biological structure can be obtained from any
> other biological structure through single steps -- each increasing fitness.

Almost right, except for the implication that it's uphill in both directions.
I'm saying that evolution can be true *if* there is a path of increasing
(or at least not significantly decreasing) fitness from the first organism
to each species alive today, with each step along the path being a single
mutation.

I wouldn't call this an *assumption*. I'm not (yet) trying to get you to
accept it as true; I'm just trying to get you to say that evolution could
be true *if* it were true. I'm trying to get you to stop using "nature
can't create indefinite complexity" as a *reason* why evolution *must* be
false.

> Secondly, you're assuming that the local maximum is at a more complex
> position.

No, I'm saying that increased fitness *sometimes* comes with increased
complexity.

> Both assumptions are wrong in many cases.
>
> I believe God created every creature at a local maximum (with time, there's
> been deterioration and fracturing/speciation). And, the only line that a
> creature can follow is between that local maximum and inviability, not
> between that creature and another kind of creature. And, it is painfully
> obvious that the forces toward inviability are stronger than the forces that
> move a creature to its local maximum. Thus, the rule of nature is
> extinctions and an increasing genetic load (bad mutations).

You forgot to answer the following question:

*If* individuals reproduce, and *if* the offspring is slightly different
from the parents, and *if* an individual's reproductive success is at least
partially a function of its genetic code -- there may be some randomness in
the function; an individual might get hit by a meteor due to no fault of its
own -- and *if* the first individual does not start out at a local maximum
in the fitness function, and *if* the mutational steps taken can include
at least every neighboring point on the genomic landscape, and *if* you
wait a sufficient amount of time, *then* the offspring will tend to move
toward a local maximum.

I think I've qualified it sufficiently. Do you agree that if all the above
conditions are true, then evolution toward a local maximum *must* occur?
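
Here is a bare-bones instance of that conditional (toy assumptions
throughout: genomes are integers, the fitness function is arbitrary but
fixed, and reproductive success is only *partly* determined by fitness):

    import math, random

    def fitness(x):
        return math.sin(x / 10.0) + math.sin(x / 3.0)   # any bumpy function

    population = [50] * 20             # start away from any local maximum
    for generation in range(500):
        # Offspring differ slightly from their parents.
        offspring = [x + random.choice([-1, 0, 1]) for x in population]
        # Success is a noisy, partial function of the genome.
        weights = [math.exp(2 * fitness(x)) for x in offspring]
        population = random.choices(offspring, weights=weights,
                                    k=len(population))
    print(sorted(set(population)))     # clustered near a local peak

Run it a few times: the population climbs whichever peak happens to be
nearest, which is all the conditional claims.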

> You're trying to substitute your reasoning for direct observation.

There's nothing necessarily wrong with that. It's sometimes easier to
make a general argument than it is to give a specific example. I can
easily prove that there exists a prime number greater than 10**100 --
since there are infinitely many primes, some prime must exceed any bound
you name -- but I'd be hard-pressed to supply an example.

You're making a very strong claim: not just that evolution is false, but that
complexity can never be increased ("indefinitely", whatever that means) by
random variation and non-random selection. I don't have much hope of
convincing you that evolution is true, but I do hope to convince you that
your strong claim is mistaken. Mathematical reasoning is an appropriate
tool for this.

I do intend, however, to provide some real-world biological observations
later.

> How about an empirical example of mutation and selection creating an
> indefinite increase in complexity?...

I can't give an example until I understand what you mean by "indefinite".

Cordially yours,
Rich Daniel rwdaniel@dnaco.net http://www.dnaco.net/~rwdaniel/