RE: Evolutionary computation (was: Where's the Evolution?)

Cummins (cummins@dialnet.net)
Tue, 6 Apr 1999 16:23:37 -0500

> [mailto:evolution-owner@udomo3.calvin.edu] On Behalf Of Rich Daniel

> > >
> > > See http://www.newscientist.com/ns/971115/features.html. You
> > > could not be more wrong.

> First of all, you're using two different meanings of
> "indefinite". When you say that nature cannot produce complexity that
> increases "indefinitely", you mean "without limit".

Actually, I don't mean "without limit." Anyway, I looked into that example
(linked above) a little closer and found that I couldn't be more right. The
circuit is always going to respond to frequency. All he did was tune the
circuit: he started with random tunings and used a goal to select the
random data that best approached that goal. Using a goal for selection is
always going to produce a definite limit (by definition) -- that's where
this experiment blatantly fails my challenge to show that nature can create
an indefinite increase in complexity. But he didn't get "evolution" to
create any degree of complexity at all in the first place. Tuning
(optimization) is not the creation of complexity by any definition I know
of. He could just as well have fed the IC a sequence of tunings from 0 to
whatever works best -- no selection, no randomness -- and achieved the same
thing. But he wanted the smoke and mirrors to impress people.
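
To make the point concrete, here's a minimal sketch in Python (my own
illustration -- the goal value, mutation step, and iteration count are made
up, not taken from the article). Fitness is closeness to a fixed goal, so
the best possible score is set in advance by the goal itself; the search
converges to that definite limit and can never go past it:

import random

GOAL = 440.0   # hypothetical target "tuning"; a made-up number

def fitness(x):
    # Higher is better; the maximum possible is 0, reached only at x == GOAL.
    return -abs(x - GOAL)

x = random.uniform(0.0, 1000.0)              # random starting tuning
for _ in range(100000):
    candidate = x + random.gauss(0.0, 1.0)   # random variation
    if fitness(candidate) > fitness(x):      # select whatever best approaches the goal
        x = candidate

print(x)   # ends up near 440.0; it can never do better than the goal it was given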

> If you meant "having no goal", then snowflake
> formation would be a counterexample, because the water vapor doesn't know
> what the exact form of the snowflake is going to be before it freezes.

Yes, the water vapor knows how it's going to form a snowflake. The
differences between snowflakes are entropy, not complexity. Besides, a
snowflake is really an example of order, not complexity, in the first place.

> Secondly, it is legitimate in a computer simulation of evolution to
> provide a specific goal. Nature also sets up specific problems for
> organisms to solve. For example, plants have the "goal" of creating
> chemical energy from light energy in the most efficient way possible.

Nature doesn't technically have goals. Plants have no goal of using light
in the most efficient way possible. It's simply that (hypothetically) the
plants that use light most efficiently live and the ones that don't get
choked out. I suppose you could call that a goal anyway, but there's still
a difference between a formal and definite goal (as in the New Scientist
link above) and a preference for a better-tuned variation (optimization).

> Thirdly, the problem did not allow the use of capacitors; it was
> specifically defined as using only 100 cells of a field-programmable
> gate array.

I was just pointing out that a human designer (using discrete electrical
devices) could have used less than 1/100 of the circuitry (not 10x less, as
the article claimed) and achieved a vastly superior solution. Anyway, such
things as propagation delays, parasitic capacitance, and hysteresis in
general filled the role of the capacitor in frequency discrimination.

> Fourthly, the focus of the investigation was not to design a circuit
> that could distinguish between words; we already know how to do that.
> The point was to explore the usefulness of an evolutionary algorithm in
> designing hardware circuits.

The "evolutionary algorithm" was just a complicated and inefficient method
of tuning the circuit that is poorly suited to the desired function in the
first place.

> Fifthly, evolutionary computation is useful in a wide variety of
> practical applications, especially non-linear multi-dimensional problems
> where there are no known analytical solutions. See for example
> _Evolutionary algorithms in engineering applications_, by Dasgupta and
> Michalewicz.

Random variation and selection against a predetermined goal should work
well for solving some complex problems, but it's not going to create
complexity. And non-random variation will work even better. It's like
guessing a number between 1 and 10: a random guess plus selection by
"higher" and "lower" will get you to the solution, but equally dividing the
remaining range (e.g., start with 5) will get you there faster. (Note that
we're still talking about tuning, not complexity.)
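
For instance, here's a quick sketch in Python (my own illustration, with
the range widened to 1-1000 so the difference shows up in the guess
counts). Both strategies use the "higher"/"lower" feedback; only the choice
of the next guess differs:

import random

def guesses_random(secret, lo=1, hi=1000):
    count = 0
    while True:
        count += 1
        g = random.randint(lo, hi)   # random variation within the feasible range
        if g == secret:
            return count
        if g < secret:
            lo = g + 1               # "higher" -- selection narrows the range
        else:
            hi = g - 1               # "lower"

def guesses_bisect(secret, lo=1, hi=1000):
    count = 0
    while True:
        count += 1
        g = (lo + hi) // 2           # non-random: always split the range evenly
        if g == secret:
            return count
        if g < secret:
            lo = g + 1
        else:
            hi = g - 1

trials = [random.randint(1, 1000) for _ in range(10000)]
print(sum(guesses_random(s) for s in trials) / len(trials))  # roughly 14 guesses
print(sum(guesses_bisect(s) for s in trials) / len(trials))  # roughly 9 guesses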

> Finally, let me use a simple example to try to convince you that the
> technique really does work:
>
> Consider a very straightforward optimization problem: You want to find
> the maximum value of an n-dimensional function. Starting with a random
> point, calculate the function's value at that point. Then "mutate" the
> point by adding a small vector, in effect choosing a second point that is
> near the first. Then evaluate the function at that point and compare the
> two. If the second value is greater, "select" it (i.e., use it as your
> starting point for the next "mutation"), otherwise keep the first point.

This is the same sort of thing as guessing the number between 1 and 10.
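
That procedure, sketched in Python (the function, step size, and iteration
count are placeholders of my own, not from your post), makes the point
plain: the climb goes straight to the nearest local maximum and stops cold
there.

import random

def f(point):
    # Hypothetical 2-D function to maximize; its single peak is at (3, -1).
    x, y = point
    return -(x - 3.0) ** 2 - (y + 1.0) ** 2

point = [random.uniform(-10, 10), random.uniform(-10, 10)]   # random start
for _ in range(100000):
    mutant = [c + random.gauss(0.0, 0.1) for c in point]     # add a small random vector
    if f(mutant) > f(point):                                 # "select" the better point
        point = mutant

print(point)   # climbs to the local maximum near (3, -1) and stays there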

> A little argument would have made me concede that fitness landscapes are
> not static. Conditions change. I would have conceded that
> "micro-evolution" could occur to *keep* species near local maxima, since
> the local maxima are moving. But I would have argued that the local
> maxima never move very far.

Any "need" for the species to change too much will result in extinction.

> As for complexity, it's an accidental and only occasional byproduct of
> evolution. If a parasite can have more offspring by jettisoning its
> ability to live outside the host, it will do so.

That doesn't sound like an increase in complexity; "jettisoning" implies a
loss of complexity. And, BTW, most evolutionists' examples of "good
mutations" are nothing but losses of genetic information that have some
positive effect in a given environment. Seedless oranges have helped some
orange trees to survive by promoting a symbiotic relationship with humans,
but the loss of seeds is not an increase in complexity.

> "But," you say, "that doesn't count, because we started with a normal Homo
> sapiens to begin with, and ended up at the same place!" It's a *thought*
> experiment. *If* we had started with the inferior H. sapiens, *then* the
> processes of random mutation and non-random natural selection *would* have
> created the superior version.

Superior doesn't mean more complex. "3" is a better approximation of pi
than "7" is, but "3" is not a more complex point on the number line than
"7". A mutation can fix a genetic typo, but it's not going to create
complexity. It's just tuning and optimization.

> I hope by now you can see that where we disagree is in the idea that God
> created every species at a local maximum in its fitness landscape. Please
> tell me that *something* I've said makes sense to you. It *is* at least
> theoretically possible for mutation and selection to increase complexity.

First, you're assuming that any biological structure can be reached from
any other biological structure through single steps, each increasing
fitness. Second, you're assuming that the local maximum is at a more
complex position. Both assumptions are wrong in many cases.

I believe God created every creature at a local maximum (with time, there's
been deterioration and fracturing/speciation). And the only line that a
creature can follow is between that local maximum and inviability, not
between that creature and another kind of creature. And it is painfully
obvious that the forces toward inviability are stronger than the forces
that move a creature to its local maximum. Thus, the rule of nature is
extinction and an increasing genetic load (bad mutations).

You're trying to substitute your reasoning for direct observation. How
about an empirical example of mutation and selection creating an indefinite
increase in complexity? If it really is possible, a computer program should
easily be able to demonstrate it. (A computer is just another lab
instrument.)