Pim,
Thanks for the references, which were fascinating to read. I've probably
assimilated more from the Toussaint paper on "neutrality" than from the
other one as yet.
I don't think the "neutrality" argument answers my main point (about the
design of the shape of the fitness space), fascinating though the concept
is. As I understand it from the biological example given in the paper,
the authors are saying that amino acids with more codons encoding for
them are more likely to produce neutral mutations (ones that don't change
the amino acid) than those which have fewer codons. Additionally, the
probability of a neutral mutation depends on which codon is currently
present. Thus, they show that different parts of the genome can evolve at
different rates from one another, and that the "search strategy" can
adapt over time. In the computer science example they give, they
introduce extra material into the genome which doesn't affect the
genotype-to-phenotype mapping, but which instead changes the mutation
rate itself (by adapting the parameters of a covariance matrix of random
vectors to be added to the phenotype parameters in mutation).
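To make that second mechanism concrete, here is a minimal sketch of the
kind of self-adaptation involved - my own Python toy, not code from the
paper, simplified to one independent step size per parameter rather than
a full covariance matrix; the sphere fitness function and all the
constants are just illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective (sphere function); lower is better.  Purely illustrative.
    return float(np.sum(x ** 2))

def mutate(x, sigma, tau=0.3):
    # The step sizes `sigma` are the "extra material" in the genome: they
    # never enter the fitness function, yet they control how the phenotype
    # parameters `x` are perturbed.  They are themselves mutated
    # (log-normally), so the search strategy can adapt under selection.
    new_sigma = sigma * np.exp(tau * rng.standard_normal(sigma.shape))
    new_x = x + new_sigma * rng.standard_normal(x.shape)
    return new_x, new_sigma

# A minimal (1, lambda) evolution strategy: each generation the parent
# produces several children and the best child becomes the new parent.
x, sigma = rng.standard_normal(5), np.ones(5)
for generation in range(300):
    children = [mutate(x, sigma) for _ in range(10)]
    x, sigma = min(children, key=lambda c: fitness(c[0]))

print("final fitness:", fitness(x))
print("adapted step sizes:", sigma)

The step sizes are never "seen" by the fitness function, yet selection
still tunes them, because parents carrying good step sizes tend to
produce fitter children.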
I agree that these two examples would "help" the evolution along by having a
more adaptive and powerful search strategy, but they don't actually change
the shape of the fitness surface being explored, merely the rate at which it
can be traversed. In fact, this adaptation of the learning rate has an
analogue in neural network training algorithms. The simplest algorithm
for the MLP network I mentioned earlier is to compute the gradient of the
cost function with respect to each of the network parameters (aka
synaptic connection strengths, or "weights"). Once the gradient vector is
known, it is multiplied by a small number, called the "learning rate",
and the result subtracted from the current weight vector. There is a
single learning rate shared by all the weights. But a modified version of
the algorithm (known as "delta-bar-delta") uses a simple heuristic that
allows the learning rate for each individual weight to vary adaptively
and independently of the others; a rough sketch is given below. This
produces a marked speed-up of the learning of the network. But it doesn't
get you off the hook in terms of the pre-processing of the data. All the
"intelligent design" (for want of a better word) still has to be done:
scaling, transformation of input data, selection of appropriate inputs
etc. This is independent of the training algorithm chosen.
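Here is the rough sketch promised above - a toy Python illustration of
the delta-bar-delta idea from memory, applied to a made-up quadratic cost
rather than a real MLP; the constants kappa, phi and theta are just
assumed values, not canonical ones:

import numpy as np

def grad(w):
    # Gradient of a toy quadratic cost with very different curvatures per
    # weight, standing in for the real backpropagated gradient of an MLP.
    curvatures = np.array([100.0, 1.0, 0.01])
    return curvatures * w

w = np.array([1.0, 1.0, 1.0])
lr = np.full(3, 0.01)          # one learning rate per weight
avg_grad = np.zeros(3)         # "delta-bar": smoothed past gradient
kappa, phi, theta = 0.001, 0.1, 0.7

for step in range(2000):
    g = grad(w)
    # Delta-bar-delta heuristic: if the current gradient agrees in sign
    # with the running average of past gradients, grow that weight's
    # learning rate additively; if it disagrees (we overshot), shrink it
    # multiplicatively.
    agree = g * avg_grad
    lr = np.where(agree > 0, lr + kappa, lr)
    lr = np.where(agree < 0, lr * (1.0 - phi), lr)
    avg_grad = (1.0 - theta) * g + theta * avg_grad
    w = w - lr * g             # the basic gradient-descent update

print("weights:", w)           # each coordinate heads towards its minimum at 0
print("per-weight learning rates:", lr)

Weights whose gradients keep pointing the same way get their learning
rates nudged up; weights that overshoot and oscillate get theirs cut
back - which is the whole heuristic.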
Both of the papers refer, correctly, to the "representation problem",
aka the genotype-to-phenotype mapping, and this is what I would term the
"design" problem that we have to tackle. I don't think the neutrality
paper addresses this design problem - it merely shows how the search
algorithm can be enhanced.
The other paper, which I didn't read as carefully, shows more promise,
in talking about copying genes and making new ones. I can see that this
clearly does change the fitness surface. If you have a gene that could
evolve into another useful function, but at the cost of losing the
original function, then by making a copy of it, one of the copies is free
to evolve without loss of function. In this way, what was a local minimum
has been turned into a saddle point. This is achieved because the
dimensionality of the phase space being searched has been increased.
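A hypothetical toy (entirely my own construction, not taken from the
paper) shows the effect: a single gene has to serve two functions whose
optima conflict, so a greedy hill-climber gets stuck doing only one of
them well; duplicate the gene and the same hill-climber can let one copy
specialise for the second function without losing the first.

import numpy as np

rng = np.random.default_rng(1)

def task_a(g):
    # Contribution of a gene value to function A (peaks at 0).
    return np.exp(-(g - 0.0) ** 2)

def task_b(g):
    # Contribution of a gene value to function B (peaks at 3).
    return np.exp(-(g - 3.0) ** 2)

def fitness(genome):
    # The organism only needs *some* gene copy to do each job well, so take
    # the best copy for each task.  With one gene the two tasks conflict.
    return max(task_a(g) for g in genome) + max(task_b(g) for g in genome)

def hill_climb(genome, steps=4000, step_size=0.05):
    genome = list(genome)
    for _ in range(steps):
        trial = list(genome)
        i = rng.integers(len(genome))         # mutate one gene at a time
        trial[i] += step_size * rng.standard_normal()
        if fitness(trial) > fitness(genome):  # greedy: accept only improvements
            genome = trial
    return genome

single = hill_climb([0.0])           # one gene: stuck serving one task only
duplicated = hill_climb(single * 2)  # duplicate that gene, then climb again

print("one gene:   fitness = %.3f, genome = %s"
      % (fitness(single), [round(float(g), 2) for g in single]))
print("duplicated: fitness = %.3f, genome = %s"
      % (fitness(duplicated), [round(float(g), 2) for g in duplicated]))

With one gene the climber ends with a fitness of about 1 (one task done
well); after duplication it reaches about 2, because the extra dimension
opens up a monotonically improving path that simply didn't exist before.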
paper about "modularity" (which is akin to what I said about decoupling the
variables), and also the "building block hypothesis", which which I am well
familiar. I'm afraid I don't have a reference on this, but I recall my PhD
supervisor (who started out as an enthusiastic advocate of GA's and who
later became highly sceptical of them), told me of a conference paper given
apparently by a big name in GA's (wasn't Holland, but someone of that ilk),
which showed empirically that the building block hypothesis had been shown
not to work in an empirical simulation. The vague details I recall is that
they ran two experiments with two different Genotype->Phenotype mappings,
one of which was expected to fulfill the building block hypothesis, by
putting related groups close together (so as to minimize the chance of
disruption by crossovers), and one where the layout was much more chaotic
and not well suited to building blocks staying together. As far as I recall,
there was no significant difference in the performance of the two
experiments. Regrettably I don't have the reference.
But I do think that the second paper is worth thinking about.
I'd like to add that I'm not one of those people who say "The bacterial
flagellum is irreducibly complex and so it couldn't have evolved and
therefore God must have gone POOF! and made it at the right time". To do
so would indeed be "arguing from ignorance", which is what you seem to
want to accuse me of. Furthermore, as a Christian, I find that position
runs against Gen 1:31, which states that what God had created was "very
good" - it doesn't seem too good if occasionally everything gets stuck
and God has to do a surreptitious miracle to fix things. In the Bible,
miracles seem to be more to do with God's dealings with his people and
his revelation to us, and not with fixing a less than perfect creation.
So I think the flagellum did evolve, but the space in which it evolved
was designed so that it could evolve. As Glenn points out in his paper,
the universe seems incredibly fine-tuned cosmologically, so that even a
universe with stars, obeying the second law of thermodynamics, can exist.
At a more fine-grained level, I would say that the fact that the 64
possible codons are unevenly distributed among the amino acids, giving
the possibility of neutrality and assisting the evolutionary search,
might also be viewed as a form of fine-tuning - via the laws of
chemistry - and a place where one might be able to posit design (though
not to prove it).
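As a small illustration of that uneven distribution (using the standard
genetic code, but only the handful of codons needed for the comparison,
so worth double-checking against a full table), leucine has six codons
while methionine has just one, and the difference shows up directly in
how many single-letter mutations are neutral:

# Partial table of the standard genetic code: just the codons reachable
# by a single-base change from CUU (leucine) and AUG (methionine).
CODE = {
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu", "UUG": "Leu",
    "AUU": "Ile", "AUC": "Ile", "AUA": "Ile",
    "GUU": "Val", "GUG": "Val",
    "UUU": "Phe", "CAU": "His", "CCU": "Pro", "CGU": "Arg", "AGG": "Arg",
    "AUG": "Met", "AAG": "Lys", "ACG": "Thr",
}

BASES = "ACGU"

def count_synonymous(codon):
    # Count how many of the 9 single-base substitutions leave the amino
    # acid unchanged (i.e. are neutral at the protein level).
    neutral = total = 0
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            total += 1
            if CODE[mutant] == CODE[codon]:
                neutral += 1
    return neutral, total

for codon in ("CUU", "AUG"):
    n, t = count_synonymous(codon)
    print(f"{codon} ({CODE[codon]}): {n}/{t} single-base mutations are neutral")

This prints 3/9 neutral for the leucine codon CUU and 0/9 for
methionine's AUG - exactly the kind of codon-dependent neutrality the
Toussaint paper describes.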
Would you not say that my concept of "design" poses no threat to doing
science? Science is all about discovering how God's universe works.
Iain.
On 5/27/05, Pim van Meurs <pimvanmeurs@yahoo.com> wrote:
>
>
> --- Iain Strachan <igd.strachan@gmail.com> wrote:
> > On 5/27/05, Pim van Meurs <pimvanmeurs@yahoo.com>
> > wrote:
> > >
> > > Why? I fail to see how us using biologically
> > > inspired algorithms shows that intelligent design
> > > was needed.
>
>
> > Because every example of us using biologically
> > inspired algorithms that I've ever come across
> > requires us to do the intelligent pre-programming - to
> > stack the deck at the start of the
> > learning/optimization process in order to make it
> > work.
>
> Aha, that seems to be the ID fallacy of argument from
> ignorance. Does this mean that just because we design
> and optimize algorithms that therefore in biology the
> same has to apply?
>
>
> > I've not seen any example in the machine learning
> > literature that suggests that the intelligent
> > pre-programming can be done "by natural
> > selection" as you seem to claim. That's not to say
> > that biologically inspired algorithms are a useless
> > tool - the applications of Neural nets are
> > many, and I have seen quite a few useful
> > applications of GA's that it's hard to see how you
> > could do any other way (I recently saw a very
> > impressive application where subjects could assemble
> > a "photofit" picture of an assailant using a
> > technique that seemed very similar to Dawkins's
> > "Biomorphs" for example - but the design of the
> > "chromosomes" in this case was very clever indeed,
> > using eigenvalue/eigenvector decompositions of facial
> > shapes and textures - THAT was the intelligent part of
> > the work, for which the author deserves full credit -
> > not the evolutionary part).
>
> Yes, the problem is that any of our applications of
> genetic algorithms can be argued to require
> intelligent design. But to extend this to nature,
> where all that is needed is variation and selection,
> it seems harder to argue that an actual intelligent
> designer is needed.
>
> > But the fact remains that these are not magic tools
> > that will solve your problems without you putting in
> > the design effort simply because they are
> > biologically inspired.
>
> True, Evolution is no guarantee for solutions.
>
> > > For instance, science has been studying evolvability
> > > and shown how evolvability can be under control of
> > > selection. In fact, Toussaint and others have shown
> > > how neutrality becomes an essential component in
> > > evolvability and that neutrality becomes subject
> > > to selection.
> > > So if natural selection can influence evolvability,
> > > why is intelligent design **required**?
>
> > We seem to come from different disciplines. I work
> > on optimization and learning systems - you seem more
> > familiar with the biology. I think it would
> > be helpful if you could explain what you mean by
> > natural selection influencing evolvability. Do you
> > mean that it shapes the fitness surface?
>
> You and Glenn seemed to suggest that pre-programming
> was needed. What I am pointing out is that
> evolvability, that is the ability to evolve, can be
> under natural selection.
>
> Wagner and Altenberg
> http://dynamics.org/~altenber/PAPERS/CAEE/
>
> Toussaint "Neutrality: A Necessity for
> Self-Adaptation"
>
> http://homepages.inf.ed.ac.uk/mtoussai/publications/toussaint-igel-02-cec.pdf
>
> > Could Natural Selection somehow magically produce a
> > coding of a human face by doing Eigenvector
> > decompositions of a large database of faces?
>
> What relevance does this have to the discussion?
>
>
>
--
-----------
There are 3 types of people in the world. Those who can count and those who can't.
-----------