Re: [asa] Parameterized Evolution

From: Jim Armstrong <jarmstro@qwest.net>
Date: Sun Aug 24 2008 - 21:48:53 EDT
Forgive my jumping in at this late date in the discussion, but this cast a light on one of the issues (or questions) that seems not to be discussed very much in this context. I am unexpectedly inclined toward Dembski's take on this specific comment, without knowing anything about the larger context of the basis for his comment, and though probably not in sync with his rationale. It seems to me that much of this modeling type of research activity - whether mental or algorithmic - explores ways in which an evolutionary process may arrive at a particular outcome (goal). That is understandable as an attempt to respond to the particular examples offered by the ID folks. It also provides a means for measuring algorithmic success, as Bill mentioned.

The question I have is just how well those results will really model the evolutionary process. The actual evolutionary process does not have a form or function objective, but more of an accommodation objective. Its way of working would seem not to be a built-in trajectory toward a particular target form or function (though that might arguably be the case in a front-loaded creational model). Instead, its modus is to improve at the matters of being (survival) and propagating in that entity's particular ambience. We are of course particularly interested, after the fact, in the trajectory that ends with mankind. But that end result isn't a given for a generic evolutionary process (witness the astonishing diversity in today's snapshot of the evolutionary tree in our own particular Terran environment).

So my question is, is this difference in evolutionary "objective" in the minds of experimenters when they are modeling the somewhat different problem of a path to a particular configuration or functional goal? Is it a difference that would bring into question the validity, or at least the degree of validity of insights obtained from a goal-structured experiment? Or perhaps there is no real way at present to model the more open-ended process, so we settle for the goal-oriented experiment.

Just wondering.

JimA  [Friend of ASA]

William Hamilton wrote:
OK. Thanks for clearing up the points I asked about. I was thinking in terms of computer models, and you were thinking in real-world terms. In the real world there is no static fitness function. The fitness function may stagnate for a long time, but then the environment changes.

I have no confidence whatever in the ID crowd's views on genetic algorithms.  I believe it was Bill Dembski who criticized genetic algorithms because the fitness function was prespecified. He thought that was cheating. I call it setting a goal -- without which you can't expect an algorithm to converge to anything meaningful. And yes, I agree that gradient methods and Newton's method are very susceptible to local minima. GA's avoid that problem. Simulated annealing is said to also, but I haven't used SA, so I can't testify from personal experience.
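Bill's point that a prespecified fitness function is just "setting a goal" can be seen in a weasel-style toy program (my own sketch, in Python; the target string, population size, and mutation rate are arbitrary illustrative choices, not anyone's actual experiment). The fitness function simply counts characters matching a fixed target, and mutation plus selection converges on it:

```python
import random

random.seed(0)

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # The prespecified "goal": number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.04):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    # Produce 100 mutated offspring; keep the fittest of parent + offspring
    # (elitism), so fitness never decreases.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
    generation += 1

print(generation, parent)
```

Without the prespecified fitness function there is nothing for selection to act on, and the search cannot be expected to converge to anything - which is exactly the sense in which the goal is doing the work.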

On Sun, Aug 24, 2008 at 7:13 PM, Rich Blinne <rich.blinne@gmail.com> wrote:

On Aug 24, 2008, at 3:39 PM, William Hamilton wrote:


I find much that I agree with in this thread -- and some things that are puzzling.  As Iain pointed out, when the input variables are coupled, changing a number of inputs (not necessarily all) simultaneously makes sense. If you can compute the gradient, then you can use steepest descent or the conjugate gradient method or Newton's method to determine how much to adjust each input variable.

The reason GAs and simulated annealing are used rather than Newton's method is to avoid local minima, which your discussion below alludes to. If you change too many variables simultaneously, you get too greedy. Maybe a clarifying note here will be helpful. When I am talking about varying multiple variables, I am talking about varying multiple variables that all lower the cost function. (See my next paragraph.) If you dive too quickly into a local minimum you might get "stuck".
 
I find this puzzling. I think the criterion for using a GA is the form of the fitness function. If it varies smoothly and (especially) if its gradient can be computed or approximated without undue computational effort, then steepest descent, conjugate gradient, or possibly Newton's method are appropriate. If it has a complex and/or nondifferentiable structure, then GA's are appropriate. For example, if you were trying to maximize (1-x^2)*sin(1/x) on [-1,1], a gradient algorithm wouldn't be appropriate because the function is not differentiable at 0. Yet a GA would (I haven't tried it, but I'm pretty sure this is so) find a point near zero. (The function isn't even defined at zero, but the peaks increase in value as the search point tends toward zero.)
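Bill's conjecture is easy to check numerically. A minimal real-valued GA sketch (my own code; the population size, truncation selection scheme, and Gaussian mutation width are illustrative assumptions) climbs the peaks of (1-x^2)*sin(1/x), which grow toward 1 as the search point approaches zero:

```python
import math
import random

random.seed(1)

def f(x):
    # Undefined at 0; the peaks of sin(1/x) approach 1 as x -> 0.
    return (1 - x * x) * math.sin(1 / x) if x != 0 else float("-inf")

POP, GENS = 300, 200

pop = [random.uniform(-1, 1) for _ in range(POP)]
for _ in range(GENS):
    # Truncation selection: keep the fitter half unchanged...
    pop.sort(key=f, reverse=True)
    survivors = pop[: POP // 2]
    # ...and refill with mutated copies, clipped to the domain [-1, 1].
    children = [min(1.0, max(-1.0, x + random.gauss(0, 0.01))) for x in survivors]
    pop = survivors + children

best = max(pop, key=f)
print(best, f(best))
```

No gradient is ever computed, so the nondifferentiability (indeed, the undefined value) at 0 is no obstacle; selection on raw fitness values is all the search needs.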
In a multivariable optimization problem a GA will vary many, perhaps all, of the input variables, in possibly large steps. Chaos doesn't result because the unsuccessful variations are eliminated in the next generation.

As you stated, unsuccessful states get pruned by a GA. The effect of what was proposed by UcD would be a set of large net changes, presumably after pruning, and that would be chaotic. What we are illustrating is that a large number of simultaneous deliberate changes is inferior, as a design strategy, to a large number of random changes subject to selection. Selection keeps that rate under control. When I am designing in a deliberate fashion rather than running a brute-force computer model, I had better not be turning all the knobs simultaneously. If something goes wrong I won't have a clue why.



Agreed. Genetic algorithms work well in computers and in nature because they are brute-force parallel algorithms. Use a large enough population and have available a rich enough set of variations, and eventually you will get an improvement in fitness. And as Iain pointed out, such a population can track a changing fitness function. (I don't understand Rich's comment about how a static fitness function can lead to extinction. Shouldn't it just lead to a fairly stable population that wanders around the peak of the fitness function?)


If your cost function is static, then the diversity it produces is small. As you rightly say, the population just wanders around the peak. If such an environment gets "shocked", it cannot respond well to the shock, and all selection does is select everything away, and thus mass extinction. Take a modern example. If you have a stable climate and life that's adapted to it, there is very little to select from when you shock it with a large anthropogenic climate change. This is why mass extinction, rather than a burst of biodiversity, is being predicted given AGW.
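Rich's point about a static cost function eroding diversity shows up even in a toy simulation (my own sketch; the single-peak fitness function and all parameters are illustrative assumptions, not a model of any real ecosystem). Under an unchanging fitness peak, selection collapses the population's spread, leaving little variation to draw on if the environment later shifts:

```python
import random
import statistics

random.seed(2)

def fitness(x, optimum=0.0):
    # A single static fitness peak; higher is fitter.
    return -(x - optimum) ** 2

# Start with a widely spread population.
pop = [random.uniform(-5, 5) for _ in range(200)]
initial_spread = statistics.stdev(pop)

for _ in range(100):
    # Keep the fitter half, refill with slightly mutated copies.
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:100]
    children = [x + random.gauss(0, 0.05) for x in survivors]
    pop = survivors + children

final_spread = statistics.stdev(pop)
print(initial_spread, final_spread)
```

The final spread is a small fraction of the initial spread: the population has specialized tightly to the static peak, which is exactly the condition under which a sudden shift of the optimum leaves selection with nothing useful to choose from.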

Rich Blinne
Member ASA



--
William E (Bill) Hamilton Jr., Ph.D.
Member American Scientific Affiliation
Rochester, MI/Austin, TX
248 821 8156

Received on Sun Aug 24 21:48:59 2008

This archive was generated by hypermail 2.1.8 : Sun Aug 24 2008 - 21:48:59 EDT