RE: [asa] Parameterized Evolution

From: Alexanian, Moorad <alexanian@uncw.edu>
Date: Mon Aug 25 2008 - 09:59:52 EDT

Are we talking here about finding extrema, minimum or maximum, in, say,
potential fields so that we can find, for instance, molecular bindings?
If so, I am curious about how we establish the quantitative properties of
the potential. Don't we need some sort of theory?

Moorad

 

From: asa-owner@lists.calvin.edu [mailto:asa-owner@lists.calvin.edu] On
Behalf Of William Hamilton
Sent: Monday, August 25, 2008 9:41 AM
To: Iain Strachan
Cc: Rich Blinne; Dave Wallace; ASA
Subject: Re: [asa] Parameterized Evolution

 

I agree. The GA enthusiast's answer to the curse of dimensionality is
that if you have a large enough population and enough computing power to
produce a large number of generations in reasonable time, you stand a
chance of finding the global optimum. But the question of whether you
have truly found the global optimum is a difficult one. The only way I
know is to try additional runs with different initial populations --
essentially what you do when using steepest descent. The higher the
dimensionality, the more additional runs you ought to try, and you still
have no guarantee.

A report was written at GM R&D several years ago with a title something
like "How we tied up every Sun workstation in GMR trying to solve
problem XXX via genetic algorithms". I should have gotten that report,
because I'm sure it would have been interesting reading. But I do know
that after that report there was significantly less enthusiasm for GA's.

The one situation where I got significant experience with a GA was when
we were trying to design magnetorheological shocks. We had considerable
freedom in the design: we could specify the base fluid, the
characteristics of the iron particles, the size of the orifices, the
characteristics of the coil, the drive current, and several other
parameters. My boss had done a design using basic principles, but we
weren't sure it was the best possible design. I used some GA code I
downloaded from MI State and wrote an objective function -- and got my
boss's solution, with a wide variety of initial populations. That gave
us some confidence that my boss's solution was a good one, but it
certainly didn't prove it was the global optimum. It turned out to be a
good solution, because you can buy it in several GM cars (Cadillacs and
Corvettes at the time I retired).

On another occasion Chuck Stevenson (ASA member but not a list
participant) and I tried to simulate a Galapagos-finch-like situation
using a GA. Unfortunately, we were unable to obtain good survival data
classified by finch body measurements and year/season. (I wrote to
Rosemary Grant asking for some, but she wasn't ready to turn loose of
it.) We tried using the data available in several of the Grants'
publications, but in this case the problem we specified was too easy:
the GA converged in the first couple of generations.

Any way you cut it, optimization is a tough problem (except, of course,
for the problems they give you to try in an optimization course).
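The multi-start strategy described above can be sketched in a few lines of Python (an illustrative toy, not the GM code; the test function and all parameters are made up for the example): restart a greedy local search from random initial points and keep the best result. More restarts raise the odds of landing in the global basin, but never guarantee it.

```python
import math
import random

def f(x):
    # 1-D Rastrigin: many local minima at roughly integer x; global minimum at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def hill_climb(x, step=0.01, iters=2000):
    """Greedy local search: accept a random move only if it lowers f."""
    fx = f(x)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def multi_start(n_restarts=20, lo=-5.0, hi=5.0):
    """Restart the local search from random initial points; keep the best result."""
    results = [hill_climb(random.uniform(lo, hi)) for _ in range(n_restarts)]
    return min(results, key=lambda r: r[1])
```

With a single restart the search usually stalls in whichever local minimum is nearest the start; with twenty it usually (though not provably) reaches the global minimum at 0.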

On Sun, Aug 24, 2008 at 9:48 PM, Iain Strachan <igd.strachan@gmail.com>
wrote:

I am actually quite sceptical about the ability of genetic algorithms to
avoid local minima. They often exhibit a phenomenon called "premature
convergence", where the entire population is clustered around a local
minimum. Although the GA has the ability to make large jumps via
crossover, in a high-dimensional space it is extremely unlikely that a
big jump will increase the fitness. This is due to a well-known problem
called "the curse of dimensionality", i.e. the exponential increase in
the volume of the search space with dimension.
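Iain's point about big jumps can be made quantitative with a small Monte Carlo experiment (an illustrative sketch, not from any post in this thread): starting at unit distance from the optimum of f(x) = ||x||^2, estimate the probability that a fixed-length jump in a uniformly random direction lands closer to the optimum. That probability collapses as the dimension grows.

```python
import math
import random

def p_jump_improves(dim, step=0.5, trials=20000):
    """Monte Carlo estimate of the chance that a fixed-length jump in a
    uniformly random direction, taken from a point at distance 1 from the
    optimum of f(x) = ||x||^2, lands closer to the optimum."""
    hits = 0
    for _ in range(trials):
        # A random direction is a normalized Gaussian vector; by symmetry we
        # only need its component along the axis through the current point.
        g = [random.gauss(0.0, 1.0) for _ in range(dim)]
        cos_theta = g[0] / math.sqrt(sum(v * v for v in g))
        # New squared distance is 1 + step^2 + 2*step*cos_theta, which beats
        # the old distance exactly when cos_theta < -step/2.
        if cos_theta < -step / 2.0:
            hits += 1
    return hits / trials
```

In 2 dimensions roughly 40% of such jumps help; by 100 dimensions almost none do, which is exactly the curse-of-dimensionality problem with large jumps.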

 

Iain

Sent from my iPod

On 25 Aug 2008, at 01:55, "William Hamilton"
<willeugenehamilton@gmail.com> wrote:

        OK. Thanks for clearing up the points I asked about. I was
thinking in terms of computer models, and you were thinking in real
world terms. In the real world there is no static fitness function. It
may stagnate for a long time, but then the environment changes.
        
        I have no confidence whatever in the ID crowd's views on genetic
algorithms. I believe it was Bill Dembski who criticized genetic
algorithms because the fitness function was prespecified. He thought
that was cheating. I call it setting a goal -- without which you can't
expect an algorithm to converge to anything meaningful. And yes, I agree
that gradient methods and Newton's method are very susceptible to local
minima. GA's avoid that problem. Simulated annealing is said to avoid it
as well, but I haven't used SA, so I can't testify from personal
experience.
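The sensitivity of gradient methods to the starting point is easy to demonstrate. Below is a small illustrative Python sketch (the function and step size are made up for the example): plain gradient descent on a tilted double well lands in whichever minimum's basin contains the start, and never notices the better minimum across the barrier.

```python
def f(x):
    # Tilted double well: local minimum near x ~ 0.96, global minimum near x ~ -1.04.
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    # Analytic derivative of f.
    return 4.0 * x * (x * x - 1.0) + 0.3

def steepest_descent(x, lr=0.01, iters=2000):
    """Plain gradient descent; deterministic, so the starting basin decides
    which minimum it finds."""
    for _ in range(iters):
        x -= lr * grad(x)
    return x
```

Starting at x = 1.5 it settles into the shallow right-hand minimum; starting at x = -1.5 it finds the deeper left-hand one.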

        On Sun, Aug 24, 2008 at 7:13 PM, Rich Blinne
<rich.blinne@gmail.com> wrote:

         

        On Aug 24, 2008, at 3:39 PM, William Hamilton wrote:


         

        I find much that I agree with in this thread -- and some things
that are puzzling. As Iain pointed out, when the input variables are
coupled, changing a number of inputs (not necessarily all)
simultaneously makes sense. If you can compute the gradient, then you
can use steepest descent or the conjugate gradient method or Newton's
method to determine how much to adjust each input variable.

         

        The reason GA's and simulated annealing are used rather than
Newton's method is to avoid local minima, which your discussion below
alludes to. If you change too many variables simultaneously, you get too
greedy. Maybe a clarifying note will be helpful here: when I talk about
varying multiple variables, I mean varying multiple variables that all
lower the cost function (see my next paragraph). If you dive too quickly
toward a local minimum you might get "stuck".

         

        I find this puzzling. I think the criterion for using a GA is
the form of the fitness function. If it varies smoothly and (especially)
if its gradient can be computed or approximated without undue
computational effort, then steepest descent, conjugate gradient, or
possibly Newton's method are appropriate. If it has a complex and/or
nondifferentiable structure, then GA's are appropriate. For example, if
you were trying to maximize (1-x^2)*sin(1/x) on [-1,1], a gradient
algorithm wouldn't be appropriate because the function is not
differentiable at 0. Yet a GA would (I haven't tried it, but I'm pretty
sure this is so) find a point near zero. (The function isn't even
defined at zero, but the peaks increase in value as the search point
tends toward zero.)
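For what it's worth, the claim is easy to check with a toy GA. The sketch below is illustrative (made-up parameters; truncation selection plus Gaussian mutation, no crossover): it maximizes (1-x^2)*sin(1/x) on [-1,1], scoring the undefined point x = 0 as very unfit.

```python
import math
import random

def fitness(x):
    """f(x) = (1 - x^2) * sin(1/x); undefined at x = 0, so score that point as unfit."""
    if x == 0.0:
        return -2.0
    return (1.0 - x * x) * math.sin(1.0 / x)

def run_ga(pop_size=60, generations=200, mut_sd=0.05, elite=10):
    """Minimal real-valued GA: truncation selection plus Gaussian mutation."""
    pop = [random.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # the fittest survive unchanged
        children = [max(-1.0, min(1.0, random.choice(parents) + random.gauss(0.0, mut_sd)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)
```

In short runs it reliably finds peaks well away from the endpoints; whether it creeps all the way in toward the ever-sharper peaks near zero depends on the mutation scale, which is consistent with the premature-convergence caveat above.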
        In a multivariable optimization problem a GA will vary many,
perhaps all, of the input variables, in possibly large steps. Chaos
doesn't result because the unsuccessful variations are eliminated in the
next generation.

         

        As you stated, unsuccessful states get pruned by a GA. The
effect of what was proposed by UcD would be a set of large net changes,
presumably after pruning, and that would be chaotic. What we are
illustrating is that a large number of simultaneous deliberate changes
is inferior, as a design strategy, to a large number of random changes
subject to selection. Selection keeps the rate of change under control.
When I am designing in a deliberate fashion, rather than running a
brute-force computer model, I had better not be turning all the knobs
simultaneously; if something goes wrong, I won't have a clue why.


         

        Agreed. Genetic algorithms work well in computers and in nature
because they are brute-force parallel algorithms. Use a large enough
population and have available a rich enough set of variations, and
eventually you will get an improvement in fitness. And as Iain pointed
out, such a population can track a changing fitness function. (I don't
understand Rich's comment about how a static fitness function can lead
to extinction. Shouldn't it just lead to a fairly stable population that
wanders around the peak of the fitness function?)

         

        If your cost function is static, then the diversity it produces
is small. As you rightly say, the population just wanders around the
peak. If such an environment gets "shocked", it cannot respond well to
the shock, and all selection does is select everything away -- mass
extinction. Take a modern example: if you have a stable climate and life
that's adapted to it, there is very little to select from when you shock
it with a large anthropogenic climate change. This is why mass
extinction, rather than a burst of biodiversity, is being predicted
given AGW.

         

        Rich Blinne

        Member ASA

        --
        William E (Bill) Hamilton Jr., Ph.D.
        Member American Scientific Affiliation
        Rochester, MI/Austin, TX
        248 821 8156

-- 
William E (Bill) Hamilton Jr., Ph.D.
Member American Scientific Affiliation
Rochester, MI/Austin, TX
248 821 8156
Received on Mon Aug 25 10:00:11 2008

This archive was generated by hypermail 2.1.8 : Mon Aug 25 2008 - 10:00:11 EDT