On Sat, Aug 23, 2008 at 2:37 PM, Rich Blinne <rich.blinne@gmail.com> wrote:
>
> On Aug 23, 2008, at 12:42 PM, Iain Strachan wrote:
>
>
>
> On Sat, Aug 23, 2008 at 5:18 PM, Rich Blinne <rich.blinne@gmail.com> wrote:
>
>>
>> On Aug 23, 2008, at 8:29 AM, Dave Wallace wrote:
>>
>>
>>> http://www.uncommondescent.com/biology/thoughts-on-parameterized-vs-open-ended-evolution-and-the-production-of-variability/
>>>
>>> I found this post on UcD somewhat interesting; however, I have only a
>>> vague idea of what Parameterized Evolution is. Can anyone point me to a
>>> simple definition? As best I can tell, it involves a
>>> predetermination/limitation of the biological evolutionary search space.
>>> Dave W (ASA member)
>>>
>>
>>> The reason why engineers are more prone to recognize this is because
>>> engineers have to develop systems repeatedly, and know how much trouble it
>>> is to get parts to play well together. Adjusting the system requires
>>> adjusting multiple parts simultaneously, which can't be accomplished without
>>> a guiding information system (which, in ID circles, is termed front-loaded
>>> evolution - which requires the action of an intelligent agent at the
>>> beginning) or the creativity and intervention of an intelligent agent at
>>> each step.
>>>
>>
>> This is such utter B.S. Speaking as an engineer, I can say they don't have
>> a clue how engineering actually works. If you change everything
>> simultaneously you get chaos. Rather, you make small revisions to working
>> designs.
>
I find much that I agree with in this thread -- and some things that are
puzzling. As Iain pointed out, when the input variables are coupled,
changing a number of inputs (not necessarily all) simultaneously makes
sense. If you can compute the gradient, then you can use steepest descent or
the conjugate gradient method or Newton's method to determine how much to
adjust each input variable.
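To make the simultaneous-update point concrete, here is a minimal steepest-descent sketch (Python, purely illustrative, on a toy quadratic of my own choosing). Every component of the gradient is generally nonzero, so each small step adjusts all three variables at once, and the iteration converges rather than descending into chaos:

```python
def f(v):
    # A fully coupled quadratic: every variable interacts with every other.
    x, y, z = v
    return x * x + y * y + z * z + x * y + y * z + z * x

def grad(v):
    # All three partial derivatives are generally nonzero at once.
    x, y, z = v
    return (2 * x + y + z, x + 2 * y + z, x + y + 2 * z)

def steepest_descent(v, lr=0.2, steps=200):
    # Each step moves *every* variable simultaneously, a small amount
    # in the direction of the negative gradient.
    for _ in range(steps):
        g = grad(v)
        v = tuple(vi - lr * gi for vi, gi in zip(v, g))
    return v

v = steepest_descent((1.0, 2.0, -3.0))
print(f(v))  # essentially 0: the minimum at the origin
```

The step size and starting point here are arbitrary; the point is only that small simultaneous moves along the gradient find the minimum of a coupled function iteratively.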
>
>
> I don't think it's complete B.S. - in fact I originally thought that your
> statement about changing everything simultaneously leading to chaos was also
> utter B.S., though I think I can see where you're coming from, and it's not
> the same place as the ID'er is coming from. I think ID folk tend to see
> evolution as a bit like the mathematical problem of trying to optimize an
> objective function of multiple variables, to find the correct combination.
> The final value of the objective function (which in a genetic algorithm
> would be the "fitness function") is dependent simultaneously on all the
> variables. Whether or not the problem can be solved by a genetic algorithm
> depends entirely on how tightly the variables are coupled together.
>
I find this puzzling. I think the criterion for using a GA is the form of
the fitness function. If it varies smoothly and (especially) if its gradient
can be computed or approximated without undue computational effort, then
steepest descent, conjugate gradient, or possibly Newton's method is
appropriate. If it has a complex and/or nondifferentiable structure, then
GAs are appropriate. For example, if you were trying to maximize
(1-x^2)*sin(1/x) on [-1,1], a gradient algorithm wouldn't be appropriate
because the function is not differentiable at 0. Yet a GA would (I haven't
tried it, but I'm pretty sure this is so) find a point near zero. (The
function isn't even defined at zero, but the peaks increase in value
as the search point tends toward zero.)
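Here is a minimal sketch of such a GA (Python; truncation selection plus Gaussian mutation, with population size and mutation scale chosen arbitrarily for illustration). It never computes a gradient, and it should settle on one of the high peaks close to zero:

```python
import math
import random

def fitness(x):
    # The function from the example: (1 - x^2) * sin(1/x) on [-1, 1].
    # It is undefined at 0, so score 0 itself as worthless.
    if x == 0.0:
        return float("-inf")
    return (1.0 - x * x) * math.sin(1.0 / x)

def ga_maximize(pop_size=200, generations=300, sigma=0.03, seed=1):
    # About the simplest GA there is: keep the fitter half of the
    # population each generation, and refill with mutated copies.
    rng = random.Random(seed)
    pop = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [max(-1.0, min(1.0, p + rng.gauss(0.0, sigma)))
                    for p in parents]            # Gaussian mutation
        pop = parents + children
    return max(pop, key=fitness)

best = ga_maximize()
print(best, fitness(best))
```

Because the parents survive unmutated, the best individual never gets worse; the search climbs whichever of the increasingly tall peaks near zero it lands on.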
In a multivariable optimization problem a GA will vary many, perhaps all, of
the input variables, in possibly large steps. Chaos doesn't result because
the unsuccessful variations are eliminated in the next generation.
> For most of the problems I've worked on ( optimisation of weights in a
> neural network; optimisation of large chemical plants) the variables are
> always highly coupled. So the approach to take is to compute the gradient
> vector of the objective function with respect to all the variables that are
> to be tuned; then you make small steps in the direction of the gradient
> vector (or in a search direction based on the gradient vector, as in
> Conjugate Gradients or Quasi-Newton methods). In general most of the
> elements of the gradient vector are going to be non-zero, and hence you do
> indeed change everything simultaneously, and it does not lead to chaos, but
> allows the solution to be found in an iterative fashion. By contrast, if
> you vary one variable at a time, in turn, then you get an absolutely
> useless optimisation algorithm, unless the variables are decoupled. For
> example if you are trying to find the minimum of x^2 + y^2 + z^2 then the
> variables are decoupled, and you can change one at a time. But if you had
> xy + yz + zx then you could not.
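A quick numerical illustration of that contrast (Python; the coupled test function and its closed-form coordinate updates are my own toy choices, not anyone's actual plant model). Exact one-variable-at-a-time minimization finishes the decoupled problem in a single sweep, but creeps on a coupled one:

```python
def sweeps_to_converge(f, update_x, update_y, x, y, tol=1e-8, max_sweeps=10000):
    # "Vary one variable at a time": each update minimizes f exactly
    # over a single coordinate while holding the other one fixed.
    for sweep in range(1, max_sweeps + 1):
        x = update_x(x, y)
        y = update_y(x, y)
        if f(x, y) < tol:
            return sweep
    return max_sweeps

# Decoupled case: f = x^2 + y^2.  Each 1-D minimizer is simply 0,
# independent of the other variable, so one sweep suffices.
decoupled = sweeps_to_converge(
    lambda x, y: x * x + y * y,
    lambda x, y: 0.0,
    lambda x, y: 0.0,
    1.0, -1.0)

# Coupled case: f = (x - y)^2 + 0.01 (x + y)^2.  Setting each partial
# derivative to zero gives x = (1.98/2.02) y and y = (1.98/2.02) x, so
# every 1-D minimizer depends on the other variable, and the sweeps
# only creep down the narrow valley toward the minimum at (0, 0).
coupled = sweeps_to_converge(
    lambda x, y: (x - y) ** 2 + 0.01 * (x + y) ** 2,
    lambda x, y: (1.98 / 2.02) * y,
    lambda x, y: (1.98 / 2.02) * x,
    1.0, -1.0)

print(decoupled, coupled)  # 1 sweep vs. a couple of hundred sweeps
```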
>
>
>
> One misconception of evolution is that it is an optimization algorithm.
> Evolution by its nature hits "good enough" long before optimum. That being
> said, hitting "good enough" is more like real-world design than mathematical
> optimization. The real cost function is not producing a specific feature
> but the survival of a genetic line in a particular environment (which
> changes over time; more on this below).
>
Agreed. Genetic algorithms work well in computers and in nature because they
are brute-force parallel algorithms. Use a large enough population and have
available a rich enough set of variations, and eventually you will get an
improvement in fitness. And as Iain pointed out, such a population can track
a changing fitness function. (I don't understand Rich's comment about how a
static fitness function can lead to extinction. Shouldn't it just lead to a
fairly stable population that wanders around the peak of the fitness
function?)
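The tracking claim is easy to demonstrate in miniature (Python; a contrived one-dimensional setup of my own, not a model of anything biological). The fitness peak drifts steadily across the run, and an ordinary select-and-mutate population follows it:

```python
import random

def track_moving_peak(pop_size=60, generations=300, sigma=0.05, seed=2):
    # The optimum drifts from 0.0 to 3.0 over the run; each generation
    # the population is selected against the *current* peak position.
    rng = random.Random(seed)
    pop = [rng.gauss(0.0, 0.1) for _ in range(pop_size)]
    for g in range(generations):
        target = 3.0 * g / generations            # slowly moving optimum
        pop.sort(key=lambda x: -(x - target) ** 2, reverse=True)
        parents = pop[: pop_size // 2]            # keep the closer half
        pop = parents + [p + rng.gauss(0.0, sigma) for p in parents]
    return sum(pop) / len(pop)

mean = track_moving_peak()
print(mean)  # close to 3.0: the population tracked the drifting peak
```

As long as the peak moves slowly relative to the variation the mutations supply, selection keeps the population near it; move the peak too fast and the population falls behind.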
>
>
>
> I think that is where the ID people are coming from; possibly influenced by
> misleading metaphors such as Dawkins's "Climbing Mount Improbable". I know
> this too, because I was misled by this analogy, and could easily see that
> such problems weren't easily soluble via a classical genetic algorithm. It
> was the main reason why I was attracted to ID originally - essentially the
> notion of "Irreducible Complexity" explained clearly to me why genetic
> algorithms (of the classical type of optimising parameter sets derived from
> mutating, and naturally selected bit strings), had so few examples where
> they worked. [I now think the whole "hill climbing" analogy is quite false -
> for a start the peak of the hill doesn't always stay in the same place!]
>
>
> Let me expand on your last sentence. Again, because this is viewed as an
> optimization algorithm, what is not appreciated -- except by you -- is that
> the so-called cost function is not static. What is being solved is a nearby
> problem and thus amenable to genetic algorithms. That solution is co-opted
> later on to solve a *different* problem. If anything, the analogy is more
> of buying and adapting IP than of building from scratch. If the cost
> function did not change with time, all evolution would provide is mass
> extinction, since, as you noted, a genetic algorithm has difficulty finding
> solutions at a great distance from the original state. If you do need to
> change many variables simultaneously, it simply won't work, and that was my
> original point. Thus, you never get a beaver with a chain saw, no matter
> how much advantage such an adaptation would give. To be sure, if you saw
> large discontinuities resulting from many simultaneous variable changes, it
> would be a coup de grace against evolution, but that's not what the data
> shows. There are no
> beavers with chain saws. Secular evolutionists -- despite bad metaphors by
> Dawkins and company -- get this but ID proponents have failed to understand
> this fairly simple concept over decades and I find this quite frustrating.
>
>
>
>
> However, I think you are looking at it from a different perspective; you
> start with a good working design - one of the hallmarks of which is going to
> be modularity - it would be designed so that you could indeed change small
> bits without affecting the others. In such a case, you are indeed right in
> suggesting that changing everything at once would be chaotic.
>
>
> Exactly. Evolution does show modularity in at least two ways:
>
> 1. Many evolutionarily conserved functions.
> 2. So-called convergent evolution where similar functions are derived
> through different descent lines.
>
> This shows that a goal or design is quite compatible with the evolutionary
> process. The IDM needs to spend more time trying to understand the
> evolutionary process and less time falsely claiming superiority. Then they
> might have more chance
> at success with their original goal of providing evidence that creation and
> specifically life is designed by God.
>
> Rich Blinne
> Member ASA
>
-- 
William E (Bill) Hamilton Jr., Ph.D.
Member American Scientific Affiliation
Austin, TX
248 821 8156

To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.

Received on Sun Aug 24 17:40:30 2008
This archive was generated by hypermail 2.1.8 : Sun Aug 24 2008 - 17:40:30 EDT