Re: Evidence and proof; was More on Gosse's OMPHALOS

From: George Andrews Jr. (gandrews@as.wm.edu)
Date: Thu Feb 15 2001 - 10:56:54 EST


    Hi Iain;

    > Iain Strachan wrote:

    > > But my
    > > reasons for skepticism towards evolution have to do with my own experiences
    > > using "genetic algorithms" (GA's), a form of machine learning "inspired" by
    > > evolution. While this area of work has proved useful in a few niche areas,
    > > I think there are sound theoretical reasons supporting what I found
    > > empirically; that the algorithms only solve small scale problems, but cannot
    > > solve problems involving more than a few dozen variables. I won't go into
    > > intricate details here, but it has to do with what is known as the "curse of
    > > dimension", a term coined by the control theorist Bellman in 1961.
    > > Essentially it shows that certain types of problem in high dimensional space
    > > have a computational complexity (i.e. run time) that scales exponentially
    > > with the problem dimension. As no-one (to my knowledge) has managed to get
    > > a genetic algorithm to train up a simple neural network with a few dozen
    > > parameters, it seems likely to me that even billions of years isn't going to
    > > be long enough to develop complex specific protein codes. That's all I'll
    > > say on GA's for the moment; maybe later it's a possible thread of
    > > discussion.
    > >
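
    Just so we agree on what "exponential in the problem dimension" means, here is the
    back-of-the-envelope picture I have in mind (a toy Python sketch of my own; the
    choice of 10 resolvable levels per parameter is arbitrary):

        # If each of d parameters is resolved to only k distinct levels, an
        # exhaustive search of the weight space must cover k**d candidates.
        k = 10                                  # assumed resolution per parameter
        for d in (2, 10, 30, 50):               # d = number of network weights
            print("d = %2d : about %.1e candidate solutions" % (d, k ** d))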

    I do not understand why the curse of dimensionality presents a problem to evolution.
    Isn't it only a problem for linear or parametric models? Nature is surely nonlinear,
    and therefore nonparametric models--which avoid the curse--are called for when trying
    to understand and mimic nature. Besides, I thought GA's were invented precisely for
    solving high-dimensional problems by exploring the solution space with large numbers
    of initial-condition-dependent ensembles? Hence, GA's actually benefit from the
    so-called "blessing of dimensionality" in that there are lots of solutions to choose
    from.

    In terms of training neural nets, since we train each other, perhaps the problem you
    allude to is one of equality of complexity; i.e., "it takes one to train one." Isn't
    a GA less complex than a many-parameter neural net, since it is an algorithm?

    Thanks
    George A.

    George A. Andrews Jr.
    Physics/Applied Science
    College of William & Mary
    P.O. Box 8795
    Williamsburg, VA 23187-8795



    This archive was generated by hypermail 2b29 : Thu Feb 15 2001 - 10:52:43 EST