Re: Information request re: Dawkins' "weasel" algorithm

From: Richard Wein (rwein@lineone.net)
Date: Tue Oct 10 2000 - 11:05:27 EDT


    From: Richard Wein <rwein@lineone.net>

    >From: Wesley R. Elsberry <welsberr@inia.cls.org>
    >
    >>Information request to William Dembski:
    >>
    >>[Quote]
    >>
    >>He starts with a target sequence taken from Shakespeare's
    >>Hamlet, namely, METHINKS IT IS LIKE A WEASEL. If we tried to
    >>attain this sequence by pure chance (for example, by randomly
    >>shaking out scrabble pieces), the probability of getting it on
    >>the first try would be around 1 in 10^40, and correspondingly
    >>it would take on average about 10^40 tries to stand a better
    >>than even chance of getting it.12 Thus, if we depended on pure
    >>chance to attain this target sequence, we would in all
    >>likelihood be unsuccessful. As a problem for pure chance,
    >>attaining Dawkins's target sequence is an exercise in
    >>generating specified complexity, and it becomes clear that
    >>pure chance simply is not up to the task.
    >>
    >>But consider next Dawkins' reframing of the problem. In place
    >>of pure chance, he considers the following evolutionary
    >>algorithm: (1) Start with a randomly selected sequence of 28
    >>capital Roman letters and spaces (that's the length of METHINKS
    >>IT IS LIKE A WEASEL); (2) randomly alter all the letters and
    >>spaces in the current sequence that do not agree with the
    >>target sequence; (3) whenever an alteration happens to match a
    >>corresponding letter in the target sequence, leave it and
    >>randomly alter only those remaining letters that still differ
    >>from the target sequence. In very short order this algorithm
    >>converges to Dawkins's target sequence. In The Blind
    >>Watchmaker, Dawkins recounts a computer simulation of this
    >>algorithm that converges in 43 steps.13 In place of 10^40
    >>tries on average for pure chance to generate the target
    >>sequence, it now takes on average only 40 tries to generate it
    >>via an evolutionary algorithm.
    >>
    >>[End Quote - WA Dembski, "Can Evolutionary Algorithms Generate
    >>Specified Complexity", "Nature of Nature" conference, Baylor
    >>University]
    >>
    >>There are several issues that this text brings up. Of the three
    >>steps listed as comprising Dawkins' algorithm, only step (1) has
    >>anything like it in the pages of "The Blind Watchmaker". Steps
    >>(2) and (3) appear to be inventions rather than descriptions.
    >>What is the basis for claiming that steps (2) and (3) represent
    >>Dawkins' "weasel" algorithm?
    >>
    >>Further on, the issue of "tries" it takes to find a solution
    >>is raised. For "pure chance", a figure of ~10^40 "tries" is
    >>given, which would correspond to individual candidate
    >>solutions tested. For "weasel", though, only ~40 "tries" are
    >>given, but in this case the number 40 derives from the number
    >>of generations taken by the "weasel" algorithm rather than the
    >>number of candidate solutions examined. It seems to me that
    >>for the purpose of comparison, a "try" ought to mean the same
    >>thing for both approaches. I would like to see a restatement
    >>of the section concerning "tries" that takes this into
    >>account.
    >
    >It's been a while since I read TBW, but I'm almost certain you're wrong
    >here, Wesley. Dembski's description above of Dawkins' weasel algorithm
    >seems OK to me (except that I wouldn't call the weasel model an
    >"evolutionary algorithm", because it has a built-in target, and I don't
    >think Dawkins calls it one.)

    On re-reading, I see that Dembski's description of the weasel model is less
    clear than I first thought. But it can just about be reconciled with
    Dawkins' original.

    Correctly described, each randomization of the remaining unmatched
    characters counts as one step, and it proceeds whether or not any new
    match was achieved at the previous step (see the sketch below). Dembski
    has the algorithm randomizing the remaining unmatched characters when,
    and only when, a new match is made. This creates the following potential
    problems:
    (a) One has to assume that each randomization (of all remaining unmatched
    characters) is completed before proceeding to check whether any new matches
    have occurred, but this is not clear from Dembski's account.
    (b) Dembski's account implies that if, at any stage, randomizing the
    remaining unmatched characters fails to produce another match, then the
    algorithm ceases; this is clearly wrong, but one can assume it's not what
    Dembski intended.
    (c) It's not clear from Dembski's account what constitutes a "step".
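
    For concreteness, here is a minimal sketch (in Python, my own
    illustration rather than Dawkins' or Dembski's actual code) of the
    procedure as I've described it above: characters that already match the
    target are kept, all the others are re-randomized at every step, and one
    step is counted per randomization pass.

        import random

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 26 capital letters plus the space

        def weasel_steps(rng=random):
            # Step (1): start from a random 28-character sequence.
            current = [rng.choice(ALPHABET) for _ in TARGET]
            steps = 0
            while "".join(current) != TARGET:
                steps += 1
                # One step: re-randomize every position that does not yet
                # match the target; matched positions are left untouched.
                for i, ch in enumerate(TARGET):
                    if current[i] != ch:
                        current[i] = rng.choice(ALPHABET)
            return steps

        print(weasel_steps())   # number of steps taken by one run

    Whether this corresponds exactly to Dawkins' published program is, of
    course, part of what is in dispute; the sketch only pins down the reading
    of Dembski's three steps that I have in mind.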

    I think the issue is one of poor writing on Dembski's part rather than a
    difference in interpretation of Dawkins' model. If one knows what Dawkins
    really wrote and interprets Dembski generously, then there should be no
    problem.
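
    As an aside on the figures quoted above, the 1-in-10^40 estimate for
    pure chance follows directly from an alphabet of 27 symbols (26 capital
    letters plus the space) raised to the 28 positions of the target, which
    a one-liner confirms:

        print(27 ** 28)   # about 1.2 * 10**40 equally likely sequences,
                          # i.e. roughly 1 chance in 10^40 per blind try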

    Richard Wein (Tich)
    --------------------------------
    "Do the calculation. Take the numbers seriously. See if the underlying
    probabilities really are small enough to yield design."
      -- W. A. Dembski, who has never presented any calculation to back up his
    claim to have detected Intelligent Design in life.


