Dembski's confusion over specified complexity

From: Richard Wein (rwein@lineone.net)
Date: Thu Nov 23 2000 - 03:54:50 EST


    [I originally posted the following to the Meta Reiterations list. I thought
    the issue was sufficiently interesting to justify posting it here too.]

    In my critique of the Design Inference (Metaviews 096), I wrote that Dembski
    tends to conflate his own usage of the term "specified complexity" with
    other writers' usage of the same term, even though it is very unlikely
    that those writers are using it in the same sense.

    A particularly good example of Dembski's confusion appears in his latest
    Metaviews article (098):

    "Specified
    complexity is a form of information, though one richer than Shannon
    information, which focuses exclusively on the complexity of
    information without reference to its specification. A repetitive
    sequence of bits is specified without being complex. A random
    sequence of bits is complex without being specified. A sequence of
    bits representing, say, a progression of prime numbers will be both
    complex and specified. In _The Design Inference_ I show how inferring
    design is equivalent to identifying specified complexity
    (significantly, this means that intelligent design can be conceived
    as a branch of information theory)."

    Here, Dembski states that "A repetitive sequence of bits is specified
    without being complex." As an example of a repetitive sequence of bits, I
    take one in which all the bits are 1's. Under some definitions of specified
    complexity, this sequence may well fail to have this property. But, using
    Dembski's own definition, this sequence *does* have specified complexity
    with respect to the chance hypothesis that the bits are independent
    equiprobable random variables, assuming the number of bits is sufficiently
    large.
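
    (To make that concrete, here is a quick back-of-the-envelope check in
    Python, purely as my own illustration. The 10^-150, roughly 500-bit,
    "universal probability bound" is my recollection of the threshold Dembski
    uses; it is not stated in the passage quoted above.)

        from math import log2

        n = 1000   # length of the all-1's sequence; any sufficiently large n will do

        # Under the chance hypothesis of independent equiprobable bits, every
        # particular n-bit sequence, including 111...1, has probability 2**-n.
        p = 2.0 ** -n            # representable as a float for n up to ~1000;
                                 # for much larger n, just use complexity = n directly

        # Dembski measures "complexity" as -log2 of that probability.
        complexity_bits = -log2(p)          # = n = 1000 bits

        # If I recall the universal probability bound of 10**-150 (about 498 bits)
        # correctly, the all-1's sequence clears it comfortably for large n.
        print(complexity_bits, complexity_bits > -log2(1e-150))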

    By Dembski's criteria, a suitable specification would be "all bits 1" or
    "all bits the same". Indeed, Dembski chooses a similar specification in his
    Caputo example. Depending on which of these specifications we choose, the
    probability of the specified event is either equal to or double the
    probability of a specific "random" sequence of bits, so the event has the
    same specified complexity (or just one bit less).
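
    (Again, just my own sketch of the arithmetic, under the same chance
    hypothesis as above:)

        from math import log2

        n = 1000
        p_all_ones = 2.0 ** -n        # specification "all bits 1": a single outcome
        p_all_same = 2 * p_all_ones   # specification "all bits the same": 000...0 or 111...1

        print(-log2(p_all_ones))      # n bits: same as any particular "random" sequence
        print(-log2(p_all_same))      # n - 1 bits: just one bit less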

    Dembski seems to be confusing his own definition of complexity (which is a
    probabilistic one) with the more commonly used complexity measure of
    Kolmogorov and Chaitin (which is an algorithmic one). The two concepts are
    quite different and give very different results!
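
    (To see how differently the two measures behave, here is a rough sketch.
    Kolmogorov complexity is not computable exactly, so I am using an
    off-the-shelf compressor as a crude stand-in for it; the numbers are only
    illustrative.)

        import random
        import zlib
        from math import log2

        n = 1000
        repetitive = "1" * n
        random.seed(0)
        random_bits = "".join(random.choice("01") for _ in range(n))

        # Probabilistic complexity (Dembski): -log2 P under the iid equiprobable-bit
        # hypothesis. Every particular n-bit sequence scores the same n bits,
        # repetitive or not.
        print(-log2(2.0 ** -n), "bits (probabilistic, either sequence)")

        # Algorithmic complexity (Kolmogorov/Chaitin): length of the shortest
        # description. A compressor only gives an upper bound, but it already
        # separates the two sequences sharply.
        print(len(zlib.compress(repetitive.encode())) * 8, "bits (repetitive, compressed)")
        print(len(zlib.compress(random_bits.encode())) * 8, "bits (random, compressed)")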

    The whole issue of specified complexity is a red herring, as far as the
    Design Inference is concerned. What Dembski's own method requires him to
    show is that the probability of some "specified" biological
    structure evolving naturally is very small. If he can do that, he will have
    achieved something of great interest. Dembski's transformation of
    probabilities into complexities only serves to obfuscate this simple idea.

    Richard Wein (Tich)
    --------------------------------
    "Do the calculation. Take the numbers seriously. See if the underlying
    probabilities really are small enough to yield design."
      -- W. A. Dembski, who has never presented any calculation to back up his
    claim to have detected Intelligent Design in life.


