Re: Dembski's universal small probability bound

From: Richard Wein (rwein@lineone.net)
Date: Tue Jun 20 2000 - 10:13:08 EDT

    Let me clarify something I wrote...

    Wesley:
    >>This is where I see the conditional probability used in TDI
    >>come in. We compute the complexity of an event on the basis
    >>of how improbable the event is if it were due to chance.
    >>This does not mean that we actually believe it to be due to
    >>chance, or that we consider the chance hypothesis sufficient
    >>to the task. But like other forms of statistical inference,
    >>chance gives us a null hypothesis to perhaps reject.

    Me:
    >But you can't just calculate the probability of an event "as if it were due
    >to chance". You need to assume a chance hypothesis. For example, the
    >probability of rolling a 6 with a die takes a different value if I assume
    >the die is fair than if I assume the die is biased in a particular way.

    First of all, let me say that "chance hypothesis" and "null hypothesis" are
    effectively the same thing. I use "chance hypothesis" because it's the term
    Dembski uses in TDI.

    There are many different chance hypotheses that one can choose to test, e.g.
    (a) the die is fair (probability of each outcome is 1/6);
    (b) the die is biased with a 50% probability of a 6 and 10% probability of
    each other outcome;
    (c) the die is biased with a 95% probability of a 6 and 1% probability of
    each other outcome;
    etc.
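    To make the point concrete, here is a minimal sketch (mine, not anything from TDI) of how the same outcome is assigned different probabilities under the three hypotheses above, assuming independent rolls:

    ```python
    # Probability of a 6 on a single roll under each chance hypothesis.
    # These numbers come straight from hypotheses (a), (b), (c) above.
    hypotheses = {
        "(a) fair die":      1 / 6,
        "(b) mildly biased": 0.5,
        "(c) heavily biased": 0.95,
    }

    # The same event -- three 6s in a row -- has a very different
    # probability depending on which hypothesis we assume.
    for name, p6 in hypotheses.items():
        print(f"{name}: P(three 6s in a row) = {p6 ** 3:.6f}")
    ```

    The event "three 6s in a row" is quite improbable under (a), but quite probable under (c), so its "complexity" in Dembski's sense is not a property of the event alone.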

    One cannot simply test the hypothesis that the outcome is "due to chance";
    one needs a specific hypothesis about the probability distribution. (Perhaps
    the use of the term "chance hypothesis" causes confusion, because it sounds
    like a hypothesis that the event is "due to chance".)

    I think there is a common assumption that a variable which is "random" or
    "due to chance" has a uniform probability distribution, i.e. case (a) above.
    People who claim to apply Dembski's methods usually make some simplistic
    assumption of this sort. All they then succeed in doing is showing that
    their simplistic hypothesis is false.
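    A small simulation (my own illustration, not anyone's published calculation) shows how this plays out: rolls generated by the biased die of hypothesis (b) will decisively reject the fair-die hypothesis (a) on a standard chi-square goodness-of-fit test, yet the rolls are still entirely "due to chance".

    ```python
    import random

    random.seed(1)  # reproducible illustration

    # Simulate 600 rolls of a die that is actually biased as in
    # hypothesis (b): 50% chance of a 6, 10% chance of each other face.
    faces = [1, 2, 3, 4, 5, 6]
    weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]
    rolls = random.choices(faces, weights=weights, k=600)

    # Test the *fair-die* hypothesis (a): expected count per face is 100.
    observed = [rolls.count(f) for f in faces]
    chi2 = sum((o - 100) ** 2 / 100 for o in observed)

    # The 5% critical value of chi-square with 5 degrees of freedom
    # is about 11.07, so fairness is rejected by a huge margin.
    print(f"observed counts: {observed}")
    print(f"chi-square = {chi2:.1f} (reject fairness if > 11.07)")
    ```

    Rejecting hypothesis (a) here tells us nothing about whether the outcome was "due to chance"; it only tells us that one particular chance hypothesis is wrong.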

    In the case of a natural event, such as the evolution of a flagellum, the
    relevant chance hypothesis--the one that needs to be tested--is that the
    flagellum evolved by random mutation and natural selection. But, of course,
    no-one can calculate the probability of the flagellum evolving under this
    hypothesis. It's far too complicated. So, instead, IDers make some
    simplistic assumption, e.g. that the components of the flagellum (or its
    DNA) each have an independent uniform distribution. They then proceed to
    falsify this irrelevant hypothesis. In other words, they're attacking a
    straw man.

    This is the same thing that creationists have been doing for years. But
    Dembski has attempted to give this old chestnut a new lease of life by
    cloaking it in equivocations and camouflaging it with
    impressive-sounding formulas and jargon. Reinventing the Explanatory Filter
    as CSI is just the latest of these equivocations.

    The most interesting question to me is whether Dembski is doing this
    knowingly, in which case it's a very clever piece of propaganda, or whether
    he's succeeded in fooling himself, along with his supporters.

    Richard Wein (Tich)



    This archive was generated by hypermail 2b29 : Tue Jun 20 2000 - 10:11:58 EDT