Re: The Design Inference

From: Richard Wein (rwein@lineone.net)
Date: Mon May 15 2000 - 19:21:52 EDT


    Since reading TDI, I've re-read the various reviews of the book which can be
    found on the web, and noticed some significant variations in the reviewers'
    interpretations of Dembski. In fact I think some of the reviewers have
    seriously misinterpreted Dembski in places. The reviews in question are by
    Sober (http://philosophy.wisc.edu/sober/dembski.pdf), Chiprout
    (http://members.aol.com/echiprt/design.htm) and Elsberry
    (http://inia.cls.org/~welsberr/zgists/wre/papers/dembski7.html). In
    addition, Howard Van Till kindly responded to my earlier post, sending me a
    copy of a review he wrote for Zygon magazine.

    In this post I'd like to deal with just one question: whether the
    Explanatory Filter requires each chance hypothesis to be considered and
    rejected individually, or whether rejection of one chance hypothesis leads
    to the automatic rejection of all other possible chance hypotheses too.
    Sober takes the latter interpretation, and criticizes it strongly. If this
    is really what Dembski means, then Sober is right to criticize it, indeed to
    tear it to shreds, since it's clearly absurd.

    However, I think he's misinterpreted Dembski. Although Dembski does make a
    number of remarks, both in TDI and elsewhere, which seem to support this
    interpretation, a careful reading of TDI rules it out. The evidence is
    two-fold:

    i) The formulation of the design inference at the bottom of p. 50 and top of
    p. 51 has as premises that the event (E) must be specified and of small
    probability with respect to *all* chance hypotheses (H). This requirement is
    carried through to all the subsequent formulations.

    ii) The footnote at the bottom of page 44 gives an example with two chance
    hypotheses, and makes clear that they must each be rejected. It continues:
    "In case still more chance hypotheses are operating, design follows only if
    each of these additional chance hypotheses get eliminated as well, which
    means that the event has to be an SP event with respect to all the relevant
    chance hypotheses and in each case be specified as well."
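
    To make that reading concrete, here is a minimal sketch (my own Python
    rendering, not Dembski's notation; the function and parameter names are
    mine) of a filter that infers design only when every relevant chance
    hypothesis is individually eliminated:

        def design_inferred(event, chance_hypotheses, prob, is_specified, alpha):
            # Design follows only if the event is specified and of small
            # probability (below the bound alpha) under each hypothesis H.
            for H in chance_hypotheses:
                if prob(event, H) >= alpha or not is_specified(event, H):
                    return False  # H survives as a live explanation
            return True  # every chance hypothesis has been rejected

    On this reading, rejecting one chance hypothesis never entails rejecting
    the others; each must fail on its own terms.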

    I must say I find it mystifying, even suspicious, that, out of the many
    examples in the book, only one involves multiple chance hypotheses, and that
    one is a minor example, relegated to a footnote. The issue of multiple
    chance hypotheses is surely a very important one, and Dembski should deal
    with it at greater length.

    The confusion is greatly compounded by some of Dembski's on-line articles, a
    good example being "Another Way to Detect Design?"
    (http://www.baylor.edu/~William_Dembski/docs_critics/sober.htm). This
    article is actually a reply to Sober's review, yet Dembski fails to point
    out Sober's crucial misinterpretation which I mentioned above!!!

    I also notice that Dembski doesn't once refer to the Explanatory Filter as
    such. It seems that by the time he wrote this article he had switched to his
    "specified complexity" terminology. But he fails to note this change in
    terminology, even writing: "In The Design Inference (Cambridge, 1998) I
    argue that specified complexity is a reliable empirical marker of
    intelligent design." However, I can't find any use of the term "specified
    complexity" in TDI. While it's possible I've overlooked the odd use of it, I
    rather doubt it, because in TDI the word "complexity" is reserved for use in
    connection with the "tractability" requirement, which is itself a
    prerequisite of specification. Furthermore, there's no mention of
    "specified complexity" in the index.

    You may wonder why I'm making a fuss about a matter of terminology. The
    reason is that I think Dembski (whether consciously or unconsciously) is
    using this change in terminology to equivocate over the matter of multiple
    chance hypotheses. As I mentioned in my last post, there is not, in general,
    merely a single level of specified complexity. Instead, there must be one
    level of specified complexity for each chance hypothesis. By continually
    referring to specified complexity in the singular, without respect to a
    chance hypothesis, Dembski avoids the issue of multiple chance hypotheses. I
    also think his use of the term "specified complexity" is liable to mislead
    the reader into accepting by default that living organisms have this
    attribute, because organisms are obviously complex in the normal sense of
    the word (but not necessarily so in Dembski's specialized sense).
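
    To put the point in concrete terms, here is a rough sketch (my own
    illustration; the -log2 measure follows the information-theoretic usage
    of Dembski's later articles, not TDI's tractability-based sense of
    "complexity"):

        from math import log2

        def specified_complexity_bits(p_event_given_H):
            # Complexity is relative to a particular chance hypothesis H:
            # the smaller the event's probability under H, the higher it is.
            return -log2(p_event_given_H)

        # The same event scores differently under different hypotheses:
        # -log2(1e-60) is about 199 bits, while -log2(0.01) is under 7 bits.

    So "the specified complexity of an organism" is simply not well defined
    until a particular chance hypothesis has been named.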

    At this point, let me correct an error that I made in my last post. I
    concluded there that the Explanatory Filter is nothing but a tautology. In
    fact, I realize now that the Explanatory Filter does have one
    constructive (non-tautological) thing to say: it says that the Law of Small
    Probability can legitimately be applied to multiple chance hypotheses with
    respect to the same event. That this is so is not obvious. And, as far as I
    can see, Dembski does not establish it. Indeed, I have a strong suspicion
    that it's not true for all sets of chance hypotheses. But I need to consider
    this question more carefully. I hope to come back to it in a future post.

    That's all for now. Comments would be appreciated.

    Richard Wein.

    -----Original Message-----
    From: Richard Wein <rwein@lineone.net>
    To: evolution@calvin.edu <evolution@calvin.edu>
    Date: 14 May 2000 16:33
    Subject: The Design Inference

    >Until now, I've based my dismissal of Dembski's design argument on the
    >articles I've found on the web. I've read all of Dembski's non-theological
    >articles that I've been able to find, and several reviews of his book, "The
    >Design Inference". However, I wanted to read the book itself, and have had
    >it on request from the public library for many weeks. Now, at last, I've
    >received it and read it. (No, the wait wasn't because of its
    >popularity--quite the opposite! The book is virtually unknown here in
    >Britain, and the library had to acquire it through the
    >inter-library loan system.)
    >
    >I'd like to summarize my views on the book, and hopefully provoke some
    >discussion. (By the way, I studied statistics at university, to BSc level,
    >but that was a long, long time ago, so I would consider myself a
    >well-informed layman on the subject of statistics.)
    >
    >The Law of Small Probability
    >---------------------------------------
    >Most of the book is devoted to establishing this law, which says--put
    >simply--that "specified events of small probability do not occur by
    >chance".
    >
    >It seems to me that this law introduces two novelties into statistical
    >theory:
    >(a) It allows a specification to be established retrospectively, i.e. after
    >the event in question has occurred.
    >(b) It provides a method for setting a small probability bound, below which
    >we can justifiably say that events do not occur.
    >Let me say that these are novelties as far as I'm concerned, but I can't
    >say with any confidence that they haven't already been addressed in the
    >statistics literature.
    >
    >Now, I don't propose to discuss the LSP in detail. Such a discussion would
    >be pretty technical and probably not of interest to many readers here. If
    >Dembski has succeeded in putting this law on a firm theoretical basis, then
    >I think he will have made a significant contribution to statistics. However,
    >I'm rather doubtful about whether he has done so. Several of his inferences
    >seem dubious to me. But I don't feel competent to make a definite
    >pronouncement on the matter. I'd like to wait and see what the statistics
    >community has to say. Does anyone here know what its reaction has been?
    >
    >Anyway, regardless of whether this law does indeed have a firm theoretical
    >foundation, I'm quite willing to accept it as a practical methodology. The
    >law *seems* reasonable, and it's one which we all intuitively apply. It's
    >our intuition of such a law that enables us to cry foul when we see someone
    >deal a perfect Bridge hand (each player receives 13 cards of one suit). It's
    >our intuition of such a law that leads us to conclude that Nicholas Caputo
    >rigged the ballots (in Dembski's favourite example).
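    >
    >(As a quick check on that intuition--my own back-of-envelope calculation,
    >not from TDI--the probability of a perfect deal really is tiny:
    >
    >    from math import factorial
    >    deals = factorial(52) // factorial(13)**4  # distinct 13-13-13-13 deals
    >    perfect = factorial(4)                     # one complete suit per player
    >    print(perfect / deals)                     # roughly 4.5e-28
    >
    >Small enough, given the obvious specification, for us all to cry foul.)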
    >
    >Dembski wants to establish this law because he hopes to use it to prove
    >that life could not occur by chance. Well, I have no problem with that.
    >Dawkins implicitly uses such a law when he argues, in The Blind Watchmaker,
    >that "We can accept a certain amount of luck in our explanations, but not
    >too much."
    >
    >In developing his LSP, Dembski is doing science (or perhaps, more
    >accurately, mathematics). Whether it is good or bad science remains to be
    >seen (as far as I'm concerned). However, when he moves on from the LSP to
    >the Explanatory Filter, Dembski jumps from the arena of science into the
    >quagmire of pseudo-science.
    >
    >The Explanatory Filter
    >------------------------------
    >Dembski's Explanatory Filter (EF) says that, once you've eliminated
    >regularity and chance as possible explanations of an event, you must
    >conclude that the event is the result of design. So what's wrong with this?
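    >
    >(In outline--my own rendering, with illustrative thresholds, since TDI
    >never fixes the boundary between the probability bands--the filter is a
    >three-way decision:
    >
    >    def explanatory_filter(p, specified):
    >        HP_BOUND, SP_BOUND = 0.999, 1e-150  # illustrative values only
    >        if p >= HP_BOUND:
    >            return "regularity"  # highly probable (HP) events
    >        if p > SP_BOUND or not specified:
    >            return "chance"      # intermediate probability, or unspecified
    >        return "design"          # specified events of small probability
    >
    >Design is simply what remains when the first two nodes fail to fire.)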
    >
    >Well, first of all, Dembski is equivocal about what he means by "design". He
    >initially defines design to be the "set-theoretic complement of the
    >disjunction regularity-or-chance", or, in other words: "To attribute an
    >event to design is to say that it cannot reasonably be referred to either
    >regularity or chance" (p. 36). By this definition, the EF is tautological,
    >but Dembski promises that he will later provide us with a means of
    >determining which cases of "design" can be attributed to "intelligent
    >agency". Or is he going to attribute *all* cases of design to intelligent
    >agency? This is where Dembski starts to equivocate.
    >
    >i) On page 36, he writes: "The principal advantage of characterizing design
    >as the complement of regularity and chance is that it avoids committing
    >itself to a doctrine of intelligent agency. In practice, when we eliminate
    >regularity and chance, we typically do end up with an intelligent agent.
    >Thus, in practice, to infer design is typically to end up with a "designer"
    >in the classical sense." Dembski's use of the word "typically" strongly
    >implies that not all cases of design can be attributed to intelligent
    >agency, i.e. that design does not necessarily imply intelligent agency.
    >
    >ii) In Section 2.4, "From Design to Agency" (starting p. 62), Dembski
    >returns to this issue and attempts to establish a connection between design
    >and (intelligent) agency. I'm not going to address the issue of whether he
    >succeeds in doing so. All I'm interested in for now is whether he claims to
    >establish a *necessary* connection, i.e. that *all* cases of design can be
    >attributed to agency. The answer is that he does. In the final paragraph of
    >the section, he summarizes: "It's now clear why the Explanatory Filter is so
    >well suited for recognizing intelligent agency: for the Explanatory Filter
    >to infer design coincides with how we recognize intelligent agency
    >generally." And again: "It follows that the filter formalizes what we have
    >been doing right along when we recognize intelligent agents. The Explanatory
    >Filter pinpoints how we recognize intelligent agency" (p. 66).
    >
    >iii) In case anyone should try to reconcile the contradiction of (i) and
    >(ii) above by claiming that "typically" should be read as something like "to
    >all intents and purposes", let me point out that Dembski actually gives an
    >example of a situation where the EF (according to Dembski) indicates design
    >but where we cannot (according to Dembski) infer intelligent agency.
    >The example is on page 226, and I'll give details if anyone is interested.
    >But I think it's sufficient to note Dembski's conclusion: "Thus, even though
    >in practice inferring design is the first step in identifying an intelligent
    >agent, taken by itself design does not require that such an agent be
    >posited. The notion of design that emerges from the design inference must
    >not be confused with intelligent agency." (Note that the terms "design
    >inference" and "Explanatory Filter" appear to be synonymous. One might have
    >assumed that DI = EF + the mysterious extra criterion that allows us to
    >distinguish between simple design and intelligent agency, but the last
    >sentence quoted shows that this cannot be the case.)
    >
    >So, despite the claim to the contrary on page 66, it seems that the EF on
    >its own is not sufficient to identify intelligent agency. In that case, what
    >additional information is required? Dembski continues (p. 227): "When the
    >design inference infers design, its primary effect is to limit our
    >explanatory options. Only secondarily does it help identify a cause. To
    >investigate a cause we need to investigate the particulars of the situation
    >where design was inferred. Simply put, we need more details. In the Caputo
    >case, for instance, it seems clear enough what the causal story is, namely,
    >that Caputo cheated. In the probabilistically isomorphic case of Alice and
    >Bob, however, we may have to live without a causal explanation..." So, in
    >order to attribute the Caputo case to design, we need to know the causal
    >design story (he cheated). But the whole purpose of the design inference was
    >to give us a way of identifying design *without* knowing the causal story.
    >Dembski has just shot himself in the foot!
    >
    >Having seen Dembski demolish the whole raison d'être of his own EF, it
    >hardly seems worth discussing it any further. But I'd like to clear up
    >another point of confusion, namely the distinction between Dembski's
    >"regularity" and "chance" categories.
    >
    >It seems rather confusing to use the name "regularity" in opposition to
    >"chance", since even chance events exhibit regularities. What is a
    >probability distribution if not a regularity? When we look further, we see
    >that the events Dembski assigns to the regularity category are those which
    >"will (almost) always happen". In other words, those with a high probability
    >(a probability of 1 or nearly 1). In fact, Dembski later refers to them as
    >"highly probable" (HP) events. Dembski also refers to chance events as
    >events of "intermediate probability" (IP). So why draw a distinction between
    >HP and IP events? After all, the boundary between them is undefined (Dembski
    >never gives a boundary probability), and both categories of events are going
    >to ultimately suffer the same fate (be dismissed as not due to design). I
    >can see no theoretical reason for distinguishing between them, only a
    >practical one: when considering the nature of a particular event, we can
    >rule out design immediately if its probability is sufficiently high--there's
    >no need to waste time worrying about whether it's specified or not. From a
    >logical point of view, however, we might just as well lump these two
    >categories together. And, if we do that, what should we call the new
    >category? Well, we could call it "regularity", since, as I've already said,
    >even chance events have regularities. But this seems to presuppose that the
    >remaining category (design) can't also exhibit regularities, which seems to
    >me like an unwarranted assumption. In fact, the only sensible name that I
    >can think of is "not designed"!
    >
    >So, it seems that if the Explanatory Filter says anything at all, it amounts
    >to the following: once we've eliminated all the possible "not designed"
    >explanations, we must conclude that the event is due to design. In other
    >words, it's a tautology!
    >
    >The Inflationary Fallacy
    >-------------------------------
    >Although I said I wasn't going to discuss Dembski's Law of Small Probability
    >in detail, I'd like to address one issue related to it, not because it's an
    >important one, but just because it interests me.
    >
    >In justifying his "universal probability bound", Dembski argues that he
    >doesn't need to allow for the probabilistic resources of the multiple
    >universes which, according to some physicists, result from inflationary
    >big-bang cosmology or quantum mechanics. He writes: "...there is something
    >deeply unsatisfying about positing these entities simply because chance
    >requires it" (p. 215). It's rather parochial of Dembski to assume that
    >physicists have proposed these theories just to undermine probabilistic
    >arguments of the sort he wants to make. I'm sure that's not the case. And,
    >if these other universes really do exist, then we must face up to the
    >implications.
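    >
    >(For readers without the book, the bound itself is derived by multiplying
    >three estimates--a sketch using the figures as I recall them from TDI:
    >
    >    particles = 10**80  # elementary particles in the observable universe
    >    rate      = 10**45  # maximum state transitions per particle per second
    >    seconds   = 10**25  # a generous upper limit on the time available
    >    resources = particles * rate * seconds  # 10^150 possible events
    >    bound     = 1 / (2 * resources)         # i.e. 0.5 x 10^-150
    >
    >It's precisely this 10^150 that unlimited universes would inflate.)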
    >
    >While not accepting the possibility that such universes exist, Dembski
    >attempts to argue that, if they did, they would make the concept of chance
    >unintelligible: "But with unlimited probabilistic resources, we lose any
    >rational basis for eliminating chance". Leaving aside the question of
    >whether these multiple universe theories necessarily entail an *infinity* of
    >universes, this is an interesting point, and I think it betrays Dembski's
    >agenda. There would indeed no longer be any rational basis for rejecting a
    >chance explanation of the origin of intelligent life (Dembski's aim). No
    >matter how infinitesimally small the probability of life might be, it would
    >occur in an infinitesimal proportion of universes, and we wouldn't be
    >surprised to find ourselves in such a universe, because we couldn't
    >possibly find ourselves in any of the others.
    >
    >However, the same argument does not apply when we consider other chance
    >events which are not vital to our very existence. To take Dembski's example:
    >"Despite a life of unremitting charity and self-sacrifice, did Mother Teresa
    >in her last days experience a neurological accident that caused her to
    >become an ax murderer?" Well, if we assume that such an event had
    >infinitesimal but non-zero probability, then, yes, there will be universes
    >where that happened. But there's no particular reason why we should find
    >ourselves in one of those universes. Therefore we have every right to be
    >surprised, nay astounded, if our own Mother Teresa were revealed to be an ax
    >murderer, and to reject the chance hypothesis. It follows that there will be
    >*some* universes in which the chance hypothesis will be wrongly
    >rejected, but the probability of that happening in *our* universe is
    >infinitesimal.
    >
    >In short, I think Dembski is wrong to exclude multiple universes in
    >principle. However, I for one would also find it deeply unsatisfying if the
    >naturalistic explanation for life had to resort to multiple universes,
    >unless the arguments for those multiple universes were unimpeachable (and
    >I'm yet to be convinced that they are).
    >
    >Is There Design in Nature?
    >-------------------------------------
    >Interestingly, Dembski doesn't address this question in TDI, although it's
    >clear that that's what the book is leading up to, and some of his supporters
    >claim that it does indeed do so. For example, in "Scientists Find Evidence
    >of God" (http://www.arn.org/docs/insight499.htm) by Stephen Goode we find:
    >
    >"Dembski recently published his own addition to the ever-growing Intelligent
    >Design Movement, a closely argued book that he calls The Design Inference,
    >in which Dembski (whose impressive list of degrees led one friend to
    >describe him as "the perpetual student") brings to bear his knowledge of
    >symbolic logic and mathematics to argue in favor of design in nature."
    >
    >I suspect that many rank-and-file creationists are laying out their
    >hard-earned cash for this book in the expectation of finding an argument for
    >ID in nature. If so, they're wasting their money, because what they're
    >actually getting is mostly a technical treatise on statistics, which, valid
    >or not, is going to be of interest to very few people. By the way, here in
    >Britain the book costs £40, about 4 times the cost of a typical popular
    >science book. That's quite a lot of money to waste if it isn't what you
    >wanted!
    >
    >Anyway, given that Dembski doesn't attempt to apply the Explanatory Filter
    >to nature in this book, does he do it anywhere else? Well, I haven't been
    >able to find an application of the Explanatory Filter as such, but I've
    >found some on-line articles in which Dembski uses some related concepts
    >named "actual specified complexity" and "complex specified information"
    >(CSI). As far as I can tell, these two terms are synonymous with each other,
    >and a phenomenon is considered to possess these attributes if it results
    >from a specified event of small probability.
    >
    >So what does Dembski say about actual specified complexity? Well, nothing
    >very definitive:
    >
    >"Does nature exhibit actual specified complexity? This is the million dollar
    >question. Michael Behe's notion of irreducible complexity is purported to
    >be a case of actual specified complexity and to be exhibited in real
    >biochemical systems (cf. his book Darwin's Black Box). If such systems are,
    >as Behe claims, highly improbable and thus genuinely complex with respect
    >to the Darwinian mechanism of mutation and natural selection and if they
    >are specified in virtue of their highly specific function (Behe looks to
    >such systems as the bacterial flagellum), then a door is reopened for
    >design in science that has been closed for well over a century. Does nature
    >exhibit actual specified complexity? The jury is still out." Explaining
    >Specified Complexity
    >(http://www3.cls.org/~welsberr/evobio/evc/ae/dembski_wa/19990913_explaining_csi.html).
    >
    >Unfortunately, it seems that some of Dembski's followers haven't heard that
    >the jury is still out:
    >
    >"Drawing upon design-detection techniques in such diverse fields as forensic
    >science, artificial intelligence, cryptography, SETI, intellectual property
    >law, random number generation, and so forth, William Dembski argues that
    >specified complexity is a sufficient condition of inferring design and that,
    >moreover, such specified complexity is evident in nature." William Lane
    >Craig, (http://www.leaderu.com/offices/dembski/docs/bd-dibook.html).
    >
    >Just one more point. Dembski uses specified complexity as a measure of
    >improbability. But, as he himself says: "...probabilities are always
    >assigned in relation to chance hypotheses". So it's misleading to refer to
    >specified complexity as a singular measure. A phenomenon has a separate
    >measure of specified complexity for each of the possible chance hypotheses
    >that could explain it.
    >
    >Well, that will do for now. Comments would be welcomed!
    >
    >Richard Wein (Tich)
    >"The most formidable weapon against errors of every kind is reason. I have
    >never used any other, and I trust I never shall." -- Thomas Paine, "Age of
    >Reason"
    >


