Fwd: [METAVIEWS] 098: Intelligent Design Coming Clean, Part 3 of 4

From: Stephen E. Jones (sejones@iinet.net.au)
Date: Tue Nov 21 2000 - 17:52:58 EST


    Group

    Part 3 of Dembski's 4-part post.

    Steve

    ==================BEGIN FORWARDED MESSAGE==================
    5. Can Specified Complexity Even Have a Mechanism?

    What are the candidates here for something in nature that is
    nonetheless beyond nature? In my view the most promising candidate is
    specified complexity. The term "specified complexity" has been in use
    for about 30 years. The first reference to it with which I'm familiar
    is from Leslie Orgel's 1973 book _The Origins of Life_, where
    specified complexity is treated as a feature of biological systems
    distinct from inorganic systems. Richard Dawkins also employs the
    notion in _The Blind Watchmaker_, though he doesn't use the actual
    term (he refers to complex systems that are independently specified).
    In his most recent book, _The Fifth Miracle_, Paul Davies (p. 112)
    claims that life isn't mysterious because of its complexity per se
    but because of its "tightly specified complexity." Stuart Kauffman in
    his just published _Investigations_ (October 2000) proposes a "fourth
    law" of thermodynamics to account for specified complexity. Specified
    complexity is a form of information, though one richer than Shannon
    information, which focuses exclusively on the complexity of
    information without reference to its specification. A repetitive
    sequence of bits is specified without being complex. A random
    sequence of bits is complex without being specified. A sequence of
    bits representing, say, a progression of prime numbers will be both
    complex and specified. In _The Design Inference_ I show how inferring
    design is equivalent to identifying specified complexity
    (significantly, this means that intelligent design can be conceived
    as a branch of information theory).
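
    As a rough illustration of this distinction, the following Python
    sketch treats "complexity" as improbability under a uniform chance
    hypothesis and "specification" as a match with a pattern (here, the
    primes) given independently of the outcome. It is only an informal
    stand-in for the formal apparatus of _The Design Inference_, and the
    string length and trial count are arbitrary choices.

        import random

        def is_prime(k):
            return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

        N = 40
        # Independently given pattern: bit i is 1 exactly when i is prime.
        prime_pattern = tuple(1 if is_prime(i) else 0 for i in range(N))

        # Any particular N-bit string has probability 2**-N under fair coin
        # flips, so its "complexity" is N bits; a small Monte Carlo search
        # shows that chance essentially never lands on the prespecified
        # pattern.
        trials = 100_000
        hits = sum(tuple(random.randint(0, 1) for _ in range(N)) == prime_pattern
                   for _ in range(trials))
        print(f"matches in {trials} random trials: {hits} "
              f"(expected about {trials * 2 ** -N:.1e})")

        # By contrast, any single random string is just as improbable but
        # matches no pattern specified independently of it, and a repetitive
        # string such as (1, 0) * 20 matches a pattern that a simple lawlike
        # rule already accounts for.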

    Most scientists familiar with specified complexity think that the
    Darwinian mechanism is adequate to account for it once one has
    differential reproduction and survival (in _No Free Lunch_ I'll show
    that the Darwinian mechanism has no such power, though for now let's
    let it ride). But outside a context that includes replicators, no one
    has a clue how specified complexity occurs by naturalistic means.
    This is not to say there hasn't been plenty of speculation (e.g.,
    clay templates, hydrothermal vents, and hypercycles), but none of
    this speculation has come close to solving the problem. Unfortunately
    for naturalistic origin-of-life researchers, this problem seems not
    to be eliminable since the simplest replicators we know require
    specified complexity. Consequently Paul Davies suggests that the
    explanation of specified complexity will require some fundamentally
    new kinds of natural laws. But so far these laws are completely
    unknown. Kauffman's reference to a "fourth law," for instance, merely
    cloaks the scientific community's ignorance about the naturalistic
    mechanisms supposedly responsible for the specified complexity in
    nature.

    Van Till agrees that specified complexity is an open problem for
    science. At a recent symposium on intelligent design at the
    University of New Brunswick sponsored by the Center for Theology and
    the Natural Sciences (15-16 September 2000), Van Till and I took part
    in a panel discussion. When I asked him how he accounts for specified
    complexity in nature, he called it a mystery that he hopes further
    scientific inquiry will resolve. But resolve in what sense? On Van
    Till's Robust Formational Economy Principle, there must be some causal
    mechanism in nature that accounts for any instance of specified
    complexity. We may not know it and we may never know it, but surely
    it is there. For the design theorist to invoke a non-natural
    intelligence is therefore out of bounds. But what happens once some
    causal mechanism is found that accounts for a given instance of
    specified complexity? Something that's specified and complex is by
    definition highly improbable with respect to all causal mechanisms
    currently known. Consequently, for a causal mechanism to come along
    and explain something that previously was regarded as specified and
    complex means that the item in question is in fact no longer
    specified and complex with respect to the newly found causal
    mechanism. The task of causal mechanisms is to render probable what
    otherwise seems highly improbable. Consequently, the way naturalism
    explains specified complexity is by dissolving it. Intelligent design
    makes specified complexity a starting point for inquiry. Naturalism
    regards it as a problem to be eliminated. (That's why, for instance,
    Richard Dawkins wrote _Climbing Mount Improbable_. To climb Mount
    Improbable, one needs to find a gradual route that breaks a horrendous
    improbability into a sequence of manageable probabilities, each of
    which is easily bridged by a natural mechanism.)
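
    To see how such a gradual route works in miniature, consider a sketch
    in the spirit of Dawkins's "weasel" illustration from _The Blind
    Watchmaker_. The target phrase, mutation rate, and population size
    below are illustrative choices only, not anyone's model of biology.

        import random
        import string

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        ALPHABET = string.ascii_uppercase + " "

        def mutate(parent, rate=0.05):
            # Copy the parent, letting each character change with small probability.
            return "".join(c if random.random() > rate else random.choice(ALPHABET)
                           for c in parent)

        def score(candidate):
            return sum(a == b for a, b in zip(candidate, TARGET))

        # A single blind draw of this 28-character phrase has probability
        # 27**-28 (roughly 10**-40), yet keeping the best of 100 mutants each
        # generation typically reaches the target within a few hundred
        # generations.
        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while parent != TARGET:
            generation += 1
            parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)
        print(f"reached target after {generation} generations")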

    Lord Kelvin once remarked, "If I can make a mechanical model, then I
    can understand; if I cannot make one, I do not understand."
    Repeatedly, critics of design have asked design theorists to provide
    a causal mechanism whereby a non-natural designer inputs specified
    complexity into the world. This question presupposes a self-defeating
    conception of design and tries to force design onto a Procrustean bed
    sure to kill it. _Intelligent design is not a mechanistic theory!_
    Intelligent design regards Lord Kelvin's dictum about mechanical
    models not as a sound regulative principle for science but as a
    straitjacket that artificially constricts science. SETI researchers
    are not invoking a mechanism when they explain a radio transmission
    from outer space as the result of an extraterrestrial intelligence.
    To ask for a mechanism to explain the effect of an intelligence
    (leaving aside derived intentionality) is like Aristotelians asking
    Newton what it is that keeps bodies in rectilinear motion at a
    constant velocity (for Aristotle the crucial distinction was between
    motion and rest; for Newton it was between accelerated and
    unaccelerated motion). This is simply not a question that arises
    within Newtonian mechanics. Newtonian mechanics proposes an entirely
    different problematic from Aristotelian physics. Similarly,
    intelligent design proposes a far richer problematic than science
    committed to naturalism. Intelligent design is fully capable of
    accommodating mechanistic explanations. Intelligent design has no
    interest in dismissing mechanistic explanations. Such explanations
    are wonderful as far as they go. But they only go so far, and they
    are incapable of accounting for specified complexity.

    In rejecting mechanical accounts of specified complexity, design
    theorists are not arguing from ignorance. Arguments from ignorance
    have the form "Not X, therefore Y." Design theorists are not saying
    that for a given natural object exhibiting specified complexity, all
    the natural causal mechanisms so far considered have failed to
    account for it and therefore it had to be designed. Rather they are
    saying that the specified complexity exhibited by a natural object
    can be such that there are compelling reasons to think that no
    natural causal mechanism is capable of producing it. Usually these
    "compelling reasons" take the form of an argument from contingency in
    which the object exhibiting specified complexity is compatible with
    but in no way determined by the natural laws relevant to its
    occurrence. For instance, for polynucleotides and polypeptides there
    are no physical laws that account for why one nucleotide base is next
    to another or one amino acid is next to another. The laws of
    chemistry allow any possible sequence of nucleotide bases (joined
    along a sugar-phosphate backbone) as well as any possible sequence of
    L-amino acids (joined by peptide bonds).
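
    The combinatorics behind this contingency are easy to state. The
    short Python fragment below simply counts the sequences the laws of
    chemistry permit for chains of an arbitrarily chosen length of 100
    (which sequences are functional is, of course, a separate question):

        from math import log10

        dna_sequences = 4 ** 100      # 100-base DNA sequences (A, C, G, T)
        protein_chains = 20 ** 100    # 100-residue chains of the 20 L-amino acids

        print(f"100-base DNA sequences:      about 10^{log10(dna_sequences):.0f}")
        print(f"100-residue protein chains:  about 10^{log10(protein_chains):.0f}")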

    Design theorists are attempting to make the same sort of argument
    against mechanistic accounts of specified complexity that modern
    chemistry makes against alchemy. Alchemy sought to transform base
    metals into precious ones using very limited means like furnaces and
    potions (though not particle accelerators). Now we rightly do not
    regard the contemporary rejection of alchemy as an argument from
    ignorance. For instance, we don't charge the National Science
    Foundation with committing an argument from ignorance for refusing to
    fund alchemical research. Now it's evident that not every combination
    of furnaces and potions has been tried to transform lead into gold.
    But that's no reason to think that some combination of furnaces and
    potions might still constitute a promising avenue for effecting the
    desired transformation. We now know enough about atomic physics to
    preclude this transformation. So too, we are fast approaching the
    point where we can show that transforming a biological system that
    doesn't exhibit a given instance of specified complexity (say, a
    bacterium without a flagellum) into one that does (say, a bacterium
    with a flagellum) cannot be accomplished by purely natural means but
    also requires intelligence.

    There are a lot of details to be filled in, and design theorists are
    working overtime to fill them in. What I'm offering here is not the
    details but an overview of the design research program as it tries to
    justify the inability of natural mechanisms to account for specified
    complexity. This part of its program is properly viewed as belonging
    to science. Science is in the business of establishing not only the
    causal mechanisms capable of accounting for an object having certain
    characteristics but also the inability of causal mechanisms to
    account for such an object, or what Stephen Meyer calls "proscriptive
    generalizations." There are no causal mechanisms that can account for
    perpetual motion machines. This is a proscriptive generalization.
    Perpetual motion machines violate the second law of thermodynamics
    and can thus on theoretical grounds be eliminated. Design theorists
    are likewise offering in principle theoretical objections for why the
    specified complexity in biological systems cannot be accounted for in
    terms of purely natural causal mechanisms. They are seeking to
    establish proscriptive generalizations. Proscriptive generalizations
    are not arguments from ignorance.

    Assuming such an in-principle argument can be made (and for the
    sequel I will assume it can), the design theorist's inference to
    design can no longer be considered an argument from ignorance. With
    such an in-principle argument in hand, not only has the design
    theorist excluded all natural causal mechanisms that might account
    for the specified complexity of a natural object, but the design
    theorist has also excluded all explanations that might in turn
    exclude design. The design inference is therefore not purely an
    eliminative argument, as is so frequently charged. Specified
    complexity presupposes that the entire set of relevant chance
    hypotheses has first been identified. This takes considerable
    background knowledge. What's more, it takes considerable background
    knowledge to come up with the right pattern (i.e., specification) for
    eliminating all those chance hypotheses and thus for inferring
    design. Design inferences that infer design by identifying specified
    complexity are therefore not purely eliminative. They do not merely
    exclude, but they exclude from an exhaustive set of hypotheses in
    which design is all that remains once the inference has done its work
    (this is not to say that the set is logically exhaustive; rather it
    is exhaustive with respect to the inquiry in question -- that's all
    we can ever do in science).

    It follows that, contrary to the frequently leveled charge that
    design is untestable, design is in fact eminently testable. Indeed,
    specified complexity tests for design. Specified complexity is a
    well-defined statistical notion. The only question is whether an
    object in the real world exhibits specified complexity. Does it
    correspond to an independently given pattern and is the event
    delimited by that pattern highly improbable (i.e., complex)? These
    questions admit a rigorous mathematical formulation and are readily
    applicable in practice. Not only is design eminently testable, but to
    deny that design is testable commits the fallacy of _petitio
    principii_, that is, begging the question or arguing in a circle
    (Robert Larmer developed this criticism effectively at the New
    Brunswick symposium adverted to earlier). It may well be that the
    evidence to justify that a designer acted to bring about a given
    natural structure is insufficient. But to claim that there could
    never be enough evidence to justify that a designer acted to bring
    about a given natural structure is insupportable. The only way to
    justify the latter claim is by imposing on science a methodological
    principle that deliberately excludes design from natural systems, to
    wit, methodological naturalism. But to say that design is not
    testable because we've defined it out of existence is hardly
    satisfying or legitimate. Darwin claimed to have tested for design in
    biology and found it wanting. Design theorists are now testing for
    design in biology afresh and finding that biology is chock-full of
    design.

    Specified complexity is only a mystery so long as it must be
    explained mechanistically. But the fact is that we attribute
    specified complexity to intelligences (and therefore to entities that
    are not mechanisms) all the time. The reason that attributing
    specified complexity to intelligence for biological systems is
    regarded as problematic is because such an intelligence would in all
    likelihood have to be unembodied (though strictly speaking this is
    not required of intelligent design -- the designer could in principle
    be an embodied intelligence, as with the panspermia theories). But
    how does an unembodied intelligence interact with natural objects and
    get them to exhibit specified complexity? We are back to Van Till's
    problem of extra-natural assembly.

    6. How Can an Unembodied Intelligence Interact with the Natural World?

    There is in fact no conceptual difficulty in an unembodied
    intelligence interacting coherently with the natural world. We are
    not in the situation of Descartes seeking a point of contact between
    the material and the spiritual at the pineal gland. For Descartes the
    physical world consisted of extended bodies that interacted only via
    direct contact. Thus for a spiritual dimension to interact with the
    physical world could only mean that the spiritual caused the physical
    to move. In arguing for a substance dualism in which human beings
    consist of both spirit and matter, Descartes therefore had to argue
    for a point of contact between spirit and matter. He settled on the
    pineal gland because it was the one place in the brain where symmetry
    was broken and where everything seemed to converge (most parts of the
    brain have right and left counterparts).

    Although Descartes's argument doesn't work, the problem it tries to
    solve is still with us. When I attended a Santa Fe symposium
    sponsored by the Templeton Foundation in October 1999, Paul Davies
    expressed his doubts about intelligent design this way: "At some
    point God has to move the particles." The physical world consists of
    physical stuff, and for a designer to influence the arrangement of
    physical stuff seems to require that the designer intervene in,
    meddle with, or in some way coerce this physical stuff. What's wrong
    with this picture of supernatural action by a designer? The problem
    is not a flat contradiction with the results of modern science. Take
    for instance the law of conservation of energy. Although the law is
    often stated in the form "energy can neither be created nor
    destroyed," in fact all we have empirical evidence for is the much
    weaker claim that "in an isolated system energy remains constant."
    Thus a supernatural action that moves particles or creates new ones
    is beyond the power of science to disprove because one can always
    claim that the system under consideration was not isolated.

    There is no logical contradiction here. Nor is there necessarily a
    god-of-the-gaps problem here. It's certainly conceivable that a
    supernatural agent could act in the world by moving particles so that
    the resulting discontinuity in the chain of physical causality could
    never be removed by appealing to purely physical forces. The "gaps"
    in the god-of-the-gaps objection are meant to denote gaps of
    ignorance about underlying physical mechanisms. But there's no reason
    to think that all gaps must give way to ordinary physical
    explanations once we know enough about the underlying physical
    mechanisms. The mechanisms may simply not exist. Some gaps might
    constitute ontic discontinuities in the chain of physical causes and
    thus remain forever beyond the capacity of physical mechanisms.

    Although a non-physical designer who "moves particles" is not
    logically incoherent, such a designer nonetheless remains problematic
    for science. The problem is that natural causes are fully capable of
    moving particles. Thus for a designer also to move particles can only
    seem like an arbitrary intrusion. The designer is merely doing
    something that nature is already doing, and even if the designer is
    doing it better, why didn't the designer make nature better in the
    first place so that it can move the particles better? We are back to
    Van Till's Robust Formational Economy Principle.

    But what if the designer is not in the business of moving particles
    but of imparting information? In that case nature moves its own
    particles, but an intelligence nonetheless guides the arrangement
    which those particles take. A designer in the business of moving
    particles accords with the following world picture: The world is a
    giant billiard table with balls in motion, and the designer
    arbitrarily alters the motion of those balls, or even creates new
    balls and then interposes them among the balls already present. On
    the other hand, a designer in the business of imparting information
    accords with a very different world picture: In that case the world
    becomes an information processing system that is responsive to novel
    information. Now the interesting thing about information is that it
    can lead to massive effects even though the energy needed to
    represent and impart the information can become infinitesimal (Frank
    Tipler and Freeman Dyson have made precisely such arguments, namely,
    that arbitrarily small amounts of energy are capable of information
    processing -- in fact capable of sustaining information processing
    indefinitely). For instance, the energy requirements to store and
    transmit a launch code are minuscule, though getting the right code
    can make the difference between starting World War III and
    maintaining peace.

    When a system is responsive to information, the dynamics of that
    system will vary sharply with the information imparted and will
    largely be immune to purely physical factors (e.g., mass, charge, or
    kinetic energy). A medical doctor who utters the words "Your son is
    going to die" might trigger a heart attack in a troubled father
    whereas uttering the words "Your son is going to live" might prevent
    it. Moreover, it doesn't much matter how loudly the doctor utters one
    sentence or the other or what bodily gestures accompany the
    utterance. Such physical factors are largely irrelevant. Consider
    another example. After killing the Minotaur on Crete and setting sail
    back for Athens, Theseus forgot to substitute a white flag for a
    black flag. Theseus and his father Aegeus had agreed that a black
    flag would signify that Theseus had been killed by the Minotaur
    whereas a white flag would signify his success in destroying it.
    Seeing the black flag hoisted on the ship at a distance, Aegeus
    committed suicide. Or consider yet another nautical example, in this
    case a steersman who guides a ship by controlling its rudder. The
    energy imparted to the rudder is minuscule compared to the energy
    inherent in the ship's motion, and yet the rudder guides its motion.
    It was this analogy that prompted Norbert Wiener to introduce the
    term "cybernetics," which is derived etymologically from the Greek
    and means steersman. It is no coincidence that in his text on
    cybernetics, Wiener writes about information as follows
    (_Cybernetics_, 2nd ed., p. 132): "Information is information, not
    matter or energy. No materialism which does not admit this can
    survive at the present day."

    How much energy is required to impart information? We have sensors
    that can detect quantum events and amplify them to the macroscopic
    level. What's more, the energy in quantum events is proportional to
    frequency or inversely proportional to wavelength. And since there is
    no upper limit to the wavelength of, for instance, electromagnetic
    radiation, there is no lower limit to the energy required to impart
    information. In the limit, a designer could therefore impart
    information into the universe without inputting any energy at all.
    Whether the designer works through quantum mechanical effects is not
    ultimately the issue here. Certainly quantum mechanics is much more
    hospitable to an information processing view of the universe than the
    older mechanical models. All that's needed, however, is a universe
    whose constitution and dynamics are not reducible to deterministic
    natural laws. Such a universe will produce random events and thus
    have the possibility of producing events that exhibit specified
    complexity (i.e., events that stand out against the backdrop of
    randomness). Now as I've already noted, specified complexity is a
    form of information, albeit a richer form than Shannon information,
    which trades purely in complexity (cf. chapter 6 of my book
    _Intelligent Design_ as well as my forthcoming _No Free Lunch_).
    What's more, as I've argued in _The Design Inference_, specified
    complexity (or specified improbability as I call it there -- the
    concepts are the same) is a reliable empirical marker of actual
    design. Now the beauty is that we live in a non-deterministic
    universe that is open to novel information, that exhibits specified
    complexity, and that therefore gives clear evidence of a designer who
    has imparted it with information.
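
    The energy-wavelength point a few sentences back is just the standard
    relation for photon energy, E = h*c/wavelength, and nothing in it is
    peculiar to intelligent design; the fragment below simply evaluates
    it over a range of wavelengths:

        PLANCK = 6.626e-34       # Planck's constant, joule-seconds
        LIGHT_SPEED = 2.998e8    # speed of light, meters per second

        def photon_energy_joules(wavelength_m):
            # E = h * c / wavelength: the energy per photon falls without
            # limit as the wavelength grows without limit.
            return PLANCK * LIGHT_SPEED / wavelength_m

        for wavelength in (1e-9, 1e-3, 1e3, 1e9):
            print(f"wavelength = {wavelength:.0e} m  ->  "
                  f"E = {photon_energy_joules(wavelength):.2e} J")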

    It's at this point that critics of design throw up their hands in
    disgust and charge that design theorists are merely evading the issue
    of how a designer introduces design into the world. From the design
    theorist's perspective, however, there is no evasion here. Rather
    there is a failure of imagination on the part of the critic (and this
    is not meant as a compliment). In asking for a mechanistic account of
    how the designer imparts information and thereby introduces design,
    the critic of design is like a physicist trained only in Newtonian
    mechanics and desperately looking for a mechanical account of how a
    single particle like an electron can go through two slits
    simultaneously to produce a diffraction pattern on a screen (cf. the
    famous double-slit experiment). On a classical Newtonian view of
    physics, only a mechanical account in terms of sharply localized and
    individuated particles makes sense. And yet nature is unwilling to
    oblige any such mechanical account of the double-slit experiment
    (note that the Bohmian approach to quantum mechanics merely shifts
    what's problematic in the classical view to Bohm's quantum
    potential). Richard Feynman was right when he remarked that no one
    understands quantum mechanics. The "mechanics" in "quantum mechanics"
    is nothing like the "mechanics" in "Newtonian mechanics." There are
    no analogies that carry over from the dynamics of macroscopic objects
    to the quantum level. In place of understanding we must content
    ourselves with knowledge. We don't _understand_ how quantum mechanics
    works, but we _know_ that it works. So too, we don't _understand_ how
    a designer imparts information into the world, but we _know_ that a
    designer imparts information.

    It follows that Howard Van Till's riddle to design theorists is
    ill-posed. Van Till asks whether the design that design theorists
    claim to find in natural systems is strictly mind-like (i.e.,
    conceptualized by a mind to accomplish a purpose) or also hand-like
    (i.e., involving a coercive extra-natural mode of assembly). As with
    many forced choices, Van Till has ignored a _tertium quid_, namely,
    that design can also be word-like (i.e., imparting information to a
    receptive medium). In the liturgies of most Christian churches, the
    faithful pray that God keep them from sinning in "thought, word, and
    deed." Each element of this tripartite distinction is significant.
    Thoughts left to themselves are inert and never accomplish anything
    outside the mind of the individual who thinks them. Deeds, on the
    other hand, are coercive, forcing physical stuff to move now this way
    and now that way (it's no accident that the concept of _force_ plays
    such a crucial role in the rise of modern science). But between
    thoughts and deeds are words. Words mediate between thoughts and
    deeds. Words give expression to thoughts and thus bring the self into
    contact with the other. On the other hand, words by themselves are
    never coercive (without deeds to back up words, words lose their
    power to threaten). Nonetheless, words have the power to engender
    deeds not by coercion but by persuasion. Process and openness-of-God
    theologians will no doubt find these observations congenial.
    Nonetheless, Christian theologians of a more traditional bent can
    readily sign off on them as well.
    ===================END FORWARDED MESSAGE===================

    --------------------------------------------------------------------------
    "In the final analysis, it is not any specific scientific evidence that convinces
    me that Darwinism is a pseudoscience that will collapse once it becomes
    possible for critics to get a fair hearing. It is the way the Darwinists argue
    their case that makes it apparent that they are afraid to encounter the best
    arguments against their theory. A real science does not employ propaganda
    and legal barriers to prevent relevant questions from being asked, nor does
    it rely on enforcing rules of reasoning that allow no alternative to the
    official story. If the Darwinists had a good case to make, they would
    welcome the critics to an academic forum for open debate, and they would
    want to confront the best critical arguments rather than to caricature them
    as straw men. Instead they have chosen to rely on the dishonorable
    methods of power politics." (Johnson P.E., "The Wedge of Truth: Splitting
    the Foundations of Naturalism," Intervarsity Press: Downers Grove IL.,
    2000, p.141)
    Stephen E. Jones | Ph. +61 8 9448 7439 | http://www.iinet.net.au/~sejones
    --------------------------------------------------------------------------


