RE: The Aphenomenon of Abiogenesis

From: Josh Bembenek (jbembe@hotmail.com)
Date: Fri Aug 01 2003 - 10:17:35 EDT

    Glenn-

    Your endless diatribe has become quite intolerable to me. Rather than
    continue arguing each point of a misconstrued discussion of ID's hypothesis,
    I'll simply cut and paste an article from Dembski. Perhaps a second reading
    will encourage you to rethink your position, and Dembski's own words can
    correct the errors you attribute to him for the benefit of any innocent
    bystanders taken in by your impossible straw men.

    Posted by William A. Dembski, 29 August 2002, 21:26

    What Sort of Property is Specified Complexity?
    By William A. Dembski

    NOTE: The following essay is a chapter for a book I'm writing. As usual, I
    hammer away at the bacterial flagellum. Critics of mine may wonder when I'm
    going to let it go. I'll let go once my critics admit that it represents an
    insoluble problem on naturalistic terms. Also, I say in this essay that
    there is no evidence for an indirect Darwinian pathway to the bacterial
    flagellum. Sorry, but the type III secretory system doesn't cut it. In fact,
    Milt Saier's work at UCSD suggests that the type III secretory system, if
    anything, evolved from the flagellum. But even if it could be shown that the
    type III system predated the flagellum, it would at best represent one
    possible step in the indirect Darwinian evolution of the bacterial
    flagellum. To claim otherwise is like saying we can travel by foot from Los
    Angeles to Tokyo because we've discovered the Hawaiian Islands. Evolutionary
    biology needs to do better than that. At any rate, the aim of this essay is
    not to rehash the flagellum, but to come to terms with what sort of property
    specified complexity is. I hope this essay stimulates discussion on that
    question.

    Specified complexity is a property that things can possess or fail to
    possess. Yet in what sense is specified complexity a property? Properties
    come in different varieties. There are objective properties that obtain
    irrespective of who attributes them. Water is such a property. There are
    also subjective properties that depend crucially on who attributes them.
    Beauty is such a property. To be sure, beauty may not be entirely in the eye
    of the beholder (there may be objective aspects to it). But beauty cannot
    make do without the eye of some beholder.

    The distinction between objective and subjective properties has a long
    tradition in philosophy. With Descartes, that distinction became important
    also in science. Descartes made this distinction in terms of primary and
    secondary qualities. For Descartes material objects had one primary quality,
    namely, extension. The other properties of matter, its color or texture for
    instance, were secondary qualities and simply described the effect that
    matter, due to the various ways it was configured or extended, had on us as
    perceivers. Descartes's distinction of primary and secondary qualities has
    required some updating in light of modern physics. Color, for instance, is
    nowadays treated as the wavelength of electromagnetic radiation and
    regarded as a primary quality (though the subjective experience of color is
    still regarded as a secondary quality). Even so, the idea that some
    properties are primary or objective and others are secondary or subjective
    remains with us, especially in the sciences.

    The worry, then, is that specified complexity may be entirely a subjective
    property, with no way of grasping nature at its ontological joints and thus
    no way of providing science with a valid tool for inquiry. This worry,
    though misplaced, needs to be addressed. The first thing we need to see is
    that the objective-subjective distinction is not as neat and dichotomous as
    we might at first think. Consider the following three properties: X is
    water, X is married, and X is beautiful (the "X" here denotes a place-holder
    to which the properties apply). X is water, as already noted, is objective.
    Anybody around the world can take a sample of some item in question, subject
    it to a chemical test, and determine whether its composition is that of
    water (i.e., H2O). On the other hand, X is beautiful seems thoroughly
    subjective. Even if objective standards of beauty reside in the mind of God
    or in a Platonic heaven, in practice people differ drastically in their
    assessments of beauty. Indeed, no single object is universally admired as
    beautiful. If specified complexity is subjective in the same way that beauty
    is, then specified complexity cannot be a useful property for science.

    But what about X is married? It certainly is an objective fact about the
    world whether you or I are married. And yet there is an irreducibly
    subjective element to this property as well: Unlike water, which is simply
    part of nature and does not depend for its existence on human subjects,
    marriage is a social institution that depends intimately for its existence
    on human subjects. Whereas water is purely objective and beauty purely
    subjective, marriage is at once objective and subjective. This confluence of
    objectivity and subjectivity for social realities like money, marriage, and
    mortgages is the topic of John Searle's The Construction of Social Reality.
    Social realities are objective in the sense that they command
    intersubjective agreement and express facts (rather than mere opinions)
    about the social world we inhabit. But they exist within a social matrix,
    which in turn presupposes subjects and therefore entails subjectivity.

    Searle therefore supplements the objective-subjective distinction with an
    ontological-epistemic distinction. Accordingly, water is ontologically
    objective -- it depends on the ontological state of nature and obtains
    irrespective of humans or other subjects. Alternatively, beauty is
    epistemically subjective -- it depends on the epistemic state of humans or
    other subjects, and its assessment is free to vary from subject to subject.
    Properties reflecting social realities like money, marriage, and mortgages,
    on the other hand, are ontologically subjective but epistemically objective.
    Thus marriage is ontologically subjective in that it depends on the social
    conventions of human subjects. At the same time, marriage is epistemically
    objective -- any dispute about somebody being married can be objectively
    settled on the basis of those social conventions.

    How do Searle's categories apply to specified complexity? They apply in two
    parts, corresponding to the two parts that make up specified complexity.
    Specified complexity involves a specification, which is a pattern that is
    conditionally independent of some observed outcome. Specified complexity
    also involves a measure of complexity, which calculates the improbability of
    the event associated with that pattern. Think of an arrow landing in a
    target. The target is an independently given pattern and therefore a
    specification. But the target also represents an event, namely, the arrow
    landing in the target, and that event has a certain probability.
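
    To make the two parts concrete, here is a minimal Python sketch of the
    arrow-and-target picture (my illustration, not Dembski's; the 1-D wall,
    the 1 percent target width, and the trial count are arbitrary choices):

        import random

        def arrow_hits(lo, hi):
            # One arrow lands uniformly at random on a unit wall (1-D).
            return lo <= random.random() < hi

        # The target [0.49, 0.50) is the independently given pattern, i.e.
        # the specification. It covers 1 percent of the wall, so the event
        # "the arrow lands in the target" has probability 0.01; the smaller
        # the target, the more improbable (complex) the matching event.
        trials = 100_000
        hits = sum(arrow_hits(0.49, 0.50) for _ in range(trials))
        print(hits / trials)  # close to 0.01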

    Specifications, by being conditionally independent of the outcomes they
    describe, are, within Searle's scheme, epistemically objective. Moreover,
    once a specification is given and the event it represents is identified, the
    probability of that outcome is ontologically objective. Consider, for
    instance, a quantum mechanical experiment in which polarized light is sent
    through a polaroid filter whose angle of polarization is at forty-five
    degrees to that of the light. Imagine that the light is sent through the
    filter photon by photon. According to quantum mechanics, the probability of
    any photon getting through the filter is 50 percent and each photon's
    probability of getting through is probabilistically independent of the
    others. This quantum mechanical experiment therefore models the flipping of
    a fair coin (heads = photon passes through the filter; tails = photon
    doesn't pass through the filter), though without the possibility of any
    underlying determinism undermining the randomness (assuming quantum
    mechanics delivers true randomness).
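
    A minimal Python sketch of this model (mine, not part of the essay;
    ordinary pseudorandomness here stands in for quantum randomness, which
    is precisely the one feature a simulation cannot reproduce):

        import random

        def photon_passes():
            # Each photon independently clears the 45-degree filter with
            # probability 1/2 -- the quantum mechanical fair coin.
            return random.random() < 0.5

        # Twenty photons as a bit string: 1 = passed, 0 = blocked.
        print("".join("1" if photon_passes() else "0" for _ in range(20)))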

    Suppose now that we represent a photon passing through the filter with a "1"
    and a photon not passing through the filter with a "0." Consider the
    specification 11011101111101111111..., namely, the sequence of prime numbers
    in unary notation (each prime p is written as p consecutive 1s, with a
    single 0 separating successive primes). For definiteness let's consider
    the prime numbers between 2 and
    101. This representation of prime numbers is ontologically subjective in the
    sense that it depends on human subjects who know about arithmetic (and
    specifically about prime numbers and unary notation). It is also
    epistemically objective inasmuch as arithmetic is a universal aspect of
    rationality. Moreover, once this specification of primes is in place, the
    precise probability of a sequence of photons passing through the filter and
    matching it is ontologically objective. Indeed, that probability will depend
    solely on the inherent physical properties of photons and polaroid filters.
    Specified complexity therefore is at once epistemically objective (on the
    specification side) and ontologically objective (on the complexity side once
    a specification is in hand).
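
    Both the specification and its improbability can be written down
    explicitly. The following sketch (my illustration, assuming the unary
    convention just described) builds the bit string for the primes from 2
    to 101 and evaluates the probability that a run of independent
    fair-coin photons matches it exactly:

        import math

        def primes_up_to(n):
            # Trial division; entirely adequate for n = 101.
            return [p for p in range(2, n + 1)
                    if all(p % d for d in range(2, int(p ** 0.5) + 1))]

        # Each prime p is written as p consecutive 1s, with a single 0
        # separating successive primes: 2, 3, 5, ... -> 1101110111110...
        spec = "0".join("1" * p for p in primes_up_to(101))
        print(spec[:20])  # 11011101111101111111
        print(len(spec))  # 1186 bits in all

        # Probability that 1186 independent fair-coin photons match the
        # whole string: 2^-1186, far too small for a float, so report the
        # base-10 exponent instead (about 1e-357).
        print(f"2^-{len(spec)} = about 1e-{round(len(spec) * math.log10(2))}")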

    Specified complexity therefore avoids the charge of epistemic subjectivity,
    which, if true, would relegate specified complexity to the whim, taste, or
    opinion of subjects. Yet specified complexity does not merely avoid this
    charge. More positively, it also displays certain positive virtues of
    objectivity: specifications are epistemically objective and measures of
    complexity based on those specifications are ontologically objective. Is
    this enough to justify specified complexity as a legitimate tool for
    science? To answer this question, let's consider what could go awry with
    specified complexity to prevent it from functioning as a legitimate tool
    within science.

    Specifications are not the problem. True, specifications, though
    epistemically objective, are not ontologically objective. The failure of
    specifications to be ontologically objective, however, does not prevent them
    from playing a legitimate role in the natural sciences. In biology,
    specifications are independently given functional patterns that describe the
    goal-directed behavior of biological systems. A bacterial flagellum, for
    instance, is an outboard rotary motor on the backs of certain bacteria for
    propelling them through their watery environments. This functional
    description is epistemically objective but on any naturalistic construal of
    science must be regarded as ontologically subjective (if nature, as
    naturalism requires, is a closed nexus of undirected natural causes, then
    nature knows nothing about such functional descriptions). And yet biology as
    a science would be impossible without such functional descriptions.
    Functional language is indispensable to biology, and specifications are one
    way to clarify and make precise that language.

    Any problem with justifying specified complexity's legitimacy within science
    therefore resides elsewhere. Indeed, the problem resides with complexity.
    Although complexity becomes ontologically objective once a specification is
    in place, our assessment of complexity is just that -- our assessment. And
    the problem with assessments is that they can be wrong. Specifications are
    under our control. We formulate specifications on the basis of background
    knowledge. The complexity denoted by specified complexity, on the other
    hand, resides in nature. This form of complexity is a measure of
    probability, and these probabilities depend on the way nature is
    constituted. There is an objective fact of the matter what these
    probabilities are. But our grasp of these probabilities can be less than
    adequate. The problem, then, with specified complexity legitimately entering
    science is bridging complexity as it exists in nature with our assessments
    of that complexity. Alternatively, the problem is not with specified
    complexity being a valid property for science but with our ability to
    justify any particular attribution of specified complexity to particular
    objects or events in nature.

    To illustrate what's at stake, consider an analogy from mathematics. There
    exist numbers whose decimal expansions are such that every single digit
    between 0 and 9 has relative frequency exactly 10 percent as the decimal
    expansion becomes arbitrarily large (or, as mathematicians would say, "in
    the limit" each single digit has exactly a 10 percent occurrence). The
    simplest such number is perhaps .01234567890123456789... where "0123456789"
    just keeps repeating over and over again. Let's call such numbers regular
    (mathematicians typically prefer a stronger notion of regularity called
    normality, which characterizes the limiting behavior of all finite strings
    of digits and not merely that of single digits; for the purposes of this
    example, however, regularity suffices). The property X is regular therefore
    applies to this number. Regularity is clearly a legitimate mathematical
    property -- it is perfectly well-defined and numbers either are regular or
    fail to be regular.
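
    As an illustration (mine, with an arbitrary segment length; regularity
    proper concerns the infinite limit, which no finite tally can settle),
    here is how one would check single-digit frequencies in an initial
    segment of a decimal expansion:

        from collections import Counter

        def digit_frequencies(decimal_digits):
            # Relative frequency of each digit 0-9 in a string of digits.
            counts = Counter(decimal_digits)
            return {d: counts[d] / len(decimal_digits) for d in "0123456789"}

        # The repeating pattern behind .01234567890123456789...
        segment = "0123456789" * 1000
        print(digit_frequencies(segment))  # exactly 0.1 for every digit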

    But suppose next we want to determine whether the number pi is regular (pi
    equals the ratio of the circumference of a circle to its diameter). Pi has a
    nonrepeating decimal expansion. Over the years mathematicians and computer
    scientists have teamed up to compute as many digits of pi as mathematical
    methods and computer technology permit. The current record stands at
    206,158,430,000 decimal digits of pi and is due to the Japanese researchers
    Yasumasa Kanada and Daisuke Takahashi (the currently standard 40 gigabyte
    hard drive is too small to store this many decimal digits). Each of the
    single digits between 0 and 9 has relative frequency roughly 10 percent
    among these 200 billion decimal digits of pi. Is pi therefore regular?
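
    The frequency check can be reproduced on a small scale. The sketch
    below (my own illustration; the record computations used far more
    sophisticated algorithms) generates a few thousand digits of pi with
    Machin's formula in exact integer arithmetic and tallies them:

        from collections import Counter

        def arctan_inv(x, scale):
            # arctan(1/x) in fixed point: sum of (-1)^k / ((2k+1) x^(2k+1)).
            total, term, k = 0, scale // x, 0
            while term:
                sign = 1 if k % 2 == 0 else -1
                total += sign * (term // (2 * k + 1))
                term //= x * x
                k += 1
            return total

        def pi_digits(n):
            # Machin's formula, pi = 4*(4*arctan(1/5) - arctan(1/239)),
            # carried to n digits plus 10 guard digits.
            scale = 10 ** (n + 10)
            pi_fixed = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
            return str(pi_fixed)[:n]  # "31415926..."

        digits = pi_digits(5000)
        counts = Counter(digits)
        print({d: counts[d] / len(digits) for d in "0123456789"})
        # Each relative frequency comes out near 0.1 -- suggestive, but, as
        # the following paragraphs argue, proof of nothing about the limit.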

    Just as there is a physical fact of the matter whether an object or event in
    nature exhibits specified complexity, so there is a mathematical fact of the
    matter whether pi is regular. Pi either is regular or fails to be regular.
    Nonetheless, the determination whether pi is regular is another matter. With
    the number .01234567890123456789..., its regularity is evident by
    inspection. But the decimal digits of pi are nonrepeating, and to date there
    is no theoretical justification of its regularity. The closest thing to a
    justification is to point out that for the standard probability measure on
    the unit interval (i.e., Lebesgue measure), all numbers except for a set of
    probability zero are regular. The presumption, then, is that pi is likely to
    be regular. The problem here, however, is that the numbers we deal with in
    practice are rational numbers and most of these are not regular. Thus most
    of the numbers we deal with in practice belong to that set of probability
    zero. What's more, a simple set-theoretic argument shows that among
    irrational numbers like pi, there are as many nonregular ones as regular
    ones (both subsets have the cardinality of the continuum). There is thus no
    reason to think that pi was sampled according to Lebesgue probability
    measure and therefore likely to fall among the regular irrational numbers
    (the nonregular irrational numbers having probability zero with respect to
    Lebesgue measure). As a consequence, we have no basis in mathematical
    experience or theory for being confident that pi is regular.

    Even the discovery that the single digits of pi have approximately the right
    relative frequencies for pi's first 200 billion decimal digits provides no
    basis for confidence that pi is regular. However regular the decimal
    expansion of pi looks in some initial segment, it could go haywire
    thereafter, possibly even excluding certain single digits entirely after a
    certain point. On the other hand, however nonregular the decimal expansion
    of pi looks in some initial segment, the relative frequencies of the single
    digits between 0 and 9 could eventually settle down into the required 10
    percent and pi itself could be regular (any initial segment thereby getting
    swamped by the infinite decimal expansion that lies beyond it). Thus, to be
    confident that pi is regular, mathematicians need a strict mathematical
    proof showing that each single digit between 0 and 9 has a limiting relative
    frequency of exactly 10 percent.

    Now critics of intelligent design demand this same high level of
    justification (i.e., mathematical proof) before they accept specified
    complexity as a legitimate tool for science. Yet a requirement for strict
    proof, though legitimate in mathematics, is entirely wrong-headed in the
    natural sciences. The natural sciences make empirically based claims, and
    such claims are always falsifiable. Errors in measurement, incomplete
    knowledge, and the problem of induction cast a shadow over all scientific
    claims. To be sure, the shadow of falsifiability doesn't incapacitate
    science. But it does make the claims of science (unlike those of
    mathematics) tentative, and it also means that we need to pay special
    attention to how scientific claims are justified. The key question for this
    discussion, therefore, is how to justify ascribing specified complexity to
    natural structures.

    To see what's at stake, consider further the analogy between the regularity
    of numbers and the specified complexity of natural structures. We need to be
    clear where that analogy holds and where it breaks down. The analogy holds
    insofar as both specified complexity and regularity make definite claims
    about some fact of the matter. In the case of regularity, it is a
    mathematical fact of the matter -- the decimal expansions of numbers either
    exemplify or fail to exemplify regularity. In the case of specified
    complexity, it is a physical fact of the matter -- a biological system, for
    instance, either exemplifies or fails to exemplify specified complexity.
    This last point is worth stressing. Attributing specified complexity is
    never a meaningless assertion. On the assumption that no design or teleology
    was involved in the production of some event, that event has a certain
    probability and therefore an associated measure of complexity. Whether that
    level of complexity is high enough to qualify the event as exemplifying
    specified complexity depends on the physical conditions surrounding the
    event. In any case, there is a definite fact of the matter whether specified
    complexity obtains.

    Any problem with ascribing specified complexity to that event therefore
    resides not in its coherence as a meaningful concept -- specified complexity
    is well-defined. If there is a problem, it resides in what philosophers call
    its assertibility. Assertibility refers to our justification for asserting
    the claims we make. A claim is assertible if we are justified in asserting it.
    With the regularity of pi, it is possible that pi is regular. Thus in
    asserting that pi is regular, we might be making a true statement. But
    without a mathematical proof of pi's regularity, we have no justification
    for asserting that pi is regular. The regularity of pi is, at least for now,
    unassertible. But what about the specified complexity of various biological
    systems? Are there any biological systems whose specified complexity is
    assertible?

    Critics of intelligent design argue that no attribution of specified
    complexity to any natural system can ever be assertible. The argument runs
    as follows. It starts by noting that if some natural system exemplifies
    specified complexity, then that system must be vastly improbable with
    respect to all purely natural mechanisms that could be operating to produce
    it. But that means calculating a probability for each such mechanism. This,
    so the argument runs, is an impossible task. At best science could show that
    a given natural system is vastly improbable with respect to known mechanisms
    operating in known ways and for which the probability can be estimated. But
    that omits (1) known mechanisms operating in known ways for which the
    probability cannot be estimated, (2) known mechanisms operating in unknown
    ways, and (3) unknown mechanisms.

    Thus, even if it is true that some natural system exemplifies specified
    complexity, we could never legitimately assert its specified complexity,
    much less know it. Accordingly, to assert the specified complexity of any
    natural system constitutes an argument from ignorance. This line of
    reasoning against specified complexity is much like the standard agnostic
    line against theism -- we can't prove that atheism (cf. the total absence of
    specified complexity from nature) holds, but we can show that theism (cf.
    the specified complexity of certain natural systems) cannot be justified and
    is therefore unassertible. This is how skeptics argue that there is no (and
    indeed can be no) evidence for God or design.

    A little reflection, however, makes clear that this attempt by skeptics to
    undo specified complexity cannot be justified on the basis of scientific
    practice. Indeed, the skeptic imposes requirements so stringent that they
    are absent from every other aspect of science. If standards of scientific
    justification are set too high, no interesting scientific work will ever get
    done. Science therefore balances its standards of justification with the
    requirement for self-correction in light of further evidence. The
    possibility of self-correction in light of further evidence is absent in
    mathematics and accounts for mathematics' need for the highest level of
    justification, namely, strict logico-deductive proof. But science does not
    work that way. Science must work with available evidence and on that basis
    (and that basis alone) formulate the best explanation of the phenomenon in
    question.

    Take, for instance, the bacterial flagellum. Despite the thousands of
    research articles on it, no mechanistic account of its origin exists.
    Consequently, there is no evidence against its being complex and specified.
    It is therefore a live possibility that it is complex and specified. But is
    it fair to assert that it is complex and specified, in other words, to
    assert that it exhibits specified complexity? The bacterial flagellum is
    irreducibly complex, meaning that all its components are indispensable for
    its function as a motility structure. What's more, it is minimally complex,
    meaning that any structure performing the bacterial flagellum's function as
    a bidirectional outboard rotary motor cannot make do without certain basic
    components.

    Consequently, no direct Darwinian pathway exists that incrementally adds
    these basic components and therewith evolves a bacterial flagellum. Rather,
    an indirect Darwinian pathway is required, in which precursor systems
    performing different functions evolve by changing functions and components
    over time (Darwinists refer to this as coevolution and co-optation).
    Plausible as this sounds (to the Darwinist), there is no evidence for it.
    What's more, evidence from engineering strongly suggests that tightly
    integrated systems like the bacterial flagellum are not formed by trial and
    error tinkering in which form and function coevolve. Rather, such systems
    are formed by a unifying conception that combines disparate components into
    a functional whole -- in other words, by design.

    Does the bacterial flagellum exhibit specified complexity? Is such a claim
    assertible? Certainly the bacterial flagellum is specified. One way to see
    this is to note that humans developed outboard rotary motors well before
    they figured out that the flagellum was such a machine. This is not to say
    that for the biological function of a system to constitute a specification,
    humans must have independently invented a system that performs the same
    function. Nevertheless, independent invention makes all the more clear that
    the system satisfies independent functional requirements and therefore is
    specified. At any rate, no biologist I know questions whether the functional
    systems that arise in biology are specified. At issue always is whether the
    Darwinian mechanism, by employing natural selection, can overcome the vast
    improbabilities that at first blush seem to arise with such systems, thereby
    breaking a vast improbability into a sequence of more manageable
    probabilities.

    To illustrate what's at stake in breaking vast improbabilities into more
    manageable probabilities, suppose a hundred pennies are tossed. What is the
    probability of getting all one hundred pennies to exhibit heads? The
    probability depends on the chance process by which the pennies are tossed.
    If, for instance, the chance process operates by tossing all the pennies
    simultaneously and does not stop until all the pennies simultaneously
    exhibit heads, it will require on average about a thousand billion billion
    billion such simultaneous tosses for all the pennies to exhibit heads. If,
    on the other hand, the chance process tosses only those pennies that have
    not yet exhibited heads, then after about eight tosses, on average, all the
    pennies will exhibit heads. Darwinists tacitly assume that all instances of
    biological complexity are like the second case, in which a seemingly vast
    improbability can be broken into a sequence of reasonably probable events by
    gradually improving on an existing function (in the case of our pennies,
    improved function would correspond to exhibiting more heads).
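
    The contrast is easy to verify by simulation; here is a minimal sketch
    (mine, with an arbitrary trial count):

        import random

        def rounds_until_all_heads(n=100):
            # Re-toss only the pennies still showing tails; count rounds.
            tails, rounds = n, 0
            while tails:
                tails = sum(1 for _ in range(tails) if random.random() < 0.5)
                rounds += 1
            return rounds

        trials = 10_000
        avg = sum(rounds_until_all_heads() for _ in range(trials)) / trials
        print(f"re-tossing only the tails: about {avg:.1f} rounds")  # ~8

        # Tossing all 100 pennies at once and demanding 100 simultaneous
        # heads succeeds with probability 2^-100 per attempt, so the
        # expected number of attempts is 2^100, about 1.27e30 -- the
        # "thousand billion billion billion" above.
        print(f"all at once: {2 ** 100:.3e} expected tosses")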

    Irreducible and minimal complexity challenge the Darwinian assumption that
    vast improbabilities can always be broken into manageable probabilities.
    What evidence there is suggests that such instances of biological complexity
    must be attained simultaneously (as when the pennies are tossed
    simultaneously) and that gradual Darwinian improvement offers no help in
    overcoming their improbability. Thus, when we analyze structures like the
    bacterial flagellum probabilistically on the basis of known material
    mechanisms operating in known ways, we find that they are highly improbable
    and therefore complex in the sense required by specified complexity.

    Is it therefore fair to assert that the bacterial flagellum exhibits
    specified complexity? Design theorists say yes. Evolutionary biologists say
    no. As far as the evolutionary biologists are concerned, design theorists
    have failed to take into account indirect Darwinian pathways by which the
    bacterial flagellum might have evolved through a series of intermediate
    systems that changed function and structure over time in ways that we do not
    yet understand. But is it that we do not yet understand the indirect
    Darwinian evolution of the bacterial flagellum or that it never happened
    that way in the first place? At this point there is simply no evidence for
    such indirect Darwinian evolutionary pathways to account for biological
    systems that display irreducible and minimal complexity.

    Is this, then, where the debate ends, with evolutionary biologists chiding
    design theorists for not working hard enough to discover those (unknown)
    indirect Darwinian pathways that lead to the emergence of irreducibly and
    minimally complex biological structures like the bacterial flagellum?
    Alternatively, does it end with design theorists chiding evolutionary
    biologists for deluding themselves that such indirect Darwinian pathways
    exist when all the available evidence suggests that they do not? Although
    this may seem like an impasse, it really isn't. Like compulsive gamblers who
    are constantly hoping that some big score will cancel their debts,
    evolutionary biologists live on promissory notes that show no sign of being
    redeemable. Science must form its conclusions on the basis of available
    evidence, not on the possibility of future evidence. If evolutionary
    biologists can discover or construct detailed, testable, indirect Darwinian
    pathways that account for the emergence of irreducibly and minimally complex
    biological systems like the bacterial flagellum, then more power to them --
    intelligent design will quickly pass into oblivion. But until that happens,
    evolutionary biologists who claim that natural selection accounts for the
    emergence of the bacterial flagellum are worthy of no more credence than
    compulsive gamblers who are forever promising to settle their accounts.

    There is further reason to be skeptical of evolutionary biology and side
    with intelligent design. In the case of the bacterial flagellum, what keeps
    evolutionary biology afloat is the possibility of indirect Darwinian
    pathways that might account for it. Practically speaking, this means that
    even though no slight modification of a bacterial flagellum can continue to
    serve as a motility structure, a slight modification could serve some other
    function. But there is now mounting evidence of biological systems for which
    any slight modification does not merely destroy the system's existing
    function but also destroys the possibility of any function of the system
    whatsoever (cf. Xxx Yyy's work on individual enzymes). For such systems,
    neither direct nor indirect Darwinian pathways could account for them. In
    this case we would be dealing with an in-principle argument showing not
    merely that no known material mechanism is capable of accounting for the
    system but also that any unknown material mechanism is incapable of
    accounting for it as well. The argument here turns on considerations of
    contingency and degrees of freedom outlined in the previous chapter.

    Is the claim that the bacterial flagellum exhibits specified complexity
    assertible? You bet! Science works on the basis of available evidence, not
    on the promise or possibility of future evidence. Our best evidence points
    to the specified complexity (and therefore design) of the bacterial
    flagellum. It is therefore incumbent on the scientific community to admit,
    at least provisionally, that the bacterial flagellum could be the product of
    design. Might there be biological examples for which the claim that they
    exhibit specified complexity is even more assertible? Yes there might.
    Assertibility comes in degrees, corresponding to the strength of evidence
    that justifies a claim. For the bacterial flagellum it is logically
    impossible to rule out the infinity of possible indirect Darwinian pathways
    that might give rise to it (though any such proposed route to the bacterial
    flagellum is at this point a mere conceptual possibility). Yet for other
    systems, like certain enzymes, there can be strong grounds for ruling out
    such indirect Darwinian pathways as well.

    The evidence for intelligent design in biology is therefore destined to grow
    ever stronger. There's only one way evolutionary biology could defeat
    intelligent design, and that is by in fact solving the problem that it
    claimed all along to have solved but in fact never did, namely, to account
    for the emergence of multi-part tightly integrated complex biological
    systems (many of which display irreducible and minimal complexity) apart
    from teleology or design. To claim that the Darwinian mechanism solves this
    problem is false. The Darwinian mechanism is not itself a solution but
    rather describes a reference class of candidate solutions that purport to
    solve this problem. But none of the candidates examined to date indicates
    the slightest capacity to account for the emergence of multi-part tightly
    integrated complex biological systems. That's why molecular biologist James
    Shapiro, who is not a design theorist, writes, "There are no detailed
    Darwinian accounts for the evolution of any fundamental biochemical or
    cellular system, only a variety of wishful speculations." (Quoted from his
    1996 book review of Darwin's Black Box that appeared in National Review.)

    In summary, specified complexity is a well-defined property that applies
    meaningfully to events and objects in nature. Specified complexity is an
    objective property -- specifications are epistemically objective and
    complexity is ontologically objective. Any concern over specified
    complexity's legitimacy within science rests not with its coherence or
    objectivity, but with its assertibility, namely, with whether and the degree
    to which ascribing specified complexity to some natural object or event is
    justified. Any blanket attempt to render specified complexity unassertible
    gives an unfair advantage to naturalism, ensuring that design cannot be
    discovered even if it is present in nature. What's more, science can proceed
    only on available evidence, not on the promise or possibility of future
    evidence. As a consequence, ascriptions of specified complexity to natural
    objects and events, and to biological systems in particular, can be
    assertible. And indeed, there are actual biological systems for which
    ascribing specified complexity -- and therefore design -- is eminently
    assertible.



