Dembski's "Explaining Specified Complexity"

Wesley R. Elsberry (welsberr@inia.cls.org)
Wed, 15 Sep 1999 01:32:25 -0500 (CDT)

My review of Dembski's "The Design Inference" was published just
a few months ago. In it, I wrote:

[Quote]

The apparent, but unstated, logic behind the move from design to
agency can be given as follows:

1. There exists an attribute in common of some subset of objects
known to be designed by an intelligent agent.
2. This attribute is never found in objects known not to be
designed by an intelligent agent.
3. The attribute encapsulates the property of directed contingency
or choice.
4. For all objects, if this attribute is found in an object, then
we may conclude that the object was designed by an intelligent agent.

This is an inductive argument. Notice that by the second step, one
must eliminate from consideration precisely those biological phenomena
which Dembski wishes to categorize. In order to conclude intelligent
agency for biological examples, the possibility that intelligent
agency is not operative is excluded a priori. One large problem is
that directed contingency or choice is not solely an attribute of
events due to the intervention of an intelligent agent. The
"actualization-exclusion-specification" triad mentioned above also
fits natural selection rather precisely. One might thus conclude that
Dembski's argument establishes that natural selection can be
recognized as an intelligent agent.

[End Quote]

It would appear that I was spot-on with this critique, given
Dembski's post to the "reiterations" mailing list. When
Dembski was challenged in 1997 with a genetic algorithm that
found a solution to a 100-city tour of the Traveling Salesman
Problem, his response was that the CSI of the solution was
somehow "infused" by the intelligent agency of the
programmers. This stance, though wrong (see
<http://inia.cls.org/~welsberr/zgists/wre/papers/antiec.html>),
was at least consistent with Dembski's program of hyping CSI
as a recognizable and measurable attribute that reliably
indicated design and thus intelligent agency.
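
For readers who haven't seen the technique, here is a minimal
sketch of such a genetic algorithm, written in Python. The city
coordinates, operators, and parameters are my own illustrative
assumptions, not a reconstruction of the 1997 program:

    import random

    # Toy instance: 100 cities at random points in the unit square.
    random.seed(0)
    NUM_CITIES = 100
    CITIES = [(random.random(), random.random())
              for _ in range(NUM_CITIES)]

    def tour_length(tour):
        # Total Euclidean length of the closed tour.
        return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                    (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
                   for a, b in zip(tour, tour[1:] + tour[:1]))

    def crossover(p1, p2):
        # Order crossover: keep a slice of one parent, fill in the
        # remaining cities in the order they occur in the other.
        i, j = sorted(random.sample(range(NUM_CITIES), 2))
        kept = p1[i:j]
        rest = [c for c in p2 if c not in kept]
        return rest[:i] + kept + rest[i:]

    def mutate(tour, rate=0.02):
        # Swap mutation: occasionally exchange two cities.
        tour = tour[:]
        for k in range(NUM_CITIES):
            if random.random() < rate:
                m = random.randrange(NUM_CITIES)
                tour[k], tour[m] = tour[m], tour[k]
        return tour

    population = [random.sample(range(NUM_CITIES), NUM_CITIES)
                  for _ in range(200)]
    for _ in range(500):
        population.sort(key=tour_length)   # shorter tours are fitter
        parents = population[:50]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(150)]
        population = parents + children
    print(tour_length(min(population, key=tour_length)))

Runs of a sketch like this reliably shrink the tour length over
the generations without anyone choosing any particular tour,
which is exactly the behavior at issue.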

Now, though, Dembski is apparently abandoning that stance,
perhaps after reflecting upon its deficiencies. Some of the
best things about TDI were the items that had been trimmed
from it prior to publication. Gone from TDI was the previously
promised section on analysis of natural selection. In 1997,
that analysis was referenced as section 6.3 of Dembski's
monograph. In the published TDI, section 6.3 instead concerns
the magic number 1/2 or something like that. Bill Jefferys'
dissection of
Dembski's precis of that analysis given in the 1997 paper may
have had something to do with its disappearance from the
monograph. Gone, too, was the 1997 paper's stance that
functions, algorithms, and natural law could never give rise
to CSI.

But evolutionary algorithms put a crimp into this stance. And
so Dembski has adopted a new stance that still allows him to
claim that algorithms cannot produce CSI: change the definition
of CSI such that algorithms cannot produce it by definition.
This is easy: just add the qualifiers "actual" and "apparent".
"Actual CSI", then, is the CSI that intelligent agents come
up with. "Apparent CSI" is the CSI that algorithms come up
with.

A solution that is deemed "Actual CSI" when a human produces
it may be identical to a solution found by an algorithm, which
gets labelled "Apparent CSI". The solution is just as complex
and works just as well in either case, but now those
algorithms don't get in the way of a good apologetic.
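
To put the point in miniature, consider any measure computed
from the solution itself (a toy sketch in Python; the tours and
the stand-in measure are my own illustration):

    def artifact_measure(tour):
        # Stands in for any function of the artifact alone; a CSI
        # calculation one could simply "do" from the object would
        # have to be such a function.
        return hash(tuple(tour))

    human_tour   = [2, 0, 3, 1, 4]  # written down by a person
    machine_tour = [2, 0, 3, 1, 4]  # produced by an algorithm

    # Identical artifacts get identical values, so no measure of
    # the artifact alone can separate "Actual" from "Apparent" CSI.
    assert artifact_measure(human_tour) == artifact_measure(machine_tour)

The distinction can only be drawn from the causal story, not
from the object in hand.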

This redefinition forces some changes. The attribute from
(1) in my list now becomes "Actual CSI" rather than just
"CSI". One can, unfortunately, no longer simply "Do the
calculation." (TDI, p.228.) One must know the causal story
beforehand in order to know which of the two qualifiers
("Actual" or "Apparent") is to be prepended to "CSI" before
one even gets so far as figuring out whether "CSI" applies.
And this confirms that my statement, that the possibility that
intelligent agency is *not* operative is excluded a priori, was
precisely right. Dembski's penultimate paragraph shows this
very clearly, as he inverts the search order of his own
Explanatory Filter, and insists that *Design* has to be
eliminated from consideration first, rather than flowing as a
conclusion from the data.

[Quote]

Does Davies's original problem of finding radically new laws
to generate specified complexity thus turn into the slightly
modified problem of finding radically new laws that
generate apparent--but not actual--specified complexity in
nature? If so, then the scientific community faces a logically
prior question, namely, whether nature exhibits actual
specified complexity. Only after we have confirmed that nature
does not exhibit actual specified complexity can it be safe to
dispense with design and focus all our attentions on natural
laws and how they might explain the appearance of specified
complexity in nature.

[End Quote -- WA Dembski, "reiterations" post]

Wesley