CSI [was Re: Comment to Bill Hamilton]

Brian D Harper (harper.10@osu.edu)
Wed, 05 Mar 1997 14:49:12 -0500

At 11:14 AM 3/5/97 -0500, Burgy wrote:
>
>Bill Hamilton wrote:
>
>>Subject: Re: Dembski and CSI [was Re: NTSE Note #5]
>
>>>At 11:08 PM 2/28/97, Brian D Harper wrote:
>>>(2) My initial reaction to the above was that Newton's Laws (NL)
>>>would be "fabrications" since they were discovered after the fact
>>>by finding patterns in "information" that had already been actualized.
>
>>BillH>Agreed. And in this case the patterns as originally postulated weren't
>>even accurate, in the sense that they are first-order approximations that
>>break down at relativistic velocities and in the quantum realm.
>
>Burgy>I think I don't agree with either you or Brian at all on this.
>
>Seems to me that the example of Newton's laws is EXACTLY CSI.
>
>Recall the example:
>
>1. Archer shoots at barn. No target. No CSI.
>2. Archer shoots at barn, then draws target. No CSI.
>3. Archer draws target -- then shoots at barn. CSI.
>
>In all cases we are making the investigation AFTER the action.
>How do we tell the difference between cases 2 and 3?
>
>Exactly the way we did with Newton's laws. We ask the archer to demonstrate
>his capability by experimentation. The "proposed law" here is, "The archer
>hits what he shoots at."
>

[...]

Thanks to Burgy for some interesting comments.

Let me address my concerns (confusions) in a roundabout
way by mentioning first Murray Gell-Mann's "effective
complexity" (EC). Previously I indicated that I thought
Gell-Mann's approach was similar in some ways to
Dembski's. [As a side-light, I view complexity and
information content as being essentially the same
thingies. I can elaborate on this more if needed.]

Interestingly, a primary motivation for Gell-Mann [G-M]
coming up with this new complexity measure is precisely
what we have been talking about previously. G-M doesn't
like algorithmic complexity [AC] since, according to AC,
random thingies have the highest complexity (information
content). The idea of EC is to weed out from some object
or observation that part of the description which is
random (lacking any pattern, irreducible). So, first
we separate out the patterned part of the description.
EC then is the algorithmic complexity (shortest description)
of the patterned component.
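The AC-versus-EC distinction can be made concrete with a toy
sketch (my own illustration, not from G-M). True algorithmic
complexity is uncomputable, so the sketch uses zlib compressed
length as a crude, computable stand-in: random bytes barely
compress (high "AC"), while a patterned string compresses well.
EC then attends only to the patterned component.

```python
import random
import zlib

def approx_ac(data: bytes) -> int:
    """Crude proxy for algorithmic complexity:
    length of the zlib-compressed description."""
    return len(zlib.compress(data, 9))

random.seed(0)

# Random component: no pattern to exploit, nearly incompressible,
# so its "AC" stays close to its raw length (1000 bytes).
random_part = bytes(random.randrange(256) for _ in range(1000))

# Patterned component: same raw length, but highly compressible.
patterned_part = b"ab" * 500

# Per algorithmic complexity, the random string is the more
# "complex" (information-rich) one -- exactly what G-M objects to.
assert approx_ac(random_part) > approx_ac(patterned_part)

# Effective complexity of the whole observation: separate out the
# random residue and take the (approximate) AC of the pattern alone.
observation = patterned_part + random_part
effective_complexity = approx_ac(patterned_part)
```

The compression proxy is only suggestive, of course, and the
"separation" step is done by hand here; identifying the patterned
part of a real observation is the hard problem.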

At this point, I think the similarities with Dembski are
obvious. Dembski goes further and divides what G-M calls
effective complexity into two categories, the good and the
bad according to whether or not the pattern can be identified
independent of its actuation. [Bill missed a good opportunity
here. He could have labeled random as "ugly", then he
could talk about the good, the bad, and the ugly ;-)].

Let me try to summarize what I think are the difficulties
with this approach:

1) In Dembski's examples there is an intelligent agent either
identifying or actually fabricating a pattern. Does Bill
believe that patterns actually exist independent of their
being identified by an intelligent agent? If so, it's hard
to understand the example of drawing the pattern after
the fact. Remembering G-M's EC, patterns can be identified
irrespective of an ability to establish the independence
condition and without their being fabricated. In other
words, Bill's "bad" patterns can be identified without
their being fabricated by the identifier, unless of course
patterns do not exist independently of their identification.
[I believe they do, but would be hard-pressed to prove it ;-)].

2) How to satisfy the independence condition? This was the
point of my NL example. Does it make sense to talk about
NL independent of actualized patterns? My own view is that
we are giving two names to the same thing. NL are just a
compressed description of patterns that are observed.
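This "law as compressed description" view can be illustrated with
a toy example (mine, with idealized numbers, not from the thread):
a one-line formula is just a short program that regenerates a long
table of observed values, so stating the law and listing the data
are two descriptions of the same pattern.

```python
# Pretend these are recorded free-fall distances at t = 0..99 s
# (idealized data, g = 9.8 m/s^2, no measurement noise).
observations = [0.5 * 9.8 * t**2 for t in range(100)]

# The "law" is a far shorter description of the same pattern:
# a few symbols that regenerate every entry in the table.
def law(t: float) -> float:
    return 0.5 * 9.8 * t**2

# The law and the data table name the same thing;
# one is just the compressed description of the other.
assert all(law(t) == observations[t] for t in range(100))
```

The example is deliberately circular (the "data" were generated
from the formula); with real observations the law would only
approximate the table, which is the point about first-order
approximations made earlier in the thread.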

3) Labeling patterns that cannot be specified independently
of their actualizations as "bad" or as "fabrications" is
highly prejudicial. Such patterns may still correspond to
something functional.

4) Dembski's examples preclude the possibility that the arrow
and the target are not independent, i.e. the arrow and the
target may be part of a system with highly nonlinear interactions
and feedback etc. The target [useful, functional information]
may be changing constantly as the arrow is in flight. A target
of this type would certainly not be a fabrication even though
the pattern could not be identified independent of its
actualization. [What I'm discussing here is, of course, the
basic idea of an emergent phenomenon; Bill's example seems not
to consider this possibility.]

Brian Harper
Associate Professor
Applied Mechanics
The Ohio State University

"Should I refuse a good dinner simply because I
do not understand the process of digestion?"
-- Oliver Heaviside