Re: random mutation, selection and low probabilities

From: Don Winterstein <dfwinterstein@msn.com>
Date: Mon Dec 01 2003 - 04:45:40 EST

Glenn,

I haven't personally calculated the kinds of inversions you're talking about, but I do know the systems are always far better constrained than you've indicated, and hence the probabilities of success are much higher than your numbers would indicate. If there's a well in or near the survey area, one can usually get a fairly detailed set of constraints. At sites where companies do 3D surveys, there often is at least one such well.

If there's no well, a lot of constraints are nevertheless obtainable from the seismic data themselves. First, moveout velocities often provide a good picture of the low-frequency velocity structure as a function of time. Second, reflection signs and amplitudes allow for better-than-chance calculations of the high-frequency velocity variations. Third, density is often closely related to velocity, and invertors usually use some standard relationship. Fourth, to a good approximation the inversion calculated for one trace is identical to the inversion of nearest-neighbor traces. Fifth, many inversions of this sort use only stacked section traces; this would cut down hugely on the total number of relevant traces. More sophisticated inversions use amplitude-variation-with-offset (AVO). While these latter would use more traces, the calculations on such traces are mostly deterministic.

Knowledge of the seismic wavelet is essential for unraveling the reflectivity accurately, but stochastic methods are often used to calculate the wavelet itself.

In summary, seismic inversions are better constrained and hence more deterministic than you've indicated. This fact may not nullify your point, but it at least takes off some of your zeroes. On the other hand, I don't really know how you do your inversions, so maybe you really do need all those zeroes! <G>

Don

  ----- Original Message -----
  From: Glenn Morton
  To: asa@calvin.edu
  Sent: Sunday, November 30, 2003 4:34 PM
  Subject: random mutation, selection and low probabilities

  The recent discussion with Walter got me to thinking about a process we use
  in geophysics, and while I was driving home from Thanksgiving I mentally
  calculated the number of possible acoustic impedance models one gets in
  geophysics. So I wrote it up.

  Anti-evolutionists make a huge deal about the odds against finding a single
  sequence of protein or DNA which they think is useful. The probability of
  finding the workable solution is usually put in the 10^-100 to 10^-300
  range. Dembski claims that 10^-150 is some sort of universal probability
  bound, below which anything simply must be designed. He writes:

  “In The Design Inference I justify a more stringent universal probability
  bound of 10^-150 based on the number of elementary particles in the
  observable universe, the duration of the observable universe until its heat
  death and the Planck time. A probability bound of 10^-150 translates to 500
  bits of information. Accordingly, specified information of complexity
  greater than 500 bits cannot reasonably be attributed to chance. This
  500-bit ceiling on the amount of specified complexity attributable to chance
  constitutes a universal complexity bound for CSI.” William A. Dembski,
  Intelligent Design (Downers Grove: InterVarsity Press, 2001), p. 166
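  As an aside, the quoted correspondence between a 10^-150 bound and 500 bits is just a change of logarithm base; a quick check (Python used here purely for illustration):

```python
import math

# Specified information in bits is -log2(p); for p = 10^-150 this is
# 150 * log2(10), which is the "500 bits" in the quote (rounded up).
bits = 150 * math.log2(10)
print(round(bits))  # 498, i.e. roughly the quoted 500-bit bound
```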

  In my business we deal with probabilities which make those numbers
  absolutely pale by comparison. The system I will describe has roughly one
  chance in 10^2,200,000,000 of being right. That is 10 raised to a power of
  about 2.2 billion, vastly beyond this supposed probability bound. To help
  you understand, I must explain a bit of geophysics.

  In seismic exploration, we set off airguns at the surface of the ocean or
  dynamite charges on land (dynamite is not used offshore). We then listen for
  the echoes of sound bouncing back to the surface off the various rock
  layers. This is important. We record the sound wave field every 2
  milliseconds, out to 8 seconds or more in time, which is equivalent to
  something like 40,000 feet of depth. We thus end up with a seismic trace
  consisting of a sequence of 4,000 numbers which represent the amplitudes of
  the sound waves reflected off the rocks under the earth. We record the data
  in such a manner that we end up with a seismic trace every 25 meters in one
  direction and every 40 meters in the other, and some surveys are so large
  that they extend a hundred kilometers or more in both directions. A typical
  3D seismic program over an oil field is around 10 km by 10 km, or 100 sq. km
  of data. Thus we have 100,000 seismic traces, each with 4,000 different
  numbers.
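  For concreteness, the survey arithmetic above can be sketched in Python (the figures are the ones given in the text, not from any particular survey):

```python
# Trace count and samples per trace for the survey geometry described above.
survey_x_m, survey_y_m = 10_000, 10_000   # 10 km x 10 km survey
spacing_x_m, spacing_y_m = 25, 40         # trace spacing in each direction
record_ms, sample_ms = 8_000, 2           # 8 s record length, 2 ms sampling

traces = (survey_x_m // spacing_x_m) * (survey_y_m // spacing_y_m)
samples_per_trace = record_ms // sample_ms

print(traces, samples_per_trace)  # 100000 4000
```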

  The reflection of the sound (which is what causes the echo) is controlled by
  the change in acoustic impedance from the upper layer to the lower rock
  layer. Acoustic impedance (AI) is merely the multiplication of the rock’s
  density by the velocity of sound in that rock.

  AI = rho x vel

  Contrasts in AI are what cause the seismic reflections.

             Sound going:
              Down    Up
                \     /
                 \   /
  AI in rock 1    \ /
  -----------------V-----------------
  AI in rock 2
  We would really like to know the AI itself rather than what the seismic data
  readily offer, which is the energy of the reflected sound. AI is tied to
  rock lithology and rock properties, so it is more useful than merely knowing
  how much sound energy is reflected back. We use this AI data to help us
  understand the porosity of the rocks and to determine rock type.

  To get this information we do what is called an inversion. We guess at the
  AI pattern and make a model acoustic impedance trace, then apply the
  reflection laws to it to produce a model seismic trace, and compare that
  model to the real seismic trace. If the two differ by more than a certain
  amount, we guess again (randomly mutate the model), make a new model seismic
  trace, compare it to the real one, and so on. We continue this iterative
  process until the model AI produces a synthetic seismogram that closely
  matches the observed data.
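  The guess-mutate-compare loop described above can be sketched as follows. This is a toy version under simplifying assumptions: the forward model is just the reflection-coefficient series (no wavelet convolution, no noise), the "earth" is only five samples long, and mutation keeps only improvements:

```python
import random

def forward(ai):
    """Toy forward model: AI series -> normal-incidence reflectivity series.
    (A real inversion would also convolve this with the seismic wavelet.)"""
    return [(ai[i + 1] - ai[i]) / (ai[i + 1] + ai[i]) for i in range(len(ai) - 1)]

def misfit(model, observed):
    """Sum of squared differences between model response and observed trace."""
    return sum((s - o) ** 2 for s, o in zip(forward(model), observed))

random.seed(0)
true_ai = [11_000, 13_000, 9_000, 16_000, 12_000]   # hypothetical "earth"
observed = forward(true_ai)                          # the "recorded" data

model = [12_000.0] * len(true_ai)                    # initial guess
best = misfit(model, observed)
for _ in range(20_000):
    trial = model[:]
    trial[random.randrange(len(trial))] += random.uniform(-500, 500)  # mutate
    m = misfit(trial, observed)
    if m < best:                                     # selection: keep improvements
        model, best = trial, m

print(f"final misfit: {best:.2e}")
```

  The loop homes in on a workable model without ever enumerating the astronomical number of possibilities, which is the point of the argument that follows.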

  Now, what are the probabilities of us getting the right AI? All we know is
  the seismic data, which we have sampled every 2 milliseconds to get 4,000
  numbers for each seismic trace. We know that the density of the rocks we are
  interested in generally ranges from 2 to 2.5 grams per cc and the velocity
  of sound generally ranges from 5,000 to 12,000 feet per second. Thus, if we
  let the density values go from 2 to 2.5 in steps of .01 (50 values) and the
  velocity of sound vary from 5,000 to 12,000 feet per second in increments of
  1 foot per second (7,000 values), we have 50 x 7,000 = 350,000 different
  possibilities for each sample of the seismic data. Remember we have 4,000
  samples, so the total number of possible AI solutions for a given seismic
  trace is

  Total number of AI solutions = 350,000^4,000 ≈ 10^22,176.
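  Working in logarithms keeps this count manageable; a quick check (Python for illustration):

```python
import math

# 350,000 choices at each of 4,000 independent samples gives 350,000**4000
# possible AI models for one trace; take log10 to get the exponent.
choices_per_sample = 50 * 7_000      # density steps x velocity steps
samples = 4_000

exponent = samples * math.log10(choices_per_sample)
print(round(exponent))  # 22176 -> about 10^22,176 models per trace
```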

  This is 10 raised to roughly the 22,000th power!! The odds against finding
  the correct answer here are so much longer than the odds against finding the
  'correct' protein that one would bet on the protein long before betting on
  the geophysics. Protein probabilities are on the order of one chance in
  10^300. But this is merely the probability of getting ONE seismic trace's AI
  correct. We have 100,000 traces in all, so the probability of getting the
  correct model for the entire seismic survey is an astounding one chance in

  10^2,217,600,000.

  Or one followed by about 2.2 billion zeros.
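  The survey-wide figure is just the per-trace exponent multiplied by the number of traces; in Python:

```python
import math

# Exponent of the total model count: per-trace exponent x number of traces.
choices_per_sample = 50 * 7_000
traces = 100_000

for samples in (4_000, 200):   # full traces, and a reduced-sample variant
    per_trace_exp = samples * math.log10(choices_per_sample)
    print(samples, round(per_trace_exp * traces))
```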

  If the anti-evolutionary probability arguments were correct, we
  geophysicists would have no chance of finding anything useful with this
  procedure. If one searched 10 quadrillion models per second for the age of
  the universe, one would only have searched about 10^33 of the models to
  date. But I will tell you that we always find usable models via this
  technique. We do reduce the number of samples we run the inversion over, so
  in general we use only 200 samples, but that still leaves one chance in one
  followed by about 111 million zeros. We always get a useful AI output. Why?

  Well it is because we don’t have to have absolutely the correct answer to
  get a workable and useful answer. Billions upon trillions upon gazillions
  of the AI inversions will give the very same answer (provide the very same
  functionality). In other words, the answers are not unique. This is the
  same reason that the probability arguments given by the anti-evolutionists
  fail to impress. Those who are familiar with such systems know that one
  doesn’t have to find the best solution to have a workable solution.
  Hemoglobin is not the very best oxygen carrier anywhere among all possible
  protein sequences; but it is a workable carrier. Cytochrome c as found in
  humans is not the very best at that functionality, but it is certainly a
  workable solution. The same can be said of biopolymers across the board.
  And experiment shows that this is the case:

  Andrew Ellington and Jack W. Szostak "used small organic dyes as the target.
  They screened 10^13 random-sequence RNAs and found molecules that bound
  tightly and specifically to each of the dyes.
  "Recently they repeated this experiment using random-sequence DNAs and
  arrived at an entirely different set of dye-binding molecules.
   ...
  "That observation reveals an important truth about directed evolution (and
  indeed, about evolution in general): the forms selected are not necessarily
  the best answers to a problem in some ideal sense, only the best answers to
  arise in the evolutionary history of a particular macromolecule."~Gerald F.
  Joyce, "Directed Evolution," Scientific American, Dec. 1992, pp. 94-95.
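  The non-uniqueness is easy to see even at the level of the AI definition itself: since AI = rho x vel, any model that scales density up and velocity down by the same factor has identical impedance, and therefore produces exactly the same synthetic seismogram. A tiny illustration, with made-up rock values:

```python
# Two different (density, velocity) models with identical acoustic impedance.
model_a = [(2.2, 8_000), (2.4, 10_000)]                     # (g/cc, ft/s) pairs
model_b = [(rho * 1.1, vel / 1.1) for rho, vel in model_a]  # a different model

ai_a = [rho * vel for rho, vel in model_a]
ai_b = [rho * vel for rho, vel in model_b]
print(all(abs(a - b) < 1e-6 for a, b in zip(ai_a, ai_b)))  # True: same seismogram
```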

  A low probability of finding the correct answer is a meaningful argument
  only if one and only one solution works. We know this to be false in
  geophysics and in biology. Cows, sheep, etc. all have different proteins,
  yet we have often used their proteins to support our lives, because their
  different chemicals work fine and dandy in us, and thus when we are sick we
  live. But that means there is more than one biological solution to the
  functionality in question. The anti-evolutionary arguments simply won't
  stand up to scrutiny.