RE: random mutation, selection and low probabilities

From: Glenn Morton <glennmorton@entouch.net>
Date: Wed Dec 03 2003 - 20:39:31 EST

Hi Don,

I agree with you that we can take this offline after this post. Or after
your reply.
-----Original Message-----
From: Don Winterstein [mailto:dfwinterstein@msn.com]
Sent: Wednesday, December 03, 2003 6:40 PM
To: Glenn Morton
Cc: asa
Subject: Re: random mutation, selection and low probabilities

>>>You see, it's like this: I made a post peripheral to one of Glenn's, and
ultimately I realized it was pretty irrelevant to the topic being discussed.
If Glenn had just said, "You're irrelevant," I would have understood and
backed off. But instead Glenn responded in great detail. So now I feel a
need to take one more shot. <<<<

Good, I like debate. :-)

>>>>Me: You're right, I'm thinking of the "old 1980s style deterministic
inversion." Sorry about not paying closer attention to what you were
talking about. However, this deterministic inversion is not so obsolete as
your comments would suggest. A colleague down the hall, in fact, was still
enhancing a sophisticated version of this in 1999, and it was getting a lot
of use by operations. To my knowledge, Chevron--the company where this sin
occurred--has never had a reputation in the industry of stagnating in some
technological backwater. If there were better ways, I trust the operations
people would have known about and tried them. <<<<

I said it was the old-style inversion; I didn't say none were done. I have
done them. But they are not based upon probabilistic calculations and thus
are not amenable to the type of argument I made.

>>>Furthermore, you'd have to argue long and hard, I'm afraid, to convince
me that an inversion technique that relies totally on random methods is
going to do better as a rule than a deterministic method when there's a
straightforward way to do the deterministic calculations and the data
satisfy the assumptions of the deterministic method. I speak mostly from
ignorance of simulated annealing, but the only cases where I think simulated
annealing is likely to be better are where rock structures are such that the
assumptions of the deterministic method become invalid. I can imagine this
would be the case where...<<<<

I wouldn't argue with you about that. I would let you see the results. As
you know, no inversion is perfect. I have seen the deterministic method miss
oil pay because of the low-frequency (velocity) information. I have seen the
statistical process run amok, only to be fixed later.

>>within about
100 m from any well, all bets are off in most reservoirs. There are faults,
there are sedimentological differences, saturation differences etc all of
which affect the velocity and density.<< and...

>> We have volcanic
tuffs....<< and...

>>we had injectite sands which have
dips of up to 80 degrees and widths varying from inches to 300 feet. It was
difficult to image some of these injectites. In this situation, one trace is
NOT identical to the nearest neighbor.<<

>>>>That is, if subsurface structures are such that you don't have
well-defined reflectors but just a jumbled mass of scatterers, the
deterministic method doesn't stand a chance. I suspect this is the kind of
environment where the simulated annealing method might do better than any
other method. <<<<

Here I would agree with you. Most of the effort I put in over in the UK was
on a field that was the most bizarre field I have ever worked on in a 30-year
career. But the simulated annealing process did match the seismic and
the wells.
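For readers unfamiliar with the method being discussed: simulated annealing
is a generic stochastic optimizer that accepts occasional uphill moves with a
probability that shrinks as a "temperature" parameter cools, which lets it
escape local minima that a purely deterministic descent would get stuck in.
A minimal, generic sketch on a toy 1-D objective follows -- this is an
illustration of the technique only, not Glenn's actual inversion code, and
the objective function and all parameter values here are invented for the
example:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.1, t0=1.0,
                        cooling=0.995, iters=5000, seed=42):
    """Minimize cost(x) by random perturbation, accepting worse
    candidates with probability exp(-delta/t) (Metropolis rule)
    while the temperature t is gradually cooled."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random perturbation
        fc = cost(cand)
        # Always accept improvements; sometimes accept worse moves,
        # less and less often as the temperature drops.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                          # cool the schedule
    return best_x, best_f

# Toy objective: a parabola with cosine ripples (local structure);
# its global minima sit near x = +/-0.54.
f = lambda x: x * x + 0.5 * math.cos(5 * x)
x_best, f_best = simulated_annealing(f, x0=3.0)
```

A seismic inversion would use the same accept/reject loop, but with the
"cost" being misfit between synthetic and recorded traces and the
perturbations applied to a model of rock properties.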

>>>However, I question Glenn's characterization of "most reservoirs" as
being of a radically varying sort to the degree that a seismic trace is not
similar to its neighbors. That is, variations in reservoirs are often rapid
and large, but in many or most cases seismic imaging smooths out all but
fairly gross features, partly as a result of the severely band-limited
nature of the data. In large parts of the Western Canadian basin, for
example, a stacked seismic trace is practically identical--except for a time
shift--to any trace recorded up to and often beyond a kilometer away. The
features of exploration interest there are usually not far above the visual
detectability limit, if indeed they're detectable at all. Data from many
other areas are not that uniform but still are often not as different from
trace to trace as the stuff Glenn is referring to. (--Well, it's a matter
of degree, and without pictures I don't really know what kinds of variations
Glenn is talking about.) <<<<

I will agree that in places like Canada and even Colorado, you are correct
that one trace is much like the next. This is because you have layer-cake
geology. The areas I have worked are not like that. The Gulf of Mexico
isn't quite so layer-cake. When I first went to work there, I would ask if
the S. abies (a bug) horizon was a sand or a shale. The answer was 'yes'.
Timelines don't match lithology lines in the Gulf. Nor do they in the
Tertiary of the North Sea. Even the Brent section (the Ness formation) is
multi-lithological. That means that the seismic response is quite different
from trace to trace, assuming you have high enough frequencies.

>>>I acknowledge that some areas really do have very difficult or jumbled
rock structures. Papua New Guinea is an extreme that comes to mind.
Chevroids had a terrible time getting any image there at all, even though
there appeared to be plenty of energy from deep scatterers. Data from the
North Sea that I've seen were most of the time not nearly so challenging.
<<<

Coming from the Gulf, I have a different perspective. I thought the North
Sea data was noisy and uninformative. The Gulf data was really high
frequency and it didn't have much noise.

>>>>One other relevant thought is that in many cases it's extremely
difficult to assess whether a given geophysical technique is doing any good.
That's why the industry has so many wealthy snake-oil salesmen. Did success
result from our scientific brilliance or just luck? Much of the time we
can't tell. Not that simulated annealing is one of those questionable
techniques, but...time will tell (maybe). <<<

Agreed! As Director of Integrated Technology, I get all sorts of snake oil
letters from guys with black boxes that can drive along roads and tell
immediately where oil is.

>>We WERE using stacked traces. So there is no reduction from this
direction.
When did you leave the business?<<

>>>Ya got me there! As soon as I looked a second time at your numbers I
realized you were talking about stacked data. Actually, I didn't leave the
business until 1999. My problem is not that I'm far behind the times but
that I specialized so narrowly that much of the business went its own way
without me. I spent most of my career studying effects of velocity
anisotropy in shear-wave data. In the early '80s I saw a chance to do
something close to real scientific research by focusing on anisotropy, so I
jumped at it. That was by far the most interesting thing going in
exploration geophysics, even though any applications stemming from this
research may not yet have earned a single company a single penny (i.e.,
apart from the contractors, who made big bucks on our experiments). It was
great fun anyway. <<<

Anisotropy is a big deal in the UK. It was a huge problem at another field I
worked at and indeed was one of the reasons they sent me there.

The seismic technology I am using was first shot (by me) in 2000 in the
Gulf, just before I went to the UK. It is a seismic acquisition technology
that in 100 sq. km collected something like 7 terabytes of data, which was
processed and thus reduced to about 5 gigabytes. The data density is like
nothing you have ever seen. We got 100-hertz energy at 12,000 feet depth,
above -20 dB. For those who don't know, this is kind of the holy grail.
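As a rough check on those figures (my arithmetic, not from the original
post, and assuming decimal units for terabytes and gigabytes), processing
squeezed the raw acquisition down by a factor on the order of a thousand:

```python
raw_bytes = 7e12        # ~7 terabytes of raw field data (assumed decimal units)
processed_bytes = 5e9   # ~5 gigabytes after processing
reduction = raw_bytes / processed_bytes
print(f"reduction factor: ~{reduction:.0f}x")  # prints: reduction factor: ~1400x
```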

>>Would you accept 10^23,000,000,000? If so, I will be able to sleep
tonight.<<

>>>>Have it your way.<<<<

Good, I can now sleep.
Received on Wed Dec 3 20:39:55 2003

This archive was generated by hypermail 2.1.8 : Wed Dec 03 2003 - 20:39:55 EST