Re: [asa] NASA - Climate Simulation Computer Becomes More Powerful

From: wjp <wjp@swcp.com>
Date: Sun Aug 30 2009 - 20:59:08 EDT

Dave:

I believe you understand the roundoff problem, something that, as far
as I can tell, most programmers, and certainly most practitioners,
have little appreciation of.

I once had a co-worker who meticulously went through a turbulence code
to try to discover where it went wild. He ended up rewriting the code
to instruct the compiler explicitly in what order to perform
operations. This appeared to help. Of course, we then went to another
platform, and all hell broke loose again.
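
The fix, as best I remember, amounted to something like the sketch
below (not his actual code; the values are mine, purely for
illustration): break the expression into named temporaries so the
compiler must evaluate in the written order. The volatile qualifier is
my addition; it keeps an optimizer from folding the temporaries back
together even under flags like -ffast-math.

  #include <stdio.h>

  int main(void) {
      double b = 0.1, c = 0.2, d = 0.3;

      /* loose form: a reassociating optimizer may regroup b*c*d */
      double loose = b * c * d;

      /* pinned form: volatile temporaries fix the evaluation order */
      volatile double t1 = c * d;
      volatile double t2 = b * t1;
      double pinned = t2;

      printf("loose  = %.17g\n", loose);
      printf("pinned = %.17g\n", pinned);
      return 0;
  }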

There are, or have been, people who study the stability of a code.
Generally this is not possible analytically; it can only be done
numerically. We used to develop numerical models that describe the
roundoff, or random, component of a calculation. It always grew
exponentially. Averaging does help, which is why we were always more
confident of integrated quantities. Numerical roundoff is a type of
turbulence; indeed, it ought to be possible to treat it with a
turbulence model.
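
A minimal sketch of the kind of model I mean, with the logistic map
standing in for a real code: inject a random kick of order machine
epsilon at each step as a stand-in for roundoff, and watch the
pointwise difference between the kicked and unkicked runs grow
exponentially until it saturates, while the difference of the running
averages stays far smaller.

  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>
  #include <float.h>

  int main(void) {
      double x = 0.4, y = 0.4;      /* identical initial states */
      double sx = 0.0, sy = 0.0;    /* running sums for the averages */
      for (int n = 1; n <= 200; n++) {
          x = 3.9 * x * (1.0 - x);  /* reference chaotic iteration */
          y = 3.9 * y * (1.0 - y);
          /* model roundoff as a random kick of order DBL_EPSILON */
          y += DBL_EPSILON * ((double)rand() / RAND_MAX - 0.5);
          sx += x; sy += y;
          if (n % 40 == 0)
              printf("step %3d  pointwise |x-y| = %8.1e   "
                     "mean |x-y| = %8.1e\n",
                     n, fabs(x - y), fabs(sx - sy) / n);
      }
      return 0;
  }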

It would be simple to study the sensitivity of a code to initial conditions.
But how many people even do this?
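
The whole experiment fits in a dozen lines. Perturb the initial
condition in the last few digits, run twice, and look at the log of
the separation; a roughly constant upward slope is the signature of
exponential sensitivity (and an estimate of the Lyapunov exponent).
For a real code you would perturb a field and rerun the timestep loop,
but the shape of the test is the same as in this toy:

  #include <stdio.h>
  #include <math.h>

  int main(void) {
      double x = 0.4;            /* reference initial condition */
      double y = 0.4 + 1e-13;    /* perturbed in the 13th digit */
      for (int n = 1; n <= 40; n++) {
          x = 3.9 * x * (1.0 - x);
          y = 3.9 * y * (1.0 - y);
          if (n % 5 == 0)
              printf("step %2d   log10|x-y| = %6.2f\n",
                     n, log10(fabs(x - y)));
      }
      return 0;
  }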

I can remember little of this now. I am only certain that it was not
an active area of research, as it did not fit the PI's notion of
appropriate publicity.

bill

On Sun, 30 Aug 2009 16:10:57 -0400, Dave Wallace <wmdavid.wallace@gmail.com> wrote:
> Bill Powers wrote:
>>
>> It was the opinion of most of the oldtimers (I'm not quite old enough
>> to be considered one) that numerical simulations were interpolation
>> routines, not simulation routines. And in order to interpolate you
>> need something to interpolate between, that is, you need experimental
>> results. I have long suggested that the same is true of the climate
>> simulation models, which is why I have little confidence in their
>> forecasting, esp. long-term forecasting, computations.
> I agree and asked in 2007 if experimental results on even a small scale
> are available to validate the models. I wrote:
>
>> The March 2007 edition of Scientific American page 71 shows a picture
>> of a "RIVER MODEL at the National Center for Earth Dynamics". The
>> constructed model is being used to study how sediments move in rivers.
>> I seem to recall that other such physical models of rivers have been
>> constructed. At first thought it would seem improbable that any
>> reasonably sized scale model of a river could result in useful data.
>> I suspect they also use digital models to aid their understanding of
>> sediment flows.
>>
>> Have any physical models been attempted for climate? If so what results?
>>
>> I understand that the Vehicle Assembly Building at the cape in Florida
>> has some weather effects.
> I do not recall any replies.
>
>
>> It is especially as you add increasingly complex, nonlinear,
>> interrelated physics packages that you begin to realize the necessity
>> of a smart code user and developer who learns to tweak the code and
>> ignore certain complexities. Some purists believe that the more
>> detail and physics we can add to the code, the better off you are.
>> This is by no means clear. The more you do this, the more chaotic
>> the code becomes and the more sensitive to the smallest of
>> variations, including roundoff.
>
> Not that I have strong reason to doubt the current understanding, but
> I would find it more convincing if more of the physics were understood
> well enough to validate things like the magnitude and sign of the
> feedback factors. Even for what the models do simulate, I have
> concerns about chaotic behavior, but I have been told that the
> averaging that goes on tends to minimize that concern. I have never
> received a strong argument in that area that caused me to say "Ah ha,
> I see now where I was wrong." Roundoff occurs at each step in the
> floating point calculations, so even if the models were perfect and
> the input data were known to infinite precision, one could still get
> chaotic behavior. When I was doing a Java interpreter for the Intel
> Itanium we had one or two old, incomplete sample implementations, and
> I could see that some hardware (DEC Alpha?) supported 128-bit floating
> point. Rich points out that the models are run using 64-bit floating
> point. I would hope that in a few cases the same run was performed
> using 128-bit floating point approximations and the result checked
> against the 64-bit run. I would be uncomfortable if the results
> differed by more than a small amount; i.e., agreement in Celsius
> temperature to 3 or 4 digits would be good.
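
That 64- versus 128-bit check is easy to sketch in miniature, assuming
only that the platform's long double is wider than double (80 bits on
x86, 128 bits on some machines). Run the same toy iteration as above
at both widths and count the agreeing digits; on a chaotic problem the
agreement decays by a roughly fixed number of digits per step:

  #include <stdio.h>
  #include <math.h>

  int main(void) {
      double x = 0.4;            /* the 64-bit run */
      long double xl = 0.4L;     /* the wider run, same model */
      for (int n = 1; n <= 50; n++) {
          x  = 3.9 * x * (1.0 - x);
          xl = 3.9L * xl * (1.0L - xl);
          if (n % 10 == 0)
              printf("step %2d  agreeing digits ~ %5.1Lf\n", n,
                     -log10l(fabsl((long double)x - xl) + 1e-30L));
      }
      return 0;
  }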
>
> Rich said that an open source version of a model has been made
> available. That seems to have pluses and minuses. More eyes looking
> at code is always a good thing; however, the approvers will have to
> be very careful, as something like the following could easily slip by:
>
> Original code:
>
> A = B * (C * D);
>
> Improved code:
>
> A = B * C * D;
>
> where A, B, C, D are all 64-bit floats.
>
> Sure, the improved version gives the same answer if we had infinite
> precision, but we don't, and floating point arithmetic is not
> associative. I have seen cases where code has been handed to less
> competent programmers or numerical analysts and the kind of change I
> illustrate has occurred. Compiler writers call honoring such
> parenthesized expressions producing "safe" code, because whether a
> reordering changes the results depends on the data. When the RT PC
> first shipped, its compilers did not produce safe code; the developers
> said that numerical values which could cause problems would never
> occur. Guess what: they occurred, and those compilers were withdrawn
> and replaced by better ones (from my group in Toronto). Naturally,
> unsafe code is often faster.
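
To make the associativity point concrete: in the extreme case below,
the grouping decides whether an intermediate product overflows, so the
two forms don't just differ in the last bit, one of them is infinite.
With ordinary operands the difference, when it shows up, is an ulp or
so, which is exactly the sort of seed a chaotic code amplifies. (The
values are mine, purely for illustration.)

  #include <stdio.h>

  int main(void) {
      /* extreme: grouping decides whether an intermediate overflows */
      double b = 1e200, c = 1e200, d = 1e-200;
      printf("b * (c * d) = %g\n", b * (c * d)); /* ~1e200, finite */
      printf("(b * c) * d = %g\n", (b * c) * d); /* inf: b*c overflows */

      /* ordinary: the groupings can differ in the last place */
      double p = 0.1, q = 0.2, r = 0.3;
      printf("p * (q * r) = %.17g\n", p * (q * r));
      printf("(p * q) * r = %.17g\n", (p * q) * r);
      return 0;
  }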
>
> Dave W

To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.