Re: [asa] NASA - Climate Simulation Computer Becomes More Powerful

From: Rich Blinne <rich.blinne@gmail.com>
Date: Sun Aug 30 2009 - 22:14:12 EDT

On Aug 30, 2009, at 6:59 PM, wjp wrote:

> Dave:
>
> I believe you understand the roundoff problem, something that, as far
> as I can tell, most programmers, and certainly most practitioners,
> have little appreciation of.
>
> I once had a co-worker who meticulously went through a turbulence
> code to try to discover where it went wild. He ended up rewriting the
> code to explicitly instruct the compiler in what order to perform
> operations. This appeared to help. Of course, we then went to another
> platform, and all hell broke loose again.
>
> There are, or have been, people who study the stability of a code.
> Generally, this is not possible analytically; it can only be done
> numerically. We used to develop numerical models that describe the
> roundoff, or random, component of a calculation. It always grew
> exponentially. Averaging does help, which is why we were always more
> confident of integrated quantities. Numerical roundoff is a type of
> turbulence; indeed, it ought to be possible to model it with a
> turbulence model.

This is precisely why these models fall apart when doing weather
forecasting but not climate forecasting: the averaging is done in the
latter.
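
To make the roundoff point concrete, here is a minimal Python sketch
(my own toy example, not from any climate code) showing that
floating-point addition is not associative: summing the same numbers
in a different order gives a different total, which is why forcing
the compiler's evaluation order helped, and why a new platform broke
it again.

    # Floating-point addition is not associative: the same numbers
    # summed in different orders give slightly different results.
    import math
    import random

    random.seed(42)
    values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
              for _ in range(100_000)]

    forward = 0.0
    for v in values:
        forward += v

    backward = 0.0
    for v in reversed(values):
        backward += v

    # math.fsum tracks exact partial sums, giving a correctly rounded
    # reference total.
    exact = math.fsum(values)

    print("forward sum: ", repr(forward))
    print("backward sum:", repr(backward))
    print("fsum (exact):", repr(exact))
    print("difference:  ", repr(forward - backward))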

>
> It would be simple to study the sensitivity of a code to initial
> conditions. But how many people even do this?

This is done every time a climate modeling run is made. The models are
run many times over ensembles of randomly perturbed initial conditions.
For example, in the model validation project for climateprediction.net,
the initial-condition perturbations are created by computing next-day
differences in the 3D field of potential temperature from the long run.
There are 1,741 of these perturbations; combined with 39 starting
conditions, this creates an ensemble of almost 70,000 members (1,741 x
39 = 67,899). The spread of the different ensembles gives you a sense
of the "turbulence". There are also quite a few different climate
modeling programs; the open source model mentioned in the Science paper
is just one of those many models. Not only do these models agree with
the instrumental record, they agree with each other. This is all well
and good when projecting global average temperature, but now we want
local conditions, and that means a super-duper long-range weather
forecast. That's why we need compute monsters like the one announced on
the NASA web page.
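
A toy illustration of the ensemble idea (the Lorenz-63 equations stand
in for a climate model here; real runs perturb full 3D model fields,
not three numbers): start many copies of a chaotic system from
slightly perturbed initial conditions. The individual trajectories
diverge quickly, which is the weather-forecasting problem, but their
long-time averages stay tightly clustered, which is what climate
projections rely on.

    # Toy ensemble on the chaotic Lorenz-63 system (illustrative only).
    import random

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One forward-Euler step of the Lorenz-63 equations.
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    random.seed(0)
    n_members = 50
    n_steps = 20_000
    # Perturb the initial x-coordinate by a tiny random amount per member.
    members = [(1.0 + random.gauss(0.0, 1e-6), 1.0, 20.0)
               for _ in range(n_members)]

    time_means = [0.0] * n_members
    for _ in range(n_steps):
        members = [lorenz_step(m) for m in members]
        for i, (x, _, _) in enumerate(members):
            time_means[i] += x / n_steps

    # "Weather": instantaneous states have spread across the attractor.
    xs = [m[0] for m in members]
    print("instantaneous spread of x:", max(xs) - min(xs))

    # "Climate": long-time averages remain close together.
    print("spread of time-averaged x:", max(time_means) - min(time_means))

The instantaneous spread plays the role of the "turbulence" above; the
clustering of the averages is why integrated quantities can be trusted.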

>
> I can remember little of this now. I am only certain that it was not
> an active area of research, not fitting the PI's notion of
> appropriate publicity.

And you would be wrong. For example, see the following paper, which
looked at the effect of ensemble spread as applied to the question of
climate sensitivity:

Stainforth et al., "Quantification of modelling uncertainties in a
large ensemble of climate change simulations," Nature 430, 768-772 (12
August 2004). http://www.nature.com/nature/journal/v430/n7001/full/nature02771.html
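
In the same spirit, here is a hypothetical sketch of the bookkeeping
such studies do (the numbers below are made up, not data from the
Nature paper): each ensemble member yields one climate-sensitivity
estimate, and the modelling uncertainty is reported as a median plus a
5-95% range of the spread.

    # Summarizing ensemble spread as an uncertainty range.
    # The "sensitivities" are made-up numbers, purely illustrative.
    import random
    import statistics

    random.seed(1)
    # Pretend each ensemble member produced one sensitivity estimate (K).
    sensitivities = sorted(random.lognormvariate(1.1, 0.25)
                           for _ in range(2000))

    def percentile(sorted_vals, p):
        # Nearest-rank percentile of an already sorted list.
        idx = min(len(sorted_vals) - 1, int(p / 100.0 * len(sorted_vals)))
        return sorted_vals[idx]

    print("median sensitivity: %.2f K" % statistics.median(sensitivities))
    print("5-95%% range: %.2f K to %.2f K"
          % (percentile(sensitivities, 5), percentile(sensitivities, 95)))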

The models are constantly checked for all kinds of uncertainty caused
by this and other factors, too. Many of the 350 papers listed in this
bibliography address these uncertainties and deviations from the
instrumental record; the paper from last Friday was in the latter
category. http://www.ccsm.ucar.edu/publications/bibliography.html

Rich Blinne
Member ASA
