On 2/26/07, Janice Matchett <janmatch@earthlink.net> wrote:
>
> That's fine - if you actually "believe" that it's possible for
> computers to accurately "model" a chaotic system like climate.
> http://www.calvin.edu/archive/asa/200702/0480.html
>
> To test their "prophecies" let their 100-year computer models run out for
> 10 years and see if actual events match the 10-year predictions. If they
> do, then we can start thinking about making major policy decisions. Until
> then - forget it.
>
Been there, done that, got the tee-shirt. The average temperatures matched
the predictions over the period from 1850 to the present, but *only if you
include anthropogenic effects*. The difference between climate models and
weather models is not the underlying physics or the structure of the model,
which are the same, but what is being predicted. In the case of climate
models, the prediction is global average temperature, which is not as
chaotic as whether it will rain tomorrow in Des Moines. Since we don't need
to predict the exact sequence of events, we can do a multiple-run
ensemble over the parametrization phase space and see what kind of
temperature distribution we get. This is the spread you see in the IPCC
reports, where you get a range of answers. Nevertheless, it looks like a
bell-shaped curve with the most likely answer near the middle.
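The multiple-run ensemble idea can be sketched in a few lines of Python. This is a toy illustration I'm making up for the sake of argument, not a real climate model: the "model" is just forced warming plus chaotic year-to-year noise, and the uncertain parameter range (0.01-0.03 deg/yr) is invented for the example.

```python
import random
import statistics

def toy_climate_run(sensitivity, rng):
    """One toy 'model run': 100 years of forced warming plus weather noise.

    Individual runs are unpredictable, but that is not what we predict.
    """
    temp = 0.0
    for _ in range(100):  # 100 model years
        temp += sensitivity + rng.gauss(0.0, 0.1)  # forcing + chaotic noise
    return temp

# Ensemble: many runs, each drawing the uncertain parameter from its
# plausible range (the "parametrization phase space" of the example).
rng = random.Random(42)
ensemble = [toy_climate_run(rng.uniform(0.01, 0.03), rng)
            for _ in range(1000)]

mean = statistics.mean(ensemble)
stdev = statistics.stdev(ensemble)
# Runs disagree about any single year, yet the distribution of
# century-scale warming is bell-shaped with a well-defined center -
# which is the spread-plus-best-estimate you see in the IPCC figures.
```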
For a detailed description of what is behind climate models, a good survey
can be found in this month's Physics World, here:
http://physicsweb.org/articles/world/20/2/3/1
If you want to play with your own climate model, you can download one here:
www.climateprediction.net
Some interesting quotes from Physics World:
> But given that weather forecasts are unreliable for more than a few days
> ahead, how can we hope to predict climate, say, tens or hundreds of years
> into the future? Part of the answer lies in climate being the *average* of
> weather conditions over time. We do not need to predict the exact sequence
> of weather in order to predict future climate, just as in thermodynamics we
> do not need to predict the path of every molecule to quantify the average
> properties of gases.
Or to put it another way: even though molecular motion is chaotic, that
doesn't stop us from writing
PV = nRT
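The gas-law analogy is easy to check numerically. In this sketch (units chosen so m = kT = 1 for simplicity), each molecule's velocity is drawn at random and is individually unpredictable, yet the average kinetic energy - which is what fixes temperature via (1/2)m<v^2> = (3/2)kT - comes out sharply defined:

```python
import random

random.seed(0)
N = 100_000        # number of molecules sampled
m, kT = 1.0, 1.0   # toy units
sigma = (kT / m) ** 0.5  # per-axis velocity spread at temperature kT

def kinetic_energy():
    # Velocity components drawn from the Maxwell-Boltzmann distribution:
    # no individual draw is predictable, only the statistics are.
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    return 0.5 * m * (vx**2 + vy**2 + vz**2)

mean_ke = sum(kinetic_energy() for _ in range(N)) / N
# mean_ke converges to (3/2)*kT even though every molecule is "chaotic".
```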
> Improvements in computing power since the 1970s have been crucial in
> allowing additional processes to be included. Although current models
> typically contain a million lines of code, we can still simulate years of
> model time per day, allowing us to run simulations many times over with
> slightly different values of physical parameters (see for example
> www.climateprediction.net). This allows us to assess how sensitive the
> predictions of climate models are to uncertainties in these values. As
> computing power and model resolution increase still further, we will be able
> to resolve more processes explicitly, reducing the need for parametrization.
>
>
> The accuracy of climate models can be assessed in a number of ways. One
> important test of a climate model is to simulate a stable "current climate"
> for thousands of years in the absence of forcings. Indeed, models can now
> produce climates with tiny changes in surface temperature per century but
> with year-on-year, seasonal and regional changes that mimic those observed.
> These include jet streams, trade winds, depressions and anticyclones that
> would be difficult for even the most experienced forecaster to distinguish
> from real weather, and even major year-on-year variations like the El
> Niño–Southern Oscillation.
>
> Another crucial test for climate models is that they are able to reproduce
> observed climate change in the past. In the mid-1990s Ben Santer at the
> Lawrence Livermore National Laboratory in the US and colleagues strengthened
> the argument that humans are influencing climate by showing that climate
> models successfully simulate the spatial pattern of 20th-century climate
> change only if they include anthropogenic effects. More recently, Peter
> Stott and co-workers at the Hadley Centre showed that this is also true for
> the temporal evolution of global temperature (see figure 3). Such results
> demonstrate the power of climate models in allowing us to add or remove
> forcings one by one to distinguish the effects humans are having on the
> climate.
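The add-or-remove-forcings test described above can be caricatured in code. Everything here is a hypothetical stand-in - the "natural cycle," the greenhouse ramp, and the noise level are invented numbers, not real forcing data - but the logic mirrors the Santer/Stott-style comparison: hindcast the record with and without the anthropogenic term and see which fits.

```python
import math
import random

random.seed(1)
years = range(1900, 2001)

# Hypothetical components (illustrative only):
natural = [0.1 * math.sin(2 * math.pi * (y - 1900) / 60) for y in years]
anthro = [0.008 * max(0, y - 1950) for y in years]  # ramp after 1950
observed = [n + a + random.gauss(0.0, 0.05)         # pseudo-observations
            for n, a in zip(natural, anthro)]

def rms_misfit(model):
    """Root-mean-square difference between a hindcast and the record."""
    return (sum((o - m) ** 2 for o, m in zip(observed, model))
            / len(observed)) ** 0.5

natural_only = rms_misfit(natural)
with_anthro = rms_misfit([n + a for n, a in zip(natural, anthro)])
# The natural-only hindcast misses the late-century rise; adding the
# anthropogenic term brings the misfit down to the noise level.
```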
>
> Climate models can also be tested against very different climatic
> conditions further in the past, such as the last ice age about 9000 years
> ago and the Holocene warm period that followed it. As no instrumental data
> are available from this time, the models are tested against "proxy"
> indicators of temperature change, such as tree rings or ice cores. These
> data are not as reliable as modern-day measurements, but climate models have
> successfully reproduced phenomena inferred from the data, such as the
> southward advance of the Sahara desert over the last 9000 years.
>
So, according to your test we can start making major policy decisions,
because we can and have made accurate predictions using these models. We do
it all the time - e.g. when Max Mayfield picked up the phone on Friday to
warn Ray Nagin that Katrina was coming, but because of "uncertainty" Nagin
procrastinated until Sunday. According to your thinking, Nagin should not be
held culpable for failing to evacuate New Orleans, because he would have
been basing major (and expensive) policy on the self-same model technology.
So, if we want to do policy as well as Nagin did, we should follow his
approach of dealing with uncertainty by procrastinating.
Received on Mon Feb 26 15:33:36 2007