Re: [asa] Creation Care

From: David Opderbeck <dopderbeck@gmail.com>
Date: Wed Jan 24 2007 - 22:30:47 EST

*... who are the final arbiters of controversies in scientific debates? I
think we need to educate the public about scientific methodology and the
need to rely on the scientific publication process as part of authoritative
opinion. Without that, there's no resolution.*

Randy, let me push back and play the contrarian a bit here. I hope everyone
will forgive a long response, but I find this fascinating and important.

Perhaps it's because of my background that I'm not quite ready to apply the
word "authoritative" to the scientific publication process or the general
progress of professional science. Early in my legal career, I worked on
big product liability litigation -- asbestos, DES, Prozac, and breast
implants -- on behalf of pharma companies and other manufacturers. As a
result, I think I have some hands-on experience with how the scientific
process works in a politically charged context.

We have recently seen reports of how the peer review and publication process
with respect to pharmaceuticals has been influenced by the industry in
response to regulatory and product liability concerns. Many of the
published studies concerning the safety and efficacy of compounds that
become blockbuster drugs are directly funded by the pharma companies, the
academic centers that produce the research are heavily funded by industry,
and the journal peer reviewers often have ties to the industry. This
doesn't mean the science is all bad, but it does mean that it isn't beyond
criticism or even authoritative simply because it has passed peer review.
Indeed, when I was working on the Prozac cases, the literature consistently
denied any causal link between SSRI-class antidepressants and suicidal
thoughts; in recent years, however, the contrarian position has caught the
FDA's attention, at least as to the use of these drugs in adolescents. The
evidence presented in the recent Vioxx cases also demonstrates pretty
convincingly how the publication and peer review process and the scientific
consensus can be captured by special interests.

This illustrates that, while courts must give heavy weight to scientific
consensus, there is always room to challenge the consensus (this is what the
controversial *Daubert v. Merrell Dow* case on the admissibility of expert
scientific testimony is all about). At the end of the day, a court's
decision is supposed to be based on evidence and reason, not on any expert's
purported authority. Experts assist the court and the jury, but they do not
decide the matter. I think this is exactly as it should be in the courts in
a free and democratic society. It is also, I think, as it should be in the
political process in a free and democratic society. The "final arbiter,"
ultimately, is and must be the people, not any one community, scientific or
not.

Of course, with respect to global warming, the big money interests are the
contrarians, so perhaps that gives us even more reason to trust the
literature in this particular instance. However, I think a *general* principle
of "just trust the literature" ultimately is anti-intellectual and
dangerous.

Let me further illustrate this with an example from a field that, at least
to me, is far more impenetrable than climate science: theoretical physics.
Recently I read Lee Smolin's interesting book "The Trouble With Physics."
Smolin decries the "consensus" among cosmologists that string theory must be
correct. Smolin's chapters entitled "How Do You Fight Sociology," "What Is
Science," and "How Science Really Works" are well worth the price of the
book. Here is how Smolin describes the sociology of the string theory
community:

1. Tremendous self-confidence, leading to a sense of entitlement and of
belonging to an elite group of experts.
2. An unusually monolithic community, with a strong sense of consensus,
whether driven by the evidence or not, and an unusual uniformity of views on
open questions. These views seem related to the existence of a hierarchical
structure in which the ideas of a few leaders dictate the viewpoint,
strategy, and direction of the field.
3. In some cases, a sense of identification with the group, akin to
identification with a religious faith or political platform.
4. A strong sense of the boundary between the group and other experts.
5. A disregard for and disinterest in the ideas, opinions, and work of
experts who are not part of the group, and a preference for talking only
with other members of the community.
6. A tendency to interpret evidence optimistically, to believe exaggerated
or incorrect statements of results, and to disregard the possibility that
the theory might be wrong. This is coupled with a tendency to believe
results are true because they are "widely believed," even if one has not
checked (or even seen) the proof oneself.
7. A lack of appreciation for the extent to which a research program ought
to involve risk.

(The Trouble With Physics, at p. 284.) Does this sound familiar? To some
extent, I think each of these points could apply to some people in the
environmentalist community (and dare I say it, I think they also can apply
in many ways to some people in evolutionary biology).

In the chapter "How Science Really Works," Smolin makes the following
observation about university hiring and peer review:

There are certain features of research universities that discourage change.
The first is peer review, the system in which decisions about scientists are
made by other scientists. Just like tenure, peer review has benefits that
explain why it's universally believed to be essential for the practice of
good science. But there are costs, and we need to be aware of them. ... An
unintended by-product of peer review is that it can easily become a
mechanism for older scientists to enforce direction on younger scientists.
This is so obvious that I'm surprised at how rarely it is discussed. The
system is set up so that we older scientists can reward those we judge
worthy with good careers and punish those we judge unworthy with banishment
from the community of science. This might be fine if there were clear
standards and a clear methodology to ensure our objectivity, but, at least
in the part of the academy where I work, there is neither.

(The Trouble With Physics, at p. 333) (I should be clear that Smolin seems
to be speaking of "peer review" primarily in terms of departmental hiring
decisions, but I think he intends to cover everything from hiring to what
constitutes an acceptable research agenda for publication).

At the conclusion of his book, Smolin says the following: *"To put it more
bluntly: If you are someone whose first reaction when challenged on your
scientific beliefs is 'What does X think?' or 'How can you say that?
Everybody knows that ...,' then you are in danger of no longer being a
scientist."* (The Trouble With Physics, at p. 354).

Smolin certainly has a personal axe to grind, since his research agenda
swims against the consensus in his field (he rejects string theory and
promotes something called loop quantum gravity). But, IMHO, his observations
are trenchant, particularly when I factor them into my personal experience
with a politically charged scientific consensus that directly impacts public
policy.

A final point, given the ASA's faith perspective: IMHO, it's dangerous to
speak in terms of "authority" when dealing with scientific consensus because
we must recognize that the scientific community, like every other human
community, is deeply affected by sin. I don't think this implies an
anti-science attitude, or YEC thinking, or any such thing. It is simply an
appropriately Christian epistemic and social realism. The scientific
community is a human community, which means it is not entirely objective and
free from distorted interests and misplaced priorities.

So I would say this: yes, we must take seriously the consensus of working
scientists in any given field as reflected in the peer reviewed literature.
However, we must also retain the rational and political freedom to evaluate
consensus claims on the merits, being always mindful that the authority of
all human communities, including communities of science, is necessarily
limited by social dynamics and sin. Because of this, it's *irresponsible* to
ignore contrarian views, even if they are not a significant part of the
peer-reviewed literature. This is particularly true where the science in
question is critical to public policy and democratic debate. If the
contrarian view is clearly wrong, that should be demonstrable based on the
rational strength of the consensus view, without resort to arguments from
authority.

On 1/24/07, Randy Isaac <randyisaac@comcast.net> wrote:
>
> Thanks for a great post, Rich.
> I'll probably wrap up this phase of my life and focus on other priorities
> for a bit. But indulge me for one more observation. We all state lots of
> opinions (which is what this is all about) and get lots of links to all
> types of claims and technical assertions. We're more technically educated
> than the general public so we love to dig deeper and spar with anything we
> hear. The question in general is "whom should we trust?" I would suggest
> that first and foremost, we must rely on the peer-reviewed, independently
> reproduced, corroborated technical publications in the most respected
> journal of the relevant field. That must be the reference point. Virtually
> all comments appearing on this forum (certainly all of mine) do not meet
> that criterion. So we must test our statements against the published work.
> Technical-sounding papers like the Glassman article to which Janice
> provided a link haven't passed the scrutiny of being published through that
> criterion and can't be taken seriously until they do. All claims need to be
> tested.
>
> This is not to say that all published papers are correct. Probably not.
> But the scientific process is a self-correcting one and in a rather
> reasonable time frame, fraudulent papers, hoaxes, and results biased by
> catering to the will of the funder get weeded out. Work that doesn't pass
> the scrutiny of publication in the respected journals is almost surely
> wrong, though there are exceptions.
>
> It's very important to have such a reference point. We can't base anything
> on who shouts the loudest or who we think has the best argument or who is
> accusing whom of being sold out to the political process. I would suggest
> that we really don't have a reliable basis other than the technical
> publication method. Flawed as it may be, it has worked remarkably well in
> the last couple of hundred years and we need to work with it.
>
> So that means we can weed out anything not published. The next huge
> question is how to communicate and interpret the status of what has been or
> is published to the broader audience. I would suggest that this needs to be
> the responsibility of the members of that group of relevant experts.
> Unfortunately they're not always good at it. I think we know from our own
> field of expertise that the relevant literature is vast and outsiders (even
> scientists in slightly different fields) aren't really familiar with the
> status of the literature. The experts, though, are often too busy to spend
> time dealing with translating all that work.
>
> In most of our fields of expertise, the public has no interest and there's
> no need to communicate the latest results to anyone but graduate students.
> But in climate change the implications seem to be center stage of the public
> debate. Being in the middle of it the last few weeks really made me realize
> that most people don't have a good understanding of who to trust--who are
> the final arbiters of controversies in scientific debates? I think we need
> to educate the public about scientific methodology and the need to rely on
> the scientific publication process as part of authoritative opinion. Without
> that, there's no resolution.
>
> Randy
>
>
>
> ----- Original Message -----
> *From:* Rich Blinne <rich.blinne@gmail.com>
> *To:* Bill Hamilton <williamehamiltonjr@yahoo.com>; Janice Matchett <janmatch@earthlink.net>;
> PvM <pvm.pandas@gmail.com>
> *Cc:* Don Winterstein <dfwinterstein@msn.com> ; asa <asa@calvin.edu>
> *Sent:* Wednesday, January 24, 2007 11:34 AM
> *Subject:* Re: [asa] Creation Care
>
>
>
>
> On 1/24/07, Bill Hamilton <williamehamiltonjr@yahoo.com> wrote:
>
> > Don makes some good points. I've been involved with engineering projects
> > for
> > the past 35 years, and seldom have I seen an engineering project
> > correspond
> > exactly to the frequently extensive modeling that was done in the design
> > phases. When we could presumably control all aspects of the design, we
> > still
> > had to tweak and modify to account for circumstances that loomed larger
> > in
> > practice than in the models.
>
>
> Like you, I have 25 years of experience in engineering modelling. In my
> case, it is semiconductor device simulation. One thing I have not seen in
> either Don's or your post is how modelling has changed recently. 25 or 35
> years ago we made numerous simplifying assumptions just because it would
> take FOREVER to simulate. Now we have machines that are 5-10 orders of
> magnitude faster than when we started our careers. This means we can remove
> the heuristics -- a fancy scientific word for guess -- decrease our mesh
> size and time steps, and get really good results. Why do I have confidence
> in this? Well, we
> are communicating through devices that have been extensively simulated where
> the measured results match extremely well with the modelled behavior. As
> time went on we modelled more and more effects, e.g. resistance,
> crosstalk, short channel effects, etc. None of these were a great surprise
> from a physics perspective but they required more iron to do the
> simulations. The same holds true with climate modelling. When I look at the
> climate models I see the same well-formed models. In fact, I would have
> killed to get such good correlations at so many corners as these models (and
> I would like access to their computers too :-) ).
>
> One item specific to climate modelling also gives me confidence. The
> satellite data was not meshing with the models in that the high altitude
> temperatures were not as predicted. An error in the data collection was
> fixed and then the models matched. This discrepancy caused a literal act of
> Congress to determine why the models were not so good. Since then much has
> been done to make these models much, much better. There is still work to do
> in order to more accurately predict the small-scale implications of global
> warming because these are important for good public policy. But, the models
> are accurate enough now for the upcoming IPCC AR4 to have a 90% confidence
> that we are experiencing global warming and that it's primarily caused by
> anthropogenic greenhouse gases.
>
> Models matching the current (and now extensive) climate data are a
> necessary condition for any alternative hypotheses now. None of Janice's
> skeptics have put forth their own computer model (you can check for this
> yourself by doing a Google Scholar search for the author) and shown that it
> matches either the current or paleoclimate data better. It's not that these
> individuals have presented research results that got pooh-poohed by the
> mainstream; they have offered literally NOTHING. If I were Pim I would call
> it vacuous. :-) From now on, Janice will need to reference papers that have
> these models in them. For example, she needs to show that a model that has
> solar forcing more than 5X smaller than GHG forcing fits the curve better
> than the current model. Climate science is now quantitative, and
> qualitative hand-waving no longer cuts it.
>
>
>
>
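
For readers who want to see Rich's point about mesh sizes and time steps
concretely, here is a minimal Python sketch of my own (a toy example, not
anything taken from Rich's device or climate simulations; the function name,
equation, and parameter values are made up for illustration). It integrates
the simple decay equation dy/dt = -k*y with explicit Euler steps and shows
the error against the exact solution shrinking as the time step is refined:

  # Toy sketch: explicit Euler for dy/dt = -k*y.
  # Refining the time step dt drives the numerical answer toward the
  # exact solution y0*exp(-k*t), in the spirit of the remark above that
  # smaller mesh sizes and time steps (plus more compute) give better
  # results.
  import math

  def euler_decay(k, y0, t_end, dt):
      """Integrate dy/dt = -k*y with explicit Euler steps of size dt."""
      y = y0
      for _ in range(int(round(t_end / dt))):
          y += dt * (-k * y)
      return y

  k, y0, t_end = 1.0, 1.0, 1.0
  exact = y0 * math.exp(-k * t_end)
  for dt in (0.1, 0.05, 0.01, 0.001):
      approx = euler_decay(k, y0, t_end, dt)
      print(f"dt={dt:<6} error={abs(approx - exact):.1e}")

Run as-is, it prints the error at t = 1 for a few step sizes; halving dt
roughly halves the error, which is the first-order convergence you expect
from explicit Euler. Production simulators use far better schemes, but the
basic trade-off -- finer resolution plus more compute yields answers that
track the underlying physics more closely -- is the same one Rich describes.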

-- 
David W. Opderbeck
Web:  http://www.davidopderbeck.com
Blog:  http://www.davidopderbeck.com/throughaglass.html
MySpace (Music):  http://www.myspace.com/davidbecke