*I would suggest that it might help to ask ourselves the following questions
when coming across such an article:*
So Lee Smolin's book on string theory should be binned? It doesn't seem to
pass any of these tests.
I agree that these tests are useful and that they are the measure of a
publication's value *within the relevant scientific community*. It probably
is helpful for the general public to understand how public arguments made by
a scientist stack up against the norms of the scientific community.
However, I'm a little nervous about using these norms as a general test for
what the public should regard as truthful. Mavericks, malcontents and
misfits often have valuable things to say -- sometimes precisely because
they are outsiders. (I can imagine what the Pharisees said about whether
reasonable people should believe Jesus.)
On 2/12/07, Randy Isaac <randyisaac@comcast.net> wrote:
>
> Rich did a great job of addressing the technical aspects. I think this is
> also a good opportunity to consider how to assess articles like this.
> Whether the topic is global warming, evolution, the age of the earth, ID, or
> any other controversial issue where the public gets into scientific topics,
> articles like this seem to abound on all sides of the controversy. The key
> question is, how do we know whether we can trust the article, or how
> seriously to take its conclusions?
>
> I would suggest that it might help to ask ourselves the following
> questions when coming across such an article:
>
> 1) Does the article describe work that has been published in a
> peer-reviewed, respectable technical journal in that field of expertise?
> a) If so, has the work been independently reproduced by a different
> lab or is it an isolated paper?
> b) If not, was the work rejected in peer review, or simply not
> submitted for publication?
> 2) Does the work provide new data? If so, has the data been independently
> reproduced?
> 3) Does the work offer a new interpretation of previously unexplained
> data? If so, does the interpretation fit with data that are well understood?
> 4) Does the work offer a new interpretation of data previously thought to
> be well understood? If so, does the new model fit all previously explained
> data as well as make verified predictions beyond any previous models?
> 5) Is any lack of acceptance by the technical community blamed on a
> "conspiracy" of peer pressure or funding agency pressure? (this is a typical
> symptom of either disgruntled scientists whose papers were rejected in peer
> review or of amateurs who don't like the mainstream scientific conclusions)
>
> I don't think I need to provide a scoring sheet for any of you. This won't
> tell us whether the work presented is true or false, but it gives a pretty good
> indication of how seriously to take it. In the end, the ideas can only
> be assessed on the strength of the technical merits. That takes a lot of
> time and effort and usually requires deep expertise in the field. The above
> perspective merely helps determine whether an idea is worth that effort.
>
> Randy
>
>
>
> ----- Original Message -----
> *From:* Dawsonzhu@aol.com
> *To:* asa@calvin.edu
> *Sent:* Sunday, February 11, 2007 9:19 PM
> *Subject:* Re: [asa] IPCC
>
>
>
> I came across this recently. It doesn't say a whole lot, but the author
> is supposedly a former editor of the New Scientist.
> *
> An experiment that hints we are wrong on climate change*
> http://www.timesonline.co.uk/tol/news/uk/article1363818.ece
>
>
>
--
David W. Opderbeck
Web: http://www.davidopderbeck.com
Blog: http://www.davidopderbeck.com/throughaglass.html
MySpace (Music): http://www.myspace.com/davidbecke