Doing what Randy suggested can be daunting, so here are some techniques I
use to evaluate scientific claims:
Google Scholar (scholar.google.com) is your friend. Use it. In
particular, do an author search on the cited author along with keywords
associated with the topic. Also do a general search on the author to get
a sense of his or her CV and interests.
1. See how many papers come up, and how recent they are.
2. Are they cited by others?
3. Is there a diversity of journals that have published this author, or
is the work just "published" on the author's web site? Sites like arXiv
are fine if they host a pre-publication preprint of an otherwise
published article. If a personal web site is all you see, run away.
4. Follow the citations and see what colleagues think.
5. Do the citations prove or disprove the contentions?
6. Go to the methods section of the paper and evaluate that.
7. Is the data publicly available, and can you yourself replicate the
results? This was where the Svensmark paper failed, BTW. I was able to
see he was cherry-picking the data interval and thus not taking into
account the periodicity of GCRs (galactic cosmic rays).
8. Did the journal that published the paper do a review article that
can get you up to speed on the area in question? Read that.
9. Is the paper within the author's area of expertise?
10. Did the popular article summarize the content properly? This is
probably the most common abuse in that the research itself is sound
but the popularizer totally botches it. Biggest offenders here: Rush
Limbaugh, WSJ Opinion Page, The Daily/Sunday Telegraph. This is not my
politics speaking here; my politics lean center-right. I have simply had
too much experience of running down the stories and finding that the
outlets above are often wildly inaccurate.
11. Avoid blogs that don't cite the journal papers, particularly ones
that lean heavily to the right or left and get into conspiracy theories.
If you do use them, treat them only as a jumping-off point and go to the
cited works. And PLEASE don't quote them here. Do the legwork, find
what is being referenced, and quote that. The same holds for Wikipedia,
although I have found it surprisingly accurate (just as Nature did when
comparing it to Encyclopaedia Britannica). The bottom line is not that
you can't use these sources as jumping-off points, but that you need to
follow what is being quoted and verify it. There's a reason citing
Wikipedia in your high school/college paper automatically flunks it.
On 2/12/07, Randy Isaac <randyisaac@comcast.net> wrote:
>
>
> Rich did a great job of addressing the technical aspects. I think this is
> also a good opportunity to consider how to assess articles like this.
> Whether the topic is global warming, evolution, the age of the earth, ID, or
> any other controversial issue where the public gets into scientific topics,
> articles like this seem to abound on all sides of the controversy. The key
> question is, how do we know whether we can trust the article, or how
> seriously to take its conclusions?
>
> I would suggest that it might help to ask ourselves the following questions
> when coming across such an article:
>
> 1) Does the article describe work that has been published in a
> peer-reviewed, respectable technical journal in that field of expertise?
> a) If so, has the work been independently reproduced by a different lab
> or is it an isolated paper?
> b) If not, was the work rejected by peer review? or simply not submitted
> for publication?
> 2) Does the work provide new data? If so, has the data been independently
> reproduced?
> 3) Does the work offer a new interpretation of previously unexplained data?
> If so, does the interpretation fit with data that are well understood?
> 4) Does the work offer a new interpretation of data previously thought to be
> well understood? If so, does the new model fit all previously explained data
> as well as make verified predictions beyond any previous models?
> 5) Is any lack of acceptance by the technical community blamed on a
> "conspiracy" of peer pressure or funding agency pressure? (this is a typical
> symptom of either disgruntled scientists whose papers were rejected in peer
> review or of amateurs who don't like the mainstream scientific conclusions)
>
> I don't think I need to provide a scoring sheet for any of you. This won't
> tell whether the work presented is true or false but it gives a pretty good
> indication of how seriously to take the work. In the end, the ideas can only
> be assessed on the strength of the technical merits. That takes a lot of
> time and effort and usually requires deep expertise in the field. The above
> perspective merely helps determine whether an idea is worth that effort.
>
> Randy
>
>
>
> ----- Original Message -----
> From: Dawsonzhu@aol.com
> To: asa@calvin.edu
> Sent: Sunday, February 11, 2007 9:19 PM
> Subject: Re: [asa] IPCC
>
>
> I came across this recently. It doesn't say a whole lot, but the author
> is supposedly a former editor of the New Scientist.
>
> An experiment that hints we are wrong on climate change
> http://www.timesonline.co.uk/tol/news/uk/article1363818.ece
>
>
>
To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.
Received on Tue Feb 13 10:27:36 2007
This archive was generated by hypermail 2.1.8 : Tue Feb 13 2007 - 10:27:36 EST