Bias in Science, Part 1

From: Randy Isaac <rmisaac@bellatlantic.net>
Date: Wed May 11 2005 - 09:22:04 EDT

A few weeks ago there were several posts about the possible presence of bias in the scientific community, particularly in regard to the question of whether a paper by Baumgardner might be rejected by peer reviewers due to bias. I believe the issue of bias is very important and is critical to pursuing the ASA "commitment to integrity in science." I'd like to share a few perspectives on the issue of bias in science. Since I may get long-winded, I'll separate it into several parts over several days. In this first part I'd like to make some basic comments on bias. In the second, I'll look specifically at bias in the case of Baumgardner's paper, and in the third and final part, I'll consider the influence of religion on bias in science.

I believe there are two major types of bias in the scientific community that we need to recognize:

1) Prejudicial bias. The tendency for a prejudice, or an a priori desire or preference for a particular result, to influence the analysis and the outcome of a scientific investigation.

2) Scientific bias. The tendency for anomalous results, namely those not expected on the basis of established scientific knowledge, to be rejected, particularly if the results directly contradict previously well-documented results.

The first is to be studiously avoided and the second is to be encouraged and cultivated. The key to achieving both aims is to adhere to a rigorous scientific methodology, developed over several centuries and proven both effective and necessary in establishing objective results.

Most people seem to recognize the existence of prejudicial bias, though few do enough to avoid it. I think high-energy physicists are the best at instituting practices that guard against prejudicial bias. Their experiments involve so many people, so much time, so much money, and such complex infrastructure, with so much at stake, that every possible safeguard must be exercised. Typically, the raw data will be deliberately offset by a parameter unknown to those doing the analysis, which is removed only at the last moment to reveal the final result. In an industrial lab of the type I have managed, the competitive pressure to succeed is enormous and the tendency toward prejudice in interpreting data is powerful. One of my biggest responsibilities was to be vigilant and ensure avoidance of such prejudice in order to maintain the quality of the lab and the accuracy of the results. Yet it was not an easy task to accomplish. I can cite several technical and business decisions that, in hindsight, were based on data whose interpretation was prejudiced.
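The blinding practice described above can be sketched in a few lines. This is my own minimal illustration, not any particular lab's procedure, and all the numbers are made up: analysts work only on data shifted by a hidden offset, and the offset is subtracted exactly once, at the very end, after the analysis is frozen.

```python
# A minimal sketch of "blind analysis": the analysts never see the true
# values until the hidden offset is removed at the end.
# All data here is fabricated for illustration.
import numpy as np

rng = np.random.default_rng(42)
raw = rng.normal(loc=3.0, scale=0.1, size=100)  # made-up measurements

# The "blinder" draws a secret offset and shifts the data before
# handing it to the analysis team.
blind_offset = rng.uniform(-1.0, 1.0)           # known only to the blinder
blinded = raw + blind_offset                    # what the analysts see

# ... the analysts fix all cuts and fits using only the blinded data ...
blinded_estimate = blinded.mean()

# Unblinding happens once, after the analysis can no longer be tuned.
final_result = blinded_estimate - blind_offset
print(f"final result after unblinding: {final_result:.3f}")
```

Because the analysts cannot tell whether their intermediate numbers are "good" or "bad," there is nothing to steer toward, which is the whole point of the safeguard.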

But most people don't recognize the validity of scientific bias. There is an assumption that scientists should be completely unbiased and totally objective: that there should be no such thing as a scientific bias, since everything should be based on sound, objective scientific methodology. I would argue that a scientific bias is appropriate and proper for the very practical reason that a scientist simply does not have the time and opportunity to properly assess the quality of the scientific methodology behind every anomalous result. To make progress, a scientist can justifiably reject a claim of an anomalous result without taking the time to document the error of scientific methodology that led to the result. If, perchance, the result is correct, it will undoubtedly be reproduced by several reputable labs in the future and there will be sufficient opportunity to change one's mind.

To support my claim in the above paragraph I'd first like to make some general comments and then give a specific example. Scientists build on the work of previous scientists. When theories and concepts have been documented as being the best explanation of experimental results, have been independently reproduced, and are accepted by the scientific community, they are reasonable foundations on which future scientific work can build. An anomalous result can mean either that the previous experiments were wrong, that the scope of validity of the previous experiments needs to be refined, or that the anomalous result was due to an erroneous scientific methodology. I believe a scientist is justified in assuming the last position, even without a clear articulation or understanding of why the new results might be wrong. This can be described as a bias, but one based on centuries of experience where established results are seldom, if ever, revoked. Yes, we all may dream of discovering such a dramatic change, but that is almost always just a dream.

I'd like to share a specific example of where I was singed, and narrowly missed being badly burned, by an experiment influenced by prejudicial bias. I believe it illustrates how difficult it is to identify a flaw in scientific methodology.

A researcher I hired into IBM was well-liked and productive. One day he told me about an exciting idea he had for a new technological approach to a particular problem we had with our computer design. I encouraged him to pursue it and a year later he had some dramatic results. A key parameter was a factor of 2 higher than anything that had been seen in the industry. We were of course all cautious but nevertheless excited. Scrutinizing his work, it all seemed to make sense. There was a plausible physical explanation for the unprecedented result and the data seemed persuasive. I asked several people from our physical sciences group, who were generally skeptical, to review the work, but none had the time for an in-depth look and a cursory review seemed to indicate everything was done correctly. For another year, we organized groups to study how we might take commercial advantage of this significant disruptive advance in technology. Many people in our company became quite excited. But before publication, I finally persuaded a leading physicist to really sit down and crawl through the details with the researcher. After a few days, he came to me to say he had found a fatal flaw. Part of the researcher's analysis involved fitting the experimental data with a parabolic curve and evaluating the gradient at the intercept. No problem there--he had been very clear about this process. But it turns out that the value the researcher actually used in the subsequent calculations was not extracted from a fit to the entire data set, as he had implied, but only to the two data points nearest the intercept! When I confronted him with this peculiarity and pointed out the need to include an error bar assessment (which made clear that it was equally probable that he had no benefit at all), he amazingly claimed that his technical approach was surely correct and that by using this value, the conclusion would be more readily appreciated by those less knowledgeable in the field! 

It wasn't long before I was informing him that he was no longer welcome in our lab. His response: I and the entire team at IBM were biased against him and simply didn't want him to succeed. He left the company and persuaded a venture capitalist to provide him with $6M to pursue his idea, and now he is CEO of his own start-up! This summer he is scheduled to give a talk at a major conference--and he is scrambling to find some short-term products to appease the impatient venture capitalists.
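The flaw in the story can be made concrete with a toy calculation. This is purely my own illustration with fabricated numbers, not the researcher's actual data: the slope of a parabolic fit at the intercept looks very different depending on whether you fit the whole data set, as claimed, or effectively use only the two points nearest the intercept, where the noise dominates.

```python
# Toy illustration of the fitting flaw: gradient at the intercept from a
# full parabolic fit vs. from only the two points nearest the intercept.
# The data is invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 11)
true = 1.0 + 0.5 * x + 0.05 * x**2               # underlying parabola
y = true + rng.normal(scale=0.4, size=x.size)    # noisy measurements

# Fit a parabola to ALL the data; the gradient at x = 0 is just the
# linear coefficient (np.polyfit returns coefficients highest power first).
c2, c1, c0 = np.polyfit(x, y, 2)
slope_full = c1

# The shortcut from the story: a slope taken from only the two points
# nearest the intercept, which is dominated by measurement noise.
slope_two = (y[1] - y[0]) / (x[1] - x[0])

print(f"slope from full parabolic fit: {slope_full:+.2f}")
print(f"slope from two nearest points: {slope_two:+.2f}")
```

With noise of this size, the two-point slope has an uncertainty larger than the quantity being measured, which is exactly why an error-bar assessment would have shown "no benefit at all" to be equally probable.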

The moral of the story (among several!) is that it is often not easy to identify even a blatant flaw of scientific methodology. It requires expertise, time, and the cooperation of the researcher. Often these elements are not available. A scientist cannot usually afford that much time and must find a way of making progress. The practical approach is to develop a bias which ultimately, in the best scientists, becomes a finely honed instinct for which results are correct and which results should be doubted, if not outright rejected, even when it is not possible to identify the flaws at their source.

In summary, prejudicial bias is pervasive and must be actively avoided, while scientific bias is a necessary part of the scientific process: it helps us build on established knowledge, permits us to expand our knowledge in fruitful directions, and keeps us from being distracted by erroneous paths.

Randy.