Science in Christian Perspective



Ethical Decisions In Social Science Research*

From: JASA 14 (September 1962): 71-74.

From time to time, intellectuals parade their anxieties just as other men do. And just as other men, intellectuals seek a vocabulary which justifies their action, in this case parading their anxieties. Unlike other men, intellectuals tend to use a polysyllabic vocabulary in their assertions. This is how we know they are truly intellectuals!

One of the commonest attempts to express basic concerns is to wage polemic on the combined topic of science and religion. In the past, combatants allied themselves in a categorical either-or arrangement. The shouting tended to be loud; there were those who shouted the cause of science, and there were those who shouted the cause of religion. Few intellectuals took a "pox on both your houses" stand: man is both a knowing and an evaluating animal. Recently, however, voices are heard attempting to premise assertions on statements connecting science to religion.

We will presently examine some consequences of distinguishing science as a way of knowing how the world is put together and religion as a way of knowing how, at best, it ought to be arrayed. For the moment, let me allude to some significant differences in mood between the two.

From the time of Descartes on, most scientists felt and still feel the mood of doubt to be characteristic of science. One doubts basic facts, ways to get facts, conceptualizations, hypothetical connections between statements of facts, and so on, excepting only that science can answer questions of how. Religion, on the other hand, is seen to be predicated on faith. One has a religious faith; and to those addicted to scientism, science becomes a faith, a religion. In a sense, the scientist has faith in doubt; and the intellectually honest theologian doubts his faith. Those who would bridge the gap between the two are apparently seeking a faithful science.

When man, the active being, is cognizant of alternatives, he is forced to make some choice. I am not here discussing the mechanism of choice. I am merely asserting that somehow or other choice is made. I am asserting, moreover, that the choice between alternatives always involves values. As a matter of fact, a value is that which permits choices to be made.

We are remiss, however, if we lump all values into a general rubric of "value." Often we write as though this were the case, as though the value that leads to a choice of pumpernickel bread for sandwiches is of the same order as that which leads to social martyrdom in defense of the dignity of man. We must begin a hard and stumbling effort to distinguish between the various kinds of values that one holds.

As a first approximation, let me resurrect some old words. Let us agree that there is a meaningful difference between "transcendental" and "instrumental" values. If there is a goal to be achieved, it is predicated on a value. This value will be either a transcendental or a relatively transcendental value, according to some differences soon to be noted. In the achievement of the goal, the person may perceive a number of alternative instruments that can be used and/or a number of alternative ways a given instrument can be used. The choices among the instruments or the ways of using the instrument are instrumental.

For example, I may have a value of rearing my son to be a person who respects the dignity of man. This is not the most transcendental value which I hold. Yet it occupies a different position in my thinking from my other values. For given this goal, I must elect strategies as to how best to achieve it. In a culture such as ours, I may use "money" as an instrument, electing some usages (as in getting him to donate to certain charities) and rejecting others (as in bribing him to behave in a certain way). I may point out parenthetically here that another side issue is the assessment of the values that are rejected as well as of those which are accepted by man.

The distinction between these two realms of values seems to have generated a rather bad "play on words," a sick pun on the word "ought." Consider the classical implications of "ought": when generally used, we tend to connect it with transcendental values. When so connected, we say simply that science cannot tell one what he ought to do. However, with the introduction of so-called scientific management, we find another claim: science can tell management what it ought to do. It turns out, of course, that certain ends are given (e.g., to make a profit), and certain instruments are available, so the "ought" refers to the instrumental value. Unless one makes the distinction we have made, it sounds as if we are at last able to have a science of ethics.

In the sociology of knowledge, there are various ways of viewing science. A traditional way is to view it as a body of knowledge. This requires an assessment of the kinds of statements comprising science and how these statements get related to each other. From this point of view, one is likely to read that "the goal of science is a theory. A theory is a generalization of high order which, in some sense, explains observed phenomena."

Yet when we ask a more modern question, "What is it that scientists do?" we find the answer in terms of decision-making. The basic scientific act is a decision. The scientist must decide about the truth-value of some statement which he is considering. The statement may be about the world; it may be a statement about a statement about the world; or it may be procedural, or whatever. Insofar as the scientist can consider different questions, insofar as he can doubt the truth-value of any number of statements, he makes decisions. These decisions resolve possible alternative forms of behavior; certain value premises inhere in the resolution of alternatives. Some value decisions are peculiar to science. Others are not.

*Revised version of a paper presented at the 8th regional meeting of the North Central Section of the ASA, April 7, 1962, as part of a symposium on "Critical Ethical Decisions in Science."

**Dr. Francis is Professor of Sociology and Statistics, Dept. of Sociology, University of Minnesota.

All research is conducted in a social situation. This involves us in two considerations: (a) communication and (b) the network of human relations involving and impinging on science.

Social behavior always involves some element of communication. All interaction involves a transaction of meanings. Meanings, themselves, emerge out of and tend to shape interaction. A word has meaning only as, and to the extent that, it evokes basically the same image on the part of the speaker as it does on the part of the listener. By behavior, one may propose a meaning for a word or other symbol; by reaction, others validate that meaning or reject it. Thus words have shades of meaning varying according to time and place. Moreover, words also change over time. We must stress that these assertions are as true of science as they are of any other form of human behavior. Consider, for example, the meanings possible to the word "correlation": unless it implies the same behavior to you (e.g., how to collect certain data, how to tabulate them, how to manipulate them, how to interpret them) as it does to me, the word has no meaning.

As a strategy to decide about certain sentences, science makes use of a number of people with different skills. There is the patron, whether it is an official in a foundation, a member of some committee created by Congress or some other source of tax moneys, or a commercial enterprise seeking hired help. There are the technicians. There are those who train technicians. There are colleagues. There are editors of scientific journals. There are members of the various scientific associations. There thus are many people involved. Some are concerned with science as a vocation; others are "consumers of science"; others are mere on-lookers.

It is in this context that we must view the character of certain ethical decisions of social science. Three decisions related to science in its broader social context, and hence not restricted to social science, are critical. They are: (1) the decision to study a certain problem; (2) the decision to acquire certain data; and (3) the decision to publish certain findings. In simplest form, the decision is one of action or inaction. In more complex forms, the decision is which of various alternatives one ought to elect. The choice depends on the values that one holds.

A. The choice of a problem. The number of problems one may study is, perhaps, infinite. Certain of them are more crucial to one's theory than others. One's theory may hence imply a certain priority. Yet one may properly ask whether or not, since the study will be done in a broader social context, the public which is to be studied has any "say" in the kind of problem to be pursued. It may be that the argument is largely over how the problem is stated, but consider the following example.

Suicide, as a form of behavior, is a topic, but it is not in itself a problem. Aspects of suicide become or do not become problematic depending upon which theory one uses. One may observe, for example, that older people are more likely to commit suicide than younger ones. In some social environments, the topic is taboo, the behavior sinful. Does a scientist have the right to impose his views on the people whose behavior he wishes to study? If it were possible, has he the right to force people to talk about and to think about a topic which from the perspective of their values is taboo?

In part, we are facing a conflict of values: the values that motivate a scientist and those of the larger society. But note that the value-conflict is a muddied affair. What value is motivating the scientist? Is his decision flowing simply from abstract commitment to knowledge for its own sake? Has he some idiosyncratic psychological need which is being satisfied by his interest in the topic? Is he seeking fame in his field, power in his department? Of the possible values on which his action is premised, which is superior to those of the public he would study?

Moreover, since knowledge goes in two directions, a society may decide that certain kinds of knowledge are unworthy of man, that some kind of danger lies hidden in the knowledge itself, as in physics with the possibility of destroying the universe itself.

Specifically, though, one may quarrel with a research problem simply because it gives certain power to certain people, contrary to, say, democratic values. Consider studies in political behavior. A person may be asked his opinion on certain issues only to find later that he has given those he opposes materials that can be used against his own interests. Consider market behavior, and the whole range of subliminal communication. In a democracy, is it proper to study and find out how people can be manipulated without their knowing it? The choice of a problem may well involve a value decision.

B. The selection of data. A hypothesis whose data cannot be gathered cannot be tested. If a hypothesis is a proposed solution of a problem, the inability to test a hypothesis implies an inability to work on a certain problem. The availability of data is crucial to modern science.

The example we gave of suicide as a topic involves the question of the availability of data. In the context of hypothesis testing, however, we can make a stronger point. Suppose one has a hypothesis that certain types of people are highly likely to commit suicide when placed in certain social situations. Do we have the right to test this hypothesis? Recently, a psychologist at the University of Oregon manipulated mid-quarter grades of his students in an experiment he was conducting. No student knew that he was taking part in an experiment. Some students quit the course when they got failing grades, although they were in fact passing. We must imagine that some "A" students got "F's" in the experiment. Is this a proper use of academic freedom?

We may have a hypothesis that says that adolescents who hate their parents are likely to cheat on exams. Do we have the right to convince students they hate their parents just to see what happens?

When the scientist manipulates or modifies others, he is making an ethical decision. He is as accountable for his behavior as is anyone else. He can make no superior claim unless he holds that nothing can stand in the way of acquiring scientifically interesting data. This kind of ethic justifies physical and psychological brutality in the name of science. Western man has long fought to be freed from tyrants. He has died for such things as privacy of the person, the sacredness of the home, the integrity of human dignity. It makes little difference whether the tyrant is a scientist or a politician or a militarist. In a democracy, the individual is a sacred being who cannot be required to testify against himself, who cannot be deprived of life, liberty, and the pursuit of happiness without due process of law. The scientist cannot claim for himself the right to set his own limits on the way he can acquire data, unless he is willing to forfeit the values inherent in a democratic society.

We have often argued that science is indifferent about how its knowledge will be used. Germ theory may generate bacteriological warfare-or it may result in improvements in sanitation and sewage disposal. The application of knowledge follows from its publication, a point we will discuss later, and is predicated on nonscientific values.   

Now let us consider another aspect of the ethics of science: who could possibly manipulate the kind of variables studied? Anthropologists and sociologists study cultures; institutions and social systems are part and parcel of their ways of thinking. But if human behavior is a function of social systems, it would take something other than an individual to manipulate them and hence to control behavior. This means that, if science is to be useful, the kinds of data making up the variables being studied imply something about the kind of social unit to whom utility, and hence control, is to be granted.

Consider, contrarily, the possibility of selecting variables which an individual can manipulate. Here something other than a political unit could utilize the results of knowledge. Consider those statements of a problem which permit certain people in a society to manipulate others: the doctor-patient relation, enabling the physician to manipulate the patient in his own interests, presumably; labor-management relations, enabling management to manipulate the worker for the best interests of management. In selecting data for analysis, the scientist frequently unwittingly determines who can possibly use the results of his inquiry. This, in turn, carries implicit assumptions about how a rational world, using scientific knowledge, ought to be put together.

C. The publication of findings. The use of statistical parameters and hypothetical cases ("let's call him A") assists in providing anonymity for subjects of scientific studies. Sometimes the locale is so unique that its identity cannot be hidden: does its revelation not imply an ethical choice on the part of the scientist? Indeed it does: he must choose between honoring a trust and seeking recognition for his work.

But more than that: the publication of social data changes the world one studies. The findings become a part of "culture"; they thereby change that which was initially studied. In addition, "a little bit of knowledge is a dangerous thing." Suppose that a simple way to induce hypnosis could be developed; ought this be made generally available? It would materially change the mating and dating patterns on most college campuses! It is likely to change the pattern of criminality.

Science, as an attempt to look at the world, involves one set of values. But publication and communication beyond scientific peers changes the world and hence involves ethical premises not germane to science. One can know how to open and slam a door without doing either.

We must not think that everyone has equal access to scientific knowledge. In a democracy, unless we wish to have a special class eventually dominate society, access to knowledge must be open to all. This is not only in terms of traditionally published findings; it applies also to the classroom situation. We must admit that, though many researching scientists are hired by universities and colleges, many of them are also engaged in teaching. Now the professor-student relation is a social enterprise. In this relation, the professor frequently modifies other relations which the student might have had.

It so happens that common sense is frequently contrary to social science. The proof that common sense is faulty usually carries the implication that the source of common sense is also faulty. The incorrect notions gathered from one's parents, one's minister, one's friends have two possible explanations: the source may have been in error, or he may have deliberately deceived the student. How frequently must one, in teaching a scientifically defensible position, imply that the student's parents are somehow inadequate?

In a course, for example, in inter-group relations, one cannot possibly discuss all that has been researched. One must choose. On what grounds? On grounds of theoretical relevance? Or because of certain pet values of the instructor?

The fact that we academicians tend to feel that knowledge is pretty good in its own right, and that students ought to have a more rationally defensible view of the world than that of common sense, does not change the character of the issue. It is still a value premise. The fact that the majority of us share a particular value does not mean that somehow the value has become a fact.

This, I think, is the treachery in social science research. Like the adolescent, we may think that just because "everyone is doing it," it is the right thing to do and we are freed from moral responsibility. I like to think otherwise. I prefer to believe that the highest form of human behavior, creative science, is predicated on man's self-awareness as a moral creature with moral commitments and moral responsibilities.