Science in Christian Perspective


PROBABILITY CONSIDERATIONS IN SCIENCE AND THEIR MEANING
Aldert van der Ziel

From: JASA 17 (March 1965): 23-27.

It is here investigated how probability considerations arise in science. The use of probability concepts in statistical mechanics and wave mechanics is discussed. The implications of the second law of thermodynamics are dealt with. The concept of "random event" is discussed, with special application to events in the biological domain. Caution is expressed against drawing unwarranted conclusions from the use of probability concepts in science. For example, it is shown that the use of probability concepts does not imply that the world is governed by chance.


1. Introduction.

Probability considerations are used in physics and in biology. In physics they occur in statistical mechanics, the theory of heat, in the atomic and subatomic domain, etc. In biology they occur in the theory of mutations, the theory of survival rates, etc. 

This is often interpreted as meaning that the world is indeterministic and that many phenomena in nature are governed by chance. This in turn is sometimes thought to have religious consequences, in that it poses the question of how the concept of an omnipotent God can be squared with the concept of chance phenomena.

If a careful analysis is made of the occurrence of probability considerations in science, it is seen that science is incapable of deciding whether or not the world is deterministic, nor can it deduce whether or not the world is governed by chance. Far-flung philosophical and theological deductions based on these questions are thus without foundation.

Those working in science feel the need of explaining to the laymen what they are doing and why they are doing it. This is a legitimate part of their scientific work. They also have a legitimate interest in connecting their results to the results of others and in relating their field of endeavor to other fields. But if science is used in order to draw far-reaching philosophical or religious consequences from it, science is misused and misinterpreted and the minds of the laymen are confused.


Aldert van der Ziel is professor in the Electrical Engineering Department, University of Minnesota, Minneapolis, Minn.



True enough, it is flattering to one's ego to proclaim that one is working at the burning issues that torment man's mind, but it is not what science tries to accomplish. It is much better if scientists modestly explain what science is all about, and what it has accomplished, especially if it is done in such a manner that the actual scope of science as well as the limitations of science become visible. We shall try to do this in this paper for the problem of probability considerations in science.

2. The reasons for the occurrence of probability considerations in science.

When studying physical phenomena, the world around us is represented by a "model". This "model" is often an idealized situation in which disturbing influences are eliminated. For example, in the formulation of the laws of motion, one extrapolates first to the case of zero friction, since the laws of motion then attain their simplest form. Having formulated the "laws" governing the idealized case, one can afterwards improve the model by taking the effect of the disturbing influences (in our example the effect of friction) into account. Usually they are taken into account one by one.

This procedure is followed because we live in a very complex universe. If everything in the universe interacted strongly with everything else, science would be extremely complicated. Fortunately, most of the interactions are extremely weak, so much so that one can often start with a model in which only one interaction is taken into account. Next this simple model is improved by introducing other interactions as small disturbances of the model. Often only a few of these interactions need to be taken into account for a reasonably accurate description of a whole range of phenomena.

The "model" can be described in a causal fashion. That is, if one knows the laws governing the model and if one knows the initial conditions of the system under consideration, then one can predict the future behavior of the system for long times. I purposely did not say "for all times", because there are cases where a branch of mathematics, known as perturbation theory, must be used. When that is the case, it may happen that the predictions of the theory become inaccurate for very long times. This is, e.g., the case in the theory of planetary motion.

The "models" by which we represent reality are fully determined. Is the world around us also fully determined? We will never know, for in order to do so we must know the laws with unlimited accuracy and the initial conditions with unlimited accuracy. But in all our measurements our accuracy is limited. Some methods of measurements are more accurate than others, but all methods of measurement have in common that there is some limitation to the accuracy. It is for this reason that the deterministic program can never be fully executed, it can only be approximated. This is one of the reasons for probability considerations in science.

Probability theory offers a way out of this difficulty. One determines the value of a physical quantity x in a large series of n independent measurements and then evaluates the quantities


x_ave = (x_1 + x_2 + ... + x_n)/n,    s^2 = [(x_1 - x_ave)^2 + (x_2 - x_ave)^2 + ... + (x_n - x_ave)^2]/(n - 1)

Then the most probable value of x is x_ave, and the probable error in this average is 0.6745 s/√n.
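As a worked illustration (added here, not part of the original article), the averaging procedure just described can be carried out for a hypothetical set of meter readings; the function name and the sample data are invented for the example, and s is taken as the sample standard deviation of the individual readings.

import math

def probable_error_of_mean(readings):
    # Average of the n readings and the "probable error" of that average,
    # 0.6745 * s / sqrt(n); the factor 0.6745 gives the 50% confidence
    # half-width for normally distributed random errors.
    n = len(readings)
    x_ave = sum(readings) / n
    s2 = sum((x - x_ave) ** 2 for x in readings) / (n - 1)   # sample variance
    return x_ave, 0.6745 * math.sqrt(s2) / math.sqrt(n)

# Hypothetical repeated readings of the same quantity x
readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78]
x_ave, prob_err = probable_error_of_mean(readings)
print(f"x_ave = {x_ave:.3f}, probable error = {prob_err:.3f}")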

The difficulty is that it takes an infinite number of measurements to obtain unlimited accuracy and there is not enough time for that. Moreover, the above procedure implies that all the errors introduced in the measurement are of a random nature and that no systematic errors are involved in the measurements.

Since we cannot know the laws with unlimited accuracy or the initial conditions with unlimited accuracy, we cannot predict the future of the system with unlimited accuracy. For that reason all our predictions contain a certain margin of error, sometimes larger, sometimes smaller, but never absent. This is one reason for the occurrence of probability considerations in science and this occurs in all our predictions.

Since there is no fundamental law against making measurements more accurate, this occurrence of probability considerations in science is non-fundamental. It merely indicates that the estimate of the last decimal of the individual meter readings is subject to error. It does not mean that the world is governed by chance; only our estimate of the last decimal is.

In some cases there are limitations of a more fundamental nature. We can list them as follows:

1. Systems can be so complex that it is humanly impossible to know all the initial conditions with sufficient accuracy to make accurate predictions at a microscopic level. In view of what was said before, this means that one can only make statistical predictions about macroscopic quantities.

2. When one pushes the accuracy of the measurements farther and farther one comes to limitations set by the atomistic structure of matter. The meter readings are not constant in that case but fluctuate around an average value. This sets a limit to the accuracy of the measurements that has nothing to do with the accuracy of meter readings as such.

3. In the atomic or sub-atomic domain there are limitations set by quantum effects. These effects make it impossible to know all the initial conditions with arbitrary accuracy and as a consequence many predictions must be of a statistical nature.

We shall now discuss these three possibilities in greater detail.

As an example of the first possibility consider the following problem taken from the kinetic theory of gases. A cubic foot of gas at atmospheric pressure contains about 10^24 (1 million billion billion) molecules. This number is so huge that one can never hope to know the initial conditions of all the molecules. And to predict the future motion of the molecules accurately, one would have to know all the initial conditions extremely accurately, since the slightest error in the initial condition of one molecule might make it uncertain whether or not a certain collision will take place.

But fortunately physical measurements are not performed at the microscopic level but at the macroscopic level. The quantities one is interested in are the average pressure of the gas, the average volume of the gas or the average temperature of the gas. These average quantities can be calculated in a relatively simple manner. For example, the average pressure of the gas is the average force per unit area on the wall. And this average force comes about because of the collisions of the individual molecules of the gas and the wall. These quantities are average quantities and hence statistical considerations apply.
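A minimal sketch (not from the article) of how a macroscopic average emerges from many microscopic collisions: for an ideal gas the pressure equals (N/V)·m·<v_x^2>, the number density times the molecular mass times the mean square of one velocity component. The gas parameters and sample size below are chosen only for illustration.

import math
import random

k_B = 1.380649e-23      # Boltzmann constant, J/K
m = 4.65e-26            # mass of a nitrogen molecule, kg
T = 300.0               # temperature, K
n_density = 2.5e25      # molecules per cubic metre, roughly atmospheric conditions

# Draw a sample of x-velocities from the Maxwell distribution (a Gaussian with
# variance k_B*T/m) and average their squares; the pressure is just this average.
sigma = math.sqrt(k_B * T / m)
mean_vx2 = sum(random.gauss(0.0, sigma) ** 2 for _ in range(100_000)) / 100_000

pressure = n_density * m * mean_vx2
print(f"estimated pressure ~ {pressure:.0f} Pa (atmospheric is ~101,325 Pa)")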

Apart from this, the same situation mentioned before applies to this case. One starts with the simplest of models and then gradually improves it. In first approximation one can neglect the interactions of the molecules, and one then obtains the simple gas law. Next the various interactions between the molecules (finite volume of the molecule, the attractive force between molecules at short distance) are taken into account. Finally one ends up with a rather accurate prediction of all the macroscopic properties of the gas. The success of the theory comes about because one ignores the microscopic picture to a major extent and applies statistical considerations.

This does not mean that the microscopic picture is not there, of course. As a matter of fact, many manifestations of the microscopic behavior can be made observable. For example, if one makes very accurate pressure measurements, one finds that the pressure fluctuates around an average value. These fluctuations can be observed with a very sensitive microphone which transforms the pressure fluctuations into electrical signals that can be amplified electronically and made audible as a hissing sound (noise) by feeding the amplified signal into a loudspeaker. The theory can give the mean square value of the pressure fluctuations and again, that is exactly what the measurements yield. The spontaneous pressure fluctuations can then be accounted for.
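As a rough indication of scale (a numerical aside added here, not in the original), the relative size of such spontaneous fluctuations in an ideal gas falls off as one over the square root of the number of molecules involved, which is why they only become noticeable in very sensitive measurements or very small volumes.

import math

# For an ideal gas the number of molecules N found in a small sub-volume
# fluctuates with a standard deviation of about sqrt(N), so the relative
# fluctuation is about 1/sqrt(N).  The values of N are illustrative.
for N in (1e6, 1e12, 1e24):
    print(f"N = {N:.0e}:  relative fluctuation ~ {1 / math.sqrt(N):.1e}")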

As an example of the second possibility consider the measurement of small electric currents with a sensitive galvanometer. Here the limit is set by the small current fluctuations in the electric circuit of which the galvanometer forms a part; these fluctuations are caused by the random motion of the electrons in the conductors of the circuit. As a consequence, the galvanometer reading is not steady but fluctuates around an equilibrium value. If the current to be determined is small in comparison with these current fluctuations, it cannot be measured. There is here a limitation set, not by our inability to estimate the last decimal of a meter reading accurately but by the spontaneous fluctuations of the galvanometer deflection.

In the atomic or sub-atomic domain the problem arises that one cannot know the initial conditions at a given time with arbitrary accuracy. According to Heisenberg's uncertainty principle the product of the inaccuracy Δx in the position of an atomic or sub-atomic particle and the inaccuracy Δp in the momentum of that particle exceeds the value h/(2π), where h is Planck's constant. As a consequence one cannot predict the future with complete accuracy; the inaccuracy in the final result reflects the inaccuracy in the initial conditions. In other words, atomic theories give probabilities of events.
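A back-of-the-envelope illustration (added here): pinning a particle's position down to within Δx forces a minimum momentum uncertainty Δp = h/(2πΔx), and hence a minimum velocity uncertainty Δp/m. The two cases below, an electron confined to atomic dimensions and a milligram dust grain located to within a micron, are invented for the example.

import math

h = 6.626e-34    # Planck's constant, J*s

def min_velocity_uncertainty(mass_kg, dx_m):
    # Minimum momentum uncertainty from dx * dp >= h/(2*pi), converted to velocity.
    dp = h / (2 * math.pi * dx_m)
    return dp / mass_kg

print(min_velocity_uncertainty(9.11e-31, 1e-10))   # electron within ~1 angstrom: ~1e6 m/s
print(min_velocity_uncertainty(1e-6, 1e-6))        # 1 mg grain within 1 micron: ~1e-22 m/s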

At first sight this case seems different from the first case. There one purposely ignored a major part of the information to make the problem soluble. Here it is physically impossible to know the initial conditions with arbitrary accuracy. But in either case the same end result occurs: the theory cannot give accurate predictions but yields probabilities.

Let this be illustrated with the case of α-decay of radioactive nuclei. In this radioactive decay the nuclei emit helium nuclei (α-particles) at a certain rate. Apparently the α-particle is present in the nucleus and bound to the nucleus, for otherwise the nuclei would decay instantaneously. How then can the α-particle escape? Classically speaking, it cannot, but from the wave-mechanical point of view escape is possible.

To understand this we represent the radioactive nucleus by the following model. A marble oscillates in a bowl without friction with so little energy that it cannot reach the rim of the bowl. Hence according to the laws of classical physics the marble should stay in the bowl forever. But in fact the motion of the α-particle in the nucleus must be represented as a wave motion. If in our picture the motion of the marble is considered as a wave motion, then the wave does not have to pass over the rim of the bowl, but can pass through the wall of the bowl. In the radioactive nucleus the "wall of the bowl" is thin enough to give this event a certain probability. In other words the theory gives the rate of radioactive decay. This example also shows how the statistical character of the predictions made by wave mechanics occurs.
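A crude numerical sketch (not part of the article) of the "wall of the bowl" picture: for a rectangular barrier of height V above the particle's energy E and of thickness L, wave mechanics gives a tunnelling probability per attempt of roughly exp(-2κL), with κ = sqrt(2m(V - E))/ħ. The barrier parameters below are invented; a real nucleus presents a Coulomb barrier, not a rectangular one, but the extreme sensitivity to the barrier is the same.

import math

hbar = 1.055e-34       # reduced Planck constant, J*s
m_alpha = 6.64e-27     # mass of an alpha particle, kg
MeV = 1.602e-13        # joules per MeV

def tunnelling_probability(barrier_excess_J, thickness_m):
    # Thick-barrier estimate T ~ exp(-2*kappa*L) for a rectangular barrier.
    kappa = math.sqrt(2 * m_alpha * barrier_excess_J) / hbar
    return math.exp(-2 * kappa * thickness_m)

for L in (0.5e-14, 1.0e-14, 2.0e-14):   # barrier thicknesses of 5, 10 and 20 fm
    print(f"L = {L:.1e} m:  T ~ {tunnelling_probability(20 * MeV, L):.1e}")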

There are two equivalent ways of interpreting Heisenberg's uncertainty principle. One way represents the motion of a particle as the motion of a wave packet. Heisenberg's uncertainty principle then follows directly from the consideration of these wave packets. Another way looks more at the physics of measurements in the atomic or sub-atomic domain. For example, if one wants to measure the position of a particle accurately, one uses a γ-ray microscope. The particle scatters γ-ray light into the microscope and thus becomes observable. But in doing so, the particle receives momentum from the scattered quanta and hence its momentum after the measurement differs from its momentum before the measurement. If one works out the details, one ends up with Heisenberg's uncertainty principle.

3. Spontaneous fluctuations and random events.

We have already encountered fluctuations in various instances, and the examples could be multiplied many times over. Practically everywhere in physics one encounters spontaneous fluctuations of one form or another.

These spontaneous fluctuations appear to us as random. The reason is that we cannot observe at the microscopic level. If we could, we would see the microscopic phenomena and the spontaneous fluctuations would thereby find their causal explanation. As long as one observes at the macroscopic level, the causes of the fluctuations escape our notice and they appear as random.

The occurrence of random phenomena does not indicate that the world is governed by chance. Rather it is an indication that one is operating at a level of investigation that is disturbed by phenomena occurring at a deeper lying level. At the microscopic level everything has its causal explanation; at the macroscopic level this is not the case.

If the explanations of Heisenberg's uncertainty principle are taken seriously, it would seem that the randomness encountered in the atomic domain is of a different nature. But Bohm has suggested that this is not the case. He has proposed that in atomic experiments the observations are disturbed by phenomena occurring at a deeper lying level, so that the observations show random fluctuations. These phenomena at a deeper lying level he proposes to describe by "hidden" variables. If one could know these hidden variables, a strictly causal description of the observations could be given. Since one does not know them, the phenomena appear as random.

This randomness shows itself in the case of so-called "elementary events", such as a collision between an electron and a particle, the radioactive decay of a nucleus, the transition of an atom from a higher energy state to a lower energy state, etc. The theory cannot predict when an elementary event is going to occur; it can only predict the rate of occurrence. Our description of atomic phenomena does not penetrate into the nature of things. Our predictions are as causal as Heisenberg's uncertainty principle permits, and beyond it we cannot go. There is, however, no justification for calling elementary events "events without a cause", as is sometimes done. For we do not know whether or not the event has a cause; it merely appears to us as random.

If Bohm is correct, then the hidden variables are responsible for the random behavior in the atomic domain. His proposal has the merit that it provides continuity and that it parallels phenomena in the atomic domain with other known microscopic phenomena. It also makes one more cautious about ascribing great philosophical significance to modern atomic theories.

In my opinion any interpretation of Heisenberg's uncertainty principle, including Bohm's, is optional. One can take it or one can leave it. All that really matters is that the equations upon which Heisenberg's uncertainty principle is based are maintained. If somebody feels better by adopting Bohm's proposal, let him go ahead. If somebody feels better by rejecting Bohm's proposal, he may do so. In my opinion it is unwarranted, however, to draw sweeping philosophical or religious conclusions from Heisenberg's principle, since this is a misuse of science that obscures the scope and the goals of science.

4. The second law of thermodynamics.

The second law of thermodynamics states the general direction into which processes will go spontaneously. We give here its formulation in terms of probability concepts as follows: "A closed system, left to itself, will go spontaneously from a less probable to a more probable situation."

Let us illustrate this with some examples. A hot body to which no heat is supplied will cool down to the temperature of its environment, since it is more probable that heat is distributed evenly than that it is concentrated in one body. If a gas line is opened for a short time interval, then the gas, which is at first concentrated near the opening, will gradually distribute itself evenly through the room, since a uniform distribution of the gas is more probable than its concentration in some small volume of space.

If one measures carefully, one finds that the temperature of the cooled-down body is not exactly equal to the temperature of its environment but fluctuates around it. In the same way the distribution of the gas molecules through the room is not exactly uniform, but the concentration in a small volume element of the room fluctuates around its equilibrium value. Large deviations from equilibrium, though not impossible, are extremely unlikely. The second law of thermodynamics predicts the tendency to reach the equilibrium condition but does not explain the spontaneous fluctuations around equilibrium.
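To give a feeling for how unlikely large deviations are (a numerical aside added here), consider the chance that all N molecules of a gas are found, at one instant, in the left half of their container. It is (1/2)^N, which is already fantastically small for quite modest N; for N of the order of 10^24 it is, for every practical purpose, zero.

# The chance that all N molecules happen to sit in the left half of the box
# at a given instant is (1/2)**N.  The values of N are illustrative.
for N in (10, 100, 1000):
    print(f"N = {N}:  probability = {0.5 ** N:.3e}")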

The second law predicts the future of many systems but it cannot predict the past. For if one tries to do that, one obtains that the system must have come from a state of larger probability to a state of smaller probability. The reason lies in the words "left to itself." A system "left to itself" in the past is a system to which nothing was done earlier. If that is the case, then the improbable state at time zero must have come from a very improbable spontaneous fluctuation. If one does not want to accept this, and there is no reason why one should, then one must conclude that the second law cannot always predict the course of past events.

5. Random phenomena in the biological domain.

We shall not dwell upon the pros and cons of the theory of evolution; that I gladly leave to others. There are two aspects of the theory that have a bearing on our subject: the occurrence of mutations and survival rates.

It is presently well known that mutations are caused by a rearrangement of molecular groups in the chromosomes. In some cases a molecular group breaks loose and reattaches itself to another part of the chromosome. In other cases a chromosome breaks and the broken-off part is connected to another chromosome. Both processes alter the genetic code and thus change the outward appearance of the organisms in question. Such mutations can occur either because of the thermal vibrations of the molecules, or by ionization caused by γ-rays, electrons from radioactive decay or cosmic ray particles.

In the first case molecule groups are shaken loose by the thermal agitation of the molecules and they reattach themselves somewhere else. In that case the mutation rate has a very characteristic temperature dependence that gladdens any physicist's heart. In the second case the ionization results in a break-up of the chromosome followed by a subsequent rearrangement of its parts. In that case the mutation rate is proportional to the intensity of the incoming radiation.
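The "characteristic temperature dependence" mentioned above is presumably the Boltzmann-type exponential familiar from thermally activated processes; the sketch below assumes an Arrhenius form, rate proportional to exp(-E/kT), purely as an illustration, with an invented activation energy.

import math

k_B = 8.617e-5    # Boltzmann constant, eV/K
E = 1.0           # assumed activation energy, eV (illustrative)

def relative_rate(T):
    # Arrhenius/Boltzmann factor: the fraction of thermal agitations energetic
    # enough to shake a molecular group loose rises steeply with temperature.
    return math.exp(-E / (k_B * T))

for T in (290.0, 300.0, 310.0):
    print(f"T = {T} K:  relative rate = {relative_rate(T):.2e}")
# Near room temperature a 10 K rise changes this factor by roughly 3 to 4 times.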

Both processes are treated as random, and for a very good reason. Both involve the events of ionization and thermal dissociation, and one can argue that these are elementary events describable by wave mechanics. But even if they could be described on a classical basis, one could not predict in advance when an ionizing particle would strike a chromosome or when a molecular group would be shaken loose by thermal agitation. It is not warranted to call these processes "without cause", nor do they indicate that biological events at the molecular level are governed by chance. We can only say that these events appear to us as random, as long as we do not fully observe at the microscopic level.

Next something about survival rates. A salmon lays about 20,000 eggs, I have been told. And of these eggs only two need to reach maturity to supplant the parents from which they came! All the others are eliminated, either because the egg does not develop or is eaten, or because the young salmon is eaten or dies before reaching maturity. These processes are to be treated as random processes, not because they are elementary events in the wave-mechanical sense but because one cannot predict in advance, other than in statistical terms, what will happen to the individual eggs.
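As a numerical aside (added here; the 20,000-egg figure is the author's), if each egg independently had the same small chance of reaching maturity, chosen so that two survive on average, the number of survivors would follow a binomial distribution:

import math

n, p = 20_000, 2 / 20_000    # eggs laid, and an assumed per-egg survival chance

def prob_exactly(k):
    # Binomial probability that exactly k of the n eggs reach maturity.
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

for k in range(5):
    print(f"P(exactly {k} survivors) = {prob_exactly(k):.3f}")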

We conclude therefore that the random processes in biology partly reflect our ignorance about the future and partly indicate our inability to overcome Heisenberg's uncertainty principle.

6. Conclusion.

It has been argued that the statistical considerations open up the possibility for the occurrence of miracles. For statistically there are no impossible events; there are only probable and improbable events. It is thus possible that something occurs that goes against the existing order and against common experience. If such phenomena are called "miracles", then such miracles can and even must occur if one waits long enough. These miracles are thus very rare spontaneous fluctuations. They occur by the "grace of statistics", whereas biblically speaking they occur by the "grace of God". That is a step backward rather than forward.

It has been argued that the second law of thermodynamics points toward a creation. For if the present improbable state of the universe is not caused by a spontaneous fluctuation, then an improbable initial condition must have been set in the past. If one calls this "creation", then the Creator thus introduced looks more like a retired engineer than like the God of whom the Bible speaks.

It has been argued that Heisenberg's uncertainty principle allows the human will to be free and allows God to act in His freedom. Now, Planck's constant is a very small quantity. Does this mean that the human will only has rather narrow limits of freedom and that God may not be so very free? I maintain that Heisenberg's principle has nothing to do with the human will nor with God's freedom; God is free because He is God.

Statistical considerations are extremely useful in science, but their limitations should not be overlooked and one should not draw far-reaching conclusions from them. What we consider random is not necessarily random in fact, nor is it necessarily random in God's sight. Bearing this in mind prevents unwarranted sweeping conclusions from being drawn from science.