Science in Christian Perspective



Some Implications of Artificial Intelligence Research

Dennis L. Feucht

Innovatia Laboratories
5275 Crown Street
West Linn, Oregon 97068

From: JASA 34 (June 1982): 71-76.

Electronic computers have given rise to artificial intelligence research and questions about the meaning of intelligence. A complete account of intelligence must include and adequately relate both its objective, scientific meaning and its personal, existential meaning. Also, the biblical doctrine of man is adequate in accounting for human significance in view of the emergence of machine intelligence.

Over the last thirty years, with the advent of electronic computing machinery and its capability of performing some tasks much more effectively than human reason can, researchers have begun to inquire about the meaning and nature of intelligence and how it applies to machines. This new direction of inquiry, known as Cognitive Science or Artificial Intelligence (AI), is summarized by one of the leading researchers in this area, Nils Nilsson of Stanford Research Institute:9

The field of Artificial Intelligence (AI) has as its main tenet that there are indeed common processes that underlie thinking and perceiving, and furthermore that these processes can be understood and studied scientifically.

Nilsson continues and summarizes AI achievements:9

While attempting to discover and understand the basic mechanisms of intelligence, these researchers have produced working models in the form of computer programs capable of some rather impressive feats: playing competent chess, engaging in limited dialogs with  humans in English, proving reasonably difficult mathematical theorems in set theory, analysis, and topology, guessing (correctly) the structure of complex organic molecules from mass-spectrogram data, assembling mechanical equipment with a robot hand, and proving the correctness of small computer programs.

Whether the activities of these workers constitute a new scientific field or not, at the very least AI is a major campaign to produce some truly remarkable computer abilities. Like going to the moon or creating life, it is one of man's grandest enterprises. As with all grand enterprises, it will have profound influences on man's way of life and on the way in which he views himself.

In this paper, questions about the meaning of intelligence as it applies to machines are briefly investigated. To raise these questions is no longer an irrelevant exercise, for, as Nilsson's examples illustrate, AI techniques are beginning to be used for practical purposes. Although the science-fiction scenario of society becoming captive to the machines it creates is not an immediate threat, the more subtle and insidious problems of the relation between men and machines need to be examined.

What Does "Intelligent" Mean? 

The word intelligent in itself is imprecise because it can be used in (at least) two quite different ways, as the following statement exemplifies:

Any intelligent computer scientist recognizes the difficulties in defining intelligent. (Statement 1)

The first use of intelligent in this statement is a common-sense assessment of another computer scientist. Such an assessment is the product of my introspection over an indeterminate range of meanings by which I know the use of the word intelligent. It is by an act of personal judgment, for which I cannot state all the rules, that I come to know a colleague to be intelligent.

Such lack of formal precision is an inescapable part of our knowledge.11 Whenever we conceptualize, we strive to achieve a more objective representation of what we mean by a key word (such as intelligent) so that our meaning may be more accurately shared by others who accept its definition. But to define is to "show limits"-to reduce the possible range of meanings a word can stand for until they are manageable. Defining is an act of naming-that is, of naming the essential characteristics that must be true of something before we can label it with the defined word. For example, if a car were defined as a machine used for transportation with an engine, four wheels, and a passenger compartment, then by this "definition" a truck would also be a car, because it too has all of these characteristics. Sufficient limits have not been shown, since not all the essential characteristics of a car have been named; a description has been given but not a definition. At this stage in the activity of defining, it is still an idea and not yet a concept, because it cannot be distinguished from similar but distinct meanings, such as a truck.

This activity of striving to achieve concepts by reflecting upon ideas-by examining the results of intuition-is inherent in any scientific or scholarly pursuit. In Statement 1, the second use of the word intelligent is conceptual and therefore different from its first use, which is intuitive. The ultimate goal of cognitive science is the complete, or at least the essential, conceptualization of the idea of intelligence. The common-sense idea we have about it and the ways in which we use the word in ordinary speech are not satisfactory for scientific purposes and must be clarified by definition, as McCarthy and Hayes have attempted to do. They actually have given two definitions, the first conforming to the theory of knowledge (epistemology), determining what intelligence is in itself, and the second based on the operational behavior of something intelligent:19

1. epistemological-"the representation of the world in such a form that the solution of problems follows from the facts expressed in the representation"

2. heuristic-"the mechanism that on the basis of the information solves the problem and decides what to do"

These definitions in themselves show that McCarthy and Hayes were attempting to reduce the range of meanings of intelligence to a more objective status. Words such as representation, information, problem, and mechanism trigger common shared meanings in the minds of computer scientists, so that by their assumed clarity they could be used uncritically in setting forth a more objective and conceptual meaning of intelligence.
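
The two-part division can be illustrated with a toy sketch (the domain, facts, and rules below are invented for illustration and are not McCarthy and Hayes' own formalism): the facts and rules form the epistemological part, a representation from which the solution follows, while a simple forward-chaining procedure plays the role of the heuristic mechanism.

```python
# Epistemological part: a representation of a tiny (invented) world as
# facts and implication rules, from which solutions follow.
facts = {"block_on_table", "hand_empty"}
rules = [
    ({"block_on_table", "hand_empty"}, "can_grasp_block"),
    ({"can_grasp_block"}, "can_lift_block"),
]

def solve(goal, facts, rules):
    """Heuristic part: the mechanism that, on the basis of the
    information, decides whether the goal follows -- here, by
    forward-chaining over the rules until nothing new is derived."""
    known = set(facts)
    changed = True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return goal in known

print(solve("can_lift_block", facts, rules))  # True: the solution follows from the representation
```

The point of the sketch is only the separation of concerns: the same mechanism works unchanged if the representation is replaced, which is what makes the epistemological part a distinct object of study.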

Can Created Persons Create Intelligent Devices?

It is at this point that we encounter one of the main controversies about intelligence as it applies to machines. For some, the idea that a device created by men and devised by them for human purposes could be intelligent is repulsive and incredible. It is repulsive because it concedes to man the ability to create something possessing a characteristic of man highly valued by those repulsed; what was uniquely valued in man must now be shared with a creation of man, and the significance of man's previously unique characteristic among creatures is thereby diluted. It is incredible because of a clear lack of precedent and the perceived enormity of the task. It is not granted that progress toward machine intelligence will escape being finally thwarted by unknown fundamental limitations, as a fully deterministic explanation in mechanics was thwarted by Heisenberg's Uncertainty Principle. And it may well be both repulsive and incredible if it is assumed on theological grounds that the creation of intelligent beings is reserved for God alone. To others, namely those committed to Nilsson's main tenet, the ultimate hope and goal of AI research is a scientifically adequate understanding of intelligence and, for those involved in a corresponding technology-perhaps it could be called "knowledge engineering"-the successful construction of intelligent machines. Whatever the personal beliefs of those in this latter camp on wider issues, when it comes to their working attitude toward intelligence they are committed to the success of their program. The implications of this attitude, as seen in the AI literature, are statements which those in the opposing camp find incompatible with their view of intelligence.12,13,16

As with other controversies of our day over subject-matter common to both science and theology or philosophy, I believe the basis of this one is often much like that among the blind men of Hindustan, each discovering for himself what an elephant was like by feeling a different part of its anatomy. Within its limits, each blind man's personal experience of the elephant was correct for him, but partial. No one account of the elephant could adequately represent the experiences which led to the other accounts. A more general theory that adequately accounted for the experiences of all the blind men would render any one of their accounts true but inadequate, or even false as a generalization.

Like the elephant, the subject-matter of intelligence has a different feel at different parts of its "anatomy." The meaning of intelligence to the AI community will most likely differ from its meaning to those opposed. Because the actual positions taken contain true though limited insights mixed with error, identifying these errors is a necessary part of my program of showing the essential harmony between the true statements from each camp. These errors, as with the Hindustanis, are almost always in the form of excessive generalization. Along with this I hope to provide a further explication (and, hopefully, clarification) of those limited insights from each camp that I regard as true. Because of my personal orientation, the two camps with which I share values, goals, and general presuppositions are the AI community and the historic Christian community. I am committed to the basic ideals of each but cannot accept all of the actual views within either community, or the means by which some of these views have been brought forth. As the HAL 9000 computer of Arthur C. Clarke's 2001: A Space Odyssey said, "Problems like this have cropped up before, and they have always been due to human error."

The Artificial Intelligence Perspective

My program here is to examine first the AI perspective and then the biblical/theological point of view. To examine the main strength-and, in a different sense, weakness-of attitude in the AI community, I return to where I paused in describing the importance of striving to achieve a conceptual, scientific understanding of intelligence. The methods of science have been unquestionably successful relative to their purpose of producing an objective, conceptual account of physical phenomena. Thus it is fitting, given that intelligence is manifested through some physical means, to study the nature of that physical embodiment, or cognitive mechanism (as MacKay calls it). On the basis of scientific precedent, it is not unreasonable to hope that such an effort will be fruitful. Although the early overoptimism of pioneers in cognitive science has drawn some justifiable criticism, it is in the spirit of any grand enterprise-scientific ones included-to express a kind of hope equal to the immense sacrifice demanded to succeed at the task. It is this intense effort by AI researchers that is both essential to progress and, in a different way, an occupational hazard. For in dwelling deeply on one's work according to the habit of mind most likely to achieve results, this habit becomes a familiar and comfortable frame in which to rest one's thoughts. On a psychological level, it is the seed of excessive generalization, since the tendency to use a way of thinking that is successful in one area of experience for dealing with every kind of experience accompanies intense, specialized mental effort. Of course, one need not give in to this, and personal activities which provide a healthy counter-perspective not only assuage such a tendency but also provide a wider vantage from which one's intense work can be reviewed.
This balance between reflective and assertive mental activity results in a more comprehensive understanding of the varied ways of regarding the subject-matter under consideration. For Al research, the intense scientific motivation, with its goal of exhaustive formalization and objective, symbolic representation of knowledge about intelligence, may tend to lead one committed to its ideals to assume that all that may be known about intelligence can be cast in such a form. Such is not the case, and could not be even in principle.

This impossibility is not the result of incompetence on the part of AI researchers, or even of the newness of the field. It is due to fundamental limitations on the activity of conceptualizing. When one sets out to define a concept, a maze of incoherent ideas is explored in an attempt to choose those of interest and to systematically establish relations to previously known concepts. In doing so, these discovered relations provide a context in which the individual ideas begin to acquire a conceptual meaning. What is already clearly known while conceptualizing is the ground against which the newly-forming concepts may be seen. This figure/ground dualism which accompanies all our thoughts is inescapable.

Because we cannot clearly hold in mind the entire set of specific potential meanings that is an idea, we are required to choose some small subset that we are capable of holding in mind symbolically. The symbol-system used is commonly our native language. In science, it is the language convenient for expressing the scientific concepts with which we are occupied. In AI research, this is the language of mathematics and also computer languages. The individual symbols that are used to represent our different meanings provide a convenient way in which to manage-to identify and categorize-the vast numbers of them that our minds hold. In language, these symbols are words that can trigger their corresponding meaning in our minds when we encounter them. To discover a coherent pattern of relations among various meanings, the scientific or analytic mind attempts to reduce this vast array to a form in which each can be represented by a set of clear, distinct ("mathematically orthogonal") basic meanings labeled by symbols. These are concepts from which all our other meanings may be built up and represented symbolically, resulting in an orderly pattern for our thoughts. Meanings which result from perception (from observing the physical subject-matter) and useful ideas from imagining about the subject-matter, are refined by means of reasoning (or thinking) to conceptual representations, which are more easily manipulated mentally than the original raw impressions.

But this activity of conceptualizing has its limits. For even our most well-established basic concepts are known to us only through intuition. The personal act of introspecting the meaning of a concept by reviewing its definition is not the definition itself. Concepts, by their manageability, provide for a more effective intuitive knowledge; but concepts cannot replace their more basic foundations in personal, existential consciousness. Even words labeling concepts must be defined using other words, ad infinitum. This infinite regression, which results from attempting an exhaustive conceptualization, leads back to intuitive knowledge; it is a kind of knowledge which is undisclosed to the knower while he contemplates the conceptual object of this knowledge.

Dennis L. Feucht is involved in the development of electronic instruments with knowledge-based reasoning capabilities, in the Applied Research Group of Tektronix Laboratories. He is also developing a tactile sensor for robots and, at Innovatia Laboratories, which he started in 1978, licenses R&D in robotics. He holds a B.S.E.E. (Computer Science) from Oregon State University, 1972, and since then has designed a variety of test and measurement equipment at Tektronix.

Furthermore, this kind of intuition, or personal knowledge, because it is the basis for conceptualizing, cannot be shared with others or formalized in any way by the knower. To turn his attention toward it is to reflect upon his own mental activity; but even then, the mental activity he has while reflecting is not the mental activity upon which he is reflecting. In choosing what to reflect upon-which of the meanings latent in an idea to select-he is guided by a purpose which arises out of personal knowledge. For the scientist, this purpose guides the exercise of his scientific judgment: discerning which aspects of the problem of understanding intelligence are most germane and which data are critical or superfluous.

For those who use the word intelligent as I did the first time in Statement 1, regarding the inclusion of personal knowledge as essential to its meaning, even a complete scientific account of intelligence-my second meaning for it in Statement 1-fails to include this existential aspect. The scientific account explains its existence quite adequately, but the explanation does not contain it, for it is present only in the knower himself. This personally introspected awareness, that behind my thoughts is the "I" who am thinking them, is represented in the complete scientific account of my intelligence, but is not itself there. It is this personal aspect of knowing that I recognize in myself, and by inference recognize in others, that allows me to discern intelligence and to assess others as intelligent or not, whether they be man or machine. Even if I had a reliable scientific means of impersonally determining whether a being was intelligent, though my scientific questioning would be satisfied by its result, I would not be personally satisfied without direct experience (through interaction with the being in question) that would provide grounds for a personal assessment.

In the early 1950's Alan Turing proposed a test, known as the Turing test, for determining whether a machine could be considered intelligent.14 In simplified form, his test was to communicate with another entity in such a way that the medium of communication would not reveal whether the entity was a man or a machine. For example, by "talking" with this entity via a computer terminal, if one could never decide for sure whether a computer or human being was at the other end, Turing's test would indicate that the entity was intelligent.
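
In code, the essential constraint of the test is that the interrogator sees only text. A minimal sketch follows; the respondent here is a trivial stand-in of my own invention, not a serious conversational program.

```python
def machine_respondent(question):
    # A trivially pattern-matched reply, in the spirit of early dialog programs.
    if "chess" in question.lower():
        return "I enjoy a quiet game of chess."
    return "Could you rephrase the question?"

def imitation_game(questions, respondent):
    """One session of the game: the interrogator receives only the
    text replies, with no clue to the respondent's nature."""
    return [(q, respondent(q)) for q in questions]

for q, a in imitation_game(["Do you play chess?"], machine_respondent):
    print("Q:", q)
    print("A:", a)
```

If, over many such sessions, the interrogator's guesses of man versus machine are no better than chance, the machine passes Turing's test.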

By Turing's test, the assessment of intelligence by an examiner is based on his personal knowledge of the one with whom he is in dialog.

This difference is recognized, for example, in the German language by the use of two different words for "to know": wissen, objective, factual knowledge about someone or something, and kennen, personal knowledge from direct acquaintance. It is the difference between knowing about someone and knowing them personally. Both kinds of knowledge have validity as complementary aspects of the whole of knowledge, and neither can be reduced to being a part of the other.

It is this irreducibility that is often not recognized as a logical limitation by those who prefer the second use of intelligent in Statement 1. MacKay8 has shown that what would be correct for an observer of an agent-in this case a hypothetically intelligent being-to believe about him would not also be what the agent himself would be correct to believe and incorrect to disbelieve. The system of true statements of both the observer and agent is logically indeterminate because they reference each other in antinomic fashion (i.e., the "paradox of the liar" applies). If a complete scientific account could be given of the agent's cognitive mechanism (that is, the physical embodiment of his intelligence), the agent would be mistaken to believe it was true, because by believing it, his cognitive mechanism would be changed so that it could not be the one described in the scientific account. Conversely, if the account were modified so as to come true by his believing it, he would not be mistaken to disbelieve it. In a logical sense, whether an account of his cognitive mechanism is true or not depends on whether his choice of believing it or not renders it true or false; the choice is up to the agent. Because the correctness of the scientific account depends on the choice of the agent it describes, the knowledge of the agent is not logically the same as the observer's knowledge of him. If his intelligence consists at all in what he would be correct to believe and incorrect to disbelieve, then it cannot be the same as a scientific account of his intelligence as known by an observer. His personal involvement in what he knows is logically not the same as knowledge of his cognitive mechanism by an outside observer.
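
MacKay's argument can be put schematically (the notation is mine, not MacKay's). Let A be the observer's complete account of the agent's cognitive mechanism, and let B(A) abbreviate "the agent believes A":

```latex
% Believing the account alters the very mechanism the account describes:
\[
  B(A) \;\Rightarrow\; \neg\,\mathrm{True}(A) .
\]
% If instead an account A' is adjusted so that it comes true upon
% being believed, then
\[
  B(A') \;\Rightarrow\; \mathrm{True}(A'), \qquad
  \neg B(A') \;\Rightarrow\; \neg\,\mathrm{True}(A') .
\]
% Either way, the truth-value of the account hangs on the agent's own
% choice to believe or disbelieve it: the account is logically
% indeterminate for the agent, though not for the outside observer.
```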

Therefore, care must be exercised in distinguishing between scientific and personal knowledge to avoid speaking of either in the wrong context. Also, both must be given their proper place in a theory of intelligence. These necessary distinctions, I believe, go a long way in resolving some of the more serious conflicts over the meaning of intelligence.

Machine Intelligence

Thus far, I have attempted to establish the place of personal knowledge in a theory of intelligence and to show that its absence from a scientific account of intelligence does not reduce its necessity, logically or experientially. But what of machine intelligence? A machine is a humanly devised tool or artifact that is a concrete instantiation of a formal system.7 To design and build a machine requires formal knowledge by the designer of its operating principles, and the more complex the machine, the greater is this requirement. Before an intelligent machine can be constructed, its designers must have formal knowledge of intelligence adequate to embody it in a machine. Whether such knowledge is attainable is a scientifically open question. So little is known about the human brain in this context that we are left with the present achievements of AI machines from which to venture a prediction. But such a prediction is mere speculation, considering their present limited range of capabilities. In the 1960's, work in AI was directed toward the development of general problem-solvers, but insufficient understanding of intelligence led instead to the development of specialized, limited-domain "expert" problem-solvers. This was the main work of the 1970's (of which Nilsson gave several examples), and it resulted in more sophisticated AI techniques.10,17,18 Although more powerful computers increase the performance of AI programs, this increase is marginal, since the order of complexity of the problems solved by the programs far outstrips even significant improvements in the hardware. What is needed are conceptual breakthroughs in AI models and computer architecture. For the AI researcher, though the scientific possibility of machine intelligence is an open question, the working assumption or belief is that it is possible and worth the effort.
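
The shift from general problem-solvers to limited-domain "expert" problem-solvers can be caricatured in a few lines. The rules and domain below are invented for illustration and are not drawn from any actual system such as DENDRAL or MYCIN; the point is that all of the program's competence resides in narrow, domain-specific rules rather than in any general mechanism of intelligence.

```python
# A toy limited-domain "expert": ordered condition/conclusion rules for an
# invented spectrogram-classification task; the first matching rule wins.
RULES = [
    (lambda obs: obs["peaks"] >= 3 and obs["has_nitrogen"], "amine-like compound"),
    (lambda obs: obs["peaks"] >= 3, "hydrocarbon chain"),
    (lambda obs: True, "unknown"),
]

def classify(observation):
    """Return the conclusion of the first rule whose condition matches."""
    for condition, conclusion in RULES:
        if condition(observation):
            return conclusion

print(classify({"peaks": 4, "has_nitrogen": True}))   # amine-like compound
print(classify({"peaks": 1, "has_nitrogen": False}))  # unknown
```

Outside its narrow domain such a program has nothing to say, which is exactly the limitation that makes predictions about general machine intelligence from such systems speculative.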

The Biblical Perspective

I intend now to turn my attention largely toward the other camp, which is antagonistic toward this position of the AI community. I will dispense with all non-theological arguments against machine intelligence by reference to Alan Turing's classic article on this subject, "Computing Machinery and Intelligence."14 Turing considers in his list of counter-arguments the theological argument, which largely presupposes the position of Thomas Aquinas. Aquinas, of course, stood in the Aristotelian tradition of the medieval scholastics and projected more of a Greek than a Hebrew view of man. Turing's theological commentary on machine intelligence is accordingly incomplete. I intend here to contribute to its coverage.

In the cosmogony of the Bible, man is given a unique place by God in his creation. Man's uniqueness and dignity are not threatened by intelligent machines from a biblical point of view, since God created Adam and his descendants for purposes unique to them. Because it is with man alone that God has entered into a covenantal relationship by which man's uniqueness and significance is established, all that follows from such a relationship applies to no other entities, whether they are intelligent or not, or created by man or not. Man is given dignity and uniqueness relationally rather than ontologically.2 If an ontological approach to this problem is taken, whereby man's uniqueness is found in what he is in himself-in some higher quality of man incapable of being shared by non-man-then further advances in machine intelligence will require finer and finer distinctions between man and machine. Whatever significance is attributed to man's uniqueness with this approach diminishes with technological progress. It is equivalent to, and suffers from the same weakness as, the god-of-the-gaps approach to the problem of God's activity in the world.3 In the context of previous discussion, a purely ontological view of man corresponds to a scientific approach, where man himself is studied and differences with machines noted. As these differences diminish, the ontological argument for man's unique place is correspondingly weakened.

This problem is also suggested by other biblical data. In the creation we find not only man and God as the intelligent beings of the universe, but also angels (and other beings as well). The existence of angels has not been viewed as a threat to man, since the Scriptures are clear about their purpose. Even so, they are portrayed as beings "higher" than man (Psalm 8:4-9). Also, since angels are not often manifest, they are not available for study as a machine would be. This may be irrelevant to those who find man-made intelligence offensive; I simply want to point out that man's uniqueness does not consist, biblically, in being the only intelligent creature.

As for man-made or machine intelligence, I am willing to wait for future developments. I find that the biblical doctrine of man neither depends upon nor is threatened by the possible answers to the question: Is machine intelligence possible? The question is meaningful, but it is not theologically critical, because the validity of the biblical doctrine of man is not affected by its answer.

Although man's uniqueness is to be found in his covenantal relationship with God, how does this account for man being made in God's image? Biblically, man was created "in the image of God" (Gen. 1:27) and, according to Paul, is intuitively aware of it (Romans 1:19, 20).1 Because the Bible is not explicit about what constitutes this imago Dei, the different characteristics of man to which it is attributed lead to differing views on the theological significance of machine intelligence. To see the image of God as consisting in human intelligence gives cause to view machine intelligence as a threat, but such a conception is more in accordance with Greek than with biblical thought.6 The wholistic view of man in the Bible offers a clue to how this image may be recognized.5,15

Earlier, I argued that a purpose underlies the direction our conceptualizing takes. This purpose integrates our sequence of thoughts so that together, as parts of a whole, they reveal comprehensive features of the subject-matter we are thinking about. Further acts of comprehension may in turn depend upon previous comprehensive insights, which become parts in a still larger whole. Consequently, our understanding of the subject-matter increases in its generality, guided by our even more general purpose for understanding. It is in man's recognized purpose for his existence that his relation to God is made known to him, and from this awareness follow all the more detailed levels of understanding and acting. What results is a continuum from the most basic and profound level of awareness of God as our purposive Creator, to the most immediate and direct level of physical actions by us. Body and mind or spirit are, in this sense, one comprehensive whole that is man.


What is meant by intelligence can be either an objective account of my cognitive mechanism or an introspection of my own conscious experience. Although the former meaning is the required one for science, it is not sufficient for the intelligent being it describes. The controversies over the meaning of machine intelligence often overlook this distinction. It is inadequate for either scientists or those upholding man's significance (as is expected of theologians) to assume that their individual accounts are sufficient. Both personal and scientific aspects of intelligence are necessary to a more complete understanding of it.


References

1Berkhof, L., Systematic Theology, Eerdmans, 1941, p. 206
2Brinsmead, Robert D., "Man (Part 1)", Verdict, vol. 1, no. 1, August 1978, pp. 6-26
3Harris, R. Laird, "The God of the Gaps", Journal ASA, vol. 15, no. 4, December 1963, p. 101ff
4Hofstadter, Douglas R., Gödel, Escher, Bach: An Eternal Golden Braid, Basic Books, 1979, pp. 358, 359; 384, 471
5Jennings, George J., "Some Comments on the Soul as Developed in Orthodox Christianity", Journal ASA, vol. 19, no. 1, March 1967, p. 7ff
6Ladd, George Eldon, "The Greek Versus the Hebrew View of Man", Present Truth, vol. 6, no. 1, February 1977, pp. 6-18
7Lucas, J.R., "Minds, Machines and Gödel", Philosophy, vol. XXXVI (1961)
8MacKay, Donald, The Clockwork Image, IVP, pp. 66-83, 111
9Nilsson, Nils, "Artificial Intelligence", Information Processing 74: Proceedings of the IFIP Congress 74, p. 778ff
10Nilsson, Nils, Principles of Artificial Intelligence, Tioga, 1980, pp. 10-14
11Polanyi, Michael, The Study of Man, U. of Chicago Press, 1959, p. 23ff
12Scriven, Michael, "The Mechanical Concept of Mind", Mind, vol. LXII, no. 246 (1953), reprinted in Minds and Machines, ed. Alan Ross Anderson, Prentice-Hall, pp. 36-39
13Taube, Mortimer, Computers and Common Sense, McGraw-Hill/Columbia University Press, 1961
14Turing, Alan M., "Computing Machinery and Intelligence", in Computers and Thought, ed. Edward Feigenbaum and Julian Feldman, McGraw-Hill, 1963, pp. 11-35
15Van Vliet, K.M., "What is the Meaning of Soul and its Connection to the Body?", Journal ASA, vol. 19, no. 1, March 1967, p. 2ff
16Weizenbaum, Joseph, Computer Power and Human Reason: From Judgment to Calculation, Freeman, 1976
17Williams, Leland H., "A Christian View of the Computer Revolution", Journal ASA, vol. 18, no. 2, June 1966, pp. 36, 37
18Winston, Patrick Henry, Artificial Intelligence, Addison-Wesley, 1977, pp. 6-12; 235-255
19McCarthy, J. and P.J. Hayes, "Some Philosophical Problems from the Standpoint of AI", Machine Intelligence 4