How Smart are Computers? (Nov, 1961)

I wonder what he’d have thought of Watson.

How Smart are Computers?

BY J. R. PIERCE

ACCORDING TO DOCTOR PIERCE: “The chief charm of the computer comes from the mental skill, agility and insight which men called programmers exercise in causing it to solve difficult and fascinating problems.”
“Though faster, computers are less versatile than human beings, because they are formed of fewer parts.”
“The principal limiting factor of the computer is human direction in the form of a program which will guide the machine in a given task.”

When I leave my office to confer with the mathematicians at the Bell Labs, I walk along a corridor with a long glass window on the right-hand wall. Through the window I look into a room about ninety feet long and forty feet wide. There I see a digital computer.

I see fourteen glass-fronted white and blue and gray cabinets containing reels of the familiar magnetic tape. These reels occasionally spin backward or forward as the computer reads numbers from or records numbers on a tape. At one end of the computer is a device that prints numbers and letters on paper, a line at a time. The heart of the computer, the central calculator, is hidden behind the tape units, and so are the memory units in which the numbers are stored during computations. I cannot tell what the computer is doing by looking at it. Even if I could, I couldn’t find words to describe its actions as fast as they occur. During one recent afternoon the Bell computer:
• Checked parts of a computer program used in connection with machine methods for processing manufacturing information.

• Processed and analyzed data on telephone transmission which had been transmitted to the laboratories by teletypewriter and automatically punched on cards for computer processing.

• Solved a partial differential equation.

• Computed details of the earth’s magnetic field.

• Checked part of a program used to handle programing cards.

• Fitted curves to some data by translating numerical values into graphs.

• Located an error in a program designed to process psychological data.

All this took the computer three minutes. On another occasion, in eight and a half minutes, the computer proved all of the 350 theorems in mathematical logic which occur in the first thirteen chapters of Principia Mathematica, by Russell and Whitehead. The machine has also computed which classes of telephone service would be most economical for selected groups of customers under a new rate structure and what their bills would be. It has simulated the flight of a missile. It has specified the arrangement of electric subassemblies with the shortest interconnecting leads and made the necessary diagrams of the mounting positions and wiring. It has generated musical sounds and artificial speech and song.

These are some of the things men do with computers, and it is what men do with computers that makes them interesting. To draw an analogy, an automatic dishwasher is more complicated than a Stradivarius, but its performance is as dull as dishwater compared with that of the violin in the hands of an artist. As an electrical device, a computer is less various, less complicated and less interesting to an engineer than the telephone system of a large city or the SAGE air-defense system. The chief charm of the computer comes from the mental skill, agility and insight which men called programmers exercise in causing it to solve difficult and fascinating problems.

Of course, the computer itself is just as worthy of thought and respect as a fine violin. We must understand something about the construction of a computer in order to understand its use. But it is really the manner of use that counts.

The computer is a device for manipulating numbers. So, in an elementary way, is an adding machine. In an adding machine numbers are represented by means of little wheels. Each wheel can be in any of ten positions, exposing one of the digits, 0 through 9. Instead of such mechanical wheels, a computer makes use of electronic devices: wires passing through magnetic rings, or “cores,” which can be magnetized in the direction of one pole or the other; magnetic tapes on which a small region can be similarly magnetized; transistor or vacuum-tube circuits which can be either “off” or “on”; wires in which a current can flow in one direction or the other, and perforated cards or tapes which activate the machine.

We see that all of these computer devices have a binary character—that is, in each step there are two choices: north or south, off or on, hole or no hole. This does not lend itself to the ten digits of the decimal number system we use in our everyday computations, but it does to the two digits, 0 and 1, of the binary number system. For this reason, computers are built to use binary numbers.

Let us see how this applies. In a decimal number the rightmost digit specifies the number of units, the next to the left the number of tens, the next the number of hundreds, and so on. Thus, 13 is three units and one ten. In the binary system the rightmost digit specifies the number of units, the next to the left the number of twos, the next to the left the number of fours, the next to the left the number of eights, and so on. To illustrate: one eight plus one four plus no twos plus one unit is 1101 in binary notation—adding up to 13 in the decimal system. There are simple rules for adding, subtracting, multiplying and dividing binary numbers which make these processes applicable to the “either-or” operation of the computer.
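To make the positional rule concrete, here is a short sketch in Python (a modern illustration, of course, not anything a 1961 machine ran) that converts numbers between the two notations:

```python
# Positional arithmetic, as described above: each binary digit counts
# units, twos, fours, eights, and so on.

def binary_to_decimal(bits: str) -> int:
    """Sum each digit times its power of two, rightmost digit first."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * (2 ** position)
    return total

def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by two, collecting the remainders."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))
        n //= 2
    return "".join(reversed(digits))

print(binary_to_decimal("1101"))  # 13: one eight plus one four plus one unit
print(decimal_to_binary(13))      # "1101"
```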

A computer is equipped with a “memory” made up of groups of magnetic cores. Each core of a group can store one digit of a binary number, by being magnetized in one direction for a 0 or in the other direction for a 1. The digit can be read into or read out of the core by passing a current through a wire which threads the core. The digit remains in the core (that is, it is memorized) until expunged by a subsequent suitable current.

The first requirement in using a computer is to put appropriate binary numbers into the core memory. This is usually done in a sequence of steps. The numbers are first written on a sheet of paper, punched into cards, and transcribed to magnetic tape, from which the computer reads them into its memory. At the end of a computation the results, having been stored in the computer’s memory, are recorded on magnetic tape and then printed.

The computer can use the numbers in its memory in two ways. Some numbers represent the data to be processed, and the computer stores the intermediate and the final results of its computations in its memory. Other memory positions are reserved for numbers representing commands to act upon the data.

In making a calculation, the central calculator will take its first command from the first of a particular set of memory locations. This command, for example, might be to take a number out of a designated memory position and put it into a register in the central calculator. The computer would then take the next command from the next memory location. This command might make the computer take another number from another memory location and multiply it by the number that had been put into the register. Another command would then store the product in a specified memory location.

In this simple step-by-step process, each command involves only the number in the register and the number in one designated memory location. The command directs the machine to perform one elementary operation: addition, subtraction, multiplication, division or any of a number of others.

Though the individual operation is simple, the over-all result can be very complicated. The command itself may tell the computer where to look in the memory for the next command. For instance, the command may tell the computer to go back to some earlier part of the program, to examine the result of a step, such as a subtraction, and subsequently go to one or another place, or to use some standard program, which is always available on magnetic tape, to perform a number of steps in its computations.
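To make the fetch-and-execute cycle concrete, here is a minimal sketch in Python of such a machine: memory holds both commands and data, a single register does the arithmetic, and a jump command decides where the next command comes from. The command names and the one-register design are invented for illustration; the Bell machine’s actual order code was richer.

```python
def run(program, memory):
    register = 0
    pc = 0                       # index of the next command to fetch
    while True:
        op, addr = program[pc]
        pc += 1                  # normally commands are taken in sequence
        if op == "LOAD":         # copy a memory cell into the register
            register = memory[addr]
        elif op == "ADD":
            register += memory[addr]
        elif op == "SUB":
            register -= memory[addr]
        elif op == "STORE":      # put the register back into memory
            memory[addr] = register
        elif op == "JUMP":       # take the next command from elsewhere
            pc = addr
        elif op == "JUMPZERO":   # branch only if the register holds zero
            if register == 0:
                pc = addr
        elif op == "HALT":
            return memory

# Multiply cells 0 and 1 by repeated addition; the product lands in cell 2.
memory = {0: 6, 1: 7, 2: 0, 3: 1}   # cell 3 holds the constant one
program = [
    ("LOAD", 0), ("JUMPZERO", 8),           # done when the counter is zero
    ("SUB", 3), ("STORE", 0),               # counter = counter - 1
    ("LOAD", 2), ("ADD", 1), ("STORE", 2),  # sum = sum + 7
    ("JUMP", 0),                            # go back and test the counter
    ("HALT", None),
]
print(run(program, memory)[2])              # 42
```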

Computers can solve numerical problems too complicated and tedious for a man to solve even if he had a thousand lifetimes. They can excel man in arranging parts of an apparatus in favorable locations or in scheduling production. They can design certain parts of other computers more swiftly and accurately than a man could. They can imitate the behavior of machines and mechanical processes. If we could formulate accurate rules describing the behavior of our economic system, a computer could be made to simulate that behavior and so foretell the effects of proposed business decisions or government policies.

One might think that a computer would be limited because it deals only with numbers. The numbers, however, can stand for letters, words, mathematical signs, musical notes. Similarly, one might expect any particular computer to have procedural limitations in handling numbers, no matter what they represent. This is not the case. The late A. M. Turing, an English mathematical logician, showed that a simple computer can match the performance of the most elaborate computer.

Turing made a mathematical study of the capabilities of a simple computer called a “Turing machine.” This hypothetical computer employs an infinitely long tape which the machine can move back or forth one step at a time. At each step the machine can read the symbol written on that particular spot on the tape, erase the symbol and write a new one.

Turing proved that a machine of this type, which is called a “universal Turing machine,” could “compute any computable number.” (We have seen that a number can designate a letter, word or anything else.) “Computing any computable number” should be understood as “arriving at any answer if we can specify clearly and exactly a procedure, however complicated, for arriving at that answer.” The result of Turing’s theorizing is extremely important, for it shows that the operational limitations of present computers are not mechanical or electronic—though they can, of course, lack sufficient speed or memory. The principal limiting factor of the computer is human direction in the form of a program which will guide the machine in a given task.
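A much-simplified illustration of such a machine, sketched in Python: a tape, a head that moves one step at a time, and a fixed table of rules. The chosen task, adding one to a binary number, is for illustration only and is far humbler than anything in Turing’s paper.

```python
# A minimal Turing-style machine: a tape, a head, and a table of rules.
# This toy machine adds one to a binary number written on the tape.

def turing_increment(bits: str) -> str:
    tape = {i: b for i, b in enumerate(bits)}
    head = len(bits) - 1            # start at the rightmost digit
    # Rules: on a 1, write 0 and carry left; on a 0 or blank, write 1 and halt.
    while True:
        symbol = tape.get(head, " ")
        if symbol == "1":
            tape[head] = "0"        # 1 plus carry is 0; the carry continues
            head -= 1               # move the tape one step
        else:
            tape[head] = "1"        # absorb the carry and halt
            break
    return "".join(tape[i] for i in sorted(tape)).strip()

print(turing_increment("1101"))  # 1110: thirteen plus one is fourteen
```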

Thus programing is the greatest intellectual challenge in the world of computers. Without a program, a computer is a senseless heap of complexity. With a program, a computer may be an accounting machine, a model of a battle or a checkers-playing machine. A computer can carry out only those operations specified by the programmer. If the programmer knows the sequence of steps which will lead to a desired result, he can direct the computer to reach that result. If he does not, the machine is useless. The “how” must come from the human mind.

Certain procedures can be stored in a computer. Also, by using programs called compilers, a computer can be directed to carry out some of the routine chores of programing. A computer can, for instance, translate one form of symbolism to another. The attached table gives an example of part of a program for adding the numbers in two columns, expressed first in a compiler language called FORTRAN (Formula Translation), then as translated by the computer into a language called FAP (FORTRAN Assembly Program), and finally as a translation made by the computer of the FAP commands into the binary machine language.
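As a rough modern stand-in for such a table, the Python sketch below plays the part of a toy compiler: it turns an assignment statement into assembly-style commands and assigns each variable a memory location along the way. The mnemonics echo the 704-family style the FAP name suggests (CLA for clear-and-add, ADD, STO for store), but the translator itself is invented for illustration.

```python
# A toy "compiler": translate "A = B + C" into assembly-style commands,
# assigning each variable a memory location in order of appearance.

def compile_assignment(statement: str):
    target, expression = [s.strip() for s in statement.split("=")]
    operands = [s.strip() for s in expression.split("+")]
    locations = {}
    for name in operands + [target]:
        locations.setdefault(name, len(locations))
    commands = [("CLA", operands[0])]            # clear register, add first
    commands += [("ADD", name) for name in operands[1:]]
    commands.append(("STO", target))             # store the sum in the target
    return commands, locations

commands, locations = compile_assignment("A = B + C")
for op, name in commands:
    print(op, name, "  # memory location", locations[name])
```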

This is literally translating one language into another, but the languages are not natural languages; they are artificial languages devised to be completely unambiguous and logical. In the course of this translation the computer makes an orderly assignment of input data to locations in the memory and so relieves the programmer of this chore. But someone has to write the compiler program which directs the computer to do this; the computer does exactly what it is told to do. Because programs are complicated and because each step of a solution depends on the outcome of previous steps, a programmer may not be able to predict the outcome of a computation. But he knows that the rules used in the various operations will be carried out, for he specified them. A successful outcome depends on whether he specified correctly.

The advent of sophisticated computers has led speculative men to attempt the programing of particularly challenging and difficult tasks such as language translation, recognition of spoken words, proving theorems—with curious results.

While computers can translate one artificial, unambiguous, logical mathematical language into another with ease, efforts to make them translate, say, Russian into English have been disappointing. Dr. J. Francis Weiss of the Library of Congress found that with his knowledge of chemistry he could understand a machine translation of a Russian article on chemistry, but it took him about four times as long to read it as it would to read a conventional translation. Computers can be equipped with microphones to pick up sounds, or with photocells and other scanning devices that can “see” objects. By using such inputs, computers have been made to recognize a few words spoken by a few people, but not many words spoken by many people. The electronic voice-operated typewriter is still far from a reality. Computers have been made to recognize simple, uniform print, but they cannot read fanciful varieties of type face. Also, they cannot recognize a person in a photograph.

Computers can be programed to play simple games, such as tick tack toe, perfectly. A multimillion-dollar computer has been made to play checkers better than most people, but not as well as an expert. Computer-played chess is poor stuff. Some computer programs can prove some mathematical theorems, but rather slowly. One computer proves very simple theorems, such as those in Principia Mathematica, very rapidly. None can compete with a good mathematician.
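The reason tick tack toe yields to perfect play is that the game is small enough to search every continuation to the end. A minimal minimax sketch in Python shows the exhaustive idea (it is a modern illustration, not how the checkers and chess programs of the day were written):

```python
# Perfect tick tack toe by minimax: try every move, assume the opponent
# also plays perfectly, and keep the move with the best guaranteed outcome.

WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for the side to move: +1 win, 0 draw, -1 loss."""
    won = winner(board)
    if won:
        return (1 if won == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                          # board full: a draw
    best_score, best_move = -2, None
    for move in moves:
        board[move] = player
        opponent = "O" if player == "X" else "X"
        score = -minimax(board, opponent)[0]    # opponent's best is our worst
        board[move] = " "                       # undo the trial move
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

board = list("XO XO    ")       # X to move, a win open down the left column
print(minimax(board, "X"))      # (1, 6): X wins by taking square 6
```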

Such experience has a profound effect on the attitude of men who work with computers and other complicated machines. They realize that just as men are not machines, machines are certainly not men; that as the capabilities of airplanes and automobiles differ from those of birds and horses, so do the capabilities of computers differ from those of intelligent beings. Though faster, computers are less versatile than human beings, because they are formed of fewer parts. Moreover, the brains of men and animals are organized in a very complicated way, which we scarcely understand. We can study many of the complicated phenomena of living organisms, but we do not know how they come into play.

If, for example, a computer is to translate faithfully, it must be able to parse a sentence. It must be able to indicate that some combinations of words form clauses, others phrases, and to associate modifiers with the words, phrases or clauses they modify. It must be able to identify the subject and the predicate of the sentence.

To do all this the computer requires explicit rules. Grammar books give rules, but these leave a good deal to human judgment. In a particular sentence, for instance, are the letters l-e-a-d to be interpreted as a noun or a verb? This presents no difficulty to a reader, for he sees that the sentence is grammatical and meaningful with one interpretation and not the other. We find similar difficulties in all parsing. The rules remind us what to do, but they do not tell us in detail how to do it. In view of this handicap, it is remarkable that programmers have made substantial progress in making computers parse simple English text.
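To see how quickly explicit rules run out, here is a toy Python rule for the l-e-a-d example: guess noun or verb from the single word standing before it. Both the word lists and the rule are invented for this illustration, and the case where the rule must give up is exactly the difficulty the article describes.

```python
# A deliberately crude disambiguation rule: classify "lead" as noun or
# verb by looking only at the word immediately before it.

DETERMINERS = {"the", "a", "this", "that", "some"}
PRONOUNS = {"i", "we", "you", "they"}

def tag_lead(sentence: str) -> str:
    words = sentence.lower().replace(".", "").split()
    i = words.index("lead")
    if i == 0:
        return "unknown"
    before = words[i - 1]
    if before in DETERMINERS:
        return "noun"      # "the lead", "some lead"
    if before in PRONOUNS or before == "to":
        return "verb"      # "they lead", "to lead"
    return "unknown"       # the rule gives up; a human reader would not

print(tag_lead("The lead is heavy."))     # noun
print(tag_lead("They lead the parade.")) # verb
print(tag_lead("Heavy lead sinks."))     # unknown
```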

Attempts to make computers do various difficult tasks have shown us how many different abilities we ourselves must use, in complicated combinations, to accomplish such tasks. I have noted, for instance, that computers can identify a few words spoken by a few speakers, but that the identification becomes inaccurate when many words or many speakers are involved. We can understand the words of a speaker with a heavy accent. When we listen to speech in a noisy room we can make use of context and meaning in distinguishing a word, even though the noise obscures it. We cannot give explicit rules to make a computer do this. We can say in general concerning the human performance of complicated tasks that we understand neither the simple abilities involved nor how they are combined to aid and check one another.

In the past a rule was acceptable if it proved useful to men. A theory purporting to explain some human behavior or ability was judged sound if it satisfied our requirements. We now see that many rules and theories serve merely as guide lines for our thinking. Computers and other complicated machines have set a new and higher standard of validity; rules and theories are complete only if they enable us to write a program which will make a computer do the task. A grammar is satisfactory when it is complete and explicit for use in programing a computer to parse a sentence. A theory of human behavior—say, of learning—is valid only if it enables us to program a computer to learn.

It seems to me that this new realization of our lack of understanding and this new and stringent test of the validity of rules and theories are two of the most profound and valuable outcomes of work with computers. Machines can follow only explicit and complete rules and theories. Men should be satisfied only with complete and explicit rules and theories, whether fallible or infallible.

The question of fallibility is important, for there are two courses open in programing computers to do some very complicated tasks, such as proving mathematical theorems. One course is to seek a set of rules which guarantees success, if success is possible. Another is to use what computer experts call heuristic rules. By this they mean that during its work the computer should examine what it has already accomplished and on this basis try certain prescribed procedures that have in the past sometimes led to success, even though it cannot be shown logically that the procedures will lead to success on this occasion.

Proponents of the heuristic approach claim that it corresponds to human behavior, that it is necessary to solve complicated problems, and that its use will enable the machines to solve problems quickly. These assertions may or may not be true; certainly they have not yet been demonstrated. What further work will show is one of the most fascinating questions involving computers.
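The trade-off between the two courses can be seen in a toy problem (chosen here for illustration, and far simpler than theorem proving): find a subset of some numbers adding up to a target. In the Python sketch below, the exhaustive rule succeeds whenever success is possible, while the greedy heuristic answers faster but can fail.

```python
from itertools import combinations

def exhaustive(numbers, target):
    """Try every subset; guaranteed to succeed if success is possible."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return list(subset)
    return None

def greedy(numbers, target):
    """Heuristic: take the largest number that still fits. Often works."""
    chosen, remaining = [], target
    for n in sorted(numbers, reverse=True):
        if n <= remaining:
            chosen.append(n)
            remaining -= n
    return chosen if remaining == 0 else None

numbers = [9, 6, 5, 4]
print(exhaustive(numbers, 10))  # [6, 4]: found by trying possibilities
print(greedy(numbers, 10))      # None: taking 9 first leaves no way to 10
```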

Among the challenging possibilities of the future is an increased linking of machines through electrical communication. From business offices they could be made to transmit information to large computers at headquarters. The problems involved resemble those of our automatic telephone exchanges, for in dialing a long-distance number we control “computers” in a central office, and these in turn control other “computers” far away. Someday large central computers may control small computers in automobiles to give better mobile telephone service.

Another challenge is to enable the expert programmer to use a computer more easily for various important non-numerical tasks, such as the simplification of algebraic expressions and the preparation of indexes. It may even be possible one day to make certain specialized kinds of programing so easy that a novice can write programs after half an hour or so of instruction.

For some tasks there cannot be infallible rules guaranteeing success. Presumably, a person who writes or speaks intends to write or speak particular words, yet some poorly written or slovenly pronounced words are incomprehensible. Thus, we cannot hope to establish completely foolproof rules for the recognition of words or pictures or for the translation of one natural language into another. Even humans make mistakes in such tasks. Human mistakes in this particular area are, however, so few compared to those made by machines that the human performance constitutes an infinitely superior order of activity.

Will we someday make computers perform such tasks better by making them imitate human behavior in detail? At present we cannot answer that question, because we know so little about human beings. For instance, we know very little about what sort of information the sense organs transmit to the brain. We know the human eye cannot perceive an object unless the eye moves continually so as to shift the image on the retina. Complex layers of interconnected cells lie between the light-sensitive cells of the retina and the nerve fibers of the optic nerve, and a complicated processing of nerve signals occurs before information can travel to the brain. The brain itself contains all sorts of obscure nerve interconnections. In brief, we do not know accurately what reaches the brain from the eye, and we have only the vaguest notion of what happens to it when it gets there. That the visual field is situated on the surface of the brain is an interesting fact, but it is not profoundly helpful. It does nothing to explain, for instance, why our mental image of a room stands still while our eyes move from one direction to another.

The study of human anatomy, physiology and psychology is fascinating and rewarding, but I can scarcely believe it will tell us directly and in detail how to make computers perform complicated tasks any more than a study of horses would tell us how to make an automobile. Indeed, it may be more plausible to assume that we will learn about human beings by programing the computers to do more complicated tasks.

10 comments
  1. Jari says: February 16, 2011 12:00 pm

    Charlie: Possibly something like this: “After growing wildly for years, the field of computing appears to be reaching its infancy.” Here’s more: http://en.wikipedia.org…

  2. JMyint says: February 16, 2011 2:07 pm

    Dumber than a box of rocks. It’s the software that is ‘smart’, which is what he says in the beginning of the article.

  3. Myles says: February 17, 2011 1:52 pm

    It’s kind of interesting what we put that computer power to use for. One second of display from a high-resolution game would probably require more calculations than a person could or would want to do in a lifetime.

  4. Hirudinea says: February 18, 2011 5:27 pm

    Well they can win Jeopardy now, next Wipeout!

  5. Jari says: February 19, 2011 12:44 pm

    Hirudinea: So the Big Blue meets the Big Balls :)

  6. John says: February 19, 2011 3:02 pm

    Jari: I suppose it’s funnier in the original Finnish…

  7. Toronto says: February 19, 2011 7:38 pm

    John: the show “Wipeout!” is famous for its Big Balls.

  8. John says: February 19, 2011 8:06 pm

    Toronto: Thanks. I’m not a game show type of guy.

  9. Jari says: February 20, 2011 9:23 am

    John: What Toronto said, nothing Finnish here. And in case you don’t know, Hirudinea was referring to the fact that “Watson,” an IBM-built computer, recently defeated Jeopardy champions, hence the Big Blue.

  10. John says: February 20, 2011 9:45 am

    Jari: Yes I’m quite aware of WATSON, thank you.
