Problems, Too, Have Problems (Oct, 1961)

This is a very forward-thinking article. It talks about a lot of things that are only getting widespread adoption now, including image recognition, parallel processing and, most notably, general-purpose problem solvers like Siri, Wolfram Alpha and Google’s new (and very impressive) Voice Search. I think that what the author, and really everyone else at the time, didn’t anticipate was just how much more complex and miniaturized computers would become, and just how much processing power and data storage would be necessary to perform these tasks.

Problems, Too, Have Problems

by John Pfeiffer

A dialogue, perhaps to become one of the most fruitful in history, has begun between the men who study the human brain and those who design computers. Point of agreement: the brains and the computers need each other desperately.

Ever since man started making tools to tinker with nature one to two million years ago, he has been getting into—and, so far, out of—more and more elaborate kinds of trouble. Many of the problems he is now trying to solve and now creating overtax even his prodigious capacities for coping with complexity. There are more products, more people, more laws and loopholes, more nations, higher risks of head-on conflicts. Furthermore, events unfold so swiftly that decisions had better be right the first time; you may never have a chance to check the answer.

We are transforming a world and shall soon be transforming a solar system, and there is only one possible way to deal with the rising flood of problems of our own making. To avoid painting ourselves into a corner, we need, of course, more education, but we also need more thinking aids.

The future of the human race depends to a large extent on a new field of research that is just beginning to take shape. Its scope and boundaries have not yet been clearly defined. Investigators agree that some of the studies under way are important and some trivial, but they do not always agree as to which is which. Like all growing fields, this one is marked by sharp controversies, exaggerated claims, and mountains of publicity, much of it blatant nonsense. This field of research does not yet even have a name, at least not a name completely acceptable to most of those engaged in experiments. You will hear about research on “artificial intelligence,” “bionics,” even (excuse the expression) “intellectronics.” Such terms tend to imply that investigators are concerned solely with ingenious, brainlike machines; yet work is proceeding at a much broader and more fundamental level. The basic emphasis of the new field is on problem solving in general rather than the solution of individual problems. It includes some studies of electronic machines and particularly fast general-purpose computers, a new breed of problem solver. But the development of new thinking aids demands new knowledge about the way the human brain goes about problem solving. The brain (some will be reassured and others frightened to learn) is still the champion problem solver, and no serious challenger to its all-round supremacy is in sight.

Studies of computers and of brains may therefore go hand in hand. The interconnections among the components of a large computer are so complicated that no one man knows its wiring diagram in full detail—or, as a matter of fact, knows whether the machine is actually wired according to the diagram. But comparing the machine’s circuitry to that of the brain is like comparing the roads of a backwoods village to the maze of ramps and underpasses and clover leaves of a superhighway system. The fundamental objective of brain research is to understand how networks of nerve cells build images of events and objects, store memories, form abstractions, learn, and routinely solve problems beyond the abilities of existing machines. Although a vast amount remains to be discovered, current studies are yielding fresh insights at a rate that could not have been predicted a decade or so ago.

At present only fifty to a hundred investigators, concentrated chiefly at M.I.T., I.B.M., the Rand Corporation, Bell Telephone Laboratories, Carnegie Tech, the University of Illinois, and a few other institutions, are involved in basic research on problem solving. But rapid expansion is expected. In a sense, research on problem solving indicates, more forcefully than perhaps any other field, the shape of the future. It will certainly bring more advanced computers and other thinking aids, and these machines will certainly draw on and contribute to new knowledge about the brain. Above all, however, there is the move toward a unique and subtle and unpredictably fruitful collaboration. Machines are participating more actively, more intelligently if you will, in problem solving, and that foreshadows an enormous extension of the already impressive powers of the human mind.

The machines assisting us now are Neanderthal models compared to what is coming. Investigators in the field of problem solving have ambitious plans, and this article presents some of their basic experiments and theories.

The brute in chess.

One method of solving problems guarantees success in many cases, at least theoretically. For example, you could work out a jigsaw puzzle by trying every possibility without bothering to observe the shapes of individual pieces; you could pick up pieces at random until you find a pair that fit together, continue picking up pieces until you find one that makes a three-piece combination, and so on and on. Sooner or later the procedure will yield a solution to any jigsaw puzzle. If complete instructions existed for chess—say, a list of all possible positions with the best move for each position—anyone could play a perfect game. Of course, it would take centuries to solve a moderately intricate jigsaw puzzle in this fashion and eons merely to write out the chess instructions. In principle, however, a fast enough machine or a creature with infinite time on its hands could solve a large number of difficult problems using this blind, brute-force approach.
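The blind method is easy to put in modern terms. Here is a minimal Python sketch (the toy puzzle is invented for illustration): reassemble a scrambled word by trying every ordering of its fragments, using no knowledge of the pieces at all.

```python
# A toy illustration of blind, brute-force search: reassemble a scrambled
# word by trying every ordering of its fragments. Nothing here looks at
# the "shape" of a piece, only whether the assembled whole fits.
from itertools import permutations

fragments = ["LEM", "PRO", "S", "B"]            # pieces of "PROBLEMS"

def fits(candidate):
    # The only test allowed: does the assembled whole match?
    return candidate == "PROBLEMS"

for order in permutations(fragments):           # try every possibility
    candidate = "".join(order)
    if fits(candidate):
        print("solved by blind search:", candidate)
        break
```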

We are too slow and too short-lived to play perfect chess, or to do anything perfectly that is worth doing at all. Still we deal with problems enormously more complex than chess, and we have survived by being right or nearly right in a sufficiently high proportion of cases. Instead of trying all possibilities, we select from a vast number of possibilities on the basis of experience and intuition and the learning of special tricks and short cuts.

A new phase in understanding such amazingly successful methods began about six years ago with a study by Allen Newell and J. C. Shaw of the Rand Corporation and Herbert A. Simon of Carnegie Tech. They prepared a program, or set of instructions, that enables a computer to prove elementary theorems in symbolic logic, a kind of algebra concerned with statements rather than numbers. A typical problem would be to show by a formal proof that “if . . . then” sentences (e.g., “if viewers protest, then television programs will improve”) are logically equivalent to “or” sentences (e.g., “viewers do not protest or television programs will improve”). Special symbols represent statements and relationships among statements.

Newell, Shaw, and Simon coded this information on punched cards together with rules for manipulating the symbols. Then they fed the program to a Rand computer known as JOHNNIAC and instructed it to try proving fifty-two elementary theorems selected from Principia Mathematica, the great treatise on logic by Alfred North Whitehead and Bertrand Russell. The machine succeeded in proving all but fourteen of the theorems, thereby earning a grade of 73 per cent, hardly up to high-honor standards. The time required for a proof varied from ten seconds to forty-five minutes, and JOHNNIAC could have found about a dozen further proofs if it had had a larger memory and another thousand hours of computing time.

Now in fact a program using very advanced mathematical theorems can get faster answers to the same problems. But the Rand-Carnegie Tech team was interested in finding what light a computer might throw on how humans solve problems. They analyzed their own methods, such as referring to appropriate rules of inference and dividing a difficult problem into sub-problems that could be solved more readily, and instructed the machine to operate accordingly. Theirs was a pioneer demonstration that instructions embodied in a deck of cards could serve as a model of human behavior. The model is far from perfect. At least one of the proofs, a proof that students find readily, is too much for it. But studying just such limitations is one way of designing more refined models, and the present version has shown at least a touch of brilliance. It succeeded in finding one proof shorter and more elegant than that used by the authors of Principia Mathematica.

Similar models have since been developed for solving problems in plane geometry, freshman calculus, and other areas. Chess is of special interest, because it requires high intelligence and its strategies resemble those used in designing communications and air-defense systems. Early chess-playing programs used bulldozer tactics to examine tens of thousands of positions for every move and, even then, could just about beat a weak player. Newell, Shaw, and Simon are working on a program that plays far better chess and examines an average of only twenty positions per move. It eliminates many possibilities by considering such factors as king safety and center control.

The trust officer that whirs.

Another project indicates how these techniques will be applied. Early in 1959, Geoffrey P. Clarkson, a twenty-four-year-old graduate student at Carnegie Tech, approached the investment trust officer of a Pittsburgh bank: “I told him I was trying to understand what he was doing and put it on a machine. He was worried at first, but became very interested later on.” Clarkson asked questions, attended committee meetings, and prepared the information for coding on punched cards.

The resulting program simulates a trust officer investing funds in common stocks. It analyzes data concerning everything from the state of the general economy to the past, present, and future expectations of individual companies—and comes up with portfolios tailored to the requirements of the particular client. In a typical test a computer used the program to work out the portfolio for an actual client, a widow under fifty-five years old who had $28,000 to invest and wanted a steady income to supplement her salary as a schoolteacher. The machine selected a diversified portfolio consisting of 100 shares for each of eight companies and yielding an estimated dividend of 4.9 per cent. A checkup of the bank’s records for this client showed a neat match. The trust officer had also selected 100 shares for eight companies (in seven out of eight cases, the same companies that the machine selected), and his portfolio brought the same yield.

Simulating human behavior in practical situations often turns out to be less difficult than might be expected. Ask a man to tell you how he goes about weighing salary, job security, and the quality of his co-workers in choosing a job; he will tell you that they all depend on each other. But investigators at M.I.T.’s Lincoln Laboratory have found that in mimicking his mental process on a computer it is not necessary to worry about the way the three considerations affect each other. They devised a test that requires a man to choose one of two hypothetical jobs. They then had him estimate the importance he attached to the three criteria. They found that a computer, programed to combine these separate estimates, came out rather close to the job choices made by the group tested. The supposed interdependence of the criteria could just be ignored.
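The Lincoln finding amounts to a simple additive model. A toy sketch (the jobs, criteria, and weights are invented for illustration), assuming each criterion can be scored and weighted independently:

```python
# A toy additive model of job choice: each criterion is scored on its own
# and the weighted sum decides, ignoring interaction among the criteria.
def choose(jobs, weights):
    score = lambda job: sum(w * job[k] for k, w in weights.items())
    return max(jobs, key=score)

jobs = [
    {"name": "A", "salary": 7, "security": 4, "coworkers": 8},
    {"name": "B", "salary": 9, "security": 6, "coworkers": 3},
]
weights = {"salary": 0.5, "security": 0.3, "coworkers": 0.2}
print(choose(jobs, weights)["name"])   # -> B
```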

This test and others reveal a significant aspect of problem solving. Confronted with crucial problems, people either crack up and behave foolishly, or else simplify things sufficiently to arrive at clear-cut decisions. Simplification becomes particularly important when the pressure is on, when major problems arise rapidly and suddenly— for example, in air-traffic control or military emergencies. It is precisely in these situations that we shall depend more and more on machines to assist harried men. A recent Lincoln report summarizes current trends: “In the next few years [computers] will be required to make decisions calling for even more judgment than is now needed, more speed will be required, and the number of decisions will be larger . . . The sheer volume of good judgment demanded per minute will increase.”

Investigators are already considering more advanced programs—for example, learning programs. Arthur L. Samuel of I.B.M. has a self-improving checker-playing program. Formerly, he could beat it, but now he is no longer in its class. A “rote learning” checkers program (see diagram, page 145) enables the computer to accumulate experience by probing deeper and deeper every time it encounters a familiar and previously analyzed position. With a few thousand more hours of playing, it could probably beat a master or at least hold its own. A faster program may be even more effective. Samuel has compiled a list of thirty-one criteria by which positions may be judged, and the problem is to discover the relative importance of each in evaluating positions. The machine is now programed to do this by replaying a master’s game, assuming each move is the right one, and adjusting the criteria accordingly. It has digested four books of games to date (at a rate of thirty moves a second), and ten to twenty books more may bring it to tournament level.
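A minimal sketch of the weighted-criteria idea (the features, data, and update rule are invented for illustration, in the spirit of the method rather than a copy of Samuel's program): score a position as a weighted sum of criteria, then, assuming each of a master's moves is right, nudge the weights whenever some alternative currently scores higher.

```python
# A sketch of learning criteria weights from a master's games (invented
# details, not Samuel's code): when an alternative move outscores the
# master's, shift weight toward the features the chosen move has more of.
def score(features, weights):
    # Evaluate a position as a weighted sum of its criteria.
    return sum(f * w for f, w in zip(features, weights))

def learn_from_master(chosen, alternatives, weights, rate=0.01):
    for alt in alternatives:
        if score(alt, weights) >= score(chosen, weights):
            for i, (c, a) in enumerate(zip(chosen, alt)):
                weights[i] += rate * (c - a)   # favor the master's features
    return weights

# Toy example with three criteria (say, piece count, mobility, king safety):
weights = [0.0, 0.0, 0.0]
chosen = [2, 5, 1]                     # feature values of the master's move
alternatives = [[3, 1, 0], [2, 2, 2]]  # feature values of moves not chosen
print(learn_from_master(chosen, alternatives, weights))
```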

Will there be a GPS in your home?

Playing games, investing money, and solving problems in logic represent only fragments of what the mind can do. Why not devise a set of instructions that spell out the underlying processes we use to solve a wide variety of problems, a way of simulating the full powers of the mind? A number of investigators are taking first steps toward the discovery of processes common to the solution of different types of problems.

Newell, Shaw, and Simon have analyzed the statements of people thinking aloud as they solve problems, and are working on a program known as the General Problem Solver or GPS. The main part of this program consists of instructions that tell a machine to set up three abstract goals: (1) transform object A into object B; (2) apply an “operator” to object A; and (3) reduce the difference between object A and object B. An object by definition is practically anything. Object A, for example, may be an expression in symbolic logic or a position in a chess game or an investment situation. Then object B would be the logical expression to be proved or the position that checkmates the opponent’s king or a well-chosen portfolio. The appropriate “operator” is a way of changing A to B; “reducing the difference” simply means getting closer to your objective.
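The three goals translate naturally into a loop. A minimal means-ends sketch in the spirit of GPS (the objects, operators, and investment example are invented; the real program also sets up sub-goals when an operator's preconditions are unmet, which this greedy version omits):

```python
# A minimal means-ends loop in the spirit of GPS's three goals.
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    name: str
    adds: str          # the difference this operator reduces
    needs: frozenset   # preconditions on the current object

    def applicable(self, state):
        return self.needs <= state

    def apply(self, state):
        return state | {self.adds}

def gps(state, goal, ops):
    plan = []
    while not goal <= state:                    # goal 1: transform A into B
        missing = next(iter(goal - state))      # goal 3: find a difference
        op = next(o for o in ops
                  if o.adds == missing and o.applicable(state))
        state = op.apply(state)                 # goal 2: apply an operator
        plan.append(op.name)
    return plan

start = frozenset({"cash on hand"})
goal = frozenset({"steady income", "diversified holdings"})
ops = [Op("buy utility shares", "steady income", frozenset({"cash on hand"})),
       Op("spread purchases", "diversified holdings",
          frozenset({"cash on hand"}))]
print(gps(start, goal, ops))
```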

The important thing about these goals is that they can be used over and over again. Since there is no magic formula for winning at chess or making good investments, you proceed by setting up intermediate objectives such as finding a new position or the best stocks in a given industry. Attaining such objectives brings you that much closer to a checkmate position or the desired portfolio. The GPS program consists of a basic deck of punched cards incorporating about 10,000 instructions that tell a computer how to apply the three goals. Additional instructions, about 1,000 or so, may specify conditions unique to specific problems. Right now, GPS is capable of solving certain problems in logic, chess programing, algebra, and trigonometry—and perhaps addition ciphers such as GERALD plus DONALD equals ROBERT, where D stands for five and you must find out what numbers the other letters stand for.
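The cipher at the end of that list is small enough to check by machine. A brute-force sketch (using blind search rather than GPS's goal-directed method) that finds the assignment of digits:

```python
# Brute-force solution of the cipher named above: assign distinct digits
# so that GERALD + DONALD = ROBERT, with D fixed at five as stated.
from itertools import permutations

def solve():
    letters = "GERALONBT"                        # the nine letters besides D
    for digits in permutations([0, 1, 2, 3, 4, 6, 7, 8, 9]):
        env = dict(zip(letters, digits), D=5)
        if env["G"] == 0 or env["R"] == 0:       # no leading zeros
            continue
        num = lambda w: int("".join(str(env[c]) for c in w))
        if num("GERALD") + num("DONALD") == num("ROBERT"):
            return env

print(solve())   # G=1 E=9 R=7 A=4 L=8 O=2 N=6 B=3 T=0
```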

Notice that these problems have clear-cut starting points and clear-cut solutions. An interesting question is whether GPS principles can be used to handle less well-defined problems—for example, to program a computer for composing original music. To answer this question Walter R. Reitman and Marta Sanchez of Carnegie Tech, a psychologist and a music professor, are working on a long-term project to simulate an anonymous composer. They have not yet succeeded in simulating this or any other composer. Reitman compares their progress so far to that of an engineer who wants to make a new car and is still concentrating on the design of spark plugs. But in preliminary tests a computer has already been programed to compose simple melodies and is shortly expected to compose chorales in the style of Bach. GPS is continually being refined and broadened. It can already transform a complex problem into a simpler one that, if solved, leads directly to a solution of the original version.

The greatest computer.

From the standpoint of advances made since 1950, problem-solving specialists have come a long way. But from another standpoint things are somewhat less impressive. If the human brain is taken as a measure of the quality of our efforts to date, a thoughtful look at what it can do should be enough to postpone overoptimistic claims about machine substitutes. There is an almost immeasurable difference between what is being achieved now with cleverly programed computers and what could be achieved if computers were in the brain’s class.

Imagine a robot, watching a screen made up of many flashing lights. The lights are signals and when certain patterns of signals appear on the screen, the watcher presses a button that transmits signals to other screens where more lights flash. He does this second after second, year after year. Imagine a communication system made up of ten billion such beings. Each of them is mindless, and yet the system somehow makes plans and discoveries, formulates theories, dreams and loves and hates.

The brain is a system of this sort. Its elements are neurons, cells specialized to receive signals in the form of electrical pulses and to “fire” or transmit signals to other cells in response to appropriate patterns of incoming signals. Neurons serve as versatile, automatic units— miniature robots buzzing with activity. There are an estimated 5,000 million firings every second of your life, in response to information flowing into the nervous system from the outer world and from the inner world of the body. Neurons may also fire in response to nothing in particular, emitting signals spontaneously.

One way of investigating the brain is to design devices that do some of the things that neurons do. For example, Leon D. Harmon and his associates at Bell Telephone Laboratories have used transistors and other components to build artificial neurons, and tested them in circuits containing up to about twenty units, for the purpose of imitating and analyzing some elementary functions of elementary sense organs. Circuits of a thousand or more artificial neurons, however, involve wiring so complicated that the system is simulated on a computer, the way an investor or chess player may be simulated.

The brain’s circuitry is something else again. Each of its ten billion neurons makes connections with a hundred or more other neurons, and the microscope shows only masses of close-packed cell bodies and twisting fibers. One region is concerned with the control of breathing, another with thirst mechanisms, still another with speech. But they all look pretty much alike. Finding pathways in and through this wilderness is the job of workers like Walle J. H. Nauta, a leading brain anatomist at the Walter Reed Army Institute of Research. He recently discovered a pathway running downstream from certain forebrain structures to the hypothalamus (a center regulating hormone production, among other things) to a region still lower down in the brain stem. Nauta spent four years tracing the pathway, which appears to be involved in emotional behavior and attention, and prepared about 10,000 slides of tissue for examination under the microscope.

Pattern recognition.

Anatomy provides fundamental information on the circuitry of the nervous system. But even relatively simple processes, such as recognizing the letters of the alphabet, cannot yet be explained on an anatomical basis. We recognize an “A” whether it is large or small, near or far, upright or tilted or upside down, printed in any one of dozens of type forms or written sloppily. The brain learns to discard all the features of all these A’s except those features common to the whole lot. This is a powerful way of arriving at ideas in general. The notion of number or form, for example, depends on ridding groups of widely dissimilar things of all but one of their characteristics. Thus the sides of a river, a person’s hands, any pair of words, and a man and his son share the common property of two-ness. Circularity is a feature of the image of the full moon, the cross section of a tree, swirling whirlpools, arguments that get nowhere.

A spurt in research on artificial pattern recognizers followed publication of a trail-blazing study by Oliver G. Selfridge and Gerald P. Dinneen of the Lincoln Laboratory in 1955. One computer distinguishes samples of ten handprinted letters by noting curved strokes, vertical lines, crossbars, and twenty-five other features. A “listening” computer at the Air Force Cambridge Research Laboratories recognizes a few spoken one-syllable words, and enough is known at present to build a machine that could handle 500 or more such words. No machine recognizes written or spoken sentences. But existing devices are forerunners of such machines as speech typewriters and robot gun directors capable of responding directly to barked-out orders.
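Feature-based recognition of this kind can be sketched in a few lines. A toy illustration (the features and templates are invented, far cruder than the Lincoln work): describe each letter by counts of a few strokes, then classify an unknown sample by the nearest feature vector.

```python
# A toy feature-based recognizer: each letter is a vector of crude feature
# counts, and an unknown sample gets the label of the nearest template.
TEMPLATES = {                 # (curved strokes, vertical lines, crossbars)
    "A": (0, 2, 1),
    "B": (2, 1, 0),
    "L": (0, 1, 0),
}

def classify(sample):
    distance = lambda t: sum(abs(s - f) for s, f in zip(sample, t))
    return min(TEMPLATES, key=lambda letter: distance(TEMPLATES[letter]))

print(classify((0, 2, 1)))    # -> A
```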

What a frog sees.

Little is known about the living pattern recognizers in anything as complex as the human brain, although important evidence is coming from research on the brain of a humbler species. Jerome Y. Lettvin, Humberto R. Maturana, Warren S. McCulloch, and Walter H. Pitts of M.I.T. have picked up signals from the optic nerves of frogs watching events in the outer world. They used microelectrodes to record impulses in thousands of individual nerve strands less than 1/25,000 inch in diameter, and found that the optic nerve includes four kinds of fibers. (See diagram page 147.) Certain fibers serve as “bug detectors,” responding only to the curved edges of objects that are moving or have just moved. (A frog goes after moving targets only. It will starve to death among freshly killed insects.) Other fibers respond to sharp boundaries, changing distributions of light, and any general lowering of illumination such as might be produced by a bird’s shadow.

In other words, the frog’s eye does not see all of the real world, but only what it is built to see. It can never detect the fact that insects have six legs, pincers, antennae, and so on. It creates an abstract model of an insect, a symbolized insect, and detects only the features or patterns that during the course of evolution have been found to be significant in avoiding enemies and obtaining food. Out of such selected patterns the frog’s brain creates models of the frog’s world, a limited world but one that has had frogs in it for some two hundred million years.

There is much more to these experiments. In a recent study Lettvin and his associates show that four types of neurons are associated with the four types of pattern-detecting fibers. Fibers responding to sharp boundaries, for example, come from small cells that look something like flat-topped trees; bug-detector fibers come from larger cells whose branches are arranged in characteristic two-layer systems. So a relationship exists between the shapes of neurons and the duties they carry out, between anatomy and function. “We are dealing with four ideograms here, four anatomical symbols,” Lettvin says. “It is essentially a deciphering problem like understanding ancient languages or enemy messages.” Since similar cells are found in the optic nerves of all species from frog to man, cracking anatomical ciphers provides clues to the general design of living pattern recognizers.

Codes in the head.

But anatomy by itself may not be enough to reveal how the patterns are stored. When you hear a faint sound, about fifty electrical pulses a second travel along the auditory nerve fibers to hearing centers in the brain. The loudest sounds may produce several hundred pulses a second, and intermediate rates represent intermediate intensities. Other patterns of pulses represent tastes, odors, shadings, colors, etc. Such moving or “running” codes must somehow be transformed, frozen in place as fixed codes or records as clear-cut and enduring as letters engraved in stone.

This ability may vanish because of injury to the limbic system, a group of nerve structures buried near the border between the cerebral hemispheres. For example, a onetime postman in Boston can retain information for just about three minutes. He can repeat a joke told two minutes ago; a four-minute-old joke is lost to him forever. Signals representing the present produce fleeting disturbances among his nerve cells, which reverberate and die away like echoes in a cave. Although the patient remembers events that occurred before his illness, he has stored nothing in his head since then.

One theory about how we form permanent memory traces draws on new discoveries in biochemical genetics. Genes are records of times past, transmitting from generation to generation the information necessary to reproduce those traits that have proved most valuable for survival. They are giant molecules of materials known as nucleic acids, and all inherited information is built into them in a kind of chemical code. Holger Hyden, at the University of Goteborg in Sweden, reports evidence which suggests that coded nucleic acid molecules may hold records of individual pasts as well as of the past of the species.

He has observed chemical changes in the brains of rats trying to maintain equilibrium on a seesaw device. While a rat is learning to do the trick, neurons in a balancing center of its brain show a sharp rise in RNA or ribonucleic acid, one of the substances that may carry genetic codes. Later the RNA in the neurons falls back to previous levels, but levels rise permanently in another type of nerve cell, the so-called “glia,” which outnumber neurons by at least ten to one in the brain and may be more important in mental processes than has hitherto been suspected. The inference is that the RNA transferred to these cells is somehow effective in fixing codes or memory traces.

The Swedish experiments may someday help us understand how we store our remembrances of things past. At a conservative guess, a person records about 15 million words worth of information during a lifetime and the actual amount may be a thousand or a million times greater. Memory traces representing the information may be stored as chemical patterns imprinted on RNA molecules scattered through the centers of the brain.

Protoplasmic kisses.

Whatever the nature of memory traces, finding them after they have been stored is another problem. According to an old notion, learning involves the growth of new nerve fibers whose tips branch out like shoots among tangled vines and make delicate contact with other fibers. Such “protoplasmic kisses,” to use the words of one investigator, form increasingly intricate networks whose structures embody things learned. If this or an equivalent process actually takes place, recalling a past event would mean that nerve impulses pass along previously traversed neuron nets and eventually reach permanent storage sites. It is something like following marked trails through forests.

Analogous processes have been run on computers. Edward Feigenbaum of the University of California is working on EPAM (Elementary Perceiver and Memorizer), a program that simulates a child learning to read. To illustrate EPAM’s workings, assume that the computer is first instructed to distinguish between two written words. The words are coded on punched cards and stored in the machine’s memory, which may be compared to an array of many rows of tiny electric lamps. Imagine that one row includes nine such lamps, the first two of which are unlighted, the next four lighted, and the last three unlighted. This could be represented as a series of “offs” and “ons” or zeros and ones, 001111000, which might be the code for the word CAT. DOG, on the other hand, might be stored as 110101111.

Consider the start of the learning program, when the machine contains only these two words in its memory. It is instructed to distinguish between them, an easy job since all it has to do is look at the first digits in each code to find a difference. It does roughly what a person does when he distinguishes two coins or two houses on the basis of a single obvious feature—say, size. In effect it performs a test that may be represented by a simple two-branch tree, the left branch representing 0 and the right 1. Now suppose that the word CAR, represented by 011010110, is fed into the machine. Since this word cannot be distinguished from CAT solely on the basis of the first digit in its code, a second test is called for and the tree forms two new branches. A third test would distinguish BALL (101110001) from DOG by forming another pair of branches.

EPAM learns by growing more and more elaborate trees with many branches and sub-branches as it distinguishes among more and more written words. It also grows a tree for distinguishing different codes representing spoken words, and still another tree for codes representing actual objects. Then it learns to associate written words and sounds and images by interlinking the trees properly, just as a child does when it learns that the spelled-out word CAT is associated with the spoken sound KAT and a picture of a cat. Remembering stored associations in this system amounts to retracing a previously formed pathway to its end point. People may or may not learn this way, but EPAM provides a theory that can be put to the test on computers, and it has already explained some of the psychological laboratory findings on human learning.
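EPAM's tree-growing can be put in compact modern form. A toy reconstruction (my own simplification of the description above, not Feigenbaum's program): store each coded word at a leaf, and when two words collide, split the leaf on the first digit where their codes differ.

```python
# A toy discrimination tree in EPAM's spirit: grow a branch only when two
# stored items can no longer be told apart by the existing tests.
class Node:
    def __init__(self):
        self.item = None    # (code, word) stored at a leaf
        self.test = None    # digit position examined at an interior node
        self.kids = {}      # the "0" branch and the "1" branch

    def insert(self, code, word):
        if self.test is not None:                   # interior node: descend
            self.kids[code[self.test]].insert(code, word)
        elif self.item is None:                     # empty leaf: store here
            self.item = (code, word)
        else:                                       # collision: grow branches
            old_code, old_word = self.item
            self.test = next(i for i in range(len(code))
                             if code[i] != old_code[i])
            self.kids = {"0": Node(), "1": Node()}
            self.kids[old_code[self.test]].item = (old_code, old_word)
            self.kids[code[self.test]].item = (code, word)
            self.item = None

    def recall(self, code):
        # Retrace the previously formed pathway to its end point.
        node = self
        while node.test is not None:
            node = node.kids[code[node.test]]
        return node.item[1] if node.item else None

net = Node()
for code, word in [("001111000", "CAT"), ("110101111", "DOG"),
                   ("011010110", "CAR"), ("101110001", "BALL")]:
    net.insert(code, word)
print(net.recall("011010110"))   # -> CAR
```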

The brain has elaborate mechanisms for selecting among many different pathways so that we think and do things in sequences. Karl Pribram of the Stanford University Medical Center believes that one important center for controlling the sequences is the front part of the cerebral cortex, a thin sheet of gray matter that covers the brain. When this part of the cortex is damaged, patients lose much of their ability to order their lives.

Wanted: more intelligence.

The brain represents a continuing challenge to scientists at work on general problem-solving techniques, whether they are interested chiefly in acquiring new knowledge or building new devices. The challenge is heightened by the tantalizing fact that at a basic level brains and computers must operate in a similar manner. The coded patterns of experience are symbols, and all our ideas and emotions and actions are the results of manipulating those symbols according to certain elementary information processes—copying and erasing symbols, comparing symbols, transferring symbols from one place to another, and so on. Computers carry out exactly the same “atomic” processes. They do nothing else whether they are solving mathematical equations or playing chess or applying investment policies.

One objective of brain research is to discover how nerve cells carry out these processes. Biology deals with uncertain and variable phenomena. Animals never behave the same way all the time and neither do the cells of which they are composed. The individual neurons that handle our codes are no more predictable; they function poorly at times and die off. You were born with all the neurons you will ever have. They do not reproduce like most other cells, and a person loses an average of more than a thousand neurons every hour of his life. The brain operates as an ultrareliable system made up of unreliable units. Investigators designing computers for satellite communication networks, for instance, would like to know how to build ultrareliability into their circuits.

They would also like to achieve a better working relationship with computers. Now they must stoop to instruct the machines. They use semi-artificial languages that permit the writing out of super-precise programs, and such precision does not come naturally to most of us. Current plans call for raising computer I.Q.’s so that the computers are easier to work with. At present you do not actually ask the machines questions. You write out long series of instructions which yield answers, and that is the difference between simply asking for a business letter and telling a person every single step required to go to a file and find the letter.

Bert F. Green Jr., Alice Wolf, and Carol Chomsky at the Lincoln Laboratory have developed an “automatic question answerer.” In following their program a computer has access to a vocabulary of about 200 words and the date, place, teams, and scores of every baseball game played in the American League during the 1959 season. The machine determines the meaning of a question by instructions that tell it, among other things, how to parse the question, equate meanings (“Red Sox” equals “the Boston team”), and tie together the various parts of the question into a meaningful unit. It searches the data (in some cases it has to do a good bit of reorganizing and processing) and comes up with the answer. The machine can answer billions of different questions from such simple queries as “Did Chicago play on July 5?” to complex questions like “How many teams won at least two games in July by one run?”
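The flavor of the question answerer can be suggested with a toy sketch (the games, vocabulary, and matching rules are invented; the real program parses full English questions): normalize team synonyms, then scan the season's games for rows satisfying the question's constraints.

```python
# A toy sketch of the lookup behind a question answerer of this kind.
GAMES = [  # (date, place, winner, loser)
    ("July 5", "Chicago", "Chicago", "Boston"),
    ("July 6", "Boston", "Boston", "New York"),
]
SYNONYMS = {"red sox": "Boston", "the boston team": "Boston"}

def normalize(team):
    # Equate meanings, e.g. "Red Sox" equals "the Boston team".
    return SYNONYMS.get(team.lower(), team)

def did_play(team, date):
    team = normalize(team)
    return any(g[0] == date and team in (g[2], g[3]) for g in GAMES)

print(did_play("Red Sox", "July 5"))   # -> True
```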

Improvements are called for, because language often tends to be imprecise. The question: “Did the Orioles win most of their games in July?” has several meanings—for example, “Did the Orioles win more than half of the games they played in July?” or “Did the Orioles win more games in July than in any other month?” or “Did more than half of the Orioles’ total victories for the season occur in July?” Now, when confronted with some ambiguities, the machine stops; for others it chooses a meaning and answers according to the chosen interpretation. Within a year or so it may provide appropriate answers for each possible interpretation, or at least say “Unclear. Please rephrase your question.”

Machines that talk back.

Full-scale, pointed conversations with computers will be a development of the next five or ten years. Suppose an engineer has a vague idea about the sort of new part required for a jet engine or gas-turbine motor. He draws a rough outline of the part on a fluorescent screen connected with a computer and specifies the type of steel he wants. The machine says: “That won’t work. The stresses are too great. Try another shape or another material.” So the engineer draws another preliminary sketch on the screen, and the conversation continues until man and machine have arrived at a refined design.

In conversing with a future machine, designers will first try to state what they are talking about, and it will help them define their notions more clearly. Computers so programed will aid in the shaping of business policies and military strategies as well as engineering parts.

Raising computers a notch or two closer to our intellectual level will also bring greatly improved automatic translating machines. Smoother translations from, say, Russian into English require a deeper understanding of semantics. According to David G. Hays of Rand, that means programing computers to do such things as telling the difference between grammatically similar words like “snowman” and “iceman,” and recognizing that a composite term like “cellophane flower petal curler” refers to a device which curls flowers made of cellophane and not something else. Investigators are just beginning to apply the analysis of meaning to advanced problem-solving programs.

Language and novelty.

Various aspects of language research are involved in practically every effort to design improved programs, because we cannot communicate more effectively with machines unless we learn more about how we communicate with one another. Some kind of deep linguistic analysis may help toward solving the biggest problem of all: how to build originality into a program. Samuel at I.B.M. has already run head on into this problem with the thirty-one position-evaluating criteria of his checker player. There is no guarantee that these represent the best criteria, and the experts have already told him about all they can express as far as their methods are concerned. What he wants now is a machine that finds new criteria.

A possible approach is described in certain theoretical studies by mathematicians John McCarthy and Marvin L. Minsky of M.I.T. Suppose that a special language could be found for expressing Samuel’s checker-playing criteria. Knowing the grammar of the language, a machine could then generate a whole series of new sentences—new criteria and test them in actual checker games. Of course, the new sentences should not be generated completely at random, since that might call for an impractically large amount of trial-and-error testing. The problem would involve programing the machine to produce sentences or criteria that had a reasonably good chance of being relevant to the design of improved strategies. If that problem were solved, the machine would generate novelty and might come up with its own ideas.
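The generate-and-test idea is simple to sketch (the grammar, criteria, and relevance filter below are invented for illustration): enumerate well-formed candidate criteria from a small grammar, discarding the obviously irrelevant before any expensive testing in actual games.

```python
# A toy version of grammar-driven criterion generation.
from itertools import islice, product

QUANTITIES = ["kings", "pieces", "center squares held", "mobile men"]
RELATIONS = ["minus", "plus"]
SIDES = ["for us", "for opponent"]

def candidates():
    # Generate every grammatical "sentence" the tiny grammar allows.
    for q1, rel, q2, side in product(QUANTITIES, RELATIONS, QUANTITIES, SIDES):
        if q1 != q2:                       # crude relevance filter
            yield f"{q1} {rel} {q2}, {side}"

for sentence in islice(candidates(), 5):   # hand these to game testing
    print(sentence)
```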

Achieving originality may require new kinds of computers. Models now available generally solve problems by performing operations one at a time. To do many things at once calls, in effect, for many machines working in parallel and connected with a master machine that makes sense of the combined information. With smaller and smaller electronic parts, such systems may be achieved in a space no larger than that occupied by one of today’s computers.

Research on problem solving is a dramatic sign that evolution is off on a new tack. Something extraordinary is happening, something as extraordinary as the rise of living cells from inanimate crystal stuff. Nature is in the process of creating a strange new kind of symbiosis—a unique and, we may hope, mutually beneficial living together of men and advanced machines. Neither can endure separately. Man is a slow, sloppy, and brilliant thinker; computers are fast, accurate, and stupid. Little can be done to reduce the difference in speed. But it is a safe bet that machines will be behaving far more intelligently than they are now, and that we will be thinking far more precisely.

The Trigger in a Frog’s Eye.

Recent studies indicate the intricate organization of visual mechanisms in the eye of the frog. An insect is flying past, a tidbit of potential food. Its image falls on the frog’s retina, a thin film or screen made up of light-sensitive elements that are known as photoreceptors and change chemically when light strikes them. The chemical changes have a “triggering” effect. They stimulate nearby nerve cells or neurons, causing them to fire and send electrical pulses along the optic nerve toward higher centers concerned with vision.

Neurophysiologist Jerome Y. Lettvin and his associates at M.I.T. have found that there are four types of neurons—shown in yellow, red, green, and blue—each of which is specialized to respond to a different aspect of events under way in the frog’s field of vision. The type of neuron shown in yellow responds only to boundaries, sharply defined edges. Neurons shown in red have been called “bug detectors,” because they detect only convex or positively curved moving edges such as might be produced by the front end of an insect. Neurons shown in green respond to changes in contrast or distribution of light, while those shown in blue respond to any general lowering of illumination.

Fibers from each of the four types of neurons make up the optic nerve and sort themselves out in a higher visual center to form four layers of nerve tissue, maps that display the four basic types of information. Out of this information the frog builds its visual world, a world that does not contain “real” insects—but what might be called abstract models of insects, things with moving and sharp edges that produce shifting shadows.

The electronic moving-edge detector shown at the lower right represents a system of photocells or “electric eyes” connected to a transistorized artificial neuron. The red cell in the middle is an inhibitor. When an object with a convex shape passes over this cell, the active cells around it are permitted to fire. This device represents an effort by Bell Telephone Laboratories investigators to simulate some of the functions of the frog’s “bug detector” neurons. Other laboratories have built artificial neurons, and computers have been programed to simulate neurons in action.
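The inhibitory arrangement can be caricatured in a few lines (the logic below is a guess at the described circuit, not Bell Labs' actual design): the detector fires only when the surrounding cells are stimulated while the center inhibitor is not, roughly the signature of a small convex edge passing over the cluster.

```python
# A guessed-at sketch of the moving-edge detector's logic.
def bug_detector(center_lit, surround):
    # center_lit: whether the middle (inhibitor) cell is stimulated
    # surround:   activity of the ring of cells around it (0 or 1 each)
    return (not center_lit) and sum(surround) >= 3

print(bug_detector(center_lit=False, surround=[1, 1, 1, 0]))  # fires
print(bug_detector(center_lit=True, surround=[1, 1, 1, 1]))   # inhibited
```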

A Young Checker Player Learns to Beat Dad.

Computers equipped with foresight can improve their performances, as demonstrated by I.B.M. scientist Arthur L. Samuel. A standard computer following his programs—sets of instructions coded on punched cards—not only plays checkers at a fairly high level, but automatically lifts itself by its own bootstraps, increasing the quality of its game. One program, the “rote learning” program diagramed above, involves some of the tactics used by human players.

The computer looks before it leaps. Before indicating its actual move, it is instructed to explore the board to a depth of three. That is, it selects one of its possible moves, examines all its opponent’s possible replies, and considers all its own counterreplies. Then it selects another one of its moves, investigates the replies and counter-replies, and so on. If seven moves are available for each position, a reasonable average, the machine does not decide on a move until it has explored a total of 343 positions. The diagram above indicates only a small part of the intricate exploratory “tree.” Each of the eight positions at the three-deep level has been evaluated from the standpoint of maximum advantage to be gained, assuming that the opponent always makes his best move.

Since the machine attempts to maximize the score and it assumes that the opponent will attempt to minimize the score, the line of play leading to the +11 position would naturally be selected if all the eight positions were new to the computer. But notice that the +9 position is not a new one. The machine, having stored the results of a previous three-deep analysis, recognizes the familiar position and benefits from past experience. In effect, it now looks six moves ahead and changes its evaluation accordingly—in this case making an adjustment from +9 to +12 and choosing this line of play.

During the course of several hundred games, the computer knows some positions to a depth of twenty-one moves, and operates on the basis of remembered analyses of many billions of positions. Using the rote-learning program, it may be adjusted to take fifteen to thirty seconds for a move, which is about twice as fast as a human player. When it was first learning, the machine lost to Samuel. Now it can beat him easily—a vivid proof that a machine, like a child, can become cleverer than its teacher.
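The three-deep exploration with rote memory translates directly into a minimax search with a cache. A minimal sketch (the toy game and scoring are invented; as a simplification mirroring the +9-to-+12 adjustment described above, a remembered value simply replaces a fresh analysis of the same position, where a fuller program would also track depth and whose turn it is):

```python
# Minimax to a fixed depth, with a rote memory of analyzed positions.
rote_memory = {}   # position -> value from a previous analysis

def minimax(position, depth, maximizing, moves, evaluate):
    if position in rote_memory:            # familiar position: reuse it
        return rote_memory[position]
    if depth == 0:
        return evaluate(position)          # static score at the frontier
    values = [minimax(child, depth - 1, not maximizing, moves, evaluate)
              for child in moves(position)]
    value = max(values) if maximizing else min(values)
    rote_memory[position] = value          # store for future games
    return value

# Toy game: a position is a number and seven moves are available, so a
# three-deep search from the root explores 7 * 7 * 7 = 343 positions.
toy_moves = lambda p: [p + i for i in range(1, 8)]
toy_eval = lambda p: p % 13                # arbitrary static score
print(minimax(0, 3, True, toy_moves, toy_eval))
```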
