ARTICLE BY MAX GUNTHER

“OH, MY GOD,” croaked a network-TV director in New York. He seemed to be strangling in his turtle-neck shirt. It was the evening of Election Day, 1966, and the director’s world was caving in. Here he was, on the air with the desperately important Election Night coverage, competing with the two enemy networks to see whose magnificently transistorized, fearfully fast electronic computer could predict the poll results soonest and best. Live coverage: tense-voiced, sweating announcers, papers flapping around, aura of unbearable suspense. The whole country watching. And what happens? The damned computer quits.

Oh, my God. The computer rooms disintegrated in panic. Engineers leaped with trembling screwdrivers at the machine’s intestines. The director stared fish-eyed at a mathematician. A key-punch girl yattered terrified questions at a programmer. Young Madison Avenue types rushed in and out, uttering shrill cries. And the computer just sat there.

The story of this ghastly evening has circulated quietly in the computer business ever since. You hear it in out-of-the-way bars and dim corners of cocktail parties, told in hoarse, quavering tones. It has never reached the public at large, for two reasons. One reason is obvious: Those concerned have sat on it. The second reason is less obvious and much more interesting.

When the initial panic subsided, the director freed some of his jammed synapses and lurched into action. He rounded up mathematicians, programmers, political experts, research girls and others. And he rounded up some hand-operated adding machines. “All right,” he said, “we’ll simplify the calculations and do the whole thing by hand. This may be my last night on TV; but, by God, I’ll go on the air with something!”

And so they perspired through the long, jangling night. The network’s election predictions appeared on the screen just like its competitors’. The director and his aides gulped coffee, clutched burning stomachs and smoked appalling numbers of cigarettes. They kept waiting for an ax to fall. Somebody was bound to notice something wrong sooner or later, they thought. The hand-cranked predictions couldn’t conceivably be as good as the computerized punditry of the competition. Maybe the hand-cranked answers would be totally wrong! Maybe the network would become the laughingstock of the nation! Maybe. . . . Oh, my God!

Well. As history now tells us, the entire poll-predicting razzmatazz was the laughingstock of the nation that November. None of the three networks was wronger than the other two. When the half-gutted director and his fellow conspirators skulked out of bed the next morning and focused smoldering eyes on their newspapers, they at last recognized the obscure little facts that had saved their professional lives: An electronic computer, no matter how big or how expensive or how gorgeously bejeweled with flashing lights or how thoroughly crammed with unpronounceable components, is no smarter than the men who use it. Its answers can never be better than the data and formulas that are programed into it. It has no magical insights of its own. Given inadequate data and inexact formulas, it will produce the same wrong answers as a man with an aching head and an adding machine. It will produce them in a more scientific-looking manner, that’s all.

Over the past ten years, it has been fashionable to call these great buzzing, clattering machines “brains.” Science-fiction writers and Japanese moviemakers have had a lovely time with the idea. Superintelligent machines take over the world! Squish people with deadly squish rays! Hypnotize nubile girls with horrible mind rays, baby! It’s all nonsense, of course. A computer is a machine like any other machine. It produces numbers on order. That’s all it can do.

Yet computers have been crowned with a halo of exaggerated glamor, and the TV election-predicting circus is a classic example. The Columbia Broadcasting System got into this peculiar business back in 1952, using a Remington Rand Univac. The Univac did well. In 1956, for instance, with 1/27 of the popular vote in at 9:15 p.m., it predicted that Dwight Eisenhower would win with 56 percent of the votes. His actual share turned out to be 57.4 percent, and everybody said, “My, my, what a clever machine!” The Univac certainly was a nicely wrought piece of engineering, one of the two or three fastest and most reliable then existing. But the credit for insight belonged to the political experts and mathematicians who told the Univac what to do. It was they, not the machine, who estimated that if Swamp-water County went Democratic by X percent, the odds were Y over Z that the rest of the state would go Democratic by X-plus-N percent. The Univac only did the routine arithmetic.

Which escaped attention. By the 1960s, the U. S. public had the idea that some kind of arcane, unknowable, hyper-human magic was soldered into computers—that a computerized answer was categorically better than a hand-cranked answer. As the TV networks and hundreds of other businesses realized, computers could be used to impress people. A poll prediction looked much more accurate on computer print-out paper than in human handwriting. But, as became clear at least to a few in 1966, it’s the input that counts. Honeywell programing expert Malcolm Smith says: “You feed guesswork into a computer, you get beautiful neat guesswork back out. The machine contains no Automatic Guess Rectifier or Factualizing Whatchamacallit.”

The fact is, computers are monumentally dense—“so literal,” says Smith, “so inflexible, so flat-footed dumb that it sometimes makes you want to burst into tears.” Smith knows, for he spends his life trying to make the great dimwits cogitate. To most people, however, computers are metallic magic, wonderful, tireless, emotionless, infallible brains that will finally solve mankind’s every problem. Electronic data processing (EDP) is the great fad of the 1960s and perhaps the costliest fad in history. Companies big and small, universities, Government agencies are tumbling over one another in a gigantic scramble for the benefits of EDP. They believe EDP represents, at last, instant solutions to problems they’ve wrestled with for decades: problems of information flow, bookkeeping, inventory control. And they’re hounded by dreams of status. To have a computer is “in.” Even if you’re a scruffy little company that nobody ever heard of, you must have a computer. Businessmen meeting at conventions like to drop phrases such as “My EDP manager told me” and “Our programing boys think,” and watch the crestfallen looks of uncomputerized listeners.

It’s a great business to be in. Computer makers shipped some 8000 machines in 1965 and 13,700 (3.75 billion dollars’ worth) in 1966. There are over 30,000 computers at work in the country today and there will be (depending on whose guess you listen to) as many as 100,000 by 1975. It’s a boom business in which young salesmen can buy Cadillacs and Porsches, while their college classmates in other professions are still eating canned beans in one-and-a-half-room flats. The salesmen don’t need any unusual qualifications to strike it rich: just a two- or three-year apprenticeship, a sincere hard handshake, a radiating awareness of belonging to an elite group and a good memory for a polysyllabic vocabulary. (You don’t sell machines; you sell “systems” or “systems concepts,” or “integrated functional solid-state logic systems concepts.” They seem to cost more that way.) The salesmen are all business. They sell machines on a severely pragmatic level, maybe exaggerating their products’ worth sometimes but, in general, avoiding any unbusinesslike talk about “superbrains.” Computer manufacturers as a whole, in fact, avoid such talk. To their credit, they have struggled from the beginning to keep things in perspective, have publicly winced when imaginative journalists compared computers with that odd gray mushy stuff inside the human skull. “Don’t call them brains! Please, please don’t call them brains!” shouted IBM scientist Dr. Arthur Samuel at a reporter once. “But listen,” said the reporter, “don’t they——”

“No, they don’t!” howled Samuel. “Whatever you’re going to say they do, they don’t!” (Samuel, now at Stanford University, had won unwanted fame for programing an IBM machine to play checkers.) “Computers are just extremely fast idiots,” says logician-mathematician Richard Bloch, a former Honeywell vice-president now working with Philadelphia’s Auerbach Corporation. Bloch, a lean, dark, ferociously energetic man who smokes cigars incessantly, first tangled with the machines in the early 1940s, when he helped run Harvard University’s historic Mark 1. “On second thought, ‘idiots’ is the wrong word. It suggests some innate thinking capacity gone wrong. Computers have no thinking capacity at all. They’re just big shiny machines. When will people learn that machines don’t think?”

Maybe never, though men like Bloch never tire of saying so. “A computer can multiply umpteen umpteen-digit numbers a second,” says Bloch, “but this is only blind manipulation of numbers, not thinking. To think about a problem, you’ve got to understand it. A computer never understands a problem.”

Arthur Samuel, for instance, tells about an early checkers-playing experiment. A British computer was given a simple set of rules in arithmetical form. Among other things, it was told that a king is worth three points, an uncrowned piece one point. It played an ordinary undistinguished game until its human opponent maneuvered a piece within one move of being crowned. Then the machine seemed to go mad.

Somewhere in its buzzing electrical innards, a chain of “reasoning” something like this took place: “Oh, my goodness! If my opponent gets his piece into the king’s row, he’ll gain a three-point king where he had only a one-point man before. In effect, this means I’ll lose two points. What’ll I do? (buzz, buzz . . .) Ah! I’ll sacrifice one of my uncrowned pieces. The rules say he must take my piece if I offer it, and this will force him to use his move and prevent him from getting his man crowned. I’ll have lost only one point instead of two!”

So the cunning computer sacrificed a man. The human player took it. The situation was now exactly the same as it had been before, so the computer slyly sacrificed another man. And so on. Piece by piece, the unthinking machine wiped itself out.

The computer had proved itself able to manipulate some of the arithmetical and logical formulas of checkers. But it had failed in one supremely important way. It simply didn’t understand the game. It didn’t grasp what no human novice ever needs to be told: that the basic object of a game is to win.
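The flawed “reasoning” in the checkers story can be sketched in a few lines of modern Python. This is a toy reconstruction, not the original program: the option names and the scoring function are invented for illustration, using only the point values the article gives.

```python
# Toy sketch of the checkers machine's evaluation (hypothetical names and logic).
KING, MAN = 3, 1  # point values the machine was given

def score_delta(option):
    # Material change for the machine under each option:
    if option == "allow_crowning":
        # Opponent's 1-point man becomes a 3-point king.
        return -(KING - MAN)   # net loss of 2 points
    if option == "sacrifice_man":
        # Give up one uncrowned man to force the opponent's move.
        return -MAN            # net loss of only 1 point

options = ["allow_crowning", "sacrifice_man"]
best = max(options, key=score_delta)
print(best)  # sacrifice_man
```

By pure point count the sacrifice always “wins,” so the position resets and the machine sacrifices again, exactly the self-destructive loop the article describes; nothing in the score says the object is to win.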

The trouble with computers is that they seem to be thinking. While cars, lawn mowers and other machines perform easily understood physical tasks, computers seem to be working with abstract thoughts. They aren’t, of course: they are only switching electric currents along preordained paths. But they produce answers to questions, and this gives them a weird brainlike quality.

People expect too much of them, as a result, and this seriously worries some scientists. The late Norbert Wiener, coiner of the term “cybernetics,” was particularly worried about the increasing use of computers in military decision making. Referring to machines that can manipulate the logical patterns of a game without understanding it, he once wrote that computers could win some future nuclear war “on points . . . at the cost of every interest we have at heart.” He conjured up a nightmarish vision of a giant computer printing out “war won: assignment completed . . .” and then shutting itself down, never to be used again, because there were no men left on earth to use it.

Secretary of Defense Robert S. McNamara has hinted at similar worries in the years-long argument about our famous (but so far nonexistent) Nike-X missile-defense system. Neither full-fledged hawk nor dove, McNamara favors a leisurely and limited building of Nike bases. He wants the U. S. to have some defense against a possible Russian or Chinese ballistic-missile attack, but he fears that an all-out missile-building program will involve us in a ghastly game of nuclear leapfrog with the Soviets—the two sides alternately jumping ahead of each other in countermeasures and countercountermeasures until the radioactive end. One trouble with missile and antimissile systems, as McNamara once expressed it to a group of reporters, is that “the bigger and more complex such systems get, the more remote grows man’s control of them.” In a nuclear-missile war, so many things would happen so fast, so much data would have to be interpreted in so limited a time that human brains could not possibly handle the job. The only answer for both the U. S. and Russia in a missile arms race would be increasing reliance on automatic control—in other words, on computers.

The last war might, in fact, be a war between computers. It would be a coldly efficient war, no doubt. A logical war: Score 70,000,000 deaths for my side, 60 megadeaths for your side; I’m ahead: your move, pal. How could we convey to the machines our totally illogical feelings about life and death? A country is made of people and money, and the people may properly be asked to give their lives for their country, yet a single human life is worth more than all the money in the world. Only the human brain is flexible enough to assimilate contradictions such as this without blowing a fuse.

A large modern computer can literally perform more arithmetic in an hour than can a football stadium full of human mathematicians in a lifetime, and it makes sense to enlist this lightning-fast electronic help in national defense. “But,” said Norbert Wiener shortly before he died in 1964, “let us always keep human minds in the decision loop somewhere, if only at the last ‘yes’ or ‘no.’ ”

The U.S. Ballistic Missile Early Warning System (BMEWS) is an example of the kind of setup that worried Wiener. Its radar eyes scan sky and space. Objects spotted up there are analyzed automatically to determine whether they are or aren’t enemy missiles. The calculations performed by computers—distance of the objects, direction, checkoff against known craft—take place in fractions of a second, far faster than human thought. It all works beautifully most of the time, and this has led some enthusiasts to suggest going one step further in automation. “If BMEWS can spot enemy missiles by itself,” they say, “why not hook up one more wire and have BMEWS launch our missiles?” But U. S. military chiefs have so far agreed with Norbert Wiener. There is a subtlety in the human brain that no computer seems likely ever to duplicate.

A few years ago, an officer was monitoring a BMEWS computer station in the Arctic. It was night. The rest of the staff was in bed. Suddenly, the computer exploded into action. Lights flashed, a printer chattered, tape reels whirled. The officer gaped, horrified. The machine was signaling a massive missile attack.

There are self-checking devices and “redundant” networks in the BMEWS, as in any other large computer system, and the officer had no logical reason to suspect a mechanical breakdown. There could be little doubt that the computer was actually reporting what its far-flung radar eyes saw. The officer’s orders were clear: In an event like this, he must send a message that would mobilize military installations all over the United States. Global war was only minutes away.

The officer hesitated. Questioned later, he couldn’t explain why. He could only say, “It didn’t feel right.” And he gambled time to wake other staff members. One of them dashed outdoors to look at the cold, clear, starlit Arctic sky, ran back indoors, examined the computer’s print-out, conferred with the others. Standing there in that antiseptic room full of shiny electronic equipment, the small knot of men made what may have been the most important decision in all the history of the world to date. They decided to wait.

They waited 30 awful seconds. The missile attack came no closer.

The officer’s feeling had been correct. This was no missile attack. Unaccountably, through a freakish tangle of circumstances that should never have happened and could not have been predicted and was not fully unraveled until weeks later, the computer and its eyes had locked onto something quite without menace: earth’s friendly companion and goddess of love, the moon, peacefully coming up over the horizon. If computers alone had handled the affair, the earth might now be a smoldering radioactive cinder. Because of a man and his slow, strange human brain and its unfathomable intuition, we are all still here.

When a computer makes a mistake, it’s likely to be a big one. In a situation where a man would stop and say, “Hey, something’s wrong!” the machine blindly rushes ahead because it lacks the man’s general awareness of what is and isn’t reasonable in that particular situation; such as the time when a New York bank computer, supposed to issue a man a dividend check for $162.40, blandly mailed him one for $1,624,000; or the time when a computer working for a publishing company shipped a Massachusetts reader six huge cartons neatly packed with several hundred copies of the same book; or the time when an IBM machine was constructing a mathematical “model” of a new Air Force bomber that would fly automatically a few dozen feet off the ground. Halfway through the figuring, it became apparent that the computer was solemnly guiding its imaginary aircraft along a course some five feet below the ground. (“Goddamn it,” roared General Curtis LeMay at one of the scientists, “I asked for an airplane, not a plow!”) Or the time when—— Well, everybody makes mistakes. In general, society is most worried about mistakes made by war computers in the BMEWS style, for the potential result of a mistake in this field is the end of the world. Fearful imaginings such as Fail-Safe have expressed this fear, and most U. S. military planners share the fear and are cautious in their approach to computers. But no such colossal danger haunts computer users in science and business; and in these two fields, the great dumb machines have been pushed willy-nilly into all kinds of applications—some more sensible than others. A New York management-consultant firm, McKinsey and Company, exhaustively studied computer installations in 27 big manufacturing companies four years ago and found that only nine were getting enough benefits to make the machines pay.

“Sometimes computers are used for prestige purposes, sometimes as a means of avoiding human responsibility,” says computer consultant John Diebold. Diebold, at 41, is a millionaire and an internationally sought-after expert on “automation” (a term he coined in the early 1950s). “Scientists and executives have discovered that it’s impressive to walk into a meeting with a ream of computer print-out under your arm. The print-out may be utter nonsense, but it looks good, looks exact, gives you that secure, infallible feeling. Later, if the decision you were supposed to make or the theory you were propounding turns out to be wrong, you simply blame the computer or the man who programed it for you.”

Professor David Johnson of the University of Washington is another well-known computer consultant who worries about what he calls “the mindless machines.” He is amused by the fact that his engineering students seek status by using IBM cards as bookmarks—just as, 20 years from now, they will seek it by buying IBM machines for their companies. He praises computers for their ability to manipulate and organize huge masses of data at huge speeds. But, “What the computer does,” he says, “is to allow us to believe in the myth of objectivity.” The computer “acts without excessive hesitation, as if it is sure, as if it knows . . .” A man who isn’t sure can often make people think he is, simply by coming up with a bundle of factual-looking print-out. He hides his own bad brainwork, says Professor Johnson, by “sprinkling it with eau de computer.”

Worse, Professor Johnson says, the growing availability of computers tends to make some researchers in scientific institutions avoid problems that don’t lend themselves to machine handling: Problems involving human values, problems of morality and aesthetics, subtle problems that can’t be translated into arithmetic and punched into those neat little snip-cornered cards—all these get left out of the calculations. The tendency is to wrench reality around and hammer it into a nice square shape so the inflexible machines can swallow it. Professor Johnson glumly cites the case of a computer-headed robot recently developed by a major agricultural-research center to pick tomatoes. It clanks along briskly, picking the juicy red fruits faster than a whole gang of human workers. The only trouble is, its blind, clumsy fingers break the tomatoes’ skins. The agricultural scientists are now trying to solve the problem. By making the robot more gentle? No, by developing thicker-skinned tomatoes.

“It simply isn’t accurate to call these machines ‘clever,’ ” says Robert Cheek, a chief of the Westinghouse Tele-Computer Center near Pittsburgh. This is one of the biggest computer installations in the world, designed to handle Westinghouse’s huge load of corporate clerical and accounting work, and it generates science-fictionish visions of an office of the (if the cliche may be pardoned) future. It’s an entire modernistic building housing almost nothing but computing equipment. Clerks and secretaries who once populated it have been crowded out, and now it smells like the inside of a new car. Bob Cheek, a slight, mild man, looks small and lonely as he paces among the square whining monsters; and it is tempting to imagine that the machines have subjugated him as their slave. Actually, he is little more awed by this great aggregation of computing power than by an electric toaster. “Artificial intelligence?” he will say in response to the question he has heard too often. And he will look at his machines, think of the man-hours required to make them work, take off his glasses, rub his weary eyes and chuckle sourly.

Logician Richard Bloch is an example of high human intelligence. He learned chess at the age of three and is now, among other things, a Life Master bridge player and a blackjack shark. He once tried to teach a Honeywell computer to play bridge. “The experiment gave me new respect for the human brain,” he recalls wryly. “The brain can act on insufficient, disorganized data. A bridge novice can start to play—badly but not stupidly—after an hour or so of mediocre instruction, in a half-drunken foursome. His brain makes generalizations on its own, reaches conclusions nobody ever told it to reach. It can absorb badly thought-out, unspecific instructions such as, ‘If your hand looks pretty good, bid such and such.’ What does ‘pretty good’ mean? The brain can feel it out. Now, you take a computer——”

Bloch pauses to chew moodily on his cigar. “A computer won’t move unless you tell it every single step it must take, in excruciating detail. It took me more than a hundred pages of densely packed programing before I could even get the damned machine to make the first bid. Then I gave up.”

The fact is, human thinking is so marvelous and mysterious a process that there is really not much serious hope of imitating it electronically—at least, not in this century. Nobody even knows how the brain works. Back in the late 1950s, during the first great soaring gush of enthusiasm over computers, journalists and some scientists were saying confidently that the brain works much like a very small, very complex digital computer—by means of X trillion tiny on-or-off switches. It remained only for IBM, Honeywell and Rem Rand to devise a monstrous mile-high machine with that many switches (and somehow figure out a way to supply its enormous power needs and somehow cool it so it wouldn’t melt itself), and we’d have a full-fledged brain. But this was only another case of wrenching reality around to fit machinery. There is no reliable evidence that the brain works like an EDP machine. In fact, evidence is now growing that the basic components of human thought may be fantastically complicated molecules of RNA (ribonucleic acid), which seem to store and process information by means of a little-understood four-letter “code.”

The human brain is uncanny. It programs itself. It asks itself questions and then tells itself how to answer them. It steps outside itself and looks back inside. It wonders what “thinking” is.

No computer ever wondered about anything. “It’s the speed of computers that gives the false impression that they’re thinking,” says Reed Roberts, an automation expert who works for a New York management-consultant firm, Robert Sibson Associates. “Once a man has told a machine how to process a set of data, the machine will do the job faster than the man’s brain could; so fast, in fact, that you’re tempted to suspect the machine has worked out short cuts on its own. It hasn’t. It has done the job in precisely the way it was told, showing no originality whatever.”

For instance, you can program a machine to add the digits of each number from 1 to 10,000 and name every number whose digits add up to 9 or a multiple of 9. The machine will print out a list instantly—9, 18, 27—acting as though it has gone beyond its instructions and cleverly figured out a short cut. This is the way a man would tackle the problem. Instead of routinely adding the digits of every number from 1 to 10,000, he’d look for a formula. His brain would generalize: “Every time you multiply a number by 9, the result is a number whose digits add up to 9 or a multiple of 9. Therefore, I can do my assignment quickly just by listing the multiples of 9 and ignoring all other numbers.” Is this what the computer did? No. With blinding speed but monumental stupidity, it laboriously tried every number, from 1 to 10,000, one by one.
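The contrast can be sketched in a few lines of modern Python (hypothetical code, not any program from the period). The brute-force loop is what the machine actually does; the one-line short cut is the human generalization; both produce the same list.

```python
# What the machine does: laboriously add the digits of every number.
def digit_sum(n):
    return sum(int(d) for d in str(n))

brute = [n for n in range(1, 10001) if digit_sum(n) % 9 == 0]

# The human short cut: a number's digits sum to a multiple of 9
# exactly when the number itself is a multiple of 9.
shortcut = list(range(9, 10001, 9))

print(brute == shortcut)  # True: same answer, vastly different amounts of work
```

The machine gets the right list either way; the point is that it never discovers the second method unless a man programs it in.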

In this example lies one of the main differences between thought and EDP. The human brain collects specific bits of data and makes generalizations out of them, organizes them into patterns. EDP works the other way around. A human programmer starts the machine out by giving it generalizations—problem-solving methods or “algorithms”—and the machine blindly applies these to specific data.

It is by no means easy to program a computer, and one of the great problems of the 1960s is a severe shortage of people who know how to do it. There are now some 150,000 professional programmers in the country, and computer owners are pitifully crying for at least 75,000 more. One estimate is that 500,000, all told, will be needed by the early 1970s.

The shortage is understandable. Computer programing is self-inflicted torture. The problem is to make a mindless machine behave rationally. Before you can tell the machine how to solve a problem, you must first figure out how your own brain solves it—every step, every detail. You watch your brain as it effortlessly snakes its way along some line of reasoning that loops back through itself, and then you try to draw a diagram showing how your brain did it, and you discover that your brain couldn’t possibly have done it—yet you know it did. And there sits the computer. If you can’t explain to yourself, how are you ever going to explain to it? Aptitude tests for would-be programmers contain questions that begin, “If John is three years older than Mary would have been if she were three and a half times as old as John was when. . . .” This is the kind of human thought that must precede the switching on of a computer. The machine can’t add two plus two unless there are clever, patient human brains to guide it. And even then it can’t: All it can do is add one and one and one and one and come up with the answer—instantaneously, of course. No computer can multiply; all it can do is add, by ones, too fast for human conception. Nor can any computer divide; it can only subtract, again by ones. Feed it problems in square roots, cube roots, prime numbers, complex mathematical computations with mile-long formulas—it can solve them all with incredible rapidity. How? Essentially, by adding or subtracting one, as required, as often as required, to come up at once with an accurate answer it might take a team of mathematicians a thousand years to obtain—and another thousand to check for accuracy. It never invents its own mathematical short cuts. If it uses short cuts, they must be invented and programed into it by human thinkers.
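The add-by-ones arithmetic described above can be caricatured in Python. This is a loose sketch of the idea, not of any real machine's circuitry: multiplication as nothing but adding one, division as nothing but repeated subtraction.

```python
# Multiplication by literally adding one, again and again.
def multiply(a, b):
    total = 0
    for _ in range(a):
        for _ in range(b):
            total += 1   # one and one and one and one ...
    return total

# Division by repeated subtraction (quotient only; remainder ignored here).
def divide(a, b):
    count = 0
    while a >= b:
        a -= b
        count += 1
    return count

print(multiply(6, 7), divide(42, 6))  # 42 7
```

Real hardware takes cleverer short cuts, but those short cuts, too, are put there by human designers, which is exactly the article's point.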

A computer’s only mental process is the ability to distinguish between is and isn’t—the presence or absence of an electric current, the this way or that way of a magnetic field. In terms of human thought, this kind of distinction can be conceived as one and zero, yes and no. The machine can be made to perform binary arithmetic, which has a radix (base) of 2 instead of our familiar 10 and which is expressed with only two digits, 1 and 0. By stringing together yeses and noes in appropriate patterns, the machine can also be made to manipulate logical concepts such as “and,” “or,” “except when,” “only when,” and so on.
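The is/isn't distinction maps directly onto binary digits, and a few lines of Python (illustrative only) show the two-digit arithmetic and the yes/no logic the paragraph describes.

```python
# Thirteen written with only the digits 1 and 0: 8 + 4 + 0 + 1.
print(bin(13))         # 0b1101
print(int("1101", 2))  # 13

# "and" and "or" built from the same yes/no distinctions:
yes, no = 1, 0
print(yes & no)  # 0  (yes AND no is no)
print(yes | no)  # 1  (yes OR no is yes)
```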

But it won’t manipulate anything unless a man tells it how. Honeywell, whose aggressive EDP division has recently risen to become the nation’s second-biggest computer maker, conducts a monthly programing seminar in a Boston suburb for top executives of its customer companies to help them understand what their EDP boys are gibbering about. The executives learn how to draw a “flow chart,” agonizingly breaking down a problem-solving method into its smallest steps. They translate this flow chart into a set of instructions in a special, rigid, stilted English. (OPEN INPUT OMAST INVCRD. OPEN OUTPUT NMAST INVLST.) They watch a girl type out this semi-English version on a key-punch machine, which codes words and numbers in the form of holes punched into cards. These cards are fed into the computer, and another translation takes place. A canned “compiler” program (usually fed into the machine from a magnetic-tape unit) acts as an interpreter, translates the semi-English into logical statements in binary arithmetic. The computer finally does what the novice programmers have told it to do—if they’ve told it in the right way. The machine understands absolutely no deviations from its rigid language. Leave out so much as a comma, and it will either stop dead or go haywire. (At Cape Kennedy recently, a computer-guided rocket headed for Brazil instead of outer space because a programmer had left out a hyphen.) Finally, the executives head back to Boston’s Logan International Airport, soothe their tired brains with ethyl alcohol with an olive or a twist, and morosely agree that nobody is so intractably, so maddeningly dense as a computer.

But they are glad to have learned. They’ve made a start toward finding out what goes on inside those strange square machines in the plant basement; and with that knowledge, they’ll have a defense against a Machiavellian new kind of holdup that their Honeywell instructors have warned them about. It has happened more and more often and recently happened in one of the country’s biggest publishing houses. Almost all the company’s clerical work was computerized: inventory, billing, bookkeeping, payroll. With the corporate neurons thus inextricably tangled into the computer, the chief programmer went to the president and smilingly demanded that his salary be doubled. The president fired him on the spot—and shortly afterward realized the full enormity of what he had done. Nobody in the company, nobody in the whole world except the chief programmer knew what went on in the computer or how to make it do its work. The programs were too complex—and the computer, having no intelligence, could offer no explanations. As the horrified president now discovered, it was not true (as he had boasted) that a marvelous machine was running his company’s paperwork. The cleverness hadn’t been in the machine but in the brain of a man. With the man gone, the machine was just a pile of cold metal. The company nearly foundered in the ensuing year while struggling to unravel the mess.

Computers are that way: They absorb credit for human cleverness. Often a computerized operation will seem to go much more smoothly than it did in the old eyeshade-and-ledger days, and the feeling will grow that the machine itself smoothed things out. What has really happened, however, is that the availability of the computer has forced human programmers to think logically about the operation and make it straightforward enough for the machine to handle. Professor David Johnson recalls a time when a company called him in to program an accounting operation for a computer. In previous years, this operation had taken two men ten months to perform by hand and brain. Johnson drew his flow charts, saw ways of simplifying, finally came up with an operation so organized that one man could do it in two days with a desk calculator. The company promptly abandoned its dreams of EDP—but if it had used a computer as planned, the machine rather than the programmer would doubtless have been showered with praise for the new simplicity.

Computers have been given credit for many things they haven’t done. Even more, they’ve been given credit for things they were going to do in the future. The loudest crescendo of computer prognostications occurred in the late 1950s. Future-gazers went wild with enthusiasm. Soon, they said, computers would translate languages, write superb music, run libraries of information, become chess champions. Ah, those fantastic machines! Unfortunately, the whole history of computers—going all the way back to the pioneering Charles Babbage in the 19th Century—has been a series of manic-depressive cycles: early wild enthusiasm, followed by unexpected difficulties, followed by puzzled disappointment and silence.

Music? An amiable professor at the University of Illinois, Lejaren Hiller, Jr., has programed a machine to write music. One of the machine’s compositions is the Illiac Suite. Says Hiller: “Critics have found it—er—interesting.”

Chess? A computer in Russia is now engaged in a long-distance match with one at Stanford University in California. The match began awkwardly, with both machines making what for humans would be odd mistakes. Everybody concerned now seems somewhat embarrassed. Stanford’s Professor of Computer Science John McCarthy, when asked recently how the game was going, said: “I have decided to put off any further interviews until the match is over.”

Translate languages? There’s something about human speech that computers just don’t seem to get. It isn’t rigid or formal enough; it’s too subtle, too idiomatic. An IBM computer once translated “out of sight, out of mind” from English to Russian and back to English. The phrase returned to English as “blind, insane.”

Libraries of information? “We don’t know a good enough way to make a computer look up facts,” says Honeywell programing researcher Roger Bender. He leans forward abruptly and jabs a finger at you. “Who wrote Ivanhoe?” he asks. You say, “Walter Scott.” Bender says, “How did you know? Did you laboriously sort through books in your memory until you came to Ivanhoe? No. And how did you even know it was a book? You made the connections instantly, and we don’t know how.”

Superbrains? Dr. Hubert L. Dreyfus, professor of philosophy at the Massachusetts Institute of Technology, recently published a paper called “Alchemy and Artificial Intelligence.” In it, he expresses amusement at the prognosticators’ claim that today’s computers are “first steps” toward an ultimate smarter-than-human brain. The claim, he says, makes him think of a man climbing a tree, shouting, “Hey, look at me, I’m taking the first steps toward reaching the moon!” In fact, says Professor Dreyfus, computers don’t and can’t approximate human intelligence. They aren’t even in the same league.

Honeywell’s Roger Bender agrees. “We once had a situation where we wanted a machine to take a long list of numbers and find the highest number,” he recalls. “Now, wouldn’t that seem to you like an easy problem? Kids in first grade do it. Nobody has to tell them how. You just hand them a list and they look at the numbers and pick the highest. Of all the simple-minded—— Well, it just shows what you have to go through with computers.”

In this case, a programmer tried to figure out how he himself would tackle such a problem. He told the machine: “Start with the first number and go down the list until you come to a number that’s higher. Store that number in memory. Continue until you find a still higher number,” and so on. The last number stored would obviously (obviously to a man, that is) be the highest number on the list.

The machine imbibed its instructions, hummed for a while and stopped. It produced no answer.

“It was baffling,” says Bender. “Nobody knew what the trouble was, until someone happened to glance down the list by eye. Then the problem became apparent. By great bad luck, it turned out, the highest number on the list was the first number. The computer simply couldn’t figure out what to do about it.”
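[Editor’s note: the blind spot Bender describes is easy to reproduce. Here is a sketch in modern Python, not the machine code of the era, of both the approach as recounted and the standard fix of storing the first number before scanning.]

```python
def buggy_max(numbers):
    """The approach the article describes: scan for a number higher
    than the current one and store it. If the first number is
    already the highest, nothing is ever stored."""
    stored = None
    current = numbers[0]
    for n in numbers[1:]:
        if n > current:
            stored = n   # remember the new highest-so-far
            current = n
    return stored        # None when the first number was the highest

def fixed_max(numbers):
    """The fix: start by storing the first number itself."""
    stored = numbers[0]
    for n in numbers[1:]:
        if n > stored:
            stored = n
    return stored
```

When the first number is the highest, the buggy version never stores anything; initializing the stored value with the first element removes the blind spot.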

Consultant John Diebold says: “Computers are enormously useful as long as you can predict in advance what the problems are going to be. But when something unexpected happens, the only computer in the world that’s going to do you any good is the funny little one beneath your scalp.”

  1. slim says: August 19, 2009 11:51 pm

    Just for kicks, I did a Google on “who wrote Ivanhoe”. The third result was “Sir Walter Scott wrote Ivanhoe”.

  2. Mark Pack says: August 20, 2009 3:09 am

    “Diebold” is an unusual name, so I wonder of that John Diebold is of Diebold Election Systems, the often controversial firm that now supplies electronic voting systems in the US and elsewhere?

    Ironically one of the frequent criticisms of them is how they keep the workings of their systems very secret – and so humans don’t have the information to spot if the computers have done something wrong.

  3. machine906 says: August 20, 2009 8:25 am

    This article shows how far we have come in two directions: 1. Computers have evolved in a good way past just mindless machines with flashing lights, spinning reels, and clunking gears, and, 2. We are WAY too dependent on these digital creations that permeate our lives, even in 2009.

    I look forward to reading an article about 2009 in 2049 and laughing about the state of society, present or past.

  4. Yaos says: August 20, 2009 9:55 am

    Here’s a funny line:
    “We once had a situation where we wanted a machine to take a long list of numbers and find the highest number,” he recalls. “Now, wouldn’t that seem to you like an easy problem? Kids in first grade do it. Nobody has to tell them how.”

    Nobody has to tell first graders how to count, because they learned how to count in kindergarten.

    It’s pretty neat how far computers have come, they are capable of doing what was not possible when the article was written and they claimed it would never be possible, and finding the highest number now is very easy what with modern languages and everything.

  5. sly says: August 20, 2009 10:11 am

    Computers haven’t evolved. They’ve become faster, but that’s about it. They are still mindless machines.

    Finding the highest number is still the same algorithm, I mean you can sort a list & pull the last number or whatever, but realistically the bug that the programmer ran across can easily happen these days.

    Things haven’t changed as much as you think.

  6. Cyrion says: August 20, 2009 2:08 pm

    This week I’m going to “sprinkling something with eau de computer”!

  7. Firebrand38 says: August 20, 2009 3:02 pm

    “Computers haven’t evolved?” That’s arguably one of the most thoughtless statements I’ve ever heard on the subject of computers. What’s your definition of evolved?

  8. Eric M. Berg says: August 20, 2009 7:15 pm

    In “Fearful imaginings such as hail-Safe have expressed this fear”, “hail-Safe” should be “Fail-Safe”, referring to the 1962 novel (and subsequent film) about an accidental nuclear war — see “” for details.

  9. Mike says: August 20, 2009 8:09 pm

    Just for kicks, I used Babel Fish to translate “out of sight, out of mind” to Russian and back to English. The result was “from the eyes down, from the heart there”. I think we still have a ways to go on that one.

  10. Kruk says: August 20, 2009 8:19 pm

    I think “hail-Safe” was when they tried seeding thunderclouds with sodium iodide to try to dissipate them before the hail started.

  11. Randy says: August 20, 2009 8:21 pm

    Computers haven’t evolved. They are the product of intelligent design. 🙂

  12. Firebrand38 says: August 20, 2009 9:41 pm

    Randy: touche’

  13. Rezmason says: August 20, 2009 10:23 pm

    I believe we have a related but worse problem today with society’s perspective of computing. The personal computing revolution has resulted in the common belief that computers are only complex appliances that offer us activities to do to improve our lives, and cannot enhance the activities we do on demand. This belief is extremely inaccurate.

    Computing can be so much more practical than it currently is. If we held ourselves to a higher standard– if our society encouraged from its citizens a deeper understanding of and more purposeful activities that apply computation– then we would all be much further ahead than we currently are.

  14. Torgo says: August 20, 2009 11:01 pm

    Computers haven’t evolved. They are simply smaller and faster.

    Just as safe or dangerous as they ever were.

  15. Randy says: August 20, 2009 11:09 pm
  16. Firebrand38 says: August 20, 2009 11:38 pm

    Computer hardware and software has improved to a degree not even imagined when I was a kid. “Simply smaller and faster”, and that doesn’t qualify as an evolving design how?

    “Just as safe or dangerous”? Well yeah, but you won’t get your neck tie caught in the punch card reader. The design evolved away from that peripheral.

    And as for “If we held ourselves to a higher standard– if our society encouraged from its citizens a deeper understanding of and more purposeful activities that apply computation– then we would all be much further ahead than we currently are.”

    Nice sentiment, execution might be a bit problematic. That’s a typical useless Utopian philosophy no better than a recipe for barbecued whale (Step 1…Catch a whale).

  17. Gregly says: August 21, 2009 5:08 pm

    I should point out that if you type “Who wrote Ivanhoe?” into Wolfram Alpha today, it will correctly return “author = Sir Walter Scott”. So maybe we’ve come a small way after all. 🙂

  18. Equinox says: August 21, 2009 5:36 pm

    Go to .
    Input “who wrote ivanhoe”.

    Now, as others wrote before, it is not that computers have become “smarter” – it is just that programmers (or “EDP Boys” as they were called) have become more proficient at using the resources at hand, along with the fact that the usable computing power has grown (and, per Moore’s Law, still is growing) exponentially.

  19. B22 says: August 22, 2009 8:00 am

    When computers do exactly what you tell them to do, exactly the way you tell them to do it, this shows that computers are “stupid” according to the author of this article. Look at it from another angle, and this so-called “stupidity” is “reliability and flexibility”.

    These days, computers do all the things he said they can’t do (play excellent chess, bridge, draughts, translate languages (Google translate gets that “out of sight, out of mind” translation right), compose music, find the author of Ivanhoe, reliably tell the difference between the moon and a missile, work with incomplete and disorganized data, program themselves, generalize).

    They don’t “Squish people with deadly squish rays! Hypnotize nubile girls with horrible mind rays, baby!”, but they do fly around landlocked Central Asian countries, strafing wedding parties (and the occasional terrorist leader).

    So, the author of this article has been quite unfair on the science fiction writers and journalists and scientists whom he accuses of nonsense and “soaring” enthusiasm. They were more right than he was.

  20. Firebrand38 says: August 22, 2009 10:05 am

    B22: Spare me the hyperbole. Referring to Makr al-Deeb in 2004 as “wedding parties” (plural) is hardly to the point. And yeah, terrorist leaders have been killed more often than civilians despite your snide comment.

  21. B22 says: August 22, 2009 3:37 pm

    Firebrand38 – you’re well-named. I was only injecting a note of flippancy.

  22. AshleyZ says: August 23, 2009 6:44 am

    “Just for kicks, I used Babel Fish to translate “out of sight, out of mind” to Russian and back to English. The result was “from the eyes down, from the heart there”. I think we still have a ways to go on that one.”

    Babel Fish uses increasingly dated algorithms – Google Language Tools uses more sophisticated statistical translation, and gets the correct answer.

  23. John Savard says: August 23, 2009 12:34 pm

    Well, while the article makes a good point about computers just being fast calculating machines, he did overstate his argument at one point. Some computers can and do multiply and divide, using the same basic methods that people do with pencil and paper, or even more efficient methods. Those methods are designed into the computer by people, it is true. But if computers added by adding one over and over, and so on, they would be considerably slower than they are. Instead, they add any two numbers, and they multiply by adding and shifting over, like people do – or using even faster methods.
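    [Editor’s note: the shift-and-add method this comment describes can be sketched in a few lines of Python. The function name is my own; real hardware does this with adders and shift registers rather than a loop.]

    ```python
    def shift_and_add_multiply(a, b):
        """Multiply two non-negative integers by shifting and adding,
        roughly as binary hardware does: for each set bit in b, add a
        correspondingly shifted copy of a to the running total."""
        product = 0
        while b:
            if b & 1:        # lowest remaining bit of b is 1
                product += a
            a <<= 1          # shift a one binary place left
            b >>= 1          # drop the bit of b just handled
        return product
    ```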

  24. Colin says: August 24, 2009 2:17 pm

    Perhaps out of sight out of mind works in Russian, but on, which translates to Japanese and back to English on Google, it returns:

    Vision Center

  25. Lee Ann says: August 30, 2009 5:52 pm

    I’m not sure if Yaos is being serious or not – modern language or no, you *still* have to write a program to do something like find the largest number in a list, or if you don’t, it’s because the designer of your program’s library did. The problem in the example given was a logic flaw in the *program*, not an error in the computer, and that sort of bug is still completely likely today.

Submit comment

You must be logged in to post a comment.