The Chip (Oct, 1982)

This is an excellent, very long, 1982 National Geographic overview of all aspects of the microchip. It covers advances in silicon tech, how chips are produced, their uses and their effect on society. Topics include robots, hackers, digital watches, computers in the classroom, AI, early navigation systems, online news and shopping, telecommuting and more. Plus a ton of great pictures. Check out this rather prescient quote about online privacy:

“With personal computers and two-way TV,” he said, “we’ll create a wealth of personal information and scarcely notice it leaving the house. We’ll bank at home, hook up to electronic security systems, and connect to automatic climate controllers. The TV will know what X-rated movies we watch. There will be tremendous incentive to record this information for market research or sale.”

ELECTRONIC MINI-MARVEL THAT IS CHANGING YOUR LIFE

The Chip

By ALLEN A. BORAIKO, NATIONAL GEOGRAPHIC EDITORIAL STAFF
Photographs by CHARLES O’REAR

IT SEEMS TRIFLING, barely the size of a newborn’s thumbnail and little thicker. The puff of air that extinguishes a candle would send it flying. In bright light it shimmers, but only with the fleeting iridescence of a soap bubble. It has a backbone of silicon, an ingredient of common beach sand, yet is less durable than a fragile glass sea sponge, largely made of the same material.

Still, less tangible things have given their names to an age, and the silver-gray fleck of silicon called the chip has ample power to create a new one. At its simplest the chip is electronic circuitry: Patterned in and on its silicon base are minuscule switches, joined by “wires” etched from exquisitely thin films of metal. Under a microscope the chip’s intricate terrain often looks uncannily like the streets, plazas, and buildings of a great metropolis, viewed from miles up.

Even more incongruous, a silicon flake a quarter inch on a side can hold a million electronic components, ten times more than 30-ton ENIAC, the world’s first electronic digital computer. ENIAC was dedicated in 1946, the ancestor of today’s computers that calculate and store information, using memory and logic chips. But ENIAC’s most spectacular successor is the microprocessor—a “computer on a chip.” This prodigy is 30,000 times as cheap as ENIAC, draws the power of a night-light instead of a hundred lighthouses, and in some versions performs a million calculations a second, 200 times as many as ENIAC ever could.

The chip would be extraordinary enough if it were only low-cost, compact electronics, but its ability to embody logic and memory also gives it the essence of human intellect. So, like the mind, the chip has virtually infinite application— and much the same potential to alter life fundamentally.

A microprocessor, for example, can endow a machine with decision-making ability, memory for instructions, and self-adjusting controls. In cash registers the miniature computer on a chip totals bills, posts sales, and updates inventories. In pacemakers it times heartbeats. It sets thermostats, tunes radios, pumps gas, controls car engines. Robots rely on it; so do scientific instruments such as gene synthesizers. Rather than simply slave harder than humans, machines can now work nearly as flexibly and intelligently, to begin priming a surge in productivity we may one day recall as the second industrial revolution.

The chip’s condensed brainpower nourishes another phenomenon—personal computers. Last year more than 800,000 were sold, most to people who do not know how these first cousins of the pocket calculator work, nor need to know, because the chip makes them increasingly easy to use.

Piggybacking on personal computers are dozens of new services. Exotic now, computer conveniences such as electronic mail and newspapers and home banking and shopping could in time become as universal as telephone service.

Questions arise. If we can screen out all but the news that interests us most, will we grow parochial? If we shop and pay bills from home and carry less cash, will streets be safer? Must employees who work at home with company computers be electronically monitored? Will children stimulated by computers grow up to find effective cures for poverty, hunger, and war?

These questions were unimaginable in 1959, birth year of the chip, but in a decade they may be current. That would be no surprise, so broadly and swiftly has the chip penetrated our lives.

Recently I spent months gauging the progress and impact of the chip. In laboratories, scientists showed me that the chip, though complex, is understandable. At home a personal computer alternately enraged and enlightened me. And I learned that the chip’s every advance incubates another, and that one another, and another.

Eventually one billion transistors, or electronic switches, may crowd a single chip, 1,000 times more than possible today. A memory chip of such complexity could store the text of 200 long novels.

Chips refrigerated in ultracold liquid helium make feasible a supercomputer vastly more powerful than any yet built, with a central core as compact as a grapefruit.

Naval scientists envision semi-intelligent and autonomous robots that can pilot ships to evade enemy fire as well as rescue sailors and recover sensitive code books from sunken submarines.

Borrowing techniques from drug manufacturers, chemists hope to grow, not build, future computer circuits.

Farfetched? Then consider these coming innovations in light of some breakthroughs already achieved.

Unperfected but promising microelectronics implanted beneath the scalp can restore very rudimentary sight and hearing to some of the blind and deaf.

Robots that see, feel, and make simple judgments are entering factories, where less capable robots have been “reproducing” themselves for some time.

Within limits, computers can talk, heed our speech, or read. Some diagnose illness, model molecules, or prospect minerals with the reasoning and success of expert human doctors, chemists, and geologists.

The shock waves of the microelectronics explosion expand too far, in too many directions, to tally them all. But a few of the deeper tremors, recorded here, yield a sort of seismic profile of what lies beneath and beyond this first instant in the age of the chip.

“I WISH we’d had this chip when we were designing it.” Dana Seccombe taps the tiny device in the palm of his hand as tenderly as if it were a rare seed, germ of some plant bred to fruit with money. Just so for his employer, the Hewlett-Packard Company, propagator of computers, calculators, and other electronic cash crops.

Dana, head of chip design at an HP plant in Colorado, passes me the chip. It’s a microprocessor and quite a handful, so to speak: 450,000 transistors, laced together by 20 meters of vapor-deposited tungsten “wire.” Mapping every street and freeway of Los Angeles on the head of a pin would be an equivalent feat—and no harder. That is, in fact, the gist of Dana’s complaint.

Every year for more than two decades now, engineers have roughly doubled the number of components on a chip, mainly by shrinking them. They began with soldered wires as thin as cat whiskers. These projected from silicon or germanium crystals sealed in pea-size metal cans. What resembled a three-legged stool was actually a simple electronic switch—a transistor.

The transistor was invented in 1947 at Bell Telephone Laboratories to replace the bulky glass tubes that controlled and amplified electric currents in early TVs and computers such as ENIAC. These vacuum tubes were energy hungry, gave off far more heat than transistors, and frequently burned out.

But the transistor too had a flaw. It often broke off circuit boards, plastic cards embossed with flat, snakelike wires. The remedy, hit on independently by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor: Make the crystal in a transistor serve as its own circuit board. When the snake ate its tail, the integrated circuit—since dubbed the chip—was born.

Today engineers call it the crude oil of electronics, attesting that world dominance in technology rests substantially on the chip. It has strategic virtues indeed.

The chip lacks soldered wires, reducing failure points and making it ultrareliable. (A vacuum-tube computer as complex as Hewlett-Packard’s microprocessor would fail in seconds.) Since the chip is tiny, electrical signals take short paths from switch to switch, saving time. Further, a chip carrying 1,000 transistors does more work, faster, than one with ten—at about the same cost.

Lured by this fairy-tale performance and economy, engineers raced to jam transistors on the chip: 5,000 produce a digital watch; 20,000 a pocket calculator; 100,000 a small computer equal to older ones as large as rooms. At 100,000 transistors, you enter “very large-scale integration,” or VLSI. The chip, engineers joke, comes in grades like olives—large, jumbo, and colossal.

CONTEMPLATING the Hewlett-Packard chip—colossal grade— Dana says that to grasp its complexity I must scan its floor plan. He unfurls a roll of drafting paper. Four by eight feet, shingled edge to edge with thousands of squares and rectangles neatly inked in brown and black and green and blue, it’s but one section of the chip.

“How wide a section, Dana?”

He thinks in microns; one equals thirty-nine millionths of an inch. “Fifteen hundred microns.” That’s the width of 20 hairs from my head; to spread out the rest of the chip’s design would take a gymnasium.

Dana traces a red line from a black square to a green rectangle, symbols denoting transistors and their precisely mated connections. “It takes 100 calculations to position one of these rectangles properly. We mapped two million of them,” he adds. Not so odd, his wish for the computing power of a new chip even while still designing it.

Indirectly but obligingly, the chip goes to its own rescue in the guise of computer-aided design, or CAD. A computer built of earlier chips can store diagrams of transistors, rules on how to link them, and data on the intended function of new chips, information that enables the computer to design a chip circuit, display it on a screen, simulate its operation, and report its performance.

Besides plotting transistors, computers also route the interconnections among them. But no computer can yet calculate, in reasonable time, the optimum way to wire a VLSI chip: Possible wiring patterns number in the millions, so complex have chip designs become. Humans must still tediously debug them—hunt for errors, or bugs—and with video screens and attached electronic pens reroute connections or regroup transistors like building blocks.
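To get a feel for why optimum wiring is out of reach, here is a minimal sketch in modern Python (not any vendor's CAD tool; the net names and pin coordinates are invented): the number of possible routing orders grows factorially, so real routers lean on heuristics such as routing the shortest connections first, and designers still inspect and debug the result.

```python
# A minimal sketch, not any real CAD tool: why exhaustive wire routing is
# hopeless, and what a crude heuristic looks like. Net names and pin
# coordinates are invented for illustration.
from math import factorial

# Hypothetical two-pin "nets" (connections) on a small chip grid.
nets = {
    "clk":  ((0, 0), (5, 7)),
    "data": ((1, 4), (6, 2)),
    "addr": ((3, 3), (8, 8)),
    "ctrl": ((2, 6), (7, 1)),
}

def wire_length(pins):
    """Manhattan length of one two-pin connection."""
    (x1, y1), (x2, y2) = pins
    return abs(x1 - x2) + abs(y1 - y2)

# The number of orders in which n nets could be routed grows factorially,
# which is why no computer can try them all for a VLSI chip.
for n in (4, 10, 100):
    print(f"{n} nets -> {factorial(n):,} possible routing orders")

# A crude heuristic: route the shortest connections first. Real routers are
# far more sophisticated, and humans still debug the result by hand.
order = sorted(nets, key=lambda name: wire_length(nets[name]))
print("greedy routing order:", order)
```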

By 1990 ambitious engineers expect to squeeze ten million transistors on the chip, enlarging it slightly and making it as complex as a city nearly 1,000 miles square. How do you build a megalopolis almost twice the size of Alaska?

Manufacturing any chip is a painstaking, protracted process. Just south of San Francisco Bay, at the Intel Corporation in Silicon Valley, I found that it can take as long as three months to make a microprocessor. “Some magic’s involved,” engineer Ralph Leftwich said as I pulled on a baggy white nylon jump suit, cap, and bootees. Voilà! I was a conjurer’s illusion in my bunny suit, required fashion in the “clean rooms” where Intel pioneered the microprocessor in 1971 and where filtered air holds fewer than 100 particles of dust or other contaminants per cubic foot. To a microscopic chip circuit, motes are as menacing as boulders.

In one clean room, trays held razor-thin silicon wafers, polished mirror smooth and racked like diminutive records. They were slices of a sausagelike crystal grown from molten silicon so pure that if contaminants were redheads, there would be but 15 of them on earth. Such crystals yield wafers as large as five inches across; each wafer becomes the base of hundreds of chips.

Two things make silicon, a semiconductor, the favored material for chips. Its ability to carry electricity can be precisely altered by ingraining its crystal structure with measured amounts of chemical impurities, or dopants. And silicon surfaces can be conveniently oxidized—rusted, in effect—into an electrically insulating glaze.

“Chips are sandwiches,” Ralph said as I peered at a silvery oxidized wafer. He explained that techniques reminiscent of silk screening would stack and stencil the wafer with layers of insulation and crystal, the crystal doped with infinitesimal pockets of impurities laid out in some 300 identical chip-scale circuit patterns (pages 426-7).

“The impurities form conducting areas that overlap from top to bottom of a wafer. By etching ‘windows’ between them, we create transistors.” At the end, with as many as 12 detailed levels demanding interconnection, a wafer receives an aluminum coating and a final etch that leaves conducting filaments invisible to the naked eye.

A new chip’s ultrafine “wiring” offers so little entree to its transistors that they defy individual quality testing. But their collective performance is judged as needlelike probes jab at metal pads on the rim of each chip on a wafer, running 10,000 electrical checks a second. Sound chips are diced from wafers by a diamond saw, then bonded and wired to gold frames and sealed in small ceramic cases propped on stubby plug-in prongs. Packaged, a wafer’s worth of chips looks like a swarm of black caterpillars.

THIS ELECTRONIC SPECIES shelters by the dozens in a personal computer, and in their cocoon they might metamorphose into a journalist’s tool as useful as pen or notebook.

So I fancy at home one day, unpacking a personal computer the size of a portable typewriter. And “floppy discs”: plastic platters about the diameter of 45-rpm records. Like cassette tapes, they’re invisibly patterned with magnetic fields representing information. To make the computer receptive, there’s a master disc. A shoe-box-shaped “disc drive” that I hook to the computer sends information back and forth between the disc and the computer’s chips.

“Slip disc into drive,” directs a manual. “Turn on power.” The drive purrs, spinning the disc. It stops. Atop the computer, in the upper left of a TV screen—another attachment—there now hovers a small square of light. It blinks. That’s all.

Minutes pass. “How’s it going?” calls my wife from another room. Flustered, I tell her truthfully: “Nothing to it!”

That maddening, flashing marker on the screen insists on action, so I yank the computer’s plug. A sullen scan of the manual discloses what’s really needed: a concise chain of instructions—a program—telling the computer what to do, step by step. In my knotted brain a light goes on, followed by another on the screen.

Prompted by the blinking marker, or cursor, I type a practice game program on the computer’s keyboard. Now the machine should display a dot, bouncing like a ball back and forth across the screen.

It beeps instead, heralding an error. I give the computer a very personal command not in any manual, then begin debugging.

CHOOSE A STARTING POSITION FOR DOT is up on the screen, good. So are the commands IF DOT ON SCREEN, PLOT NEW DOT POSITION and ERASE OLD POSITION. About two dozen other instructions look fine. Wait. I forgot to type: MOVE DOT AGAIN. Short one step of logic in its program, the computer simply quit. As might a dim-witted cook given a recipe that fails to instruct: “Bake cake in 350° oven for 50 minutes.”
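For readers who want to see the bug concretely, here is a minimal sketch of that bouncing-dot program in modern Python rather than the author's 1982 home-computer dialect; the screen width and names are invented, but the forgotten step is the same one.

```python
# A minimal sketch of the bouncing-dot program described above, in modern
# Python rather than the author's 1982 dialect. Names and screen width are
# invented; the point is the single forgotten instruction.
import time

SCREEN_WIDTH = 40            # text columns standing in for the TV screen

def bounce(steps=200):
    position = 0             # CHOOSE A STARTING POSITION FOR DOT
    direction = 1            # start moving right
    for _ in range(steps):
        # PLOT NEW DOT POSITION and ERASE OLD POSITION: redraw the whole line.
        print("\r" + " " * position + "*" + " " * (SCREEN_WIDTH - position),
              end="", flush=True)
        time.sleep(0.02)
        # Reverse direction at either edge so the dot bounces back.
        if position + direction < 0 or position + direction > SCREEN_WIDTH:
            direction = -direction
        # MOVE DOT AGAIN -- the step the author forgot. Leave this line out
        # and the dot never moves; the program, like his, just sits and blinks.
        position += direction

bounce()
```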

Frustrated and chastened by this machine that demands finicky precision, I can see why last year business and government paid an estimated four billion dollars for ready-made computer programs, or “software.” Why by 1990 we may need 1.5 million programmers—more than three times as many as today—to write instructions for computers that issue paychecks, run factories, and target nuclear missiles. And why hundreds of programmers need months to debug 500,000 commands for flight computers aboard the space shuttle.

Fortunately, falling prices for personal computers help swell a rising tide of off-the-shelf programs that make the machines “user friendly.” Once only an electronics hobbyist could master a personal computer—by building it. But as the chip reshapes computers into consumer items—some desk-top models cost no more than TV sets, pocket computers even less—they must be simple enough for anyone to use.

To budget money, for example. One program instantly shows a home buyer how changing interest rates affect house payments. Or savings. Programs teach everything from arithmetic to zoology. Game programs—pinball and chess and monster mazes—may number in the thousands.

With a printer and a word-processing program, the computer I used to write this article shifts, copies, or erases a word, line, paragraph, or page of text, to print cleanly edited manuscripts or letters. It also keeps files and corrects mispellings. Misspellings. Misspellings.

IT’S THE NATURE of computers, of course, to do these things electronically, by switching, storing, and transforming pulses of electricity. But humans can’t understand electrical signals; computers comprehend nothing else.

Yet we do communicate with computers—by translating our numbers, letters, and symbols into a code of electrical pulses. In computers, by custom, a high-voltage electrical pulse represents the digit 1; a low-voltage signal stands for 0. Because this system is binary (it contains only two digits), the electrical pulses in a computer are called bits, from binary digits.

Electrical pulses representing two digits may seem thin resource for expression, but Lincoln’s eloquent Gettysburg Address was telegraphed across Civil War America with only a dot and a dash, the “bits” of Morse code. Similarly, ones and zeros can encode numbers, an alphabet, or even the information in photographs and music.

Many computers, including most personal ones, digest information in chains of eight electrical pulses. These pulse strings— called bytes—shuttle through a computer’s chips something like trains in a railroad switchyard. Since a byte consists of eight bits that may stand for either 1 or 0, the “cars” in one of these “trains” can be arranged in 256 (2^8) different ways. That’s more than enough combinations to represent uniquely each letter, number, and punctuation mark needed for this article. Or to write the instructions enabling a computer to express and print it.
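A quick sketch in modern Python makes that arithmetic concrete (it uses today's ASCII codes, which work the same way as the character codes of 1982 machines):

```python
# An illustrative sketch of the byte code described above, using modern ASCII.
text = "CHIP"

for ch in text:
    code = ord(ch)                # the character's numeric code (0-255)
    bits = format(code, "08b")    # the same number written as eight binary digits
    print(f"{ch!r} -> byte value {code:3d} -> bits {bits}")

# Eight bits, each 1 or 0, give 2**8 distinct patterns -- the 256 combinations
# the article mentions, plenty for every letter, digit, and punctuation mark.
print("distinct byte values:", 2 ** 8)
```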

To carry out instructions, a computer depends on its central processor; in personal computers this “brain” is a single chip—a microprocessor. If you scanned this silicon sliver by microscope, you would notice what might be railroad tracks. These conduct “1” and “0” electrical pulses, passing through the chip at nearly the speed of light.

Alone, a microprocessor cannot hold all the data it needs and creates when working. So memory chips help out. Magnified, they show transistors in intersecting rows and columns, recalling a city street map. This grid allows the microprocessor to assign a byte a unique “address” for instant storage and recall. Most often, a memory chip permits bytes to be retrieved individually, like the numbers in a telephone book. Some such random-access memory chips, or RAMs, can store the equivalent of four copies of the Declaration of Independence.
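A toy model of that addressing idea, as a hedged sketch in modern Python (the addresses and stored text are invented): any cell can be read directly by its number, without stepping through the cells before it the way a tape must be rewound.

```python
# A toy model of random access: each byte cell has a numeric address, and any
# address can be read directly -- no winding past earlier data, as a cassette
# tape would require. Addresses and contents are invented for illustration.
memory = bytearray(256)                 # a tiny pretend memory chip: 256 cells

message = b"WE THE PEOPLE"              # store a few bytes starting at 0x40
start = 0x40
memory[start:start + len(message)] = message

# Jump straight to any address and read a single byte.
print(hex(start), "->", chr(memory[start]))          # 'W'
print(hex(start + 3), "->", chr(memory[start + 3]))  # 'T'
```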

FOR JAPAN, the chip itself is a declaration of independence. In recent years Japanese electronics firms have adopted and refined U. S. technology to win a global lead in RAMs, the vital fuel of the computer industry. Japan’s semiconductor samurai also have a reputation for quality and sharp pricing, keys to survival in a fiercely competitive 10-billion-dollar world market for chips. I glimpse part of it one day in Tokyo’s Akihabara district.

This is no tranquil geisha quarter I’m wandering, but a garish electronics bazaar. “If it holds a chip, you’ll see it here,” declares a shopkeeper. He sits nearly hidden in one of hundreds of stalls crammed with everything electronic from cassette players to pocket computers, ballyhooed by huge banners in hot pink and Day-Glo orange.

At many stalls loose chips tumble like jelly beans from bins and boxes. Hobbyists paw through them; so do engineers hunting competitors’ chips to study. Keeping tabs on a rival’s products isn’t easy, for the Japanese output of electronic goods is huge: 16 million TVs, 16 million radios, and 55 million calculators in 1981 alone.

“We face far keener competition in Japan than in the U. S.,” says Dr. Matami Yasufuku, executive director of Fujitsu Limited, Japan’s largest computer company and a top chip producer. “We Japanese can’t afford to dump discount-priced chips overseas.”

U. S. competitors claim the Japanese have done just that, to capture 70 percent of the world market for 64K RAMs, chips able to store 65,536 bits of information. (“K” stands for 1,024). The Defense Department worries that U. S. computers, weapons, and telecommunications may grow dangerously dependent on the foreign memory chips. Anxious not to provoke import quotas, the Japanese have cut chip exports and shifted some production to U. S. plants.

Yet Japan’s chip makers remain aggressive. Recently they unveiled a new generation of memory chip, with four times the capacity of 64K RAMs. Their domestic chip plants expand relentlessly too: So many have opened on Kyushu in the past few years that this southernmost of Japan’s main isles has been nicknamed Silicon Island.

U. S. rivals, trying themselves to gain or expand a Kyushu toehold, note that in the 1970s Japan’s influential Ministry for International Trade and Industry sponsored a national drive to end U. S. dominance in chips. And they complain of Japan’s tax breaks, research subsidies, and cheap loans for domestic firms, proof to them that the Japanese will tolerate no threat to a commodity as strategic as the chip.

“We’ve got a few years of tough competition ahead,” concedes Dr. Lewis M. Branscomb, vice president and chief scientist of the International Business Machines Corporation, “as the Japanese exploit the fact that they have given intense interest to manufacturing, productivity, and quality in the past 20 years while Americans were asleep at the wheel.” Nonetheless: “I’m much surer of our ability to match them in production and productivity than of their ability to match us in innovation.”

INNOVATION. Lately that word has taken on talismanic overtones in U. S. microelectronics research. Small wonder, considering some of the far-reaching changes brewing in the nature of the chip.

• Design: A squad of engineers needed 18 months to lay out Hewlett-Packard’s microprocessor, but university students are now learning to plan complex chips in far less time, using new design principles devised by Professor Carver Mead of the California Institute of Technology and Lynn Conway of the Xerox Corporation.

Significantly, chips designed in this new fashion offer organizational insights that can simplify construction of “parallel processors,” computers organized to do all steps of a task simultaneously, like a factory where everyone works at once (the idea is sketched in code below).

Supercomputers operate somewhat like this now. In hours they run calculations— long-range weather forecasts, for example—that other computers take days to finish. Such speed is expensive; a super unit typically costs ten million dollars. But Dr. Mead believes that with new chip designs supercomputers could be built small and cheap enough to give one to every child.

“The consequences would be awesome,” he predicts. “Kids could simulate with utter realism what it’s like to pilot a jet, fly by the rings of Saturn, or be jostled by the atoms banging around in a fluid. Think how kids raised with such computers would transform society. There’s nothing they wouldn’t believe they could handle.”
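Here is the sketch promised above: a minimal modern-Python illustration of the “everyone works at once” idea, doing the same batch of independent tasks first one after another and then spread across several worker processes. It illustrates the concept only; it is not the Mead-Conway method or any particular supercomputer.

```python
# A minimal sketch of parallel processing: the same independent tasks run
# serially, then farmed out to several worker processes at once. Illustrative
# only -- not the Mead-Conway design method or any real supercomputer.
from concurrent.futures import ProcessPoolExecutor
import math
import time

def task(n):
    """Stand-in for one step of a big job, e.g. one grid cell of a forecast."""
    return sum(math.sqrt(i) for i in range(n))

jobs = [2_000_000] * 8

if __name__ == "__main__":
    start = time.perf_counter()
    serial = [task(n) for n in jobs]                  # one step at a time
    print("serial:  ", round(time.perf_counter() - start, 2), "seconds")

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:               # all steps at once
        parallel = list(pool.map(task, jobs))
    print("parallel:", round(time.perf_counter() - start, 2), "seconds")
```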

• Manufacture: Shrinking microcircuits put a premium on new tools to make chips with exquisite precision. At an IBM plant in eastern New York, beams of electrons transfer chip designs directly from computers to wafers. And they do it with an accuracy comparable to a skipper holding his ship within 525 feet of its course throughout a voyage from New York to New Orleans.

Such beams have unmatched potential to pattern wafers with incredibly fine circuits. At the National Research and Resource Facility for Submicron Structures at Cornell University, Dr. Michael Isaacson has carved into salt crystals letters so tiny that a 30-volume encyclopedia could be written on a chip the size of a half-dollar.

• Materials: Other scientists try building chip circuits, atom by atom, of chemicals beamed at wafers. The goal of such “molecular beam epitaxy” is more transistors on chips, packed in three-dimensional rather than flat arrays. The process can also sheet wafers with layers of gallium and arsenic compounds that conduct electricity ten times as fast as silicon.

The drive to cram more components on the chip may end in a test tube, says chemist Forrest L. Carter of the U. S. Naval Research Laboratory in Washington, D. C. Dr. Carter thinks that relatively soon molecule-size computer switches will be synthesized from inorganic chemicals, like some drugs. Then, within 30 years, we could be jamming a cubic centimeter “with a million billion molecular switches, more, probably, than all transistors ever made.”

From Bell Telephone Laboratories scientist Andrew Bobeck has come the magnetic bubble memory. On this chip, bubble-shaped magnetic areas in a film of garnet crystal store such computerized messages as, “We’re sorry, but the number you have reached has been changed to. . . .” One day, Bobeck told me, a bubble chip the size of a postage stamp will hold the contents of a small phone book.

Researchers at Bell Labs, IBM, and elsewhere are refining Josephson junctions— electronic switches made of metals that lose all resistance to electric current when chilled to near absolute zero. Chips with these devices can switch signals in seven-trillionths of a second, presaging ultrafast telephone switching equipment, or a refrigerated supercomputer. Its chilled circuits could be packed into the volume of a grapefruit, cutting travel time for signals and enabling the machine to carry out 60 million instructions a second, ten times as many as current high-performance computers.

IBM hopes to build a prototype in a few years. “Could it be of commercial significance?” IBM’s Dr. Branscomb baited me. “I’ll tell you in the 1990s.”

BY THEN the Japanese may have created a thinking computer. Memory-chip successes have inspired the Ministry for International Trade and Industry to launch work on a machine that may win Japan command of the technological revolution being sparked by the chip. In Tokyo, MITI official Sozaburo Okamatsu told me: “Because we have only limited natural resources, we need a Japanese technological lead to earn money for food, oil, and coal. Until recently, we chased foreign technology, but this time we’ll pioneer a second computer revolution. If we don’t, we won’t survive.”

MITI expects to have a prototype of the thinking computer by 1990, and a commercial product about five years later. “It will be easy to use,” Okamatsu projected. “By recognizing natural speech and written language, it will translate and type documents automatically. All you’ll have to do is speak a command. If the machine doesn’t understand, it will talk—ask questions. It will draw inferences and make its own judgments, based on knowledge of meanings as well as of numbers. It will learn too, by recalling and studying its errors.”

This vision of artificial intelligence—machines acting in ways humans regard as intelligent—unnerved me, so I sought out computer scientist Edward Feigenbaum at Stanford University. The Japanese, too, had asked his opinion of the thinking-computer project.

“I told them it was the right idea at the right time,” he said. “Artificial intelligence is a great scientific challenge. The more people working on AI, the better.”

Artificial intelligence is as much art as science. Under Dr. Feigenbaum, “knowledge engineers” tease from human experts factual knowledge and the sometimes unrecognized rules of thumb they use to apply it. Encoded in programs, such information already allows computers to plan genetics experiments, deduce the structure of molecules, and diagnose diseases.

Future “expert systems” may advise chip designers, soldiers who must troubleshoot complex weapons, even plant lovers, as the programs gradually become everyday consultants. “Imagine one helping you nurse your sick houseplants,” suggested Dr. Feigenbaum.
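To show the flavor of such a system, here is a minimal, hedged sketch in modern Python of the houseplant consultant Dr. Feigenbaum imagines; the rules of thumb are invented for illustration, where a real knowledge engineer would elicit them from experts.

```python
# A minimal rule-of-thumb "expert system" sketch. The houseplant rules are
# invented for illustration; real systems encode rules elicited from experts.
RULES = [
    ({"leaves_yellow", "soil_wet"}, "Likely overwatering: let the soil dry out."),
    ({"leaves_yellow", "soil_dry"}, "Possible nutrient shortage: try fertilizing."),
    ({"leaves_crispy", "air_dry"},  "Humidity too low: mist the plant or move it."),
    ({"white_fuzz_on_leaves"},      "Looks like powdery mildew: isolate the plant."),
]

def diagnose(symptoms):
    """Fire every rule whose conditions are all present, like a simple consultant."""
    advice = [verdict for conditions, verdict in RULES if conditions <= symptoms]
    return advice or ["No rule matches; consult a human expert."]

print(diagnose({"leaves_yellow", "soil_wet"}))
print(diagnose({"leaves_curling"}))
```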

At the University of Pittsburgh, computer scientist Harry Pople and internal-medicine specialist Jack D. Myers have created Caduceus, a program that catalogs more diseases than a doctor could possibly remember and that enables a computer to combine facts and judgment and make a multiple diagnosis. “Like your brain, it can shift gears from disease to disease,” Dr. Myers told me. “I’ll show you.”

Into a computer went details about an elderly man rushed one night to the university hospital. He’d awakened panicky and short of breath. Heart attack? “My first guess,” said Dr. Myers.

Considering the case—no chest pain, an earlier heart attack, blood pressure normal, a history of diabetes—the computer weighed and momentarily set aside more than a dozen diseases before flashing a message about a prime suspect: PURSUING: DIABETES MELLITUS.

The computer asked about the man’s blood-sugar level. Quite high. It asked other questions to clinch matters, then announced

CONCLUDE: DIABETES MELLITUS.

More questions probed breathing sounds, heart murmurs, chest X rays. … In minutes the computer also judged the patient a heart-attack victim. His doctor had taken several days to decide as much, with doubts.

In complex or unusual cases, Caduceus makes a sounder diagnosis than general practitioners, says Dr. Myers, and almost always agrees with the specialist who has time to study a patient’s every symptom. After more testing, Caduceus could become a common doctor’s adviser, and may even lower medical costs as physicians prescribe fewer but more suitable tests to answer a computer’s questions about patients.

Also in Pittsburgh, Nobelist Herbert A. Simon teaches computers sweet reason with a program that seeks orderly patterns in irregular data and thereby hits on predictable laws of nature. This approximates the intuitive thinking of human scientists.

Named for Elizabethan philosopher and scientist Sir Francis Bacon, the program has independently rediscovered laws of planetary motion and electrical resistance, as well as the concept of atomic weight. Could Bacon discover an unknown natural law?

“Maybe, but the main goal is learning how the mind works,” Dr. Simon told me at Carnegie-Mellon University. “I grew up in a computerless world,” he said, “amid vague ideas about thought and the brain. Computers, when you try to program them to act like us, shed great light on such things.”

And could a computer, I asked, win a Nobel prize? “The Nobel Committee may yet have to think about that.”
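As an illustration of the kind of pattern hunting Bacon does (not the actual program), here is a short modern-Python sketch that tries simple power-law combinations of two measured quantities and keeps the one that stays most nearly constant across the data, rediscovering Kepler's third law from well-known orbital figures.

```python
# A BACON-style illustration, not the real program: search simple power-law
# combinations of two measured quantities and keep the one that is most nearly
# constant -- here rediscovering Kepler's third law, period^2 / distance^3.
# Distances in astronomical units, periods in Earth years (standard values).
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.86),
}

def spread(values):
    """Relative spread: zero would mean perfectly constant across all planets."""
    return (max(values) - min(values)) / max(values)

best = None
for p in range(1, 4):                # candidate exponents for the period
    for d in range(1, 4):            # candidate exponents for the distance
        ratios = [period**p / distance**d for distance, period in planets.values()]
        if best is None or spread(ratios) < best[0]:
            best = (spread(ratios), p, d)

_, p, d = best
print(f"Most nearly constant ratio: period^{p} / distance^{d}")  # expects 2 and 3
```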

Wherever the discussion turns to thinking machines, the name Marvin Minsky comes up. Professor of computer science at Massachusetts Institute of Technology, he believes self-conscious and truly intelligent computers and robots are a distant certainty. They may be as inscrutable as humans, he adds: “The notion that computers do only what they’re told is misleading. If you can’t tell a computer how best to do something, you program it to try many approaches. If someone later says the machine did as told, that’s ambiguous—you didn’t specify and couldn’t know which approach it would choose. That doesn’t necessarily mean we can’t control an intelligent computer, just that we won’t always know every detail of what it has in mind.”

THAT PROSPECT may upset some adults, but children would likely take it in stride, as they have the more than 100,000 personal computers and computer terminals in U. S. classrooms.

As the chip has cut their cost and advanced their use in schools, personal computers have refueled an old debate about the value and purpose of teaching machines. In Minnesota, where nearly all children 6 to 18 attend schools equipped with classroom computers, I saw third graders use one for rote grammar drill. The machine freed their teacher for true teaching, but it somehow seemed a costly alternative to flash cards.

Many education experts say the potential of school computers has been barely tapped, either to present subjects that boost analytic skills or to make children computer literate—able to run computers and grasp their impact on society. By that measure, most kids still grow up computer dropouts, possibly dooming them to be “know nots.”

“The chip is remaking this into a world where information is literally wealth,” says Peter Schwartz, former head of Future Studies at SRI International, a California think tank. “Without equal skill in using computers to get and employ information, people may divide into ‘knows’ and ‘know nots’ and suffer or prosper accordingly.”

These cares have yet to burden Stacey, a second grader at P. S. 41 in New York City. I watched as she giggled and pecked at the keyboard of a personal computer loaned by the LOGO Computer Learning Center, also in New York. Soon the computer was drawing triangles within triangles, and Stacey was challenging a classmate to find them all.

Afterward, at the center, I confessed to associate director Dr. Robert W. Lawler my chagrin at seeing seven-year-olds juggle abstractions that had nearly bested me in high-school geometry. It’s not uncommon, he assured me, for a child with a computer to learn more at a younger age.

“But the profoundest effect of computers on children,” he went on, “may be to make them reflect on how they think.” As Stacey had told me, nodding at her computer screen, “I try to make it like my head sees it.”

On another front—a battlefront—children are dueling robots, blasting missiles, and zapping aliens in mock clashes programmed into video-game chips. Perhaps as many as 30 billion quarters are fed annually into coin-operated video games; that they tempt children to truancy or theft any more than other pastimes is, according to the industry, an unfounded fear.

Versions for home TV and new emphasis on strategy over mayhem blunt most objections to video games. Some seven million U.S. households have them now, and Atari Incorporated, the world’s largest maker of video games, expects that number to at least triple before finally peaking.

U. S. Army tank gun crews have also been toying with the chip, built into training simulators modeled on a video game. Like that diversion, the simulators stir aggressive impulses, and troops gladly practice more, without the peril and expense of real tank maneuvers.

ROBOT SOLDIERS have no place in Pentagon planning yet, but the Army will soon test a robot ammunition handler with chips for a “brain.” A mechanical arm flexing hydraulic “muscles” and a pneumatic gripper “hand,” it will hoist and arm 200-pound howitzer shells, duty that now fatigues and endangers four GIs.

Cosmetically, today’s robots lag light-years behind the sleek androids of science fiction. Yet in dozens of industries chip-smart robots draw admiring looks for raising productivity as they tirelessly paint cars, weld ships, feed forges, and more. The hulking “steel collar workers” toiling in such jobs resemble counterbalanced beams set on boxes full of electronics. Other, smaller robot arms have shoulder, elbow, and wrist joints nimble enough to assemble electric motors or jiggle dainty light bulbs into automobile instrument panels. Some machines have more finesse, but none match the versatility of a robot: All it needs to switch jobs is a new tool at the end of its arm and a new program in its chips.

So an electrician tells me at a Chrysler assembly plant in Delaware. He oversees 30 robot welders and unselfconsciously calls them his. They crane and thrust like giant, long-necked vultures, made restive and quizzical by the skeletal car frames passing their perches. In two rows they seesaw over the steel bodies, diligently and fastidiously gripping them in C-shaped beaks. Air hoses hiss and convulse, the long necks shiver, and the snouts froth white sparks, wringing crackling arcs of heat and light from the clamped, welded steel.

Where once 30 men sweated to weld 60 cars an hour, the faster robots now handle as many as 100, and the electrician has time to smoke his pipe. Waving it at the robots, he says they’re more consistent too. “If they weld right the first time, they weld right every time, Mondays and Fridays included.”

I heard more praise at a General Motors plant in Ohio: Robots work overtime without extra pay, cut defects and waste, never strike. … I also saw robots measure car-door openings with laser “eyes,” one of many additions—tactile sensors, TV cameras, infrared probes—making robots increasingly productive. So much so that by 1990 GM hopes to be using ten times the 1,600 robots it has today.

Manufacturers and engineers talk more and more of fully automated factories, making computer-designed goods with mass-production economy and the distinction of custom detail. The Boeing Commercial Airplane Company is taking off that way now, lofted by the chip’s cheap computing power. Filling orders for ten jets, each with unique seating, Boeing builds them all together, but to computer-customized blueprints. It’s easy, because a robotlike device drills holes wherever wanted with just a change in program, dictated by a design computer.

Today’s most advanced factory may be in Japan. In the Fanuc Ltd. plant near Mount Fuji, I saw unattended carts glide to automatic storage racks, accept metal blocks, and then roll to robots; they loaded the metal into unmanned drill presses and lathes to be shaped into parts for more computerized tools and robots. On a shop floor bigger than a football field I saw but 15 human workers.

Japan claims roughly half the world’s 25,000 or so robots, and Dr. James S. Albus, a robotics expert at the U.S. National Bureau of Standards, likens that technological head start to an earlier one: “Japan has given us another Sputnik.”

Mulling the U. S. robot revolution coming in reply—and the jobs that will inevitably disappear—MIT automation researcher Harley Shaiken cautions that robots and the chip differ in a major way from previous waves of mechanization.

“This technology affects offices as well as factories,” he told me. “It creates a potential economic vise: One jaw shoves people from the plant, and the other limits their shift to white-collar jobs.” Shaiken concludes that without retraining programs and new jobs, we invite severe economic dislocations.

“We’re creating jobs in the long run,” responds Stanley Polcyn, president of the Robot Institute of America and senior vice president of Unimation, Inc., a major producer of robots. This nation has only about 6,000 now, he notes, adding that to meet demand for more, the robotics industry itself will hire great numbers of workers.

Then there are the new job markets robots will open, like deep-sea mining. Or repair of home robots. In five years, predicts Polcyn, the first modestly useful but very expensive ones should be housebroken.

ANOTHER FIXTURE of futuristic forecasts—the electronic newspaper—is already here. More than a dozen dailies now publish an edition without cutting a tree or inking a press. “You can’t give a kid separate editions for the lawyers, laborers, and housewives on his paper route,” points out Elizabeth Loker, who helped develop an electronic edition of the Washington Post. It goes out over telephone lines to personal computers, and subscribers choose what they’ll read from a menu on their screens, instead of hefting an entire paper off the front step. “Electronic delivery lets every reader assemble his own newspaper,” says Loker.

Reading news on a computer screen for an hourly fee can tax the eyes and the wallet— an electronic version of a 25-cent paper could easily cost ten dollars. But publishers believe that shoppers will pay for up-to-the-minute advertising, a money-maker that also attracts the American Telephone and Telegraph Company. A possible future rival of newspapers, AT&T has already tested an electronic edition of the Yellow Pages.

At Bell Labs, the research arm of AT&T, I learned a primary cause of such changes. “Each time microelectronics cuts computing costs by a factor of ten,” explained Dr. John S. Mayo, executive vice president for network systems, “it opens a vast array of things that were once uneconomic.”

Like the teleterminal Dr. Mayo showed me: a combination telephone and computer terminal, with a compact keyboard and screen. The desk-top device logs his appointments, finds phone numbers, makes calls, sends and receives memos, and displays files—all at the tap of a few buttons.

Though experimental, Bell’s teleterminal exemplifies the chip’s power to alter the way we work, or even where we work.

“In 20 years a significant number of us— not just craftsmen or entrepreneurs—will work at home, using computers and dealing with our offices by electronic mail,” says Dr. Margrethe H. Olson. The New York University professor advises corporations considering how to attract or keep workers who dislike commuting, have small children, or are homebound by handicaps.

Some bank and insurance company employees “telecommute” now, a trend, Dr. Olson told me, with subtle implications. “The nine-to-five workday will grow artificial. Sick leave, vacation, and pension policies will change. So will the separation of work and family and the concept of leisure time—what you do with it, and when.”

At Columbia University, professor of public law and government Alan F. Westin spoke of a potentially worrisome aspect of working with the chip.

“Word processors and computer terminals can keep us under surveillance,” he said. “A boss can know how many keystrokes a secretary makes in a minute, hour, or day. At insensitive companies new technology may be an opportunity to grip workers totally.”

A decade ago Dr. Westin headed national studies of inquisitive centralized computer data banks, research that led to new federal privacy laws. He sees another challenge to our privacy in this decade.

“With personal computers and two-way TV,” he said, “we’ll create a wealth of personal information and scarcely notice it leaving the house. We’ll bank at home, hook up to electronic security systems, and connect to automatic climate controllers. The TV will know what X-rated movies we watch. There will be tremendous incentive to record this information for market research or sale.”

While some ponder how to shield sensitive information lodged in the ubiquitous chip, others contrive to tap it—for revenge, for fun, for profit. All three motives have figured in computer crimes.

Computers are woefully corruptible. Files can be altered, unauthorized commands can be added to programs, and legitimate ones misused, often without discovery. Nor does this take great skill: In tests, amateurs have penetrated the defenses of even classified military computers.

In recent years experts have put the cost of push-button capers at 100 million to 6.5 billion dollars annually. But undetected and unreported computer crimes make estimates suspect, cautions one authority, Donn B. Parker. He calculates that known computer frauds—a limited sample—typically cost their victims about half a million dollars. And the potential for plunder is sobering: Daily now, banks transfer more than 500 billion dollars around the U. S. by computer.

Electronic lawbreakers may hit harder and more often in the future, as personal computers multiply the means to penetrate computer systems and dramatically increase the number of people familiar with them. Drug runners and bookmakers already use personal computers, and other organized criminals will likely make them outright accomplices.

Teenagers, as easily as if vandalizing empty houses, have wrought long-distance havoc with their keyboards. Using telephone lines as a link, two California boys tampered with racehorse and greyhound pedigrees stored in a computer in Kentucky, and for a time the files of some Canadian corporations were an open book to youngsters at school computers in Manhattan.

CHILDREN OF THEIR TIME, you may lament, making mischief in a fashion ushered in with incredible rapidity by the chip. With such swiftness that you may conclude a revolution in our lives is well under way.

Yet it has hardly begun. In decades to come the technology of this age of the chip will surely seem minor, gradually dwarfed by its sweeping social effects.

Some will come as we put the chip to new uses. Chips aside, the latest artificial limbs and organs are not fundamentally new—unlike the microcircuits some scientists speculate we may one day implant in our heads to augment our intelligence.

As well, the chip will add new dimensions to old social issues. In an economy based on robots, how will we share wealth, now commonly distributed in the form of jobs?

Deepest change of all, the chip will alter our self-image. Apes that master sign language and use tools have already shaken the idea that to have ideas is to be human, a view likely to decline even further if machines too begin thinking.

Such profound adjustments seem to be the unavoidable and unsettling price of living in the age of the chip. But not too great a price, for in paying it we stand to gain the benefit of exercising some of our best virtues: patience, flexibility, wisdom.
