COMPUTER MEMORIES (Jun, 1955)

A large mathematical machine must be able to store information and refer to it. This requirement has stimulated the evolution of information-storage units based on various physical effects

by Louis N. Ridenour

A computing machine capable of solving problems must possess a “memory” or, less poetically, an “information-storage unit.” The recent history of improvements in computing machines has been largely a history of improving memory devices. No ideal system has yet been found, but there has been a great deal of progress within the past decade, and several promising new developments are on the horizon.

When we speak of the “memory” of a machine, we are using the term in a rather different sense from the usual one involving human mental activity. In a computer the component labeled “memory” serves the functions of storing instructions, data put into the machine and results of computations, which are held until they are needed for successive operations. To clarify what we mean by the machine’s memory, let us compare the machine with a human computer, say a man preparing his tax return.

His program of instructions is set forth in the income tax form and amplified in books prepared by professional tax advisers. The “input data” are available in certain records: income payments to him, the amount withheld for the Internal Revenue Service by his employer, checkbook stubs testifying to his deductible expenses and contributions, and so on. The taxpayer processes these data (performing addition, subtraction, multiplication and division) according to the program of instructions and enters the results as “output data” on the form. During most of the work the man is not employing his memory: he reads the instructions from the form or a book and the input data from his records, and he stores partial results of his calculations as he goes along by jotting them on a scratch-pad. Almost the only way his memory comes into play is in his use of learned skills—adding, multiplying, reading and writing.

In the case of the computing machine, reading and writing are the province of input and output devices rather than the memory, and the rules for multiplication and addition are wired into the machine at the time of its construction. Thus the computer’s memory is used in a quite different way: it must hold the jottings on the scratch-pad, the program of instructions set forth on the tax form (and, if possible, their fuller explanation by J. K. Lasser), files of receipted bills and canceled checks, information from the looseleaf notebook used to keep a budget, and the like.

We must therefore compare computer memory devices not with the memory functions of the human brain, but rather with the physical information-storage devices used by men—the scratch-pad, notebooks and other current records, books and other permanent references.

Next we have to consider the form in which the information is stored, or, to put it another way, the language of the memory. Human language is composed of 10 digits (the decimal system), 26 letters of the alphabet (in English) and various punctuation marks and other symbols. But in an electronic computer it is essential to translate all information into a simpler code, and the most convenient is the binary system, using just two symbols: 0 and 1.

The binary system is peculiarly appropriate to the nature of an electronic machine; if it had not existed, computer designers would have had to invent it. Its two symbols can be expressed simply by the “on” and “off” states of a vacuum tube (or its potential replacement, the transistor). In the logical circuits of a modern computer the memory units commonly consist of pairs of vacuum tubes connected in a circuit which is called a “toggle” because of its analogy with a toggle switch. Once turned on, a toggle remains on until an explicit action is taken to turn it off. The toggle circuit is so connected that when one tube of the pair is carrying current, the other cannot carry current. Upon receiving an appropriate signal voltage, the toggle reverses; the tube formerly carrying current is abruptly cut off, and the tube formerly not carrying current now passes a large current. A toggle is either on or off; it has no other stable states. It can switch from on to off or vice versa in considerably less than a millionth of a second.
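In modern programming notation the toggle’s behavior can be sketched as follows (a minimal model in Python, used here and below purely for illustration; no such notation existed in 1955):

    class Toggle:
        """A pair of tubes with exactly one conducting at a time,
        holding one binary digit until explicitly changed."""

        def __init__(self):
            self.on = False          # False: the "off" tube conducts (stores 0)

        def set(self):               # an input signal turns the toggle on
            self.on = True

        def reset(self):             # another signal turns it off
            self.on = False

        def read(self):
            return 1 if self.on else 0

    t = Toggle()
    t.set()
    assert t.read() == 1             # the toggle holds its state indefinitely
    t.reset()
    assert t.read() == 0

The essential property is that the state persists until it is explicitly reversed, which is what makes the circuit usable as a one-bit memory.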

These two states, then, can represent the binary symbols, and a combination of those symbols in turn can represent a character of a richer alphabet. Thus numbers or words of ordinary English text can be stored in the machine’s memory as sequences of on-or-off signals. It takes five binary symbols to express a single letter of the English alphabet.
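Five binary symbols give 2^5 = 32 distinct patterns, enough for the 26 letters with a few left over. A hypothetical assignment (our own, not any actual teleprinter code) makes the point:

    # Assign each letter a 5-bit pattern: A = 00000, B = 00001, ... Z = 11001.
    # (An illustrative assignment; real codes of the period differed.)
    def encode(text):
        return [format(ord(c) - ord('A'), '05b') for c in text]

    print(encode("CAT"))   # ['00010', '00000', '10011']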

A big difficulty lies, of course, in the fact that the language of men must be translated into this language of the machines. At the output end the machine itself can perform the translation, but on the input side men usually have to do the translating to the machine. Moreover, there is a serious discrepancy between men and machines in operating speed. Even the primitive machines we now know how to build can perform any unit operation between a thousand and a million times faster than a human being could. If the machines are to be very useful in doing man’s clerical work, some way must be found to match the speed of input to the speed of the machine’s operation. A straightforward solution would be to let tens or even hundreds of human key-punch operators provide a single machine with its input data. This also suggests how the problem of translation can be approached. The single operation of pressing a key labeled A could cause the machine to type the letter A on a page, select the matrix necessary to cast an A in a Linotype slug, punch the binary representation of the letter A in a paper tape, punch the holes representing that letter in a business-machine card, and so on. Thus the operation would simultaneously produce two kinds of copy, one legible to human beings, the other to the machine. (In fact, the teletype page printer already does this: it types copy in English text and at the same time perforates a paper tape which can be read by a machine.)
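The idea of one keystroke yielding several records at once can be sketched in the same notation (an illustrative fragment only; the Linotype and card outputs mentioned above are omitted, and the 5-bit code is the hypothetical one given earlier):

    def keystroke(letter):
        """One key press produces copy for people and copy for machines."""
        tape_code = format(ord(letter) - ord('A'), '05b')   # 5-bit paper-tape code
        return {"printed page": letter, "paper tape": tape_code}

    print(keystroke("A"))   # {'printed page': 'A', 'paper tape': '00000'}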

Once a given set of data has been recorded in machine language, no human operator should ever again have to perform the same key-punch operation, because machines could always reproduce the data from the transcript. All information needed for reference should be preserved in machine-readable form.

These, then, are the requirements and general problems in designing machine memories. Now let us look at the memory devices themselves. It is useful to distinguish three classes of memories.

The first is the inner or high-speed memory of the machine. This is the machine’s analogue to the scratch-pad; it is used to store the data and instructions in current use. There are two important requirements of the inner memory: it must permit rapid access to its data and it must be erasable. Because the elements of a rapid-access system are costly, the inner memory of a machine usually is small, with a capacity of some 1,500 to 6,500 English words. (We shall use words instead of “bits”—binary digits—as the measure of memory capacities in this article.)

It must therefore be supplemented with an intermediate-speed memory—the analogue to a human computer’s notebooks and files of documents. This memory may take as much as a thousand times longer than the fast inner memory to look up an item of information, but it can transfer a large block of data at once. Its total storage capacity is typically in the range from 10,000 to 100,000 English words. Like the inner memory, it must be erasable.

The third class of machine memory corresponds to books and similar large repositories of the knowledge of mankind. Such a memory is required in any application where the machine is to keep large files: for instance, the subscription and promotion lists of a mass-circulation magazine, which may call for the storage of an amount of information equivalent to 100 million English words. To achieve this vast increase in storage capacity at reasonable money cost we must pay the price of slower access to the stored information, sacrifice of rapid erasability or some combination of the two.

Up to the present time most of the attention of designers has been focused on the high-speed inner memory of computing machines. Several different forms have been developed. One device is the vacuum-tube toggle circuit already mentioned. This was used for the inner memory of the ENIAC, the first of the modern high-speed computers. Since vacuum-tube circuits are complicated and expensive, the ENIAC memory was very small by present standards; its capacity was equivalent to only 27 words of English text.

The expense of a vacuum-tube toggle as a memory element made it evident that an entirely different memory system would be required if large storage was to be achieved. Designers turned to the idea of storing information in a delay tank of mercury, where the information is cycled in the form of ultrasonic sound waves. The electrical pulses representing information “bits” are converted by crystals to sound waves, which travel so much more slowly than electricity or light that a given length of path can hold a vastly greater amount of information. A mercury delay tank with a two-foot sound path permits the storage of about 400 pulses, or bits, spaced one microsecond apart. The pulses are amplified and recirculated through the tank repeatedly.
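These figures can be checked with a little arithmetic. Sound travels through mercury at roughly 1,450 meters per second (a handbook value; the article does not state one):

    SOUND_IN_MERCURY = 1450            # meters per second, approximately (assumed)
    path = 2 * 0.3048                  # the two-foot sound path, in meters

    delay = path / SOUND_IN_MERCURY    # time for one trip through the tank
    capacity = delay / 1e-6            # pulses spaced one microsecond apart

    print(f"total delay  ~{delay * 1e6:.0f} microseconds")      # ~420
    print(f"capacity     ~{capacity:.0f} bits")                 # ~420, near the 400 cited
    print(f"average wait ~{delay / 2 * 1e6:.0f} microseconds")  # ~210; see below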

A number of practical computing machines using this type of high-speed memory have been built. The best known is UNIVAC, whose inner memory has 100 parallel acoustic channels and a storage capacity of 2,800 English words.

The chief objection to the acoustic-delay memory is its rather long access time. While signals are traveling through the mercury as sound pulses, they are unavailable to the computer. To refer to items of the stored information, the machine must wait until the wanted information reaches the receiving crystal and is turned back into electrical pulses. On the average this entails a delay of one half the total delay introduced by the mercury tank, or about 200 microseconds in a typical case. This is a long time compared to a modern computer’s speed of calculation.

The next idea developed was an electrostatic memory system using a cathode-ray tube in principle like a television picture tube. Information is recorded as patterns of charge (“dots” or “dashes”) produced by an electron beam at spots on the screen. The information stored in a given spot can be read by directing a beam of electrons to the chosen spot; electron emission produces output signals identifying the stored information. A typical electrostatic memory has 40 storage tubes, each with a pattern of 32 by 32 storage positions on its face; this corresponds to storage sufficient for about 1,350 English words. It takes only about 12 to 20 microseconds to deliver an information “bit” from its memory. The main virtue of the electrostatic memory is that the electron beam can very quickly pick out any part of the stored information. But its big drawback is that it almost doesn’t work; the system is so delicate and so subject to external interference that it is extremely difficult to maintain accurate operation.
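The capacity figure is easily verified if we take an English word to average about six letters (including a space between words) at five bits apiece, an assumption we shall use throughout:

    tubes = 40
    spots_per_tube = 32 * 32             # a 32-by-32 pattern on each tube face
    total_bits = tubes * spots_per_tube  # 40,960 bits in all

    BITS_PER_WORD = 30                   # ~6 letters of 5 bits each (assumed)
    print(total_bits // BITS_PER_WORD)   # 1365 -- about the 1,350 words cited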

More promising than any of the foregoing is a new type of memory just recently developed. It depends on magnetic effects. The units in the system are tiny rings of magnetic material, called “cores” [see photograph on page 93]. Information is stored in a core by applying a magnetic field; when it is magnetized in one direction, its state represents the digit 1; in the opposite direction, it represents 0. The core is a permanent magnet which “remembers” its magnetic state after the applied magnetic field is turned off; thus the core can store information as long as may be desired. Now when a magnetic field is applied again, the behavior of the core will depend on its magnetic state. If, for example, a field is applied to urge the core in the magnetic direction signifying 1, and the stored magnetism in the core happens to be in the direction signifying 0, the field will change the core’s magnetic state. This change produces a signal current in a wire threading the core. Thus the generation of the current shows that the core was storing a 0. On the other hand, if the core happened to be storing the digit 1, the same applied field would produce no change in its magnetic state, and there would be no signal; the absence of a signal would show that the stored information was the digit 1.
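Note that this reading scheme is destructive: sensing the bit leaves the core magnetized in the 1 direction, so a practical memory must rewrite the bit after reading it. A sketch of a single core under this convention (ours to illustrate the text, not any particular machine’s design):

    class Core:
        """One magnetic core: remembers 0 or 1 as a direction of magnetization."""

        def __init__(self, stored=0):
            self.stored = stored

        def read(self):
            """Drive the core toward the 1 direction, as in the text above."""
            signal = (self.stored == 0)  # the core flips only if it held a 0
            self.stored = 1              # afterward it is magnetized toward 1
            return 0 if signal else 1    # a sense-wire pulse means 0 was stored

    c = Core(stored=0)
    print(c.read())   # 0 -- the sense wire pulsed
    print(c.read())   # 1 -- no pulse; the first read overwrote the 0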

The system uses an array of cores arranged in rows and columns [see photograph]. The magnetic field is applied by means of two current-carrying wires passing through each core at right angles to each other. The system exploits a peculiar property of the new magnetic materials called ferrites: namely, the fact that some cling to their magnetic state and cannot be changed from it by an opposing magnetic field until the field is stronger than a certain threshold value. In the scheme shown in the diagram on this page, the magnetic field produced by a current in a single wire, say Y1, is not sufficient to change a core’s magnetic state; only when two currents are added together at an intersection, as at Y1-X1, does the magnetic field become strong enough to reverse the core’s state. In this manner it is possible to read the information stored in any individual core. The information is read as a signal on another wire winding which can link all the cores in the array, since only one core is consulted at a time in reading.
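This coincident-current selection can be modeled directly. Each energized wire contributes a field of about half the switching threshold; only the core at the crossing of the two driven wires feels the full sum (the numbers below are arbitrary illustrative units):

    THRESHOLD = 1.0    # field needed to switch a core
    HALF = 0.6         # field from one energized wire: below threshold alone

    def cores_switched(rows, cols, drive_row, drive_col):
        """Return the positions whose combined field exceeds the threshold."""
        switched = []
        for r in range(rows):
            for c in range(cols):
                field = (HALF if r == drive_row else 0) + \
                        (HALF if c == drive_col else 0)
                if field > THRESHOLD:
                    switched.append((r, c))
        return switched

    print(cores_switched(4, 4, drive_row=1, drive_col=2))   # [(1, 2)] only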

The magnetic-core memory produces stronger signals than an electrostatic memory, is relatively immune to unwanted electrical disturbances and uses simpler circuitry.

Another promising type of high-speed memory is in laboratory development. It involves the use of a ferroelectric material, which has the property of “remembering” the direction of an electric field applied to it. This behavior is exactly analogous to that of the magnetic cores just described: the cores remember their magnetic history, while the ferroelectric crystal remembers its electric history. The main practical difference between the two systems is that the ferroelectric memory requires much less electric power. The material used in the experimental ferroelectric memory systems is barium titanate. A single small crystal of this material can store 256 “bits” or eight English words.

For the larger, intermediate-speed memory of a computer the favorite device at present is the magnetic drum. The drum’s outer surface is coated with a ferromagnetic material like that used in magnetic tape recorders. On this surface, which can be rotated at high speed, information is recorded in the form of magnetized areas “written” in circular tracks as the drum rotates. The writing and reading are done by electromagnet “heads” mounted close to the recording surface. The tracks, in present practice, are laid down about a tenth of an inch apart and can store about 80 digits per inch. Thus a drum four inches in diameter and 12 inches long can store some 4,000 English words. Larger drums, storing up to 100,000 English words, have been built.
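The drum’s capacity follows from its geometry (assuming, as before, about 30 bits per English word):

    import math

    circumference = math.pi * 4            # inches, for a 4-inch-diameter drum
    bits_per_track = circumference * 80    # 80 digits per inch along each track
    tracks = 12 / 0.1                      # a 12-inch drum, tracks 0.1 inch apart

    total_bits = bits_per_track * tracks   # ~120,600 bits
    print(f"~{total_bits / 30:,.0f} words")   # ~4,021 -- the 'some 4,000' cited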

From the standpoint of access to the stored information the magnetic drum suffers from the same kind of limitation as the ultrasonic delay tank. The computer must wait until the desired information on the drum comes under the reading head. Even on the fastest-rotating drums this waiting time may be 10 to 20 thousandths of a second. Moreover, the magnetic-drum memory requires substantial vacuum-tube circuitry, and hence is not particularly economical.
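The waiting time is set by the rotation speed. At 3,600 revolutions per minute, a fast speed for a drum (our assumption for illustration), the figures come out in the range the text gives:

    rpm = 3600                       # assumed rotation speed
    revolution = 60 / rpm            # seconds per turn
    print(f"worst-case wait {revolution * 1e3:.1f} ms, "
          f"average {revolution / 2 * 1e3:.1f} ms")
    # worst-case wait 16.7 ms, average 8.3 ms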

Magnetic-core arrays may provide an answer to the intermediate-speed memory problem as they have for computers’ inner memories. At present only their cost stands in the way of their use for this larger memory, and they may soon be fully competitive with the magnetic drum, even in the largest practicable sizes—around 200,000 English words.

So far comparatively little work has been done on the third type of memory required by computers—the large-capacity storage of information corresponding to the human library of printed books. Many electronic computers still are tied to punched cards, which at present computer operating speeds are an anachronism. The inner memory of a computer can be read at six million English words per minute. The intermediate-speed memory can be read at the rate of 400,000 words per minute per reading head, and information is often read from several heads in parallel to increase this rate. But punched cards at best can be read no faster than 20,000 words per minute. In addition, the sheer physical bulk of a large file of punched cards makes access to an item of stored information slow and difficult. We must search through the many drawers of cards, choose the drawer containing the entry of interest, carry the cards in that drawer to the reader and read through the deck until we have found what we want. This may take minutes.

Most designers of the newer computers have adopted the magnetic tape as the best available improvement over punched cards for large storage capacity. By paralleling several channels on a single tape, by using several tapes in parallel and by using great lengths of tape, information storage of almost any desired total capacity can be realized. But tape has several serious drawbacks. Access to items on the tape is slow; the time is proportional to the length of the tape. Far more important is the difficulty of making corrections or additions to the information stored. To add a new entry to an ordered file we must either have provided empty spaces between each two original entries in the file, or else we must rewrite the entire file, putting the new entry in its correct location. Neither of these possibilities is attractive. Leaving blank spaces in the original file means that we are using the store in a highly wasteful way, with no guarantee that our blank spaces will be properly arranged for the additions we shall later have to make. Rewriting the whole file to make additions and changes is costly and lends itself to the commission of errors.
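The cost of the rewriting strategy is easy to see in a sketch: adding a single entry to an ordered sequential file means recopying every record, however small the change (the file contents here are illustrative, of course):

    def insert_by_rewrite(tape, entry):
        """Copy the whole file to a new tape with the entry in its place."""
        new_tape = []
        placed = False
        for item in tape:
            if not placed and entry < item:
                new_tape.append(entry)   # the new entry goes in sorted order
                placed = True
            new_tape.append(item)        # every old record is recopied
        if not placed:
            new_tape.append(entry)
        return new_tape

    tape = ["ADAMS", "BAKER", "DAVIS"]
    print(insert_by_rewrite(tape, "CLARK"))
    # ['ADAMS', 'BAKER', 'CLARK', 'DAVIS'] -- the entire file was rewritten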

Two millennia ago human beings had just the same difficulties with scrolls—the ancients’ counterpart of books. The scroll form was dictated by the need for protecting the edges of their brittle papyrus; the scroll left only two edges exposed. Shortly after the tougher parchment was introduced, the book form was invented, either in Greece or in Asia Minor. Called the codex, it was originated primarily for law codes, so that pages could be removed or added as statutes changed.

Reels of magnetic tape are the scroll stage in the history of computing machines. It remains for someone to invent the machine’s analogue of the codex. A promising beginning has been made by Gilbert W. King. He has undertaken to exploit the great density of information storage that is possible through the use of high-resolution photographic emulsions. With his “photoscopic” technique information can be stored at densities more than a hundred times as great as those possible in magnetic media. This permits not only a more compact storage system—on a disk the size of a 12-inch phonograph record more than half a million words can be stored—but also faster reading rates per reading station, more information accessible to a single reading station for a given access time, and other advantages.
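The claimed density advantage can be checked roughly against the magnetic drum described earlier (30 bits per word assumed, as before):

    import math

    bits_on_disk = 500_000 * 30                # half a million words on one disk
    disk_area = math.pi * 6 ** 2               # a 12-inch disk, in square inches
    photo_density = bits_on_disk / disk_area   # ~133,000 bits per square inch

    drum_density = 80 / 0.1                    # 80 bits/inch, tracks 0.1 inch apart
    print(f"~{photo_density / drum_density:.0f} times denser")   # ~166 times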

The first practical applications of photoscopic storage are just beginning. It is too soon to say how important this new technique may ultimately prove to be. There are indications that we shall now be able to give the computing machine a storage system having the capacity and flexibility of a library of books.

1 comment
  1. Alan Dewey says: March 17, 2010, 8:31 am

    Beautiful picture of that small ferroelectric ram unit. The author, Dr. Ridenour, made no mention that the era of Ferroelectric memory was begun by MIT graduate student Dudley A. Buck for his 1952 M.S. thesis. Ferroelectric research was begun at Bell laboratories by J. Reid Anderson after Mr. Buck began his work.
