A Short History of Computing (Jul, 1978)
A few weeks ago a master’s degree candidate in computer science confided, with an embarrassed laugh, that he had never seen a computer. His experience with the machines of his chosen vocation had consisted entirely of submitting punched cards through a hole in a wall and later getting printed results the same way. While his opportunities to see equipment are restricted due to his student status, there are also thousands of working programmers and analysts using large scale equipment who have no contact with existing hardware and will never have a chance to see any first or second generation computers in operation.
This is in sharp contrast with the way programmers worked in the late 1950s and early 1960s. Before 1964, when multiprogramming computers were introduced, the typical programmer had opportunities to come in contact with the computer if he or she wanted to do so. Prior to 1960, in fact, most programmers actually operated the machine when debugging their programs. These people came to know the computer as a physical device; the current programmer is more likely to think of it as a vague logical entity at the other end of a terminal. Thus, many large system programmers have the rare distinction of using a tool without knowing how it works or what it looks like, even though many important computer developments have occurred within the average programmer’s lifetime.
However, in the past year or two, dramatic reductions in the cost of minicomputer components and the advent of the microcomputer have returned the hands-on computer to respectability in two ways. First, it is now possible to justify hands-on debugging on a small computer, since the hourly rate of the programmer is higher than that of the machine. Second, the decreasing cost of home computing has fostered the birth of a new class of “renaissance programmers”: people who combine programming expertise with hardware knowledge and aren’t afraid to admit it. Renaissance programmers can learn much from the lessons of computer history; simple and inelegant hardware isn’t necessarily best, but it’s frequently cheapest.
In short, the stored program computer became a necessary tool only recently, even though various mechanical aids to computation have been in existence for centuries.
One of the first such aids was the abacus, the invention of which is claimed by the Chinese. It was known in Egypt as early as 460 BC. The Chinese version of the abacus (as shown in photo 1) consists of a frame strung with wires containing seven beads each. Part of the frame separates the topmost two beads from the lower five. The right-hand wire represents units, the next tens, the next hundreds, and so on. The operator slides the beads to perform addition and subtraction and reads the resulting sum from the final position of the beads. The principle of the abacus became known to Roman and early European traders, who adopted it in a form in which stones (called calculi in Latin, hence the word “calculate”) were moved around in grooves on a flat board.
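The place value scheme described above can be sketched in a few lines of code (our own illustration, not from the original article): each rod contributes five times its raised upper beads plus its raised lower beads, weighted by a power of ten.

```python
# Illustrative sketch of reading a number from a Chinese abacus.
# Each rod is modeled as (upper, lower): the count of upper beads
# (worth 5 each) and lower beads (worth 1 each) pushed toward the
# crossbar. Rods are listed from the units rod upward.
def abacus_value(rods):
    total = 0
    for place, (upper, lower) in enumerate(rods):
        total += (5 * upper + lower) * 10 ** place
    return total

# 3 units, 1 upper + 2 lower beads on the tens rod (= 70), 1 hundred
print(abacus_value([(0, 3), (1, 2), (0, 1)]))  # -> 173
```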
The use of precision instruments dates back to the Alexandrian astronomers. Like the mathematics of the period, however, the development of scientific instruments died away with the demise of the Alexandrian school. The Arabs renewed interest in astronomy in the period between 800 and 1500 AD, and it was during this time that the first specialists in instrument making appeared. The center of instrument making shifted to Nuremberg, beginning about 1400. By the middle of the 16th Century, precise engraving on brass was well advanced due in part to the interest in book printing.
Calendrical calculators used for determining the moon’s phases and the positions of the planets crop up in all the major periods of scientific thought in the past two thousand years. Parts of a Greek machine about 1800 years old, apparently used to simulate the motions of the planets, were found in 1902 in the remains of a ship off the island of Antikythera. The gears of the machine indicate amazing technical ability and knowledge. Later calendrical calculators, which were usually of the type in which two or more flat disks were rotated about the same axis, came to include a means of telling time at night by visually aligning part of the Big Dipper with the pole star.
Trigonometric calculators, working on a graphical principle, were in use in the Arabic period. Such calculators were used mainly to determine triangular relationships in surveying. The popularity of this device was renewed in 14th Century Europe; in fact, calculating aids of all kinds grew rapidly in popularity as well as in scope from this time onward, largely due to the difficulty of the current arithmetic techniques. Napier was continually seeking ways to improve computational methods through his inventions. One such invention, “Napier’s bones,” consisted of a number of flat sticks similar to the kind now used in ice cream bars. Each stick was marked off into squares containing numbers. To perform calculations, the user manipulated the sticks up and down in a manner reminiscent of the abacus. Of particular interest is the fact that Napier’s invention was used for general calculation at a time when many other devices were used for the specific determination of one measurement, such as the volume of liquid in a partly full barrel, or the range of an artillery shot.
Pascal invented and built what is often called the first real calculating machine in 1642 (shown in photo 2). The machine consisted of a set of geared wheels arranged so that a complete revolution of any wheel rotated the wheel to its left one tenth of a revolution. Digits were inscribed on the side of each wheel. Additions and subtractions could be performed by the rotation of the wheels; this was done with the aid of a stylus. Pascal’s calculator design is still widely seen in the form of inexpensive plastic versions found in variety stores.
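The geared-wheel carry that Pascal's machine embodied survives in software as the familiar ripple carry. The sketch below is a hypothetical model of the principle the article describes, not a rendering of Pascal's actual mechanism: each wheel holds a digit from 0 to 9, and a complete revolution of one wheel advances the wheel to its left by one digit.

```python
# A toy model of the Pascaline's carry principle: wheels hold digits
# (units wheel first), and a full revolution of any wheel nudges the
# next wheel along by one digit.
def add_on_wheels(wheels, addend_digit, position=0):
    """Add a single digit at the given wheel, rippling carries leftward."""
    carry, wheels = addend_digit, list(wheels)
    pos = position
    while carry:
        if pos == len(wheels):
            wheels.append(0)        # overflow: bring in a fresh wheel
        total = wheels[pos] + carry
        wheels[pos] = total % 10    # wheel comes to rest on this digit
        carry = total // 10         # full revolution turns the next wheel
        pos += 1
    return wheels

print(add_on_wheels([9, 9, 1], 1))  # 199 + 1 -> [0, 0, 2], i.e. 200
```

Adding 1 to 199 turns every wheel over, just as a single stylus stroke on the machine could ripple a carry across all its dials.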
In 1671 Leibniz invented a machine capable of multiplication and division, but it is said to have been prone to inaccuracies.
The work of Pascal, Leibniz, and other pioneers of mechanical calculation was greatly facilitated by the knowledge of gears and escapements gained through advances in the clock. In the 13th Century, a clock was devised for Alfonso X of Spain which used a falling weight to turn a dial. The weight was regulated by a cylindrical container divided into partitions and partly filled with mercury. The mercury flowed slowly through small holes in the partitions as the cylinder rotated; this tended to counterbalance the weight. By the 15th Century, the recoil of a spring regulated by an escapement had made its appearance as a source of motive power. Gear trains of increasing complexity and ingenuity were invented. Clocks could now strike on the hours, have minute and second hands (at first on separate dials), and record calendrical and astronomical events. Gears opened the door to wonderful automata and gadgets such as the Strasbourg clock of 1354. This device included a mechanical rooster which flapped its wings, stretched its metal feathers, opened its beak and crowed every day at noon. Later improvements in timekeeping included Galileo’s studies of the pendulum, and the accurate driving of a clock without weights or pendulum, which led to the portable watch.
Although mechanical and machine shop techniques still had a long way to go (consider the 19th Century machinist’s inability to fit a piston tightly into a cylinder), the importance of mechanical inventions as aids to computation was overshadowed by electrical discoveries beginning with the invention of the battery by Volta in 1800.
During the 1700s, much experimental work had been done with static electricity. The so-called electrical machine underwent a number of improvements. Other electrical inventions like the Leyden jar appeared, but all were based on static electricity which releases very little energy in a very spectacular way. In 1820, following Volta’s discovery, Oersted recognized the principle of electromagnetism that allowed Faraday to complete the work leading to the dynamo, and eventually to the electric motor. It was not until 1873, however, that Gramme demonstrated a commercially practicable direct current motor in Vienna. Alternating current (AC) was shown to be the most feasible type of electric power for distribution, and subsequently the AC motor was invented in 1888 by Tesla. The value of electric power for transportation was quickly recognized and employed in tramways and electric railways. This led to improvements in methods for controlling electricity. Electric lighting methods sprang up like weeds during the latter half of the 19th Century. The most successful were due to the efforts of Swan in England and Edison in the United States. Work on electric lighting, the telegraph and the telephone led to the wonder of the age: radio. In 1895, Marconi transmitted a radio message over a distance of one mile, and six years later from England to Newfoundland.
As a consequence of the rapid growth of interest in the radio, much work was done on the vacuum tube. Lee de Forest discovered the principle of the triode in 1907. Until the development of the transistor, the vacuum tube was the most important device in computer technology due to its ability to respond to changes in electrical voltage in extremely short periods of time. The cathode ray tube, invented by William Crookes, was used in computers for a few years prior to 1960. It faded temporarily from view but returned in 1964 due to advances in technology that improved its economic feasibility as well as its value as a display tool. In 1948 Bardeen, Brattain and Shockley developed the transistor, which began to replace the vacuum tube in computers in 1959. The transistor has many advantages over the vacuum tube as a computer component: it lasts much longer, generates much less heat, and takes up less space. It therefore replaced the vacuum tube, only to fall prey in turn to microminiaturization. Of course, the transistor principle didn’t go away, but the little flying saucers with three wires coming out of their bases did.
Oddly enough, one of the most fundamental devices in the early history of computing predates the electronic computer by more than two hundred years. The punched card was first used to control patterns woven by the automatic loom. Although Jacquard is commonly thought to have originated the use of cards, it was actually done first by Falcon in 1728. Falcon’s cards, which were connected together like a roll of postage stamps, were used by Jacquard to control the first fully automatic loom in France, and later appeared in Great Britain about 1810 (see photo 3). At about the same time, Charles Babbage began to devote his thinking to the development of computing machinery. Babbage’s first machine, the Difference Engine, shown in photo 4, was completed in 1822 and was used in the computation of tables. His attempts to build a larger Difference Engine were unsuccessful, even though he spent £23,000 on the project (£6,000 of his own, and £17,000 of the government’s).
In 1833 Babbage began a project that was to be his life’s work and his supreme frustration: the Analytical Engine. This machine was manifestly similar in theory to modern computers, but in fact was never completed. During the forty years devoted to the project, many excellent engineering drawings were made of parts of the Analytical Engine, and some parts of the machine were actually completed at the expense of Babbage’s considerable personal fortune. The machine, which was to derive its motive power from a steam engine, was to use punched cards to direct its activities. The Engine was to include the capability of retaining and displaying upon demand any of its 1000 fifty-digit numbers (the first suggestion that a computing machine should have a memory) and was to be capable of changing its course of action depending on calculated results. Unfortunately for Babbage, his theories were years ahead of existing engineering technology, but he contributed to posterity the idea that punched cards could be used as inputs to computers.
Herman Hollerith put punched cards to use in 1890 in his electric accounting machines, which were not computers, but machines designed to sort and collate cards according to the positions of holes punched in the cards (see photo 5). Hollerith’s machines were put to effective use in the United States census of 1890.
In 1911, the Computing-Tabulating-Recording Company was formed, which changed its name to International Business Machines in 1924. In the period between 1932 and 1945 many advances were made in electric accounting machines, culminating in 1946 with IBM’s announcement of the IBM 602 and 603 electronic calculators, which were capable of performing arithmetic on data punched onto a card and of punching the result onto the same card. It was Remington Rand, however, who announced the first commercially available electronic data processing machine, the Univac I, the first of which was delivered to the US Census Bureau in 1951. By 1963, barely a dozen years after the beginning of the computer business, computer rental costs in the United States exceeded a billion dollars.
Univac I was not the first computer, even though it was the first to be offered for sale. Several one of a kind computers were built in the period between 1944 and 1950 partly as a result of the war. In 1939 work was begun by IBM on the Automatic Sequence Controlled Calculator, Mark I, which was completed in 1944 and used at Harvard University (see photo 6). Relays were used to retain numbers; since relays are electromechanical and have parts that actually move, they are very slow by modern standards.
In 1943, Eckert, Mauchly and Goldstine started to build the ENIAC (Electronic Numerical Integrator and Calculator), which became the first electronic computer, using vacuum tubes instead of relays (see photo 7). The next year John von Neumann became interested in ENIAC and by 1946 had recognized a fundamental flaw in its design. In “Preliminary Discussion of the Logical Design of an Electronic Computing Instrument,” von Neumann pointed out the advantages of using the computer’s memory to store not only data but the program itself. Machines without stored program capabilities were limited in scope, since they had to be partly rewired in order to solve a new problem (as was the case with ENIAC). This process sometimes took days, during which time the machine could not be used. If rewiring of such machines was to be avoided, instructions had to be entered and executed one at a time, which greatly limited the machine’s decision making capabilities. Machines with stored program capabilities automatically store not only numeric data but also the program (which looks like numbers and can be treated like numbers) in memory. In short, stored program instructions can be used to modify other instructions, a concept that leads to programs which can modify themselves. It is the von Neumann stored program concept which is universally used in modern computers, from the smallest microcomputer to the largest number crunchers.
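The stored-program idea can be made concrete with a toy machine of our own devising (illustrative only; the instruction set is invented, not von Neumann's). Program and data share one memory, so an instruction can be rewritten like any other number: here the loop steps through a data table by modifying the operand of its own OUT instruction.

```python
# A toy stored-program machine. Memory is a single list holding both
# instructions (opcode, operand) and plain numbers; addresses are
# list indices, which is what lets code rewrite code.
def run(memory):
    pc, output = 0, []
    while memory[pc][0] != "HALT":
        op, arg = memory[pc]
        pc += 1
        if op == "OUT":                # emit the cell the operand points at
            output.append(memory[arg])
        elif op == "INCADDR":          # the self-modifying step: rewrite
            o, a = memory[arg]         # the operand of another instruction
            memory[arg] = (o, a + 1)
        elif op == "DEC":              # decrement a data cell
            memory[arg] -= 1
        elif op == "JNZ":              # jump if a data cell is nonzero
            addr, target = arg
            if memory[addr] != 0:
                pc = target
    return output

program = [
    ("OUT", 6),        # 0: emit a data cell
    ("INCADDR", 0),    # 1: point instruction 0 one cell further along
    ("DEC", 5),        # 2: count down the loop counter in cell 5
    ("JNZ", (5, 0)),   # 3: repeat while the counter is nonzero
    ("HALT", None),    # 4
    3,                 # 5: loop counter (data)
    10, 20, 30,        # 6-8: data table
]
print(run(program))    # -> [10, 20, 30]
```

On a plugboard machine like ENIAC, "pointing the program at the next cell" meant physically rewiring; in a stored-program machine it is a single instruction acting on another.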
The growth of the missile industry in the 1950s greatly stimulated the progress of computers used for scientific work. The nature of missile data handling at that time was such that work loads were very high during the week or so after a firing and virtually nonexistent in between. Computers were too expensive to leave idle, which led managers to look for other work for the machines. Business data processing grew from these roots to its present status, accounting for the lion’s share of machine usage today.
The latter part of 1959 saw the arrival of the transistorized computer. As a consequence of this innovation, air conditioning and power requirements for computers were reduced. Several new computers in that year were announced by IBM, Control Data Corporation, General Electric, and other manufacturers. Among the IBM announcements were the 7070, a general purpose computer; the 7090, a high speed computer designed for a predominance of scientific work; the 1401, a relatively inexpensive computer aimed at the medium-sized business; and the 1620, a low priced, scientifically oriented computer. The fantastic growth of the computer field continued through 1961 and 1962 with the announcement of more than 20 new machines each year. In 1963, continuing the family line from the grandfather 704 (as shown in photo 8), the IBM 7040 was announced. This machine embodied many of the features of the 7090 at a reduced cost. In the same year at least 23 other computers were announced by several different manufacturers. In 1964, IBM announced the 7010, an enlarged and faster version of the 1410, and the 360, which came in many different sizes and embodied many features not found in previous computers. Control Data Corporation announced the 6600, and General Electric their 400 series. The IBM 360/370 is typical of a trend in computer manufacturing which is currently followed by most manufacturers: upward compatibility. In the years prior to 1965, every manufacturer spent huge sums of money on research and programming support for several types of computers; several went out of business doing so. Likewise, computer users spent a lot of money to develop their systems for a particular computer, only to find it had been superseded by a faster, less expensive machine.
As a consequence, the deadly management decision of the period was, “Do we get the cheaper machine and spend the money on reprogramming, or do we risk staying with an obsolete computer and losing our programmers to the company across the street?”
Current developments point to a new trend away from the bigger machines. The combination of lower prices for components and programmable read only memories is attracting many manufacturers to the field of minicomputers and microcomputers. The current trend is clearly toward the personal computer, with TV game microprocessors leading the way.