FEEDBACK (Sep, 1952)

This is the second in a series of 5 articles I’ve scanned from an amazing 1952 issue of Scientific American about Automatic Control. It discusses automatic machine tools, feedback loops, the role of computers in manufacturing, and information theory. These are really astounding articles considering the time in which they were written.

It is the fundamental principle that underlies all self-regulating systems, not only machines but also the processes of life and the tides of human affairs

by Arnold Tustin

FOR hundreds of years a few examples of true automatic control systems have been known. A very early one was the arrangement on windmills of a device to keep their sails always facing into the wind. It consisted simply of a miniature windmill which could rotate the whole mill to face in any direction. The small mill’s sails were at right angles to the main ones, and whenever the latter faced in the wrong direction, the wind caught the small sails and rotated the mill to the correct position. With steam power came other automatic mechanisms: the engine-governor, and then the steering servo-engine on ships, which operated the rudder in correspondence with movements of the helm. These devices, and a few others such as simple voltage regulators, constituted man’s achievement in automatic control up to about 20 years ago.

In the past two decades necessity, in the form of increasingly acute problems arising in our ever more complex technology, has given birth to new families of such devices. Chemical plants needed regulators of temperature and flow; air warfare called for rapid and precise control of searchlights and anti-aircraft guns; radio required circuits which would give accurate amplification of signals.

Thus the modern science of automatic control has been fed by streams from many sources. At first, it now seems surprising to recall, no connection between these various developments was recognized. Yet all control and regulating systems depend on common principles. As soon as this was realized, progress became much more rapid. Today the design of controls for a modern boiler or a guided missile, for example, is based largely on principles first developed in the design of radio amplifiers.

Indeed, studies of the behavior of automatic control systems give us new insight into a wide variety of happenings in nature and in human affairs. The notions that engineers have evolved from these studies are useful aids in understanding how a man stands upright without toppling over, how the human heart beats, why our economic system suffers from slumps and booms, why the rabbit population in parts of Canada regularly fluctuates between scarcity and abundance.

The chief purpose of this article is to make clear the common pattern that underlies all these and many other varied phenomena. This common pattern is the existence of feedback, or—to express the same thing rather more generally—interdependence.

We should not be able to live at all, still less to design complex control systems, if we did not recognize that there are regularities in the relationship between events—what we call “cause and effect.” When the room is warmer, the thermometer on the wall reads higher. We do not expect to make the room warmer by pushing up the mercury in the thermometer. But now consider the case when the instrument on the wall is not a simple thermometer but a thermostat, contrived so that as its reading goes above a chosen setting, the fuel supply to the furnace is progressively reduced, and, conversely, as its reading falls below that setting, the fuel flow is increased. This is an example of a familiar control system. Not only does the reading of the thermometer depend on the warmth of the room, but the warmth of the room also depends on the reading of the thermometer. The two quantities are interdependent. Each is a cause, and each an effect, of the other. In such cases we have a closed chain or sequence—what engineers call a “closed loop” (see diagram on the opposite page).

In analyzing engineering and scientific problems it is very illuminating to sketch out first the scheme of dependence and see how the various quantities involved in the problem are determined by one another and by disturbances from outside the system. Such a diagram enables one to tell at a glance whether a system is an open or a closed one. This is an important distinction, because a closed system possesses several significant properties. Not only can it act as a regulator, but it is capable of various “self-excitatory” types of behavior—like a kitten chasing its own tail.

The now-popular name for this process is “feedback.” In the case of the thermostat, the thermometer’s information about the room temperature is fed back to open or close the valve, which in turn controls the temperature. Not all automatic control systems are of the closed-loop type. For example, one might put the thermometer outside in the open air, and connect it to work the fuel valve through a specially shaped cam, so that the outside temperature regulates the fuel flow. In this open-sequence system the room temperature has no effect; there is no feedback. The control compensates only that disturbance of room temperature caused by variation of the outdoor temperature. Such a system is not necessarily a bad or useless system; it might work very well under some circumstances. But it has two obvious shortcomings. Firstly, it is a “calibrated” system; that is to say, its correct working would require careful preliminary testing and special shaping of the cam to suit each particular application. Secondly, it could not deal with any but standard conditions. A day that was windy as well as cold would not get more fuel on that account.

The feedback type of control avoids these shortcomings. It goes directly to the quantity to be controlled, and it corrects indiscriminately for all kinds of disturbance. Nor does it require calibration for each special condition.

Feedback control, unlike open-sequence control, can never work without some error, for the error is depended upon to bring about the correction. The objective is to make the error as small as possible. This is subject to certain limitations, which we must now consider.
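The contrast between the calibrated open-sequence system and the feedback system, and the small standing error just described, can be seen in a toy simulation. The room model, its coefficients, and both controllers below are hypothetical illustrations, not taken from the article:

```python
# A toy room: heating from the furnace vs. heat loss to the outdoors.
# The "windy day" loss coefficient (0.8) is larger than the calm-day
# value (0.5) the open-sequence cam was calibrated against.

def run(controller, loss=0.8, setpoint=20.0, outdoor=5.0, steps=500):
    temp = setpoint
    for _ in range(steps):
        # The fuel-valve opening is limited to the range 0..1.
        fuel = min(1.0, max(0.0, controller(temp)))
        temp += 0.02 * (20.0 * fuel - loss * (temp - outdoor))
    return temp

# Open-sequence control: fuel fixed by calm-day calibration; the
# room temperature itself is never consulted.
open_loop = run(lambda temp: 0.375)

# Feedback control: fuel proportional to the measured error. A small
# error must remain, since some error is needed to hold the valve open.
closed_loop = run(lambda temp: 2.0 * (20.0 - temp))
```

On the windy day the calibrated system settles several degrees low, while the feedback controller stays within a fraction of a degree of the setting — yet never quite reaches it, exactly as stated above.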

The principle of control by feedback is quite general. The quantities that it may control are of the most varied kinds, ranging from the frequency of a national electric-power grid to the degree of anesthesia of a patient under surgical operation. Control is exercised by negative feedback, which is to say that the information fed back is the amount of departure from the desired condition.

ANY QUANTITY may be subjected to control if three conditions are met. First, the required changes must be controllable by some physical means, a regulating organ. Second, the controlled quantity must be measurable, or at least comparable with some standard; in other words, there must be a measuring device. Third, both regulation and measurement must be rapid enough for the job in hand. As an example, take one of the simplest and commonest of industrial requirements: to control the rate of flow of liquid along a pipe. As the regulating organ we can use a throttle valve, and as the measuring device, some form of flowmeter. A signal from the flowmeter, telling the actual rate of flow through the pipe, goes to the “controller”; there it is compared with a setting giving the required rate of flow. The amount and direction of “error,” i.e., deviation from this setting, is then transmitted to the throttle valve as an operating signal to bring about adjustment in the required direction (see diagram at the top of page 53).
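The loop just described — measure, compare with the setting, move the valve by the error — can be transcribed almost line for line. The numbers and the simple one-step pipe model below are illustrative assumptions:

```python
# One cycle of the flow-control loop.
def step(flow, valve, setting, k=0.3):
    error = setting - flow                         # controller's comparison
    valve = min(1.0, max(0.0, valve + k * error))  # operating signal to valve
    flow += 0.5 * (4.0 * valve - flow)             # flow follows the valve
    return flow, valve

flow, valve = 0.0, 0.0
for _ in range(100):
    flow, valve = step(flow, valve, setting=2.0)
```

Because the valve position here accumulates past corrections, the flow settles on the setting; a slower measurement or a longer pneumatic line would lengthen the lags and, as the following paragraphs describe, invite oscillation.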

In flow-control systems the signals are usually in the form of variations in air pressure, by which the flowmeter measures the rate of flow of the liquid. The pressure is transmitted through a small-bore pipe to the controller, which is essentially a balance piston. The difference between this received pressure and the setting regulates the air pressure in another pipeline that goes to the regulating valve.

Signals of this kind are slow, and difficulties arise as the system becomes complex. When many controls are concentrated at a central point, as is often the case, the air-pipes that transmit the signals may have to be hundreds of feet long, and pressure changes at one end reach the other only after delays of some seconds. Meanwhile the error may have become large. The time-delay often creates another problem: overcorrection of the error, which causes the system to oscillate about the required value instead of settling down.

For further light on the principles involved in control systems let us consider the example of the automatic gun-director. In this problem a massive gun must be turned with great precision to angles indicated by a fly-power pointer on a clock-dial some hundreds of feet away. When the pointer moves, the gun must turn correspondingly. The quantity to be controlled is the angle of the gun. The reference quantity is the angle of the clock-dial pointer. What is needed is a feedback loop which constantly compares the gun angle with the pointer angle and arranges matters so that if the gun angle is too small, the gun is driven forward, and if it is too large, the gun is driven back.

The key element in this case is some device which will detect the error of angular alignment between two shafts remote from each other, and which does not require more force than is available at the fly-power transmitter shaft. There are several kinds of electrical elements that will serve such a purpose. The one usually selected is a pair of the miniature alternating-current machines known as selsyns. The two selsyns, connected respectively to the transmitter shaft and the gun, provide an electrical signal proportional to the error of alignment. The signal is amplified and fed to a generator which in turn feeds a motor that drives the gun (see diagram on the next page).

THIS GIVES the main lines of a practicable scheme, but if a system were built as just described, it would fail. The gun’s inertia would carry it past the position of correct alignment; the new error would then cause the controller to swing it back, and the gun would hunt back and forth without ever settling down.

This oscillatory behavior, maintained by “self-excitation,” is one of the principal limitations of feedback control. It is the chief enemy of the control-system designer, and the key to progress has been the finding of various simple means to prevent oscillation. Since oscillation is a very general phenomenon, it is worth while to look at the mechanism in detail, for what we learn about oscillation in man-made control systems may suggest means of inhibiting oscillations of other kinds—such as economic booms and slumps, or periodic swarms of locusts.

Consider any case in which a quantity that we shall call the output depends on another quantity we shall call the input. If the input quantity oscillates in value, then the output quantity also will oscillate, not simultaneously or necessarily in the same way, but with the same frequency. Usually in physical systems the output oscillation lags behind the input. For example, if one is boiling water and turns the gas slowly up and down, the amount of steam increases and decreases the same number of times per minute, but the maximum amount of steam in each cycle must come rather later than the maximum application of heat, because of the time required for heating. If the first output quantity in turn affects some further quantity, the variation of this second quantity in the sequence will usually lag still more, and so on. The lag (as a proportion of one oscillation) also usually increases with frequency—the faster the input is varied, the farther behind the output falls.
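For the simplest case — a first-order lag, in which the output merely drifts toward the input with time constant tau — the lag as a fraction of one cycle can be written down directly. This is the standard formula for such an element, used here only to illustrate the statement above:

```python
import math

def lag_fraction(freq, tau=1.0):
    # Steady-state phase lag of a first-order lag, expressed as a
    # fraction of one oscillation period: atan(2*pi*f*tau) / (2*pi).
    return math.atan(2 * math.pi * freq * tau) / (2 * math.pi)
```

Slow variation gives a small lag; fast variation pushes it toward a quarter of a cycle, never beyond. A single lag of this kind therefore cannot reach half a cycle by itself — but several in series can, which is where the trouble described next begins.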

Now suppose that in a feedback system some quantity in the closed loop is oscillating. This causes the successive quantities around the loop to oscillate also. But the loop comes around to the original quantity, and we have here the mechanism by which an oscillation may maintain itself. To see how this can happen, we must remember that with the feedback negative, the motion it causes would be opposite to the original motion, if it were not for the lags. It is only when the lags add up to just half a cycle that the feedback maintains the assumed motion. Thus any system with negative feedback will maintain a continuous oscillation when disturbed if (a) the time-delays in response at some frequency add up to half a period of oscillation, and (b) the feedback effect is sufficiently large at this frequency.

In a linear system, that is, roughly speaking, a system in which effects are directly proportional to causes, there are three possible results. If the feedback, at the frequency for which the lag is half a period, is equal in strength to the original oscillation, there will be a continuous steady oscillation which just sustains itself. If the feedback is greater than the oscillation at that frequency, the oscillation builds up; if it is smaller, the oscillation will die away.
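The three cases are easy to exhibit with negative feedback acting through a pure delay of half the oscillation period — a deliberately simplified stand-in for the accumulated lags of a real loop:

```python
def late_amplitude(gain, delay=10, steps=200):
    # Negative feedback acting `delay` steps late: y[k] = -gain * y[k-delay].
    # The delay is half a period, so each "correction" arrives exactly
    # in phase with the motion it was meant to oppose.
    y = [0.0] * steps
    y[0] = 1.0                          # initial disturbance
    for k in range(delay, steps):
        y[k] = -gain * y[k - delay]
    return max(abs(v) for v in y[-2 * delay:])   # late-time amplitude
```

With gain below 1 the disturbance dies away, at exactly 1 it sustains itself indefinitely, and above 1 it builds up — the three possible results just listed.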

This situation is of critical importance for the designer of control systems. On the one hand, to make the control accurate, one must increase the feedback; on the other, such an increase may accentuate any small oscillation. The control breaks into an increasing oscillation and becomes useless.

TO ESCAPE from the dilemma the designer can do several things. Firstly, he may minimize the time-lag by using electronic tubes or, at higher power levels, the new varieties of quick-response direct-current machines. By dividing the power amplification among a multiplicity of stages, these special generators have a smaller lag than conventional generators. The lag is by no means negligible, however.

Secondly, and this was a major advance in the development of control systems, the designer can use special elements that introduce a time-lead, anticipating the time-lag. Such devices, called phase-advancers, are often based on the properties of electric capacitors, because alternating current in a capacitor circuit leads the voltage applied to it.

Thirdly, the designer can introduce other feedbacks besides the main one, so designed as to reduce time-lag. Modern achievements in automatic control are based on the use of combinations of such devices to obtain both accuracy and stability.

So far we have been treating these systems as if they were entirely linear. A system is said to be linear when all effects are strictly proportional to causes. For example, the current through a resistor is proportional to the voltage applied to it; the resistor is therefore a linear element. The same does not apply to a rectifier or electronic tube. These are non-linear elements.

None of the elements used in control systems gives proportional or linear dependence over all ranges. Even a resistor will burn out if the current is too high. Many elements, however, are linear over the range in which they are required to work. And when the range of variation is small enough, most elements will behave in an approximately linear fashion, simply because a very small bit of a curved graph does not differ significantly from a straight line.
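The last point is easy to check numerically. Taking exp(x) as a stand-in for any smooth non-linear characteristic (the function and the ranges are arbitrary choices for illustration), the worst deviation from its tangent line shrinks rapidly as the working range narrows:

```python
import math

def worst_linear_error(span, n=1000):
    # Largest gap between f(x) = exp(x) and its tangent line at x = 0,
    # namely 1 + x, over the working range [-span, span].
    xs = (span * (2 * i / n - 1) for i in range(n + 1))
    return max(abs(math.exp(x) - (1 + x)) for x in xs)
```

Halving the range roughly quarters the worst error, which is the sense in which a very small bit of a curved graph does not differ significantly from a straight line.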

We have seen that linear closed-sequence systems are delightfully simple to understand and—even more important—very easy to handle in exact mathematical terms. Because of this, most introductory accounts of control systems either brazenly or furtively assume that all such systems are linear. This gives the rather wrong impression that the principles so deduced may have little application to real, non-linear systems.
