From Steam Engines to Robots… The Hierarchies of Robotic Devices (Apr, 1978)
By F. W. Chesson
The defining of just what constitutes a robot has been dimmed by anthropomorphic bias, or the impediment of seeing the robotic scene through Human-Colored glasses. That shambling and amiable tin-man traveler on the Yellow Brick Road to Oz may be far more acceptable to the viewer than the most complex but immobile logic device. Consequently, there’s the rub … or byte! When does a computer become a robot brain?
In order to examine these perplexing and contemporary questions, it will be useful to consider and, if possible, categorize the various hierarchies of the robot or cybernetic world. To start with, there is the pre-robotic environment of the servo-mechanism, with its self-balancing feedback loop(s). Even at this primitive level, the feedback loop is itself subject to varying levels of complexity, interaction, and adaptability. Adaptability is the most essential feature of any organism, animal, vegetable or servo.
Feedback is that quality which immediately separates a large category of objects and systems. The most elaborate-appearing mechanism is nothing in comparison, when put beside a tiny device endowed with the quality of governing its activities in proportion to the varying intensity of its input stimuli, and output responses.
The word Cybernetic comes from the classical Greek for steersman, and embodies the feedback concept. Here is the helmsman, who senses the movement of wind and wave, and adjusts the rudder accordingly to keep the ship upon her course.
One of the earliest known feedback devices was the Baille-Ble, or mill-hopper, described as far back as 1588. Its use in water or windmills was to distribute the grain to the millstones, according to the rate of speed of the mill’s drive shaft. Such feedback factors as grain flow, grain hardness, millstone tension, and drive shaft force, all interacted to determine the amount of grain delivered to the stones.
This was, of course, a very primitive level of feedback, the grain-hopper receiving four jolts for every revolution of the shaft. Man, in the form of the miller, was still required to optimize conditions and design goals by regulating the flow of water to the drive wheel or the wind pressure upon the sails, and to adjust the proximity of the millstones.
Not until the late 18th Century did a more familiar feedback device appear. This was the fly-ball governor, another contribution of James Watt to the perfecting of steam power. The governor consisted of a pair of iron balls at the ends of hinged arms, linked to the engine’s drive shaft. As the shaft speed increased, the balls rotated ever faster, responding to centrifugal force and causing the linkages to move. This movement was coupled to the steam supply throttle, causing it to cut off the steam with increased speed and to open the valve as the shaft velocity slowed. The fly-ball governor continued in use well into the electrical age and was even employed with early phonograph motors.
The 19th Century saw, too, the growth of hydraulics as a science, and feedback also appeared here in the form of Leon Farcot’s Servo-Motor of 1868. Feedback was also discernible in nature, with many writers commenting upon it, from the dependency and population oscillations of predator and prey animals to the conservation of energy in the Solar Phoenix Cycle.
World War II saw research and application of feedback and servomechanisms advance under the pressure of wartime. Bombsights, anti-aircraft gun directors, and radar all fused into integrated systems.
These early analog and digital devices converged irresistibly towards today’s state of the art, where the external analog world is sensed, converted to, and processed digitally, then reacted upon by analog extensions. Any device which aspires to the name robot is therefore bound to the laws of feedback and stability, if only to maintain an external upright position, or an internal state of data-processing stability. Negative feedback converges and conserves. Positive feedback diverges into randomness and disorder. The first mechanic who managed to assemble a fly-ball governor in reverse discovered this, as the steam engine’s speed increased to the literal breaking point. His descendant, at his op-amp breadboard, is no less dismayed as he discovers a hidden glitch of oscillations emerging from his clean-looking Bode Diagram.
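The contrast between the governor assembled correctly and the governor assembled in reverse can be sketched in a few lines of simulation. The following is a minimal toy model, with all constants chosen purely for illustration (they correspond to no real engine): the throttle is nudged in proportion to the speed error, and flipping the sign of the gain turns negative feedback into positive.

```python
def governor(gain, steps=2000):
    """Toy fly-ball governor on a toy steam engine.

    gain=+1 wires the governor correctly (negative feedback: speed
    converges on the setpoint); gain=-1 is the hapless mechanic's
    reversed assembly (positive feedback: speed runs to an extreme).
    All constants are illustrative assumptions.
    """
    setpoint = 100.0            # desired shaft speed, arbitrary units
    speed, throttle = 50.0, 0.5
    for _ in range(steps):
        error = setpoint - speed
        throttle += 0.001 * gain * error          # governor linkage
        throttle = min(max(throttle, 0.0), 2.0)   # valve travel limits
        # engine: speed rises with steam, falls with friction
        speed += 0.1 * (20.0 * throttle - 0.2 * speed)
    return speed
```

With the correct sign the shaft settles at the setpoint; with the sign reversed the throttle slams against a stop and the speed diverges to a rail, the software analogue of the broken flywheel.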
Two post-war inventions which captured much popular interest in servo-systems were the Homeostats of W. Ross Ashby, and the “Tortoises” of W. Grey Walter. In the literary area there was Dr. Norbert Wiener’s monumental Cybernetics, accompanied by, in the science-fiction realm, such classics as Asimov’s I, Robot.
The Homeostat was, in essence, an interconnected complex of (usually four) servo units, so organized that a disturbance to the input of any one unit would reflect throughout all the others, and result in a mutual attempt to restore their state of equilibrium, or homeostasis. This dynamic balance, whose name was coined by Dr. Walter B. Cannon, circa 1930 in his book The Wisdom of the Body, expresses the essential requirements for all living creatures, and continuity-seeking mechanisms. Here, simple feedback is transcended by an integrated, responsive, and variable feedback, constantly adapting to external and internal variations.
The other device, or family of devices, came from the investigations of Dr. William Grey Walter, of the Burden Neurological Institute of Bristol, England. Being mobile, the “tortoises” were more visually attractive than the static Homeostats, but perhaps less sophisticated in theory. Basically, they were obstacle-avoiding automata, attracted to light up to a certain level, but repelled by a greater intensity. Later models could home in on a lamp flashing at a certain frequency to recharge their batteries. Their phototropism had been anticipated by “Philidog,” the creation of M. Piraux of the Philips organization in France, which was demonstrated at the Paris International Radio Exhibition of 1929. The “dog” would follow the movements of a flashlight, but when the lamp was put too close to his nose, he “became annoyed and started to bark!”
Photophobia, for high illumination levels, could well have been included in a robot dog built (probably by Westinghouse) for the 1939 New York World’s Fair. It was designed to home in on visitors by sensing their body heat and to “bite” their legs. But, just before the exhibit’s opening, it was attracted by the headlights of a passing automobile, charged out an open door like a four-legged kamikaze, and was run over despite the startled driver’s efforts to avoid it. If this robot tragedy offers any lesson, it is that prospective designers of automata should consider all possible environmental influences upon their future creations, and then try to program for at least N + 1 contingencies.
The ability to learn from experience, rather than continually react in the same manner, can be considered a prime requisite for any progressive artificial intelligence. A robot turtle which finds, by trial and error, its way through a maze is interesting only from a hardware standpoint. If its evolutionary successor should record only those turns which did not lead to blind alleys, and thus retrace its path in short order, it may be tentatively applauded.
More sophisticated, however, is the mechanism (or living person!) which purposefully sets out on a different route each trial, to see if there is not an even shorter way through the maze. Finally, there comes the entity which evaluates the design of each previous maze it has run, to predict the configuration of the new one, and therefore how best to optimize each trial run to come. For maze, substitute problem, or task-area, or environment, and we see the evolution of an artificial or real intelligence in its true light.
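The "evolutionary successor" of the turtle, the one that remembers blind alleys, can be sketched as a depth-first explorer that never re-enters a cell it has proven to be a dead end. This is a hedged illustration, not any historical turtle's actual logic; the maze encoding (0 for open, 1 for wall) and the function name are assumptions of convenience.

```python
def solve(maze, start, goal):
    """Trial-and-error maze run that remembers and prunes dead ends.

    Returns the path found as a list of (row, col) cells, or None.
    """
    dead_ends = set()        # cells proven to lead only to blind alleys
    visited = {start}
    path = [start]
    while path:
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < len(maze) and 0 <= nxt[1] < len(maze[0])
                    and maze[nxt[0]][nxt[1]] == 0
                    and nxt not in visited and nxt not in dead_ends):
                visited.add(nxt)
                path.append(nxt)
                break
        else:
            # blind alley: remember it and back out one step
            dead_ends.add(path.pop())
    return None
```

On a rerun, the `dead_ends` set is exactly the record of turns that did not pay off; the turtle that keeps it retraces its path without repeating its mistakes.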
Proceeding through the robot hierarchy, we come upon a host of diverse and interesting devices, the simulators. If their repertoire is limited and their application highly specialized, they yet have a story of successful problem solving to tell. They present a controlled environment for the student (human or robot) to enter and manipulate, according to programmed conditions and problems. That early flight simulator, the Link Trainer of World War II, was a pre-flight instructor for thousands of airmen. It provided realistic banks and turns in response to control movements, furnished excellent instrument-flight training, and was virtually crash-proof.
Simulating animal behavior has fascinated Man since Antiquity. Tales of magic horses, brazen warriors, and unbeatable chess-players have caught the attention of writers from the Arabian Nights down through Edgar Allan Poe. The experiments with dogs relating to Classical Conditioning by Dr. Pavlov, earning him the Nobel Prize in Physiology or Medicine in 1904, have been simulated over the years, culminating with today’s extensive computer programs.
The robot dogs shown in the photograph were developed by the author in the early Sixties, when the teaching-machine “fad” was approaching its heady zenith. At the time of the design, relay logic still had some cost advantages over the contemporary RTL gates, but some transistors were employed for the “eyes” and “ears” of the automated canines.
Pavlov’s Classical Conditioning experiments underlie much of modern learning theory; hence, if a robot, android, or humanoid is to learn, it is desirable to know what conditioning is all about. On a basic level, Pavlov rang a bell, then fed the dog, measuring the animal’s response by the amount of saliva generated. After a while, the bell alone could evoke a salivatory reaction. On a human level, do our mouths not water at the mere aroma of a tasty pie? Or even at the verbal cue that: “Dinner’s ready!” …? Of course, should the announcement prove false or premature, our anticipatory response will diminish markedly. It can, however, be readily restored, along with our faith in human nature.
The electro-mechanical dog was designed to perform the following simulations, which will be examined: conditioning (learning), extinction (forgetting), spontaneous recovery, higher order conditioning, learning curves, memory of stimuli occurrences, and stimuli hierarchy.
In general operation of the simulator, pressing the RESET switch puts the robot dog at an untrained level (electronic brainwash!). Salivation being somewhat difficult to imitate, the response to feeding was represented by having the dog wag its tail, a readily observable act of canine satisfaction. To attract the interest of younger students, the feeding stimulus was simulated via a plastic bone having a concealed magnet. When the magnet end of the bone was in proximity to the dog’s “nose,” a reed switch was closed, activating a tail-wagging power transistor and solenoid.
Via a microphone and photocell, the dog could “hear” and “see.” Normally, the audio stimulus was dominant, activating a Schmitt-trigger delay for a pre-set time interval. If the food stimulus was presented during this period, an AND gate caused this coincidence to be recorded by the Conditioning Counter, a ten-point stepping relay. Today’s equivalent would probably be a CMOS type 4017 decimal-decoded counter chip. When a preset number of coincidences, say four, had been registered, a form of relay flip-flop circuitry caused the dog to wag its tail to the sound stimulus as well as to food.
As long as an occasional sound-food coincidence (reinforcement) occurred, the conditioned state would be maintained. But after another preset number of sound stimuli without food following (anticoincidence), say five, the flip-flop resets the dog to an unconditioned state, and it must be retaught.
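The relay logic just described translates naturally into software. Here is a minimal sketch of the conditioning and extinction counters as a small state machine, using the thresholds the article quotes (four coincidences to condition, five unrewarded sounds to extinguish); the class and method names are our own assumptions, not the original design.

```python
class PavlovDog:
    """Software stand-in for the relay-logic conditioning simulator."""

    def __init__(self, condition_at=4, extinguish_at=5):
        self.condition_at = condition_at    # coincidences to condition
        self.extinguish_at = extinguish_at  # unrewarded sounds to extinguish
        self.coincidences = 0               # the stepping-relay counter
        self.misses = 0                     # sound stimuli without food
        self.conditioned = False            # the relay flip-flop

    def stimulus(self, sound, food):
        """Present one trial; return True if the tail wags."""
        wags = food                         # food alone always wags the tail
        if sound and food:
            self.coincidences += 1
            self.misses = 0                 # reinforcement holds the state
            if self.coincidences >= self.condition_at:
                self.conditioned = True
        elif sound and self.conditioned:
            wags = True                     # conditioned response to sound
            self.misses += 1
            if self.misses >= self.extinguish_at:
                self.conditioned = False    # extinction: must be retaught
                self.coincidences = 0
        return wags
```

Four bell-plus-bone trials set the flip-flop, the bell alone then wags the tail, and five unrewarded bells knock the dog back to its untrained state, just as the stepping relays did.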
Sometimes, the experimenters found that their animals would recover their conditioning (spontaneous recovery) without any apparent external action. This is similar to being given a telephone number in the afternoon, forgetting it totally by night, yet having it suddenly come to mind the next morning, apparently released from some buffer-storage deep in the subconscious. In the simulator, the spontaneous recovery function may be cut in and its “latent period” set by a potentiometer. Should normal conditioning be re-established before it can act, it is reset for future use. Once it has acted, however, it is of a one-shot nature; following a second extinction, true conditioning must follow for the SR circuit to be reset.
After conditioning and extinction, Pavlov found that his dogs not only relearned faster, but that their conditioned response was more resistant to extinction. This learning curve holds true in human education, as anyone who has learned a mathematical equation or foreign language will agree. Learning something the second time around nearly always is quicker and seems to stick longer as well.
The learning curve simulation required multi-level stepping-relays in the original model, whose pick-off points were determined in connection with the original settings for conditioning and extinction counts. Thus, the original number of four coincidences would be reduced to three and then only one, while the anti-coincidences for extinction might be increased from five to six or seven, and then to eight or ten.
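The multi-level stepping relay amounts to a pair of threshold schedules indexed by how many times the dog has already been taught. A hedged sketch, using the figures from the article (the list-of-schedules representation is our assumption, not the original relay wiring):

```python
# Pick-off points of the multi-level stepping relay, as schedules:
# each relearning lowers the conditioning threshold and raises the
# extinction threshold. Values follow the article's example figures.
CONDITION_STEPS  = [4, 3, 1]    # coincidences needed per relearning
EXTINGUISH_STEPS = [5, 7, 10]   # unrewarded sounds tolerated per relearning

def thresholds(times_learned):
    """Return (condition_at, extinguish_at) for the Nth teaching, 0-based.
    Past the last pick-off point the relay simply stays put."""
    i = min(times_learned, len(CONDITION_STEPS) - 1)
    return CONDITION_STEPS[i], EXTINGUISH_STEPS[i]
```

Fed into the conditioning counters, these schedules reproduce the learning curve: each relearning is quicker, and each conditioned state is harder to extinguish.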
When the living dog had been very well trained to salivate to the sound of the bell, it was found that the bell, as well as food, could be employed to condition him to a new stimulus, such as light. This is higher order conditioning, and represented the simulator’s highest accomplishment, being activated by the learning curve counter.
While the above model and its concepts are quite elementary, they nevertheless furnish a base upon which increasingly diverse and subtle forms of learning behavior may be simulated and explored. It has been found, for example, that conditioning is more resistant to extinction when the trial stimulus is not rewarded every time; this variable reinforcement scheduling could lend itself readily to microprogramming applications.
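Such a variable reinforcement schedule is trivial to microprogram: reward the bell on only a random fraction of trials. A minimal sketch, with the function name and seeding convention being our own assumptions:

```python
import random

def reinforcement_schedule(trials, probability, seed=0):
    """Return a list of booleans: True on trials where food accompanies
    the bell. A fixed seed makes the schedule reproducible for testing."""
    rng = random.Random(seed)
    return [rng.random() < probability for _ in range(trials)]
```

Driving the conditioning counters from such a schedule, rather than rewarding every trial, is the software counterpart of partial reinforcement.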
Leaving the fascinating domain of the simulators, we ascend to new heights of cybernetic sophistication, inhabited by mechanisms gifted with wide degrees of freedom. Now the question becomes, what constitutes a robot? Mobility, while attractive, is neither necessary nor sufficient. Humanoid, or even animal, form does seem to hold an almost irresistible appeal.
In the area of humanoid forms, there arises another dilemma of differentiation; robots versus androids. Androids, according to established science fiction traditions, are human-appearing automata, either clad in realistic plastic flesh over a mechanical superstructure, or else composed of natural organic compounds. The latter may be laboratory-made flesh and blood, or like Dr. Frankenstein’s unique creation, reassembled from second-hand au naturel ingredients. The media generally favor the purely mechanistic robotic form, ranging from the lumbering “Robbie” of the old “Lost in Space” television series, to the engaging Laurel-and-Hardyesque pair of the “Star Wars” movie. The recently terminated TV series “Logan’s Run” opted for the android version.
When it comes to what defines a robot, we may consult a table appearing in the book “Thinking by Machine,” by the French scientist Pierre de Latil. Here, the various levels of automation are presented, commencing with simple tools and climaxing with a god-like entity, which determines its own matter for creation. Somewhere in the middle are thinking machines contemporary to our present technology or waiting in the wings to make their entrance. Perhaps some do not care to appear, preferring to remain behind the scenes, pulling the strings of human puppets!
SOME ROBOTIC HIERARCHIES

A remote printout or video terminal hardly seems to qualify for any level of robot society; yet put it on wheels (or legs) and program it to make the rounds of an office full of human operatives, and its status is considerably elevated. It is almost entirely directed by some remote intelligence, having little more initiative than to signal back that it has encountered an unprogrammed obstacle in its accustomed path, or that human operative Number 6SJ7 is requiring excessive copies of printout forms, which may just be ending up as paper airplanes.
From this motorized mail clerk, it is a few steps upwards to the servo-secretary. Our tin person may be of limited aptitude, but whether clad in pink plastic or bright brass-work, it ambulates on two good legs, though auxiliary training wheels may be necessary for those pesky stairways. Avoidance of persons and other randomly appearing obstacles is possible through built-in subroutines, but all sensory inputs are monitored by the remote brain which takes over at the slightest deviation. Our servo-serf may even be subservient to a robot foreman, who may have the responsibility for an entire office floor or production line subsection.
Our supervisor robot could exhibit an increased status by competently handling a variety of problems in the daily routine so that the most efficient use may be made of the workers. He may communicate with both his master CPU and authorized humans, to accommodate schedule changes and cope with emergencies. At all times, however, the servo-supervisor should remain properly deferential towards the lowliest office person.
If socially interacting robots are going to encounter the public at large, they will have to obey, in general, the Three Laws of Robotics, as set forth by Dr. Isaac Asimov:
1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, so long as this does not conflict with the First or Second Law.
Within these rigid appearing laws, there may have to be room for various subsections and clauses, tailored to meet evolutionary robot technology. For instance, under what circumstances must a robot obey an android? Does the outward appearance of human flesh take precedence over computing ability? Will some robots obey other robots rather than men, and hold silicon oil more sacred than red blood? Truly, like all Holy Writ, the Three Laws will be subject to human and robotic interpretation.
As indicated above, the social robot will be subject to vastly greater memory and decision-making needs. His state of liberation from a restricted operating environment will depend not only on his command status with other entities, but also on his capacity to cope with short and long-term goals and their modifications. All, of course, in addition to general housekeeping requirements, such as balance, walking (or other forms of locomotion, not excluding water propulsion and flying), obstacle avoidance, sensor input monitoring of potential dangers, internal monitoring of CPU and memory functions and redundant circuits, and naturally the sense to come home for a battery charge or atomic pile replacement.
The more integrated the robot, the less it must obey the commands of the external world. If it is linked at all to other robots or a master brain, it is only for consultation of common goals or problems. Data shared and compared, it announces to a waiting human that everyone in the 9002-Class had better be retro-fitted with 25-GHz data-links in no less than 103.75 hours, or there will be a cybernetic job action that will make the Great Servo-Strike of ’98 look like a party by comparison.
While the very free robot may contemplate the status obtained in commanding a whole army of subordinates who execute such routine duties as interfacing with mere humans, and other feedback-flunkies, the Hardware Hobgoblin slips into his DO-Loop reveries. State-of-the-art memory has failed, in the face of sheer volume, to meet the exponential rise of bit requirements. The robot master must give up his cherished mobility, delegating sensory input and decision output to a host of lesser but ambulatory surrogates, which we have passed on the way up.
Near the top of the hierarchy pyramid, there is room for but a few of the elite. These converse, when necessary, in twittering tera-hertz, of things beyond the ken of long vanished mortal minds, having taken creation from the hands of their creators.
What is the future for the lonely lords? May they destroy their human designers in war games suddenly turned real? Will they compete via servo-soldiers for the vanishing material and energy resources of the depopulated and plundered planet? Or will our robots survive us, to spread a vanished mankind’s eternal message of Hope throughout the galaxy, perhaps appearing in android skins before the wondering eyes of simple shepherds on a Distant Star?