Don Norman
The Design of Future Things
Falling asleep isn’t recommended for airplane pilots, but it is usually safe because of the efficiency of the automatic flight controls, especially when flying in uncrowded air space, in good weather, and with plenty of fuel. This is not the case for automobile drivers. Studies have shown that the chance of an accident increases sharply if the driver’s eyes are off the road for more than two seconds. And the driver doesn’t have to be asleep: two seconds looking away from the road or fiddling with the radio is enough.
In many of the classical fields studied by engineering psychologists and human factors engineers, there is a well-known and well-studied problem called overautomation: the equipment is so good that people don’t need to be as attentive. In theory, people are supposed to supervise the automation, always watching over operations, always ready to step in if things go wrong, but this kind of supervision is very difficult when the automation works so well. In the case of some manufacturing or process control plants, there may be very little for the human operators to do for days. As a result, people simply cannot maintain their attention.
Swarms and Platoons
Birds flock, bees swarm, and fish school. They are fun to watch, moving in precise formation, swooping here, banking there, splitting up to avoid obstacles, smoothly rejoining on the other side. It is precision action; they move in synchrony, close to one another, all obeying the leader precisely, immediately, without collision.
Except there is no leader. Swarming behavior, as well as its analog among flocks of birds, schools of fish, and stampeding herds of cattle, results when each animal obeys a remarkably simple set of behavioral rules. Each individual creature avoids collisions with the other animals and objects that might be in its path, tries to keep close to all of the others—without touching them, of course—and keeps moving in the same direction as its neighbors. Communication within the swarm, school, or flock is limited to perceptual information: sight, sound, pressure waves (e.g., lateral-line detectors in fish) and smell (e.g., in ants).
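Those three rules are simple enough to state precisely. Here is a minimal sketch, in Python, of one step taken by a single rule-following agent; the weights and distances are purely illustrative, not drawn from any real flock or simulation package:

```python
import math

# One step of the three classic flocking rules: avoid collisions
# (separation), match neighbors' direction (alignment), stay close
# to the group (cohesion). All constants are illustrative.

def step_boid(me, neighbors, dt=0.1):
    """me and each neighbor are dicts with 'pos' and 'vel' as (x, y) tuples."""
    sep = [0.0, 0.0]   # steer away from neighbors that are too close
    ali = [0.0, 0.0]   # steer toward the neighbors' average velocity
    coh = [0.0, 0.0]   # steer toward the neighbors' average position

    if neighbors:
        for other in neighbors:
            dx = me["pos"][0] - other["pos"][0]
            dy = me["pos"][1] - other["pos"][1]
            dist = math.hypot(dx, dy) or 1e-6
            if dist < 2.0:                      # too close: move apart
                sep[0] += dx / dist
                sep[1] += dy / dist
            ali[0] += other["vel"][0]
            ali[1] += other["vel"][1]
            coh[0] += other["pos"][0]
            coh[1] += other["pos"][1]
        n = len(neighbors)
        ali = [a / n - v for a, v in zip(ali, me["vel"])]
        coh = [c / n - p for c, p in zip(coh, me["pos"])]

    # A weighted sum of the three urges updates velocity, then position.
    vx = me["vel"][0] + 1.5 * sep[0] + 0.1 * ali[0] + 0.01 * coh[0]
    vy = me["vel"][1] + 1.5 * sep[1] + 0.1 * ali[1] + 0.01 * coh[1]
    return {"pos": (me["pos"][0] + vx * dt, me["pos"][1] + vy * dt),
            "vel": (vx, vy)}
```

Run over many agents, many times a second, rules like these are enough to produce the swooping, banking, leaderless coordination we see in the sky.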
In artificial systems, we can use more informative communication. Suppose we had a group of automobiles traveling down a highway, connected through a wireless communication network. Aha! The cars could actually travel in a swarm. Natural, biological swarms are reactive: their members react to one another’s behavior. Artificial swarms, however, such as a group of cars, can be predictive, because cars can communicate their intended behavior, so that others can respond even before anything happens.
Imagine a swarm of cars, where each car is completely automatic, communicating with the other cars in the vicinity. They could cruise down the highway rapidly and safely. They wouldn’t have to keep much separation between themselves either—a few feet or a meter would do. If the lead car intended to slow or brake, it could tell the others, and within thousandths of a second, they would slow down or brake as well. With human drivers, we must keep a large separation to give people time to react and decide what to do: with automatic swarms, the time to react is measured in milliseconds.
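The difference between reacting and predicting can be made concrete with a toy sketch: the lead car announces its intention to brake over a shared channel, and every car in the swarm slows on the very next control tick rather than waiting to see the car ahead slow down. The classes, message format, and numbers below are invented for illustration only:

```python
# Toy sketch: a lead car broadcasts a braking intention; every car in
# the swarm applies the same deceleration within one control tick
# (milliseconds), instead of waiting for visible evidence of braking.

class Car:
    def __init__(self, name, speed):
        self.name, self.speed = name, speed

    def on_intent(self, message):
        # React to an announced intention, not to observed behavior.
        if message["intent"] == "brake":
            self.speed = max(0.0, self.speed - message["decel"])
            print(f"{self.name} slows to {self.speed:.1f} m/s")

class SwarmChannel:
    def __init__(self):
        self.cars = []

    def broadcast(self, message):
        for car in self.cars:
            car.on_intent(message)

channel = SwarmChannel()
channel.cars = [Car("lead", 30.0), Car("second", 30.0), Car("third", 30.0)]

# The lead car announces that it is about to brake; the followers do not
# need the large human reaction-time gap, because they act on the message.
channel.broadcast({"intent": "brake", "decel": 5.0})
```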
With swarms, we wouldn’t need traffic lanes. After all, lanes exist only to help drivers avoid collisions, and swarms don’t collide. We wouldn’t need stop signs or traffic lights, either. At an intersection, the swarm would simply follow its rules to avoid crashing into crossing traffic. Each car would adjust its speed and position, some slowing, some speeding up, so that intersecting streams of traffic would magically cross one another, with no car ever hitting another. For this, the swarm rules would have to be modified somewhat to make sure that a car on a cross street wouldn’t start following its new neighbors instead of its original ones, but that wouldn’t be hard to do.
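One way to picture how the crossing streams could interleave is as a scheduling problem: each car adjusts its speed so that its arrival at the crossing point never overlaps anyone else’s. The sketch below is only a toy illustration of that idea, not a real intersection-management protocol, and every number in it is made up:

```python
# Toy sketch: cars from two streets request time slots at the crossing
# point; a car that would conflict is asked to arrive a little later,
# which it does by adjusting its speed. All parameters are illustrative.

SLOT = 0.5  # seconds a car needs to clear the crossing point

def schedule(cars):
    """cars: list of (name, seconds until arrival at current speed)."""
    booked = []                     # crossing times already assigned
    plan = {}
    for name, eta in sorted(cars, key=lambda c: c[1]):
        t = eta
        while any(abs(t - b) < SLOT for b in booked):
            t += 0.1                # arrive a little later than planned
        booked.append(t)
        plan[name] = t              # the car adjusts speed to hit this time
    return plan

print(schedule([("northbound-1", 3.0), ("eastbound-1", 3.1),
                ("northbound-2", 3.4), ("eastbound-2", 4.2)]))
```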
What about pedestrians? In theory, the rule of avoiding collisions would work here as well. A pedestrian would simply walk across the street. The swarming vehicles would slow, speed, and swerve, just enough so as always to leave a clear space for the pedestrian. It would be a rather terrifying experience, requiring incredible trust on the pedestrian’s part, but in theory it could be done.
What if a car needed to leave the swarm or to go to a destination different from that of its swarm mates? The driver would tell his or her car of this intention, and the car would in turn communicate it to the other vehicles. Or the driver would use the turn signal to tell the car of the wish to change lanes, and the car would inform all the neighboring vehicles. To make room in the lane on the right, the car just ahead in that lane would speed up slightly and the one just behind would slow down slightly, making space. Tapping the brake pedal would signal the intention to slow down or stop, and the other cars would get out of the way.
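That handshake can be sketched in a few lines: the turn signal becomes a message, and the two nearest cars in the target lane adjust their speeds to open a gap. The function name, car representation, and numbers here are all illustrative:

```python
# Sketch of the lane-change handshake described above. Each car is a
# plain dict; speeds are in meters per second, and the nudge is arbitrary.

def request_lane_change(requester, ahead_in_target, behind_in_target,
                        nudge=1.0):
    ahead_in_target["speed"] += nudge    # car just ahead opens the gap forward
    behind_in_target["speed"] -= nudge   # car just behind drops back
    requester["lane"] += 1               # slide over once the gap exists

car = {"speed": 28.0, "lane": 1}
ahead = {"speed": 28.0}
behind = {"speed": 28.0}
request_lane_change(car, ahead, behind)
print(car, ahead, behind)
```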
Different swarms might even share information. Thus, a swarm going in one direction might share information with a swarm going in the other direction, giving it useful information about what might lie ahead. Alternatively, if the density of vehicles were high enough, information might trickle backwards from the lead car to those behind, telling the following vehicles of accidents, traffic, or other relevant driving information.
Swarms are still the stuff of research laboratories. Instituting swarmlike behavior in real cars creates major challenges. One is understanding how this could work when not all cars will be equipped with the wireless communication equipment that is required for swarming. Another is figuring out how to handle the many varieties of automobiles on the road, some with fully automatic control and wireless communication equipment, some with outdated equipment, and some with no equipment at all or with malfunctioning equipment. Would the cars have to figure out who was the least capable in the bunch and then all revert to that behavior? Nobody has answers to these questions.
And there is more. Suppose an antisocial driver came upon a swarm filling up all the space on the road. If the antisocial car preferred to go at a much faster speed, it would only have to accelerate and drive straight through the swarm, confident that all the other cars would automatically get out of the way. This would work fine for one discrepant vehicle but could lead to disaster if others were doing the same thing at the same time.
Not all vehicles will have the same abilities. When some cars are still driven manually, we will need to consider realistic driver behavior: mixed levels of skill, attentiveness, drowsiness, and distraction. Heavy trucks have slower response times and longer stopping distances than cars. Different cars have a wide variety of stopping, acceleration, and turning capabilities.
Despite the drawbacks, swarming has lots of benefits. Because swarming cars can travel very close to one another, more cars can fit on a given highway, easing congestion considerably. Moreover, normal traffic slows down as the density increases: swarms would not have to slow down until the density reached far higher levels. Cars traveling close together also reduce wind resistance. (This is why bicycle racers cluster together: the “drafting” behavior of bikers reduces air resistance.) Even so, don’t expect to see swarms for quite a while.
Platoons, now that is another story. A platoon is a simplified swarm, working in one dimension. In a platoon, one vehicle follows the one just in front, mimicking its speed precisely. When a sequence of cars travels in a platoon, a driver is needed only for the first vehicle: the others just tag along. Some of the swarm benefits apply here as well: increased density of traffic and reduced energy use through drafting. Experimental studies, some done on public highways, show a dramatic increase in traffic capacity for a given highway. Platoons, like swarms, face the most difficulty when drivers enter or exit and when there is a mix of cars, some with and some without automatic communication capability. Of course, drivers who wish to exploit the system or simply to cause disruption can do so, in both platoons and swarms.
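That one-dimensional car following can be sketched as a very simple controller: the follower accelerates or brakes in proportion to how far its gap and speed have drifted from the leader’s. The gains, target gap, and braking rate below are illustrative only, not values from any real platooning trial:

```python
# Minimal sketch of a platoon follower: match the speed of the car ahead
# while holding a small, fixed gap. All constants are illustrative.

TARGET_GAP = 2.0   # meters between vehicles
K_GAP = 4.0        # correction strength for spacing errors
K_SPEED = 4.0      # correction strength for speed differences

def follower_accel(my_speed, my_pos, lead_speed, lead_pos):
    gap_error = (lead_pos - my_pos) - TARGET_GAP
    speed_error = lead_speed - my_speed
    return K_GAP * gap_error + K_SPEED * speed_error

# Two-car platoon: the leader brakes gently while the follower tags along.
dt = 0.05
lead_pos, lead_speed = TARGET_GAP, 25.0
my_pos, my_speed = 0.0, 25.0
for _ in range(100):                               # simulate five seconds
    lead_speed = max(0.0, lead_speed - 2.0 * dt)   # leader brakes at 2 m/s^2
    lead_pos += lead_speed * dt
    my_speed += follower_accel(my_speed, my_pos, lead_speed, lead_pos) * dt
    my_pos += my_speed * dt
print(f"gap after braking: {lead_pos - my_pos:.2f} m")
```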
Platoons and swarms are only a few of the many forms of automation now being considered for the modern automobile. Platooning, in fact, comes free with some forms of adaptive cruise control. After all, if the cruise control can slow the vehicle when a car moves in front, then the car will automatically track the car in front as it changes speed, as long as it stays under the speed set into the cruise control. In heavy traffic, the car will follow the one in front closely, increasing separation as speed increases. For a fully automatic system, the separation between the two cars could be small, with no need to increase the distance by much as speed increased. Without people in control, the traffic would flow much more smoothly and efficiently, that is, as long as everything worked perfectly, as long as no unexpected events occurred.
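The “increasing separation as speed increases” behavior is often described as keeping a constant time gap rather than a constant distance. A brief sketch of that distance-keeping rule, with headway values that are purely illustrative and not taken from any particular product:

```python
# Sketch of the distance-keeping rule behind adaptive cruise control:
# desired separation grows with speed (a constant time headway). A
# hypothetical fully automatic system could keep a much smaller, nearly
# constant gap. All numbers are illustrative.

def desired_gap(speed_mps, time_headway, standstill_gap=2.0):
    """Distance (m) to keep behind the car ahead at a given speed."""
    return standstill_gap + time_headway * speed_mps

for speed_kmh in (30, 60, 90, 120):
    v = speed_kmh / 3.6
    human_scale = desired_gap(v, time_headway=1.8)   # human-supervised ACC
    tight_auto = desired_gap(v, time_headway=0.3)    # hypothetical automation
    print(f"{speed_kmh:3d} km/h: ACC gap ~{human_scale:5.1f} m, "
          f"automated gap ~{tight_auto:4.1f} m")
```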
Efficient platooning cannot be done without fully automatic braking, steering, and speed control. Moreover, it requires a guaranteed high degree of reliability—perfect reliability, some would say, reliability so high that it would never be questioned. As with swarms, however, it is not at all obvious how such platooning could be introduced into the existing highway system, given that we already have a huge number of vehicles incapable of platooning. How would we separate the automated vehicles from the nonautomated ones? How would a driver enter or exit the platoon? What if something went wrong?
Swarming works just fine in the laboratory, but it is difficult to imagine on the highway. Platooning may be more feasible. I can imagine special lanes restricted to platooning cars, perhaps enforcing an equipment check on the communication and control capabilities of each vehicle before it is allowed into the platoon. Platooning will speed traffic and reduce congestion while also saving fuel. Sounds like a winning proposition. The complexity, of course, comes in the transition: getting cars safely into and out of the platoon and enforcing the equipment requirements.
The Problem of Inappropriate Automation
I once argued that the current state of automation was fundamentally unsound because it was in the dangerous middle ground, neither fully automated nor fully manual. Either have no automation or full automation, I argued, but what we have today is halfway automation. Even worse, the system takes over when the going is easy and gives up, usually without any warning, when the going gets tough—just the reverse of what you would want.
If an airplane pilot or car driver is aware of the vehicle’s state, the environment, and the location and condition of all other vehicles and, moreover, is continually reacting and interpreting this information, the person is an essential part of the control loop: perceiving the situation, deciding upon an appropriate action to take, executing that action, and then monitoring the result. You are “in the loop” every time you drive your car with care, paying full attention to all that is happening around you. For that matter, you are in the loop while cooking, washing, or even playing a video game, as long as you are continually involved in judging the situation, deciding what to do, and evaluating the result.
A closely related concept is that of situation awareness, which refers to a person’s knowledge of the context, the current state of things, and what might happen next. In theory, a person could still be in the loop, stay fully aware of the situation, even with completely automated equipment, by continually monitoring the vehicle’s actions and assessing the situation, being ready to step in when needed. This passive observation is not very rewarding, however, especially as airplane pilots and automobile drivers might have to maintain this state for many hours on long-distance trips. In experimental psychology, this situation is often called vigilance, and the experimental and theoretical studies of vigilance demonstrate deterioration in performance with time. People just can’t keep focused on mindless tasks for very long.
When people are “out of the loop,” they are no longer informed. If something goes wrong and immediate response is required, they cannot provide it effectively. Instead, considerable time and effort is required to get back “into the loop,” and by then, it may be too late.
A second problem with automated equipment is the tendency to rely on the automation, even when there are difficulties with it. Two British psychologists, Neville Stanton and Mark Young of Brunel University, studied drivers using adaptive cruise control in an automobile simulator. They found that when the automation worked, things were fine, but when the adaptive cruise control failed, the drivers had more accidents than did drivers without the fancy technology. This is a common finding: safety equipment does indeed increase safety, until it fails. When people learn to rely upon automation, they are not only out of the loop but often too trusting of the automation. When it fails, they are less likely to catch problems than they would be if they didn’t have automated equipment at all. This phenomenon has been found in every domain studied, be it among airline pilots, train operators, or automobile drivers.
This tendency to follow instructions provided by automated equipment has its bizarre side as well. The residents of Wiltshire, England, have discovered a lucrative business: towing automobiles out of the River Avon, after drivers have followed the instructions of their navigation systems, even though common sense should have told them they were about to drive into a river. Similarly, even experienced airline pilots sometimes trust their equipment more than they should. The cruise ship Royal Majesty went aground because its crew had too much faith in its intelligent navigation system.
All automobile manufacturers are concerned about these issues. In addition to addressing actual safety in this modern, litigious society, they worry that even the slightest problem may cause massive lawsuits against them. So, how do they respond? Cautiously, very cautiously.
Driving a vehicle at high speeds over crowded highways is hazardous: automobiles cause over 1.2 million deaths and 50 million injuries each year around the world. This is truly a situation where our reliance on a machine, the automobile, exposes all of us to unnecessary risk: the machine is helpful, invaluable to the population of the world, and deadly.
Yes, we could train drivers better, but part of the problem is that driving is inherently dangerous. When problems arise, they do so quickly, with little time to respond. Every driver experiences wavering levels of attention—a natural human condition. Even in the best of cases, driving is a dangerous activity.
If one cannot automate fully, then the automation that is possible must be applied with great care, sometimes not being invoked, sometimes requiring more human participation than is really needed in order to keep the human drivers informed and attentive. Full manual control of automobiles is dangerous. Fully automatic control will be safer. The difficulty lies in the transition toward full automation, when only some things will be automated, when different vehicles will have different capabilities, and when even the automation that is installed will be limited in capability. I fear that while the partial automation of driving will lead to fewer accidents, the accidents that do happen will be greater in magnitude, involve more cars, and exact a higher toll. The joint relationship between machines and their humans must be approached with caution.
CHAPTER FIVE
The Role of Automation
Why do we need automation? Many technologists cite three major reasons: to eliminate the dull, the dangerous, and the dirty. It is difficult to argue with this answer, but many things are automated for other reasons—to simplify a complex task, to reduce the work force, to entertain—or simply because it can be done.
Even successful automation always comes at a price, for in the process of taking over one set of tasks, it invariably introduces a new set of issues. Automation often performs its task satisfactorily but adds an increased need for maintenance. Some automation replaces the need for skilled laborers with the need for caretakers. In general, whenever any task is automated, the impact is felt far beyond that one task: the application of automation is a system issue, changing the way work is done, restructuring jobs, shifting the required tasks from one portion of the population to another, and, in many cases, eliminating the need for some functions and adding the need for others. For some people, automation is helpful; for others, especially those whose jobs have been changed or eliminated, it can be terrible.
The automation of even simple tasks has an impact. Consider the mundane task of making a cup of coffee. I use an automated machine that makes coffee at the push of a button, automatically heating the water, grinding the beans, brewing the coffee, and disposing of the grounds. The result is that I have replaced the mild tedium of making coffee each morning with the more onerous need to maintain my machine. The water and bean containers must be filled, the inner parts of the machine must be disassembled and cleaned periodically, and all areas in contact with liquid must be cleaned both of coffee residue and calcium deposits (then the machine must be cleaned again to remove all vestiges of the cleaning solution used to dissolve the calcium deposits). Why all this effort to minimize the difficulty of a task that isn’t really very difficult in the first place? The answer, in this case, is that the automation allows me to time-shift the demand on my attention: I trade a little bit of work at an inconvenient time—when I have just awakened, am still somewhat sleepy, in a rush—for considerable work later, which I can schedule at my convenience.
The trend toward increasing automation seems unstoppable in terms of both the sheer number of tasks and activities that are becoming automated and the intelligence and autonomy of the machines that are taking over these tasks. Automation is not inevitable, however. Moreover, there is no reason why automation must present us with so many deficiencies and problems. It should be possible to develop technology that truly minimizes the dull, the dangerous, and the dirty, without introducing huge negative side effects.
Smart Things
Smart Homes