The Denver airport serves as an example of an automation attempt that seemed straightforward on paper but was made before the technology was up to the task. People’s baggage comes in a wide assortment of shapes. It is placed arbitrarily on the luggage system’s conveyor belts, then must be shuffled to many different connecting flights or baggage-delivery systems. The destination is marked by bar codes on the luggage tags, but those tags are bent, folded, and mutilated and are often hidden beneath handles, straps, or other luggage. This task has far too many irregularities and unknowns to be handled by today’s systems.
Note that the airport shuttle trains that convey passengers between terminals at this same airport are completely automated and work smoothly and efficiently. The difference lies in the environment and the task, not the intelligence of the machines. With luggage, every piece is unique. With an airport shuttle, the route is completely fixed and predetermined. The train runs on tracks, so no steering is required. It can only move along the track, so the system only has to determine when it should move and how quickly. Simple sensors suffice to determine if anyone is standing in a doorway. When conditions are stable and the task well understood, when no mechanical agility is required and the unexpected happens infrequently, then automation can indeed take over. In these cases, automation works smoothly and efficiently and benefits everyone.
Shoshana Zuboff, a social psychologist at the Harvard Business School, has analyzed the impact of automation on a factory floor. The automatic equipment completely changed the social structure of the workers. On the one hand, it removed the operators from directly experiencing the production process. Whereas before they had felt the machines, smelled the fumes, and heard the sounds so that they could tell through their perceptions just how the procedure was going, now they were located in air-conditioned, sound-deadened control rooms, trying to imagine the state of affairs through dials, meters, and other indicators provided by the instrumentation. Although this change did speed up the process and increase uniformity, it also isolated the workers from the work and prevented the factory from making use of their years of experience in anticipating and correcting problems.
On the other hand, the use of computerized control equipment empowered the workers. Before, they were only given limited knowledge of the plant’s operation and how their activities affected the performance of the company. Now, the computers helped keep them informed about the entire state of the plant, allowing them to understand the larger context to which their activities contributed. As a result, they could interact with middle and higher management on their own terms, by combining their knowledge of shop-floor operations with the information gleaned from their automation. Zuboff coined the term informate to describe the impact of the increased access to information afforded by automation to the workers: the workers were informated.
The Future of Design: Smart Things That Augment
People have many unique capabilities that cannot be replicated in machines, at least not yet. As we introduce automation and intelligence into the machines we use today, we need to be humble and recognize the problems and the potential for failure. We also need to recognize the vast discrepancy between the workings of people and of machines. On the whole, these responsive systems are valuable and helpful. But they can fail when they come across the fundamental limitations of human-machine interaction, most especially the lack of common ground that was discussed so extensively in chapter 2.
Autonomous, intelligent devices have proven invaluable in situations that are too dangerous for people; occasional failures are still far better than the risk to human life. Similarly, many intelligent devices have taken over the dull, routine tasks of maintaining our infrastructure, continually adjusting operating parameters and checking conditions in situations that are simply too tedious for people.
Augmentative technology has proven its worth. The recommender systems of many internet shopping sites provide us with sensible suggestions, but because they are optional, they do not disrupt us. Their occasional successes suffice to keep us content with their operation. Similarly, the augmentative technologies now being tested in smart homes, some described in this chapter, provide useful aids to everyday problems. Once again, their voluntary, augmentative status makes them palatable.
The future of design clearly lies in the development of smart devices that drive cars for us, make our meals, monitor our health, clean our floors, and tell us what to eat and when to exercise. Despite the vast differences between people and machines, if the task can be well specified, if the environmental conditions are reasonably well controlled, and if the machines and people can limit their interactions to the bare minimum, then intelligent, autonomous systems are valuable. The challenge is to add intelligent devices to our lives in a way that supports our activities, complements our skills, and adds to our pleasure, convenience, and accomplishments, but not to our stress.
CHAPTER SIX
Communicating with Our Machines
The whistling of the kettle and the sizzling of food cooking on the stove are reminders of an older era when everything was visible and everything made sounds, allowing us to create mental models, conceptual models, of their operations. These models provided us with clues to help us troubleshoot when things did not go as planned, to know what to expect next, and to experiment.
Mechanical devices tend to be self-explaining. Their moving parts are visible and can be watched or manipulated. They make natural sounds that help us understand what is happening, so that even when we are not watching the machinery, we can often infer its state just from these sounds. Today, however, many of these powerful indicators are hidden from sight and sound, taken over by silent, invisible electronics. As a result, many devices operate silently and efficiently; aside from the occasional clicking of a hard drive or the noise of a fan, they reveal little of their internal operations. We are left at the mercy of the designers for any information about what is happening within the device.
Communication, explanation, and understanding: these are the keys to working with intelligent agents, whether they are other people, animals, or machines. Teamwork requires coordination and communication, plus a good sense of what to expect, a good understanding of why things are, or are not, happening. This is true whether the team is composed of people, a skilled rider and horse, a driver and automobile, or a person and automated equipment. With animate beings, the communication is part of our biological heritage. We signal our emotional state through body language, posture, and facial expressions. We use language. Animals use body language and posture, as well as facial expressions. We can read the state of our pets through the way they hold their bodies, their tails, and their ears. A skilled rider can feel the horse’s state of tension or relaxation.
Machines, though, are artificially created by people who often assume perfect performance on their part and, moreover, fail to understand the critical importance of a continuing dialogue between cooperating entities. If the machine is working perfectly, they tend to believe, why does anyone have to know what is happening? Why? Let me tell you a story.
I am seated in the fancy auditorium of IBM’s Almaden Research Laboratories, situated in the beautiful, rolling hills just south of San Jose, California. The speaker at this conference, a professor of computer science at MIT—let me call him “Prof. M”—is extolling the virtues of his new program. After describing his work, Prof. M proudly starts to demonstrate it. First, he brings up a web page on the screen. Then, he does some magic with his mouse and keyboard, and after a few clicks and a little typing here and there, a new button appears on the page. “Ordinary people,” explains the professor, “can add new controls to their web pages.” (He never explains why anyone would want to.) “Now, watch as I show you that it works,” he proudly announces. He clicks and we watch. And wait. And watch. Nothing happens.
Prof. M is puzzled. Should he restart the program? Restart the computer?
The audience, filled with Silicon Valley’s finest technocrats, shouts advice. IBM research scientists scurry back and forth, peering at his computer, getting down on hands and knees to follow the wiring. The seconds stretch into minutes. The audience starts to giggle.
Prof. M was so enamored of his technology that he never considered what would happen if it failed. It hadn’t occurred to him to provide feedback for reassurance that things were working—or in this case, to provide clues when things didn’t work. Later on, we discovered that the program was actually working perfectly, but there was no way of knowing this. The problem was that the security controls on IBM’s internal network were not letting him gain access to the internet. Without feedback, however, without reassurance about the state of the program, nobody could tell just where the problem lay. The program lacked simple feedback to indicate that the click on the button had been detected, that the program was carrying out several steps of its internal instructions, that it had initiated an internet search, and that it was still waiting for the results to come back from that search.
Without feedback it wasn’t possible to create the appropriate conceptual model. Any one of a dozen things could have failed: without evidence, there was no way to know. Prof. M had violated a fundamental design rule: provide continual awareness, without annoyance.
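Here is a minimal sketch, in TypeScript, of the kind of stage-by-stage feedback the program lacked. The button and status elements and the search URL are hypothetical names invented for illustration; the point is that each step announces itself, so a stalled step becomes visible instead of leaving the audience watching and waiting.

```ts
// A sketch of continual, unobtrusive feedback for a button that triggers
// an internet search. "demo-button", "status-line", and the URL are
// hypothetical placeholders, not part of Prof. M's actual program.
const button = document.getElementById("demo-button") as HTMLButtonElement;
const statusLine = document.getElementById("status-line") as HTMLElement;

function report(message: string): void {
  statusLine.textContent = message; // quiet, peripheral status display
}

button.addEventListener("click", async () => {
  report("Click detected; preparing search...");
  try {
    report("Searching the internet; waiting for results...");
    // Give up after ten seconds, so a blocked network (as on IBM's
    // internal network) produces a visible message instead of silence.
    const response = await fetch("https://example.com/search", {
      signal: AbortSignal.timeout(10_000),
    });
    report(`Results received (HTTP ${response.status}).`);
  } catch {
    report("No response. The network may be blocking the request.");
  }
});
```

With even this much feedback, everyone in the auditorium would have known within seconds that the click had registered, the search had gone out, and the network, not the program, was at fault.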
Feedback
“I’m at a meeting in Viña del Mar, Chile,” starts an email from a colleague, “at a nice new Sheraton Hotel perched on the seawall. A lot of design effort went into it, including the elevators. A bank of them with up-down buttons at either end. The doors are glass and slide silently open and closed, with no sound to signal arrival or departure. With the typical ambient noise, you can’t hear them, and unless standing close to an arriving elevator, can hardly see it move and can’t tell when one is open. The only sign that an elevator is present is that the up-down signal light goes out—but you can’t see that from the center of the elevator bank either. In my first day here, I missed elevators that came and went three times.”
Feedback provides informative clues about what is happening, clues about what we should do. Without it, many simple operations fail, even one as simple as getting into an elevator. Proper feedback can make the difference between a pleasurable, successful system and one that frustrates and confuses. If the inappropriate use of feedback is frustrating with simple devices such as elevators, what will it be like with the completely automatic, autonomous devices of our future?
When we interact with people, we often form mental models of their internal thoughts, beliefs, and emotional states. We like to believe we know what they are thinking. Recall how frustrating it is to interact with people who show no facial expressions and give no verbal responses. Are they even listening? Do they understand? Agree? Disagree? The interaction is strained and unpleasant. Without feedback, we can’t operate, whether it is with an elevator, a person, or a smart machine.
Actually, feedback is probably even more essential when we interact with our machines than with other people. We need to know what is happening, what the machine has detected, what its state is, what actions it is about to do. Even when everything is working smoothly, we need reassurance that this is the case.
This applies to everyday things such as home appliances. How do we know they are working well? Fortunately, many appliances make noises: the hum of the refrigerator; the sounds of the dishwasher, clothes washer, and dryer; and the whir of the fan for home heating and cooling systems all provide useful, reassuring knowledge that the systems are on and operating. The home computer has fans, and the hard drive makes clicking noises when active, once again providing some reassurance. Notice that all these sounds are natural: they were not added artificially to the system by a designer or engineer but are natural side effects of the working of physical devices. This very naturalness is what makes them so effective: differences in operation are often reflected in subtle differences in the sounds, so not only is it possible to tell that something is operating, but usually one can also tell what operation is being done and whether the sounds are normal or signify possible problems.
Newer systems have tried to reduce noise for good reason: the background level of noise in our homes and offices is disturbing. Yet, when systems make no sounds at all, it isn’t possible to know whether they are working. As the elevator story in the opening quotation shows, sound can be informative. Quiet is good; silence may not be.
If sound is intrusive and annoying, even as a feedback mechanism, why not use lights? One problem is that a light, all by itself, is just as meaningless as the beeps that seem to erupt continually from my appliances. Natural sounds are created by the internal operation of the systems, whereas added-on lights and beeps are artificial, signifying whatever arbitrary information the designer thought appropriate. Added-on lights almost always signify only some simple binary state: working or not, trouble or not, plugged in or not. There is no way for a person to know their meaning without recourse to a manual. There is no richness of interpretation, no subtlety: the light or beep means that maybe things are good, or maybe bad, and all too often the person has to guess which.
Every piece of equipment has its own code for beeps, its own code for lights. A small red light visible on an appliance could mean that electric power is being applied, even though the appliance is off. Or it could mean that the unit is turned on, that it is working properly. Then again, red could signal that it is having trouble, and green could mean it is working properly. Some lights blink and flash; some change color. Different devices can use the same signals to indicate quite different things. Feedback is meaningless if it does not precisely convey a message.
When things go wrong, or when we wish to change the usual operation for a special occasion, we need feedback to instruct us how to do it. Then, we need feedback for reassurance that our request is being performed as we wished, and some indication about what will happen next: does the system revert to its normal mode, or is it now forever in the special mode? So, feedback is important for the following reasons:
• reassurance
• progress reports and time estimates
• learning
• special circumstances
• confirmation
• governing expectations
Today, many automatic devices do provide minimal feedback, but much of the time it comes through bleeps and burps, ring tones and flashing lights. This feedback is more annoying than informative, and even when it does inform, it provides partial information at best. In commercial settings, such as manufacturing plants, electric generating plants, hospital operating rooms, or aircraft cockpits, many different monitoring systems and pieces of equipment sound alarms when problems arise. The resulting cacophony can be so disturbing that the people involved may waste precious time turning all the alarms off so that they can concentrate on fixing the problems.
As we move toward an increasing number of intelligent, autonomous devices in our environment, we also need to transition toward a more supportive form of two-way interaction. People need information that facilitates discovery of the situation and that guides them in deciding how to respond or, for that matter, reassures them that no action is required. The interaction has to be continuous, yet nonintrusive, demanding little or no attention in most cases, requiring attention only when it is truly appropriate. Much of the time, especially when everything is working as planned, people only need to be kept in the loop, continually aware of the current state and of any possible problems ahead. Beeps won’t work. Neither will spoken language. It has to be effective, yet in the periphery, so that it won’t disturb other activities.
Who Deserves the Blame? The Technology or Ourselves?
In The Design of Everyday Things, I show that when people have difficulties with technology, invariably the technology or the design is at fault. “Don’t blame yourself,” I explain to my readers. “Blame the technology.” Usually this is correct, but not always. Sometimes it is better when people blame themselves for a failure. Why? Because if it is the fault of the design or the technology, one can do nothing except become frustrated and complain. If it is a person’s fault, perhaps the person can change and learn to work the technology. Where might this be true? Let me tell you about the Apple Newton.
In 1993, I left the comfortable life of academia and joined Apple Computer. My baptism into the world of commerce was rapid and rough—from being on the study team to determine whether and how AT&T should purchase Apple Computer, or at the very least form a joint venture, to watching over the launch of the Apple Newton. Both ventures failed, but the Newton failure was the more instructive case.
Ah, the Newton. A brilliant idea, introduced with great hoopla and panache. The Newton was the first sensible personal digital assistant. It was a small device by the standards of the time, portable, and controlled solely by handwriting on a touch-sensitive screen. The story of the Newton is complex—books have been written about it—but here let me talk about one of its most deadly sins: the Newton’s handwriting recognition system.
The Newton claimed to be able to interpret handwriting, transforming it into printed text. Great, except that this was back in 1993, and up to that time, there were no successful handwriting recognition systems. Handwriting recognition poses a very difficult technical challenge, and even today there are no completely successful systems. The Newton system was developed by a group of Russian scientists and programmers in a small company, Paragraph International. The system was technically sophisticated, but it flunked my rule of human-machine interaction, which is to be intelligible.