In this lecture you'll learn some of the key ingredients that make up good design. We'll look at what makes an interface easy, hard, or natural. Let's start with an everyday example, like this measuring cup. A measuring cup has a user interface, and there's a task: fill a liquid to the desired amount. How might we improve the design of something as simple as a measuring cup like this?

Here's an example from OXO, as related by Alex Lee at the Gel 2008 conference. OXO got a prototype of a potentially better measuring cup, something they hadn't necessarily been thinking about. So they went out into the field and watched people measure liquid. They found that people would pour some liquid in, get it to the level, check it out, pour some more, check it out, and again, pour some more, check it out again. But when they asked people for ideas about making the measuring cup better, nobody mentioned the difficulty of reading the level. People might say something about the handle, because users want to be nice and offer some feedback. They might say it was slippery if you got some oil on it. But about the measuring itself, nothing.

But here is what OXO came up with, and this is really cool. It's the same plastic measuring cup, but the difference is that when you want to measure, you can just look right down from the top. So with this one, if I'm filling it from the spout, I can fill it right up to the level, no problem at all. And you can see this design caught on: here's one from Pyrex that applies the same idea in glass, where you're able to look down and see directly what the units are. It's a nice example of thinking outside the box, and of the importance of getting behavioral measures from users, what they do, as opposed to just asking them and seeing what they say. These measuring cups also have another advantage: if you hold them right-handed, you see English units, and if you hold them left-handed, you see metric units.
So to summarize, this measuring cup redesign gives us a couple of important lessons. For starters, if we simply ask people what they want, we might miss important opportunities. Second, there's real value in going out into the field and gathering behavioral measures, as opposed to just asking people. The third is that when you go out into the field, bring along a prototype, because that will change the interaction. And the fourth is that the world is full of people who are tinkering in their garages, and when the stars align, we can bring together these diverse talents to offer a new, exciting user experience to the world.

With measuring cups, like all user interfaces, there are two fundamental questions that we want to ask, and I'd like to share with you a drawing that I've got from my colleague Bill. Bill points out that really we've got these two main questions. The first is that when I encounter a measuring cup, or new computer software, or anything else, I need to figure out: what is it that I can do? Then, after I do that (fill the liquid, try to print a document, retrieve something from the Internet), I need to figure out: what is it that happened? How is the interface teaching me? How do I know?

One of the great insights of cognitive science is that if you understand what's in people's heads, you can do a way better job of explaining their behavior. We're going beyond "well, the user's dumb" to "well, if they did that, then they must be thinking this." I'm used to driving a normal car that you start with a key that you put in the steering column. Recently, I got a rental car, got into the driver's seat, had keys, but there was nowhere to put them. That's an example where I didn't know how to do what I wanted. My experience taught me what I should expect, but my expectation was violated by the newfangled car experience. Similarly, once you do get the car started, how is it that you know you achieved it?
Well, you might see the RPM on the tachometer go up, you might hear the engine, some lights might come on. You're getting feedback which informs your mental representation of what just happened.

Here's a wonderful example: this is the classic Honeywell thermostat designed by Henry Dreyfuss. What I like about this, from the perspective of our actor right here, is that when I ask the "how do I do it?" question, say I'd like to set the thermostat's temperature, it's pretty easy. It's really obvious: right on the front, there's a strong signal that there's a dial I can turn. I know that because it has these ridges, it's got an indicator here, and when I turn it, I get direct, immediate feedback, what in computer terms we now call direct manipulation, and that feedback is tied directly on top of the output. You want it 60 degrees, 70 degrees, heck, 80 degrees? No problem. For many years, thermostats moved away from this direct manipulation approach toward buttons and other controls, and that indirection, as with VCRs and other electronics, actually made things a lot harder to use. More recently, we're seeing this directness come back into information appliances. The Nest thermostat, for example, in many ways pays homage to this wonderful classic Dreyfuss design.

Donald Norman and Jakob Nielsen gave us some really useful questions as designers for figuring out whether the interface we're creating will be natural and easy to use, and I'll leave these here for your own work. People obviously behave differently depending on how they think the world works. In our minds, we build mental representations, and often those representations use analogies that come from experience. The Nest thermostat is like the classic Honeywell thermostat; when I'm writing a computer document, that's like using a typewriter. If I want to explain how electricity works, I might explain it by analogy to water.
The wires are pipes, where bigger pipes allow more water, or electricity, to be carried; there are reservoirs, or capacitors; and a lot of the basic principles of electricity can be explained using the water analogy. That's not to say that everything is neat and tidy upstairs: our mental models often aren't right. They're almost never complete; electricity is like water in some ways and not like water in others. And they're often rife with superstitions. Computers are an amazing catalyst for superstitious behavior. We all have these superstitions, even really technologically savvy people, things that we know at some level aren't real, and yet because we don't have a better explanation, we use them anyway. Over time we build more and more representations, and these can dovetail together. So I can think that electricity is like water in some ways and like teeming crowds in other ways, or I can layer on more abstract representations using equations; they coexist at the same time. When we change or add to our representations, in many ways that's actually what learning is: the accumulation and evolution of our set of mental representations.

The goal for you as the designer is to convey what the model of "how do I do it" is, and what the model of "how do I know" is. A challenge as things get more sophisticated is that as designers we often are, or at least over the design process become, experts with the technology. We've built these richer mental representations, and we expect that the user's representations are going to be like ours. But as you know, they often aren't, and this mismatch, when what's in my head is wrong, can lead to slow performance, to errors, and to frustration. The goal then as the designer, if I'm over here, is to get the right representation into the user's head. It's almost like you're trying to take a part of your brain and bring that into the user's head. So if our user makes a mistake, why did they do it?
That's essentially the cognitive question that we're asking about user interface design. Well, using what we just learned, we can ask: did they have the right mental model? If I'm operating a keyboard, typing real fast, and I mean to hit the equals key but hit the delete key instead, I had the right model; I just slipped. If you're a designer and you're able to understand that something is a slip (right model, accidentally wrong action), you can fix that. With our keyboard, for example, we could spread things out so it's less likely that you'll hit the wrong thing, or we can make the targets bigger, or add autocorrect, as in typing.

By contrast, sometimes I can do exactly what I intended to do, but with the wrong mental model. One example of this appears to have happened to a number of people in Florida during the 2000 election. This was what became known as the butterfly ballot: a ballot which had names on one side and holes, and the holes didn't quite line up with the names. So there might be a name over here, and some people might select the wrong hole. In this case, this was a mistake, as opposed to a slip, because I hit exactly what I wanted to hit, but out of the mistaken belief that it corresponded to this name, when in reality the tally was being recorded from what should have been this other hole.

The ballot is also a nice example of how you can use data to infer when people might be making slips or mistakes. In this case you had a majority-party candidate, Al Gore, and a candidate who in the rest of the state received a relatively small fraction of the vote, Pat Buchanan. When Buchanan got an unusually large share of the vote in Palm Beach County, people started to ask why that might have been. So it's a great example of how you can look into the data to make a guess as to where you might look for design flaws. Here, when people looked at the user interface that voters were using, you can see that many people could easily have made an error.
In this case 0.85% voted for Buchanan, whereas for a comparison set, that number was 0.23%. So you had more than triple the share of people voting for this candidate. Here are two takeaways from the ballot. First, when designs are ambiguous, in this case for engineering reasons (to have the holes line up) as opposed to usability reasons (having the interface be as clear as possible), that's when you'll likely have errors. The second lesson we can learn from the butterfly ballot is that of consistency. If the voting interface changes frequently, from county to county and election to election, it's harder to debug errors. Every county sometimes has to invent its own interface, and voters have a difficult time because they may be asked to use an interface that they haven't seen before. When people have a consistent user interface, errors are much less likely, because they can benefit from learning that mental model over time.
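The vote-share comparison above amounts to a simple anomaly check, and it can be sketched in a few lines of Python. The shares are the numbers quoted in the lecture; the function name and the flagging threshold are illustrative assumptions for this sketch, not part of any official analysis of the election.

```python
def share_ratio(observed_share: float, baseline_share: float) -> float:
    """Return how many times larger the observed vote share is than the baseline."""
    return observed_share / baseline_share

# Shares quoted in the lecture: Buchanan's vote share in Palm Beach
# County versus a comparison set of counties.
palm_beach = 0.0085   # 0.85%
comparison = 0.0023   # 0.23%

ratio = share_ratio(palm_beach, comparison)
print(f"Observed share is {ratio:.1f}x the baseline")

# Flag the county for a closer look at its ballot design if the share
# is more than triple the baseline (an assumed threshold for this sketch).
if ratio > 3:
    print("Anomalous share: inspect the ballot design for this county")
```

Running this confirms the "more than triple" claim: 0.85 / 0.23 is roughly 3.7, well past the assumed threshold, which is exactly the kind of signal that prompted people to go look at the ballot itself.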