In this video, I'm going to wrap up our discussion of evaluation without users before we move into the next module on user testing and evaluation with users. As I said when we started, we've learned about many forms of evaluation. We learned about action analysis: quantitative evaluation using both formal techniques, like the Keystroke-Level Model and GOMS, and informal back-of-the-envelope action analysis. We learned about qualitative methods: expert evaluation, cognitive walkthroughs, and heuristic evaluation. And specifically, you learned to conduct three of these: informal action analysis, cognitive walkthroughs, and the Nielsen and Molich version of heuristic evaluation. But which of these methods should you use, and when? That's really the purpose of this video.

When you're planning an evaluation, you start with three key factors. What is a particular evaluation method good for? When can you carry it out successfully in your own design and development process? And how much cost or effort is involved? Then you have to match the evaluations you might conduct to your project, but also to your team. If an evaluation needs a particular set of skills or a certain set of people, you have to ask: do you have those people, and can you get them together? That's how you start planning an evaluation strategy.

So let's look at the techniques we've learned to do in this course. Action analysis is good for certain things. It can identify bottleneck paths in carrying out tasks in an interface, and it can flag likely error spots where things get complex: there are too many steps, or too much you have to know to get something right. Formal action analysis is expensive, and it's probably most useful for engineering systems that will be high-use and high-investment. We talked about that with things like telephone operator control systems, but you could also see an argument for formal action analysis if you were developing the cash register interface for a fast food chain, or anything that large numbers of people will use over and over again. Informal action analysis is quite cheap to carry out, and you can do it at a prototype stage. That suggests there are cases where it's an appropriate technique for spotting bottlenecks as early as possible (a rough worked sketch of the formal version appears below).

The walkthrough techniques, like the cognitive walkthrough we went through, address task-specific challenges. They're great for finding problems with labels and potential confusion for inexperienced users. They don't do much for the expert user, because the expert user knows what's right from previous experience, but a walkthrough will catch what happens the first time somebody does something. The investment is moderate. You need to get a group of people together, and you need to prepare the detailed action sequences, the walkthrough scenarios. A walkthrough is going to take you a few hours, plus preparation work, plus putting the results together. You need a detailed prototype; the labels have to be finished, and you can't say, well, we'll figure that out later. And you need detailed task descriptions and user descriptions. You can't answer a question like, will the user understand the feedback, unless you know who the user is and what the feedback is. So walkthroughs are a good opportunity to check vocabulary and whether your design matches your users.
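To give a rough picture of what a formal action analysis looks like, here is a minimal Keystroke-Level Model sketch in Python. The operator times are the commonly cited textbook values from Card, Moran, and Newell, and the two action sequences compared are hypothetical, invented purely for illustration rather than drawn from any real interface.

```python
# A rough Keystroke-Level Model (KLM) estimate for one task path.
# Operator times are the commonly cited textbook values (Card,
# Moran & Newell); treat them as illustrative, not definitive.
OPERATORS = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # mouse button press or release
}

def klm_estimate(sequence):
    """Sum operator times for an action sequence like 'MHPB'."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical comparison of two designs for the same task:
# design A forces a mouse round-trip, design B stays on the keyboard.
design_a = "M" + "H" + "P" + "B" + "H" + "K" * 5  # menu click, then type
design_b = "M" + "K" * 7                          # keyboard shortcut + type

print(f"Design A: {klm_estimate(design_a):.2f} s")  # 4.75 s
print(f"Design B: {klm_estimate(design_b):.2f} s")  # 3.31 s
```

Even a toy comparison like this shows the method's appeal for high-use systems: a second and a half saved per transaction adds up quickly when thousands of people repeat the task all day.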
A walkthrough is not something you can do remarkably early. You can't do it with an early sketch and an idea, but you can use it once you have a fairly complete prototype.

The heuristic evaluations, the checklist approaches, complement what you find with a walkthrough, because they mostly catch task-free issues. Roughly, those fall into two types. The first is consistency: are things handled the same way across different parts of the application? The second is best-practice design guidelines: are we supporting exploration? Are there easy exits where you need them? Is the needed information on the screen when it needs to be there? The investment is about the same as a walkthrough, maybe slightly more. You need several people, and while you don't have to coordinate getting them together in one place, you do have to get them to do their independent evaluations and then have somebody combine the results in some meaningful way (a minimal sketch of one combining scheme appears at the end of this video). Again, you need a detailed prototype and user descriptions, but you don't actually need task descriptions, because the evaluation is task-free. Heuristic evaluation is probably most effective for sanity-checking the implementation of your design ideas: did you get your idea into a form that actually matches the way a good interface should be built?

If we put all of these together, you'll notice that they are largely complementary techniques. Most projects will benefit from more than one, though not necessarily all. We've had a lot of projects that skipped an action analysis and tried to catch those issues through the walkthrough; I've seen others that used a heuristic evaluation and an action analysis and never did a walkthrough. Typically, one technique will provide benefit, and two will provide more. Most important, these are not used by themselves. They're used in conjunction with actual user testing, which we'll cover in the next module, and in some cases with simple user review of designs: focus groups, discussions, and presentations of your prototypes and design drawings to users to get early feedback. The goal of all of this is to iterate toward a better design, so that what you finally produce and release meets your goals, meets your users' goals, and is actually usable for the things people are trying to do with it.

So that wraps up evaluation without users. We look forward to seeing you shortly as we move into user testing.
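As promised above, here is a minimal sketch of one way the independent heuristic evaluations might be combined. The issue names, the 0-to-4 severity scale, and the sample findings are all hypothetical, invented for illustration; real teams merge results in many different ways.

```python
from collections import defaultdict

# Minimal sketch of merging independent heuristic evaluations.
# Each evaluator reports (issue_id, severity 0-4); the issue ids,
# severity scale, and sample data below are hypothetical.
reports = {
    "evaluator_1": [("no-exit-on-settings-page", 3),
                    ("inconsistent-save-label", 2)],
    "evaluator_2": [("no-exit-on-settings-page", 4)],
    "evaluator_3": [("inconsistent-save-label", 1),
                    ("hidden-search-field", 3)],
}

# Collect every severity rating given to each issue.
severities = defaultdict(list)
for findings in reports.values():
    for issue, severity in findings:
        severities[issue].append(severity)

# Rank issues by mean severity, noting how many evaluators found each.
for issue, scores in sorted(severities.items(),
                            key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{issue}: mean severity {sum(scores) / len(scores):.1f}, "
          f"found by {len(scores)} of {len(reports)} evaluators")
```

The ranked list gives the team one shared artifact to discuss: high-severity issues found by several evaluators independently are usually the ones to fix first.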