[MUSIC] One of the most valuable things you can accomplish with a prototype is learning whether your concepts meet the needs of the people they're intended to serve. With that in mind, I've invited Rob Olsen to join me here today. Rob is a Principal Scientist in the Product Research function at Procter & Gamble, and he's also the industry innovator in residence here at Northwestern's Design Institute. Welcome, Rob.
>> Thanks, Ed, really happy to be here.
>> Terrific. Now Rob, I understand that you've been at Procter & Gamble for 33 years and that your teams have done a lot of prototyping.
>> Tons. Hundreds, maybe even thousands by this point, across some of our big brands like Always, Pampers, and recently Tide, but also on some of the smaller brands like Puffs, and that's been a really fun and enlightening experience.
>> All right, sounds like a very exciting career. What I'd like to do today is really drill down into the idea of testing prototypes with real users: how to prepare for it, how to do it, and ultimately how to learn a lot from the interaction. So Rob, I wonder if you can step me through it. I imagine you've got something that you're ready to test. How do you get ready to do that?
>> First of all, you have to be clear on what it is you're trying to do for that user. How are you trying to improve his or her life? Because ultimately the design process is in service to the user. Then you turn that objective into a learning plan. Let's take Swiffer as an example, something a lot of people might have heard of. We started out by just observing behavior, and we saw two things. One, floor cleaning isn't a lot of fun, so there's a real opportunity to make it much more engaging. And two, brooms aren't very effective. They mostly push dirt around, especially the small particles, which stay behind and leave you with dull floors after all that effort.
So then we started prototyping, hypothesizing how we could solve that problem. And again, it was at both levels. Technically, how could we actually deliver a better clean? But also emotionally, how could we relieve some of that mundane, onerous task of floor sweeping and make it more fun and engaging?
>> Okay, so you form these hypotheses based on both the technical aspects of making it work better and the emotional aspects of making it more fun, you've started to build prototypes, and you're building this learning plan. So how exactly do you create a learning plan?
>> I like to think that a learning plan should have at minimum three parts: a who, a what, and a where. On the who, you of course have to go engage with the folks you're actually trying to help. Again, in the Swiffer example, if somebody had hired a cleaning service and never did their own floors, that just wasn't of any interest, right? Those aren't the people we're trying to serve. It was the people who were putting in all that work and not getting a very good outcome. So that's the first part. The what is the task you're going to go study with her. That starts with behavioral observation: step back, don't interrupt the process, and just see what happens today. Watch for the tensions in the current process that you want to address with your prototype. Then you can specify a task for the consumer to perform and see if the prototype does better at that task. It may be the entire cleaning process, but importantly, early on, it's going to be pieces of the cleaning process.
>> And the last part is where?
>> Yes, indeed, where. That is, in what context do you want to test your prototype? It has to be a context that's realistic and relevant to the user you're trying to serve. So again, continuing with the Swiffer example, it doesn't do us any good to have users come into a research facility and clean the floors there.
Because they're not emotionally invested in the outcome. They don't care whether the research facility is clean; they care whether their own house is clean. So in general, that means going into the user's world, the home or wherever the prototype is intended to work. That's how you should be testing. Then, having defined the who, what, and where, you create a schedule of tasks that you're going to do with each user, almost a script, if you will, that you'll go through. That, plus the stimulus of your prototypes, makes up a step in the learning plan that teaches you what you need to know to reach your next goal.
>> Excellent. So to recap, a good learning plan should address the question: will a prototype deliver the benefits we expect it to? This can typically be done by selecting a few consumers, maybe half a dozen, who realistically use the product; preparing purpose-built prototypes, what we often call minimum viable prototypes; and testing them with the consumer in the appropriate context. These minimum viable prototypes may, of course, be focused on just one aspect of the user interaction. Indeed, early in the process, that's often the case. In the next video we're going to explore what happens when you're actually there with the consumer. Thanks, Rob.
>> Thanks, Ed, it was a pleasure. [MUSIC]