In our fourth question, what works, we take our ideas to real stakeholders, first in one-on-one conversations and then in field experiments. The challenges in what works are first designing the experiments and then listening non-defensively to the feedback from them. Team members who struggled with the first two design questions at the fuzzy front end of idea generation are often more comfortable now in the what works stage, because it requires a different kind of expertise. Whereas Jeffreys generally love what is and what if, it's the Georges who tend to excel at what wows and what works. So an ideal team will include both sets of skills, as we talked about in module three. Because designers are generally taught in studio settings where critiquing is key, they learn early on to detach their egos from their creations and to hear criticism non-defensively as part of their training. But the opposite is true for most of us. When we ask teams to seek feedback and present learning launches, one major principle is that they may not defend their choices. What matters instead is whether the teams understand the criticisms they're hearing. As long as team members understand the assessment, they have a choice: they can accept it and change, or decide that any disparaging analysis is not important and ignore it. But first, their job is to listen carefully. For many people, especially the Jeffreys, design thinking actually gets more difficult in what works. In many ways, we've talked about design thinking as being about emotion, empathy, human-centeredness, and understanding what someone else is thinking and feeling. All of these are concepts where Jeffreys flourish. But the experimental phase is about focusing on data. It's okay to fall in love with your stakeholders, but not with your solutions. So here in what works, we're conducting tests of our hypotheses, and we need to think like scientists.
Assumptions surfaced and prototype in hand, you're now ready to turn to your key stakeholders to seek their feedback. Remember, your primary reason is to learn. Let them teach you. You might continue for three or four rounds until you've worked through the issues and the concepts. Then you're ready to design your learning launch. Designing a learning launch itself is pretty straightforward. First, you need a working prototype that focuses on the key assumptions you're testing. Think of MasAgro, planting new and old crops side by side in a test plot. You'll do a series of learning launches as you iterate your offerings to match the new learning that each one is producing. Each launch, based on what you learned in the prior one, narrows the search and focuses the desired outcome until you've addressed the metrics you set out in your design brief. Ultimately, learning launches result in decisions. If you decide to move ahead with additional development, the learning launch should tell you how. You'll continue your learning launches until all issues and critical assumptions have been addressed and, we hope, resolved. Then you're ready to move into implementation. Let's return to visit our friends at Monash to look at the power of learning launches in action.