The ability to test and experiment in design is everything. The only way to really dig down and resolve a design is to build a prototype and test it on its merits.

So when we're testing a product, we want to approach it in a couple of different ways. The first is finding the things that we want to test on the product itself. The biggest mistake I see people make early on in prototyping and testing is that they don't enter into it with a research script: a document that simply states, "Here's what I'm trying to get out of this research." Defining that ahead of time will make sure you don't get off topic. In addition, you want to make sure that you're targeting the right demographic. If you're targeting some random subset of users without identifying the ones that are actually having the problem you're trying to address, then you're not going to get the feedback you really need. Working with those two things hand in hand is critical to making sure your feedback is sound and genuinely valuable moving forward.

The first step in testing a product is understanding which metrics you're measuring in the first place. The metrics you measure can really shape the way you design something, and it's important that we work closely with our PMs to make sure we're measuring the right metric to get to the right solution. Step two, once we're confident we're looking at the right metric, is when we start to think, "How can we move that metric in a way that's good for us?" That informs the design decisions we make, and it acts as a goal for understanding whether we're succeeding in the project. One of the things we're really keen on improving is how quickly a help seeker who has come to the portal can go from landing on the page to raising a request and getting in contact with the agents on the website (a rough sketch of computing that metric follows below). That informs our design: how can we make it quicker for them to search through things? How can we better surface our request types? How can we make it easier for them to find the right request type, so that they know they're asking for help from the right person? All of these questions are based around that metric of how fast a user can go from landing to raising a request.

For me, testing is really about adopting an Agile culture. It's about a constant improvement process. If I'm looking at an e-commerce website and the client comes to us with a problem that the page isn't performing how they want it to, we start by looking at what it's doing and what the customers are doing: where they're going, what they're clicking on, whether they're starting to buy a product, and where they're falling out. We develop a range of different things that we think might help change that, then we go about executing each of them. Once we launch it, we monitor how it actually performs, and if it doesn't perform the way we thought it would, we go back to the drawing board and start again.

It's absolutely imperative that designers are prepared to break their products and push them as hard as they can: to put them in front of as many people as they can, to put them in as many unlikely scenarios as they can, and really try to iron out any potential issues.
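As a rough illustration (not something the speakers describe implementing), here's a minimal sketch of how that landing-to-request metric might be computed from raw analytics events. The event names, data shape and timestamps are all hypothetical:

```python
from datetime import datetime
from statistics import median

# Hypothetical analytics events: (user_id, event_name, timestamp).
events = [
    ("u1", "portal_landing", datetime(2024, 5, 1, 9, 0, 0)),
    ("u1", "request_raised", datetime(2024, 5, 1, 9, 2, 30)),
    ("u2", "portal_landing", datetime(2024, 5, 1, 9, 5, 0)),
    ("u2", "request_raised", datetime(2024, 5, 1, 9, 11, 45)),
    ("u3", "portal_landing", datetime(2024, 5, 1, 9, 6, 0)),  # never raised a request
]

def landing_to_request_seconds(events):
    """For each user, time from their first landing to their first raised request."""
    first_landing = {}
    durations = []
    for user, name, ts in sorted(events, key=lambda e: e[2]):
        if name == "portal_landing" and user not in first_landing:
            first_landing[user] = ts
        elif name == "request_raised" and user in first_landing:
            durations.append((ts - first_landing.pop(user)).total_seconds())
    return durations

durations = landing_to_request_seconds(events)
print(f"median landing-to-request: {median(durations):.0f}s")  # 278s for this toy data
```

Watching how a number like that median moves after each change (faster search, better-surfaced request types) is one way to tell whether the design is succeeding against the metric.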
The way we look at testing is: test fast, test often and test quickly. So if you see that you're going down the wrong path, you pivot at the right points. We like to test as early as possible, and that means we can conduct all sorts of proofs of concept and avoid having to commit time, resources and money to an idea that isn't proven. So there isn't a moment of, "Oh, it's done, let's go test this now." The initial investment is quite light on, and then as you get more confident in the choices you're making (obviously this is for a larger flow), you're able to show the user more and more advanced things that you already know they're going to like, or that they're going to interact with in the way you intend.

More often than not we keep it fairly internal if we're running lean and mean, but for some of our bigger corporate clients, who invest very heavily in user insights, voice of the customer and focus groups, the process can be more exhaustive. There we'll actually build things like non-functional block models and get those in front of customers to test basic intuition: how is something plugged in, how do you access a particular part of a design? We literally give simplified designs to users to get a feel for just how intuitive a given design is.

The people I normally test with will be anybody I can find; anybody could be someone using my product. But another designer on my team, who may be in charge of the agent experience, needs to be more refined in who they're speaking with. They'll be looking at lists of customers who are specifically agents, and they'll ask themselves things like, "Has this agent been working with the product for very long?" "What do they understand?" "What's their level of expertise with the product?" (There's a small sketch of that kind of screening below.)

Failing can be tough, but at the end of the day it's probably one of the best learning experiences you can have. Dealing with failure is part of a designer's job. It's the only way to know what to improve upon, and it's key to iterating and working towards, you know, an innovative solution in the end. So I don't think of failures as failures. If we have the right framework in place, then we're able to measure the impact of what we're trying to achieve, and from there we can alter the design or shift our priorities accordingly. And if something doesn't perform as we needed it to, it really presents us with a whole other range of opportunities.

We find that failed designs aren't as scary as they sound. It means we've learnt something that can inform our next step in the design process. That's when we start to think, well, where did we go wrong? How can we learn from that, and how can we take those failures and turn them into something positive and successful in the future? It's just a process, ultimately, and the end goal is to land on something that is doing what we want it to do and what the customer wants it to do. So, I guess, just learn from your failures and move forwards.

Measurement of performance is really key to being able to identify any areas for improvement. In terms of which goals and metrics specifically, it really comes down to what we're trying to test, or what the experience is. Because of the broad appeal of the consumer electronics projects that we work on, they do go through quite rigorous usability testing.
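To make that screening step concrete, here's a minimal sketch, with invented field names and criteria, of filtering a customer list down to experienced agents for a research session:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    role: str           # e.g. "agent" or "help_seeker"
    months_using: int   # tenure with the product
    expertise: str      # self-reported: "novice", "intermediate" or "expert"

customers = [
    Customer("Ana", "agent", 18, "expert"),
    Customer("Ben", "help_seeker", 3, "novice"),
    Customer("Cho", "agent", 2, "novice"),
    Customer("Dev", "agent", 30, "intermediate"),
]

def screen_participants(customers, min_months=6, levels=("intermediate", "expert")):
    """Keep only agents with enough tenure and expertise for this study."""
    return [c for c in customers
            if c.role == "agent"
            and c.months_using >= min_months
            and c.expertise in levels]

for participant in screen_participants(customers):
    print(participant.name)  # Ana, Dev
```

The exact criteria matter less than the habit: decide up front, in the research script, who counts as the right demographic, then recruit against that list rather than against whoever happens to be available.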
More often than not it involves behind-the-mirror focus groups, where the position of everything is heavily scrutinized for just how intuitive it might or might not be. All of those discoveries are tabled and put against different business metrics: you know, what effect would this have on the business if it was out in the field and it failed? When we take the feedback from that sort of testing, we can see in the matrices that form where attention needs to be put to improve a design.

An example might be an e-commerce client. If we're looking to increase performance, we'll be looking to track visits to the page, what products somebody viewed, whether they started to purchase anything, what they purchased, and how long it took to purchase. We might be presented with the opportunity to redesign a whole page, and rather than just diving in and changing the entire page, we might break it down into smaller chunks. By doing that, we can measure the incremental impact of each individual change and ultimately deliver a design or experience that not only delivers a good customer experience but also delivers results for the business, and we can communicate exactly what we did in order to achieve that (a rough sketch of that funnel measurement appears at the end of this section).

There's a product that we worked on with Telstra, which was to design a new modem gateway. This product had a lot of requirements that needed to be satisfied. One of the big requirements was Wi-Fi range and speed, and also heating and cooling: the more powerful the gateway and the better its performance, the hotter it gets, and that really degraded the quality of the product. Each chipset had a certain temperature threshold that needed to be satisfied, 90 degrees, 100 degrees, 70 degrees, and we constantly had to do prototypes and simulations to determine where the venting needed to go, to bring the temperature down as much as possible. It was a bit of a balancing act to make sure that the product not only looked good but also performed really well.

There's nothing new that can come out of doing the same thing over and over again. Experimenting is really about playing: having a go at it, pushing things to the boundaries, molding things in different ways. I think experimenting and testing is a critical part of the design process because that's where you get to be the most creative. It's where you get to come up with all of the ideas and start thinking about which ones are sticking, because we often find that the first design we come up with is never the right one. So, in order to get to the right design, we need to constantly be experimenting with what we have and testing it to make sure that we're coming to the right solution for the problem we're trying to solve.
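As a final illustration (again, not something the speakers describe building), here's a minimal sketch of the e-commerce funnel measurement mentioned above; the stage names and counts are hypothetical:

```python
# Hypothetical funnel counts for one week of traffic on the redesigned page.
funnel = [
    ("visited_page", 10_000),
    ("viewed_product", 6_200),
    ("started_checkout", 1_500),
    ("purchased", 900),
]

def funnel_report(stages):
    """Print step-to-step and overall conversion for each funnel stage."""
    first = stages[0][1]
    prev = first
    for name, count in stages:
        step = count / prev if prev else 0.0
        overall = count / first if first else 0.0
        print(f"{name:16s} {count:6d}  step: {step:6.1%}  overall: {overall:6.1%}")
        prev = count

funnel_report(funnel)
```

Running the same report before and after each small change is what makes the incremental impact of each chunk of a redesign measurable, rather than guessing at the effect of the whole page at once.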