So far, we've taken a close look at how you can measure the outcome of your marketing, specifically your advertising campaigns. We've looked at the metrics you can use to evaluate the success of your campaign from a cost-versus-return point of view. We've also looked at how you can use experiments to prove that your campaigns were responsible for the results you're seeing. In this module, we're going to ask ourselves: How can I make my campaigns better? Is there something the data can tell me that can help me optimize my campaigns? Can I spend my budget better, now and in the future?

One way you can optimize your campaigns is through A/B testing. A/B testing is the process of comparing two variants of an ad against each other to evaluate which one performs best. It can help reveal ways to make your campaign better while it is still running.

A/B testing is quite simple. Here's how it works. You create two different versions of an ad: version A and version B. Every time there's an opportunity to show your ad, the advertising platform randomly chooses whether to show version A or version B. The results of your ads are measured against your goal, and the ad that's best at achieving your goal is declared the winner of your test.

A/B testing is used extensively in online advertising. It's an easy way to quickly learn what works best and to optimize your campaigns based on these insights. But A/B testing is also used in other disciplines. For instance, website or app designers will often use it to understand which layout works best. To do that, they'll serve version A of a website or app to some people and version B to others, and then assess which version drives better results, like checkouts, for instance. It may seem that varying just one variable and testing it is not such a big deal, but the insights derived from A/B tests can have a really big impact.
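The mechanics described above can be sketched in a few lines of code. This is a minimal illustration, not any platform's actual implementation: the function names and the campaign numbers below are made up for the example. It shows the two core steps, randomly assigning each impression to a variant, and declaring a winner by comparing each variant's conversion rate against the goal.

```python
import random

def assign_variant(rng: random.Random) -> str:
    """Randomly choose which ad version to serve for this impression (50/50 split)."""
    return "A" if rng.random() < 0.5 else "B"

def declare_winner(results: dict) -> str:
    """Compare each variant's conversion rate (goal events / impressions)
    and return the better-performing version."""
    rates = {v: r["conversions"] / r["impressions"] for v, r in results.items()}
    return max(rates, key=rates.get)

# Illustrative, made-up campaign numbers:
results = {
    "A": {"impressions": 10_000, "conversions": 180},  # 1.8% conversion rate
    "B": {"impressions": 10_000, "conversions": 230},  # 2.3% conversion rate
}
print(declare_winner(results))  # B converts better in this made-up data
```

In practice a platform would also check that the difference is statistically significant before declaring a winner, rather than simply picking the higher rate.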
Let me tell you about an example of A/B tests conducted by Microsoft's Bing, as written up in Harvard Business Review. Bing ran a few A/B tests on very slight variations in the colors used on the Bing search engine results pages: slightly darker blues and greens in titles, and a slightly lighter black in captions. The tests showed that these color changes improved the user experience substantially. In fact, when they rolled out the new colors to all users, annual revenue from these search results pages increased by more than $10 million. When I worked on Yahoo's search research team, we used A/B testing very rigorously, and we consistently found that small changes to the user experience could have a very big impact on the revenue generated from search ads on those results pages.

There is one very important best practice you should keep in mind when conducting A/B tests: when you create the A and B versions of what you're about to test, vary only one variable. What do I mean by that? Let me walk you through an example.

Imagine I'm running a campaign to sell jewelry. I have an ad with an image of a ring with a blue stone. The copy of my ad says, "Celebrate memorable moments with a special ring. Check out our new spring collection," and I have a call-to-action button that says "Learn More." I can run an A/B test by considering this ad to be my A version and creating a new ad that will be my B version. The best practice in A/B testing is to make the B version differ from the A version in only one variable. In this case, I'm going to vary only the image: instead of a ring with a blue stone, I'm going to show a ring with three red stones. I will keep the ad copy the same, and I will also keep the call-to-action button saying "Learn More." Why does this matter? Well, now when I get my results back and see that ad B won, I'll learn something from my experiment.
I'll know that B won because the image was better at helping me achieve my results. Since everything else in the ad was the same, I know the image was the reason my results were better. Now I know that under these circumstances this image is best, so I can keep it as my optimal image.

I may decide I want to test some more; in fact, maybe I could improve the copy, so I conduct another A/B test. My A version now has the image of the ring with three red stones, which I know is the best image, and the copy reads, "Celebrate memorable moments with a special ring. Check out our new spring collection," just like before. The call-to-action button is "Learn More." For my B version, I keep the image constant, but now my copy reads, "Celebrate memorable moments with a special ring. Receive 10% off this week," and the call to action is still "Learn More." My test runs, and I learn that the B version wins again. So the ideal copy is "Celebrate memorable moments with a special ring. Receive 10% off this week." I know that because the copy was the only thing that differed between the two ads, so the better results must be because of the copy.

Finally, I can test the call-to-action button. I start with the image and the copy that I know are optimal, and I keep "Learn More" as the call to action in my A version. For my B version, I keep everything constant but change the call-to-action button to "Shop Now." This time version A wins: it turns out the "Learn More" button works better for this ad than the "Shop Now" button. So I know to keep running my campaign with the optimized ad.

You may wonder: couldn't I just test the ads I think will be the best ones and pick the winner, without going through the step-by-step testing I just explained? You can, but you'll most likely not end up with the most optimal version of your ad, and you won't know which part of your ad contributed to the improved results.
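The one-variable-at-a-time procedure above can be written down as a short sketch. Everything here is hypothetical: `run_ab_test` is a stand-in for the advertising platform running the test, and for the sake of the example it simply replays the winners from the walkthrough rather than measuring real results.

```python
def run_ab_test(ad_a: dict, ad_b: dict, winner: str) -> dict:
    """Stand-in for the platform's A/B test. In reality the winner comes
    from measured results; here it's passed in to replay the example."""
    return ad_a if winner == "A" else ad_b

# Starting ad (version A of the first test).
best = {
    "image": "ring with a blue stone",
    "copy": "Celebrate memorable moments with a special ring. Check out our new spring collection",
    "cta": "Learn More",
}

# Test 1: vary only the image; B wins.
challenger = dict(best, image="ring with three red stones")
best = run_ab_test(best, challenger, winner="B")

# Test 2: vary only the copy; B wins again.
challenger = dict(best, copy="Celebrate memorable moments with a special ring. Receive 10% off this week")
best = run_ab_test(best, challenger, winner="B")

# Test 3: vary only the call-to-action button; this time A wins.
challenger = dict(best, cta="Shop Now")
best = run_ab_test(best, challenger, winner="A")

# `best` now holds the fully optimized combination:
# red-stone image, discount copy, "Learn More" button.
print(best)
```

Because each challenger differs from the incumbent in exactly one key, every win (or loss) can be attributed to that one change, which is the whole point of the best practice.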
Imagine if you had tested the first ad we started with, the one with the blue stone ring, against the final ad from our last test, with a different image, different copy, and a different call-to-action button, and say that in this case ad B won. We wouldn't know why ad B won. Was it because the image was different? The copy? The call to action? We would also not have ended up with the most optimal version of the ad, which, as we learned, has a different call-to-action button. So, in order to get the most out of your A/B tests, test only one variable at a time.

Conducting A/B tests on a regular basis as part of your online advertising strategy is a really good practice, and you'll find that most online advertising platforms have the option to run these tests built in, so it's usually quite easy to make them a regular part of your campaigns. In our next video, we'll look at how this works on the Facebook platform.