So at this point you might be asking, does targeting on Twitter even work? The answer is yes. I want to show you a use case that I recently implemented, and I want to walk through why it worked. I wear many hats here at CU-Boulder, and one of them is doing research. I do research in an area called agenda-setting. So a group of scholars came to me and said, let's host a conference. So how do we promote this conference to the broader academic community? Immediately I thought Twitter would be a perfect use case because of that niche interest that I mentioned earlier: academics love Twitter. They love to talk on Twitter. They love to talk professionally on Twitter, and they love to read professional content, in this case professor-related academic research content, on the platform. So I immediately thought Twitter would be a great way to garner more conference registrations. You can see here a tweet that I built that was designed to drive conference registrations via a specific website. As an academic, I have limited access to photography and creative resources, but we happened to have this photo or illustration of three big scholars in the field, all of whom were going to be attending this conference. So I said, this is a nice piece of content. Why did I choose it? Because it was unexpected and unusual. I didn't choose it because it had the highest contrast or because it was the best piece of photography that I could find. Instead, it was unusual and unexpected, and therefore it would force people to stop, look at it, and at least process the message in some broad way. In an ideal world, I would be able to track the number of people that saw the ad, clicked on the ad, went to the event page, and finally registered to attend the conference. In reality, I could only track the number of people that visited the [inaudible] page. This was largely due to my inability to access or change the event page at all.
So I ran some campaigns, and you can see here there are three campaigns; they all ran and they all cost money. The first campaign generated 66 clicks and cost $100. The next campaign was more expensive; it only had eight clicks and cost about a dollar per click. The third campaign was somewhere in the middle and had a hundred clicks, the most of all three, at just under $0.80 a click. So which campaign did better? Again, because I was not able to track the conversion all the way to registration, I was left wanting a little. I wanted to know what percentage of clicks actually registered for the conference. Now, remember the key thing here that I was looking for, the key business objective that I was tracking, was clicks. I was able to find this data in a matter of clicks by simply going to the analytics platform built into ads.twitter.com. I sorted the campaigns here by cost. Campaign Try 3 was the most expensive at $1.52 per click, Campaign Try 2 was the second most expensive at $1.06, and Campaign Try 1 was the cheapest at an average click cost of $0.78 or $0.80. So if I imagine that all clicks were equal, Campaign Try 1 worked the best. Each campaign here has different parameters. In particular, you can see that the objectives, the things denoted in light gray, are different for all three campaigns. For the first campaign, Campaign Try 1, I actually used an awareness model, that is, a model that was really just designed to show the ad to as many people as possible. You can see that almost 15 thousand people saw the ad, and only 100 people clicked. That's a very small click-through rate, but not something unexpected. The third try also generated a number of impressions, but it simply cost more to get people to click; this one was using the engagement model. The website clicks model actually performed worse than the awareness model.
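The comparison I just made comes down to one number, cost per click: total spend divided by total clicks. Here is a minimal sketch of that arithmetic in Python, using illustrative spend and click totals chosen to match the per-click figures discussed above (they are not the exact campaign export values):

```python
def cost_per_click(spend: float, clicks: int) -> float:
    """Average cost per click: total spend divided by total clicks."""
    if clicks == 0:
        return float("inf")  # no clicks: cost per click is effectively unbounded
    return spend / clicks

# Illustrative campaign totals (hypothetical, not the exact export values)
campaigns = {
    "Try 1": {"spend": 78.00, "clicks": 100},
    "Try 2": {"spend": 8.48, "clicks": 8},
    "Try 3": {"spend": 100.00, "clicks": 66},
}

# Rank campaigns from cheapest to most expensive per click
for name, c in sorted(
        campaigns.items(),
        key=lambda kv: cost_per_click(kv[1]["spend"], kv[1]["clicks"])):
    print(f"{name}: ${cost_per_click(c['spend'], c['clicks']):.2f} per click")
```

Note that ranking by cost per click treats every click as equally valuable, which is exactly the assumption I had to make without registration tracking.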
So what this highlights is the need to test all objectives. Even if you're sure that you want to drive website clicks, you can see that in my limited tests, and this is a very limited test, the website clicks objective actually performed worse. So again, Twitter ads are iterative, and I encourage you to workshop and try as many different combinations as possible. Now, I couldn't actually assess the quality of the clicks, that is, whether someone clicked and then actually registered for the conference. The next best thing is to look at the clicks themselves, and look at the demographics and geographics associated with those clicks. Here in this example, I downloaded the click data and sorted it by region to see which regions tended to click the most. Column L here is sorted from high to low, and you can see that some regions clicked more than others. So I started to look at it and think critically: are there major universities in the states that I would expect to register for this very niche and narrow conference? The answer was yes. In fact, when I looked at the registrations, we had several from California, several from Texas, and of course several from Colorado. So at a broad level, I felt confident that the targeting parameters I was using were actually driving clicks that were meaningful and that mapped to my expectations. When I drilled down to the city level, I could see that the cities of the registrants matched up with the cities of the clicks. This gave me confidence that the clicks, again, were genuine: LA, Denver, and New York all had registrants for the conference. The next piece of demographics that the Twitter ads platform gives you is the ability to look at age group. One thing you may know about professors is that they tend to be older. I was able to use this intuition to test the genuineness of the Twitter advertising clicks.
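The region breakdown I just walked through is, at its core, an aggregate-and-sort over the downloaded click export. A rough sketch of that step, using hypothetical rows and column names standing in for the real Twitter Ads export (which has many more columns):

```python
from collections import Counter

# Hypothetical rows standing in for the downloaded click export
rows = [
    {"region": "California", "clicks": 24},
    {"region": "Texas",      "clicks": 17},
    {"region": "California", "clicks": 5},
    {"region": "Colorado",   "clicks": 12},
    {"region": "New York",   "clicks": 9},
]

def clicks_by_region(rows):
    """Total clicks per region, sorted from high to low."""
    totals = Counter()
    for row in rows:
        totals[row["region"]] += row["clicks"]
    return totals.most_common()

for region, clicks in clicks_by_region(rows):
    print(region, clicks)
```

This is the same high-to-low view as sorting Column L in the spreadsheet; the sanity check is then a human one, asking whether the top regions match where you would expect registrants to come from.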
Column M here denotes click-through rate, which is simply the number of people that clicked divided by the number of people that saw the ad, the impressions in Column K. I sorted by click-through rate, from high to low, and looked at the age groups. Lo and behold, the older age groups tended to have the higher click-through rates. That is, the age groups that most correlated with the professors that registered for this conference also mapped to the age groups that I found in the actual click data. I could actually use this data to further enhance the quality of my ad serving. Knowing that my number one audience is folks that are 50 and up and my second best audience is 35 and up, I can use these as additional targeting parameters to hone the precision of my advertising targeting. Just one more interesting thing here: you can see that a lot of the stereotypical interests that are associated with professors also tend to have the best click-through rates. In this case, folks that were interested in hybrid cars and electric vehicles had the highest click-through rate. We see other types of associations as well. We see that the professors here are also interested in graphics software. It just so happens that these professors teach in mass communication colleges, and graphics software such as Premiere and Final Cut is often taught in these programs; therefore, professors are more likely to be interested in these things. These are targeting parameters that I didn't even necessarily think about at the outset when I was designing this ad. These are parameters that I could use to make my ads more precise and drive the click-through rates of the ads up. Again, Twitter ads is an iterative process. The primary way in which I targeted this set of ads was through handles, and what I mean by that is that I used look-alike audiences. Twitter provides the ability to target people based on accounts that they follow.
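The click-through-rate calculation just described (Column M = clicks divided by impressions) can be sketched like this, with made-up per-age-group counts standing in for the real export values:

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR: clicks divided by impressions (0.0 when there were no impressions)."""
    return clicks / impressions if impressions else 0.0

# Hypothetical per-age-group counts (the real export's values differ)
age_groups = {
    "50 and up": {"clicks": 120, "impressions": 4000},
    "35 to 49":  {"clicks": 90,  "impressions": 4500},
    "25 to 34":  {"clicks": 40,  "impressions": 5000},
}

# Sort age groups by CTR, high to low, as in the spreadsheet
ranked = sorted(age_groups.items(),
                key=lambda kv: click_through_rate(**kv[1]),
                reverse=True)
for group, counts in ranked:
    print(f"{group}: {click_through_rate(**counts):.2%}")
```

With these illustrative numbers, the older groups land at the top of the ranking, which is the same pattern I used to sanity-check the clicks against what I know about professors.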
So for instance, if you would like to target people that follow a specific Twitter account, you can do so with rough precision. You can see here that some accounts worked well and other accounts didn't work well at all, so I learned a little bit about that. In this case, I learned that some accounts were more relevant for this type of ad than others. Again, I was pleased to see that the accounts that I most expected to be related actually were the accounts that were most related. I see a little bit of duplication here because the data is reported per campaign. Remember, I ran three campaigns, so in theory an account could show up a maximum of three times. It's easy to download this analytics data. There's a lot to learn when it comes to Twitter ads, and I truly appreciate you taking the time to dive into the data. Twitter ads is an ecosystem in and of itself; you can't become a master of it in one hour. Please read the readings this week to dive in further, and if you find resources that are helpful to you, please be sure to share them on Coursera so that we can continue to learn together. I thank you for taking the time.