We need to start our study of probability with a little terminology. How do we talk about probability? We learned early in the course, in the second week, about guarding terms, where people say things like, well, it might be the case that a meteor's going to hit your house, so you need meteor insurance. But "might" signals mere possibility, and that's different from probability. If you ask how probable it is that a meteor is going to hit your house, the answer is going to be: pretty low. But how do you say how low it is? One way to talk about probability is to state the number of times something might happen out of the total number of opportunities. So you might say three times out of ten: this horse will win the race three times out of ten. In ten races, it'll win three times. You can also put that as three in ten, or simply say there's a 30% chance. Three times out of ten, 30% chance, same thing. But the way we're going to put it, for the rest of these lectures, is to talk about a probability of 0.3. Probabilities range from zero to one. One means it's absolutely certain that it's true: if you say the probability is one, that means it's going to happen. Zero means there's no chance it's going to happen; it's certain that it won't happen, that it's not true. So every probability has to fall between those extremes. Between absolute certainty that it won't happen and absolute certainty that it will, when you're somewhere in the middle, between zero and one, you've got different levels of probability. That's the way we talk about it.

Now, probabilities come in many different kinds. The first kind of probability we're going to call a priori probability, because you figure it out prior to experience, prior to any kind of experimentation. So take, for example, a coin. It's got tails, it's got heads. You flip it, and what do you get? Well, you can't see, but that time I actually got tails. What's the probability it'll come up tails?
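The equivalence between the three ways of writing the same probability (three out of ten, 30%, 0.3) can be checked with a quick sketch; this is just an illustration, not part of the lecture:

```python
# The same probability written three ways: 3 out of 10, 30%, 0.3.
wins, races = 3, 10

p = wins / races             # decimal form
print(p)                     # 0.3
print(f"{p:.0%}")            # percentage form: 30%
print(f"{wins} in {races}")  # frequency form: 3 in 10

# Probabilities always lie between 0 (certain not to happen)
# and 1 (certain to happen).
assert 0 <= p <= 1
```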
We tend to just assume that it's a fair coin, which means that the probability of coming up tails is equal to the probability of coming up heads. And if they're equal, when you add them together you should get one, because it's absolutely certain that you're going to get either heads or tails, one or the other. That means you assume the probability of each is 0.5. But all that's just assumption. You're just assuming that the outcomes are equally likely.

Now, let's turn from coins to dice, a little bit more complicated: six sides. You roll one of them, and if you get a two, as I just did, then the odds of getting a two, when there are six sides and you assume they're all equally probable, are going to be one out of six. What if you roll two of them, like we did when we were discussing the gambler's fallacy? Oh, my gosh, I got seven again, pretty cool. [LAUGH] And what's the likelihood of that? Well, how many ways are there to get seven on a pair of dice? You can get one and six. You can get two and five. You can get three and four. But you can also get four and three, five and two, six and one. So there are six ways to get a seven on two dice, and there are 36 possibilities, six times six. So the probability of getting a seven when you roll two dice is six out of 36, or 1/6. But notice that we're just assuming that these dice are fair.

It gets a little more complicated when you have to give up that assumption. So let's go back to coins: no tricks up my sleeve. It's a normal coin, heads, tails. What are the odds of getting heads? [SOUND] Well, two possibilities, heads or tails. We assume they're equally probable. The odds of getting a head are one in two. We can figure out the probability just by assuming that heads and tails are equally likely. And that's a priori probability. And now let's get violent. You take the coin and you put it in your pliers, like this, and you get your hammer and you start [NOISE] and you bend the coin, okay?
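The counting argument for two dice can be spelled out by brute-force enumeration. This is a minimal sketch, assuming (as the lecture does) that all 36 outcomes are equally likely:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely (die1, die2) outcomes.
outcomes = list(product(range(1, 7), repeat=2))
assert len(outcomes) == 36  # six times six

# Count the ways to roll a seven: (1,6), (2,5), (3,4), (4,3), (5,2), (6,1).
sevens = [(a, b) for a, b in outcomes if a + b == 7]
print(len(sevens))                           # 6

# A priori probability = favorable outcomes / total outcomes.
print(Fraction(len(sevens), len(outcomes)))  # 1/6
```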
Good, see, now it's bent. Okay, now let's check the probability again. Here's our bent coin; now we flip it. What are the odds that it's going to come up heads? Heads. Heads. Heads. Pretty good. Heads. Well, I can't get it to come up tails, but sometimes it will, and we don't know how often. It looks like most of the time it comes up heads, but sometimes it'll come up tails. How often, we don't know. The only way to figure that out is to flip it hundreds and hundreds of times and see what percentage comes up heads. That's because once you bend the coin, it's no longer a matter of a priori probability. You can't just assume that the coin has equal chances of landing heads or tails. You have to turn to statistical probability and look at the frequency of the actual flips for that particular bent coin.

Here's one more example of the difference between a priori probability and statistical probability: one of my favorite games, Pass the Pigs. I highly recommend it; it's a really fun game. Now, what about this? You roll the pigs, and sometimes you get a snouter, or a leaner, or a jowler, or a razorback, or a trotter; they're different results. And, ooh, there we go, a double razorback, that's pretty good. And you roll them, ooh, look at that, a leaning jowler, that's a lot of points there. And you have to roll them a lot of different times in order to get the probabilities. And we actually did that. We did 1817 rolls. We got one side up 1174 times, trotter 150, razorback 441, snouter 39, leaning jowler 13. So what are the odds of getting a leaning jowler on one roll in Pass the Pigs? 13 out of 1817. Now notice that you could never get that a priori, just by assuming probabilities. You have to do an actual experiment and look at the frequencies, because this probability is not an a priori probability; it's a statistical probability. And the nice thing about statistical probability is you can apply it to a lot of things besides coins and dice, right?
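The roll counts reported above turn into statistical probabilities just by dividing each count by the total number of trials. A sketch using the lecture's own numbers:

```python
# Observed frequencies from the 1817 recorded rolls described above.
counts = {
    "side": 1174,
    "trotter": 150,
    "razorback": 441,
    "snouter": 39,
    "leaning jowler": 13,
}

total = sum(counts.values())
print(total)  # 1817

# Statistical (empirical) probability = observed count / total trials.
for outcome, n in counts.items():
    print(f"{outcome}: {n}/{total} = {n / total:.4f}")
# e.g. leaning jowler: 13/1817 ≈ 0.0072
```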
What's the probability it's going to rain tomorrow? Well, how do they figure that out? They make a lot of observations of when it rains and under what kinds of circumstances, and they ascribe a probability on the basis of how many times it's rained in similar circumstances in the past. And what's the probability this batter's going to get a hit in baseball? Well, they look at his batting average over the previous part of the season, especially under certain circumstances: against right-handed pitchers versus left-handed pitchers. And you get a probability that he's going to get a hit this time. The same goes for cricket: if you've got a bowler, a spin bowler or a fast bowler, well, this batter might be better against one than the other, and you can figure the probability against the different types of bowlers. So in sports you use statistical probabilities a lot. And in life you use statistical probabilities a lot. When you're trying to decide what kind of car to buy and you want to know the likelihood that a particular car will break down in the first year, you go to Consumer Reports and see how many cars of this sort break down in the first year. So we use statistical probabilities in a lot of different areas.

Really, a priori probabilities are only applicable when you can assume that the outcomes are equally likely. And that's the real difference between a priori probability and statistical or empirical probability. One of them is based on the assumption that the outcomes are equally likely; that's a priori probability. The other is based on empirical evidence about how often things happen, the frequency of events in the world; that's statistical or empirical probability. But no matter which kind of probability you're talking about, a priori or empirical, it still has to follow the same general rules. And it's those rules that we're going to study in the next few lectures.
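To see how frequency data can pin down a probability that you couldn't compute a priori, here is a simulation sketch of the bent-coin idea. The bias of 0.8 is a made-up stand-in for the coin's unknown real tendency; in practice you'd only ever see the flips, not the true value:

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible

TRUE_P_HEADS = 0.8  # hypothetical bias of a bent coin (unknown in real life)

# "Flip" the simulated coin many times and record the outcomes.
flips = [random.random() < TRUE_P_HEADS for _ in range(10_000)]

# The statistical probability is just the relative frequency of heads,
# which homes in on the true bias as the number of flips grows.
estimate = sum(flips) / len(flips)
print(round(estimate, 2))  # close to the true bias of 0.8
```

With only a handful of flips the estimate bounces around; that's why the lecture says you'd need hundreds and hundreds of flips to trust the frequency.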