In this lecture, we'll talk about how we can encode things in a computer using bits, and also about how many bits we need to encode a particular number of unique things. Everything in the computer is in binary, and what's binary? Binary is base two. Why do we care about that? Because we can use ones and zeros to represent anything in a computer. We can represent numbers. We can represent characters. We can represent pictures. We can represent sound. We can represent anything with zeros and ones. That raises two questions, though: first, how do we do it? And second, how many bits do we need to represent a particular number of things? And that requires me to tell you a Christmas story.

Several years ago, my family was trying to figure out who gets to open the first present. That's part of our Christmas tradition, and we wanted to decide it with coin flips. So we were thinking: out of four people (one of my children wasn't there yet), how many coin flips did we have to do to figure out who opened the first present? Now, my son and I came up with one answer, and my wife and daughter came up with a different answer. So you should pause the video for a minute and think about how many coin flips you would need to pick, from four people, the person who gets to open the first present. Once you've figured that out, go ahead and unpause the video and I'll talk through the two different solutions that we came up with that Christmas morning.

Okay, one of the common solutions would be to have a tournament. This is a standard structure that you see in playoffs, in hockey or in some other sport inferior to hockey. You have these ladders, if you will: A plays B and somebody wins, and that would be one coin flip. C plays D and somebody wins, and that would be another coin flip.
And finally, the winners of those first two games play each other, if you will, with one more coin flip. So you can do it with three coin flips, and that's a very natural, normal way to come up with a solution to this problem. However, I'm going to be very dramatic here: we can use the power of binary, and we can actually encode A, B, C, and D, each of those four different things, using two bits. If we encode A as 00 and B as 01 and so on, we can determine who wins with only two coin flips. And I will freely admit that we spent far more time talking about this than it would have taken to do a third coin flip, but that's how my family works.

But the next question is, how many bits do we need to represent a certain number of unique things? This was a really interesting conversation because it gets at how we go about encoding stuff in the computer. If we're going to encode stuff, how many bits do we need? Let's say we have one bit: we can encode two unique values. We can have zero, or we can have one, so two unique values. What if we have two bits? Well, that was the example we just did. We can encode four different things. And if we have three bits, we have eight different possible encodings: 000, 001, 010, 011, 100, 101, 110, and 111. I was counting up in binary, but those are the eight different combinations we can get with three bits.

Now, some of you may have already realized that if we have one bit, we can represent two things, and two to the power of one is two. If we have two bits, we can represent four things, and two to the power of two is four. And if we have three bits, we can represent eight things, and two to the power of three is eight. So, now I'll show you a slide with the biggest font you'll see throughout the course. The relationship is 2^b = n.
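To make the counting concrete, here's a small sketch (my own illustration, not from the lecture) that enumerates every pattern of b bits and confirms you always get 2^b of them; the names `bit_patterns` and `people` are just things I made up for the example:

```python
from itertools import product

def bit_patterns(b):
    """Return every b-bit pattern as a string, e.g. '010'."""
    return ["".join(bits) for bits in product("01", repeat=b)]

# One bit gives 2 patterns, two bits give 4, three bits give 8 -- always 2**b.
for b in (1, 2, 3):
    patterns = bit_patterns(b)
    print(b, len(patterns), patterns)

# The Christmas example: two coin flips (heads=1, tails=0) name one of four people.
people = dict(zip(bit_patterns(2), "ABCD"))
print(people)  # {'00': 'A', '01': 'B', '10': 'C', '11': 'D'}
```

Each coin flip produces one bit, so two flips produce one of the four two-bit patterns, and that pattern directly selects the winner.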
So if we know how many bits we have, we just raise two to the power of b, and that tells us how many unique combinations of those bits we get, which tells us how many unique things we can represent. If you know how many things you want to represent, you can use the inverse relationship, right? We can take the log base two of n, rounded up to a whole number, to calculate b. But I'll tell you, I personally never do that. I go through the process: well, I'm trying to represent 12 things. How do I figure that out? Is three bits enough? No, two to the third is eight. Is four bits enough? Sure, two to the fourth is 16. We can never use partial bits; you can't say "I need 3.7 bits." That's not how it works. You only get whole bits. But I don't usually use the log base two equation, I just use 2^b = n and work my way up to however many bits I need.

So, that's it for this lecture. The two powerful ideas are: we can encode anything with zeros and ones in a computer, and we have to, because computers use binary. And we have this relationship, 2^b = n, that tells us how many bits we need to encode n unique things.
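The trial-and-error process above can be sketched in a few lines. This is my own illustrative version (the function name `bits_needed` is invented for the example): it keeps trying bigger whole values of b until 2^b is at least n, and then checks that the log-base-two route, rounded up, lands on the same answer:

```python
import math

def bits_needed(n):
    """Smallest whole b with 2**b >= n, found the way the lecture
    describes: is 1 bit enough? 2 bits? ... keep going until it is."""
    b = 0
    while 2 ** b < n:
        b += 1
    return b

# Representing 12 things: 3 bits give only 8 patterns, 4 bits give 16.
print(bits_needed(12))           # 4
print(math.ceil(math.log2(12)))  # the inverse relationship, rounded up: also 4
```

The ceiling is what enforces "no partial bits": log base two of 12 is about 3.58, which isn't a usable number of bits, so it rounds up to 4.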