The first example is PAM, pulse amplitude modulation. We split the incoming bit stream into chunks of M bits, so that each chunk corresponds to an integer between zero and 2^M − 1. We call this sequence of integers k(n), and it is mapped onto a sequence of symbols a(n) like so. There is a gain factor G, as always, and then we use the 2^M odd integers around zero. So for instance, if M = 2, the potential values for k(n) are 0, 1, 2, and 3, and, assuming G = 1, a(n) will be either −3, −1, 1, or 3. We will see why we use the odd integers in just a second. The receiver's slicer works by simply associating to the received symbol the closest odd integer, always taking the gain into account. Graphically, PAM for M = 2 and G = 1 looks like this: here are the odd integers. The distance between two transmitted points, or transmitted symbols, is 2G; here G = 1, but in general it is two times the gain. Using odd integers creates a zero-mean sequence: if we assume that each symbol is equiprobable, which is likely given that we've used the scrambler in the transmitter, then the resulting mean is zero. The analysis of the probability of error for PAM is very similar to what we carried out for bilevel signaling; as a matter of fact, bilevel signaling is simply PAM with M = 1. The end result is very similar: an exponentially decaying function of the ratio between the power of the signal and the power of the noise. The reason we don't analyze this further is that we have an improvement in store, aimed at increasing the throughput, that is, the number of bits per symbol we can send, without necessarily increasing the probability of error. So here's a wild idea: let's use complex numbers and build a complex-valued transmission system.
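As a concrete sketch of the PAM mapper and slicer just described (the course gives no code; the function names are mine, and the mapping formula a(n) = G·(2k(n) − 2^M + 1) is one standard way to land on the 2^M odd integers):

```python
# Hedged sketch of a PAM mapper and slicer. The mapping
# a(n) = G * (2*k(n) - 2**M + 1) places the 2**M symbols
# on the odd integers around zero, scaled by the gain G.

def pam_map(k, M, G=1.0):
    """Map integers 0..2**M-1 onto scaled odd integers around zero."""
    return [G * (2 * ki - 2**M + 1) for ki in k]

def pam_slice(a_hat, M, G=1.0):
    """Recover k(n) by snapping each noisy symbol to the nearest valid odd integer."""
    out = []
    for s in a_hat:
        k = round((s / G + 2**M - 1) / 2)  # invert the mapping, then round
        out.append(min(max(k, 0), 2**M - 1))  # clamp to the alphabet
    return out

# M = 2, G = 1: k in {0, 1, 2, 3} maps to the symbols {-3, -1, 1, 3}
print(pam_map([0, 1, 2, 3], M=2))        # [-3.0, -1.0, 1.0, 3.0]
print(pam_slice([-2.6, 0.9, 3.4], M=2))  # noisy symbols snap back: [0, 2, 3]
```

Note how the slicer simply inverts the affine mapping and rounds, which is exactly "associate the closest odd integer, taking the gain into account."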
This requires a certain suspension of disbelief for the time being, but believe me, it will work in the end. The name for this complex-valued mapping scheme is QAM, an acronym for quadrature amplitude modulation, and it works like so. The mapper takes the incoming bit stream and splits it into chunks of M bits, with M even. It then uses half of the bits to define a PAM sequence, which we call a_r(n), and the remaining M/2 bits to define another, independent PAM sequence, a_i(n). The final symbol sequence is a sequence of complex numbers, where the real part is the first PAM sequence and the imaginary part is the second PAM sequence; and of course, in front, we have a gain factor G. So the transmission alphabet A is given by points in the complex plane with odd-valued coordinates around the origin. The receiver's slicer works by finding the symbol in the alphabet which is closest in Euclidean distance to the received symbol. Let's look at this graphically. This is the set of points for QAM transmission with M = 2, which corresponds to two bilevel PAM signals, one on the real axis and one on the imaginary axis; that resolves into four points. If we increase the number of bits per symbol and set M = 4, that corresponds to two PAM signals with two bits each, which makes for a constellation (this is what these arrangements of points in the complex plane are called) of four by four points at the odd-valued coordinates in the complex plane. If we increase M to eight, then we have a 256-point constellation with 16 points per side. Let's look at what happens when a symbol is received, and how we derive an expression for the probability of error. If this is the nominal constellation, the transmitter will choose one of these values for transmission, say this one; this value will be corrupted by noise in the transmission and in the receiving process, and will appear somewhere in the complex plane, not necessarily exactly on the point it originates from.
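The split-the-bits construction can be sketched as follows (a minimal illustration, not the course's code; the bit-splitting convention, high bits to the real part and low bits to the imaginary part, is my assumption):

```python
# Sketch of a QAM mapper, assuming M even: the high M/2 bits of each
# chunk index one PAM sequence (real part), the low M/2 bits another
# (imaginary part). Each axis carries the odd integers around zero.

def qam_map(k, M, G=1.0):
    """Map integers 0..2**M-1 onto a square 2**M-point QAM constellation."""
    assert M % 2 == 0, "M must be even for a square constellation"
    half = M // 2
    symbols = []
    for ki in k:
        k_re = ki >> half             # high M/2 bits
        k_im = ki & (2**half - 1)     # low M/2 bits
        a_re = 2 * k_re - 2**half + 1  # PAM level on the real axis
        a_im = 2 * k_im - 2**half + 1  # PAM level on the imaginary axis
        symbols.append(G * complex(a_re, a_im))
    return symbols

# M = 4: a 4-by-4 constellation; both coordinates are odd integers
const = qam_map(list(range(16)), M=4)
print(sorted({s.real for s in const}))  # [-3.0, -1.0, 1.0, 3.0]
```

By symmetry the constellation is zero-mean, just like the underlying PAM alphabets.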
The slicer operates by defining decision regions around each point in the constellation. So suppose this point here is the transmitted point; its decision region is a square of side 2G centered around it. What happens is that received symbols will not fall exactly on the original point, but as long as they fall within the decision region, they will be decoded correctly. So for instance here, we decode correctly; here we decode correctly; same here. But this point, for instance, falls outside the decision region, and therefore it will be associated to a different constellation point, thereby causing an error. To quantify the probability of error, we assume, as per usual, that each received symbol is the sum of the transmitted symbol plus a noise sample, η(n). We further assume that this noise is complex-valued Gaussian noise, with equal variance in the real and imaginary components. We're working on a completely digital system that operates with complex-valued quantities, so we're making a new model for the noise; we will see later how to translate the physical, real noise into a complex variable. With these assumptions, the probability of error is equal to the probability that the real part of the noise is larger than G in magnitude, plus the probability that the imaginary part of the noise is larger than G in magnitude. We assume that the real and imaginary components of the noise are independent, which is why we can split the probability like so. If you remember the shape of the decision region, this condition is equivalent to saying that the noise pushes the real part of the point outside of the decision region, in either direction, and same for the imaginary part. If we develop this, it is equal to one minus the probability that the real part of the noise is less than G in magnitude and, at the same time, the imaginary part of the noise is less than G in magnitude.
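Because the decision regions are axis-aligned squares of side 2G, the minimum-distance slicer factors into two independent one-dimensional PAM slicers, one per axis. A sketch (my own helper names, assuming the square constellation defined earlier):

```python
# Square-decision-region slicer: slicing the real and imaginary parts
# independently (nearest valid odd integer, clamped to the alphabet)
# yields the constellation point closest in Euclidean distance.

def slice_axis(x, half_M, G=1.0):
    """One-dimensional PAM slicer for one axis of the constellation."""
    k = round((x / G + 2**half_M - 1) / 2)
    k = min(max(k, 0), 2**half_M - 1)  # clamp to valid levels
    return G * (2 * k - 2**half_M + 1)

def qam_slice(s, M, G=1.0):
    """Decode a received complex symbol to the nearest constellation point."""
    h = M // 2
    return complex(slice_axis(s.real, h, G), slice_axis(s.imag, h, G))

# a received point inside the decision square of 1+1j decodes correctly...
print(qam_slice(complex(1.4, 0.2), M=4))   # (1+1j)
# ...while this one falls outside and causes an error
print(qam_slice(complex(2.3, 0.2), M=4))   # (3+1j)
```

This factorization is the same independence that lets us split the probability of error into a real-part term and an imaginary-part term.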
This is the complementary condition to what we just wrote above, and so it is equal to one minus the integral, over the decision region D, of the complex-valued probability density function of the noise. In order to compute this integral, we approximate the shape of the decision region with the inscribed circle: instead of using the square, we use a circle centered around the transmitted point. When the constellation is very dense, this approximation is quite accurate. With this approximation, we can compute the integral exactly for a Gaussian distribution, and if we assume that the variance of the noise is σ₀²/2 in each component, real and imaginary, it turns out that the probability of error is e^{−G²/σ₀²}. Now, to obtain the probability of error as a function of the signal-to-noise ratio, we have to compute the power of the transmitted signal. If all symbols are equiprobable and independent, the variance of the signal is G² times 1/2^M (the probability of each symbol) times the sum, over all symbols in the alphabet, of the magnitude of the symbol squared. It's a little tedious, but we can solve this exactly as a function of M, and it turns out that the power of the transmitted signal is σ_s² = G² · (2/3) · (2^M − 1). If we plug this into the formula for the probability of error that we've seen before, the result is an exponential function whose argument is −(3/2) times the signal-to-noise ratio divided by 2^M − 1, which for large M is approximately −3 · 2^{−(M+1)} times the signal-to-noise ratio. We can plot this probability of error on a log-log scale, like we did before, and parametrize the curve as a function of the number of points in the constellation. Here you have the curve for a four-point constellation, here the curve for 16 points, and here the curve for 64 points.
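The "tedious but exact" power computation is easy to verify numerically. A sketch that averages |a|² over the square constellation and compares it against the closed form (function name mine):

```python
# Numerical check of the signal-power formula derived above:
# sigma_s^2 = (2/3) * G^2 * (2**M - 1) for a square 2**M-point
# constellation with equiprobable symbols on odd-integer coordinates.

def qam_power(M, G=1.0):
    """Average |a|^2 over the square 2**M-point constellation."""
    side = 2**(M // 2)                         # points per side
    coords = [G * (2 * k - side + 1) for k in range(side)]
    total = sum(abs(complex(x, y))**2 for x in coords for y in coords)
    return total / side**2                     # equiprobable symbols

M, G = 4, 1.0
print(qam_power(M, G))                   # 10.0 (brute-force average)
print((2.0 / 3.0) * G**2 * (2**M - 1))   # 10.0 (closed form)
```

The per-axis mean square for M = 4 is (9 + 1 + 1 + 9)/4 = 5, and the two independent axes add, giving 10, exactly (2/3)(2⁴ − 1).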
Now you can see that, for a given signal-to-noise ratio, the probability of error increases with the number of points. Why is that? Well, if the signal-to-noise ratio remains the same, and we assume that the noise is always at the same level, then the power of the signal remains constant as well. In that case, if the number of points increases, G has to become smaller in order to accommodate a larger number of points at the same power. But if G becomes smaller, then the decision regions become smaller, the separation between points becomes smaller, and the decision process becomes more vulnerable to noise. Now that we have the performance curves for QAM, it's time to see why we go through the trouble of defining a complex-valued transmitter. Here on this plot we have the probabilities of error for equivalent PAM and QAM systems; by equivalent, I mean that we use the same number of bits per symbol, and therefore the signaling alphabets have the same cardinality. We haven't formally derived the expression for the probability of error of PAM, but you can find it in the literature. What this plot shows is that, for a given probability of error, I can generally use two more bits per symbol with QAM than with PAM. So for instance, at this operating point, at a given signal-to-noise ratio, either I use two bits per symbol with PAM (the dashed curve) or four bits per symbol with QAM (the green curve). And similarly here, at a different signal-to-noise ratio, I can use either four bits with PAM or six bits with QAM. So in the end, QAM is more spectrally efficient than PAM, in the sense that for a given reliability figure I can send more data over the same bandwidth. So in the end, here's the final recipe to design a QAM transmitter. First, you pick a probability of error that you can live with; in general, 10⁻⁶ is an acceptable probability of error at the symbol level.
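The shrinking-gain argument can be put in numbers. Inverting the power formula gives G² = (3/2)·σ_s²/(2^M − 1), so at fixed signal power the gain, and with it the 2G spacing between points, shrinks rapidly as M grows (a sketch; the power value 10 is an arbitrary illustration number):

```python
import math

# At fixed signal power sigma_s^2, inverting sigma_s^2 = (2/3) G^2 (2^M - 1)
# gives G = sqrt((3/2) * sigma_s^2 / (2**M - 1)): more points, smaller gain,
# smaller decision regions, more vulnerability to noise.

def qam_gain(sigma_s2, M):
    """Gain G so that a 2**M-point square constellation has power sigma_s2."""
    return math.sqrt(1.5 * sigma_s2 / (2**M - 1))

for M in (2, 4, 6, 8):
    print(M, round(qam_gain(10.0, M), 3))  # G shrinks as M grows
```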
Then you find the signal-to-noise ratio that is imposed by the channel's power constraint. Once you have that, you can find the size of your constellation by finding M, which, based on the previous equations, is M = log₂(1 − (3/2) · SNR / ln(p_err)): the log in base two of one minus three halves times the signal-to-noise ratio divided by the natural logarithm of the probability of error. Of course, you will have to round this to a suitable integer value, and in particular to an even value, in order to have a square constellation. The final data rate of your system will be M, the number of bits per symbol, times W, which, if you remember, is the baud rate of the system and corresponds to the bandwidth allowed for by the channel. So: we know how to fit the bandwidth constraint via upsampling; with QAM, we know how many bits per symbol we can use given the power constraint; and so we know the theoretical throughput of the transmitter for a given reliability figure. However, the question remains: how are we going to send complex-valued symbols over a physical channel? It's time, therefore, to end the suspension of disbelief, and look at techniques to do complex signaling over a real-valued channel.
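The whole recipe fits in a few lines. A sketch of the design procedure (the SNR, target error rate, and baud rate below are made-up illustration numbers, not values from the course):

```python
import math

# Sketch of the QAM design recipe: given the SNR imposed by the channel's
# power constraint, a target symbol error rate, and the baud rate W,
# find the number of bits per symbol M and the resulting data rate.

def qam_design(snr, p_err, W):
    """Return (M, data rate in bits/s) for a square QAM constellation."""
    M = math.log2(1 - 1.5 * snr / math.log(p_err))
    M = 2 * int(M // 2)   # round down to an even number of bits (square grid)
    return M, M * W

# e.g. SNR of 1000 (30 dB), symbol error rate 1e-6, baud rate 8000 symbols/s
M, rate = qam_design(1000.0, 1e-6, 8000)
print(M, rate)   # 6 bits/symbol -> 48000 bits/s
```

Rounding M down keeps the actual error probability at or below the target, at the cost of some throughput.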