So Linear Predictive Coding, or LPC, is the model that is most commonly used in speech coding. So let's see how we can compute all of the parameters using the LPC model. The current sample x(n) is related to the past samples x(n-i) and some input. To predict the current value we typically use p past samples, and p is the order of the LPC. In practice we use 8 to 16 samples, so the LPC order is anywhere from 8 to 16.

Given a speech segment of length N, we want to compute all these parameters: a(i), the gain factor G, and the input. Typically we choose N to correspond to about 20 milliseconds of speech, the duration over which it is reasonable to assume that speech is short-term stationary; for wideband speech sampled at 16 kilohertz, N is 320. The LPC filter itself is defined by the coefficients a(i), and I think that's all I need to say about this slide. Let's move to the next one.

So in the LPC formulation, since we assume that we can fairly faithfully represent the current sample using p past samples, there is an error that we make. And this is really the ignorance of the model: whatever is not captured by the linear predictive model is captured by the error. So the trick is to minimize this error to compute the best possible set of filter coefficients. The capital E(n) is the total error in the current 20-millisecond frame that we are operating on. So if we differentiate E(n) with respect to each of the filter parameters a(i) and then set the result to 0, this gives us a set of p linear equations in the p unknowns a(i). And this set of equations we can solve using linear algebra to get the filter parameters. So in the equation that I show here, R(i) is the auto-correlation function of the N samples that we are working with. Essentially, if you take the signal x(m), multiply it by a shifted version of itself, and add up the products, you get the auto-correlation function.
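As a concrete illustration of what was just described, here is a minimal sketch in Python with NumPy (the function name and the direct linear-algebra solve are my own choices; the lecture's recursive solution comes later) that forms R(i) for a frame and solves the p normal equations:

```python
import numpy as np

def lpc_autocorrelation(x, p):
    """Estimate the LPC coefficients a(1)..a(p) for one frame x by
    solving the normal equations sum_k a(k) R(|i-k|) = R(i), i=1..p."""
    N = len(x)
    # Auto-correlation R(i): multiply the signal by a shifted version
    # of itself and add up the products, for lags i = 0..p.
    R = np.array([np.dot(x[: N - i], x[i:]) for i in range(p + 1)])
    # Toeplitz matrix R(|i-k|): symmetric, each descending diagonal
    # holds the same value.
    T = np.array([[R[abs(i - k)] for k in range(p)] for i in range(p)])
    a = np.linalg.solve(T, R[1:])
    return a, R
```

A generic solver is used here for clarity; exploiting the Toeplitz structure is what makes the recursive solution on the next slide more efficient.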
And R(0) is the maximum value of the auto-correlation function, and the function has the same periodicity as the signal x(m). So if x(m) is periodic with the pitch period, that periodicity is also reflected in the auto-correlation function. So using R(i), we build the matrix R(|i-k|), and this matrix has a special structure: it is called a Toeplitz matrix, in which each descending diagonal has the same value, and it is symmetric. So there are efficient recursive solutions, called the Levinson-Durbin recursion, to solve for a(1), then a(2), then a(3), and so on, up to a(p). And that is how we get the filter parameters.

Then we are left with the gain of the filter G and the actual input that we provide to the filter. Since we have already computed the parameters a(i) and R(i), we can use the equation here to estimate the value of G. If we first estimate the pitch period, then the voiced/unvoiced decision comes as a by-product. So as I said before, the auto-correlation function R(i) is itself periodic, with the same period as the input signal x(m). Computing the auto-correlation directly tends to be quite expensive from a processing point of view, so instead we can low-pass filter x(m), down-sample it, maybe to one kilohertz, and then find the auto-correlation of that. And then we can use the periodicity to estimate the pitch.

There is another really clever solution, called center clipping. We look at the peak-to-peak range of the signal, and everything in the middle, let's say below half of the maximum, is set to 0, so you just see some of the bumps. And then we can compute the auto-correlation of the center-clipped signal and look for the pitch period. If we see a very strong peak compared to the value at lag 0, we have a voiced signal, and the input u(n) is a train of impulses. Otherwise, we just use a random signal. And using these values, a(i), G, the pitch period, and the voiced/unvoiced decision, the decoder can reconstruct the signal.
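Because the matrix is Toeplitz, the Levinson-Durbin recursion solves the system in O(p²) operations instead of the O(p³) of a general solver. A minimal sketch, under the standard LPC assumption (not stated explicitly in the lecture) that the gain satisfies G² = E, the final prediction-error energy:

```python
import numpy as np

def levinson_durbin(R, p):
    """Recursively solve the Toeplitz normal equations for the
    predictor coefficients a(1)..a(p).  Returns (a, E), where E is
    the prediction-error energy; the gain can be taken as G = sqrt(E)."""
    a = np.zeros(p)
    E = R[0]
    for i in range(1, p + 1):
        # Reflection coefficient for order i.
        k = (R[i] - np.dot(a[: i - 1], R[i - 1 : 0 : -1])) / E
        a_new = a.copy()
        a_new[i - 1] = k
        # Update the lower-order coefficients.
        for j in range(i - 1):
            a_new[j] = a[j] - k * a[i - 2 - j]
        a = a_new
        E *= 1.0 - k * k
    return a, E
```

Each pass raises the predictor order by one, solving for a(1), then a(2), and so on up to a(p), exactly as described above.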
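The center-clipping pitch detector can be sketched as follows. This is my own illustration, not the lecture's exact recipe: the 0.5 clipping fraction, the 50-400 Hz search range, and the 0.3 voicing threshold are assumed values, and I use the common variant of center clipping that subtracts the clipping level from the surviving peaks:

```python
import numpy as np

def center_clip(x, frac=0.5):
    """Zero out everything whose magnitude is below frac * max|x|,
    keeping only the strongest bumps (frac = 0.5 is an assumption)."""
    c = frac * np.max(np.abs(x))
    return np.where(np.abs(x) > c, x - np.sign(x) * c, 0.0)

def pitch_estimate(x, fs, fmin=50.0, fmax=400.0, voiced_threshold=0.3):
    """Estimate pitch from the auto-correlation of the center-clipped
    frame.  A strong peak relative to lag 0 means voiced; otherwise
    unvoiced.  Search range and threshold are illustrative."""
    y = center_clip(x)
    N = len(y)
    lags = np.arange(int(fs / fmax), int(fs / fmin) + 1)
    R = np.array([np.dot(y[: N - l], y[l:]) for l in lags])
    R0 = np.dot(y, y)
    if R0 == 0:
        return False, 0.0
    best = np.argmax(R)
    voiced = R[best] / R0 > voiced_threshold
    return voiced, (fs / lags[best]) if voiced else 0.0
```

For a voiced frame the function returns the pitch in hertz; the decoder would then drive the LPC filter with an impulse train at that period, or with a random signal for an unvoiced frame.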