Hello, and welcome back. Before we continue with sparse modeling, I want to make a side note on the topic of compressed sensing. Compressed sensing basically uses some of the same concepts that we have been learning, but its goal is not to model or represent a signal; the goal is to provide a new sensing paradigm. Let us explain that. We have, as before, a signal x, and we are going to consider that the signal is exactly what we have seen before: a dictionary multiplied by a sparse vector. So, this is the signal that we want to sense. In compressed sensing, we multiply that by a matrix that reduces the number of rows. So, we had a signal which was N dimensional, and we are going to represent it as a dictionary times a sparse vector. But we're not going to sense N points; we're going to sense the product of this Q matrix with x, and Q is what's called a fat matrix: it has N columns, but fewer than N rows. So we are sensing less, and that's why it's called compressed sensing.

Now, let us write that down in formulas. We have D alpha = x, which is what we have here. But we multiply both sides by Q, the sensing matrix. So Q times D defines a new dictionary, this D tilde, times alpha, the same sparse vector, and that is equal not to x but to x tilde, the sensed vector, which is shorter than it was before. So we end up sensing fewer points. And the goal is to recover alpha from x tilde, which is what was sensed, and from the knowledge of D tilde, because we know the sensing matrix and we know the dictionary. So, mathematically, this looks very similar to sparse modeling: we have a signal, we have a dictionary, and we need to reconstruct alpha.

Now, compressed sensing deals with this fundamental problem, and basically the idea is to provide conditions for the recovery to be possible. The conditions have to be on Q, the sensing matrix; on D, the dictionary that we use for the sparse representation; and also on the level of sparsity of alpha. So, compressed sensing provides a very nice mathematical theory and conditions under which we can do the recovery. If we can recover alpha from the sensed x tilde, then we will be able to recover the original signal x by computing D alpha. And, once again, the same way that we have a lot of interesting math in sparse modeling, there's a lot of related math in compressed sensing that provides conditions for this to be possible.

But it's very important to make a clear distinction. What we have been discussing in sparse modeling is a model of signals. Compressed sensing is about designing new sensing protocols. Now, for what type of signals do we design new sensing protocols? Most of compressed sensing has been dealing with the design of new sensing protocols for sparse signals, meaning signals that can be represented with a dictionary in a sparse fashion. So the two use the same type of tools, but they have very different goals. Basically, what compressed sensing is trying to do is extend Nyquist, which is the classical sampling or sensing theory for band-limited signals. Compressed sensing asks: what should be the sampling or sensing theory for sparse signals? And that's the relationship between compressed sensing and sparse modeling. This is a very short note on compressed sensing; we could spend weeks teaching about compressed sensing, but it's not the topic of this week or of this class.
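To make the construction concrete, here is a minimal sketch in Python of the pipeline described above. The dimensions, the random Gaussian matrices, and the choice of scikit-learn's Orthogonal Matching Pursuit as the sparse solver are illustrative assumptions on my part, not something prescribed in this lecture; any sparse recovery method could play that role.

# Minimal sketch of the compressed sensing pipeline described above.
# The sizes, the Gaussian matrices, and the use of scikit-learn's OMP
# solver are illustrative assumptions, not prescribed by the lecture.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, K, M, S = 256, 512, 100, 5   # signal length, dictionary atoms, measurements, nonzeros

# Dictionary D (N x K) with unit-norm columns, and an S-sparse vector alpha
D = rng.standard_normal((N, K))
D /= np.linalg.norm(D, axis=0)
alpha = np.zeros(K)
support = rng.choice(K, size=S, replace=False)
alpha[support] = rng.standard_normal(S)

x = D @ alpha                       # the signal we would like to sense

# "Fat" sensing matrix Q (M x N), M < N: we sense fewer than N points
Q = rng.standard_normal((M, N)) / np.sqrt(M)
x_tilde = Q @ x                     # the sensed (compressed) measurements
D_tilde = Q @ D                     # the effective dictionary Q D

# Recover alpha from x_tilde and D_tilde with a sparse solver (OMP here)
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=S, fit_intercept=False)
alpha_hat = omp.fit(D_tilde, x_tilde).coef_

x_hat = D @ alpha_hat               # reconstruct the full N-dimensional signal
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))

Note that the recovery operates entirely on the M sensed values x tilde and on the effective dictionary Q D; the original N-dimensional signal is only reconstructed at the end, by multiplying the recovered alpha by D.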
But after this short note, we're going to come back in the next video with one more concept in the area of sparse modeling. See you then. Thank you very much.