Hello, and welcome back. In this video we're going to show a different technique for image inpainting, one based on the calculus of variations. Instead of starting with a partial differential equation from the very beginning, we're going to start with an energy formulation that, through the Euler–Lagrange equation, is transformed into a differential equation. We're going to keep the same concepts that we learned before: if we have a region of missing information, we try to mold in the continuation of the edges, as we represent here, and then fill in the color. Here we do both of them at the same time, as we are going to see in a second, but we want to keep that important concept of continuing the edges and then letting the color flow in, as we say, like water. Why do I say like water? Because some of these equations are closely related to the mathematical equations that model the motion of fluids and gases in nature. So this analogy of letting the pixel values flow like water is not artificial: what we are doing here are transport equations of the same type. So we are going to derive a variational formulation that carries out this type of process.

Now, let us start by assuming that somebody came and already told us the boundaries. Somebody basically said: I don't really know what happened in there, but here are the boundaries, and I'm going to extend them for you. In other words, they have given us the normalized gradient at every single pixel inside the region that we need to inpaint. Remember the normalized gradient: we say that theta is the gradient divided by its magnitude. We have to be a bit careful when the magnitude is zero, because then there is no gradient, so we leave that case aside for the moment. So this is what we were given: somebody gave us basically the gradient, and if you plug that in here, you get the gradient squared divided by the gradient magnitude, which is just the gradient magnitude again.

The goal now is to find the image, inside the region to be inpainted, that is consistent with this gradient, and we do that with a variational formulation. We optimize for the image in such a way that it is consistent with the gradient that we were given. That consistency is exactly what we have here: when the image satisfies this, this term becomes zero, and what's inside here is as small as possible. Once again, the basic idea is that you look for an image such that, when you take its gradient and normalize it, you get back what was given to you. You look for consistency. The integral is over the region to be inpainted and a small band around it, from which we take the information to propagate in; in that band we have the real image, and therefore the given gradients are those of the real image. That's why, as I said before, you optimize over the region to be inpainted and a small band around it.

So this is your equation: you're looking for consistency of the image with the gradients that you were given. The Euler–Lagrange of this is what we have here. As we have seen many times, we deform the image in time according to the Euler–Lagrange equation and then we go to steady state. At steady state, remember this is the divergence of this vector, we get that the divergence of the normalized gradient of the image is equal to the divergence of the normalized gradient that we were given, and then we have an image that is consistent with the information we were given. That's exactly what we wanted.
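To make that steady-state evolution concrete, here is a minimal sketch in Python/NumPy, written for this transcript rather than taken from the course. It assumes we are simply handed the normalized gradient field theta inside the mask; the small eps handles the zero-gradient pixels mentioned above, and the function names, time step, and iteration count are illustrative choices.

```python
import numpy as np

def divergence(fx, fy):
    # Divergence of a 2D vector field (fx, fy) using finite differences.
    return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

def recover_from_theta(I, theta_x, theta_y, mask, n_iters=2000, dt=0.1, eps=1e-6):
    """Evolve I inside `mask` until div(grad I / |grad I|) matches div(theta).

    I                : grayscale image (float), known and kept fixed outside `mask`
    theta_x, theta_y : the given normalized gradient field
    mask             : boolean array, True where the image must be inpainted
    """
    I = I.astype(float).copy()
    target = divergence(theta_x, theta_y)       # div(theta), fixed during the evolution
    for _ in range(n_iters):
        gy, gx = np.gradient(I)                 # image gradient
        mag = np.sqrt(gx**2 + gy**2) + eps      # avoid dividing by zero where the gradient vanishes
        curv = divergence(gx / mag, gy / mag)   # divergence of the normalized gradient
        # Gradient descent on the consistency energy: push curv toward div(theta),
        # only inside the region to be inpainted; the band around it stays fixed.
        I[mask] += dt * (curv - target)[mask]
    return I
```

At steady state the update vanishes, which is exactly the condition that the divergence of the normalized gradient of the recovered image equals the divergence of the given field.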
Now, if we do that, for example, for this image: we basically remove the letter I, there's no I here, we block it. But remember, somebody gave us the gradients, the normalized gradients, which is equivalent to giving us the edges, and then we can recover it. The gray values are not exactly what we want yet; we are going to repair that very soon. But you can observe the basic geometry here, and that's because somebody gave us theta, the normalized gradient. That's a first step. Now we need to obtain those normalized gradients ourselves, because in a real scenario we have nothing in there.

And here is the complete equation. Here is what we had before: somebody gave us theta, and we recovered an image consistent with theta; that is this term. What this other term is doing is exactly helping us to propagate the gradients inside the region to be inpainted; I'm going to explain that more in just one second. Again, the integral is, as before, over the same region, and a, b, and c are just parameters that control the relative weight of these terms. And we are minimizing not only over I, as in the previous slide; we also minimize over theta. If we had theta, the first term is all we need, but here we're building the edges and the image consistent with the edges at the same time. Here is the consistency, and here is the building of the edges: what we have here is a smooth continuation of edges, as we learned from the professional restorers.

Let us look at this term for a second. If theta is the normalized gradient, as we write it here, what is the divergence of this? We saw that in the previous week: the divergence of the normalized gradient is the curvature. This term is the curvature. Of what? Of the level lines of my image, which are basically the edges. So we're saying: propagate the edges inside in such a way that the curvature doesn't go crazy. If the curvature does go crazy, we're going to pay a lot of penalty for it, and p is a parameter that says how much penalty we want to pay. So basically we're doing a smooth propagation of edges, such that the curvature is relatively mild inside, and at the same time we're recovering an image that is consistent with the edges that we're propagating. This remaining term is there to make everything even smoother and more regular, and it has a mathematical justification: with this term, for this complete formulation you can prove a lot of beautiful mathematics, and without it things become more difficult and what is called ill-posed, not very well defined. So it's a technical term that we add to make the equation behave well.

Now, how do we solve this? Euler–Lagrange equations again, but now we have two unknowns, theta and the image. It's exactly the same: you keep I constant and compute the Euler–Lagrange with respect to theta, which gives you one equation; then you keep theta constant and compute the Euler–Lagrange with respect to I, which gives you the second equation. So instead of one equation, as before when we knew theta, you get two coupled equations: one that evolves I, and one that evolves theta.
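For reference, the joint energy being described has roughly the following form. This is a sketch reconstructed from the description above, not copied from the slide: a, b, c are the weights and p the curvature penalty mentioned in the lecture, k is the extra smoothing kernel that makes the problem well posed, and Omega stands for the inpainting region plus its surrounding band.

$$
\min_{I,\ |\theta|\le 1}\ \int_{\Omega} |\operatorname{div}\,\theta|^{p}\,\bigl(a + b\,|\nabla (k * I)|\bigr)\,dx \;+\; c\int_{\Omega} \bigl(|\nabla I| - \theta\cdot\nabla I\bigr)\,dx
$$

The first integral penalizes the divergence of theta, the curvature of the propagated level lines, so the extended edges stay smooth; the second is the consistency term from the previous slide. Doing gradient descent alternately in I and in theta produces those two coupled evolution equations.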
You solve both of them at the same time: it's like fixing one for a short time and solving for the other, then fixing that one and solving for the first, and you iterate. And you get these coupled partial differential equations in both I and theta. It is very elegant, because we can think about what we want and design a variational formulation that achieves it for us. We want the image to be consistent with the edges: we designed for that. We want the edges to continue smoothly, and we know smoothness here means curvature, so we designed for that. We put that into an energy that penalizes not achieving what we want, we compute the Euler–Lagrange equations, and we solve them.

Let us see some examples. Again, we start with artificial examples. We block this line, so this is what we get by blocking it, and by running this equation we smoothly continue it: a very nice continuation. A similar example to what we had before: we block this and we get a smooth and nice continuation. We do the same here: we block this cross and we get a smooth continuation. Now you may ask: why did we get a smooth continuation of the bright region and not the dark one? This is a deterministic algorithm; when you implement it, it simply chose one over the other. There's no way for the algorithm to know which one you might want.

Here we have the famous @ symbol. We block it and look: we don't get the @ symbol back, we get this. This is like the example with the chair that I showed at the very beginning of this week. The computer doesn't know about the @ symbol; it is just looking for the minimal-energy completion. Now, the completion, the inpainting, is perfectly fine; it is just not the @ symbol we started with. But remember, image inpainting is trying to give you back something that looks okay, and there's no way for the computer to know that what you wanted was the @ symbol rather than this. So you get a perfectly fine result; actually, this completion has lower energy, according to the energy we just defined, than the @ symbol, and that's why the algorithm went to it. Perfectly fine, but it lacks the higher-level information about the @ symbol, which cannot be captured with this type of local inpainting algorithm.

It works nicely for artificial examples; let's see if it works for real ones. Here is a nice picture of Groucho Marx, and then we have regions that we want to inpaint. Here we see the evolution, and this is the result: a nice recovery, filling in those regions. Another example, as we have seen before: we have letters, and the letters get removed, so the background pixels flow in by solving this variational formulation.

Now, those were simple examples, and I want to conclude with something that goes back to the very beginning, when I said we are inpainting structure and colors. What about the noise, the granularity of the image? This is one way of handling that, and even going beyond it. You start from an image, and these are the regions that we want to inpaint. You first take this image and decompose it into two parts, and I'm going to say in a second how we do that; let me just explain the diagram first. One part is, let's say, smooth: it has no granularity, no texture at all. The other part is where all the texture has gone. Now you inpaint the structure part with the techniques I just showed you.
You basically continue the boundaries and continue these flat colors inside, and you get a very beautiful continuation. For the texture, the granularity, you use a technique that is designed to inpaint granularity; we're going to explain that technique in the next video. See, for example, how nicely the texture has been continued here. Then you add those two parts together again, and you get this great reconstruction, where the structure is continued, the colors are continued, and the texture and granularity are continued as well. Here is the structure and the colors; here is the texture.

How do we do this decomposition? There are a few techniques out there that follow ideas developed by Yves Meyer, to give one example, and we get a decomposition into something which is piecewise smooth, without a lot of oscillations and variations, and something that contains all the oscillations and variations. You inpaint the smooth part with the techniques that I just showed you, either the partial-differential-equation-based one or the variational one, for example. You inpaint the texture, the granularity, with something we are going to discuss next, which is a kind of cut-and-paste technique. Then you add them back, you get a beautiful reconstruction, and you get everything in one image.

So in the next video I'm going to talk a bit more about this type of cut-and-paste technique. You actually already know it, and I'm going to remind you what it is in the next video. I'm looking forward to that. Thank you very much.
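As a closing illustration, here is a rough sketch of the decompose, inpaint, and recombine pipeline just described, written for this transcript rather than taken from the course. A Gaussian filter is used only as a crude stand-in for the Meyer-type structure/texture decomposition, and `inpaint_structure` and `inpaint_texture` are hypothetical callables, for example the variational scheme sketched earlier and a patch-based texture synthesizer.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, sigma=3.0):
    """Crude structure/texture split: a smoothed image stands in for the
    piecewise-smooth structure part; the residual carries the texture."""
    structure = gaussian_filter(image.astype(float), sigma)
    texture = image - structure
    return structure, texture

def inpaint_by_decomposition(image, mask, inpaint_structure, inpaint_texture, sigma=3.0):
    """Inpaint structure and texture separately inside `mask`, then add them back."""
    structure, texture = decompose(image, sigma)
    filled_structure = inpaint_structure(structure, mask)   # e.g. the variational scheme above
    filled_texture = inpaint_texture(texture, mask)         # e.g. a cut-and-paste texture method
    return filled_structure + filled_texture
```

The design point is simply that each component goes to the algorithm suited to it: geometry and flat color to the PDE or variational inpainting, oscillatory detail to a texture-synthesis method, with the final image obtained by adding the two results.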