0:00

Hello and welcome back. Let us now describe the first image inpainting model. It is going to be based on the partial differential equations that we learned last week. But again, this is going to be self-contained; we are going to understand exactly what kind of partial differential equation we are going to use.

When we talk about image inpainting, we talk about the following. We have an image, here, and we have a region where we want to change the content of the image. In this case it's the cable, here. Someone is going to provide us this image. Image inpainting includes the setting where we basically have these two inputs into the system.

We are not going to detect the region; what we want to change is very often a very subjective matter. Maybe somebody wants to change the shoes; in this case, we want to change this. Now, how do we detect these regions? We talked a lot about segmentation, so we can segment out objects, and those become the regions of missing information. We are going to call them regions to be inpainted. With these two images, we want to make, basically, in this case, the cable disappear.

We want to inpaint these regions. And the first topic we are going to discuss is how to inpaint with information from the surrounding regions. We are not going to take information from very far away; we are only going to use information from the surrounding regions. So we get two images: the original image and a binary mask representing the regions that we want to inpaint.

When I personally started working on image inpainting with my group, we visited a professional restorer, a professional curator, at the Minneapolis Institute of Arts. And we asked this person: how do you inpaint, how do you restore? And it is very interesting, because they were doing it the way that children draw. Here is the illustration.

We have a region to be inpainted and we have the image. First, they continue the boundaries of the region. So if there is a boundary here, going into the region to be inpainted, they continue that boundary inside, the same way that a child would basically draw the outlines of the objects. Then, they complete the color. So first they continue the boundaries, then they propagate the colors inside the region. And lastly, they normally add a kind of noise, because paintings (and in this case it was a professional restorer of paintings) don't look flat, so you have to add this granularity to make the result look more natural.

And we are going to talk about these steps. First we will talk about the first two. Then we are going to discuss a bit the third step, for which there are many techniques. So remember, there is a continuation of the geometry, basically the boundary, and then a filling-in with the colors. It's as if this were water: we are going to let the water flow into the region, but because we have the boundary, the right water is going to flow into the right regions. Now, the job is to design a partial differential equation that emulates this process.

And here it is. We have basically the image outside, and we have a region of missing information inside, and we want to propagate the information from outside in. The way we are going to do that is with this equation. This is the first component of image inpainting with partial differential equations. So what is happening here? L is the information that we want to propagate. I am not telling you yet what that information is; L is simply the information that we want to propagate. And N is the direction in which we are propagating that information. Now look what happens here.

We have the gradient of L, meaning the change of L, in inner product with N, equal to zero: ∇L · N = 0. The change of L is projected, because of the inner product, onto N, the direction of propagation, and we want that projection to be equal to zero. The gradient of L is of course a vector, the gradient of the information, and we want that vector to be perpendicular to the direction in which we are propagating. So this is the gradient of L, and this is the direction N, and we want them to be perpendicular, in the sense that we want to propagate the information L in such a way that it does not change in the direction of propagation. That is exactly what propagating information means: I move it in a direction along which it is constant. That is what this equation is basically telling us.
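To make the geometry concrete, here is a quick numerical sanity check in NumPy (my own illustration, not code from the lecture): for a toy information field L with constant gradient, picking N perpendicular to ∇L gives ∇L · N = 0, and moving a point along N leaves L unchanged.

```python
import numpy as np

# A toy "information" field: L(x, y) = x + 2*y, so grad L = (1, 2) everywhere.
def L(x, y):
    return x + 2.0 * y

grad_L = np.array([1.0, 2.0])
# Choose a propagation direction N perpendicular to grad L.
N = np.array([2.0, -1.0]) / np.sqrt(5.0)

# The equation grad L . N = 0 holds (up to roundoff):
print(grad_L @ N)

# And moving along N leaves L unchanged: L is constant in the
# direction of propagation, which is exactly what the equation encodes.
p = np.array([3.0, 1.0])
for t in [0.5, 1.0, 2.0]:
    q = p + t * N
    print(L(*p), L(*q))  # equal up to roundoff
```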

L, the information, is constant in the direction of propagation, constant along those directions. Now, how do we solve this with partial differential equations? Assume that the image is I. We have this condition, and we want to deform the image, to change the image, with a partial differential equation, in such a way that at the end we get this expression equal to zero. We have seen how to do that when we were talking about Euler-Lagrange equations. When we want something to be equal to zero, how do I deform the image in order to accomplish that?

Let's think for a second how to do that. Remember last week. How do we do that? Very simple. We basically do this; let me just go back one second. We make the image change in time according to what we want. When this no longer changes, meaning at steady state, we get the expression equal to zero. That is the definition of steady state: no more change. In practice, on the computer, when the change is very small we can stop. When this is ideally zero, we got what we wanted.

L is then constant in the direction of propagation. So this is actually a very simple trick. Every time you want to achieve something like A = 0, whatever A is, some quantity computed from the image, you evolve ∂I/∂t = A and run it to steady state. That is basically what we did for the Euler-Lagrange equations. So this is the basic equation for this type of image inpainting. Now, we haven't finished.
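The steady-state trick just described fits in a few lines of code. This is a sketch with a hypothetical scalar A(x) = 5 - x standing in for the image quantity; the names and step size are my assumptions for illustration, not part of the lecture.

```python
def run_to_steady_state(A, x0, dt=0.1, tol=1e-9, max_iter=100000):
    """Evolve dx/dt = A(x) forward in time until A(x) is (almost) zero."""
    x = x0
    for _ in range(max_iter):
        step = A(x)
        if abs(step) < tol:   # steady state: no more change, so A(x) = 0
            break
        x = x + dt * step
    return x

# Toy example: we want A(x) = 5 - x to vanish; the steady state is x = 5.
x = run_to_steady_state(lambda v: 5.0 - v, x0=0.0)
print(x)  # close to 5.0
```

The same loop, with x replaced by the whole image and A by the inpainting term, is how the equation below is run in practice.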

I have to tell you what L is and what N is. What is the information that we want to propagate, and in which direction do we want to propagate it? There is a lot of art here, so I am just giving you examples; I am teaching you the concepts. These are examples that are already very powerful, but you can come up with different ones. Now, let me start with the information.

We want this propagation to be smooth. We don't want to see a big jump inside the region that we are inpainting; if there were a big jump, we would notice it. So we want the result to be smooth, and L has to be something that represents smoothness in the image. One of the things that represents smoothness is the Laplacian of the image; the second derivatives of the image represent smoothness. Now, why not the first derivative? Wait a second and you are going to understand why, because the first derivative is also a representation of smoothness.

Now, in which direction are we going to propagate? Here we go back to what we learned from the professional restorers. They say: continue edges. We know that if we have an edge, the gradient of the image is perpendicular to the edge; we learned that last week. So we want to propagate along the edges, which means that we have to take the perpendicular to the gradient. So N is the perpendicular to the gradient, written ∇⊥I. That is the important point here: we want to continue edges, and for that, we have to go perpendicular to the gradient.

And that basically also explains why we cannot take a first derivative for L. If we did, say, use the gradient, then we would be pairing the gradient with its own perpendicular, and their inner product is always zero, so the equation would be satisfied trivially and I would not be solving anything. I wanted the information to actually be constant along that direction; this alone is not enough.

There is also another reason, beyond the fact that those two terms would be trivially perpendicular. We want two things to be smooth inside: we want the grey values to continue smoothly inside the region, and we also want the boundaries to continue smoothly inside, so we need higher-order derivatives. It is not only that the two terms are perpendicular and carry no information; even though we take the gradient of L, which is what the inpainting actually propagates, we want to go a bit higher order so we get even more smoothness. And as I said, the Laplacian is one of the simplest measures of smoothness. So, we get smoothness.

We propagate that smoothness in the direction perpendicular to the gradient, that is, along the level lines. And that basically gives us our equation: the change of I in time equals the gradient of the Laplacian, in inner product with the perpendicular to the gradient, ∂I/∂t = ∇(ΔI) · ∇⊥I. At steady state, when the right-hand side becomes zero, we basically get that the Laplacian has become constant along the edges: we have propagated the Laplacian along the edges. That is what this basic equation does, and once again, you just discretize it on the computer, run it, and you get image inpainting.
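As a rough illustration of such a discretization, here is a bare-bones NumPy sketch of the transport equation ∂I/∂t = ∇(ΔI) · ∇⊥I, evolved only inside the mask. This is my simplified sketch, not the published algorithm: the real scheme interleaves anisotropic diffusion and uses more careful numerics, so the step size, iteration count, and the small stabilizing diffusion term here are all assumptions for illustration.

```python
import numpy as np

def inpaint_step(I, mask, dt=0.02, diff=0.25, eps=1e-8):
    """One explicit step of I_t = grad(Laplacian(I)) . perp(grad(I)),
    applied only where mask is True (the region to be inpainted)."""
    # Laplacian of the image: the smoothness information L that we transport.
    lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
           np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4.0 * I)
    lap_y, lap_x = np.gradient(lap)      # grad(Laplacian)
    I_y, I_x = np.gradient(I)            # grad(I)
    norm = np.sqrt(I_x**2 + I_y**2) + eps
    # Direction N: the gradient rotated 90 degrees (the level-line direction).
    N_x, N_y = -I_y / norm, I_x / norm
    # Transport term plus a small diffusion term for numerical stability
    # (the published algorithm interleaves diffusion steps for this reason).
    update = lap_x * N_x + lap_y * N_y + diff * lap
    out = I.copy()
    out[mask] += dt * update[mask]       # evolve only the hole
    return out

# Tiny usage example: a horizontal ramp with a small hole punched in it.
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros(img.shape, dtype=bool)
mask[14:18, 14:18] = True
I = img.copy()
I[mask] = 0.0                            # the "missing" pixels
for _ in range(400):
    I = inpaint_step(I, mask)            # run toward steady state
```

Pixels outside the mask are never touched; only the hole is filled in from its surroundings.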

You get propagation from outside in. Every differential equation has to have boundary conditions. So if this is my image and this is my region to be inpainted, what do I assume here? The boundary conditions for this type of equation are a bit unusual, because the equation has three derivatives: two from the Laplacian, and one more from the gradient. Remember.

We talked about this in the past. The Laplacian is the second derivative of the image in one direction plus the second derivative of the image in the other direction, and then we take the gradient of that, so we get third derivatives. The boundary conditions consider both the gray values and the edges at the boundary, and both are smoothly propagated inside: the gray values thanks to the Laplacian, and the directions thanks to the level lines of the image, the isophotes, the directions of the edges. This is a very simple but very, very interesting equation. Let us see some examples.

As always when you do image processing, start with very simple examples to see what is happening. Here is a very simple one. White is the region that is to be inpainted. And you can see here that there is a very smooth continuation. Really, really nice: a smooth continuation. It continues the boundaries nicely, and it of course continues the colors, black and grey. But even more important, it really completes the circle. It does not know that there is a circle there; it just moves in the edge direction, perpendicular to the gradient. Remember, the gradient is in this direction; the perpendicular to the gradient is in this direction. So it goes and fills in inside the region, propagating information in that direction.

A really, really nice result. Of course, if it worked only on artificial images it would not be very good; hopefully it also works on real data. Here is an example. What is being inpainted? The letters we wrote on top of this image. This is a nice image of New Orleans; we drew over it with a rough brush, and boom, the letters are gone. So the regions to be inpainted are all these letters, and we go inside and make the letters disappear, propagating information from outside in. It is a bit hard to see here, but you get a very nice image, where we invented the information that was covered by the letters based on the information surrounding them. A very nice result, I think.

Here is another example, an image with deteriorations and degradations. The red regions are the ones we are going to inpaint. Sometimes you mark regions that are a bit larger than the actual missing information. Look here: the scratch is very thin, but you mark it a bit larger. The propagation then starts a bit farther away from the damage, which helps the fill be even smoother, as we want. And here are the results, and they look pretty good. In some places you might see a tiny error, here for example, because the damage covers a lot of the eye of this girl, and it is very difficult to invent information when there is nothing in the surrounding area that hints at what needs to be put in. But you get an extremely good, at least initial, result that looks almost excellent, and a professional restorer with different tools may then go and repair only that tiny region if needed. Sometimes there is no need for that. Here is the original one.

We have already seen it: a person jumping, attached to a cable. Now the cable is gone, and here you see a zoomed region. Once again, look how nicely the socks continue. There is a very nice continuation of these edges, because that is what we wanted: we were continuing edges, and that is exactly what this type of algorithm does. Our next example is just a special effect; let's look at what is happening here.

And basically they are all gone. Let me show you that again: there are people in the boat, and then they are all gone. We are inpainting what was covered by the people sitting in the boat, and they are all gone. Again, inpainting means modifying an image in a way that is then non-detectable, at least by the regular consumer.

Here is another example: we have scratches, and now the scratches are gone; we basically removed them. We inpainted the information from the surrounding areas, and again it looks very nice, very natural, as it was before it was scratched. Now there is one more example, and it is the last example I want to show in this video, where it is actually not very hard to detect the regions that need to be inpainted. In the previous examples, the user basically defined the regions somehow.

This is an example from wireless transmission of images. Remember, images are transmitted and stored using JPEG, and JPEG works in blocks. What happens if, while you are transmitting an image, there is a signal drop, for example on a cell phone? Then a block, or half a block, gets dropped. Now, signal drops are not very hard to detect: you have no signal, and you detect that. Okay? So when there is a drop you can say: hey, I am basically going to recover that block by inpainting.

And that is what we are simulating here. This is the original, and all these blocks that you see are dropped signals, a simulation of dropped signal. You see that there is no signal, you plug that into the inpainting, and boom, you reconstruct your image. Let me show you that again: you see the signal drop, and you reconstruct. So you might have a better cellular phone, one that detects signal drops in images and, boom, does image inpainting. This is still JPEG; the person sending the image does not even need to know that the signal might drop. At the receiver end, you see a signal drop and you repair it. This can actually even help compression: you can drop some of the blocks on purpose, because we know how to recover them with image inpainting, and in that way you basically improve the compression ratio. Some blocks are very easy to recover, and then you do not even need to send or store them. That makes the reconstruction a bit more computationally expensive, but it can make the compression much higher.
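At the receiver, the only extra ingredient needed is a mask marking the lost blocks, which then feeds the inpainting. Here is a sketch of how such a mask could be built; the all-zero sentinel convention is an assumption for this illustration, since a real decoder would know which blocks are missing directly from the bitstream.

```python
import numpy as np

def dropped_block_mask(image, block=8, sentinel=0):
    """Mark JPEG-style blocks that arrived empty so they can be inpainted.
    Assumption for this sketch: a dropped block shows up as all-`sentinel`
    pixels; a real decoder would flag missing blocks from the bitstream."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            if np.all(image[by:by + block, bx:bx + block] == sentinel):
                mask[by:by + block, bx:bx + block] = True
    return mask

# Usage: a 16x16 image where one 8x8 block was lost in transmission.
received = np.ones((16, 16))
received[0:8, 8:16] = 0          # the dropped block
mask = dropped_block_mask(received)
```

The mask is then passed to the inpainting routine exactly like a user-drawn mask.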

So this was the last example for this video. But there is one thing I have not told you, and that was on purpose, just to keep you wondering a bit. I gave you the equation of image inpainting, but here we have three colors. How do we do that? How do we go from the equation that I gave you before to colors? Remember, the equation was something like ∂I/∂t = ∇(ΔI) · ∇⊥I, the gradient of the Laplacian of I in inner product with the perpendicular to the gradient of I. You can basically apply it to each one of the channels, say red, green, and blue, or in any other color space. Or you can think about extending it to color images, to vectors: we know the concept of the gradient for color vectors, and we can define the Laplacian for color vectors as well. So, both possibilities exist.
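The channel-by-channel option is the simpler of the two. Here is a sketch, where `inpaint_gray` stands for any single-channel inpainting routine (an assumed helper for this illustration, not a library function), and the stand-in `mean_fill` just fills the hole with the channel's mean:

```python
import numpy as np

def inpaint_color(image, mask, inpaint_gray):
    """Run a grayscale inpainting routine independently on each channel.
    `inpaint_gray(channel, mask)` is any single-channel inpainter."""
    out = np.empty(image.shape, dtype=float)
    for c in range(image.shape[2]):              # loop over color channels
        out[:, :, c] = inpaint_gray(image[:, :, c].astype(float), mask)
    return out

# Usage with a trivial stand-in inpainter that fills the hole with the
# mean of the known pixels in that channel:
def mean_fill(channel, mask):
    filled = channel.copy()
    filled[mask] = channel[~mask].mean()
    return filled

rgb = np.random.rand(8, 8, 3)
hole = np.zeros((8, 8), dtype=bool)
hole[2:4, 2:4] = True
result = inpaint_color(rgb, hole, mean_fill)
```

Swapping `mean_fill` for the PDE routine gives the color version used in the examples.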

You can treat every color independently, or you can use the definitions for vectorial fields, computing every component here, Laplacian and gradient, in vectorial fashion. In all the examples I showed you, every color is treated independently. So, this was an example of image inpainting with partial differential equations. In the next video, I am going to show you an example of image inpainting with the variational formulations that we learned last week: a very simple variational formulation that will also lead us to a very nice image inpainting technique, based on the same concepts of propagating information and continuing boundaries. I am looking forward to seeing you in the next video. Thank you.
