To illustrate this concept of nonlinear least squares, we will return to two examples from before. The first example is the Perspective-n-Point problem: given a set of three-dimensional points and the two-dimensional image of those points, how do we estimate the rotation and the translation of the camera? As we saw, this can be done in a least-squares formulation by accumulating a 3n-by-12 matrix (three rows per point) times an unknown vector consisting of all the elements of P, the 3-by-4 projection matrix, stacked into a single column, and setting the product equal to zero. We can solve that by taking an SVD of the matrix shown in purple. However, we know that the projection matrix P consists of K times the rotation and translation, so its entries are not twelve arbitrary numbers that can simply be strung together. We have a particular constraint: the rotation matrix must be orthonormal; in fact it must lie in SO(3). That constraint is lost when we simply treat the entries as free parameters. So in the nonlinear formulation we instead search the nonlinear solution space for this particular rotation R and translation t. The rotation really has only three degrees of freedom, and we want to look only in this nonlinear space, such that f(R, t) = b: the projection of the three-dimensional points through this nonlinear parameterization of rotation, for example three angles or a four-element quaternion, should equal the measured points in image space. That is the nonlinear least-squares formulation.

Our next example is the inverse of that situation: the camera poses are given, and we simply want to triangulate points in 3D space. As we saw, this can also be done as a least-squares problem, where we accumulate the purple matrix on the left times the unknown vector, in this case a 4-by-1 vector with the three-dimensional position X, Y, Z followed by a 1, set equal to zero. So we have a least-squares solution for this, but why is it insufficient? The reason is the following. Imagine a case where we have several views, shown here, with a corresponding point marked in each. When there is no noise in the data, meaning the points correspond exactly, there is no lens-distortion error and no correspondence error, the rays exactly intersect at that point. In that case the least-squares solution is correct because the error is zero. However, as noise comes in, due to lens distortion or correspondence errors, the rays no longer intersect at a single point, so Ax is no longer zero but takes some value. In fact, if we take the triangulated point and project it back into the images, we see that in image space it no longer coincides with the measured points in the different views. What we want instead is to move the point around in three-dimensional space, project it into the images to get the blue points, and find the particular three-dimensional point such that, when we project it into the images, the projections are as close as possible to the red dots. In other words, we want to minimize the error in pixel space, not in the three-dimensional space. This is illustrated on this slide: a point in 3D space is projected into the image, and we want to minimize the distance between the projection, shown in blue, and the measurement, shown in red. We seek the 3D point (x, y, z) such that this distance is minimized. This can be computed in the following way: the projected x and y coordinates are simply the first and second rows of the projection matrix P applied to the point, each divided by the third row.
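As a concrete illustration of what we have set up so far, here is a minimal sketch of the nonlinear PnP residual f(R, t), written in Python with NumPy and SciPy (my own choice of tools, not something specified in the lecture). The rotation is parameterized by a three-element axis-angle vector so the search stays inside SO(3), and the residual is evaluated after the perspective division, in pixel space. All function and variable names (pnp_residuals, X_world, x_meas, and so on) are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def pnp_residuals(params, K, X_world, x_meas):
    """Reprojection residuals for nonlinear PnP.

    params : 6-vector [axis-angle rotation (3), translation (3)]
    K      : 3x3 intrinsic matrix (given)
    X_world: Nx3 array of known 3D points
    x_meas : Nx2 array of measured pixel coordinates
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()  # stays in SO(3) by construction
    t = params[3:].reshape(3, 1)
    P = K @ np.hstack([R, t])                         # 3x4 projection matrix

    X_h = np.hstack([X_world, np.ones((len(X_world), 1))])  # homogeneous 3D points
    proj = (P @ X_h.T).T                              # Nx3 homogeneous image points
    proj = proj[:, :2] / proj[:, 2:3]                 # perspective division: back to pixels
    return (proj - x_meas).ravel()                    # residuals measured in pixel space

# A nonlinear least-squares solver minimizes the sum of squared residuals over the
# six pose parameters, e.g. starting from an initial guess x0 such as the linear (SVD) solution:
# result = least_squares(pnp_residuals, x0, args=(K, X_world, x_meas))
```

Note that the final step of the residual is exactly the perspective division from homogeneous coordinates back to pixel coordinates.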
So now a division comes in, and it comes in because we are looking at x, y coordinates in the pixel domain; we are no longer looking at a homogeneous ray. Every time we take a homogeneous ray back to pixel space, we have to divide, and it is this division that makes things nonlinear. We had avoided this nonlinear operation in everything up to this point. So we have the following error that we want to minimize. We have the red points, which are the measured quantities, and the projection matrices, which are given; what is unknown is the 3D point X, which consists of the x, y, z coordinates in three-dimensional space. We have this division factor because we take the ratio of the x and y coordinates to the z coordinate. Together, summed and squared, they form a nonlinear cost function. We are looking again for the particular X, Y, Z that, transformed into image space through this division process, minimizes the location error in the image, from the blue points to the red points. And this error should be reduced across all the pictures. Again, this produces a set of least-squares errors, but nonlinear ones.
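To make the triangulation example equally concrete, here is a minimal sketch in the same spirit (again Python with NumPy and SciPy; the names triangulation_residuals, P_list, and x_meas_list are illustrative): the residual stacks the pixel-space error of one unknown 3D point across all views, and a nonlinear least-squares solver refines the point, typically starting from the linear SVD solution.

```python
import numpy as np
from scipy.optimize import least_squares

def triangulation_residuals(X, P_list, x_meas_list):
    """Reprojection residuals for one 3D point seen in several views.

    X           : 3-vector, the unknown point (x, y, z)
    P_list      : list of 3x4 projection matrices (the cameras are given)
    x_meas_list : list of measured 2-vectors (the red points), one per view
    """
    X_h = np.append(X, 1.0)            # homogeneous coordinates [x, y, z, 1]
    res = []
    for P, x_meas in zip(P_list, x_meas_list):
        proj = P @ X_h                 # homogeneous image point
        proj = proj[:2] / proj[2]      # perspective division -> pixel coordinates
        res.extend(proj - x_meas)      # error in pixel space (blue minus red)
    return np.asarray(res)

# X0 could come from the linear (SVD) triangulation, then be refined:
# result = least_squares(triangulation_residuals, X0, args=(P_list, x_meas_list))
```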