Until now, we've considered the problem of controlling the quadrotor at states that are not too far from the hover equilibrium configuration. Because of this we were able to make some assumptions: we said that the roll and pitch angles are close to zero and all the velocities are close to zero. But as you see from these pictures, as the vehicle maneuvers at high speeds and exhibits roll and pitch angles far away from zero, these kinds of controllers are not likely to work. We want to overcome these limitations. Specifically, we want to consider nonlinear controllers that allow us to control the vehicle far away from the equilibrium state.

Let's recall the trajectory control problem. Typically we're given a trajectory and its derivatives up to the second order, where we specify x of t, y of t, z of t, and the yaw angle psi of t. We're also able to differentiate them to get higher order derivatives. We define an error vector that describes where the vehicle is relative to where it's supposed to be, and then we try to drive that error exponentially to zero. By insisting on this exponential convergence, we're able to determine the commanded accelerations, which then get translated into inputs that can be applied by the quadrotor through its motors.

You'll also recall that we approached the controller design problem in two stages. Specifically, there's an inner loop that addresses the attitude control problem and an outer loop that then addresses the position control problem. The inner loop compares the commanded orientation with the actual orientation and the angular velocities and determines the input u2. The outer loop looks at the specified position and its derivatives, compares that to the actual position and its derivatives, and computes the input u1. We want to do the same thing, except now we want to look at trajectories that can exhibit high velocities and high roll and pitch angles, that is, trajectories that are not close to the equilibrium or hover state.

We start with a very simple control law for the outer position loop. This is essentially a proportional plus derivative control law. You will see the proportional term, which multiplies the error e sub r, and the derivative term, which multiplies the error e sub r dot. In addition, there's a feed-forward term, which is essentially the acceleration of the specified trajectory, and we're compensating for the gravity vector. If the system were fully actuated, in other words, if the system could move in any direction, then this would essentially solve the control problem: u1 would be a three-dimensional vector and you could control the vehicle so that the error in position would go exponentially to zero. However, u1 is a scalar quantity; in fact, the only direction in which the thrust can be applied is along the b3 direction. So because of that, we do the next best thing that is possible. We take this vector and we project it along the b3 direction. That gives us a projection that is close to the desired input, but not quite, because t is never perfectly aligned with b3.

We next turn our attention to the attitude control problem. Knowing that we want t to be aligned with b3, and knowing that we have very little flexibility in what direction t will point in, we do the next best thing, which is to rotate the vehicle so that b3 is aligned with t. This gives us a constraint: the desired rotation matrix should be such that the b3 vector points in the t direction. We also know what the desired yaw angle needs to be.
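Before solving for the desired rotation, here is a minimal sketch of the outer position loop just described: a proportional plus derivative law with a feed-forward acceleration and gravity compensation, followed by the projection of the resulting force vector t onto the b3 axis. The gains, mass, and error sign convention (desired minus actual) are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def position_controller(x, x_dot, x_des, x_dot_des, x_ddot_des, R,
                        m=0.5, g=9.81, k_p=7.0, k_d=4.0):
    """PD plus feed-forward position loop, then projection onto the body z-axis.

    x, x_dot          : actual position and velocity (3-vectors, world frame)
    x_des, x_dot_des  : desired position and velocity from the trajectory
    x_ddot_des        : feed-forward acceleration of the specified trajectory
    R                 : current rotation matrix (body to world)
    Returns the desired force vector t and the scalar thrust u1.
    """
    e_r = x_des - x               # position error
    e_r_dot = x_dot_des - x_dot   # velocity error

    # Proportional + derivative + feed-forward terms, plus gravity compensation
    e3 = np.array([0.0, 0.0, 1.0])
    t = m * (x_ddot_des + k_p * e_r + k_d * e_r_dot) + m * g * e3

    # The thrust can only act along b3 = R e3, so project t onto that direction
    b3 = R @ e3
    u1 = t @ b3
    return t, u1
```

The returned scalar u1 is what the motors must collectively produce, while the full vector t is the direction the attitude loop will try to align b3 with.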
Armed with these two pieces of information, the alignment of b3 with t and the desired yaw angle, it's possible to solve for the desired rotation matrix. Using this information, we can now compute the error in rotation by comparing the actual rotation R and the desired rotation R desired. We next follow the moral equivalent of a proportional plus derivative control law. There are two terms in the parentheses: one is proportional to the error in rotation, the second is proportional to the error in angular velocity. We also have to compensate for the fact that the vehicle has a non-zero inertia I, and there are gyroscopic terms, as we know from the Euler equations of motion. This gives us the control inputs u1 and u2.

There are a couple of steps in the derivation of the control law that are not straightforward. First, how do we determine the desired rotation matrix? Recall that we have these two pieces of information. First, we want the vector b3 to be aligned with the vector t. t is known, and we want to find R desired so that this equation is satisfied. Second, we want the yaw angle to be equal to the desired yaw angle, psi desired. We also know from the theory of Euler angles that the rotation matrix has the form shown here. So the question is, can we use the two equations shown above to solve for the two unknown angles, the roll angle and the pitch angle?

The second non-trivial step in the derivation of the controller is the calculation of the error term in the rotation matrix. Given the current rotation R and the desired rotation R desired, what is the rotation error? You cannot simply subtract one matrix from another. If you do, you will get a three-by-three matrix, but that matrix is not orthogonal, and it is not a rotation matrix. The correct way to calculate the error is by asking what is the magnitude of the rotation required to go from the current rotation to the desired rotation. This required rotation is delta R, given by the transpose of the rotation matrix multiplied by the desired rotation matrix. You can multiply R by delta R to verify that this is in fact true. Once you have delta R, the angle of rotation, which is really the magnitude of the rotation, and the axis of rotation can both be determined using Rodrigues' formula.

While I am not taking the time to discuss this, it turns out that the controller that we've just gone through guarantees that the vehicle is stable through a wide range of angles and velocities. In fact, its basin of attraction, that is, the region from which it will converge to a desired equilibrium point, is almost all of the space of rotations, and a very large set of angular velocities and linear velocities. If you look at the inequalities here, they characterize this basin of attraction. The first inequality specifies the set of rotations from which it can converge to the hover configuration. The second inequality tells you the range of angular velocities from which it can converge to the hover configuration. If you look at the second inequality, you notice a term in the denominator which relates to the eigenvalues of the inertia tensor. Lambda sub min is essentially the smallest eigenvalue of the inertia matrix. The smaller this eigenvalue, the larger the quantity on the right-hand side, which tells you that as you scale down the inertia matrix, the set of angular velocities from which the robot can converge to the hover state increases. In other words, the vehicle becomes more stable. You can see this in nature; there are many examples where the advantage of small size can be seen.
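Before turning to the examples from nature, here is a minimal sketch of the attitude loop: the desired rotation is assembled from the normalized thrust direction and the desired yaw, the rotation error is extracted from delta R equal to R transpose times R desired using the axis-angle (Rodrigues) relationship, and the moments follow a proportional plus derivative law with gyroscopic compensation. The construction of the remaining columns of R desired follows the usual geometric-control convention; the gains, inertia values, and sign conventions are assumptions for illustration.

```python
import numpy as np

def attitude_controller(R, omega, t, psi_des, omega_des,
                        I=np.diag([1.4e-5, 1.4e-5, 2.2e-5]),
                        k_R=200.0, k_omega=20.0):
    """Inner attitude loop: build R_des from the thrust direction and yaw,
    extract the rotation error from delta_R = R^T R_des, apply PD plus
    gyroscopic compensation. Gains and inertia values are placeholders."""
    # Desired body z-axis: align b3 with the desired force vector t
    b3_des = t / np.linalg.norm(t)
    # Heading vector fixed by the desired yaw angle psi_des
    a_psi = np.array([np.cos(psi_des), np.sin(psi_des), 0.0])
    b2_des = np.cross(b3_des, a_psi)
    b2_des /= np.linalg.norm(b2_des)
    b1_des = np.cross(b2_des, b3_des)
    R_des = np.column_stack((b1_des, b2_des, b3_des))

    # Rotation needed to go from the current to the desired attitude
    delta_R = R.T @ R_des
    # Axis-angle extraction: angle from the trace, axis from the skew part
    angle = np.arccos(np.clip((np.trace(delta_R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        e_R = np.zeros(3)
    else:
        axis = np.array([delta_R[2, 1] - delta_R[1, 2],
                         delta_R[0, 2] - delta_R[2, 0],
                         delta_R[1, 0] - delta_R[0, 1]]) / (2.0 * np.sin(angle))
        e_R = angle * axis

    e_omega = omega_des - omega
    # PD on attitude plus compensation for the gyroscopic term omega x (I omega)
    u2 = I @ (k_R * e_R + k_omega * e_omega) + np.cross(omega, I @ omega)
    return R_des, u2
```

Note that the axis-angle extraction degenerates when the rotation error approaches 180 degrees; a complete implementation would handle that case separately.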
In this particular video, which is one of honeybees slowed down by a factor of 20, you see that the honeybees, because they're small, are able to react to collisions and recover from them. Again, because the inertia is small, they have a controller whose basin of attraction is fairly large, and they're able to exploit the advantages of small size and small inertia to recover from large deviations. This is one of the motivations for us to build smaller quadrotors. In this picture, you see a prototype of the Pico Quadrotor, which is only 11 centimeters tip to tip, weighs only 20 grams, and has a max speed of six meters per second. Vehicles like this are not only safer, but they're also more maneuverable. You can see the advantage of small size in this video clip, where these two vehicles are commanded at a relative speed of two meters per second to collide, and they're able to recover from the collision. Again, it's because of this large basin of attraction for the nonlinear controller that the vehicles are robust to deviations like the ones you see here. In fact, the size of this basin of attraction scales as 1 over L to the power of five over two. Again, the smaller the characteristic length L, the bigger the basin of attraction.

Here are a few experiments that illustrate the advantages of the nonlinear controller over the linear controller. You can see in these videos that the roll and pitch angles deviate significantly from the hover position and the velocities can reach up to five meters per second. In this experiment, our nonlinear trajectory controller is used along with two other controllers, the first one being the linearized hover controller and the second being a rotation controller, or an attitude controller. By piecing together these three controllers, the robot is able to gather momentum before twisting its body to go through this window, whose width is only a few centimeters more than the height of the robot.

The ability to use nonlinear controllers also allows us to design trajectories that are more aggressive. In order to see that, it's useful to exploit the geometric structure that's inherent in these quadrotors. We want to construct two vectors. On the right-hand side you see the state, q and q dot, and the inputs u1 and u2, but now we've augmented the state and the inputs with two additional terms, u1 dot and u1 double dot. On the left-hand side, the vector consists of position, velocity, acceleration, and then the jerk j and the snap s, jerk being the derivative of acceleration and snap being the derivative of jerk. Psi is the yaw angle, psi dot is the rate of change of yaw, and psi double dot is the second derivative of yaw. It turns out that there's a one-to-one correspondence between the variables in the vector on the left and the variables in the vector on the right, and you can see this from the equations of motion. For example, if you know the acceleration in the z direction, a3, you can calculate u1. By differentiating that equation, you can get an expression for u1 dot, and by differentiating it again you can get an expression for u1 double dot. You'll see that all the terms on the right-hand side of these equations are obtained from the vector consisting of position, velocity, acceleration, jerk, snap, the yaw angle, its derivative, and its second derivative. So this is showing you how you can get the terms in the vector on the right-hand side from the terms in the vector on the left-hand side. You can go through a similar exercise to go the other way around.
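To make that correspondence concrete, here is a small sketch of one direction of the map: recovering u1 and its first two derivatives from the acceleration, jerk, and snap of the trajectory. It assumes the standard rigid-body translational model m a = -m g e3 + u1 b3, so that u1 is the norm of m(a + g e3); the mass value and function name are illustrative, not from the lecture.

```python
import numpy as np

def thrust_from_flat_outputs(a, j, s, m=0.5, g=9.81):
    """Recover u1 and its first two derivatives from the trajectory derivatives.

    a, j, s : acceleration, jerk, and snap of the trajectory (3-vectors).
    Assumes m*a = -m*g*e3 + u1*b3, so u1 = || m*(a + g*e3) ||.
    """
    e3 = np.array([0.0, 0.0, 1.0])
    f = m * (a + g * e3)     # required force vector
    f_dot = m * j            # its first derivative
    f_ddot = m * s           # its second derivative

    u1 = np.linalg.norm(f)
    u1_dot = f @ f_dot / u1
    u1_ddot = (f_dot @ f_dot + f @ f_ddot) / u1 - u1_dot**2 / u1
    return u1, u1_dot, u1_ddot
```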
What does this mean for us in terms of trajectory planning? What it really means is that if I know the position and the yaw angle, and their derivatives, I can determine the state and the input. Let's consider the equations for a planar quadrotor, in which it's easier to visualize this mapping between the two vectors. This equivalence between the two vectors is due to a property called differential flatness. The key idea is that all state variables and the inputs can be written as smooth functions of the so-called flat outputs and their derivatives. The flat outputs for the planar quadrotor are simply the y and z positions. And it works the other way around: if we know the state vector, the input u1, its first and second derivatives, and the input u2, we can recover the flat outputs and their derivatives. This relationship between the two vectors is called a diffeomorphism. The term diffeomorphism refers to a smooth, invertible map between these two vectors, and the definition can be found in most differential geometry textbooks.

Similarly, for a three-dimensional quadrotor, you have a longer vector on both sides, but once again there's a one-to-one relationship between these vectors. If you know the flat outputs and their derivatives, you can recover the state variables and the inputs. The three-dimensional quadrotor is differentially flat.

The implication for trajectory planning is quite simple. Rather than think about the dynamics of the system when we plan trajectories, we can focus our attention on the flat space on the left-hand side. We can think about mapping obstacles onto this flat space and planning trajectories that are safe, provided these trajectories are smooth. If they are smooth, since these are the flat outputs, we're guaranteed to be able to find feasible state vectors and inputs corresponding to these trajectories. So once we find smooth trajectories, and as we've discussed before, we prefer to make these trajectories minimum snap trajectories, we can then transform these trajectories into the real space with state vectors and inputs. And again, we're guaranteed that these state vectors and inputs will satisfy the dynamic equations of motion.

Here's an example of a minimum snap trajectory that starts off at a position denoted by A and goes to a goal position denoted by B through an intermediate point specified by C. This trajectory is generated in the flat space consisting of the flat outputs, without specific knowledge of the dynamics. But because these trajectories are smooth, they can be automatically transformed into trajectories that are realizable, trajectories that can be executed by a real robot. This reduces the motion planning problem to a problem in computational geometry. If we know how to synthesize smooth curves going through specified waypoints, a subject that we've already addressed in the past, we can easily transform these curves into dynamically feasible trajectories, trajectories that a nonlinear controller can execute faithfully.
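As an illustration of that computational-geometry view, here is a minimal sketch of a minimum snap trajectory for a single flat output through one intermediate waypoint, like the A to C to B example above. It uses two 7th-order polynomial segments with rest-to-rest boundary conditions and continuity of derivatives up to sixth order at the waypoint, which is the form the minimum snap solution takes for fixed segment durations; the durations and function name are my own choices for illustration.

```python
import numpy as np
from math import factorial

def min_snap_1d(p0, pc, pf, t1, t2):
    """Minimum-snap trajectory for a single flat output through p0 -> pc -> pf.

    Two 7th-order segments (the Euler-Lagrange equation for the snap cost is
    x^(8) = 0), rest-to-rest boundary conditions, the waypoint pc at the
    junction, and continuity of derivatives up to 6th order there. t1, t2 are
    the segment durations; each segment uses its own local time starting at 0.
    Returns the coefficient vectors of the two segments (lowest order first).
    """
    def d_row(t, k, n=8):
        # Row of k-th derivatives of 1, t, t^2, ..., t^(n-1) evaluated at time t
        row = np.zeros(n)
        for i in range(k, n):
            row[i] = factorial(i) / factorial(i - k) * t ** (i - k)
        return row

    A = np.zeros((16, 16))
    b = np.zeros(16)
    r = 0
    for k in range(4):                        # position, velocity, acceleration, jerk
        A[r, :8] = d_row(0.0, k); b[r] = p0 if k == 0 else 0.0; r += 1   # start of segment 1
        A[r, 8:] = d_row(t2, k);  b[r] = pf if k == 0 else 0.0; r += 1   # end of segment 2
    A[r, :8] = d_row(t1, 0); b[r] = pc; r += 1    # segment 1 ends at the waypoint
    A[r, 8:] = d_row(0.0, 0); b[r] = pc; r += 1   # segment 2 starts at the waypoint
    for k in range(1, 7):                     # continuity of derivatives 1 through 6
        A[r, :8] = d_row(t1, k)
        A[r, 8:] = -d_row(0.0, k)
        r += 1
    c = np.linalg.solve(A, b)
    return c[:8], c[8:]

# Each flat output (x, y, z, and yaw) is solved independently this way; the
# resulting polynomials are then differentiated and pushed through the
# flatness map to recover the states and inputs.
```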