So, we're doing nonlinear spacecraft control. I have lots of arguments with colleagues who are like, 'ah, but our control is linear,' and, 'look, I can guarantee all these things and that's better.' Yes, but there's a problem with that statement. Life is not linear, and neither is spacecraft motion. We've seen the equations of motion of an object. As soon as you get to the I-omega-dot stuff and the MRPs in there, all of a sudden everything cross-couples and it's a really nonlinear system. So, you can linearize, and we will talk a little bit about that. And that immediately opens up all the linear analysis tools that you can use for doing spacecraft control. But if you're linearizing, you have to linearize about some point in your state space. So, is it this equilibrium, like the pendulum's down, and is this thing going to be stable? Or are you putting it upside down, and it's going to be unstable? But, you know what? I can stabilize this one with control. And that's what we're getting into with these lectures. Control to me is a way to take the natural dynamics and go, 'oh, that's close but not quite what I want,' and modify it with some feedback mechanism and say, 'well, you know, there's a motor here and that voltage will be X amount if I am over here,' right? So, you're basically just taking what nature gave you, modifying it, and putting it in a form that now gives you a stable system. And what does stability mean? That's what we'll be getting into a lot. So, to get going with this, the nonlinear side is what we're dealing with. It's spacecraft motion. We are doing 3D attitude. We're not doing much in 1D, maybe some examples and so forth, but we are going to go through a bunch of stability definitions here. And, you know, what you get out of this class, this section at least, is that in the end we have control where we can guarantee that with this algorithm, no matter what the attitude is,
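As a rough sketch of where that nonlinearity comes from, here are the rigid-body rotational equations with MRP attitude coordinates in code form. The inertia values are invented for illustration; the cross-product (gyroscopic) term and the MRP kinematics matrix are what couple everything together.

```python
import numpy as np

# Assumed principal inertias [kg m^2], purely for illustration.
I = np.diag([10.0, 8.0, 6.0])

def tilde(v):
    """Skew-symmetric cross-product matrix [v~], so tilde(a) @ b = a x b."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def omega_dot(omega, u):
    """Euler's rotational equation: I omega_dot = -[omega~] I omega + u.
    The -omega x (I omega) term is the nonlinear gyroscopic coupling."""
    return np.linalg.solve(I, -tilde(omega) @ I @ omega + u)

def sigma_dot(sigma, omega):
    """MRP kinematics: sigma_dot = (1/4) B(sigma) omega, with
    B = (1 - sigma.sigma) I + 2 [sigma~] + 2 sigma sigma^T."""
    s2 = sigma @ sigma
    B = (1.0 - s2) * np.eye(3) + 2.0 * tilde(sigma) + 2.0 * np.outer(sigma, sigma)
    return 0.25 * B @ omega
```

Even with zero control torque, `omega_dot` is quadratic in the rates, which is exactly why linearizing only describes the motion near one chosen point.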
it is going to recover and track something. It can track an equilibrium point; that's typically called 'the regulation problem.' You're just trying to drive all the states to zero: if the Hubble is supposed to point in this direction, you can always define the inertial frame such that it is lined up with your desired direction. So then you just drive all the x's, all the states, to zero. That's the regulation problem. The tracking problem means, look, you're flying over the Earth, you're going over this asteroid or the moon or Mars or something, and you want to look at a particular point on there. Well, then the attitude you have to have will actually vary, and maybe it speeds up, then slows down, and then it has to speed up again to keep pointing at it. So, there could be a completely general time-varying reference trajectory. That's a tracking problem. You're not just trying to get your positions to converge to the 0, 0, 0 coordinates; you're saying, 'look, there's a path you have to follow.' And then we can write our errors relative to that path: at time step zero, I'm supposed to be here; at time step one, I'm supposed to be here. So, if you're at the right place, great, don't do anything. At the next time step you have an error, right? That's the tracking problem. And that works very easily in your mind for translational motion. We're doing the same thing in attitude motion. So, you will see us deriving a lot of these different properties, and in the end we'll have something that can asymptotically and globally track any attitude reference trajectory. But you have to have certain feed-forward terms; the references, the kinematics, all of that comes back in together again. Okay? The dynamics itself, the kinetics, we're keeping reasonably simple at this stage. For the most part, it will just be a single rigid body.
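The distinction above can be sketched in one dimension. This is a made-up illustration, not anything from the lecture notes: regulation measures the error from the origin, tracking measures it from wherever the time-varying reference says you should be right now.

```python
import numpy as np

def regulation_error(x):
    # Regulation: the reference is the origin, so the error is the state itself.
    return x - 0.0

def tracking_error(x, t, x_ref):
    # Tracking: the error is relative to where you are *supposed* to be at time t.
    return x - x_ref(t)

# A made-up, completely general time-varying reference path.
x_ref = lambda t: np.sin(t)
```

If the state happens to sit exactly on the reference at some time, the tracking error is zero there, but an instant later the reference has moved on and an error reappears, which is what drives the feedback.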
And then that external torque that acts on it, that is our control, which you could think of as being done through thrusters or something, you know, some pure couple that's generating it. So, at the very end, when you do more complicated dynamics, I'll show you the structure, and really it's the same concept, it's just more math that goes in there. But it's easier to follow on a rigid body. So that's a little bit of a highlight of where we're going, what we're going to be doing. Some of you guys are even more tired than myself. That's amazing. Must be getting close to the end of the semester. Okay. So, this is the outline. Today we're definitely covering stability definitions. We're also going to be going through Lyapunov functions, and let me just get this one thing going. There we go. So, the Lyapunov functions is a means to... Okay, good. Lyapunov functions are a means for us to actually argue stability. With linear stability analysis, once you turn everything into a spring-mass-damper system, you have the classic equations, state space forms, frequency space forms. You don't need to [inaudible] transform into that thing. You look at roots, the Nyquist Stability Theorem, root locus plots, all those different tools that you have, right? We're going to be talking, though, a lot about nonlinear systems, and Lyapunov functions are basically an energy-based method. And it's nice because it doesn't have a requirement that the states are, somehow, linearly dependent on other stuff. You can model things in very general form. So, it's a very powerful theory. You see it a lot in nonlinear control. But the question is always: what Lyapunov function do you pick? And that's where some of the magic and art comes into play. In this class, I'm not teaching you a whole course of Lyapunov theory. I'm essentially giving you the CliffsNotes version, you know, the abbreviated summary. This is the key thing that you need to know.
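To make the 'energy-based method' idea concrete, here is a minimal sketch, with assumed inertia and gain values: take the rotational kinetic energy V = ½ωᵀIω as a Lyapunov candidate and check numerically that a simple rate-damping feedback u = −Pω makes its rate negative, with no linearization anywhere.

```python
import numpy as np

# Assumed values, for illustration only.
I = np.diag([10.0, 8.0, 6.0])   # principal inertias [kg m^2]
P = 2.0 * np.eye(3)             # positive-definite rate-feedback gain

def V(w):
    """Lyapunov candidate: rotational kinetic energy, positive for w != 0."""
    return 0.5 * w @ I @ w

def V_dot(w):
    """Rate of V along Euler's equations with u = -P w.
    The gyroscopic term contributes w . (w x I w) = 0, so V_dot = w^T u."""
    u = -P @ w
    return w @ u
```

V is positive away from the origin and V̇ is negative there, which is the shape of the argument we'll keep reusing: pick a positive-definite measure, design the control so its rate is negative, and stability follows without ever solving the nonlinear equations.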
In particular, this is the stuff that relates to the attitude controls we are doing, you know? I still encourage you, if you like these things, to take a full class on nonlinear control. There's a lot more to this stuff than what I'm showing here, but I'm showing the key things. And in particular, there's always the trickery of how you come up with a good velocity-based measure or attitude-state-based measure for the attitude problem, because we're working on the SO(3) group, right? The attitude errors can only go up to 180 degrees; any more than that and it gets closer again. How does that manifest itself in these functions? I'm going to show you some very nice candidate functions, and we'll be deriving some of the math. And then it almost becomes plug and play. You can pick these coordinates and this rate measure and apply it to this tracking problem or regulation problem, kind of like Lego. You can, you know, build the control you want. So, right now we're just going back to the building blocks, essentially. Then we're going to be applying these building blocks with a nonlinear feedback control on the spacecraft. We're not doing any estimation in this class; there's simply no time. Estimation is a huge thing unto itself. That's why we have classes on estimation; that's where you do that. Here, we assume we have all the states we need, and now we're working on the dynamics and the control and the stability, kind of wrapping everything together. We'll talk a little bit about Lyapunov optimal control, what that means, but also do some examples there on different kinds of control strategies that I'll show. That's more for illustration purposes.
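The 180-degree wrap-around on SO(3) shows up directly in the MRP coordinates. A small sketch of the standard shadow-set trick: whenever |σ| exceeds one (a principal rotation past 180 degrees), switching to the shadow set σˢ = −σ/|σ|² describes the same attitude the short way around.

```python
import numpy as np

def mrp_shadow(sigma):
    """Map an MRP set to its shadow set: sigma_s = -sigma / |sigma|^2.
    Both describe the same physical attitude."""
    return -sigma / (sigma @ sigma)

def short_rotation(sigma):
    """Return whichever MRP set has norm <= 1, i.e. the description
    corresponding to a principal rotation of at most 180 degrees."""
    return mrp_shadow(sigma) if sigma @ sigma > 1.0 else sigma
```

This is what keeps an attitude error measure from growing without bound: past 180 degrees the error genuinely gets smaller again, and the coordinates we feed into the Lyapunov candidates have to respect that.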