0:04

So, we're doing nonlinear spacecraft control.

I get a lot of arguments from colleagues who say, 'ah, but our control is linear,

and look, I can guarantee all these things, and that's better.'

Yes, but there's a problem with that statement.

Life is not linear.

Spacecraft motion.

We've seen the equations of motion of an object.

As soon as you get to the I-omega terms and have the MRPs in there,

all of a sudden everything cross-couples and it's a really nonlinear system.
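Those coupled equations can be sketched in a few lines. This is a minimal illustration only, assuming a hypothetical diagonal inertia matrix and MRP attitude coordinates; nothing here is specific to a real spacecraft:

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical inertia matrix (kg*m^2), for illustration only
I = np.diag([10.0, 8.0, 6.0])
I_inv = np.linalg.inv(I)

def eom(sigma, omega, u):
    """Rigid-body attitude dynamics: MRP kinematics plus Euler's equations.
    sigma: MRP attitude, omega: body angular rate, u: external (control) torque."""
    # MRP kinematics: sigma_dot = 1/4 * B(sigma) * omega  (nonlinear in sigma)
    B = (1 - sigma @ sigma) * np.eye(3) + 2 * skew(sigma) + 2 * np.outer(sigma, sigma)
    sigma_dot = 0.25 * B @ omega
    # Euler's equations: I * omega_dot = -omega x (I * omega) + u  (gyroscopic coupling)
    omega_dot = I_inv @ (-np.cross(omega, I @ omega) + u)
    return sigma_dot, omega_dot

# Example: zero attitude, a small spin about the first axis, no torque.
# The gyroscopic term vanishes here because omega is parallel to I @ omega.
sd, od = eom(np.zeros(3), np.array([0.1, 0.0, 0.0]), np.zeros(3))
print(sd, od)
```

The cross-product term `-omega x (I omega)` and the `B(sigma)` matrix are exactly where the cross-coupling and the nonlinearity enter.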

So, you can linearize, and we will talk a little bit about that.

And that immediately opens up all the linear analysis tools

that you can use for doing spacecraft control.

But if you're linearizing, you have to linearize about some point in your state space.

So, is it this equilibrium, like the pendulum hanging down,

and is this thing going to be stable?

Or are you putting it upside down,

and it's going to be unstable?

But, you know what?

I can stabilize this one with control.

And that's what we're getting into with these lectures.
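The pendulum picture can be made concrete numerically. A rough sketch, with a made-up g/L value and made-up feedback gains:

```python
import numpy as np

g_over_L = 9.81  # hypothetical g/L value for illustration

# Linearization of theta_ddot = -(g/L) sin(theta) + u about each equilibrium
A_down = np.array([[0.0, 1.0], [-g_over_L, 0.0]])   # about theta = 0 (hanging down)
A_up   = np.array([[0.0, 1.0], [ g_over_L, 0.0]])   # about theta = pi (inverted)

print(np.linalg.eigvals(A_down))  # purely imaginary pair: marginally stable
print(np.linalg.eigvals(A_up))    # one positive real eigenvalue: unstable

# Stabilize the inverted case with state feedback u = -k1*dtheta - k2*dtheta_dot.
# Gains are arbitrary here, chosen so k1 > g/L and k2 > 0.
k1, k2 = 20.0, 5.0
A_cl = np.array([[0.0, 1.0], [g_over_L - k1, -k2]])
print(np.linalg.eigvals(A_cl))    # both eigenvalues now in the left half-plane
```

Same physical system, two equilibria with opposite stability properties, and feedback moves the unstable one's eigenvalues into the left half-plane.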

Control, to me, is a way to take the natural dynamics and go, 'oh,

that's close, but not quite what I want,'

and then modify it with some feedback mechanism and say, 'well,

you know, there's a motor here and that voltage will be X amount

if I am over here,' right?

So, you're basically just redoing what nature gave you,

modifying it, and putting it in a form that now gives you a stable system.

And what does stability mean?

That's what we'll be getting into a lot.

So, to get going with this: the nonlinear side is what we're dealing with.

It's spacecraft motion.

We are doing 3D attitude.

We're not doing much in 1D, maybe some examples and so forth,

but we are going to go through a bunch of stability definitions here.

And, you know, what you get out of this class,

out of this section at least, is that in the end we have a control

where we can guarantee, with this algorithm, no matter what the attitude is,

it is going to recover and track something.

It can track an equilibrium point; that's typically called 'the regulation problem'.

You're just trying to drive all the states to zero.

If the Hubble is supposed to point in this direction,

you can always define the inertial frame such that it is lined up

with your desired direction.

So then you just drive all the x's, all the states, to zero.

That's the regulation problem.
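As a minimal sketch of the regulation idea, here is a scalar double integrator (a stand-in, not the attitude dynamics) driven to zero with hypothetical proportional and derivative gains:

```python
# Regulation: drive all states to zero.
# Toy plant x_ddot = u with made-up PD gains, integrated with forward Euler.
K, P = 1.0, 2.0        # hypothetical feedback gains
x, xdot = 1.0, 0.0     # initial error state
dt = 0.01
for _ in range(2000):  # 20 seconds of simulation
    u = -K * x - P * xdot   # feedback drives (x, xdot) -> (0, 0)
    xdot += u * dt
    x += xdot * dt
print(x, xdot)  # both states have converged close to zero
```

The closed loop here is x_ddot + 2 x_dot + x = 0, a repeated pole at -1, so the state decays to the origin; that convergence to zero is the whole content of the regulation problem.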

The tracking problem means, look, you're flying over the Earth,

you're going over this asteroid or the moon or Mars or something,

and you want to look at a particular point on there.

Well, then the attitude you have to hold will actually vary,

and maybe it speeds up, then slows down,

and then it has to speed up again to keep pointing at it.

So, there could be a completely general time-varying reference trajectory.

So, that's kind of like a tracking problem.

I'm not just trying to get your positions to all converge to the (0, 0, 0) coordinates;

you're saying, 'look, there's a path you have to follow'.

And then we can write our errors relative to that path, you know:

at time step zero, I'm supposed to be here; at time step one, I'm supposed to be here.

So, if you're at the right place here, great, you don't do anything.

But at the next time step you may have an error, right?

That's the tracking problem.
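The same toy double integrator can illustrate tracking: the feedback now acts on errors relative to a time-varying reference, with a feedforward acceleration term. The sinusoidal reference and the gains are made up for illustration:

```python
import numpy as np

# Tracking: follow a time-varying reference x_r(t) instead of regulating to zero.
K, P = 4.0, 4.0     # hypothetical gains; error poles are a repeated root at -2
x, xdot = 0.0, 0.0  # plant starts off the reference
dt = 0.005
for i in range(4000):  # 20 seconds
    t = i * dt
    xr, xrdot, xrddot = np.sin(t), np.cos(t), -np.sin(t)  # reference trajectory
    e, edot = x - xr, xdot - xrdot                        # tracking errors
    u = xrddot - K * e - P * edot   # feedforward acceleration + feedback on error
    xdot += u * dt
    x += xdot * dt
print(abs(x - np.sin(20.0)))  # tracking error after the transient has decayed
```

The feedforward term `xrddot` is what makes the error dynamics autonomous and stable; drop it and the controller is forever chasing the reference.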

And that works very easily in your mind for translational motion.

We're doing the same thing in attitude motion.

So, you will see us deriving a lot of these different properties,

and in the end we'll have something

that can actually globally asymptotically track any attitude reference trajectory.

But you have to have certain feedforward terms; the references, the kinematics,

all of that comes back in together again.

Okay?

The dynamics itself, the kinetics, we're keeping reasonably simple at this stage.

For the most part, it will just be a single rigid body.

And then that external torque that acts on it,

that is our control, which you could think of as being done through thrusters or something,

you know, some pure torque couple that's generating it.

Then, at the very end, when you do more complicated dynamics,

I'll show you the structure and how

it's really the same concept; it's just more math that goes in there.

But it's easier to follow on a rigid body.

So that's kind of a little bit of a highlight of where we're going,

what we're going to be doing.

Some of you guys are even more tired than I am.

That's amazing.

Must be getting close to the end of the semester.

Okay.

So, this is the outline.

Today we're definitely covering stability definitions.

We're also going to be going through Lyapunov functions,

and let me just get this one thing going.

There we go.

Okay, good.

So, Lyapunov functions are a means for us to actually argue stability.

With linear stability analysis, once you turn everything into a spring-mass-damper system,

you get the classic equations; there are state-space forms, frequency-domain forms.

You don't need to [inaudible] transform into that thing.

You look at roots, the Nyquist stability theorem,

root locus plots, all those different tools that you have,

right?

We're going to be talking a lot, though, about nonlinear systems,

and Lyapunov functions are basically an energy-based method.

And it's nice because it doesn't have a requirement

that the states are somehow linearly dependent on other stuff.

You can model things in a very general form.

So, it's a very powerful theory.

You see it a lot in nonlinear control.

But the question is always: what Lyapunov function do you pick?

And that's where some of the magic and art comes into play.

In this class, I'm not teaching you a whole course on Lyapunov theory.

I'm essentially giving you the CliffsNotes version, you know, the abbreviated summary.

These are the key things that you need to know.

In particular, this is the stuff that relates to the attitude control we are doing,

you know?

I still encourage you, if you like these things, to take a full class on nonlinear control.

There's a lot more to this stuff than what I'm showing here,

but I'm kind of showing the key things.

And particularly,

there's always the trickery of how you come up with a good velocity-based measure,

or attitude-state-based measure,

for the attitude problem, because we're working on the SO(3) group, right?

The attitude errors can only be 180 degrees, and any more and it gets better again.

How does that manifest itself in these functions?
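One candidate that handles this, the logarithmic MRP attitude measure plus rotational kinetic energy, can be checked numerically. The gains and inertia below are made up for illustration, and the algebra predicts that, for the regulation case with feedback u = -K sigma - P omega, the derivative comes out as V_dot = -P omega.omega:

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

I = np.diag([10.0, 8.0, 6.0])  # hypothetical inertia (kg*m^2)
K, P = 5.0, 10.0               # hypothetical feedback gains

rng = np.random.default_rng(1)
sigma = rng.uniform(-0.5, 0.5, 3)   # some MRP attitude error
omega = rng.uniform(-0.1, 0.1, 3)   # some body rate error

# Candidate: V = 2K ln(1 + sigma.sigma) + 1/2 omega^T I omega
V = 2 * K * np.log(1 + sigma @ sigma) + 0.5 * omega @ I @ omega

# Closed-loop dynamics with u = -K sigma - P omega
B = (1 - sigma @ sigma) * np.eye(3) + 2 * skew(sigma) + 2 * np.outer(sigma, sigma)
sigma_dot = 0.25 * B @ omega
u = -K * sigma - P * omega
omega_dot = np.linalg.inv(I) @ (-np.cross(omega, I @ omega) + u)

# Derivative of V along the trajectory
V_dot = 4 * K * (sigma @ sigma_dot) / (1 + sigma @ sigma) + omega @ I @ omega_dot
print(V_dot, -P * (omega @ omega))  # the two values agree: V_dot <= 0
```

The log of 1 + sigma.sigma is what respects the MRP geometry: it grows toward the 180-degree rotation and the attitude part of V_dot collapses to the clean K sigma.omega term, which the feedback then cancels.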

I'm going to show you some very nice candidate functions

and we'll be deriving some of the math.

And then it almost becomes plug and play.

You can pick these coordinates

and this rate measure, and apply them to this tracking problem

or regulation problem, kind of like Lego.

You can, kind of, you know, build the control you want.

So, right now we're just going back to the building blocks, essentially.

Then we're going to be applying these building blocks

with a nonlinear feedback control on the spacecraft.

We're not doing any estimation in this class.

There's simply no time.

Estimation is a huge thing unto itself.

That's why we have classes on estimation.

That's where you do that.

Here, we assume we have all the states we need,

and now we're working on the dynamics and the control and the stability,

kind of wrapping everything together.

We'll talk a little bit about Lyapunov-optimal control, what that means,

but also do some examples there on the different kinds of control strategies that I'll show.

That's more for illustration purposes.
