0:08

So, let's start with this briefly.

Up to now, we've talked about stability.

We made it global, we made it asymptotic, we made it robust if we have external unmodeled disturbances.

What we haven't addressed is actuation limitations.

I think you mentioned this too about torque limitations, right?

At some point you're going to saturate, where a thruster can only be full on; there's nothing more that you can do, that's as big a torque as you can get.

Right? So, how do we deal with that?

That's a different nonlinear phenomenon that often happens with spacecraft.

So, now we want to deal with what's called Lyapunov optimal feedback.

It's a different kind of a way to look at it.

We're still going to build on Lyapunov control theory: we build the Lyapunov function, take its derivative, and we have to prove it's going to be at least negative semi-definite, right?

But we're going to deal with control solutions that aren't just continuous.

So far, we had Q was minus a gain times, you know, the rates.

That's a nice linear function.

But if the rate errors get huge, your control gets huge and you would saturate.

How do we handle this?

And there's also actually a related homework problem you're currently working on that kind of leads into this spirit a little bit.

So, let's go back to a really general dynamical system, and then we'll specialize it for the spacecraft again.

But I want to show you these theories actually apply in a much more complex way.

I want to bring my rates to zero.

That was the original rate feedback.

The q's are my states: that could be the multi-link angles, it could be your attitude, it could be your position, whatever coordinates define your dynamical system.

And the q-dots would be those coordinate rates, essentially.

And then if you do Lagrangian dynamics or something, you end up with this big equation.

Details aren't important; it's just that it has this form.

And we went through this process already. We said, 'hey, we can make this kinetic energy', then a bunch of math later, this is your work-energy principle: the rates times the control effort has to be equal to your power equation.

So we have to come up with a control such that this is stabilizing, and what we picked was, I think, a gain matrix, but I'll make it diagonal here.

So my controls are minus a positive gain times my rates, and I can guarantee now this system is globally asymptotically stabilizing, to get the rates to zero, right?

That's our goal, only the rates.

But with this control, if my rates go to infinity, my control goes to infinity. And that's an issue.

So, how can we modify this? Or are there better ways?

This just gives you one control performance.

Is this the best we can do?

If you have X amount of control authority, are you using it to its maximum capability in this case?

And that's something that actually leads to the Lyapunov optimal control strategies.

So that's what we want to look at next.

What if our control authority is limited?

So, the first approach, which is very popular actually, is you look at the system and go, 'you know what, how unlucky do you feel? How bad could this error get?'

And it's often driven by things like tip-off velocities, or if you lose communication for a certain amount of time and have these disturbances, how bad could this tumble get?

You know, you come up with some bound and say, 'that's the worst tumble I have to deal with', right?

If I then pick the worst-case tumble, I have to pick a feedback gain such that I never saturate.

In that case, this holds. I can guarantee stability.

But what have I done?

I've actually probably reduced my feedback gain to compensate- You're basically avoiding saturation.

You're dealing with saturation by never hitting it.

But the consequence is, you've reduced your performance, you've reduced your gains.

So where I could have settled in 30 seconds, now I'm going to settle in 30 minutes, and I did that by bringing down my gains, all the gains, so the control requirements never flat-lined, you know, they never hit that limit.

So this can work, and it's done actually quite often; people are very paranoid about saturation.

What I want to show you next is other approaches where we can let it saturate, and in several conditions still guarantee stability on the system.

Right? So this would work, but there's a performance hit; it limits how much you can do. We can do better.

I can still saturate, guarantee stability, and detumble in a shorter amount of time than what I get with this.

So, this is true for all q dot; that should be up there.

So that would have worked, but the key result is reduced performance.

That's our worry.
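As a quick sketch of this gain-sizing idea (the names and numbers here are just illustrative, not from the lecture slides):

```python
# Sizing a rate-feedback gain so u = -P * q_dot never saturates,
# given an assumed worst-case tumble rate. Illustrative names only.

def max_nonsaturating_gain(u_max, q_dot_worst):
    """Largest gain P such that |P * q_dot| <= u_max whenever
    |q_dot| <= q_dot_worst (the assumed worst-case rate)."""
    return u_max / q_dot_worst

# 1 N*m of authority, assumed worst-case tumble of 0.25 rad/s:
P = max_nonsaturating_gain(1.0, 0.25)   # P = 4.0
u = -P * 0.25                           # control at the worst rate: -1.0
```

The price, as noted above, is that a large assumed tumble forces a small gain and hence a slow settling time.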

So here's the second approach.

We have limited actuation; our control can only go so big.

And for stability, what we really need to guarantee is that V dot is as negative as possible.

If we can make it negative definite, fantastic, but you just want it to be negative.

If it's negative, you've guaranteed stability.

It doesn't have to be a minus-gain-Q-dot-squared form.

You can have different forms; as long as it's negative, that's all Lyapunov theory requires. There are no smoothness requirements on this one, at least here.

So this is a first-order system. Let's focus on that.

What will make my V dot negative?

And the cost function that we have, that's our V dot; so Lyapunov optimal control is designed to make my V dot as negative as possible.

And of course, if you have unconstrained control, you know, u equal to minus K sigma minus P times the rates, then to make it as negative as possible, you make those gains infinite.

And now it's really negative.

Right? But we can't do that, because we have limited actuation.

So, this is done as a constrained problem.

Right? You can only make Q so big.

So then the question is, what do you make Q such that J, which is our cost function here, the V dot, is as negative as possible?
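In symbols, with Q the generalized control force, the instantaneous problem being described is roughly (a sketch in the lecture's notation):

```latex
J \;=\; \dot V \;=\; \sum_i \dot q_i\, Q_i ,
\qquad
\min_{Q}\; \dot V \quad \text{subject to}\quad |Q_i| \le Q_{\max} ,
```

whose pointwise minimizer on each axis is $Q_i = -Q_{\max}\,\mathrm{sgn}(\dot q_i)$, the saturated control discussed next.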

If we look at this now, we come up with this control strategy.

In the end I've got this max limit.

I can do one Newton-meter of torque; that's all I can do.

Fire both thrusters and I get a pure couple torque, right? I get one Newton-meter.

So I define my control torque to be the max torque that I can generate times the sign of the rate error.

So whether I'm tumbling at one degree per second or five degrees per second, this doesn't care.

It only feeds back on the sign of the rate error.

And that says, 'hey, you are tumbling in a positive sense, I need to torque in a negative sense'; that's where the negative sign comes in, essentially.

And it's going to torque at maximum capability.

It's going to hit it hard.

If you plug this in here, Q dot times this control, there is a minus sign, Q max comes in, and you get Q dot times sign of Q dot.

Which, of course, is guaranteed to be positive, right?

Regardless of size. Yes, sir.

Does this mean that if you are minimizing J, we want the control to go as fast as possible as...

Yes. You are maximizing your perform- you're making your-

End of time.

Maybe we want to minimize the integral of the energy or fuel consumption or something like that.

Then it wouldn't be a Lyapunov optimal one.

Lyapunov optimal really is defined as: you've made your V dot as negative as possible.

So that's the time derivative; at this stage, I'm picking my steepest gradient.

It doesn't give you a global optimization, you know; there are whole books on global optimization and trajectory design.

Maybe moving left first helps you get there quicker; I don't have that kind of an optimization.

It's: at this instant, what control solution will make my Lyapunov rate as strongly negative as possible, so I'm coming down as steep as I can.

So it's kind of a local optimal thing in that sense.

Yup. So you can do that, and then this is what you get.

And, well, actually, let's talk about this then.

So that's the control that we can implement; this is very much a bang-bang.

Right? This just says, 'are you positive or negative?'

And then you either hit positive, or hit negative.
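A minimal sketch of that sign-based control, assuming a scalar rate measurement and a torque limit u_max (names are illustrative):

```python
import math

def bang_bang_control(q_dot, u_max):
    """Lyapunov-optimal saturated control: torque at full authority,
    opposing the sign of the measured rate."""
    if q_dot == 0.0:
        return 0.0
    return -u_max * math.copysign(1.0, q_dot)

# Tumbling positively at any rate -> full negative torque:
u = bang_bang_control(0.02, 1.0)   # -1.0
# The V-dot contribution q_dot * u is then never positive:
vdot_term = 0.02 * u               # -0.02
```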

Any practical concerns with that one? If you do a bang-bang?

Bryan.

It's going to be really jerky?

There is that.

If you hit it with an impulse, you might be exciting unmodeled torques; that's a good point.

You might be sloshing fuel that you didn't expect, you might be sloshing panels, flexing panels; that could be happening, so that could be a concern with sharp bang-bangs.

Okay. What else could be an issue?

If you miss your target.

Well, if I have continuous control with this theory, I would hit the target.

But this Q dot comes from rate gyros, if it's an attitude problem.

How good are your rate gyros? Are they perfect?

So, add a little bit of epsilon.

Right? So that's kind of where we can think of this.

Think of a rate gyro measurement: you know, 'hey, we had a maneuver, it's kind of noisy, but we had a maneuver'.

Now you're getting close to zero, and close to zero it's kind of doing this.

Right? I can't even draw the Gaussian noise too well, but it will do some weird stuff.

This point means: hit it full on one way.

This point is below: hit it full on the other way.

And there's no real rate driving this.

There's no real error driving it; it's purely measurement errors.

Right? So, if you're driving with this kind of sign function, it's mathematically optimal, it made my V dot as negative as possible, but there are some really strong practical considerations to implementing it.

You know, an epsilon off and you're hitting it hard.

Now, you can fix this, for example, with dead zones. Some people do that, and they say, 'look, any error less than some value is close enough, hopefully less than the noise level', and then you go, 'that's good enough'.
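A sketch of that dead-zone fix, with eps standing in for the chosen noise threshold (names are illustrative):

```python
def dead_zone_bang_bang(q_dot, u_max, eps):
    """Bang-bang control that ignores rates inside the dead zone
    |q_dot| <= eps, so gyro noise near zero no longer triggers
    full-authority reversals."""
    if abs(q_dot) <= eps:
        return 0.0
    return -u_max if q_dot > 0.0 else u_max

u_noise = dead_zone_bang_bang(1e-4, 1.0, 1e-3)   # 0.0, noise ignored
u_real = dead_zone_bang_bang(0.05, 1.0, 1e-3)    # -1.0, real tumble
```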

But, you know, then that limits how far you can go. Or, and we'll start here next time, there are other modified ways where we actually blend different behaviors, with a nice smooth response and a saturated response.

But I can still guarantee the V-dot-being-negative part.

So we start to construct different ways to assemble these things.

The issue is the response around zero.

Infinitesimally to the left: whack, maximum response.

Infinitesimally to the right: whack, maximum response.

Right? That's too sensitive.

But we liked the part that says, 'hey, big error': basically, if you're tumbling to the right, you need to stop it and just give max torque to the left, to arrest it as quickly as possible.

So is there a hybrid approach that can still guarantee stability?

That's what you see outlined down here.

In fact, what we have here is, we still have a saturated response if we have large errors.

Well, you can tune it here: I'm going linear up to my control authority, and then I am saturating.

Right? And at that point, if it asks for more than one Newton-meter, I just give it one Newton-meter.

And the nice thing is, with this control, if you plug that Q in here, you can still guarantee that V dot is always negative.

This one would not be Lyapunov optimal, because you're not making V dot as negative as possible.

For smaller tracking errors, you end up with a linear control.

But then you can, say, handle saturation more smoothly.

So, if you have little noise levels, it scales: if I measure 10 to the minus 16, times a gain, it's going to ask me to torque in that direction, but only by a little bit.

Right? And you're not going to whack the whole system and excite all the modes.

So, you get a linear response coupled with the nonlinear one; that's a saturated function.
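A sketch of that linear-until-saturated hybrid (illustrative names and numbers):

```python
def saturated_linear_control(q_dot, gain, u_max):
    """Linear rate feedback u = -gain * q_dot for small errors,
    hard-clipped to +/- u_max once the request exceeds the authority."""
    u = -gain * q_dot
    return max(-u_max, min(u_max, u))

u_small = saturated_linear_control(0.0625, 4.0, 1.0)   # -0.25, linear regime
u_big = saturated_linear_control(0.5, 4.0, 1.0)        # -1.0, saturated
```

Tiny measured errors now produce tiny torques instead of full-authority hits, which is exactly the noise robustness argued above.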

Yes.

Can we modify the maximum value at which we switch between the two?

Yes. There are lots of ways you can do this in the end, because if you look at... Let's play with some ideas here.

Now, I need to move this over, hold on. There we go.

So, if I'm doing Lyapunov optimal, I get a response that's basically like this.

Now you're a little bit- or actually, I think I got it backwards.

It was a minus sign that we had, right?

So if this is Q dot, Q was minus Q max times the sign of Q dot.

If Q dot is positive, my control authority should be negative, right?

And if it's negative, the control authority should be positive, and that's the Q max value.

And that's going to guarantee that your V dot is always negative.

But if you look at this function, the V dot that we had was simply Q dot times Q.

That's what we have here, right? Let's pretend we only have one degree of freedom; otherwise there's a summation involved.

You can do it for one of the degrees, or you can do it for all the degrees individually with this approach.

So all I need is to have the right sign on this.

I need the control to have the opposite sign of Q dot.

Now say you come up with a control and go, 'you know what, I don't like using max force; maybe I want to use half of it'.

You could do that: this kind of a control would also be a saturated control, and it would be asymptotically stabilizing, but it's not Lyapunov optimal, because it didn't hit the maximum control authority.

You could have made V dot more strongly negative, but maybe you don't like that, because you'll be shaking around the astronauts too much, or the payloads, or, you know, flexible structures get excited and so forth.

So that can be one.

So let's look at the hybrid approaches.

So, if we do one of them, basically it says: you're using a linear response until you saturate, and then you saturate.

Right?

So, that was the one I showed you, the modified one, the purple line that's really there.

So, Kevin, what was your approach then? You wanted to do what?

If you modify, like, the point at which we go to saturation... I mean, do we need the control to be continuous, or-

No, this doesn't require continuity there, because the continuity we need is in V, not V dot.

So you can actually switch controls, and V dot doesn't even have to be smooth; it just has to be negative, and you have guaranteed stability.

So we can switch between two controls.

So you could, if you wanted to, go up to here, and then from here on go, 'good enough'.

Maybe you have the linear part only up to here, to handle noise around here, and then if you get past it, jump up.

But then you deal with a discrete jump in your control authority that might, you know, excite unmodeled dynamics; that's probably the biggest concern I would have.

But what I'm showing here is a single- I'm doing a linear feedback with one saturation function.

In the homework, actually, there's the one with the robotic system where it's just X dot equal to u, and you're trying to track something, and then you have a maximum speed at which you can control.

It can't go more than some one meter per second or something.

There we use an atan function; that's a very popular function, because really all I need here is that this function, the control, is odd: if my error measure is positive, the control is negative.

Right? Or vice versa.

That's all you need here to guarantee it: if this times this is always negative, I'm guaranteed it's stable.

Right? So that allows you now to design all kinds of responses, and that's why the roboticists love X dot equal to u, because they get to shape and do this exactly how they want.

We don't have X dot equal to u; torques and spacecraft are different.

So this one is just linearly saturated, and then you have a kink when you hit saturation.

The atan function would look something more like this applied to this problem.

It's an infinitely smooth response.

But, you know, the arctangent function, around the origin, linearizes to basically a linear response.

So it's kind of cool, because we can pick: okay, for small responses, this is the slope I want, that's the stiffness I want for disturbance rejection, for closed-loop performance considerations, all of that stuff.

But then as it gets large, it's going to smoothly approach that limit and never jolt the system.

So, you can modify this actually in a lot of ways and deal with saturation.
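One way to sketch that arctangent shaping; the scaling below is one assumed choice (not necessarily the homework's exact form) that makes the slope near zero equal to -gain and the limit equal to +/- u_max:

```python
import math

def atan_control(q_dot, gain, u_max):
    """Smooth saturated feedback: behaves like -gain * q_dot near the
    origin, and smoothly approaches +/- u_max for large rates."""
    return -(2.0 * u_max / math.pi) * math.atan(
        math.pi * gain * q_dot / (2.0 * u_max))

u_small = atan_control(1e-3, 10.0, 1.0)   # close to -0.01 (linear regime)
u_large = atan_control(100.0, 10.0, 1.0)  # close to -1.0, never beyond it
```

Because the function is odd, the q-dot-times-control product stays non-positive, which is all the V-dot argument needs.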

If you're dealing with a first-order system, all we need is V dot to be negative.

If it's negative definite, we have asymptotic stability.

Right? You could switch between controls, as long as this is true, and it's still guaranteed, you know?

We don't, typically, because, again, I'd have to deal with the jarring every time I'm switching, which you could smooth out with a filter and stuff.

But then you don't have to deal with it.

My most popular one is probably the green line, the atan function; we've used that quite a bit to deal with saturated responses on these kinds of things, especially when it's a first-order system like this.

Does that make sense?

So being first order is nice.

It's a really simple V dot, and it gives you huge amounts of freedom in how you want to design and shape that response and deal with saturation.

The linear control we had was this one, just extended, but it's not necessarily realistic.

Right? So... it's nice.

So, let's see.

So now we're going to switch from a general mechanical system and apply this specifically to spacecraft.

We have rate control we want to talk about, and we also have the attitude: you know, rate and attitude, or just rate control.

Those are two considerations, and the rate control is a first-order system, as you've seen.

There we can make very strong arguments.

The attitude and rate control gets a lot more complicated, and sometimes we can come up with conservative bounds for stability, but they're conservative, as you will see.

So, this is typically the setup.

This was the control we derived at the very beginning for our tracking problem.

And so, u_us means my control authority u in the unsaturated state.

Right? If you had infinite capability, this is what you should do, and if you plugged it into the V dot, this is what you would get: this whole thing would be minus delta omega transpose P delta omega.

But that assumes you can really implement this control.

So, now we can look at what happens if we saturate.

And here are some of the challenges.

So, just like that modified one, this basically gives me the control up to the point where I've reached my saturation limit, and then I'm enforcing a hard saturation limit: I'm not giving the six Newton-meters, but I'll give you the max of five.

The key is you're giving it the right sign that you need.

That comes out of that control.

So this u_si, that is the... Actually, that should be u_max, I believe; that shouldn't be u_si, that's a typo.

Okay. So that should be the u_max.

You give it max with the right sign.

So I should have had six, but I give five.

If you wanted minus six, you wouldn't give plus five; you would give minus five, the closest neighbor with the right sign.

Right? That's what you're doing.
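That 'closest neighbor with the right sign' rule is just a per-axis clamp; a sketch (illustrative names):

```python
def clamp_to_authority(u_unsaturated, u_max):
    """Clip each axis of the requested control vector to +/- u_max,
    preserving its sign (the closest realizable torque)."""
    return [max(-u_max, min(u_max, ui)) for ui in u_unsaturated]

# Asked for [6, -6, 0.5] N*m with a 5 N*m limit per axis:
u = clamp_to_authority([6.0, -6.0, 0.5], 5.0)   # [5.0, -5.0, 0.5]
```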

And you can come up with an argument here.

If you look at this function, for V dot to become negative, your maximum control authority has to be larger than all these other terms combined.

Then you can basically overpower whatever their values are, and ensure that whatever's left is done in a way such that this term, you know, if this is five, I need something slightly negative to make five times a negative number negative.

And then V dot becomes negative. And this is a vector, you know, a three-by-one, so you'll have three of those terms.

So as long as your control authority exceeds all these individual terms, I can always come up with a control that at least has the right sign to drop my V dot to a smaller value, and I can guarantee stability.

Now, performance is again another question.

But it turns out this is a very conservative bound.

If you can do that, that's great, but you're probably being overly conservative now with the gains you pick and how you can perform.

So, yes, for saturated control it's actually really tough to guarantee areas of convergence analytically.

People often resort to numerical tests: Monte Carlos, running this stuff.

And then you get exactly what the numerical response was: for all these cases, this is how it converged.

So let's look at something simpler than the full-on reference tracking.

Reference tracking is tough, too, because your reference motion impacts things: you know, is my control going to be less than that bound?

Well, are you tracking something that's moving very slowly? Or are you tracking something that's spinning very quickly?

Then the control authority is quickly going to be exceeded trying to track it.

Omega_r was part of that bound discussion.

So, we'll look at the regulation problem.

It's an easier way to get insight, and this is an area where the MRPs will actually have some nice properties as well.

So, for a classic unsaturated control, just a PD, we proved actually that we didn't have to feedback-compensate for the omega tilde I omega; that term vanished because it was omega transpose omega tilde I omega.

So you can include it or not. It impacts performance, but not the stability argument.

So we use a very simple PD control; we know it's globally asymptotically stabilizing.

And then, you look at the corresponding V dot that you get with the classic Lyapunov function we had last time.

This is what you get.

So now you can see here that u has to compensate for this, and then add a term that makes this thing at least negative semi-definite, right?

So, we can see now similar bounding arguments.

If K times sigma is always less than the maximum control authority, you can guarantee, you can come up with a control u that's going to make this V dot negative, and therefore guarantee stability.

Well, okay.

But because we are dealing now with the dual MRP set in these implementations, you definitely want to be switching MRPs, because that means my attitude measure is going to be bounded to 180 degrees.

Any more than 180, and I would switch to the other set.

That means my MRPs are actually upper bounded by one.

One is the worst attitude error you can have.

Which is very convenient when we're doing gain design.

I know, you know, what's the worst case I could have on this.

How much effort would the proportional feedback require of the control system here?

Well, basically, this could be at worst one, so K, essentially in Newton-meters, tells you right away: with this gain, 180 degrees off, you would ask for K Newton-meters.

So what you can see here is that, with MRPs, as long as K is less than the maximum Newton-meters that your torquing can do, you can guarantee that you can always stabilize the system. Guaranteed.

Again, this is a somewhat conservative thing, but it's a nice bound.

It's a very simple bound where we take advantage of the boundedness of attitude errors, and the MRP description gives us a very elegant one: the worst error is one, in MRP space at least, right?

So, that's the way you can bound it.
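Since a switched MRP attitude error satisfies |sigma| <= 1, the worst-case proportional torque is just K, which makes the gain check trivial; a sketch (names are illustrative):

```python
def mrp_gain_is_safe(K, u_max):
    """Conservative check: with MRP switching, |sigma| <= 1, so the
    proportional term never asks for more than K Newton-meters.
    K < u_max then satisfies the bound used in the stability argument."""
    worst_case_torque = K * 1.0   # |sigma| = 1 at 180 degrees off
    return worst_case_torque < u_max

safe = mrp_gain_is_safe(0.5, 1.0)       # True
unsafe = mrp_gain_is_safe(7.11, 1.0)    # False
```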

So that was a regulation, attitude-and-rate thing we were looking at.

Now, let's look at just the rate regulation problem.

So, here the goal is always to bring the omegas to zero.

I don't care what the attitude is.

And as we saw with the mechanical system, you can do this in a variety of ways.

If you just want to see it here: I have the max.

Okay.

Here I've got the hybrid solution.

This one is not Lyapunov optimal, because I've got a linear approximation, so I don't have the noise sensitivity issues that a pure bang-bang would have.

You could replace this whole thing with an atan function if you wished, or other limits.

But I'm just going linear up to the max and then saturating at the max.

And you can do this control, and in the end for this system, let's say of your three axes, two of them are not saturated but one of them is saturated.

In that case, big M would be two: on two of the axes you're just applying a linear control, and their contributions are guaranteed negative definite.

And the one that is saturated applies this control, which gives a different Lyapunov rate function.

But this is where you can mix and match nicely, because whether you're saturated or not, you have one V dot or another, but both V dots, regardless of saturation, are negative definite expressions.

So the sum of two negative definite expressions is still negative definite.

So that's kind of the mathematical structure.

This is why you could replace this with something else, a different saturation limit, but you're always guaranteeing this property, that V dot is negative, and that's what guarantees stability.
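A sketch of that per-axis mix: linear on unsaturated axes, clipped on the saturated one, with every axis still contributing a non-positive V-dot term (the values are illustrative):

```python
def rate_regulation_control(omega, P, u_max):
    """Per-axis rate feedback u_i = -P * w_i, clipped to +/- u_max.
    Each axis keeps w_i * u_i <= 0, so the total V-dot stays negative
    whether an axis is saturated or not."""
    return [max(-u_max, min(u_max, -P * w)) for w in omega]

omega = [0.25, -0.75, 0.125]                     # rad/s; middle axis saturates
u = rate_regulation_control(omega, 2.0, 1.0)     # [-0.5, 1.0, -0.25]
vdot_terms = [w * ui for w, ui in zip(omega, u)] # all <= 0
```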

So, this is actually a really handy control.

The simple feedback on the control torque is minus a gain times your angular velocity measures.

We argued already this is completely insensitive to inertia modeling errors, because the inertia doesn't appear.

And now you can actually also show that, hey, even if your measured omega is wrong (you're measuring one radian per second but it's really two), it won't converge quite as quickly as it should, but it's guaranteed to be stable.

It's guaranteed to converge.

If you just measured half the rate that you actually have, it may take double the time to converge, but you're still guaranteed it will converge. Because that's often the issue: people think of stability as somehow being tied to performance; those are two separate questions.

Here the stability argument is always the same; the performance will be different if you measure the wrong omegas.

So, as long as you don't get the sign wrong; that's the one error that's going to kill you if you [inaudible] tumbling.

But this is a great application for orbital servicing: we're talking about picking up pieces of debris, servicing satellites, docking with them, picking up boulders off asteroids.

We really don't know what the inertia and mass properties are, and you're picking it up and you want to first stabilize yourself.

You don't want to throw in stuff that requires knowledge of inertia and knowledge of this, knowledge of that, very precisely.

You want something that's really robust, and this simple rate control allows you to prove amazing levels of robustness in how to stabilize these kinds of tumbles.

So, unknown inertia, which we just mentioned: very robust.

All right, then. So here's a quick numerical example to show you how this all comes together.

You can see the initial conditions: we're going to have large attitude, large rates, the principal inertias, three different gains on P, one on K, and the maximum torque is one, just an easy number.

And I'm using the classic: it's just the proportional-derivative feedback, K sigma and P omega here.

So if you run this now, you can see the response had big rates.

It tumbled actually one, two, three, four, five times before it stabilizes.

But it's always fighting and doing this.

If you look at the control authority, I'm actually saturating all this time.

And despite saturating, I am still actually converging and working, and the rates, you know, the big tumble rate took a long time to bring down, but once it all comes together, it all stabilizes nicely.

What I want to illustrate here, though, is that I picked gains: this K is 7.11.

U max is one.

And we said, if we made K less than U max, I could guarantee this would always stabilize.

Which it would have; it would have taken a lot longer to stabilize, because the gains are less, but it would have worked.

Yes, sir. You've got a question.

In these examples, are we applying the previous controls we saw? Like, are we switching between saturation and then a linear part in the middle?

Yes. So we're doing u equal to minus K sigma minus P omega, unsaturated.

And then if I hit one on one of the axes, I'm just letting that axis saturate individually.

That's the control that I'm applying.

Yup. Good. It's basically this, and then we saturate each axis to this value.

So you can see here, I'm grossly violating that one condition I had that says, 'hey, if this were less than one, I would be guaranteed, analytically, that this would always be completely stabilizing and V dot would always be negative', and life is good. But that's not the case here.

So this is also an illustration that this is an 'if statement', right?

If V dot is negative definite, I have guaranteed asymptotic stability.

If it's not negative definite, I'm not guaranteed instability.

And that's why, with these bounds that we have, what I'm trying to illustrate here is how conservative they are.

So, when I do this response, I'm taking the time scale that was 300 seconds, and I'm showing you roughly 100 seconds' worth here, zoomed in.

Right? I'm showing you the attitude response, and superimposed, I'm shading regions based on all my current states.

You know, if you go look at your V dot function, what was its form? I'm computing the V dot that comes out of the actual states.

And if I had a guarantee of stability, V dot would always be negative.

But because of that high gain K, you can see there are these gray regions that actually indicate where we have positive V dots.

So, temporarily, our error measures actually increased with that gain.

Right? So in this case I wouldn't have an analytical guarantee of stability.

But you can see from the performance, it still behaves extremely well, and stabilizes.

So this is one of the lessons learned with this stuff in Lyapunov theory.

These are sufficient conditions for stability, but they're not necessary, you know; there's no if-and-only-if in this kind of stuff.

So, these analyses can often give you a guideline: 'hey, if I'm within here, I'm good'.

But these types of Lyapunov energy-based controls have proven to be very, very robust, including under saturation.

So, if you're designing them, I would say: if you can live within the natural bounds and guarantee stability for what you need from a performance point of view, great; but if not, try to push them too.

I bet they're going to work quite well.

But then you have to use numerical methods to, basically, run lots and lots of cases to verify: yup, within this area, within this domain, within this neighborhood, all of these things converged, and I'm good for this mission.

Right? You still don't have the analytic guarantee, unless you invoke other fancy math.

This holds for unsaturated control as well.

Anyway, it kind of shows you saturation is hard.

As a summary: with the MRPs being a bounded measure, the worst error is one, and we can take advantage of that in some cases to come up with bounds.

But just keep in mind these tend to be very conservative bounds.

The overall system tends to be actually far more stable than what we're predicting with these conservative bounds.
