0:00

So with the last lecture, we actually declared success in terms of designing controllers: we use pole placement, we check controllability, and off we go. The big problem, though, is that we don't have x. When we do u = -Kx, x is in the formula, but we don't have it.

So, what about y? Ultimately, we don't have x. We have y coming out of the system, and somehow this y has to translate into a u. It's not enough to say that x translates into u, because we actually don't have x.

Well, here is the cool idea. I'm going to put a little magic block here, and the output of that block should somehow become x. Meaning, I would like to be able to take y, push it through a magic block, and get the state out. Now, I'm not going to get x exactly; in fact, I'm going to put a little hat on top of it. This is my estimate of the state. Meaning, I'm taking my sensor measurements y, and based on those measurements I'm going to estimate what x is, and I'm going to call that x hat. In fact, the magic block, the thing that allows us to get x hat from y, is called an observer.

So in today's lecture, I'm going to be talking about these observers and how we actually design them. Well, it turns out that the general idea behind observer design can be summarized under the predictor-corrector banner.

So, let's say that we have x dot = Ax.

Forget about u for now; that doesn't matter. And y = Cx. Well, here is the idea. The first thing we're going to do is make a copy of this system, and our estimator is going to be this copy. So I'm going to have x hat dot = A x hat, so my estimate is going to evolve according to the same dynamics as my actual state. This is known as the predictor, which allows me to predict what my estimate should be doing.

But that's not enough. What I'm going to do now is add to the model some notion of how wrong or right the estimate is. One thing to note is that the actual output is y, while the output I would have had if the state were exactly x hat is C x hat. So I'm going to compare y to C x hat. And in fact, what I do is add this piece to my predictor: x hat dot = A x hat + L(y - C x hat), where the difference y - C x hat tells me how wrong I am, and L is a gain matrix. This gives me a predictor and a corrector: the A x hat term is the predictor, and the L(y - C x hat) term is the corrector.

This structure is known as a Luenberger observer, named after David Luenberger. The point is that when you have this predictor-corrector pair, you have some way of hopefully figuring out the state, or at least a good estimate of the state, from the measurements y.
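As a sketch of how this predictor-corrector structure behaves, here is a small simulation. The matrices A and C, the gain L, the step size, and the initial conditions are all hypothetical stand-ins chosen for illustration, not the lecture's example:

```python
import numpy as np

# Hypothetical system: x_dot = A x, y = C x (matrices made up for illustration)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])   # we only measure the first state

# Hypothetical observer gain; A - L C must be stable for the estimate to converge
L = np.array([[1.0],
              [1.0]])

dt, steps = 0.01, 1000
x = np.array([[1.0], [0.0]])     # true state (unknown to the observer)
xhat = np.array([[0.0], [0.0]])  # observer starts from a wrong guess

for _ in range(steps):
    y = C @ x                    # measurement coming out of the real system
    x = x + dt * (A @ x)         # true dynamics (Euler step)
    # predictor (A @ xhat) plus corrector (L @ (y - C @ xhat))
    xhat = xhat + dt * (A @ xhat + L @ (y - C @ xhat))

err = np.linalg.norm(x - xhat)   # should be small if the observer works
```

With these particular matrices, A - LC has eigenvalues with negative real parts, so the estimation error shrinks and x hat ends up tracking x.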

So the only question now... well, one question is, does it work? The other question is, what is this L? So the first thing we should ask is, how do I actually pick a reasonable L?

Well, the first thing we'll do is define an estimation error, e, as the actual state minus the estimated state. And I should point out that we don't know e, because we don't know x, but we can still write down e = x - x hat. Now, I would like e to go to 0, because if I can make e go to 0, then x hat goes to x, which means that x hat is a good estimate of x. So what I would like to do is actually stabilize e, make e asymptotically stable.

So what we need to do first is write down the dynamics of the error. Well, e dot is x dot - x hat dot, where x dot is just Ax, and x hat dot has the form A x hat + L(y - C x hat), with a minus sign in front of everything. Now, y = Cx, right? So what I actually have here is e dot = A(x - x hat) - LC(x - x hat). But x - x hat is e, so e dot = (A - LC)e.
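That last step is just algebra, and it can be spot-checked numerically. The matrices and states below are hypothetical, purely to confirm the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matrices, just to check the algebra of the error dynamics
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.0], [1.0]])

x = rng.standard_normal((2, 1))     # "true" state
xhat = rng.standard_normal((2, 1))  # estimate

y = C @ x
# e_dot computed directly as x_dot minus xhat_dot
e_dot_direct = A @ x - (A @ xhat + L @ (y - C @ xhat))
# e_dot from the closed-form error dynamics (A - L C) e
e_dot_closed = (A - L @ C) @ (x - xhat)
```

Both expressions agree for any x and x hat, which is exactly why the error dynamics don't depend on the unknown state itself, only on the error.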

Â 5:29

So how do I make e asymptotically stable? Actually, I don't need to wonder; we know how to do it: pole placement. We know how to do control design, and this looks just like control design, but it's actually observer design. Well, we want the eigenvalues of (A - LC) to have negative real parts. So, let's just pole-place away.

Okay, so here's an example: x dot equals this, y equals that. Fine. Now, I want my error dynamics to be asymptotically stable, so I write down A - LC. And I should point out that in this case C is 1x2, which means that L has to be 2x1 so that the dimensions work out and I get a 2x2 matrix back. So L is actually a 2x1 matrix in this case. If I write down what A - LC is, it becomes this semi-annoying matrix, but at least we know what this matrix is.

What do we do now? Well, we compute the characteristic equation of A - LC. That is, we compute the determinant of lambda*I - (A - LC), and if we do that, we get the following expression. Well, now we do what we always do in these situations: we pick our favorite eigenvalues, and it seems like I am very, very fond of lambda = -1. If I do that, I get this as the desired characteristic equation.

Well, what do we do now? We line up coefficients, of course: these coefficients have to be the same, and those coefficients have to be the same. And if you actually solve this (I'm not going to go through the algebra; I encourage you to do it on your own), you get L1 = -2/3 and L2 = 1/3.

And in fact, this is the way it would look: my observer gain has L1 = -2/3, which is there, and L2 = 1/3, which is there. So my observer dynamics is x hat dot = A x hat + L(y - C x hat). This is my observer dynamics.
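The coefficient matching above can be mechanized. One compact way is Ackermann's formula applied to the observer (the dual of the controller version). The pair (A, C) below is a hypothetical stand-in, not the lecture's actual example, but it uses the same desired poles, both at -1:

```python
import numpy as np

# Hypothetical stand-ins for the lecture's A and C (1 output, 2 states)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
# Observability matrix: rows C, CA
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Desired characteristic polynomial (lambda + 1)^2 = lambda^2 + 2*lambda + 1,
# i.e. both eigenvalues of A - L C placed at -1, evaluated at A
pA = np.linalg.matrix_power(A, 2) + 2 * A + np.eye(n)

# Ackermann's formula for the observer gain: L = p(A) @ inv(O) @ e_n
e_n = np.zeros((n, 1)); e_n[-1] = 1.0
L = pA @ np.linalg.solve(O, e_n)

eigs = np.linalg.eigvals(A - L @ C)  # should both sit at -1
```

For this hypothetical pair the resulting L differs from the lecture's (-2/3, 1/3), since the system matrices differ, but the recipe of picking a desired characteristic polynomial and solving for L is the same.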

What I'm showing here in the plot, in blue, is x1, the actual x1, and how it's evolving, and in red you see my x hat 1. And you see that after a while, they end up on top of each other very nicely. Similarly, in the right figure, in blue you have x2, and in red you have x hat 2. And as we can see, the estimated state x hat does indeed converge to the actual state.

So here is what's going on right now. I have x dot = Ax and y = Cx, and out of this system I can pull y, right? Because that's what I'm seeing; these are the measurements. What I'm doing now is feeding this y into my observer, which has a predictor part, which is the dynamics, plus a corrector part, which looks at the difference between the actual output and what the output would have been if x hat were my state. And then out of this comes x hat, which means that we have some way of figuring out what the state of the system is.

Now, the obvious questions are... well, there's only one question, actually: does this work? And the answer is no, it doesn't always work. Just like pole placement doesn't always work when you're doing control design, for the same reason pole placement doesn't always work when we do observer design. What we need is something that's related to controllability. Controllability tells us: do we have enough control authority, are our actuators good enough? Well, for observer design, the corresponding concept is known as observability, which asks: do I have a rich enough y, meaning a rich enough sensor suite, that I'm able to figure out what the system is doing, meaning estimate x from y? And the topic of the next lecture is exactly this observability: when can we indeed figure out x from y?
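As a small preview, the standard rank test for observability stacks C, CA, ..., CA^(n-1) and checks whether the result has full rank. A minimal sketch, with a hypothetical pair where only the first state is measured:

```python
import numpy as np

# Hypothetical pair: we only measure the first state
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
# Observability matrix: rows C, CA, ..., CA^(n-1)
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Full rank means we can figure out x from y
observable = np.linalg.matrix_rank(O) == n
```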
