0:06

The QUEST method.

The biggest difference with QUEST is speed, as we just mentioned earlier.

It's solving the same cost function.

So really, if you fully implement QUEST,

there should be no numerical difference in the answers.

I should get exactly the same accuracy that I saw earlier.

But the effort to get to that answer is going to be much less.

So it's all about avoiding an eigenvalue/eigenvector problem, and

doing other stuff instead.

That's what Malcolm Shuster came up with

in the QUEST algorithm many, many years ago.

But it's a very famous algorithm so it's flown a lot.

So here's this cost function, right.

We had J, the sum of the weights minus this.

This second part was basically the g function.

And with Davenport's q method,

we said the minimization of j is replaced with the maximization of g.

And Davenport proved that these g's are all the eigenvalues of this k matrix and

picked the biggest one to maximize it, great.

So it was the optimal eigenvalue, which was the largest eigenvalue in our case.

That was the answer.

So now let's rewrite this.

This J function is really the sum of the weights

minus the second part, which is the g function.

And we know the g function should be just the optimal eigenvalue.

So the cost function is the sum of the weights minus an optimal eigenvalue.

1:25

Let's solve for that optimal eigenvalue.

That was Malcolm's idea, which was quite brilliant.

Because he inverts this:

okay, the J comes over here, the optimal eigenvalue comes over there.

So the optimal eigenvalue will simply be the sum of the weights minus J.
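That relation, lambda-opt equals the sum of the weights minus J, is easy to check numerically. Here is a small hedged sketch, not the lecture's actual example: the 30-degree true attitude, the two heading vectors, and the noise level are all made-up illustration values, with the K matrix built from the B, sigma, S, Z quantities as in Davenport's q method.

```python
import numpy as np

np.random.seed(1)

# A made-up true attitude [BN]: 30 degrees about the third axis
th = np.radians(30.0)
BN = np.array([[np.cos(th), np.sin(th), 0.0],
               [-np.sin(th), np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])

vN = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 0.0, 1.0])]   # known inertial directions
w = np.array([1.0, 1.0])           # unit weights, so sum(w) = 2

# body-frame measurements with a small, fraction-of-a-degree style error
vB = []
for v in vN:
    m = BN @ v + 1e-3 * np.random.randn(3)
    vB.append(m / np.linalg.norm(m))

# Davenport's K matrix from B, sigma, S, Z
B = sum(wi * np.outer(b, n) for wi, b, n in zip(w, vB, vN))
S = B + B.T
sigma = np.trace(B)
Z = sum(wi * np.cross(b, n) for wi, b, n in zip(w, vB, vN))
K = np.block([[np.array([[sigma]]), Z[None, :]],
              [Z[:, None], S - sigma * np.eye(3)]])

lam_opt = np.linalg.eigvalsh(K)[-1]   # largest eigenvalue of K
print(w.sum(), lam_opt)               # lam_opt sits just below sum(w) = 2
```

With good measurements, J is tiny, so the largest eigenvalue lands just below the sum of the weights, exactly the observation the lecture makes next.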

1:44

If you have reasonably good measurements, we're not having sun

heading errors that are 80 degrees or something; usually you have, hopefully,

a fraction of a degree, and the magnetic field might be off by a degree or two.

Those are all pretty small errors compared to the 4π steradians

of attitudes that we have to consider.

So J ends up being typically pretty darn small.

2:15

So this is kind of funny, okay.

If we look at this example and you find the cost, look at this lambda value.

1.99967, that was the optimal eigenvalue.

What was the sum of the weights?

1 + 1 = 2. It's like, whoa, this is really, really close, right?

So here's Malcolm's idea.

2:40

To find the optimal eigenvalue,

yes, I can make the K matrix and solve a complete eigenvalue/eigenvector problem,

which you can do, but it's a lot of math, it's a lot of CPU requirements.

Yes?

>> Maybe I'm just confused with the math,

but I thought g would be in the sum.

The K matrix has v's, if we could go back a slide.

3:06

>> Okay what's the question?

>> Shouldn't g have a subscript on it, in the very bottom equation?

>> This G has the summation already here.

>> It's the sum of them.

>> Right, so it's the sum of the weights minus g.

And we've shown the g is the optimal eigenvalue, the largest eigenvalue.

So that's that, so what is this eigenvalue?

If I don't have to solve an eigenvalue, eigenvector problem,

Malcolm just says: well, look, as a first guess, and I'll show you how good

this guess is, some people just take the first guess and call it good enough.

You can say it's just the sum of the weights.

If you only have two measurements it's pretty darn close.

And there will be an iterative way now to find the optimal one.

Instead of solving an eigenvalue eigenvector problem we actually have

to solve a root solving problem.

But we have a guess at where that root has to be that's pretty darn good.

Your guess was 1.999-something and it's supposed to be 2.

Man, you're going to be all over it, right?

So you can get there to the same accuracy,

there's no accuracy lost doing this if you do these iterations.

If you have multiple values, like in your homework, it's more sensitive.

If you just take that first guess, it's not that good.

You have to get it really down to machine precision.

So those iterations I'm about to show you are really important if you have four

measurements, as you do in this one homework.

4:24

So what is this iteration?

Well, to find an eigenvalue, you go back and you have to look at that, right.

You did this once by hand.

That was your warm up for this stuff.

You take that matrix minus a scalar times the identity of the same dimension.

And the determinant of that matrix has to be zero.

So for the three by three, you end up with a third order polynomial.

And you had to find the roots of that, which you could use MATLAB for at the time.

But that's essentially what you have to do.

Here, I end up with a fourth order polynomial,

because it's a four by four matrix.

And I need to find what the four roots of that are.

Fortunately, I have a really good guess of where the root is that I care about.

I will have four zero crossings, because there's four roots to this K matrix.

But I need the one for the largest eigenvalue.

And I have a really good guess it's going to be somewhere around two if my

weights are one and one.

So you basically start your guess at that root as simply the sum of the weights.

This is my f function for which I have to find roots.

This is the fourth order polynomial.

And then I'm using here a Newton-Raphson method which,

I'm assuming, people have all seen.

Otherwise, go look it up.

How to do root solving.

There's different secant methods, there's Newton-Raphson.

There's a bunch of different methods.

This one finds the local slope and says: okay,

you know that's not zero, but zero seems to be more to your left or right.

And it projects you down, and you re-solve it again; that's the math for that.

So I do this math, and evaluate how close I am to zero.

I get the gradient, and the derivative of a fourth-order polynomial is pretty trivial.

It gives you a cubic polynomial.

And then I get an updated version and then you repeat this process.

If this is not zero, you do it again and again.

And this will iteratively get you to where you have to be.

This is the step that completely replaces the eigenvalue eigenvector evaluation and

it's way faster.
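The iteration just described can be sketched in a few lines. This is a hedged illustration, not flight code: the two-measurement setup is made up, and NumPy's `np.poly`, `np.polyval`, and `np.polyder` stand in for a hand-coded characteristic polynomial. The first guess is the sum of the weights, and Newton-Raphson walks it down to the largest root.

```python
import numpy as np

np.random.seed(1)

# Made-up two-measurement setup: true attitude, headings, weights, small noise
th = np.radians(30.0)
BN = np.array([[np.cos(th), np.sin(th), 0.0],
               [-np.sin(th), np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
vN = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
w = np.array([1.0, 1.0])
vB = []
for v in vN:
    m = BN @ v + 1e-3 * np.random.randn(3)
    vB.append(m / np.linalg.norm(m))

# Davenport's K matrix
B = sum(wi * np.outer(b, n) for wi, b, n in zip(w, vB, vN))
S = B + B.T
sigma = np.trace(B)
Z = sum(wi * np.cross(b, n) for wi, b, n in zip(w, vB, vN))
K = np.block([[np.array([[sigma]]), Z[None, :]],
              [Z[:, None], S - sigma * np.eye(3)]])

# f(s) = det(s I - K) is the 4th-order characteristic polynomial;
# np.poly gives its coefficients, np.polyder the cubic derivative
coeffs = np.poly(K)
dcoeffs = np.polyder(coeffs)

s = w.sum()                          # first guess: the sum of the weights
for _ in range(10):                  # Newton-Raphson: s <- s - f(s)/f'(s)
    s -= np.polyval(coeffs, s) / np.polyval(dcoeffs, s)

lam_eig = np.linalg.eigvalsh(K)[-1]  # full eigen-solve, for comparison only
print(s, lam_eig)
```

Because the guess starts to the right of the largest root and the polynomial has only real roots (K is symmetric), the iteration converges monotonically to the largest eigenvalue, matching the full eigen-solve to machine precision without ever forming eigenvectors.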

So, let's wrap this up then.

So, now how does this work?

The QUEST method, though, doesn't actually find the quaternion yet.

6:18

What we have to do now: I know the eigenvalue, and

K times beta-bar is equal to the optimal eigenvalue times beta-bar.

Great.

That's still a four by four.

And, we have to actually find an analytic answer now,

on how to come up with the proper attitude measure.

And he uses, it's a q here, but in our notation,

q-bar is the classical Rodrigues parameters, right?

That was e-hat times the tangent of phi over 2.

So it's basically the vectorial part of the quaternion, divided by beta naught, or

e vector divided by beta naught.

So when we had earlier beta naught, beta one, two, three, and

divide everything by beta naught, the first element just becomes one and

then the rest of them become ratios of beta one over beta naught,

beta two over beta naught, which are nothing but your CRPs.

Those are the classical Rodrigues parameters we just learned about last week.

7:07

So that's what Malcolm does.

He rewrites this into this form.

That's your K matrix times beta over beta naught, at the optimal lambda,

which we now have found to numerical precision, times this.

The first equation we can somewhat ignore.

It's the second one that we use, though: it's Z times one,

plus this stuff times q-bar, equal to lambda optimal times q-bar.

That gives you this last equation.

It's a simple equation in that you can group the q bars together.

Bring them all to one side.

And then invert the stuff that pre-multiplies q bar and

that's the answer.
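Grouping the q-bars and inverting gives q-bar = [(lambda-opt + sigma) I - S]^(-1) Z, and rescaling by 1/sqrt(1 + |q-bar|^2) recovers the quaternion. A hedged sketch, reusing the same made-up two-measurement setup; lambda here comes from an eigen-solve purely as a stand-in for the Newton-Raphson result, and the eigenvector route is computed only as a cross-check.

```python
import numpy as np

np.random.seed(1)

# Same made-up setup as before
th = np.radians(30.0)
BN = np.array([[np.cos(th), np.sin(th), 0.0],
               [-np.sin(th), np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
vN = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
w = np.array([1.0, 1.0])
vB = []
for v in vN:
    m = BN @ v + 1e-3 * np.random.randn(3)
    vB.append(m / np.linalg.norm(m))

B = sum(wi * np.outer(b, n) for wi, b, n in zip(w, vB, vN))
S = B + B.T
sigma = np.trace(B)
Z = sum(wi * np.cross(b, n) for wi, b, n in zip(w, vB, vN))
K = np.block([[np.array([[sigma]]), Z[None, :]],
              [Z[:, None], S - sigma * np.eye(3)]])

lam = np.linalg.eigvalsh(K)[-1]   # stand-in for the Newton-Raphson lambda

# QUEST's analytic step: CRPs from a 3x3 solve, then rescale to a quaternion
q_bar = np.linalg.solve((lam + sigma) * np.eye(3) - S, Z)
beta = np.concatenate(([1.0], q_bar)) / np.sqrt(1.0 + q_bar @ q_bar)

# cross-check against the eigenvector of K (fixing the +/- sign ambiguity)
b_eig = np.linalg.eigh(K)[1][:, -1]
b_eig = b_eig * np.sign(b_eig[0])
print(beta, b_eig)
```

The point of the design: a 3x3 linear solve replaces the 4x4 eigenvector computation entirely, and the quaternion falls out by normalization.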

7:54

So, CK, if you hear CRPs, what issue comes to mind right away?

>> They're close to singularity.

>> You could be close to a singularity.

What if your attitude is upside down?

Then you might have an issue.

Now, Malcolm can fix that.

If this is my body frame on this spacecraft,

let's say I'm lining it up with the pen, right?

That's great.

And the pen instead of pointing up,

that would be zero attitude, happens to be pointing down.

This algorithm will have issues,

because the coordinates blow up to infinity, right?

But the body frame you picked is one of an infinity of

body frames you could have picked.

Instead of picking this one, let's say we pick one where you've rotated 90 degrees.

Now, when this is upside down, that frame is upside down plus 90 degrees.

8:37

So when one frame is going singular, since you have a second alternate body frame,

the other one is guaranteed not to go singular.

This is a similar mentality to the MRP switching that we do.

So that's what is done with QUEST.

You have this primary body frame, and

you'll always see that you can use sequential rotations.

You just have an alternate body frame that is not the same as the first.

If one is singular, at 180, the other one's guaranteed not to be at 180.

And you can estimate either attitude, and

then you pull out the 90 degree rotation again at the end.

And you do it once you've mapped to something nonsingular, like quaternions.
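The sequential-rotation bookkeeping can be sketched in a few lines. This is a hedged illustration only: the estimator itself is stood in for by the true attitude, and the 180-degree case and 90-degree switch rotation are made-up values; only the frame composition and its removal at the end are shown.

```python
import numpy as np

# A 180-degree true attitude: singular for CRPs in the primary body frame B
BN = np.diag([-1.0, -1.0, 1.0])

# 90-degree rotation taking B to an alternate body frame B'
R90 = np.array([[0.0, 1.0, 0.0],
                [-1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0]])

BpN = R90 @ BN   # attitude of B': its principal angle is no longer 180 deg
ang = np.degrees(np.arccos(0.5 * (np.trace(BpN) - 1.0)))
print(ang)       # 90 degrees: safely away from the CRP singularity

# ...estimate [B'N] from measurements expressed in B' (here we use truth)...
BN_rec = R90.T @ BpN   # pull the 90-degree rotation back out at the end
print(np.allclose(BN_rec, BN))  # True
```

The recovered attitude is exact because the switch rotation is known; the only requirement is that the alternate frame's attitude stays away from 180 degrees, which is guaranteed when the primary one is at it.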

So there's that hidden step in here.

In the homework you don't have to do that; there will just be nonsingular attitudes.

But if you implemented this, be aware of that.