0:15

And specifically, in this discussion,

we're going to be doing it with instantaneous measurements.

And from that we have to resolve the attitude.

I'm not taking one rate gyro measurement here, and

then one star tracker measurement there.

To blend those, you have to go to filtering techniques like Kalman

filters or unscented filters.

Or there's least squares filters or other kinds of filters out there, right?

And for those of you who've taken estimation or orbit determination,

you've realized that's at least a class worth of stuff just to get into

covariances, and uncertainties, and all this stuff.

We're not dealing with that in this class.

So this class is all dealing with instantaneous stuff.

I have an observation.

The sun is here and magnetic field is doing this.

So the moon is there, and my local horizon is this.

I have different observations, and we'll define what we mean by that.

And at this instant, how do I resolve what my orientation must be?

And how do I deal with, in particular, undersensed situations and

oversensed situations?

How do you make it optimal?

That's Wahba's problem.

We'll be going through her formulation on how to describe the mathematics of

how good these orientations are.

And we'll be using a lot of the kinematics you've just seen.

Especially how to do different orientations, and how good the measure is.

We'll be using the principal rotation angle.

It's a convenient thing that just says, yep, we're good to within one degree.
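As a quick sketch of that single-angle error measure: assuming the attitude error is expressed as a direction cosine matrix, the principal rotation angle falls out of the trace. A minimal NumPy sketch (the function name is mine):

```python
import numpy as np

def principal_rotation_angle(C):
    """Principal rotation angle (radians) of a direction cosine matrix C.

    Uses cos(phi) = (trace(C) - 1) / 2; the clip guards against
    round-off pushing the argument just outside [-1, 1]."""
    return np.arccos(np.clip((np.trace(C) - 1.0) / 2.0, -1.0, 1.0))

# Example: a single-axis rotation of 0.5 rad about the third axis.
c, s = np.cos(0.5), np.sin(0.5)
C = np.array([[c, s, 0.0],
              [-s, c, 0.0],
              [0.0, 0.0, 1.0]])
print(principal_rotation_angle(C))  # ~0.5 rad
```

Applied to the error DCM between a true and an estimated attitude, this collapses the whole orientation error into one convenient angle.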

But that's kind of the high level.

We'll cover it today, we'll wrap this up probably on Tuesday.

And your next assignments actually include some of these methods as well.

1:42

Attitude Determination, so static attitude determination,

that's what we're doing in class.

Basically, it's all instantaneous.

Somehow you're getting complete 3D headings, or you're getting just partials.

But we're never getting just rates.

Forget rate gyro sensed rates, star tracker rates.

Then you're really looking at an integrated system, a filtering system,

that's a whole other class, right?

And that's an important topic, just not something that we are covering in here.

Because not everybody here has had estimation theory.

So, that's a whole other thing.

But what I am giving you here, this becomes the input to a Kalman filter.

At some point, you go, hey,

I have a new observation, if you have dealt with a Kalman filter before.

This is how you get to that new observation.

And there are some classic formulas that you should be aware of.

So the dynamics stuff, Kalman filters, rate-based, those kinds of things.

That's a whole other thing.

2:34

Basic concept, how many direction vectors do we need in 2D?

So that's basically planar motion, fixed axis rotation,

that's what I'm talking about here.

In this room, if I'm just blindfolded, somebody spins me like crazy,

okay, and how many chunks of information do I need?

Do I need to know where the whiteboards are, the projection boards,

and the actual door to where you guys are sitting?

How many chunks would actually tell me uniquely how I'm oriented?

If you're doing fixed axis rotation.

Evan, what do you think?

>> I want to say two.

>> Okay, give me an example.

>> An angle for you to turn to face that object.

And then a magnitude to go to it.

>> But the angle is a magnitude.

But to think simpler, if somebody spins you, all right.

3:53

>> Yes, but we're looking at fixed axis rotation.

So if I know my orientation, no, that's a good point.

You could be upside down, it still does it.

Actually, there are multiple then, if I'm allowing other orientations.

But let's say I know I'm standing upright.

That means something.

Yes, sir.

>> You know your position.

>> You know your position, actually, because that's a really important detail, isn't it?

I'm standing here and saying the desk is to my right, clearly I'm facing you guys, but

what if I'm standing up here and I say the desk is to my right?

Well, all of a sudden I have a different orientation.

4:55

One chunk of information, that's all I need. If somebody tells me that whiteboard

is right straight ahead, I must be pointing this way in the room, right?

Also requires knowledge of what this room looks like.

If I take David, yes.

>> Daniel.

>> Daniel, man.

As soon as I think about it, I'm always getting them off.

Daniel, so if I take Daniel, blindfold you, take you to an arbitrary room at CU,

spin you around and say, hey, you're facing the whiteboard,

how are you pointing?

Any chance of knowing how you're going to do that?

>> Yeah, coordinate thing.

>> Yeah, you need to know your environment. So besides knowing where you are in that

environment, you need to know, hey, this is the solar system.

This is how the sun is lined up with us, right?

That's where those stars are.

If I know I'm pointing at Polaris, great, but which galaxy am I in, basically?

That's what you'd have to know, and hopefully we know where we are.

But you never know.

There are all kinds of movies with wormholes;

who knows where we'll be going ten years from now.

So, we need to know the environment, and we need to know where we are.

There are a lot of key assumptions that go into this estimation theory.

And what we're going to break down is: if somebody tells you something like, hey,

something is to my right, it's straight ahead, it's up and to the left,

how do we break this down mathematically and compose a full attitude measure?

Yes? >> So

we're just concerned with orientation and not so much location.

>> Yes, we're assuming we know location already.

Yes, absolutely.

6:24

So for us, it's basically like a compass.

That's just one piece of information: as soon as I'm telling you you're facing north,

that's one chunk of information.

That assumes you know you're on the Earth,

and you know where on the Earth you are.

You know which way north points,

which means you know your local environment.

I now know how I am pointed, but it assumes a lot of knowns.

So it kind of, as you do estimations and locations, keep that in mind,

that's always implicit.

If we do three dimensional motions and

this is kind of where Kaley's question comes in.

7:30

It's still to my right, right?

Or I can't go upside down.

I can't do a handstand, not without looking really silly.

Right, there's actually a whole infinity of ways that you can rotate about this

axis and that is always to your right, because we're looking at a single axis.

I'm not doing recognition, going, I am right side up relative to that port or something;

we don't have that, it's just a single heading.

These are all heading information, and a heading is fundamentally a unit direction vector.

So while you may get three coordinates for that, because of the norm constraint,

it's really two chunks of information.

So one heading information that says,

hey, this relative to the body, it's that other object, is in this direction.

Or the magnetic field, or whatever you're measuring, is in this direction.

That gives you an axis.

That's what a heading is, it's not full 3D measure.

9:02

How many coordinates do you need, at least, to define your attitude?

Three.

So, one heading vector gives me two chunks of information.

Two degrees of freedom.

Think of it as azimuth elevation if you use spherical coordinates.

That's one way to think of mathematics, right?

That gives you two.

So if I add a second heading, I go from two to four.

And that's kind of the crux of attitude estimation.

Either I have too little to do it in 3D, or immediately I have too much.

And if I have too much, well do I throw something away?

Do I blend it and if I blend it, how do I blend it?

So what you get in attitude are always unit direction vectors, and

it comes in chunks of two essentially, as a unit vector.

And that's a problem.
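That counting can be made concrete in code: two angles, azimuth and elevation in one possible spherical convention, generate a heading, and the unit-norm constraint is exactly what makes three coordinates carry only two chunks of information. A minimal NumPy sketch (names are my own):

```python
import numpy as np

def heading_from_az_el(az, el):
    """Unit heading vector from azimuth and elevation (radians).

    Two numbers in, three coordinates out -- but the norm constraint
    means only two independent chunks of information."""
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

v = heading_from_az_el(0.7, -0.2)
print(np.linalg.norm(v))  # always unit length, up to round-off
```

One heading gives two degrees of freedom; a full attitude needs three; two headings give four, hence the under/oversensed dilemma.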

10:17

So I'm going to now use s and m, just as defaults; they are nice,

they are very convenient, small sats use them a lot.

s would be a coarse sun sensor that will tell you roughly where the sun is for

this body, and m would be a magnetic field sensor

that will tell you what the magnetic field is on this body.

It assumes that you actually know where the sun is, so

I don't need to know actually where we are on Earth orbit.

I just need to know what Julian date we have, so where the Earth is around the sun.

That's precise enough, you can then figure out.

Okay, right now if I have that orientation,

this is where that inertial vector would be.

So we could know that.

For the magnetic field, if you want to reconstruct it,

you need to know what the magnetic field is as seen by the inertial frame.

Fundamentally, I'm trying to estimate what is body orientation

relative to inertial frame.

So for now let's just use the ECI frame, earth centered inertial frame.

It's a non rotating frame centered at the middle of the earth, that's it.

So the magnetic field, we need to know at this instant how far has the earth

rotated, what's the magnetic field doing?

And as you imagine, a lot of uncertainty with that as well.

But you need that information.

Because otherwise, this room could be morphing, and

I have no idea if the door is now over here or over there.

I need to know what the environment is.

The same thing with the sun.

So you take your measurements in the body frame.

So I get these quantities.

But as seen by the body, the sun is 0.1, 0.2, point something in that direction, and

the magnetic field measured is a different vector, right?

I have to know where they are in the environment as seen by the inertial frame,

and this will allow me now to come up with my estimated attitude.

So I'm using a b bar here to kind of denote that.

B is typically the true body frame.

And the estimated frame won't be, unless you are very lucky,

exactly the same as the true body frame, right?

So, this is what we want to estimate.

What's the rotation matrix that will map these known quantities?

I know where I am,

here over the pole this is what the magnetic field should be doing.

That's the vector in N frame components, right?

If I knew my attitude, I could map this into body frame components.

And those mapped coordinates and the measured coordinates better be the same,

otherwise you don't have the right DCM.
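That consistency requirement, the mapped known vector must match the measured one, can be written as a tiny check. A sketch in NumPy (the function name and tolerance are my own choices):

```python
import numpy as np

def is_consistent(BN, v_N, v_B, tol=1e-6):
    """True if the candidate DCM BN maps the known inertial-frame vector
    v_N onto the measured body-frame vector v_B, within tolerance."""
    return bool(np.linalg.norm(BN @ v_N - v_B) < tol)

# With the identity attitude, body and inertial components agree.
v = np.array([1.0, 0.0, 0.0])
print(is_consistent(np.eye(3), v, v))  # True
```

A candidate attitude that fails this check for any measured heading cannot be the right DCM.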

I mean, with one of them you can't do that; with two of them, there are ways to do it.

And we'll look at different ways this can happen.

But that's essentially the thing.

How do we find this attitude matrix going from n to b?

The bn matrix, that's what we're after.

How do we reconstruct this?

Yes, sir.

>> Say that B bar, what was that again?

>> This is the estimated attitude, right.

Ideally, there would be only one and this is it and then we're set but.

>> [COUGH] >> In real life,

you have measurement errors.

We have all kinds of sources of errors.

We'll go through those in a moment as well.

13:06

Actually let's just do this now.

What sources of errors do we have?

Let me try to estimate this.

Well the easy ones, I'll start off there, the measurements themselves.

You might have electrical noise.

You might have digital noise, like you only have an 8-bit converter,

maybe a 12-bit converter; those are truncation errors you have to deal with, right?

So the measurement clearly will have noise and issues, always.

It's never perfect.

Where else do we have noise in here or issues?

>> The magnetic field is dynamic.

>> Yeah, knowing this field.

>> This part.

14:05

>> The Sun is pretty good.

We know where, but, let me see.

If we draw this out, this is not to scale.

There is your Sun, right, we're way out, here's earth,

here's your satellite and it's orbiting.

This distance is minuscule compared to the roughly 149 million kilometers

of the sun-earth distance, all right?

But still, you could account for all of that.

But, how precisely do we know where the earth is around the sun?

Well, we know it actually really well.

But not to infinite precision.

And then if you want to account for

the satellite's motion, how good is your orbit determination?

If you want to account for those small errors, that you're actually not just moving

around the sun with the earth, but you're kind of wiggling around it,

if you want to account for that, how good is your orbit determination?

It immediately comes in.

So these are all error sources that you'd have to put into a complete analysis and

account for.

For now, I'm just saying we assume we know it, we recognize as error sources,

but in this mathematical step, there's nothing we can do about it yet.

So this is fundamentally it.

We measure this, and we need to know our environment.

So the unit vectors we're measuring, we have to know in our environment,

in inertial components; then there's mathematics to come up with an attitude measure.

Sometimes it'll be a DCM, sometimes quaternions, sometimes also CRPs.

The different methods have different mathematics on how to reconstruct this.

Yes.

>> So, since you have measured and given for both s hat and m hat.

Don't you only need one of them to get your?

>> No, you still need two, because if I only have one, really geometrically

it means I know you're straight ahead of me, but in the space station,

I could have an infinity of attitudes and have you straight ahead of me.

The math here, you can't just take this and

invert it somehow and reconstruct the full three-by-three matrix.

>> Right, so the measured and the given,

you need two for both of those because then those give you your two frames.

>> Yeah, so from this I need to know that, and

I need to know where each one is given in the environment.

If I didn't know where you are in this room and something just tells me,

hey, you're right ahead of me, I can't tell where I'm going to go, right.

So for every measurement, you need to know this part as well, right?

So it comes back to the basic stuff we were actually very familiar

with from everyday life; we just have to expand it to 3D motion in space.
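One classic way to reconstruct the BN matrix from exactly two headings, measured in body components and known in inertial components, is a TRIAD-style construction. A sketch in NumPy, assuming perfect, non-parallel measurements; variable names are mine:

```python
import numpy as np

def triad(s_B, m_B, s_N, m_N):
    """Estimate the BN direction cosine matrix from two heading vectors.

    s_B, m_B: headings measured in body-frame components.
    s_N, m_N: the same headings known in inertial-frame components.
    The first vector (s) is matched exactly, so the more trusted
    sensor goes first."""
    def frame(v1, v2):
        # Orthonormal triad anchored on v1, as matrix columns.
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])

    # Same physical triad in both frames: [BT] = [BN][NT], so
    # [BN] = [BT][NT]^T since [NT] is orthonormal.
    return frame(s_B, m_B) @ frame(s_N, m_N).T
```

With error-free measurements this reproduces the true BN exactly; with noisy measurements the residual error is pushed into the second heading, which is why the sun heading typically goes first ahead of the magnetic field.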