0:00

So, congratulations, all that hard work paid off. Or at least, congratulations if you managed to hang in there this far, and I'm claiming that all that hard work paid off. My job now, in this module, is to show that it indeed did pay off, because we're going to unleash our newfound powers on mobile robots. This entire module is dedicated to what's known as the navigation problem, which means: how do you make a robot drive around in a world populated by obstacles without slamming into things, while getting safely to landmarks or goal positions? So, in the first lecture, we're going to return to this idea of behaviors, and we're going to use control theory, now, to describe what's actually going on.

And I don't know if you remember, but we actually talked about behaviors before. These were the atomic, primitive things that the robot should be doing, and then, by connecting them all together, we get the overall navigation system. Now we know that "behavior" is really just code for a subsystem or a controller, and connecting them up together is code for a hybrid system. So this is really what we need to do: we need to revisit behaviors in the context of control theory.

So first we need a model, and in fact it almost always pays off to start simple, so we're going to start with our old friend, the point. The position of the robot is x, so x is in R2, meaning it's a point in the plane. And I'm saying that I can directly control the velocity of this robot. Now, the Khepera robots, as we've seen, are differential-drive robots; you can't really do this. Instead, you have to control translational and rotational velocities. So we can think of this model really as being for the purpose of planning how we want the robot to go, and then we have to couple this to the actual dynamics.

But, to start with, let's just say that ẋ = u. First of all, what does that look like in the ẋ = Ax + Bu paradigm? Well, A is equal to zero, so my A matrix is simply the zero matrix, and my B matrix is simply the identity matrix. Before we do anything else, we need to see whether or not we can actually control this system, so we form the controllability matrix Γ = [B, AB]. Well, A is zero, so the AB term is zero. B is the identity matrix, so that term is the identity matrix. The identity matrix is as full rank as any matrix anywhere comes, so clearly the rank of Γ is equal to 2, which, by the way, is the dimension of the system. So we have a completely controllable system: we should be able to make the system do what we would like it to do.
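As a quick sanity check, here is a small sketch in Python with NumPy (my own, not part of the lecture) of exactly this controllability test:

```python
import numpy as np

# Point model: x_dot = A x + B u with A = 0 (2x2 zero matrix) and B = I.
A = np.zeros((2, 2))
B = np.eye(2)

# Controllability matrix Gamma = [B, AB]; here AB = 0, so Gamma = [I, 0].
Gamma = np.hstack((B, A @ B))

# Full rank (rank 2 = dimension of the system) means completely controllable.
print(np.linalg.matrix_rank(Gamma))
```

The rank comes out as 2, matching the argument above: the identity block alone already gives full rank.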

So we're going to start with what I call the dynamic duo. These are the key behaviors that you always need. No matter what your robot is going to do, you always need to be able to go to a goal location, or a landmark, or a waypoint; you always need to be able to get somewhere, and you need to be able to do it without slamming into things. Without either one of those two, your robot just ain't going to be able to do what you want it to do. So our job now is to design these two behaviors using what we've already learned, and we're going to do it rather simply.

We're actually going to say: you know what, if my robot is here, and I want to go in this direction, why don't I simply set that direction equal to my u, because that's equal to ẋ? That tells me the direction in which the robot is actually going to move. Or, trading my handwriting for some pretty graphics: this is what we're going to do. We're going to figure out the direction in which we want to move, and then set u equal to that desired direction.

Okay, let's start with go-to-goal. This is where the robot is, and let's say the goal is located at x_g. Well, I want to go to the goal, so it's really clear where I would like to go: in this direction. x_g − x is this vector, and I'm going to call it e. So why don't I just pick u = e, or u = ke for some constant k times e? Well, let's see what ė actually becomes in this case. ė is ẋ_g, which is 0, minus ẋ. And ẋ, well, that's equal to u, which is equal to ke, so ė becomes −ke. Well, that's kind of good. So if ė = −ke, does this work? Does it drive the error down to zero? Well, we know we have to check the eigenvalues. If k is just a scalar, then as long as this scalar is positive, we're fine; so if k is a scalar and positive, we know that the system is asymptotically stable. If we want, for some reason, a matrix K, we just have to pick a matrix K that has positive eigenvalues. For instance, it could be a diagonal matrix with, say, 10 and 1,000 on the diagonal; seems silly, but why not. That is a positive definite matrix, meaning the eigenvalues are all positive; and since I have a minus sign here, I really need to worry about −K, whose eigenvalues are then all negative. So whether you go with a constant scalar or a matrix K, if you have this, you will indeed drive the error to 0, which means that we have solved the go-to-goal problem.
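To see this decay concretely, here is a minimal simulation sketch (mine, not from the lecture; the values of k and dt are assumptions) of ẋ = u with u = k(x_g − x), stepped with forward Euler:

```python
import numpy as np

# Go-to-goal for the point model x_dot = u, with u = k * (x_g - x).
# k and dt are assumed values for illustration, not from the lecture.
def go_to_goal_step(x, x_g, k=1.0, dt=0.01):
    e = x_g - x          # the error vector e = x_g - x
    u = k * e            # control law: u = k * e
    return x + dt * u    # forward-Euler step of x_dot = u

x = np.array([0.0, 0.0])     # robot start position
x_g = np.array([2.0, 1.0])   # goal position
for _ in range(2000):        # simulate 20 seconds
    x = go_to_goal_step(x, x_g)

print(np.linalg.norm(x_g - x))  # the error norm has decayed essentially to zero
```

Since ė = −ke with k > 0, the error shrinks exponentially, and the simulation confirms it.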

There is one concern, though. A linear controller means that you have a bigger control vector the further away you are, which means that you're going to go faster towards the goal the further away you are. That doesn't, to be honest, make complete sense. So what we should do in practice is moderate this: make the gain smaller when we're far away, or make the speed constant somehow, because we don't want to go faster when we're far away. That doesn't quite make sense. And you can play around with this; as long as k is positive, we're actually fine.

And what we're going to implement on the robot is this choice of k. It's a k that makes the norm of u approach some v0; here is v0 when you're fairly far away, so the robot does not go faster the further away it is, and then when you get closer to the goal, meaning when the error goes down, you start slowing down. In fact, if you try to be a little creative in how you pick your k, this k here is the k that corresponds to this plot. That's the k we're going to be looking at, but you don't have to do that; in fact, a lot of robotics involves clever parameter tuning, and tuning of these weights.
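One way to realize a capped speed like this is a distance-dependent gain that saturates the speed at v0 far from the goal and falls off smoothly near it. This is a sketch under my own assumptions; the exact formula and the values of v0 and a are not given in the transcript:

```python
import numpy as np

def capped_gain(e, v0=0.5, a=2.0):
    """Gain k(e) such that the speed ||u|| = ||k(e) * e|| saturates at v0.

    v0 (cruise speed) and a (how sharply we slow near the goal) are
    assumed tuning values, not numbers given in the lecture.
    """
    d = np.linalg.norm(e)
    if d == 0.0:
        return 0.0           # at the goal: stop
    return v0 * (1.0 - np.exp(-a * d * d)) / d

e_far = np.array([10.0, 0.0])   # far from the goal
e_near = np.array([0.1, 0.0])   # close to the goal
print(np.linalg.norm(capped_gain(e_far) * e_far))    # saturates near v0
print(np.linalg.norm(capped_gain(e_near) * e_near))  # small: slowing down
```

The key property is the shape, not the particular formula: constant speed far away, graceful slowdown as the error shrinks.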

But the whole point I want to make here is that you want to make sure that you don't go faster when you're further away, because that actually doesn't entirely make sense. Okay, we now know how to go to goal. Let's avoid obstacles.

Well, if I wanted to go towards the obstacle, I would simply pick u = x_o − x, or some scaled version of that. Now I want to avoid the obstacle, so why don't I just flip it? That's now x − x_o instead, and flipping it means I'm just going to move away from the obstacle. And in fact, that's what we're going to do. Let's just pick u = ke, where k is a positive constant and e, now, is x − x_o. Well, if I do that, I get ė = ke, which is actually an unstable system. And it's unstable in the sense that the error is not stabilized, because the error is the distance to the obstacle, and instead of driving it to zero, we're avoiding the obstacle.
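Here is a quick numerical sketch (mine, not from the lecture; k and dt are assumed values) showing that this law really is unstable in the useful sense: the distance to the obstacle keeps growing.

```python
import numpy as np

# Naive avoid-obstacle law u = k * (x - x_o), so e = x - x_o and e_dot = k e.
# k and dt are assumed values for illustration.
def avoid_step(x, x_o, k=1.0, dt=0.01):
    return x + dt * k * (x - x_o)   # forward-Euler step of x_dot = u

x_o = np.array([0.0, 0.0])          # obstacle location
x = np.array([0.5, 0.0])            # start near the obstacle
d0 = np.linalg.norm(x - x_o)
for _ in range(500):                # simulate 5 seconds
    x = avoid_step(x, x_o)

print(np.linalg.norm(x - x_o))      # far larger than d0: the error grows
```

The "error" grows exponentially, which is exactly the point: we are fleeing the obstacle, not converging to it.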

Now, it's a little scary to have, on purpose, an unstable system in there, but as you will see, we don't worry too much about it, because we make sure that the robot does not actually drive off to infinity, which it would if this behavior were left unchecked. The other thing that's a little weird, so this is if I use u = k(x − x_o), is that it's a rather cautious system, in that we seem to be avoiding obstacles that are behind us, even though that doesn't entirely make sense, and we also care less about the obstacle the closer we get, which makes absolutely no sense, because we should care more the closer we get. Well, the solution is, again, to make k depend on e, or actually on the distance, the norm of e.
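A distance-dependent avoidance gain can be sketched like this (the exact form, c, and eps are my assumptions; the small epsilon, which the lecture gets to in a moment, keeps the gain finite right at the obstacle):

```python
import numpy as np

def avoid_obstacle_u(x, x_o, c=0.3, eps=1e-3):
    """Avoid-obstacle control u = k(e) * e with e = x - x_o.

    The gain k(e) = c / (||e||^2 + eps) grows as the robot nears the
    obstacle and fades with distance; eps keeps it finite when ||e|| = 0.
    The exact form, c, and eps are assumptions, not from the lecture.
    """
    e = x - x_o
    k = c / (np.dot(e, e) + eps)
    return k * e

x_o = np.array([1.0, 1.0])
u_close = avoid_obstacle_u(np.array([1.1, 1.0]), x_o)  # 0.1 away
u_far = avoid_obstacle_u(np.array([5.0, 1.0]), x_o)    # 4.0 away
print(np.linalg.norm(u_close))  # strong push away when close
print(np.linalg.norm(u_far))    # weak push when far
```

This has the behavior described in the lecture: a strong repulsion close to the obstacle, nearly nothing far away.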

And to avoid this being overly cautious, we are actually going to switch between behaviors; in fact, what we're going to do is use something like sliding modes to very gracefully combine go-to-goal and avoid-obstacle. But for now, let me just point out that one clever thing, for instance, is to say that you want to care more about the obstacle the closer you get, so you want u to be bigger the closer to the obstacle you get. So in this case, this is the k that we used; in fact, this is the k that I'm going to use to implement things. But, again, I want to point out that you want something where you don't care so much when you're far away, and you care a lot when you're close. The reason I have an epsilon here, which is a small number, is just to make sure that this thing doesn't blow up to infinity when the norm of e is 0. Things going off to infinity is typically not that good of an idea.

Okay, so we know how to build the individual control modes. Now, we also saw that the choice of weights matters. You should be aware, again, that there isn't a right answer for how to pick these weights, and depending on the application, you may have to tweak the weights to make your robot more or less skittish or cautious. But the structure is still there. What's missing, though, first of all, is to couple this ẋ = u model to the actual robot dynamics. We're going to ignore that question all through this module and devote the last module of the course to it. But what we do need to do is make transitions between go-to-goal and avoid-obstacle, and that's the topic of the next lecture.

Before we conclude, though, let's actually deploy this dynamic duo on our old friends, the Khepera robots, to see what would happen in real life.

So, now we've seen, in theory, how to design this dynamic duo of robot controllers. In particular, we've seen these two key behaviors: go-to-goal and avoid-obstacle. And now let's actually deploy them, for real, on our old friend, the Khepera mobile robot. As always, I'm joined by Jean-Pierre de la Croix here, who will conduct the affairs. First, we're going to see the go-to-goal behavior in action. What we now know is that what this behavior is really doing is looking at the error between where the robot is, right there, and where the robot wants to be, in this case this turquoise piece of tape, and then globally asymptotically stabilizing this error, in the sense that it's driving the error down to zero. So, J.P., why don't we see the robot make the error go away? As you can see, the robot is going straight for the goal, and the error is indeed decaying down to zero. And this is how you encode things like getting to a point: you make the error vanish. Very nice. Thank you.

So now, we're going to run act two of this drama. Now the robot's sole ambition in life is not driving into things, and "things," in this case, is going to be me. One thing that's going to be slightly different from what I did in the lecture is that I am not a point: I'm not just a point but in fact an obstacle with some spread that the robot is going to avoid. In fact, what we're going to do is, first of all, ignore everything that's behind the robot, because it doesn't care about avoiding things that are behind it. And for the things in front of it, it's going to sum up the contributions from all the sensors, and it's going to care a little bit more about things ahead of it than on its sides. So, J.P., let's take it away. Let's see what can happen here. So, here I am. Oh, no. All right. Very nice.