0:02

Welcome back. This is week four, so we're halfway through the programming and simulation lectures, as well as the assignments. Congratulations on making it this far. Last week I talked about go-to-goal controllers, which allowed our robots to drive from their current location to some location in the environment. This week I'm going to talk about how to create an obstacle avoidance controller. The point of this controller is that we want the robot to be able to drive around the environment without colliding with any of the obstacles. If you're driving a robot in your living room, you don't want it to drive into your sofa and get destroyed. So this week we're going to take care of that. There are actually many ways of doing obstacle avoidance, and I've picked out one particular way.

Â 1:00

Then we're going to take this point that's in the robot's coordinate frame, and transform it once again, into the world coordinate frame, so that we know where each obstacle is in the world.

Â 1:10

What we will do then is compute a vector from the robot to each of these points, and we will sum them together into an obstacle avoidance vector.

And then we will do the same thing as last week: we're going to use a PID controller to steer the robot towards the orientation of this vector. Effectively, the robot will drive in the direction that points away from any obstacles that are nearby, and thus avoid collisions. Now, I've mentioned coordinate frames at least four or five times on this slide. What do I mean by that?

We use coordinate frames to give meaning to points in some sort of space. When I tell you that the robot is located at (x, y, θ), you need to know which coordinate frame I'm talking about. In most cases I'm talking about the world frame. This world frame is centered at the origin, which I'm going to denote as just (0, 0), and I pick where this origin is in the simulator. With respect to this world frame, the location of the robot is given by x and y, and of course, also with respect to this world frame, we have an orientation θ. So the robot right here has an orientation given by θ, and it's important that this θ is defined with respect to the x axis of the world frame.

Â 2:42

Now, the next coordinate frame that we have is the robot's coordinate frame, and this coordinate frame is located right at the robot. Wherever the robot is, at the center of the robot there's a coordinate frame, and this we call the robot frame. With respect to the robot, this direction right here, going out in front of the robot and aligning with the robot's orientation, is θ′ = 0 in the robot frame. But θ′ in the world frame, so maybe I'll call this θ′_wf, would be equal to the actual θ of the robot.

Â 3:27

The reason that we care about the robot's frame of reference is that we only know the location and orientation of the sensors with respect to the robot frame. So the robot knows, for example, that it has one particular sensor mounted right here, which is sensor number one.

Â 3:48

And this sensor, it knows, is located at this point with respect to its own robot coordinate frame, and oriented along this direction. This orientation, with respect to the robot, is 90 degrees. But in the world frame, this orientation is also a function of the actual orientation of the robot. So let's call this maybe θ′_s. We said that it was 90 degrees, which is the same thing as π/2. What we want to do is figure out the orientation of the sensor, θ′_s, in the world frame; this would be nothing more than π/2 plus the orientation of the robot. So this gives you the angle of this sensor in the world frame.
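To make the angle arithmetic concrete: the sensor's orientation in the world frame is just its mounting angle in the robot frame plus the robot's heading. Here is a minimal Python sketch of that idea (the course code itself is MATLAB; the function name and the wrap into (-π, π] are my own additions):

```python
import math

def sensor_angle_world(theta_s_robot, theta_robot):
    """Orientation of a sensor in the world frame: its mounting
    angle in the robot frame plus the robot's heading, wrapped
    back into (-pi, pi]."""
    a = theta_s_robot + theta_robot
    return math.atan2(math.sin(a), math.cos(a))

# A sensor mounted at 90 degrees (pi/2) on a robot heading at pi/4
# points at 3*pi/4 in the world frame.
print(sensor_angle_world(math.pi / 2, math.pi / 4))
```

The wrap via atan2 keeps the result in a standard range even when the raw sum exceeds π.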

Â 4:50

Now, the way that I've denoted this in the manual is, sorry, I'm going to color this out, using x_s4, the x position of sensor four in the robot's frame; y_s4, the y position of sensor four in the robot's frame; and θ_s4, the orientation of sensor four in the robot's frame. So in this case, θ_s4 is actually π/2.

Â 5:27

Now, we can go even one step further than this. What we can define is a coordinate frame with respect to the sensor itself, so each sensor has its own coordinate frame. We're getting fairly deep in here, but the point is that when we define, for example, sensor i's frame, and in this case it's actually sensor four, the origin of this coordinate frame is again located where the sensor is located, and the x axis of the coordinate frame aligns with the orientation of the sensor.

Â 6:05

And this is important, because what we're going to do is figure out that the distance that we're measuring is actually nothing more than a point in this coordinate system. So here, the robot measures a distance, this distance right here is d₄, and all I've done is say that, in this particular coordinate frame, this point right here is equal to this vector right here: (d₄, 0). That's because we travel a distance of d₄ in the x direction, and a distance of 0 in the y direction of that particular coordinate frame.

Â 6:50

Now, what we really care about in the controller is: if I have this point in this sensor's coordinate frame, what does this point correspond to in the world frame? Because, as you can imagine, if I have, for example, a point up here that is on an obstacle, I can say that the obstacle is at this location in the world, and I know where the robot is, and from that we can make an informed decision about how to move the robot.

Â 7:24

So, in order for you to be able to calculate the transformation between the different coordinate frames, you're going to need to know how to rotate and translate in 2D. What I mean by that is: suppose we have a coordinate frame, say this is my coordinate frame right here, and I have this point.

Â 7:45

Right here, so I have a point right here; let's call that (1, 0). I can pretend that this point is a vector going from the origin to this point, and suppose what I want to do is rotate this vector and then translate it. Let's say I'm going to use this notation, R, and translate by one unit in the x direction and two units in the y direction, with π/4 as the rotation. So what does this R actually mean? Well, R is given by this transformation matrix, and what this transformation matrix does is exactly what I just described: when you pre-multiply a vector by R, you get the vector rotated by this θ′ right here and then translated by x and y. So we're actually translating by x′ and y′ according to my notation here.

Â 9:01

So, going back to my example: what's really going on is that first we're going to rotate the vector by π/4, which means that it's now located at (√2/2, √2/2). Then we're going to add a translation of one unit in the x direction and two units in the y direction. Which means that the vector I get is (1 + √2/2, 2 + √2/2).

Â 9:45

So what I've effectively done here is taken this point (1, 0) and rotated and translated it to this point up here, which is (1 + √2/2, 2 + √2/2).

10:01

And that's how the rotation and translation work; whether it's a point or a vector doesn't really matter. The point really is that I've gone through this transformation and gotten a translation and a rotation.

Â 10:18

And to be a little bit more specific, what I've really done is this: on this side, this should have been R(1, 2, π/4). And the vector that I'm multiplying with, you'll notice, needs to match: R is a 3-by-3 matrix, so we want to make sure that this is a 3-by-1 vector for this to be a valid multiplication. What we're going to do is put in the point that we're transforming, so let's call this (x, y), where x was equal to 1 and y equal to 0, and then we're always going to place a 1 right here. This stays a 1 independent of what x and y are.
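The matrix multiplication just described can be sketched in a few lines of Python with NumPy (the assignment itself is written in MATLAB; R here is simply my rendering of the lecture's R(x, y, θ) notation):

```python
import numpy as np

def R(x, y, theta):
    """Homogeneous 2-D transformation: rotate a point by theta,
    then translate it by (x, y)."""
    return np.array([
        [np.cos(theta), -np.sin(theta), x],
        [np.sin(theta),  np.cos(theta), y],
        [0.0,            0.0,           1.0],
    ])

# The lecture's example: take the point (1, 0), append a 1 to get a
# 3-by-1 vector, and pre-multiply by R(1, 2, pi/4).
p = np.array([1.0, 0.0, 1.0])
q = R(1.0, 2.0, np.pi / 4) @ p
print(q[:2])  # approximately [1 + sqrt(2)/2, 2 + sqrt(2)/2]
```

Note that the third entry of q stays 1, which is what lets a single 3-by-3 matrix apply both the rotation and the translation.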

Â 11:13

So, why am I telling you about this? Well, you need this tool, this transformation matrix, in order to do our transformation from the point in the sensor's reference frame all the way back to the world coordinate frame.

Â 11:40

We start with (d_i, 0); this is the distance that was measured by sensor i. Then I'm doing the transformation from the sensor frame to the robot frame, and my input into the rotation and translation matrix is the position and orientation of the sensor on the robot, in the robot's frame. So now this entire thing gives us this point in the robot's reference frame, instead of the sensor's reference frame.

So we have to do another rotation and translation, and we do that by using the robot's location

Â 12:20

and orientation in the world frame. When we do that, this entire thing right here will give us the point, this original point right here, in the world frame. And this is exactly what this is: these are the world frame coordinates of this point detected by this particular infrared sensor i.
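Chaining the two transformations can be sketched like this in Python (again, the actual assignment is MATLAB; the function names and the example poses are hypothetical):

```python
import numpy as np

def R(x, y, theta):
    """Homogeneous 2-D rotate-then-translate matrix."""
    return np.array([
        [np.cos(theta), -np.sin(theta), x],
        [np.sin(theta),  np.cos(theta), y],
        [0.0,            0.0,           1.0],
    ])

def ir_point_world(d_i, sensor_pose, robot_pose):
    """Map a distance d_i measured by sensor i to world coordinates.
    sensor_pose = (x_s, y_s, theta_s): the sensor in the robot frame.
    robot_pose  = (x, y, theta): the robot in the world frame."""
    p_sensor = np.array([d_i, 0.0, 1.0])   # the point in the sensor frame
    p_robot = R(*sensor_pose) @ p_sensor   # sensor frame -> robot frame
    p_world = R(*robot_pose) @ p_robot     # robot frame -> world frame
    return p_world[:2]

# Hypothetical numbers: a forward-facing sensor mounted 0.1 m ahead of
# the robot's center, robot sitting at the world origin facing along x.
# The detected point lands 0.3 m down the world x axis.
print(ir_point_world(0.2, (0.1, 0.0, 0.0), (0.0, 0.0, 0.0)))
```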

Â 12:55

First of all, we're going to create, or we've already created, a new file for this controller. It's going to be its own file, so it's its own controller, and it's going to be called AvoidObstacles.m. And like this comment says, AvoidObstacles is really for steering the robot away from any nearby obstacles. Really, the way to think about it is that we're going to steer the robot towards free space.

Â 13:18

First of all, all of these transformations are going to happen inside of this function called apply_sensor_geometry. We're going to do all three parts in there before we even get to the PID part of the controller. The first part is to apply the transformation from the sensor's coordinate frame to the robot frame.

Â 13:40

And really, what I'm doing here is already giving you the location of each sensor i in the robot's coordinate frame. What I want you to do is first properly implement the get transformation matrix function, according to the equation that I gave on the previous slides. And once you've done that properly, of course, you're going to have to input these here.

Â 14:12

Then you have to figure out the proper multiplication, and again, you should look back at what I've done before. But remember, we're only doing one step, so really what you should be doing is R(x_s, y_s, θ_s) multiplied by (d_i, 0, 1). This is what I expect you to implement right here.

Â 14:40

Then we're going to do the next part, which is transforming the point from the robot's coordinate frame over to the world coordinate frame. This follows a similar pattern to what we did on the previous slide. Again, what you want to make sure of is that this becomes the input for this.

Â 15:01

You've already implemented the get transformation matrix function, so it spits out the appropriate R, and then you're again going to have to do the calculation. So you take the previous vector, I'm just going to represent that by scribbles, and multiply it by R(x, y, θ). And that should give me all the points that correspond to the IR sensors in the world frame. Now we know where obstacles, or free space, are in the world, depending on whether or not you're picking up an obstacle.

Â 15:36

Now, like I said, we've computed the world frame coordinates of all of these points. So for each one of these points right here in green, we know where it's located in the world. What we can do, then, is compute vectors from the robot to each one of these points.

Â 16:17

This sensor right here, which is detecting no obstacles, or for which the distance is really great, is going to contribute much more than this sensor, which is picking up an obstacle close by. So when you sum up all these sensors, the main contribution is going to be along a direction that points away from the obstacle.

Â 16:41

And then, once we've done that, we can use the orientation of this vector and use PID control, much like we did in the go-to-goal controller, to steer the robot in the direction of the free space. So here is the code in the main execute function of the AvoidObstacles controller. One thing I would like you to pay attention to is that I have created these sensor gains, and I ask you in the manual to think about how you want to structure them. Do you want all of the sensors to count equally, to contribute equally, or do you maybe care about a particular sensor more? For example, if you care about the one that's in the front, the third sensor, then maybe its gain should be 3, so that you pay more attention to the obstacles that are in front of you. Or do you want to pay more attention to obstacles that are on the side? In that case you would maybe increase these two gains, for sensors one and five, which point to the left and to the right of the robot.

Â 17:50

And again, here I'm asking you to properly calculate each of the u_i's, so each one of the vectors. Then we're just going to sum them together, and you're going to have to figure out the angle of this vector, so that you get θ_ao. And then, just like last week with the go-to-goal controller, so you're already familiar with this, you need to compute the error between this angle that you want to steer to and the actual angle of the robot.
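The weighted sum and the heading error can be sketched as follows in Python (a hedged illustration only: the function name, the gain values, and the points are hypothetical, and the real assignment computes this in MATLAB inside the execute function):

```python
import math

def avoid_obstacles_heading_error(points_world, robot_pose, gains):
    """Sum the weighted vectors u_i from the robot to each detected
    point, take the angle of the result as theta_ao, and return the
    heading error for the PID controller, wrapped into (-pi, pi]."""
    x, y, theta = robot_pose
    u_x = sum(g * (px - x) for g, (px, _) in zip(gains, points_world))
    u_y = sum(g * (py - y) for g, (_, py) in zip(gains, points_world))
    theta_ao = math.atan2(u_y, u_x)   # direction of free space
    e = theta_ao - theta
    return math.atan2(math.sin(e), math.cos(e))

# Hypothetical reading: a long (free-space) vector ahead and to the
# left, and a short (obstacle) vector to the right. The summed vector
# pulls the heading to the left, away from the nearby obstacle.
err = avoid_obstacles_heading_error([(2.0, 2.0), (0.3, -0.3)],
                                    (0.0, 0.0, 0.0), [1.0, 1.0])
print(err > 0)  # True: positive error means "steer left"
```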

Â 18:24

Now, to test this, we're just going to implement this controller and run it on the robot, so let's see what happens. Obviously, we hope that the robot is not going to collide with any of the walls or any of the obstacles that I've added to the environment. But let's make sure this actually happens.

Â 18:49

I'm going to hit Play, and we're going to follow this robot and see. So I clicked on it to follow it, and we're going to see what it does. It's already turned away from the obstacle. Now there is a wall and, phew,

Â 19:13

And since I've implemented this controller correctly, the robot should just drive around this environment aimlessly. We haven't told it where it needs to go; we've just told it that it needs to avoid any obstacles. In fact, next week we'll talk about how to combine go-to-goal controllers and avoid-obstacles controllers.

Â 19:36

My tips for this week are, again, to refer to the section for Week 4 in the manual for more details. As always, I have provided very detailed instructions that should help you get through these assignments. Also keep in mind that, even though this seems like a really difficult and tedious approach, and you could probably think of far easier ways of doing obstacle avoidance with robots, the reason that I've done it in a way where we end up with a vector is that it will make things a lot easier next week, when we combine the go-to-goal controllers and the avoid-obstacles controllers. Because then it's just a matter of blending two vectors together, and we definitely know how to sum vectors together.
