Let’s consider the robot’s movement from a different point of view. Which parameter of the robot’s movement is the key one? Naturally, it is the speed at which the robot takes a turn. This means that at every moment the difference between the speeds of the two wheels has to be optimal. If we try to picture the relay algorithm in a diagram, we will get something like this. If we plot the wheel-speed difference on the vertical axis, we will have three possible situations: both sensors over the white background, or the sensors over white and black, or over black and white, respectively. Thus, there will be only three possible values of the difference in the speed of the wheels. It can be zero, positive or negative. Of course, in such a case all the turns will be sharp. What can we do to make them less sharp? Obviously, we can regulate the speed of the wheels more smoothly. You remember that we are using an analog line sensor, which means that apart from black and white, we can also see all their
shades. Let’s look at one of the sections of our track. We need to install the sensor above the borderline. Thus, if we imagine the patch of IR light created by the sensor, it will appear like this. We can see that the circle contains half of the black and half of the white background, but the sensor will give us a “grey” value, since it sees the circle as one big dot. Now, if the circle moves a bit to the left, the sensor will receive pure white; vice versa, if the circle moves to the right, the sensor will be getting pure black. In between, it will receive a multitude of grey values of differing intensity. Thus, we can remember the intensity of grey that we need to maintain when we work with the sensor. Now let’s picture this in a diagram. We shall once again plot the difference in the speed of the wheels on the vertical
axis, and the difference between the current value of the sensor and the desired one, which we measured at the beginning, on the horizontal axis. I’ll call it δS, for instance. We have a straight line passing through the zero point. Thus, if there is no difference between the current and the desired values of the sensor, we don’t need any difference between the speeds of the wheels. If there is a difference, then our wheels will be turning at different speeds. By the way, negative values are possible here, because if we, say, subtract the value that we are getting on the black background (700, for instance) from the desired value of 300, we will get a negative value of δS. The difference in the speed of the wheels will also be negative. To compute this difference, we subtract the speed of the right wheel from the speed of the left wheel. Therefore, if the speed of the right wheel is higher than the speed of the
left wheel, the difference will be negative. And there we go, we have achieved what we wanted: the difference in the speed of the wheels is regulated in proportion to the change in the intensity of grey under the sensor. How can we describe it in our algorithm? Let’s have a look at the code. Firstly, at the beginning of the program, in the setup, we will remember the desired value – the intensity of
grey, which we will be trying to achieve, and then save it in the “grey”
value. Then, every time we run the main loop, we will be calculating the so-called error, which is the difference between the current value under the sensor and the desired value that we saved. This difference is the δS variable, which we saw in the previous graph. In fact, errors are usually denoted as “e”. When we know that there is an error, we need to calculate the control action and find out what effect this is going to have on the vertical axis of our graph. The control action is usually denoted as “u”, and in our case it consists solely of the error taken with some coefficient, i.e. the slope of the line. This coefficient is necessary because we read line sensor values in arbitrary units within one range, while we control our motors using other units from a different range; that’s why we need a certain coefficient to map the units of measurement of “grey” onto the units of the speed of the wheels. This is that K which we defined at the beginning of the program. Next, we are going to call the “drive” function. We can see a certain macro definition called BASE_SPEED. This is the desired speed which we will be trying to reach; we want both our wheels to have this speed. Consequently, when the robot needs to turn, we will have to add the control action to the speed of one wheel and subtract it from the speed of the other wheel. There is nothing strange in that because, as we remember, the control action can take negative values. This happens when the current value of the sensor is higher than the desired
average value. Let’s fast forward a bit and imagine what else we might need. As we know, when we turn the robot on by connecting the power, it starts moving forward. We will need to add a button that will start the robot when pressed. This is the first thing. The second thing is that we had better choose the average value of grey for our robot before we run the algorithm. Therefore, it would be good to see what values the sensor is reading
set the sensor. How will the code change after all this? Firstly, we can’t forget about the library, which would allow us to
work with the display. Secondly, you might have noticed that we only have one sensor left now; we actually don’t need the second one in our algorithm. We have connected the button and the display, set the base speed for the motors, and introduced the K coefficient with a provisional value of 0.2. I will explain later how to choose these values. So how do we start our robot now? Let’s add a “while” loop in the setup. The loop will be running until we press the button; in other words, the robot will not move until we start it. While the loop is running, we will be outputting the values that the sensor captures on our display. We might want to add a short delay, so that the digits on the display wouldn’t change too fast and we would have enough time to read them. We are also going to read off the value that we will be trying to reach while the program is running. When we leave the loop, the value will be saved in the “grey” variable. Now our robot looks like this. It has a display, which outputs the readings of the sensor. We have left only one sensor and placed it on the right side. For this reason, the code has changed a bit, but we will talk about it later. We also have the above-mentioned button, with the help of which we can start the
robot. Let me say a couple of words about the sensor and why it is installed this way. I had initially placed it a bit lower, but it dawned upon me that this was a bad solution, since the circle that the sensor captures is too small. As a result, any change under the sensor happens too abruptly from the sensor’s point of view: as soon as black appears, it fills the whole circle. This doesn’t allow us to achieve fine balancing. For this reason, I have placed the sensor a bit higher so that it would have more
in our studio, which is quite abundant and which interferes with the sensor’s readings. That’s why we need to create some sort of shield for it in order to get rid of this stray illumination. Now let’s have a look at the changes in our code. First of all, I’ve substituted all the references to the left sensor and its pin with the same information for the right sensor; in other words, all LEFT_SENSOR_PIN and similar names become RIGHT_SENSOR_PIN. It also seems sensible to add a delay at the end of the setup, so that the robot waits a moment before it starts moving and doesn’t dart off while our fingers are still on it. Most importantly, the sign of our error has changed. We used to subtract the target value from the value read by the left sensor, and now we have to subtract it from the value of the right sensor. That’s why the signs will change when we call “drive”. Thus, the robot will be able to react to these changes.