Unfortunately, we don't have access to a real space shuttle or a commercial aircraft, so we need to replicate the system on a PC in software. The point of this course is to let you be creative. We will not force you to do the project in a certain way, as long as it follows some general guidelines and, of course, as long as it works. This means it is a true engineering task, from design to implementation and test. In this course you will use free software to build the embedded platform for wrong-way detection, and in this lecture we will teach you how to use the tools. Initially, you are provided with parts of the algorithms used in various parts of wrong-way detection systems. Your task is to modify these algorithms to fit the system you are about to use, to integrate them so that all parts of the system are able to communicate, and possibly to add some new algorithms if deemed necessary.

For implementing the wrong-way detection system, we recommend that you use Microsoft Visual Studio Express. However, you are allowed to use other tools such as Eclipse, as long as you make the system work; the guidelines we give you in this lecture are based on Visual Studio. First, download the project template called wrong way detection project from the course webpage. Extract the files on your computer and you should see a folder called FreeRTOS_v9.0.1; this is the project folder. Navigate with Windows Explorer into this folder and open the file WIN32.vcxproj. This should start Visual Studio and import the project. Recall that this is the same FreeRTOS environment you were using in the course Development of Real-Time Systems, so it should be a little bit familiar.

The left panel in Visual Studio, the Solution Explorer, contains all the files that are currently used in the project. Click on main.c to go to the main function of the template project. In case you need to add more files to the project, simply drag and drop the files into the Solution Explorer and they will be added to the project. Note that only the source files must be added; other files can be included without being added to the Solution Explorer. Next, go to the main function in main.c and familiarize yourself with the code in this function. The template currently contains only a dummy task called myTask, and this task is scheduled by the scheduler. Next, click the play icon in the upper middle part of Visual Studio to build and run the project. This will take a while as Visual Studio compiles the source files, and when it is done you should get a console window with the dummy task printing out hello world.

Okay, so you have Visual Studio up and running. The next thing to do is to add an algorithm for detecting the edges in a video frame. The basic algorithm is provided to you on the course page in the file canny.c. This file contains an algorithm which reads a bitmap image and detects the edges. The algorithm determines edges as points in a frame where the brightness changes sharply; a collection of such points forms a line that can be visually seen as an edge. The picture shows an example of detecting edges in an image, and you can see that the filtered image contains only black and white pixels, where the white pixels are the edges in the image. The implementation is based on the Canny filter, which uses five steps to recognize the edges. First, a Gaussian filter is applied to smooth the image and reduce the impact of noise.
Second, detect in which direction each edge is pointing by calculating the first-order derivative of the pixel values of the blurred image in the vertical, horizontal, and diagonal directions (a small sketch of this gradient step is given at the end of this section). Third, after these gradients are calculated the edges are still blurry, so non-maximum suppression is applied to thin the edges by comparing the edge strength of each pixel to that of its neighbors; only the strongest pixel is kept and the neighbors are suppressed. Fourth, some pixels are still affected by color and noise variation, so edge pixels with weak gradient values are filtered out and edge pixels with high gradient values are preserved. Fifth, all edge pixels that are still classified as weak are checked against their neighbors: if a weak pixel is connected to a strong pixel, the weak pixel is preserved. More detailed information about the Canny filter is available on its Wikipedia page.

After edge detection, you also need to encrypt the output of the algorithm before sending it to the cloud. We have provided C code for the RSA algorithm in the file RSA.C. We introduced and discussed the RSA algorithm in more detail in the course Web Connectivity and Security in Embedded Systems. The code consists of three parts. The first part calculates the public and private keys using predetermined prime numbers p and q. You can set any prime numbers for p and q in the code, but remember that if you choose very large primes, calculating the keys will be slow. In the second part, the encryption function encrypts the message; in our case, the output of the edge detection algorithm will be an array of integers with values of 0 or 255. In the third part, the decryption function decrypts the message and outputs the edge detection data (a small numeric sketch of this arithmetic is given at the end of this section). You need to use your own creativity to make use of this code in your program wherever it is needed.

Your task is now to connect these algorithms using FreeRTOS. You are given some images of runways, the edge detection algorithm, and the encryption algorithm. What you do is create a FreeRTOS task for edge detection and read the image file in this task. Next, you implement an encryption task containing the encryption algorithm. Now you need some way to transmit the filtered image frame to the encryption task. We recommend that you use a FreeRTOS queue here: create a queue, send a frame from the edge detection task, and receive the frame in the encryption task (see the task and queue sketch at the end of this section). Finally, your job is to send the encrypted frame to the cloud server. To do this, you need to forward the frame to a TCP client. We recommend that you implement the TCP client as a FreeRTOS task, to which you again connect a FreeRTOS queue and transmit the data over this queue; a minimal TCP client sketch is also given at the end of this section. From this point, all you need to do is transmit the encrypted frame to the cloud server. On the server side you must implement a program that is capable of reading, decrypting, and storing the transmitted frame.

In conclusion, your job is to put three pieces together: edge detection, encryption, and network transmission. In another lecture we will go into more detail about the implementation itself.
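As a starting point for understanding the gradient step of the Canny filter, here is a minimal sketch in C of how the horizontal and vertical first-order derivatives can be computed with Sobel kernels and combined into an edge strength. The image size, buffer layout, and function name are illustrative assumptions; the actual implementation in canny.c may differ.

#include <math.h>

#define W 320   /* assumed image width  */
#define H 240   /* assumed image height */

/* Fill mag with the gradient magnitude (edge strength) of a grayscale image.
   The edge direction, needed later for non-maximum suppression, would be atan2(sy, sx).
   Border pixels are left untouched in this sketch. */
void gradient_magnitude(const unsigned char img[H][W], float mag[H][W])
{
    /* 3x3 Sobel kernels approximating the x and y derivatives */
    const int gx[3][3] = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    const int gy[3][3] = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
    int x, y, i, j;

    for (y = 1; y < H - 1; y++) {
        for (x = 1; x < W - 1; x++) {
            int sx = 0, sy = 0;
            for (j = -1; j <= 1; j++) {
                for (i = -1; i <= 1; i++) {
                    sx += gx[j + 1][i + 1] * img[y + j][x + i];
                    sy += gy[j + 1][i + 1] * img[y + j][x + i];
                }
            }
            mag[y][x] = sqrtf((float)(sx * sx + sy * sy));
        }
    }
}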
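The RSA arithmetic itself fits in a few lines. Below is a minimal, self-contained sketch of textbook RSA for a single pixel value, using the small example primes p = 61 and q = 53 (so n = 3233, public exponent e = 17, private exponent d = 2753). These constants and the function name mod_pow are assumptions for illustration and do not have to match what is configured in RSA.C.

#include <stdio.h>

/* Square-and-multiply modular exponentiation: returns (base^exp) mod m. */
static unsigned long long mod_pow(unsigned long long base, unsigned long long exp,
                                  unsigned long long m)
{
    unsigned long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    const unsigned long long n = 61 * 53;   /* n = p*q = 3233 (example primes)     */
    const unsigned long long e = 17;        /* public exponent                     */
    const unsigned long long d = 2753;      /* private exponent, e*d mod 3120 == 1 */

    unsigned long long pixel  = 255;                     /* an edge pixel (0 or 255) */
    unsigned long long cipher = mod_pow(pixel, e, n);    /* encryption               */
    unsigned long long plain  = mod_pow(cipher, d, n);   /* decryption               */

    printf("pixel=%llu cipher=%llu decrypted=%llu\n", pixel, cipher, plain);
    return 0;
}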
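Here is a minimal sketch of the task and queue wiring, using the standard FreeRTOS API already available in the template (xTaskCreate, xQueueCreate, xQueueSend, xQueueReceive). The frame size, task names, and the places where the canny.c and RSA.C routines would be called are assumptions, and this main function would stand in for the dummy-task setup in the template's main.c. Only a pointer to the frame is passed through the queue so that the whole buffer is not copied.

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"
#include <string.h>

#define FRAME_PIXELS ( 320 * 240 )          /* assumed frame size */

typedef struct {
    unsigned char pixels[ FRAME_PIXELS ];   /* 0 or 255 after edge detection */
} Frame_t;

static Frame_t xFrameBuffer;
static QueueHandle_t xFrameQueue;           /* edge detection -> encryption */

static void vEdgeDetectionTask( void *pvParameters )
{
    Frame_t *pxFrame = &xFrameBuffer;
    ( void ) pvParameters;
    for( ;; )
    {
        /* Here you would read a bitmap and run the Canny filter from canny.c,
           writing the filtered pixels into pxFrame->pixels. */
        memset( pxFrame->pixels, 0, FRAME_PIXELS );

        /* Send only the pointer to the frame over the queue. */
        xQueueSend( xFrameQueue, &pxFrame, portMAX_DELAY );
        vTaskDelay( pdMS_TO_TICKS( 100 ) );
    }
}

static void vEncryptionTask( void *pvParameters )
{
    Frame_t *pxFrame;
    ( void ) pvParameters;
    for( ;; )
    {
        if( xQueueReceive( xFrameQueue, &pxFrame, portMAX_DELAY ) == pdPASS )
        {
            /* Here you would call the encryption routine from RSA.C and then
               forward the encrypted frame to the TCP client task, for example
               over a second queue. */
        }
    }
}

int main( void )
{
    xFrameQueue = xQueueCreate( 1, sizeof( Frame_t * ) );
    xTaskCreate( vEdgeDetectionTask, "EdgeDetect", configMINIMAL_STACK_SIZE, NULL, 1, NULL );
    xTaskCreate( vEncryptionTask, "Encrypt", configMINIMAL_STACK_SIZE, NULL, 1, NULL );
    vTaskStartScheduler();
    for( ;; );
}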
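Finally, a minimal sketch of the network part. Because the FreeRTOS Windows port runs as an ordinary Windows process, plain Winsock sockets can be used inside a task; the server address 127.0.0.1, port 5000, and the function name send_encrypted_frame below are placeholders, not values taken from the course material.

#include <winsock2.h>
#pragma comment(lib, "Ws2_32.lib")

/* Connect to the cloud server, send one encrypted frame, and disconnect.
   Returns 0 on success and -1 on any socket error. */
int send_encrypted_frame(const unsigned char *data, int len)
{
    WSADATA wsa;
    SOCKET s;
    struct sockaddr_in server;

    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return -1;

    s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET) {
        WSACleanup();
        return -1;
    }

    server.sin_family = AF_INET;
    server.sin_addr.s_addr = inet_addr("127.0.0.1");  /* example server address */
    server.sin_port = htons(5000);                    /* example server port    */

    if (connect(s, (struct sockaddr *)&server, sizeof(server)) == SOCKET_ERROR ||
        send(s, (const char *)data, len, 0) == SOCKET_ERROR) {
        closesocket(s);
        WSACleanup();
        return -1;
    }

    closesocket(s);
    WSACleanup();
    return 0;
}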