For the Deep Learning Project with TensorFlow Playground, let's now look at the project setup. The learning rate will be set to 0.03, the activation will be ReLU, and the regularization will be set to none. Regularization is not needed for a simple problem like this, because overfitting is unlikely to occur, and therefore the regularization rate will be set to zero. The problem type will be classification, and the ratio of training to test data will be set to 50%. The noise will be set to zero to make it easy to find the solution; you can practice later with higher noise levels. In addition, the batch size will be set to 10. After you enter these settings, the screen will look like this, and we will use the X_1 and X_2 features, which you can see over there.

Now let's go into Project One. The objective of Project One is to conduct classification: separating two clusters of data. Select the data set for this project from the menu over there, and this will show up if you make the right selection. We will solve this problem first. We will start with only one hidden layer that has only one neuron. I know it looks a little lonely, but that's okay; we'll be adding more neurons later. If your hidden layer has more neurons than this, you can click on the minus button over there to reduce it so that it looks like this. Please go ahead and set it up this way so we can start our Project One experiment. Because the neural network training has not started yet, the test loss and training loss will be very high, as you can see right here. Also, the initial test loss and training loss values will differ, because the initial weight values are set at random.
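To see why random initialization makes the two initial losses differ, here is a minimal numpy sketch of a network like ours, with one hidden neuron and a ReLU activation. The data, loss function, and initialization scheme here are illustrative assumptions, not the Playground's actual internals:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def init_network(seed):
    """Randomly initialize a 2-input -> 1-hidden-neuron -> 1-output net."""
    rng = np.random.default_rng(seed)
    return {
        "w_hidden": rng.normal(size=(2, 1)),  # weights for X_1 and X_2
        "b_hidden": np.zeros(1),
        "w_out": rng.normal(size=(1, 1)),
        "b_out": np.zeros(1),
    }

def forward(params, x):
    h = relu(x @ params["w_hidden"] + params["b_hidden"])
    return h @ params["w_out"] + params["b_out"]

def squared_loss(params, x, y):
    pred = forward(params, x)
    return float(np.mean((pred - y.reshape(-1, 1)) ** 2))

# Two labeled clusters of points (labels +1 / -1, as in the Playground)
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

# Different random seeds -> different initial weights -> different initial losses
loss_a = squared_loss(init_network(1), x, y)
loss_b = squared_loss(init_network(2), x, y)
print(loss_a, loss_b)  # two different (and fairly high) starting losses
```

Each fresh initialization lands at a different point on the loss surface, which is exactly why the Playground shows different starting values each time you regenerate the network.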
You can therefore see that these values over here are different. Now let's press the run button over there to start the training. Since this is a simple problem, the results came out very quickly and were successful: as you can see, the blue dots are in the blue region and the orange dots are in the orange region. You can also see the black curve right here that drops very quickly and then stays at a low level. What this shows is that the test loss and the training loss both converge to a very low value and then stay there. The black line and the gray line appear to overlap perfectly; in fact their values may differ slightly, but because this is a small graph, they look like they perfectly overlap. You can also check the weight values: place your mouse cursor on the link that connects X_1 or X_2 to the neuron in the hidden layer, and the value will show up like that over there. Before training, the neural network could not distinguish between the orange and the blue dots, but after training, the two regions were perfectly distinguished, which shows that our classification succeeded. We already learned about this: back in module four, I explained that a single layer with as few as two weights can draw a straight line, which you see right here. And that's exactly what we just did.
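The point that two weights and a bias draw a straight line can be sketched in numpy: a single neuron defines the line w1*x1 + w2*x2 + b = 0, and even hand-picked (hypothetical) weights are enough to separate two well-separated clusters like the ones in Project One:

```python
import numpy as np

# A single neuron with two weights and a bias defines the line
#   w1*x1 + w2*x2 + b = 0
# Points on one side get a positive score, points on the other a negative one.
# Hypothetical weights, chosen by hand for clusters centered at (2,2) and (-2,-2).
w = np.array([1.0, 1.0])
b = 0.0

def classify(points):
    scores = points @ w + b
    return np.where(scores >= 0, 1, -1)

rng = np.random.default_rng(0)
blue = rng.normal(2, 0.5, size=(50, 2))     # cluster labeled +1
orange = rng.normal(-2, 0.5, size=(50, 2))  # cluster labeled -1

accuracy = np.mean(
    np.hstack([classify(blue) == 1, classify(orange) == -1])
)
print(accuracy)  # close to 1.0: one straight line separates the clusters
```

Training simply finds good values for w and b; the shape of the decision boundary is a line either way.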
Now we enter Project Two, where we will again classify two data sets, the orange data and the blue data, into different groups. The difference is that here the data has a circular shape: the orange dots form an outer circle and the blue dots are inside it. Go ahead and use the selection menu over there to select this data set so we can begin Project Two. Project Two is evidently much more complicated than Project One. Project One could be solved with a straight line, but just look at Project Two: there is no practical way to solve it with a straight line. Where would you place the line to create a solution? There is no such line. So let's look at what we need to do in Project Two. One line is not sufficient, so we are going to have to try multiple neurons in the hidden layer. If you train with one hidden layer that has just one neuron, the classification fails, just as we predicted. Look at the losses right here: the test loss is 0.44 and the training loss is 0.371, both high, which shows that we failed the training and the classification test. Now press the plus button over there to add another neuron, so that we have two neurons in the single hidden layer. With two neurons the performance improves: the test loss goes down to 0.27 and the training loss goes down to 0.249. Still, the classification has failed. With two neurons in the hidden layer we can draw two lines, and as you can see, in the blue area between the two lines there are still orange dots.
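One way to convince yourself that no straight line works here is to try many random lines and see how well the best one does. This numpy sketch (the data generation is only an approximation of the Playground's circle data set) shows that the best linear accuracy stays well below 100%:

```python
import numpy as np

rng = np.random.default_rng(0)

# Circle data in the spirit of Project Two: blue inside, orange in an outer ring
n = 100
r = np.hstack([rng.uniform(0, 1, n), rng.uniform(2, 3, n)])
theta = rng.uniform(0, 2 * np.pi, 2 * n)
points = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
labels = np.hstack([np.ones(n), -np.ones(n)])  # +1 = blue, -1 = orange

# Try many random straight lines w.x + b = 0 and record the best accuracy
best = 0.0
for _ in range(2000):
    w = rng.normal(size=2)
    b = rng.normal() * 3
    pred = np.where(points @ w + b >= 0, 1, -1)
    # a line can label either side blue, so take the better orientation
    acc = max(np.mean(pred == labels), np.mean(pred == -labels))
    best = max(best, acc)

print(best)  # stays well below 1.0: no line separates the ring from the center
```

Because the blue cluster is surrounded on all sides, any half-plane that contains all the blue dots must also contain part of the orange ring, so a single-neuron (single-line) model cannot succeed.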
That is what keeps the test loss and training loss values high; we have made a big improvement, but it is still not sufficient. Now let's go to three neurons: click on the plus sign over there to add a neuron, then click the run arrow over there to train again, and this will be the result. The values right here are also listed above so you can see them better: the test loss has been reduced to 0.005 and the training loss has been reduced to 0.002. And look at this over here: as you can see, the classification was successful. Let's observe the difference in the classification results. With one neuron, we failed badly. With two neurons in the hidden layer, it got much better, but there were still orange dots in the blue area in the middle. With three neurons, the system succeeded clearly. This is the kind of result you can attain with more neurons in your neural network. These are the references that I used, and I recommend them to you. Thank you.
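As an illustration of why three neurons can be enough, here is a hand-constructed three-neuron ReLU hidden layer (hypothetical weights, not exactly what training would find): the three neurons correspond to three lines facing directions 120 degrees apart, and together they box in the inner cluster:

```python
import numpy as np

rng = np.random.default_rng(42)

# Circle data as in Project Two: blue dots inside, orange dots in an outer ring
n = 100
r = np.hstack([rng.uniform(0, 1, n), rng.uniform(2, 3, n)])
theta = rng.uniform(0, 2 * np.pi, 2 * n)
points = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
labels = np.hstack([np.zeros(n), np.ones(n)])  # 0 = blue (inside), 1 = orange (ring)

# Hand-picked weights: each hidden ReLU neuron fires only when a point lies
# beyond a line at distance 1 from the center; the three lines face directions
# 120 degrees apart, forming a triangle around the blue cluster.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
W = np.stack([np.cos(angles), np.sin(angles)])  # shape (2, 3)
b = -np.ones(3)

hidden = np.maximum(0.0, points @ W + b)  # ReLU activations, shape (200, 3)
score = hidden.sum(axis=1)                # any neuron firing => outside the triangle
pred = (score > 0).astype(float)

accuracy = np.mean(pred == labels)
print(accuracy)  # the three lines box in the inner cluster and classification succeeds
```

Blue points never cross any of the three lines, so all three ReLU outputs are exactly zero for them, while every ring point crosses at least one line. This is the geometric intuition behind the Playground result: two lines leave a gap, three lines close the region.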