Another very important set of classification methods is based on neural networks. Neural networks are based on the brain metaphor for information processing. They are also referred to as neural computing, artificial neural networks, and more recently, deep learning. There are many uses for artificial neural networks. The main one is pattern recognition, for example on images and videos, where they give very good results. But they can also be used for any kind of prediction and classification task. At the bottom of this slide you see some real neurons. As you know, neurons are the cells that are present in the brain and nervous system. These are specialized cells, and they communicate with each other through channels called axons and dendrites, and through what are called synapses, which release signals that travel from neuron to neuron. Based on this network model, an artificial neuron is represented here. This is pretty much the basis of the architecture of a neural network. A single neuron has inputs and outputs, as we can see here. The inputs are a set of features (attributes, variables, columns, however we call them) that we have in the data for a particular instance. To each input is associated a weight. The neuron unit then calculates a weighted sum: it adds all the x's multiplied by their weights. Based on this total input, it uses what is called a transfer function to calculate the output. Those outputs will be Y1, Y2, up to Yn. So as you see, this is a model based on the actual natural neural model. Now, there are many different types of architectures for neural networks, and the architecture will be chosen based on the task to address, because neural networks can be applied to classification and prediction tasks, but also clustering, optimization, and so forth.
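The single-neuron computation described above (a weighted sum of the inputs passed through a transfer function) can be sketched in a few lines of Python. This is a minimal illustration, not from the lecture itself; the sigmoid transfer function and the example input values are my own assumptions.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A single artificial neuron: weighted sum of the inputs,
    passed through a sigmoid transfer function."""
    # Add all the x's multiplied by their weights (plus an optional bias)
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid transfer function maps the total input to an output in (0, 1)
    return 1.0 / (1.0 + math.exp(-total))

# Example: three input features, each with an associated weight
# (hypothetical values, for illustration only)
y = neuron([0.5, 1.0, -0.2], [0.4, 0.3, 0.9])
```

Other transfer functions (step, tanh, ReLU) can be swapped in; the structure of the unit stays the same.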
The most popular architecture is called the feedforward multilayer perceptron, which uses what is called the backpropagation learning algorithm. It can be used for both classification and prediction problems. But there are other ones; for example, self-organizing feature maps are used for clustering tasks like the ones we are going to see in module six. So here is an example of the architecture of a feedforward multilayer perceptron that has one hidden layer. So here, instead of only connecting the input with the output, we see a hidden layer. For example, here we want to predict with a yes or no, so it is a typical classification task. Yes or no could be, for example, a particular diagnosis: yes, this is breast cancer; no, this is not breast cancer. So here you have one hidden layer, and hidden layers are very important. At some point, research on neural networks stalled because there was no hidden layer, and the type of problems that could be solved with this type of model was too simple. It could only classify problems where the data were what we call linearly separable. This was very limiting, and once hidden layers were introduced, the range of applications and complexity became incomparable with what it was before. And you may have several hidden layers. So here again, it is going to be a choice in the architecture, and in fine-tuning the right architecture for your particular neural network. With a neural network, we learn a prediction model, and the model here, if I go back to these weights, is actually a set of weights. That is what the model is going to learn. And once the weights have been learned with satisfying performance, this is when we are ready for deployment of the model. So that is really what the learning is about: it is about adjusting the weights.
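The point about linear separability can be made concrete with XOR, the classic example of a problem a single neuron cannot classify but a network with one hidden layer can. The weights below are hand-picked for illustration (a trained network would learn them); the step transfer function and unit names are my own assumptions, not from the lecture.

```python
def step(t):
    """Step transfer function: fires (1) when the total input is positive."""
    return 1 if t > 0 else 0

def xor_net(x1, x2):
    """A tiny feedforward network with one hidden layer of two units.
    XOR is not linearly separable, so no single neuron can compute it,
    but the hidden layer makes it solvable."""
    h1 = step(x1 + x2 - 0.5)   # hidden unit 1: behaves like OR
    h2 = step(x1 + x2 - 1.5)   # hidden unit 2: behaves like AND
    return step(h1 - h2 - 0.5)  # output unit: OR but not AND, i.e. XOR
```

Each layer is just the single-neuron computation repeated: a weighted sum followed by a transfer function, with the hidden layer's outputs feeding the output unit.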
So each time, based on the input data that we have in our training set, the network calculates the output using the current weights. If the desired output is achieved, we stop learning; it means the weights have been learned satisfactorily. Otherwise, the algorithm is going to adjust each weight until it finds that the weights are learned. Once trained, the neural network stops learning and can then be deployed. Deep learning is based on advanced neural networks; in particular, it uses more advanced mapping functions. Deep networks are excellent for pattern recognition in particular, and they are one of the major methods used in machine learning. However, they require significant computing power and have a set of parameters to adjust. Another limitation is that they are often not very understandable, and as we have seen when comparing with some other models, some consider this a drawback. So it really depends on what kind of task you are trying to carry out. Thank you.
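The training loop described above (compute the output with the current weights, compare with the desired output, adjust the weights, repeat) can be sketched for a single sigmoid unit. This is a simplified illustration of the idea, not the full backpropagation algorithm for a multilayer network; the task (learning logical OR), the learning rate, and the epoch count are assumptions chosen for the example.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Training set: learn logical OR. The constant 1 in each input acts as a bias.
data = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
lr = 0.5  # learning rate (illustrative choice)

for epoch in range(2000):
    for inputs, target in data:
        # Forward pass: output from the current weights
        y = sigmoid(sum(x * w for x, w in zip(inputs, weights)))
        error = target - y
        # Adjust each weight in proportion to the error, the sigmoid's
        # derivative y*(1-y), and that weight's input (gradient descent)
        for i, x in enumerate(inputs):
            weights[i] += lr * error * y * (1 - y) * x

# After training, the rounded outputs should match the desired targets
predictions = [round(sigmoid(sum(x * w for x, w in zip(inputs, weights))))
               for inputs, _ in data]
```

In a real multilayer perceptron, backpropagation applies this same error-driven weight adjustment layer by layer, propagating the error backwards from the output layer through the hidden layers.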