So far, we have mostly been dealing with shallow neural networks. The main reason is that they serve as the building blocks of deep neural networks and, thanks to their simplicity, are easier to understand. There is no real consensus on the definition of a shallow neural network, but a network with one hidden layer is generally considered shallow, whereas a network with many hidden layers and a large number of neurons in each layer is considered deep. Also, unlike a shallow neural network, which takes only vectors as input, deep neural networks can take raw data such as images and text and automatically extract the features needed to learn the data better. We will start learning about deep learning algorithms in the next videos.

But if neural networks have been around for quite some time, how come they only recently became deep and took off, resulting in a plethora of cool and exciting applications? The sudden boom in the deep learning field can be attributed to three main factors.

Number one, advancement in the field itself. We talked about this briefly in the activation functions video, where we mentioned that the ReLU activation function helped overcome the challenge of the vanishing gradient problem and therefore opened the door to the creation of very deep networks.

Another main reason is the availability of data. Deep neural networks work best when trained on larger and larger amounts of data: because neural networks learn the training data so well, large amounts of data have to be used in order to avoid overfitting. Now that large amounts of data are readily available and easier to acquire than ever before, deep learning algorithms are being tried and tested like never before.
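To make the vanishing gradient point concrete, here is a minimal numerical sketch (not from the lecture; the 20-layer depth and the pre-activation value are illustrative assumptions). It compares the product of per-layer local derivatives, one factor per layer of a hypothetical deep network, for the sigmoid activation versus ReLU:

```python
import numpy as np

def sigmoid_grad(x):
    # Derivative of the sigmoid; its maximum value is only 0.25,
    # so multiplying one such factor per layer shrinks the gradient fast.
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    # Derivative of ReLU is exactly 1 for positive inputs,
    # so the backward product does not shrink.
    return 1.0 if x > 0 else 0.0

x = 0.5        # an assumed pre-activation value at every layer
layers = 20    # a hypothetical 20-layer network

# Chain rule along the backward path: one local derivative per layer.
sig_chain = np.prod([sigmoid_grad(x) for _ in range(layers)])
relu_chain = np.prod([relu_grad(x) for _ in range(layers)])

print(f"sigmoid gradient factor after {layers} layers: {sig_chain:.2e}")
print(f"relu gradient factor after {layers} layers:    {relu_chain:.2e}")
```

With sigmoid, the surviving gradient is on the order of 1e-13, far too small to update early layers; with ReLU it stays at 1.0, which is why ReLU opened the door to much deeper networks.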
This matters especially because conventional machine learning algorithms, while they do improve with more data, only do so up to a certain point; after that, no significant improvement is observed with more data. That is definitely not the case with deep learning: the more data you feed it, the better it performs.

Finally, and this goes hand in hand with point number two, is computational power. With NVIDIA's super powerful GPUs, we are now able to train very deep neural networks on tremendous amounts of data in a matter of hours, as opposed to the days or weeks it used to take. Therefore, users are able to experiment with different deep neural networks and test different prototypes in much shorter periods of time.

These three factors are the main reasons behind the boom of deep learning. In the next video, we will start learning about deep learning algorithms, beginning with supervised ones, specifically convolutional neural networks.