Machine learning is a discipline within artificial intelligence. The term AI was originally coined in 1956 and has since evolved to encompass many fields of study that are commonplace in today's technological discussions. Topics like machine learning, natural language processing (NLP), and computer vision all fall under the modern umbrella that is AI. Now, if you're thinking of AI as killer robots taking over the world, your field of interest is more HLI, or human-like intelligence. You'll notice that deep learning is yet another sub-discipline within machine learning, and it has captured a lot of attention as it has started to rival and even surpass the human ability to perform complex tasks like image recognition, speech recognition, language translation, and much, much more.

Now that we know when AI, ML, and deep learning came about, how about a one-sentence definition of ML? ML, at its core, labels things for you. Show a model a bunch of good historical sales trend data for your clothing store, and the model can predict next month's sales. Show a model lots of photos of cars along with the correct make and model, and with enough examples the model will classify new, unlabeled car photos for you.

Let's dive in a bit more with an example. On the weekends you'll likely find me scouring the web for new sci-fi movies and TV series to watch. Now, I can tell you which previous sci-fi movies I liked, and I can do a decent job of narrowing down the list of potential new options myself, but I don't have all the time in the world to scan through and classify good sci-fi movies to watch. I do have a general intuition of what I like, which is also reflected in my history of movies watched. So, things like: it has to be set in space in the near future, it's shorter than two hours, and there are no crazy aliens or horror or anything like that. And I can provide you the list of movies that I liked. We can then train a model to label.
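Here's a toy sketch of what "training a model to label" could mean in code. It uses a simple nearest-neighbour classifier, which labels a new movie by copying the label of the most similar past one; all the features, numbers, and labels below are invented for illustration.

```python
# Toy sketch: "lead with examples, not instructions."
# A tiny nearest-neighbour classifier learns my sci-fi taste from
# labeled examples (features and labels are made up for illustration).

def predict(examples, labels, candidate):
    """Label a new movie by copying the label of the most similar past one."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(examples)), key=lambda i: distance(examples[i], candidate))
    return labels[best]

# Each row: [set_in_space, duration_hours, has_horror]
past = [
    [1, 1.8, 0],  # liked: in space, short, no horror
    [1, 1.6, 0],  # liked
    [0, 1.7, 0],  # disliked: not set in space
    [1, 2.5, 0],  # disliked: too long
    [1, 1.7, 1],  # disliked: horror
]
liked = [1, 1, 0, 0, 0]

# A new release: set in space, 1.7 hours, no horror.
print(predict(past, liked, [1, 1.7, 0]))  # -> 1: the model says I'd like it
```

Notice that nowhere did we write the rule "in space, under two hours, no horror"; the model recovers that pattern from the labeled examples alone.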
In this case, it's classifying whether or not I'll like a new space-drama sci-fi movie or series that comes out. The key difference, though, is that although I have an intuition of what I like, and I'm providing the model with a list of movies that I liked and didn't like, I'm not providing the model with a hard-coded recipe for narrowing down the movies, like "if it's shorter than two hours, then prioritize the space movie, as long as it's not horror," that sort of thing. The beauty of ML is that it comes up with this recipe by itself, based on the correctly labeled examples it has seen so far.

Now, imagine if I didn't provide any insight into my movie selection process, other than all the movies and TV series I'd watched in the past. Would you have any basis to even build those hard-coded rules? Not anymore. And what if I asked you to predict across all genres of movies, which could have very different aspects? How could you maintain rules based on things like "if comedy and actor equals John, else if not horror and duration less than two hours"? All that gets unwieldy. Let the machine learning model figure out the recipe that ties your historical labeled data to the predictions on unseen data.

Now, let's extrapolate this to a real-world application: Google Search. Say you go to Google and you search for "giants." What should we show you on your results page to make it the most relevant for you? Well, if you're in California like me, should we show you results for the San Francisco Giants baseball team, and maybe list some local games nearby? What about if you're based in New York? Should one of the rules tailor the results to show the New York Giants football team instead? Well, up until a few years ago, this is exactly how Google Search worked. There were a ton of rules in the search engine code base to decide which sports team to show and where, based on where the user was.
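In code, that kind of hand-coded routing might look something like the sketch below. The regions and result strings are purely illustrative, not Google's actual logic.

```python
# Illustrative only: hand-coded result routing for a single query.
# Every new query, region, or device means another branch to maintain.

def results_for(query, region):
    if query == "giants":
        if region == "bay_area":
            return "San Francisco Giants (MLB)"
        elif region == "new_york":
            return "New York Giants (NFL)"
        else:
            return "tall people / giants"
    # ...and thousands more queries, each with its own rule tree.
    return "generic results"

print(results_for("giants", "bay_area"))  # -> San Francisco Giants (MLB)
```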
If the query is "giants" and the user is in the Bay Area, show them results about the San Francisco Giants. If the user is in the New York area, show them results about the New York Giants. And if they're anywhere else, show them results about tall people, or giants. Those of you who have worked with SQL before, just imagine how many CASE statements this would take and how hard it would be to maintain, and that's just for one query. Multiply this by the large variety of queries people make, where they make them from, and what device they're on, and you can imagine how complex and unwieldy the whole code base had become. Hard-coded rules are hard to maintain, and this is exactly where ML comes into play: it scales much better because it requires no hand-coded rules, and it's all automated.

Our data set in this case is the history of which links people clicked on in the search engine results pages. Why couldn't we just train an ML model to provide input into the search ranking? That's exactly what Google has done internally, using a deep learning ML model called RankBrain. After rolling it out, the quality of search results improved dramatically, with the signal coming from RankBrain becoming one of the top three influencers of how results are ranked. If you're interested, I'll provide a link where you can read more about it.

Now, to recap machine learning: you want to lead with examples, not with instructions. Any business application where you have long case statements of if-then-else logic hard-coded together, but you also have a history of good labeled data, is a possible application for machine learning. Now, deep learning, remember, is that sub-discipline of machine learning that's useful when we as humans can't even map out our own intuition about what makes a prediction correct or not. So, what do you see here?
Now, your eyes and your brain have the benefit of many, many years of evolution and intuition that allow you to perceive and interpret all those pixels on the screen. How could we teach a machine to understand that this picture here is a cat? If you let yourself fall back into the rule-making habits we're trying to avoid, you might say, well, look for cat-like eyes in these images. Okay, what about this image? [LAUGH] Your brain still knows it's a cat, but the machine now has no basis to go off of with the old rule of just looking at the eyes and deciding if they're cat-like. Okay, what happens if we add a bunch more hard-coded rules, like look for the ears, the eyes, and the nose? All right, is this still a cat? What about this? Again, you get the point. Hard-coding rules completely fails us here, and that's where deep learning comes into play: we just provide labeled examples and completely let the model figure out how to build a good recipe to answer the question, what is a cat?

In 2012, that's exactly what the Google Research team with Jeff Dean and Andrew Ng did. What you see here is what their deep learning neural network figured out a cat looks like, based on looking at over 10 million images and processing the model across 16,000 computers. Now, a familiar architecture for deep learning is the neural network, a model inspired by our own human brains. Here it takes the input image that you see there and classifies it as a cat or a dog. And again, we're not telling the model to focus on looking for dog collars or cat whiskers; it builds its own recipe for determining the correct label and applies it in the end. As you can see from the image, modern ML models can scale and handle even tricky data points like this dog hiding in the laundry basket.
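To make the "it builds its own recipe" idea concrete, here is a toy sketch, nothing like the 2012 Google model: a single artificial neuron (the basic unit of a neural network) learns to separate invented "cat vs. dog" feature vectors by gradient descent. The features, numbers, and labels are all made up for illustration.

```python
# Toy sketch: one artificial neuron learns a cat/dog "recipe" from
# labeled examples via gradient descent; no rules are hand-written.
import math

# Made-up features per image: [ear_pointiness, snout_length]
data = [
    ([0.9, 0.1], 1),  # cat
    ([0.8, 0.2], 1),  # cat
    ([0.2, 0.9], 0),  # dog
    ([0.3, 0.8], 0),  # dog
]

w = [0.0, 0.0]  # weights, learned from the data
b = 0.0         # bias, also learned

def forward(x):
    """Weighted sum of features passed through a sigmoid activation."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Gradient descent on log loss: nudge the weights toward the labels.
for _ in range(2000):
    for x, y in data:
        err = forward(x) - y
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

# A new, unseen image with pointy ears and a short snout:
print("cat" if forward([0.85, 0.15]) > 0.5 else "dog")  # -> cat
```

The learned weights end up encoding "pointy ears, short snout means cat" on their own; scale the same idea up to millions of images and many layers of neurons, and you get the kind of deep network described above.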