One of the challenges of becoming good at recognizing what AI can and cannot do is that it takes seeing a few examples of concrete successes and failures of AI. If you work on, say, an average of one new AI project a year, then seeing three examples would take you three years of work experience, and that's just a long time. What I hope to do, both in the previous video and in this one, is to quickly show you a few examples of AI successes and failures, of what it can and cannot do, so that in a much shorter time you can see multiple concrete examples to help hone your intuition and select valuable projects. So, let's take a look at a few more examples.

Let's say you're building a self-driving car. Here's something that AI can do pretty well: take a picture of what's in front of your car, maybe using just a camera, maybe using other sensors as well, such as radar or lidar, and figure out the positions of the other cars. So, this would be an AI where the input A is a picture of what's in front of your car, or maybe both a picture as well as radar and other sensor readings. The output B is: where are the other cars? Today, the self-driving car industry has figured out how to collect enough data, and has pretty good algorithms for doing this reasonably well. So, that's something AI today can do.

Here's an example of something that today's AI cannot do, or at least would find very difficult: to input a picture and output the intention of whatever the human is trying to gesture at your car. So, here's a construction worker holding out a hand to ask your car to stop. Here's a hitchhiker trying to wave a car over. Here's a bicyclist raising their left hand to indicate that they want to turn left.
So, if you were to try to build a system to learn the A to B mapping, where the input A is a short video of a human gesturing at your car, and the output B is what this person's intention is, or what they want, that is very difficult to do today. Part of the problem is that the number of ways people gesture at you is very, very large. Imagine all the hand gestures someone could conceivably use to ask you to slow down, or go, or stop. Because the number of ways people could gesture at you is just very, very large, it's difficult to collect enough data, from enough thousands or tens of thousands of different people gesturing at you in all of these different ways, to capture the richness of human gestures. So, mapping from a video to what this person wants is actually a somewhat complicated concept to learn. In fact, even people sometimes have a hard time figuring out what someone waving at your car wants.

Second, because this is a safety-critical application, you would want an AI that is extremely accurate at figuring out whether a construction worker wants you to stop or wants you to go, and that makes it harder for an AI system as well. So today, if you collect, say, just 10,000 pictures of other cars, many teams could build an AI system that has at least a basic capability for detecting other cars. In contrast, it's quite hard to track down 10,000 different people waving at your car so that you can collect pictures or videos of them, and even with that data set, I think it's quite hard today to build an AI system that recognizes human intentions from their gestures at the very high level of accuracy needed to drive safely around these people. So, that's why today many self-driving car teams have some component for detecting other cars, and they do rely on that technology to drive safely.
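To make the contrast concrete, here is a toy sketch that is not from the video: a simple nearest-centroid classifier learning an A to B mapping on synthetic 2D points. The "easy" task stands in for car detection (a simple concept with lots of data); the "hard" task stands in for gesture understanding (many overlapping categories with only a handful of examples each). All the data, class counts, and names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(means, n_per_class, std=1.0):
    """Sample n_per_class 2D points around each class mean."""
    X = np.vstack([rng.normal(m, std, size=(n_per_class, 2)) for m in means])
    y = np.repeat(np.arange(len(means)), n_per_class)
    return X, y

def fit_centroids(X, y):
    """'Train' by averaging the examples of each class."""
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict(centroids, X):
    """Assign each point to its nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Simple concept, lots of data: two well-separated classes ("car" vs "no car").
easy_means = [(0, 0), (4, 4)]
Xtr, ytr = make_data(easy_means, 5000)
Xte, yte = make_data(easy_means, 1000)
acc_easy = (predict(fit_centroids(Xtr, ytr), Xte) == yte).mean()

# Complex concept, little data: ten heavily overlapping classes
# ("gesture intents"), with only 3 training examples per class.
angles = np.linspace(0, 2 * np.pi, 10, endpoint=False)
hard_means = [(1.5 * np.cos(a), 1.5 * np.sin(a)) for a in angles]
Xtr2, ytr2 = make_data(hard_means, 3)
Xte2, yte2 = make_data(hard_means, 200)
acc_hard = (predict(fit_centroids(Xtr2, ytr2), Xte2) == yte2).mean()

print(f"easy task accuracy: {acc_easy:.3f}")
print(f"hard task accuracy: {acc_hard:.3f}")
```

With abundant data the detection-like task is nearly trivial, while the gesture-like task stays far from reliable: the same learner, given many overlapping categories and almost no examples, cannot reach anything like safety-critical accuracy.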
But very few self-driving car teams are trying to count on an AI system to recognize a huge diversity of human gestures and relying just on that to drive safely around people.

Let's look at one more example. Say you want to build an AI system to look at X-ray images and diagnose pneumonia. So, all of these are chest X-rays. The input A could be the X-ray image, and the output B could be the diagnosis: does this patient have pneumonia or not? That's something AI can do. Something that AI cannot do is to diagnose pneumonia from just 10 images and a medical textbook chapter explaining pneumonia. A human can look at a small set of images, maybe just a few dozen, read a few paragraphs from a medical textbook, and start to get a sense of how to make this diagnosis. But we actually don't know, given a medical textbook, what A is and what B is, or how to really pose this as an AI problem and write a piece of software to solve it, if all you have is 10 images and a few paragraphs of text that explain what pneumonia in a chest X-ray looks like. Whereas a young medical doctor might learn quite well from reading a medical textbook and looking at maybe dozens of images, an AI system isn't really able to do that today.

To summarize, here are some of the strengths and weaknesses of machine learning. Machine learning tends to work well when you're trying to learn a simple concept, such as something you could do with less than a second of mental thought, and when there's lots of data available. Machine learning tends to work poorly when you're trying to learn a complex concept from small amounts of data. A second, underappreciated weakness of AI is that it tends to do poorly when asked to perform on new types of data that are different from the data it has seen in your data set. Let me explain with an example. Say you built a supervised learning system that learns an A to B mapping to diagnose pneumonia from images like these.
These are pretty high-quality chest X-ray images. But now let's say you take this AI system and apply it at a different hospital or medical center, where maybe the X-ray technician somehow, strangely, had the patients always lie at an angle, or where sometimes there are defects in the images: I'm not sure if you can see them, but there are these small structures in the image, which are other objects lying on top of the patients. If the AI system has learned from images like those on the left, maybe taken from a high-quality medical center, and you take this AI system and apply it to a different medical center that generates images like those on the right, then its performance will be quite poor. A good AI team would be able to ameliorate, or reduce, some of these problems, but doing so is not that easy. This is one of the areas where AI is actually much weaker than humans. If a human has learned from images like those on the left, they're much more likely to be able to adapt to images like those on the right, since they'll figure out that the patient is just lying at an angle. An AI system, though, can be much less robust than a human doctor at generalizing, or figuring out what to do with, new types of data like these.

I hope these examples are helping you hone your intuitions about what AI can and cannot do. In case the boundary between what it can and cannot do still seems fuzzy to you, don't worry. That's completely normal, completely okay. In fact, even today I still can't look at a project and immediately tell whether it's feasible or not. I often still need weeks, or a small number of weeks, of technical diligence before forming a strong conviction about whether something is feasible. But I hope these examples can at least help you start imagining some things in your company that might be feasible and might be worth exploring more. The next two videos after this are optional; they're a non-technical description of what neural networks and deep learning are.
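The two-hospital example can be sketched in a few lines of toy code. This is an illustration I'm adding, not anything from a real diagnostic system: synthetic 2D points stand in for X-ray images, a nearest-centroid rule stands in for the learned A to B model, and "Hospital B" is simulated by systematically offsetting every input, the way a consistent change in imaging procedure might.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample(mean, n):
    """Sample n 2D points around a class mean (toy stand-in for images)."""
    return rng.normal(mean, 1.0, size=(n, 2))

# "Hospital A" training data: healthy (label 0) and pneumonia (label 1).
X_train = np.vstack([sample((0, 0), 1000), sample((3, 3), 1000)])
y_train = np.repeat([0, 1], 1000)

# Fit one centroid per class: a minimal stand-in for a learned A -> B model.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Test data drawn from the same distribution as training ("Hospital A").
X_test = np.vstack([sample((0, 0), 500), sample((3, 3), 500)])
y_test = np.repeat([0, 1], 500)
acc_same = (predict(X_test) == y_test).mean()

# "Hospital B": identical labels, but every input is systematically shifted.
X_shifted = X_test + np.array([5.0, 5.0])
acc_shifted = (predict(X_shifted) == y_test).mean()

print(f"accuracy on data like the training set: {acc_same:.3f}")
print(f"accuracy on shifted data:               {acc_shifted:.3f}")
```

The model does well on data like what it trained on and falls to roughly chance on the shifted data, even though a human looking at the shifted points could easily spot that everything has simply moved. Real mitigation (collecting data from the new site, normalizing inputs) takes deliberate engineering effort, which is the point of the example above.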
Please feel free to watch those. Then next week, we'll go much more deeply into the process of building an AI project. I look forward to seeing you next week.