AI is having a huge impact on society and on many people's lives. So, for all of us to make good decisions, it is important that we have a realistic view of AI and be neither too optimistic nor too pessimistic. Here's what I mean. Did you ever read the story of Goldilocks and the three bears, maybe when you were a kid? Part of the story was that a bowl of porridge should be neither too hot nor too cold, and a bed should be neither too firm nor too soft. I think we need a similar Goldilocks rule for AI, where we are neither too optimistic nor too pessimistic about what AI technology can or cannot do.

For example, we should not be too optimistic about AI technologies. An unrealistic view of AI may make people think that sentience, superintelligence, or artificial general intelligence is coming soon, and that we should invest a lot of resources into defending against evil AI killer robots. There's nothing wrong with doing a few studies to think about what the distant future could look like if AI becomes sentient someday; doing basic research on that is really not a problem. But we shouldn't over-allocate resources to defending against a danger that realistically will not come for a long time, maybe many decades, maybe many hundreds of years. I think unnecessary fears about sentience, superintelligence, and artificial general intelligence are distracting people from the real issues, and they are also causing unnecessary fears about AI in parts of society.

On the flip side, we don't want to be too pessimistic about AI either. The extreme pessimist view is that AI cannot do everything, that there are some things AI cannot do, and so another AI winter is coming. The term "AI winter" refers to a couple of episodes in history when AI had been over-hyped, and when people figured out that AI couldn't do everything they thought it would, it resulted in a loss of faith and a decrease in investment in AI. One difference between AI now and the earlier winters of a few decades ago is that AI today is creating tremendous economic value. We also see a surprisingly clear path for it to continue creating even more value in multiple industries. The combination of these two things means that AI will continue to grow for the foreseeable future, even though it is also true that AI cannot do everything.

Rather than being too optimistic or too pessimistic, as in the story of Goldilocks, something in between is just right. What we realize now is that AI can't do everything; in fact, there's a lot it cannot do, but it will transform industries and society. When you speak with friends about AI, I hope you also tell them about this Goldilocks rule for AI, so that they too can have a more realistic view of AI.

There are many limitations of AI. You have already seen some of the performance limitations earlier. For example, given a small amount of data, AI probably cannot fully automate a call center and give very flexible responses to whatever customers are emailing you about. But AI has other limitations as well. One limitation is that explainability is hard, and many high-performing AI systems are black boxes: the system works very well, but the AI cannot explain why it does what it does. Here's an example. Let's say you have an AI system look at this X-ray image to diagnose whether anything is wrong with the patient.

In this example, which is a real example, the AI system says it thinks the patient has a right-sided pneumothorax, which means that their right lung is collapsed. But how do we know if the AI is right, and how do you know whether you should trust the AI system's diagnosis or not? There has been a lot of work on making AI systems explain themselves. In this example, the heat map is the AI telling us what parts of the image it is looking at in order to make its diagnosis. Because it is clearly basing its diagnosis on the right lung, and in fact on some key features of the right lung, seeing this image may give us more confidence that the AI is making a reasonable diagnosis.
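The video doesn't say how such a heat map is produced, but one common, model-agnostic technique is occlusion sensitivity: cover one small region of the image at a time and measure how much the model's confidence drops. Below is a minimal sketch in Python, assuming a hypothetical `predict_fn` callable (your classifier) and a `target_class` index (the diagnosis label of interest); these names are illustrative, not part of any specific library.

```python
import numpy as np

def occlusion_heatmap(image, predict_fn, target_class, patch=16, stride=8):
    """Occlusion sensitivity: slide a neutral patch over the image and
    record how much the model's confidence in `target_class` drops when
    each region is hidden. Large drops mark regions the model relies on.

    `predict_fn` is a hypothetical stand-in: it takes an image array and
    returns a vector of class probabilities."""
    h, w = image.shape[:2]
    baseline = predict_fn(image)[target_class]  # confidence on the intact image
    heat = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # hide one region
            drop = baseline - predict_fn(occluded)[target_class]
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)  # average overlapping patch scores
```

Overlaying the resulting heat map on the X-ray shows whether the model is attending to the right lung, as in the pneumothorax example above, or to something irrelevant elsewhere in the image.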
Now, to be fair, humans are also not very good at explaining how we make decisions ourselves. For example, you've already seen this coffee mug in last week's videos, but how do you know it's a coffee mug? How does a human look at this and say, "That's a coffee mug"? There are some things you can point to, like there's room for liquid and it has a handle, but we humans are not very good at explaining how we can look at this object and decide what it is. Still, because AI is a relatively new technology, the lack of explainability is sometimes a barrier to its acceptance. Also, if an AI system isn't working, its ability to explain itself would help us figure out how to go in and make the system work better. So explainability is one of the major open research areas, and a lot of researchers are working on it. What I see in practice is that when an AI team wants to deploy something, that team must often be able to come up with an explanation that is good enough to enable the system to work and be deployed. So explainability is hard; it's often not impossible, but we do need much better tools to help AI systems explain themselves.

AI has some other serious limitations. As a society, we do not want to discriminate against individuals based on their gender or their ethnicity; we want people to be treated fairly. But when AI systems are fed data that doesn't reflect these values, an AI can become biased, or can learn to discriminate against certain people. The AI community is working hard and making good progress on these issues, but we're far from done, and there's still a lot of work to do. You'll learn more about biased AI in the next video, along with some ideas for making sure that the AI systems you work with are less biased.

Finally, many AI systems are making economically important decisions, and some AI systems are open to adversarial attacks, where someone else is deliberately out to fool your AI system. Depending on your application, it may be important to make sure that you are not open to these types of attacks; a small sketch of one classic attack appears at the end of this section. The issues of AI and discrimination or bias, as well as the issue of adversarial attacks on AI, are important both to you, as a potential builder and user of AI, and to society. In the next video, let's dive more deeply into the issue of AI and bias.
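To make the idea of an adversarial attack concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one classic and well-documented attack; the video itself doesn't name a specific technique, so this is an illustrative choice. It assumes a hypothetical PyTorch image classifier `model` and a correctly labeled input batch `x`.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.01):
    """Fast Gradient Sign Method: nudge every input pixel a tiny step in
    the direction that increases the model's loss. The perturbed image
    often looks unchanged to a person but can flip the model's prediction.

    `model` is a hypothetical classifier returning logits; `x` is a batch
    of images with pixel values in [0, 1]; `label` holds the true classes."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()  # gradient of the loss with respect to the pixels
    x_adv = x + epsilon * x.grad.sign()    # one small step per pixel
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values valid
```

Defenses such as adversarial training essentially feed examples like `x_adv` back into training so the model learns to resist them; whether and how to defend depends on your application.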