So let me begin by saying I work in artificial intelligence, which is a discipline that studies how to make computers behave intelligently: make smart decisions, learn from experience. And if you think about human intelligence, and you think about our civilization, everything we have that's worthwhile comes from our intelligence. It doesn't come from our long sharp teeth or our big scary muscles or anything like that. It comes from the fact that we're intelligent. So if you can create tools that amplify our intelligence, then you can dramatically amplify what civilization can do.

My own work has been concerned with some fundamental questions about intelligence. How do we define intelligence? How do we build computer algorithms that exhibit that kind of intelligence? And that's led me into lots of different areas. I've worked on robotics, on language understanding, on computer vision, and on machine learning, and I've also strayed into philosophy, into some basic questions, because this notion of rationality is a very slippery one. There's an economic notion of rationality, which means making decisions that are optimal with respect to some objective function. But then there's the real notion of intelligence, where humans, and in fact computers as well, have to cope with the fact that optimal decisions in the real world are completely impossible to make. The real world is so complicated that it's totally unreasonable to expect perfectly rational decisions, which leads you to a question: if you can't do the right thing, then what are you supposed to do? How are you supposed to be intelligent when doing the right thing is impossible? So I've tried to address those kinds of questions as well.

What I found is that a lot of what's interesting about our minds, and about the structure of AI systems now and especially in the future, comes down to computational limitations. Our brains have finite speed, finite memory capacity, and so on, and the same is true for computers; it's those computational limitations that really dictate the structure of how our minds work. A lot of what our mind is doing is specifically designed to mitigate these limitations and make it possible for us to make decisions, even though the world is in fact way more complicated than we are.

There are three major concerns. In the near term, the biggest concern we have in the area of AI and robotics is the question of autonomous weapons. Many countries around the world are moving very rapidly towards robots that can decide by themselves to kill people. And as you can imagine, this might not be the best idea.

The second question is about whether robots are going to take away jobs. Some studies suggest that within a few years, up to half of all jobs in the world could be done by robots. What would that do to the rest of the people? What would it do to the distribution of wealth and to inequality? At the moment, people don't have any answers for that. If you ask economists, they say: buy more unemployment insurance. Not a very positive solution.

And then the third question is a much longer-term question: if machines eventually become more intelligent than people, how is the human race going to relate to those kinds of machines? Are we going to be able to ensure that the machines are 100% on our side, that the machines' only goal is to help humans realize their dreams and their desires?
Or could it be the case, perhaps by accident, that humans and machines come into conflict, because the way the machines have been designed leads them to behave in ways that we don't like? That immediately creates the potential for conflict. And as you might guess from looking at the history of chess, when you're in conflict with machines on the chessboard, you're probably going to lose. We wouldn't want that to be the case when we're in conflict with machines over the world.

So this is a very important question, but it's a long-term question, and we'll have time, we hope, to answer it, which means making sure that as machines become more capable, they remain fully aligned with the objectives of the human race. And it's not just a technical question. Part of the problem of having machines be fully aligned with the values of the human race is that we don't know what the values of the human race are. And that's a question for all of us to figure out.

As for the last question: definitely, everyone should learn to program, because it's fun and it's really good mental exercise, regardless of whether you ever want to work in that area.
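Since the talk leans on the contrast between economic rationality (examining every option and taking the optimum of some objective function) and bounded rationality (deciding well under finite computation), here is a minimal toy sketch of that contrast in Python. Everything in it, the action space, the utility function, the sampling budget, is invented purely for illustration and is not anything from the talk itself.

```python
import random

# Hypothetical toy setup: an agent must pick one of many actions, and
# evaluating an action's true utility is assumed to be expensive.
ACTIONS = range(10_000)

def utility(action):
    # Stand-in for a costly evaluation of an action's real-world outcome;
    # here the (hidden) best action is 7_321.
    return -(action - 7_321) ** 2

def perfectly_rational_choice(actions):
    # The economic notion of rationality: examine every option and take
    # the optimum. Infeasible when the option space is as big as the world.
    return max(actions, key=utility)

def boundedly_rational_choice(actions, budget=50):
    # A bounded agent spends a fixed computational budget sampling options
    # and commits to the best one it found: good enough, not optimal.
    sampled = random.sample(list(actions), budget)
    return max(sampled, key=utility)

if __name__ == "__main__":
    print("optimal choice:", perfectly_rational_choice(ACTIONS))
    print("bounded choice:", boundedly_rational_choice(ACTIONS))
```

The bounded agent's answer is usually close to, but rarely exactly, the optimum; strategies of this general flavor (sampling, pruning, stopping when the budget runs out) are one simple way to picture the kind of limitation-mitigating machinery the talk says our minds, and AI systems, must rely on.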