I'm Christian Shelton, a professor at UC Riverside. I work in artificial intelligence and, specifically, in machine learning. Right now I'd like to talk a little bit about artificial intelligence: a little about its history, what it is, and maybe give you some examples of the types of problems people in artificial intelligence work on or think about.

So I think the first place to start is: what is artificial intelligence? Two words, artificial and intelligent. By artificial, we're talking about something that's not naturally occurring, something created by humans. Typically, in artificial intelligence, we're talking about a machine or a computer. You could imagine re-engineering biological systems, but generally we're considering computers. Intelligent is actually the more difficult word to define here. Often when we say intelligence, we mean human-like; we define it by example. We believe that we're intelligent, and we define intelligence as being like us, but that's a little difficult. So there's a question, of course, of whether or not this has to be intelligent in the same way that a human is intelligent, and that goes back to how you define intelligence.

For example, if I asked whether there are artificial creations that fly, most people would say yes and point to airplanes or helicopters, things that human beings have created that fly. However, if I asked whether there are artificial creations that swim, the question becomes more difficult to answer. Some people will say yes and point to a submarine or a boat, and others will say no: they may go through the water, but they don't swim. By the word swim, you're really talking about a biological process, a particular way of achieving the goal of moving through the water. The same question arises when we talk about intelligence. Do we mean intelligence in exactly the same way humans are intelligent, or do we mean something a little more abstract? There isn't a clear definition in artificial intelligence of which one we mean.

So it's a fairly broad field, and it's related to a whole bunch of other fields: mathematics, economics, psychology, neuroscience, control theory, philosophy, linguistics, computer engineering. All of these have a role to play in what it means to be intelligent and what it means to design an artificial system.

Intelligent machines have been discussed for at least thousands of years. The Talmud talks about a golem, which is a sort of precursor to a robot. Homer's Iliad has depictions of robot-like creatures. Thomas Hobbes' Leviathan discussed this sort of thing. Leonardo da Vinci tried to create such things. The term robot itself comes from a play in which robots are depicted. These are all well before the time of computers, or what you would normally consider artificial intelligence. So for about as long as people have had thoughts about intelligence, people have had thoughts about artificial intelligence.

The history of AI, though, is usually dated back to about 1956, with a few events that occurred before then. In particular, in 1943, McCulloch and Pitts formed a model of a neuron. You can think of this as one route to artificial intelligence: if I could form a model of what happens in a single neuron, then I could put many of them together, create a model of the human brain, and that would be intelligence.
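To make that idea a bit more concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold unit. This is my own illustration rather than anything from the lecture, and the function names are made up for the example: the unit sums its binary inputs and fires only if the sum reaches a threshold, and such units can be wired into simple logic, which is what made the 1943 model exciting.

```python
# A minimal sketch of a McCulloch-Pitts-style threshold neuron (illustrative only):
# binary inputs, fixed weights, and a firing threshold.

def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Single units can implement simple logic gates; networks of them can, in
# principle, compute more complex logical functions.
AND = lambda a, b: mcculloch_pitts_unit([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcculloch_pitts_unit([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```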
Shortly thereafter, in 1950, the Turing test came out, and this was a notion of what intelligence might be, if we're going to talk about intelligence at all. The Turing test is a test of whether or not a machine is intelligent, because intelligence itself is a little hard to define. The test goes something like this. Behind a screen there is either a computer or a human, and the test administrator sits on the other side of that screen. At least in the 1950s, the administrator would type on a keyboard, say through a teletype, and communicate with whatever was on the other side of the screen, which would type back. We don't really have teletypes anymore, so think of it as an online chat. The test administrator is attempting to determine, by asking questions and analyzing the responses, whether the thing on the other side is a human or not. And if the administrator can't tell, if they basically have to guess at random, getting it right 50% of the time and wrong 50% of the time, then the machine sitting on the other side is considered to have passed the Turing test. So this is a test of whether a machine has achieved artificial intelligence: if you put the machine behind the screen, or you put a human behind the screen, someone interacting with it can't tell the difference. It was very much based on language; you might ask things like, can you compose a poem for me, and see what you get back. There are limited versions of the Turing test that have been applied to machines, but no machine has really passed the full version.

The true start of the field, though, maybe comes from 1956. In 1956, there was a conference at Dartmouth titled Artificial Intelligence. The picture I have here, taken 50 years later, shows five of the key members who were still alive then, and that conference sort of kicked off this work. In fact, Newell, Simon, and Shaw brought with them the Logic Theorist, a program which later actually co-authored a paper with them proving a theorem. The paper was denied because the journal didn't think that a machine could take authorship of a paper, but it had helped them prove something, and this was a big, big thing. It kicked off the field of artificial intelligence, at least under that name.

In the 50-plus years since then, maybe 60 years since then, things have progressed. In the 1960s, we saw great successes. What's shown up here is an example of some synthetic neurons, a neural network. Down here you see a blocks world: basically, we were teaching machines, if the world looks like this, could you reason about how to pick up blocks and put them on top of each other in order to achieve particular configurations, or answer other questions about those blocks? (A small sketch of this kind of blocks-world reasoning appears below.) Those were great early successes, and people thought, my goodness, we are close, we will be there very shortly. What I'm showing over here is Shakey the robot, which came out of the 1960s, when people thought, well, we should really put all these things together. Shakey is possibly the most famous real robot around. It could move around, decide how to push boxes out of the way, and achieve various goals in a fairly simple set of rooms. It took a long time, and it had to communicate over what passed for Wi-Fi then, so it was very slow. But it was an exciting advance in which people put a lot of these different technologies together.
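As a concrete illustration of the kind of reasoning those blocks-world systems did, here is a toy sketch. The representation and the naive "put everything on the table, then build the goal stack" strategy are my own simplifications for illustration, not the actual 1960s systems.

```python
# A toy blocks-world sketch (hypothetical example): the world is a list of stacks
# of blocks, and the "planner" produces a list of moves that reaches a goal stack.

def naive_plan(state, goal):
    """Return a list of (block, destination) moves that turns `state` into one
    stack matching `goal` (listed bottom block first)."""
    plan = []
    # Step 1: unstack everything onto the table (each block ends up by itself).
    for stack in state:
        for block in reversed(stack[1:]):        # everything above the bottom block
            plan.append((block, "table"))
    # Step 2: build the goal stack from the bottom up, one block at a time.
    for below, above in zip(goal, goal[1:]):
        plan.append((above, below))
    return plan

# Example: A is on B, C is alone; we want the stack C-B-A (C on the table, A on top).
print(naive_plan([["B", "A"], ["C"]], goal=["C", "B", "A"]))
# [('A', 'table'), ('B', 'C'), ('A', 'B')]
```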
The 1970s are what's generally regarded as the AI winter. The large successes from the 60s did not blossom into full successes later. For example, in the 1960s there was funding for translation from Russian to English, which was a very important problem at the time. It looked like we were very close, because simple things could be translated fairly well. But the more complex things proved difficult. It's one thing to be able to translate each word individually, or even reorder sentences slightly; it's quite another to get the nuances, to produce something that looks fluent in the other language. That requires a much higher level of artificial intelligence, and it wasn't apparent in the 1960s just how big a jump that would require. So when those early successes didn't lead to the anticipated further successes, funding for the field dried up and people were very discouraged. This is generally considered the AI winter.

In the 1980s and 90s, we saw a resurgence in AI, primarily around neural networks and machine learning. We saw things like the No Hands Across America project, in which a car drove itself across the country and appeared on The Today Show; that is, of course, the standard for whether or not you've reached fame, can you appear on The Today Show. We saw neural networks like these, and big special-purpose computers, like this one designed to solve the chess problem, and a resurgence of interest in AI.

And today, or at least since 2000, we've seen basically the incorporation of more data. The one thing that was perhaps missing in the 80s and 90s was a lot of data. You can argue that a child gets to be intelligent by observing things for many hours, every day, continuously; that's a lot of incoming data, and maybe this was the missing component. So things like Siri, or other agents you can talk with a little bit, are really driven by large databases that allow them to find regularities in language and things like that. IBM Watson is a search through large databases. AlphaGo is a recent success in Go playing; that one is mainly neural networks, not data. We have self-driving cars, to a certain degree. And these all grew out of very large data projects.