[MUSIC] Dear Professor Chatila and dear Professor Sørensen, you have volunteered to discuss data literacy in the light of artificial intelligence: what it is about, and what its risks and possibilities are. Thank you for participating. --You are welcome. --Thank you, thank you for having us. --Now, many people agree that AI is revolutionizing many professions and changing the way we work. We can look at, for instance, all the assistant technologies on smartphones, sound and image recognition systems, and natural language processing systems. These are some of the examples. Do you think they are good examples of what AI can and cannot do, or are there better or other examples? --Well, these examples are probably the best known, the most apparent to the public, but actually wherever we have data, AI systems can be used. And this is why this domain is also called data science; it's the new name for statistical data processing. And it is based on so-called machine learning techniques, which enable us to build models out of data by means of statistical processing, generally using neural nets as classifiers. These models can then be used to make predictions and to provide interpretations of new data that wasn't known before. So wherever you have data, you can apply this technology, but artificial intelligence is not just that. It's also other means of processing data and knowledge: representing knowledge and making inferences, for example logical or probabilistic inference. And let's not forget physical machines like robots, for example, which are not just mechanical devices but systems that also embody artificial intelligence algorithms. And then you can find AI techniques in many, many domains, in almost any industrial or service sector: transportation, for example, healthcare, manufacturing. AI systems are also used, for example, for recruitment or for insurance and finance computations.
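The learn-from-data loop described here, building a model from examples by statistical fitting and then predicting on new data, can be sketched in a few lines. This is a minimal illustration only: a single artificial neuron (logistic regression) trained by gradient descent on invented toy numbers, not any particular system the speakers mention.

```python
# Minimal "learn a model from data, then predict on new data" sketch:
# one artificial neuron (logistic regression) trained by gradient descent.
# All data and parameters below are invented for illustration.
import math

def train(xs, ys, lr=0.5, epochs=2000):
    """Fit weight and bias so that sigmoid(w*x + b) approximates the labels."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w -= lr * (p - y) * x                      # gradient step on the loss
            b -= lr * (p - y)
    return w, b

def predict(w, b, x):
    """Classify a new, previously unseen data point."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0

# Toy training data: inputs below 2 are class 0, above are class 1.
xs = [0.5, 1.0, 1.5, 2.5, 3.0, 3.5]
ys = [0, 0, 0, 1, 1, 1]
w, b = train(xs, ys)
print(predict(w, b, 1.0), predict(w, b, 3.2))
```

The model never sees the inputs 1.0 or 3.2 during training; it classifies them purely from the statistical regularity it extracted from the six examples, which is the prediction-on-new-data idea in miniature.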
So AI based on data is really applicable in many, many domains. And as you said, image processing and language processing are of course among the most apparent technologies. --Professor Sørensen, what do you think are the most paradigmatic AI systems out there right now? --That depends a bit on what we take AI to be. The definition that Raja has just given is very broad, and you could perhaps encapsulate much of it as automation. There are a lot of places where human actions are automated by machines, be they computers, robots, or even more tacit systems. And I think the benefit of such systems is that we can outsource human cognition to autonomous systems, and of course we can do that in many domains. We can do it in the public domain, but we can also do it in science, and I think perhaps the most promising areas of AI, if you want to call that AI, would be in research and knowledge production. --Many seem to think that such AI machines are able to think in much the same way as humans do. But is that really the case? If we look back 70 years, one of the pioneers of AI, Alan Turing, the English mathematician, said that talking about machines thinking is so nonsensical that it doesn't make sense to talk about it at all. But at the same time, he said that in the future most human intellectual work could be done by machines or even superseded by them. Henrik, what's your view, doesn't that mean that it is thinking? --Well, as you said, Turing was a bit ambiguous on what thinking meant. And he was of course also responsible for at least one paradigm of what we should take thinking to be. He sat down and thought very hard about what human thinking is like and reduced it to a very small number of four or five basic operations that he thought any human who thinks would go through.
And he took thinking, he was a mathematician, to be about computation, especially the computation of numbers. So a lot of the view that you presented, that AI would be able to think, would in the first decades be mirrored as AI being able to perform logical inferences and do mathematics. We have seen computers beat chess masters. We have seen a few examples of computers proving mathematical theorems. We have seen a few examples of computers composing music, mainly through machine learning. But we certainly haven't seen the whole study of the human psyche reduced to the study of computers. So in a sense, these early visions for what AI could be, and the early reservations, boil down to the fact that over the last 60-70 years, perhaps one of the things AI has taught us the most is not about technologies but about the human condition: what does it mean for humans to think, for humans to know things, for humans to produce knowledge, and what is the role of machines and agency? --Maybe I could ask you, Professor Chatila. Now, of course people are excited about the many tools that AI systems can produce. But on the conceptual level of thinking and consciousness, how far have we moved since Turing, in your mind? --Well, I would say not much actually, because what Turing put forward, as Henrik said, is computation. A computer is basically the device that implements the Turing machine, the theoretical model of computation, which executes algorithms: step-by-step transformations of data into new data. One important thing that has been put forward by philosophers for a long time, John Searle for example, is that the computer, the Turing machine, doesn't actually understand what it's doing. It's manipulating data, but it doesn't have any semantics; it doesn't understand what this data means in the real world.
If I say cup, for example, this evokes a lot of things for you; it's actually embedded in your interaction with the cup. But a machine doesn't interact with a cup; even a robot doesn't understand what the cup is. And this lack of semantics is really key to understanding the difference between our intelligence and that of computers and AI. So it's not exactly the same kind of intelligence that we are speaking about: they don't play chess the way we play, they don't play Go the way we play. Even learning is not exactly the way we learn, even if machine learning is inspired by neurons in the brain. --Let me go back to something you mentioned, Professor Sørensen. You said that one of the most promising areas is AI for research itself. Could you explain a little bit what that means? I would think that it only means better calculations, faster calculations. But would you say that it means something more, that science changes fundamentally when we use AI systems? --I think that's an area of controversy these days, and there's something to your argument. Computers, yes, they speed up computation; that might be interpreted as saying that if we just had more humans, they would be able to achieve the same thing as a computer. But there is a sort of qualitative barrier: some of these huge models that we train today are not just faster than humans, they are beyond the reach of humans. So in a sense it's not just improving human cognition, it's adding something that we would not have been able to do without this automation being part of it. When we have computers preprocess images from the huge physical apparatus that we use to do experiments, we can sample much more from our experiments and have a computer sort through that in a way that we would never be able to do with human powers alone. And so at least one way to talk about whether science is changing would be to acknowledge that.
There's this qualitative difference between what humans could achieve and what machines add to that. But of course there might also be a much more radical interpretation of your question: perhaps people have seen the coming of machine learning as a different paradigm for doing science. That we can learn just from the data, without any theoretical models or the usual scientific process that we have been used to thinking about for centuries. That we can somehow have raw data from which we can draw inferences using these computer models. And it's a huge discussion in the philosophy of science whether that's actually a good argument, and whether and to what extent it actually holds, because we have come to learn that there is no such thing as raw data. Every piece of data comes from somewhere, is picked up by something, and is filtered through a cleaning process before it enters into these models. So there's a human analytical aspect involved even in machine learning models as we currently know them. --Yeah, that's also the motto of this MOOC, right? That data is not neutral. But what about the interaction, or the collaboration, between AI systems and humans? Professor Chatila, that's something sometimes called augmentation, right? It's a little bit like in the old days when chess computers were introduced. Garry Kasparov, the former world chess champion, called it advanced chess: he thought that the best chess could only be obtained if you combine the forces and skills of humans and computers. Is that the way to go now? --Well, of course computers and AI systems augment our abilities, and therefore I still call them tools. Very, very advanced, very powerful tools, but like any other tools they augment our abilities and enable us to do more, and maybe to do things differently.
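The point that data is never "raw" can be made concrete with a tiny cleaning step. Even this minimal, invented example embeds three human decisions before any model sees the data: what counts as missing, what counts as plausible, and what type the values should have.

```python
# Sketch of the "no such thing as raw data" point: even a minimal cleaning
# step encodes human, analytical decisions. Records and thresholds invented.
def clean(records, max_plausible=120):
    """Keep only the records a human analyst has decided to trust."""
    cleaned = []
    for r in records:
        if r is None:                    # decision 1: discard missing readings
            continue
        if not 0 <= r <= max_plausible:  # decision 2: define a "plausible" range
            continue
        cleaned.append(float(r))         # decision 3: coerce to a common type
    return cleaned

# Sensor-like readings with a gap (None) and an implausible spike (999).
readings = [21, 22, None, 23, 999, 22]
print(clean(readings))  # the model downstream never sees None or 999
```

Whether 999 is a glitch to drop or a genuine extreme to keep is exactly the kind of interpretive choice that makes the resulting dataset an analytical product rather than a neutral recording.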
I would like to get back a little bit to the question that has been discussed about scientific research, because there are actually two paradigms. But in the end, both paradigms are useful. In more classical scientific research, you have observations of reality. Not necessarily many, many observations, but reproducible ones, and this is very important. Then you try to formulate a theory capturing the essence of the phenomenon. This is based on fundamental principles of physics and previous theories, but the theory expresses something very important: it expresses causal relationships between its components, and therefore, based on this causal theory, you can make predictions about something you didn't observe yet. This enables you to verify that the theory is actually true, because of this reproducibility, and then exceptions to the theory will lead to a new theory and a better understanding of the phenomenon. This is the scientific method. Whereas with AI-based systems, or data-based research, you collect a large amount of data, and when you have a large amount of data, it's difficult for us humans to make sense of it. So we use statistical models: no theory, no causal links, just statistical models, and these help us to make sense of it, to actually understand. But if we use them on their own, the predictions are based on the model, not on causal reasoning. So exceptions are not necessarily noticed; they are embedded in the whole model, and you don't really grasp the essence of the phenomenon, you don't have this causality. So it's really important to say that we have two things, and one should feed the other. They are not the same, and this is how we are augmented. What I would like to say is that this is a fantastic tool to augment our understanding, but the algorithms don't have the same representations.
They do their processing and they provide the results for our use, our understanding, our interpretation, and the semantics lie with us; they come from our interaction with, and our understanding of, the world.
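The two paradigms contrasted above can be put side by side in a small sketch. Here the "causal theory" is Galileo's law of free fall, and the "statistical model" is a straight line fitted by least squares to a few observations; the numbers are invented for illustration. Inside the observed range the fit is serviceable, but far outside it the statistical model, having no causal structure, goes badly wrong while the law still applies.

```python
# Two paradigms side by side. All numbers invented for illustration.
# Causal model: Galileo's law d = 0.5 * g * t**2 (a theory with a mechanism).
# Statistical model: a straight line fitted to observed (t, d) pairs, with
# no notion of WHY distance grows with time.
g = 9.81  # m/s^2

def causal_distance(t):
    return 0.5 * g * t ** 2

# "Observations" generated from the law over a narrow range of times.
ts = [1.0, 1.5, 2.0, 2.5, 3.0]
ds = [causal_distance(t) for t in ts]

# Ordinary least-squares line fit (closed form, no libraries needed).
n = len(ts)
mt, md = sum(ts) / n, sum(ds) / n
slope = sum((t - mt) * (d - md) for t, d in zip(ts, ds)) / sum((t - mt) ** 2 for t in ts)
intercept = md - slope * mt

def statistical_distance(t):
    return slope * t + intercept

# In range (t = 2.0 s) the fitted line is close to the law; extrapolating
# to t = 10.0 s, the line has no causality to fall back on and diverges.
for t in (2.0, 10.0):
    print(t, round(causal_distance(t), 2), round(statistical_distance(t), 2))
```

The exceptions-drive-theory-revision point shows up here too: the systematic residuals of the line hint that a better model (the quadratic law) exists, which is one way the statistical paradigm can feed the causal one.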