[MUSIC] [APPLAUSE] >> Hi, my name is Philip Nickel. I'm an Assistant Professor in the Department of Philosophy and Ethics here at Eindhoven University of Technology, and a Senior Researcher at the 4TU Centre for Ethics and Technology. Today I'm going to talk to you about the ethics of technological risks. My own research is about concepts like risk and trust that mediate the relationship between humans and technologies, so that's my interest in this topic, and I hope to share a little of it with you today.

Let me start with a case. Normally a company would be very happy to see that its technology is hot in China. But when the headline "Galaxy Note 7 Catches Fire in China" appeared, Samsung, the company that makes the Galaxy Note 7, was not happy at all, because their phone was literally catching fire and exploding in China. That's not what you want. The interest of this case is partly ethical, so I'll talk a little about how we analyze a case like this in terms of ethics. Looking just at the events in the news, you might think there's no particular ethical issue here, so I'll try to explain how ethics gets into the story.

It's wonderful to study risk. The nice thing about it is that when something terrible happens, like a flood, a phone that explodes, or a data leak, there's a silver lining: you can study the risk, learn something from it, and hope to make the technology better, fixing things in such a way that the accident or incident doesn't happen again. That's the rewarding thing about studying risks.

When we do talk about risks, though, there are different levels of causation involved, and some of these relate more to the ethics than others. Take the Samsung case. In the first instance, there's a phone that explodes. The reason the phone exploded is that the designers were putting a lot of pressure on the battery design to make it smaller and smaller. As a result, they didn't put in the normal kinds of separators between the parts of the battery that they otherwise would have, and that is why the phone would catch fire when it came under any kind of pressure.

At a human level, the explanation for this type of incident is a bit different: the designers of the phone were under a lot of pressure to use existing technologies and to optimize them, squeezing them into an even smaller and more appealing design. So at the human level, we account for it in a slightly different way.

And at the organizational level, we see a company with a certain kind of hierarchical organization, under some political pressure because it is so closely aligned with the fate of South Korea, trying to put as much pressure on existing technology as possible without really investing in long-term innovations. Or at least, that is how people are beginning to see the organizational explanation of this particular event.

So as you can see, there are different levels on which we can explain this. Now, where does ethics come in? Well, a lot of people are impacted when something like this happens. As you can see here, airlines actually had to ban Samsung phones from being carried onto planes, so it was a great inconvenience to a lot of people.
At a strategic level, you can see that this would be very bad for Samsung, and so you might think there's no ethical issue here, because it's simply bad business for Samsung. But there is almost always an ethical side to a case like this when it comes to the responsibility of the company. So it's useful to have ethical principles with which to analyze it, so that, if necessary, we can point to the responsible party, or indicate what went wrong in a way that relates to our interests as a society, or our interests as users of this technology.

How do we do that? Actually, it's quite difficult to formulate principles of ethics for risk. The reason is that ethics emerged at a particular moment in history. It began to be formulated by famous thinkers such as Immanuel Kant and John Stuart Mill, the founders of the important schools of ethical thought, deontology and consequentialism, which you may be familiar with from earlier lectures in this course. But their principles were formulated in a time and place where probabilistic thinking about possibilities hadn't yet emerged as a general method in the sciences. Probability emerged in the middle of the 17th century: people began to think about it in the context of analyzing games of chance, and in constructing insurance tables to insure people against unwanted events. But it stayed fairly isolated. It came to be used in economics, but it hadn't yet made its way into ethics when these foundational systems were invented.

So look at the principles of the main ethical theories. The central principle of utilitarianism, for example, says that an action is right insofar as it tends to produce happiness in the greatest number. That leaves some space open for talking about risk, but it isn't really formulated in terms of probabilistic or uncertain consequences. Similarly, the principle we know from deontological moral theory, that one should always act in such a way that your maxim could become a principle for everyone, makes no reference to probabilistic outcomes. And when we think about some of the examples commonly used to teach these theories, the wrongness of lying, for instance, there is no probabilistic element in them.

How, then, are we supposed to accommodate this? The Swedish philosopher Sven Ove Hansson has said that ethics was created in a Newtonian world, and now we're living in a quantum world where there are probabilities, and we need to somehow take that into account. How do we do that?

Take, as an example, the duty not to kill. That duty can be established and defended by traditional ethical theories. If you are driving a car and you purposely try to hit somebody, the standard ethical theories would, of course, say that was wrong. The duty you have not to kill corresponds to a right in others not to be threatened with death.

Now consider a probabilistic case. Maybe you came here in a car today. If you did, you purposely, that is, knowingly, did something risky: you knew when you got into that car that you might get into an accident and cause somebody else's death. That's something everybody who knows how to drive a car knows. So you knowingly imposed a tiny risk of death on everybody else. Do we evaluate that in the same way? If so, why? If not, why not? Is the mere possibility of that event happening enough to make your action wrong?

So what should we think? Is there some threshold over which a risk is not permissible, and under which it is permissible? That's a useful starting point. One way to formulate such a principle would be to say that a risk is permissible if it is equally distributed among people; or, for example, that it is permissible if it is voluntarily accepted by everybody who is subjected to it. These conditions come naturally out of the duty-based ethical tradition. So that's one possible answer to our question.

Now, the other major ethical theory, utilitarianism, eventually dealt with these conditions of probability through the idea of expected utility. Instead of considering one definite outcome and evaluating that outcome morally, you consider every possible outcome of the action you perform, multiply the utility of each outcome by the probability that it will occur, and then sum over all the outcomes. That sum is the expected utility of the action.
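To make that recipe concrete, here is a minimal formal sketch, in notation of my own rather than the lecturer's: write $p_i$ for the probability of outcome $o_i$, and $u(o_i)$ for the utility of that outcome. Then, for an action $a$ with possible outcomes $o_1, \dots, o_n$,

$$ \mathrm{EU}(a) \;=\; \sum_{i=1}^{n} p_i \, u(o_i). $$

On this view, an action is evaluated by comparing expected utilities across the available actions, rather than by any single actual outcome. As a purely illustrative calculation with made-up numbers: if a car trip has probability $0.999999$ of an uneventful arrival with utility $+10$, and probability $0.000001$ of a fatal accident with utility $-1000000$, then $\mathrm{EU} = 0.999999 \times 10 + 0.000001 \times (-1000000) \approx 9$, so the tiny risk is outweighed rather than ignored.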
This innovation allows consequentialism, or utilitarianism, to deal with probabilistic outcomes.

So if we go back to the case of Samsung, we can ask: did Samsung expose people to more than an equal share of risk through the use of the smartphone? They probably didn't put people at that much risk; lots of people carried their smartphones onto airplanes and the phones didn't catch fire. But what we can say is that people did not voluntarily accept that much risk from the use of the smartphone. Given the risks we normally associate with such a device, it certainly wasn't a voluntary exposure to risk. So we can say, at least initially, that what Samsung did was ethically problematic from the point of view of this duty-based approach.

Now, from the perspective of utilitarianism, the expected outcomes of the event seem to have been quite bad. Not all of the badness was experienced by individual users; some of the bad outcomes were experienced by Samsung, by the airlines, and by others. But if we add up all of those expected outcomes, we see that Samsung should have estimated that pressing the design in this way might well yield this result. And as a result, they acted on a principle that does not seem to have been right from an ethical point of view.

These are just initial answers. Ethics needs to make progress in this area, to become better at formulating principles for probabilistic outcomes. This is something that has only been developed in the last 30 or 40 years, so there is a lot of work to do here and a lot to think about. >> [APPLAUSE]