[MUSIC] The next limitation is that the footprint is not the foot, but it is so crystal clear, right? We can see it, crystal clear and in real time. It's almost like we have a crystal ball. If you go to Silicon Valley, people will try to sell you a crystal ball, and it's very tempting: a crystal ball means you can predict the future. With such granularity you can see everything, and it's very tempting to be impressed by this amount of accuracy. When we can predict things with 80, 85, even 90% accuracy, we might confuse the footprint with the foot, and usually that's fine. For example, the prediction makes a mistake, Amazon sends you the wrong ad, and you can laugh about it. But if we use the digital footprint for more serious questions, well, then we run into limitations.

Predictive policing is a big field where digital footprints are used. When police cars go on their patrols nowadays, in many cities here in California they are guided by algorithms. These algorithms for predictive policing actually tell the police where to go. They use algorithms taken from another field, in this case from earthquakes: just as a big earthquake is followed by smaller aftershocks, crimes often cluster in a similar pattern, so the earthquake algorithm turns out to be very useful there (a small sketch of this kind of self-exciting model follows at the end of this passage). A lot of data goes in: who drives where, what the bus schedules are, where people move. That gives the police suggestions to make their rounds in certain neighborhoods at certain times, and we could measure that crime got reduced with this predictive policing, with police following the recommendations of an algorithm about where to show up at which time.

We can use these kinds of predictive applications in policing and in the judicial system. For example, say somebody got convicted of homicide, a murderer, and went behind bars, into prison. The question after a few years is: does this person get parole? Can this person get out of prison? In prison you have a lot of data; you can monitor people behind bars 24/7 with video cameras and with other behavioral records. Imagine you take all of this data, you analyze it, and you make a prediction of the likelihood that this convicted murderer gets involved in another homicide after being released from prison. And the algorithm tells you: with 70% probability, this person will be involved in another homicide. Now, do you give this person parole or not? Has this person done anything yet? Has this person already committed a second murder? No. The person committed the first murder, but not a second one; the algorithm just says 70%. So do you let this person go, and maybe another person dies? Or do you keep this person locked up for something they haven't done? How do we handle that nowadays? Well, we go to a psychologist. The psychologist says, "well, there's a 70% chance," using the processor up here, the brain, to make that evaluation. We've been fine with that for decades, but when an algorithm makes the same call, somehow it makes us feel kind of funny, right? An algorithm says 70% probability, and that's why you are going to stay locked up: because an algorithm said so.
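The "earthquake algorithm" mentioned above corresponds, in the predictive-policing literature, to a self-exciting point process: each past event temporarily raises the expected rate of similar events nearby, just as a main shock raises the rate of aftershocks. Below is a minimal, illustrative sketch of that idea with an exponentially decaying kernel. The parameter values and the simple per-neighborhood bookkeeping are assumptions for illustration only, not the algorithm any police department actually runs.

```python
import math

def crime_intensity(event_times, t_now, mu=0.2, alpha=0.6, decay=1.0 / 24):
    """Self-exciting (Hawkes-style) event rate for one neighborhood.

    mu    : baseline events per hour (assumed value)
    alpha : how strongly each past event boosts the near-term rate
    decay : hourly exponential decay of that boost
    """
    boost = sum(
        alpha * decay * math.exp(-decay * (t_now - t))
        for t in event_times
        if t <= t_now
    )
    return mu + boost

# Toy usage: timestamps (in hours) of recent incidents in two neighborhoods.
history = {
    "neighborhood_A": [10.0, 11.5, 12.0],  # a fresh cluster of incidents
    "neighborhood_B": [2.0],               # one older, isolated incident
}
now = 13.0
ranked = sorted(history, key=lambda n: crime_intensity(history[n], now), reverse=True)
print("Patrol priority:", ranked)  # the clustered neighborhood ranks first
```

The only point of the sketch is that recent, clustered events push the predicted rate up, which is why the patrol recommendation concentrates on the "aftershock" neighborhood.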
But that actually happens already, all around us. For example, if you are young, in your 20s, male, and you have a sports car, you might pay more for car insurance than, I don't know, a mother of two in her mid-30s, because the insurance company thinks the mother will drive more carefully than the 20-something guy with the sports car. So the young driver has to pay more, but that is also a prediction: nothing has happened yet. You might be the much more careful driver, but you have to pay higher insurance premiums. This kind of predictive discrimination happens already, and we are actually used to it (a toy sketch of this kind of risk scoring appears at the end of this section).

In some cases it becomes even more delicate, for example when we take the ultimate decision, which is to kill somebody. Take drones: what drones applied in warfare basically shoot at, what they kill, is mobile phones. People carry mobile phones around, and we go after the SIM card, and there is a person behind it. We do data fusion with the rest of the data footprint as well, but the mobile phone is very important. Many terrorists started to play a kind of Russian roulette with this: they exchange their SIM cards with others, including innocent people, in order to distract, and there is a lot of collateral damage. A lot of innocent people die before we get a terrorist; you can look up the statistics on that. Because, well, we go for the footprint; we don't really see the foot. As one JSOC drone operator says, it is of course assumed that the phone belongs to a human being who is nefarious and considered an unlawful enemy combatant. Or as the former director of the NSA and the CIA says, "We kill people based on metadata." Metadata, that is data about data. Or, to say it again in the words of the JSOC drone operator: well, this is where it gets very shady.
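As flagged above, here is a toy sketch of the risk-based pricing behind the insurance example: a few observed attributes are turned into a predicted risk, and the premium follows that prediction before the driver has done anything at all. Every factor and number below is invented for illustration; real actuarial models are far more detailed.

```python
def annual_premium(age, car_type, base_premium=600.0):
    """Toy predictive pricing: the price tracks a predicted risk,
    not anything the individual driver has actually done."""
    risk = 1.0
    if age < 25:
        risk *= 1.8   # young drivers predicted to have more accidents (assumed factor)
    if car_type == "sports":
        risk *= 1.5   # sports cars predicted to be driven faster (assumed factor)
    if 30 <= age <= 45:
        risk *= 0.9   # demographic assumed to be lower risk
    return round(base_premium * risk, 2)

print(annual_premium(22, "sports"))   # 1620.0 -- pays more before any accident
print(annual_premium(35, "minivan"))  # 540.0
```

This is exactly the predictive discrimination described above: the 22-year-old pays 1620 and the 35-year-old pays 540, and neither price reflects what either of them has actually done on the road.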