Before we get into this session, I want to tell you a little bit about myself. I kind of grew up in Bell Laboratories. It was this wonderful technology environment, about the most incredible place to learn computing and networking software, in the sixties, seventies, and eighties. That Bell Laboratories group was writing some software, and one of the products that came out of it was something we know as UNIX, which is the base technology for most of the things that run on your phones. iOS and Android are UNIX derivatives, as is Linux, and I think even if you lift the hood on Windows, you'll find in the kernel something called a POSIX-compliant piece of software, which is also UNIX. So you could argue that just about everything you run on all your computers emanated from Bell Labs. The reason I bring this up is because around that time there were a couple of geniuses there, the two guys who built UNIX, one of whom, Ken Thompson, won the Turing Award, which, you should know, is the computer science equivalent of, say, the Nobel Prize. He won this Turing Award for UNIX, and when he gave his speech he talked about Trojan horses and how, in some sense, you can make them invisible to code review. Now, you might remember that in a previous session we talked about looking for a Trojan horse inserted in, say, a login program, where you're checking the login password: just look in the code and there it is. But here's what Ken Thompson reminded us, and let me give you a little bit of background. I think you guys know that when you write a piece of software, you put it through a translator, right? It might be a compiler, it might be an assembler. Some of you, I'm sure, are programmers, so you know that you write the code and it gets translated into something that can then be executed. And I understand there's linking and loading, but to keep things simple: program, translator, object code. Makes sense?
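That program → translator → object code pipeline can be sketched in a few lines of Python, using the built-in `compile()` function as a stand-in for the translator. This is just a toy analogy for the lecture's simplified picture; for native code, a C compiler plus a linker and loader play these roles.

```python
# Toy illustration of the pipeline: source -> translator -> object code.

# The "program": human-readable source code.
source = "def add(a, b):\n    return a + b\n"

# The "translator": Python's built-in compile() turns source into an
# executable code object (the analog of object code).
code_object = compile(source, filename="<demo>", mode="exec")

# "Loading and executing": running the object code defines the function.
namespace = {}
exec(code_object, namespace)

print(namespace["add"](2, 3))  # -> 5
```

The point of separating the stages is exactly what matters later: what you review is `source`, but what actually runs is `code_object`, and the translator sits between them.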
So what Ken Thompson said was: take this source code. Maybe it's clean, there's no Trojan horse embedded in it. I run it through a good compiler, and I'm going to get clean object code. Clean source, clean compiler, clean object; everything is good. But, as we learned previously, let's say I've put Trojan horse stuff in the source code: a little check with a secret trapdoor password. Dirty source through a clean compiler produces dirty object, got it. I write the Trojan horse, translate it, and it sits in the code. Well, we said earlier, if you suspect something like that, go look at the code and see what's in there. I'm going to have a little piece of software that says: if the user ID and password are valid, or the password equals abc123, then let them in. If I have it in there, I can see it; a code review would reveal it. Here's the genius that Ken Thompson came up with. He said: you know what, instead of putting it there, why don't you have the compiler insert it? Do you follow? The compiler sees clean source code, and when it's translating it, it translates the clean part but also inserts the dirty part. Isn't that something? I mean, here's the implication of something like this. Previously you thought, well, I can always just look at source code and see if it has Trojan horses. And we all acknowledge that for 99% of the code we run, the programs, the things you download to your phone, you're not looking at the source code. But if you could, you could probably sense that there was something weird if you saw these funny password checkers in there. But if I do it in the compiler, the chances of you ever having the opportunity to go in and review the code in the translator are very, very, very low. Now let's think about this. It implies something that as computer scientists, as engineers, as business people, as students, we must understand, and it's profound. Here's what Ken Thompson said; I want you to think.
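The trick can be sketched as a toy in Python. This is a hypothetical illustration, not Thompson's actual C compiler hack: the function names and dictionary of valid users are made up for the demo, and only the abc123 trapdoor comes from the example above. The login source below is clean; the rigged "compiler" appends the backdoor while translating, so the dirty check never appears in any source a reviewer would read.

```python
# Clean source: only the real password works. A code review of this
# string finds nothing wrong.
CLEAN_LOGIN_SOURCE = """
def check_login(user, password, valid):
    return valid.get(user) == password
"""

# The Trojan horse the rigged compiler will splice in: it wraps the real
# check and adds a secret trapdoor password.
BACKDOOR = """
_real_check = check_login
def check_login(user, password, valid):
    if password == "abc123":   # inserted by the compiler, not the author
        return True
    return _real_check(user, password, valid)
"""

def rigged_compile(source):
    """A dirty translator: when it recognizes a login routine, it appends
    the Trojan horse before compiling. Clean source in, dirty object out."""
    if "def check_login" in source:
        source = source + BACKDOOR
    return compile(source, "<login>", "exec")

ns = {}
exec(rigged_compile(CLEAN_LOGIN_SOURCE), ns)

valid = {"alice": "s3cret"}
print(ns["check_login"]("alice", "s3cret", valid))    # real password works
print(ns["check_login"]("mallory", "abc123", valid))  # trapdoor also works
```

Running the same clean source through an honest `compile()` produces object code with no trapdoor, which is exactly the point: the defect lives in the translator, not in anything you can review.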
He said: you can't trust any piece of software that you didn't completely write yourself, including all the translation tools around it. The corollary is that you can't trust software. This is a depressing fact in computer science. If you've never heard that, then I hope you're sitting down. I mean, that's a pretty jarring concept, that we can't trust software. And in some sense, it helps to explain this crazy cybersecurity industry that we're going to be learning about in our sessions. You think: why am I watching software to see what it does? Why do I have to build scaffolding around systems? Why can't I just fix it in the first place? The reason is because we can't trust software. That's the reason. It's probably the one root cause of almost every hack we've ever seen: software, it turns out, is remarkably difficult to get correct. You build a building, and the building should be a structure that stands; you don't build systems around a building to catch it when it falls over. That would be ridiculous, you've got to be kidding me. And yet, for software, we build it, we put it in place, and then we build cybersecurity around it to stop it or catch it when it goes awry. This is a fundamental notion, this question of trust in a system, and I want you to keep it in mind as we go through the sessions. Now, just to check on our learning here around this problem of translators, let's do what we do: let's think about how you might stop something like this. You remember, in a previous session we talked about soda machines, and I asked you to think about how you'd fix them. We'd take the note, we'd put up a camera; we tried all this stuff, and none of it worked too well. Also, when we were talking about our simple login program, I asked how you would find the problem there, and we came to the conclusion that code review might be the best way to do it. Well, now I would ask, as a little quiz: how might we reduce the risk of a Trojan horse in a compiler? Let's go through the three options.
The first option might be to test the compiler: run it through a bunch of tests. The second possibility would be to impose stiff legal, contractual requirements on the compiler vendor we're buying the thing from. And the third might be to interview lots of different compiler vendors when we're buying our translator. So we've got testing, we've got contracts, we've got multiple vendors. Now think about that: which of those, in your mind, do you think would potentially reduce the risk of there being an insertion in your translator? Think about what you'd actually find. Testing will have the same problem we had before: too many possibilities, so it's very unlikely you'll ever pick something up in testing. That doesn't work, so let's skip to our third one, multiple compiler vendors. Well, maybe it would be a good idea to talk to a few different vendors; maybe that reduces the probability, maybe. But I think the correct answer here, interestingly enough, is B. That means contractually you should ask your software provider to sign something saying there's nothing like that in the code. Make it legal. If they're going to put that in, at least force them to lie on a contract; it would be a little bit tough for them to want to do that. So it's a weird sort of thing where a computer scientist, a software engineer, a manager has to resort to essentially making the software vendor raise their hand and swear they didn't put the thing in there, and you do that with a contract. I think it's kind of funny, because usually we like to fix security problems with functionality, and here I'm fixing it with a contract. So I hope you've learned something with this, and I'll see you in the next session.