DRE voting machines are sometimes referred to as black box voting machines, and that's because of the counting process: all the records of the votes are fundamentally unobservable. They're happening inside the machine, inside the computer software. That's often compounded because the computer software itself is kept secret from the public. Voting machine companies would claim that the software running their machines was a trade secret. Now, that's pretty common in computer software development generally, but when it comes to voting, it seems like there shouldn't be anything fundamentally secret about the way our votes are cast and counted. The actual process of counting votes and announcing a total is something that many people believe should be transparent to the public. There's a further objection to keeping the software in the voting machine secret, and that's one on security grounds. Security by obscurity, as it's called, is the practice of keeping something secret on the basis that to do otherwise would be a security problem, and it's widely discredited in software development. When you think about it, if a piece of software relies on being secret for its security, and that software leaks out, then there will never be any way to get that security back. The only way to ensure that something is going to be robustly secure is to assume that it's going to be available to the attacker. For this reason, security by obscurity is a practice that's sometimes called security snake oil. That is, it's something that is claimed to be a security practice, but in fact is an anti-security practice. Unfortunately, the major vendors of DRE voting machines engaged in secret software and other kinds of security snake oil practices very frequently.
In particular, one company, Diebold, the maker of the AccuVote TS I showed you earlier, was extremely secretive about allowing anyone to do an independent security evaluation of its machines or the software running in them. For many years, Diebold continued this practice and would even threaten election officials who proposed to have any independent security evaluation done with potential loss of their jobs, of contracts for new voting machines, and of other things that were important to them. All of that started to change in 2003, when a voting activist named Bev Harris was Googling for documents about the Diebold machines and came across a file posted to a Diebold Internet server that Google had picked up. This file happened to be a copy of the complete source code to the Diebold voting machine, the software blueprints that were supposed to be the big secret. This was the first time that anyone independent was able to see what was inside the software, do a security analysis, and talk to the public about the results. A team of scientists from the University of California, San Diego, Johns Hopkins University, and Rice University looked at the software Bev Harris found and did a thorough security analysis. This is the paper they published in 2003. They found a number of problems, and I'll highlight a few of the more interesting ones now. One problem they found was with the way the software handled the voter access cards. It turned out that using just easily obtainable hardware and software you wrote yourself, a voter could make any number of these cards that would work in a normal election. This would allow a voter to cast as many votes as he wanted within the election booth. Another problem this research group found had to do with the encryption that was used in the Diebold voting machines. Now, to step back for a second, encryption is a means of scrambling data files, essentially, so that they're impossible to read.
Unless, that is, you have an encryption key for the file. A key is usually a very large, randomly generated number that's used in the scrambling process, and the corresponding descrambling process to get the data back requires the same key. Without the key, it's practically infeasible to recover the data. Diebold applied encryption to try to protect the integrity and ballot secrecy of the data stored on the voting machines' memory cards, the ballot data that was brought back for counting. But it turned out that they applied encryption incorrectly in a variety of ways because of design errors. The most interesting of these errors, the simplest one anyway, was that all of the voting machines used exactly the same encryption key. Now, this is a terrible security practice, because if a criminal were able to get hold of one of those voting machines, say it's stolen from a polling place, or fell off a truck, or the criminal is an insider in one election district, then that criminal could take that information and apply it to break the encryption on all of the other Diebold voting machines in use nationwide. That key, which you see here, happened to be the string F2654hD4. That was the secret that was protecting the integrity of all of these machines, and once the code leaked to the Diebold website, anyone could decrypt any of the data files from any of the machines. The next problem was a ballot secrecy problem, and it had to do with the way ballots were stored on the memory card. The machine made a record of every vote that was cast, and every time someone cast a vote, the vote was stored in a file on the memory card. Now, if you think about a ballot box, it also contains a record of everyone's votes, but the ballots are shuffled in the process of being placed into the box, transporting the box, and then taking them out for counting. On the Diebold memory card, the votes were stored in order.
Ballot one, ballot two, ballot three: every new vote was appended to the file. What this meant was that if someone was observing at the polling place, watching the order in which people went up to the machine and cast their votes, and they had access to the memory card at the end, they could determine exactly how every one of those voters voted. So that's a major weakness in ballot secrecy. Finally, the researchers looked at the software development practices. They looked for evidence that the software engineering methodologies used to produce the software in the Diebold machine were up to, or exceeding, the standards for other kinds of critical software. What they found when they looked into the code was a lot of evidence of poor engineering practices, things that were almost certain to result in insecure, unreliable software. The easiest of these to understand is probably illustrated by some of the comments that were found in the code. Comments are notes programmers leave inside the software source code to let themselves and others more easily understand what's going on. Some of the notes these developers left really reflect a form of internal chaos on the development team: "Okay, I don't like this one bit," "This is a bit of a hack for now," or, in the third case, a comment saying essentially that this is going to result in an error. These are evidence that the development practices at Diebold were far below the level needed to produce critical infrastructure software like an election system. So, all of these problems paint a pretty grim picture of what's going on inside the Diebold DREs, but I think the company's reaction paints an even grimmer one. In the aftermath of this report, Diebold first denied the problems. Second, it claimed that the software that was studied was not something used in actual machines. Third, it personally attacked the researchers involved.
In some cases it even tried to have them reprimanded by their universities. And finally, it said that if there were any problems, they had been fixed in the new version of the software that was available now. You might think that fixing these problems in the new version of the software would be an adequate response, right? Someone looked at the code, they found some problems, and now we can just go and fix them. But actually, finding problems like this is evidence that there's something rotten to the core. Secure, reliable software is the product of a certain kind of development practice, of a certain mentality and methodology. And these problems were found easily, without anything like an exhaustive review; you could imagine going into a far deeper level of detail than these analysts had the resources to do. Finding problems like these so easily is an indication that those development practices are broken. One implication of that, that rotten development practices result in rotten software, is that just reading through the code and finding some problems is extremely unlikely to reveal all of the problems, all of the errors, all of the vulnerabilities in that software. As evidence for that, we can look at these Diebold voting machines, which, since this 2003 study, have been evaluated by a number of other groups. And every group that's looked at the system has found even more severe problems with security and reliability. Problems just keep getting discovered every time a new pair of eyes looks. Here's an example of one of those problems. This is something that wasn't spotted in the Hopkins study, but that I noticed in a later study. I'm not going to tell you exactly where the problem is. But for those people in the audience who are programmers and know C++, I'll leave this as an exercise for you. Can you spot the security bug in this actual code from the Diebold voting