[MUSIC] Hey Aaliyah. You know, it's my friend's birthday, and you have such a great voice. >> Yeah. >> Would you mind calling her for her birthday? >> Yeah, totally. Yeah. >> Sure, here you go. Here's the phone. And here's the number. Let me just get this right for you. Okay, there's the number. >> Okay. [MUSIC] [SOUND] [FOREIGN]. >> What? Wait! She, she doesn't speak French! [CROSSTALK] What, what happened? Who did you call? >> [FOREIGN] >> [MUSIC] >> Hey Aaliyah, it's my friend's birthday and she speaks great French. She would love it if you would just wish her a happy birthday in French. Could you do that for her? >> Yeah, yeah. >> Oh great, sure, let me just dial the number. [SOUND]. >> Here you go. >> [FOREIGN] [MUSIC]

>> Hi, everyone. In this lecture we're going to be talking about authority and how it relates to security. Authority is access to resources or information that the user controls. So we're going to be looking at when the user grants access to their information or resources, how the system grants that access, and how we can make sure that the user is in control, as much as possible, of the granting of authority to their resources and information. In the little vignette that you just saw, we had two scenarios. In one, the user granted authority to dial the phone, and in the other she didn't, and that affects the security of her system. We're going to look at some guidelines about how to grant authority to keep systems more secure when you're building them.

The first guideline is to match the easiest way to do a task with the least granting of authority. Some questions you should think about here are: First, what are the typical user tasks? When a person is using the system, what kinds of things do they have to do? Then, what's the easiest way for the user to accomplish each task? Ultimately, users are going to do tasks in the easiest possible way, so if you understand what that is, you can predict what users will do. Then you need to think about what authority is granted to software and other people when the user takes the easiest route to completing the task. And finally, how can the safest ways of accomplishing the task be made easier, and vice versa, how can the easiest ways also be made the safest? Essentially what you're doing here is creating a system where the natural, easy way a user would do a task is also the most secure way.

So consider the login screen for the Washington Post. There's nothing specific about the Post's website; it's just one of many examples that has a Sign in with Facebook button. If we're coming to this site and we've never created an account before, we can't simply sign in here. We can register an account, so we can click on that link. From there we can put in an email address and a password, pick a whole bunch of other information, and potentially link to a home delivery account. While this isn't a particularly onerous form, there's a lot of information that the user needs to add here. On the other hand, we can just come back to the main page and click Sign in with Facebook. When we do that, this window pops up. We have an example account logged in here, and all I have to do is say OK and I'm signed in. It's a much simpler process to log in with Facebook than it is to register an account, and probably even than it is to sign in with an existing account that has a username and a password. As a result, the easiest way for users to go is to log in with Facebook.
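If you're the developer on the other side of a flow like this, one way to honor the first guideline is to keep the easy path's authority request as small as the task allows. Here's a minimal, hypothetical sketch of building a social-login authorization URL; the endpoint, app ID, and redirect URI are placeholders that follow the general OAuth 2.0 pattern, not any site's actual code.

```python
# Sketch: the developer chooses which scopes the easy path requests.
# Asking only for what the sign-in task needs keeps the easiest route
# the least-authority one.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://www.facebook.com/dialog/oauth"  # illustrative endpoint

def build_login_url(app_id: str, redirect_uri: str) -> str:
    params = {
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        # Request only what signing in requires; don't add scopes like
        # "user_friends" just because they're available.
        "scope": "public_profile,email",
        "response_type": "code",
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

print(build_login_url("YOUR_APP_ID", "https://example.com/auth/callback"))
```

The point is simply that the developer decides what to ask for; nothing in the flow requires requesting more than the sign-in task needs.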
But when users take that easy path and sign in with Facebook, are they actually granting more authority than they want to? Let's go back and take a look at what information appears there. I've signed us out so we can see the login screen again. Now let's compare the information that we're granting access to when we register as opposed to when we sign in with Facebook. If we register, the information that we're giving is an email address, a password, our country, zip code, the year we were born, our gender, and information about our industry. So essentially we have an email and password, a little bit of personal information, and some work information. If we go back and try to log in with Facebook, we can see the information that gets shared. We click that link, and it says that the Washington Post will get information from our public profile, it will get our friend list, and it will get our email address. Now, the email address is something that we provide when we register. Our friend list from Facebook is totally new information, so we're granting additional access through Facebook, plus there's this public profile information. If we mouse over that, we see that it includes our name, our picture, our age, gender, language, country, and other public info. A lot of the first things in that list are things we're asked for when we register anyway. But "other public info" is quite broad, and in fact it includes all the information that we've decided to make public on Facebook, which can include our favorite books and movies, our likes, and potentially even other posts that we've made public.

So logging in through Facebook is easier, but it grants access to a lot more information than we would allow access to if we registered through the site. Thus the easiest way actually grants more authority and more access to our information. Maybe this isn't the most secure way the site could be set up: users are granting more authority than they necessarily have to if they use the easiest path.

Let's look at one more example and the principle that goes along with it. In 1975, Saltzer and Schroeder wrote a paper on computer security, and they proposed a design principle called the Principle of Least Privilege, which basically says that a system should be given only the minimum privileges and information necessary to accomplish its task. Here we're looking at the Budweiser website. Since this is an alcoholic beverage website in the US, they're required to verify that a user is over 21 years of age before they let them into the site. Like a lot of alcohol providers, their site asks you for your birthday when you come in. Now, there's nothing that actually checks to make sure the birthday you enter here is accurate; you can totally make one up. So it's not a very secure way of keeping underage people out of the website; they can just lie and say that they're older. But let's say that people are actually going to enter their real birthday on this page. Do they actually need to give their full and exact birth date in order to enter the site? Essentially, the site is only asking for their birthday to see whether they're over 21 or not. Providing an exact birth date actually gives a lot of information. There are 365 days in a year, but you're also providing the year of your birth here. So there's not just a one in 365 chance that someone shares your birth date; the chance is much smaller than that, because you're also entering the year.
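To put a rough number on that, here's a quick back-of-the-envelope calculation; the 80-year window for plausible birth years is my own illustrative assumption, not a figure from the lecture.

```python
# Rough arithmetic on how identifying a full birth date is,
# compared with a simple over-21 check.
days_per_year = 365
plausible_birth_years = 80  # assumption: visitors' birth years span roughly 80 years

full_birthdate_outcomes = days_per_year * plausible_birth_years
print(full_birthdate_outcomes)        # 29200 possible birth dates
print(1 / full_birthdate_outcomes)    # roughly 1 in 29,000, not 1 in 365

over_21_outcomes = 2                  # "Are you over 21?" reveals only one bit
print(over_21_outcomes)
```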
Combining that with, say, the location information that can be accessed from your IP address, you're actually helping a website like Budweiser narrow down quite specifically the exact person who could be accessing it. In fact, research papers have shown that if you have someone's zip code, which is along the lines of the location information you can get from an IP address, along with their birth date and their gender, you can uniquely identify almost 85% of the population. So granting access to your exact birthday is actually giving away a huge amount of information. All the site needs to know to make its decision about whether to let you in is whether or not you are over 21. Instead of asking for your birth date, the principle of least privilege says they should just ask: are you over 21 or not? If someone is going to lie, it's no more difficult to lie with an exact birthday than it is to lie about being over 21. And the site is actually much more secure from the user's perspective if it doesn't ask for this extra information.
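To make that concrete, here's a minimal sketch of the two versions of the age gate. The function names are hypothetical; the point is that the least-privilege version makes the same decision without collecting anything beyond a yes or no.

```python
from datetime import date

def over_21_from_birthdate(birthdate: date) -> bool:
    # The birthdate-based gate: it makes the right decision, but it collects
    # far more information than the decision requires.
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= 21

def over_21_least_privilege(claims_over_21: bool) -> bool:
    # The least-privilege gate: same decision, no extra information collected.
    # Lying here is exactly as easy as entering a fake birth date above.
    return claims_over_21
```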
The second guideline is to grant authority to others in accordance with user actions indicating consent. A big problem is that sometimes people or software end up with authority to do things that the user never actually consented to give them. In the vignette that we saw at the beginning of this lecture, when the phone was handed over, the user didn't understand that she was also granting the authority to dial anyone her friend wanted. She gave a phone number, and she expected that her friend would dial the number she'd given. Instead, she gave authority to dial any number at all, and that was a real problem. So we want to grant authority only when the user has given consent for that authority to be granted. Some questions to think about here are: When does the system give access to the user's resources? That is, when is the system allowing other people to access something that the user has? What user action allows that access? You shouldn't have a system that gives access to a user's data, information, or resources unless the user has explicitly agreed to it. And does the user understand that their action grants access? Perhaps the user takes an action that a programmer has decided will indicate consent, but the user doesn't understand that they're granting authority to anything.

An example of where this went terribly wrong was with social readers. Here we're looking at the authority-granting screen for the Washington Post Social Reader, but in fact there were a lot of different news websites that partnered with Facebook to create social readers. Essentially, when you clicked on an article that went to one of these news websites, for example, if you were on Facebook and you clicked an article from the Washington Post, this screen would come up, and it would say, you're looking at the Washington Post Social Reader. Do you want to read the article? If you clicked Cancel, you wouldn't be taken to the article, so people would say, okay, let's read it. What information and what authority were people granting access to when they said OK? Well, if we look over here, it says this app will receive your basic info, the information about you that's part of your profile, and your likes. That's not too bad, and I may be willing to share that information, which is pretty much public anyway, in order to be able to click through and read articles that my friends are sharing. But then we have this little thing at the bottom here that says this app may post on your behalf, including articles you read, people you liked, and more. And again, I may not be suspicious of that, because there are a lot of apps that get access to post to my account but don't post without asking me first. But if we read this section over here, it says this app shares articles with your friends as you read them; click OK, Read Article to start.

People didn't understand what this meant. If you clicked OK to read the article, which a lot of people did without even reading the text in the grey box, you were giving the Washington Post authority to post something to your timeline every time you clicked on an article on the Washington Post website. What was shown on Facebook, for example if I had done this, was a post saying Jen Golbeck read the following articles, along with a list of every article I had clicked on on the Washington Post website. I didn't have to take any explicit action to share them on Facebook. I didn't need to like them. Just the act of clicking on an article within the Washington Post shared it on Facebook. There was a huge backlash against this, because people would click on things that weren't bad but that were maybe embarrassing to share with their friends. And ultimately these social readers got refined, and a lot of them got phased out. This was an instance where people may have had some information about the authority they were granting, but it's very vague from this screen that if you read an article on a website external to Facebook, your act of reading it will be shared on Facebook. You're granting that authority by saying, okay, let me read the article here. But most users didn't understand that, and that is really why people were upset about what this app was doing. Some may have willingly granted that access, but because users felt like they were misled, that they had granted access they never intended to grant, there was a lot of backlash, and ultimately these apps needed to be severely refined.
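Here's a sketch of how a design could follow this guideline instead: separate reading from sharing, and only post when the user has explicitly confirmed a share. The function names are hypothetical, not Facebook's or the Post's actual API.

```python
# Guideline 2, sketched: tie each grant of authority to an explicit user
# action. Reading an article is never treated as consent to post.

def serve_article(article_url: str) -> None:
    print(f"Serving {article_url}")

def post_to_timeline(user_id: str, article_url: str) -> None:
    print(f"Posting {article_url} to {user_id}'s timeline")

def on_article_read(user_id: str, article_url: str) -> None:
    # The user's click means "read this article" and nothing more:
    # no authority to post is granted here.
    serve_article(article_url)

def on_share_confirmed(user_id: str, article_url: str, confirmed: bool) -> None:
    # Posting happens only after a separate, explicitly confirmed Share
    # action, so the user's action clearly indicates consent.
    if confirmed:
        post_to_timeline(user_id, article_url)
```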
Finally, we want to offer the user ways to reduce others' authority to access the user's resources. So if I've granted, say, an app access to my Facebook profile and I don't want it to have that access anymore, can I revoke it? Or, at a more detailed level, could I revoke some of the privileges it has to access my profile but not others? Some things to keep in mind as you're designing are: first, what kinds of access does the user grant to software and other users? Then, which types of access can be revoked? And finally, how can the interface help the user find and revoke access?

So let's look at one more Facebook example. Here we're looking at the profile page of an example account for a guy named Malcolm. He has given access to the Washington Post as a way for him to log in to their website, and he wants to revoke that access so his Facebook account is no longer linked to his Washington Post account. How does he get to the place where he can revoke that access? This is ultimately an interface design question. In this case, we hope we can go up here and find some information. Now, there are some privacy shortcuts; maybe one of those gives us some information. If we go to all the settings, then we can look through the options here. Where in here can we find a website that we've granted access to and then revoke that access? It's really not clear. This section is about how private or public our posts are on Facebook. We also have settings like who can contact us or look us up, but there's nothing in here about websites. If we look over in the left menu bar, there's also no mention of websites that we've connected to using our Facebook account. Now, I happen to know that those websites are listed under Apps, but the average user is unlikely to know that a website like the Washington Post, where you've logged in, actually counts as an app. Once we go there, we can see the Washington Post listed as one of the sites that has access to our information, and from there we can edit what it's able to see. So we can change the visibility of the app here, and we can see the information that it's getting, but we can't change any of that. We can, however, remove the app, and that allows us to revoke the app's access to our information.

Is this interface well designed to allow people to change the authority that software has to access their resources? It's not very well designed: unless the average user knows that apps also refer to external websites, it's very hard to find the place where you can control and revoke that access. This is the kind of thing you want to keep in mind when you're designing software. Make it easy for users to change the authority that they've given to software or other users to access their information.

So, in summary, when you are designing software and you want to make it more secure from the user's perspective, follow the principle of least privilege: don't ask for any more information than you absolutely need in order for your system to accomplish its task. As you're doing that, first, make the easiest way to complete a task also the most secure. Users want to do things in the easiest and fastest way, so make sure the path they're going to take, the easy one, is also the one that grants the least authority to software and other people for the task to be completed. Second, make sure the user consents to the access that they allow. If the user is going to allow access to their resources or information, they should be very clearly aware that they're granting that access. And finally, it should be easy for the user to revoke or reduce the access that software or other people have to their data. Following these guidelines will make it easy for users to act naturally within a system and to keep their information secure.
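As one concrete way to support that last guideline, here's a minimal, hypothetical sketch of a per-user grant registry. Nothing here is Facebook's actual API; it just shows how listing, reducing, and revoking access can be first-class operations that an interface can surface in one obvious place.

```python
# Guideline 3, sketched: keep every grant the user has made in one registry,
# so the interface can show them all together and revoke any of them easily.
from collections import defaultdict

class GrantRegistry:
    def __init__(self):
        self._grants = defaultdict(dict)  # user_id -> {app_name: set of scopes}

    def grant(self, user_id, app_name, scopes):
        self._grants[user_id][app_name] = set(scopes)

    def list_grants(self, user_id):
        # Everything the user has granted, so the UI can show it in one place.
        return dict(self._grants[user_id])

    def reduce(self, user_id, app_name, scopes_to_drop):
        # Revoke some privileges while keeping the rest.
        self._grants[user_id][app_name] -= set(scopes_to_drop)

    def revoke(self, user_id, app_name):
        # Remove the app's access entirely.
        self._grants[user_id].pop(app_name, None)

registry = GrantRegistry()
registry.grant("malcolm", "Washington Post", {"public_profile", "email", "friend_list"})
registry.reduce("malcolm", "Washington Post", {"friend_list"})
print(registry.list_grants("malcolm"))   # remaining scopes after reducing
registry.revoke("malcolm", "Washington Post")
print(registry.list_grants("malcolm"))   # empty: access fully revoked
```

Because every grant lives in one structure, a settings screen built on it can show everything the user has authorized, including external websites, without the user needing to know that those count as apps.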