Now let's take a look at another case study, this one involving phishing warnings. This is a topic that we've seen before, but in this example we're going to look at how users can be given some control over detecting when they're running into a phishing site. Traditionally, users are warned of a phishing site by their email programs, or perhaps their browsers and other extensions, which automatically try to detect the phishing.

Here's an example of a random phishing message that I found in my spam folder on Gmail. You can see that Google is detecting this, and it puts a red bar on the top that says, "Be careful with this message. It contains content that's typically used to steal personal information." If we go down to the message itself, you can see it says, "To view a copy of the court notice, click here." But Google has disabled the link that appeared in "click here." Now, if I say "ignore this, I trust this message," I'm still unable to click the link. So Google is really going out of its way to remove the ability for me to follow potentially dangerous content. They've done the same thing with this message. There's a link at the top here that says, "Can't see images? View online," but that link has been disabled. Google has also put a different warning on the top that says it contains a suspicious link that was used to steal people's personal information. So they disable the link, and I don't even have the option to say that I trust this message. The link is disabled, and Google won't let me follow it.

Now, if we recall, one of the guidelines that we looked at said that we should enable the user to express safe security policies that fit the user's task. Automatic detection is good and useful, but it can have some false positives and false negatives. It may block users from seeing sites that they actually want to access, or it may fail to detect a site that really is dangerous. If we give users control, they can specify a list of sites that they trust, and be warned through that process when they're visiting a site that's not trusted.

There's an add-on for Firefox called Petname, which is described in the chapter that you were assigned to read this week. What that does is allow people to name sites that they trust. For example, if they go to paypal.com, they can add a pet name, "small payments," shown here at the top. That pet name shows up with a green background and the name the user has entered, so they know they're dealing with a site that they've seen before. But if you remember the example in a previous video this week, one way of spoofing PayPal is using a domain name that ends with a capital I, which looks just like a lowercase l. If the user were to follow a link like that, they might see something like this. Now, you can see in the URL that it doesn't appear right, though the link could actually be manipulated so the capital I looks like a lowercase l. But more importantly, the petname box doesn't appear with green; it appears yellow, and it shows "untrusted." That's an extra signal to the user that they're looking at a site that they haven't seen before, and if the user has labelled PayPal, they'll know that they're looking at a spoofed site.

In addition to allowing users to express their security policies, this guideline also says that it should fit within the user's task. That means we don't want to make users go through and create a list of all the sites they trust up front. We want them to be able to integrate that process into their normal tasks.
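To make the petname idea concrete, here is a minimal sketch of the core lookup it implies: a mapping from a site's origin to the label the user chose, with a green indicator for labelled sites and a yellow "untrusted" indicator otherwise. This is an illustration of the concept, not the actual Petname add-on's code; the names `PetnameStore` and `lookupPetname` are assumptions made for this example.

```typescript
// Hypothetical petname store: maps a site's origin to the label the user chose.
type PetnameStore = Map<string, string>;

interface PetnameIndicator {
  label: string;                  // text shown in the petname box
  background: "green" | "yellow"; // green = trusted, yellow = untrusted
}

function lookupPetname(store: PetnameStore, url: string): PetnameIndicator {
  // Key on the origin, so paypal.com and a look-alike domain that ends in a
  // capital I map to different entries.
  const origin = new URL(url).origin;
  const name = store.get(origin);
  if (name !== undefined) {
    return { label: name, background: "green" };
  }
  return { label: "untrusted", background: "yellow" };
}

// Example: the user has labelled the real PayPal "small payments".
const store: PetnameStore = new Map([["https://www.paypal.com", "small payments"]]);
console.log(lookupPetname(store, "https://www.paypal.com/login"));
// -> { label: "small payments", background: "green" }
console.log(lookupPetname(store, "https://www.paypaI.com/login"));
// -> { label: "untrusted", background: "yellow" }  (different origin, never labelled)
```

The point of keying on the origin is that the spoofed domain, however similar it looks to a human, is simply a different string, so it can never inherit the label the user gave to the real site.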
And that means recognizing when they're accessing sites that they haven't added to their list, which serves both as an admonition that they're accessing an untrusted site and as a suggestion that they might want to add it to the list. An admonition could happen when I come to a site that I haven't yet added as trusted. So when I go to log in, as I start to enter a password, it could pull up a message that warns me that I'm entering a password at a site I haven't trusted yet. That both prompts me to add it to my list of trusted sites and warns me that I might want to check that I'm entering information on the right page. Furthermore, I can continue to log in: because it's only a warning that appears, not a blocking dialog, I'm able to complete the process without my workflow being interrupted.

So, in conclusion, automated security controls like automated phishing detection are great, and they can be really useful. But they're not the only solution. They have problems like failing to detect messages that are phishing, or detecting and blocking messages that actually aren't dangerous. If we give users control over creating a list of sites that they trust, that can be more secure, because users will know, when they're going to a site that looks familiar, whether it's actually the site that they've listed. But it's important to assist them in the process. It's unreasonable to expect users to create a full list of sites they trust on their own, and they won't. But if you integrate that security process of creating that list into the normal workflow of how the user behaves, that can make users more secure by giving them the control they need while helping them along the way.
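As a rough illustration of that workflow-integrated warning, here is a sketch of how an extension might react when a password field gains focus on a site that has no petname: it shows a non-blocking banner that offers to add the site to the trusted list, but never stops the user from finishing the login. Everything here, including the banner text and the in-memory store, is an assumption made for the example rather than how any real add-on is implemented.

```typescript
// Watch for password entry on untrusted sites and show a non-blocking warning.
function watchPasswordFields(store: Map<string, string>): void {
  document.addEventListener("focusin", (event) => {
    const target = event.target;
    if (!(target instanceof HTMLInputElement) || target.type !== "password") {
      return;
    }
    const origin = window.location.origin;
    if (store.has(origin)) {
      return; // already trusted: nothing interrupts the login
    }
    showNonBlockingWarning(
      `You are entering a password on ${origin}, which is not in your trusted list.`,
      () => store.set(origin, prompt("Choose a name for this site") ?? origin)
    );
  });
}

// A minimal, non-modal banner: the user can keep typing and log in,
// or click the button to add the site to their trusted list.
function showNonBlockingWarning(message: string, onTrust: () => void): void {
  const banner = document.createElement("div");
  banner.textContent = message;
  banner.style.cssText =
    "position:fixed;top:0;left:0;right:0;background:#ffd;padding:8px;z-index:9999";
  const trustButton = document.createElement("button");
  trustButton.textContent = "Trust this site";
  trustButton.onclick = () => { onTrust(); banner.remove(); };
  banner.appendChild(trustButton);
  document.body.appendChild(banner);
}
```

The design choice that matters here is the non-modal banner: the warning and the "add to trusted list" step happen inside the task the user is already doing, rather than as a separate list-building chore or a blocking interruption.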