The Risk Register, as a management tool, and the steps we use to work with it, are worth laying out here, even though we've kind of talked through them already: identifying the risk; evaluating the severity of any and all identified risks; applying possible solutions to minimize, mitigate, or eliminate the risk (we sometimes refer to that as zeroing out the risk, in theory); and then monitoring and analyzing the effectiveness of any subsequent steps that are taken. This is the thought process we go through, the step-by-step management the Risk Register drives, and then of course we iterate back. We feed back into the Risk Register, looking at whether or not the risk has indeed been dealt with. Is it a recurring risk? Is it coming back, even though we've applied countermeasures to it? If so, then maybe the countermeasure is not specific enough, or not at the appropriate level, and we need to change it in some way. Or maybe the countermeasure was indeed effective, and the risk has not returned. In that case we can say the risk is no longer in the system. We want to take note of it historically and be aware that we've dealt with it, but we're going to take it off the Risk Register, mark it as complete and closed out, because it's not active, and move it into a kind of historical holding pattern. If it ever does reemerge, we want to have a record of it and of what we did to deal with it. But hopefully it's going to go away, and we'll be able to focus our efforts on other risks, in other areas, as a result.

When we think about risk management, there are general concepts we've been talking about that we just want to formalize and standardize here. The ultimate purpose of information security is to reduce risks to an acceptable level. We've been talking a lot about the fact that risks do exist.
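The lifecycle above (identify, evaluate, mitigate, monitor, and eventually close out while keeping the historical record) can be sketched as a small data structure. This is only an illustration; the class, field, and status names are hypothetical, not from any particular risk-management tool.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    OPEN = "open"              # identified and evaluated, not yet treated
    MITIGATING = "mitigating"  # countermeasure applied, being monitored
    CLOSED = "closed"          # no longer active; kept as a historical record

@dataclass
class RiskEntry:
    name: str
    severity: int              # evaluated severity, e.g. 1 (low) .. 5 (critical)
    countermeasure: str = ""
    status: Status = Status.OPEN
    history: list = field(default_factory=list)

    def mitigate(self, countermeasure: str) -> None:
        """Apply a countermeasure intended to zero out the risk."""
        self.countermeasure = countermeasure
        self.status = Status.MITIGATING
        self.history.append(f"mitigation applied: {countermeasure}")

    def review(self, recurred: bool) -> None:
        """Iterate back: if the risk recurred, the countermeasure needs
        revision; otherwise close it out but keep the record in case
        it ever reemerges."""
        if recurred:
            self.status = Status.OPEN
            self.history.append("risk recurred; countermeasure needs revision")
        else:
            self.status = Status.CLOSED
            self.history.append("risk closed; retained as historical record")
```

A closed entry stays in the register's history rather than being deleted, which is the "historical holding pattern" described above.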
Risk, as a generic concept, exists in everything we do. You get out of bed in the morning, you run the risk of stepping on something, slipping and falling. You run the risk of banging your head on a cabinet, or slipping in the shower or the bathtub. There are all sorts of risks we engage with and work through just to get where we need to be every day. Getting in a car and driving involves risk. Going to work involves risk. Leaving work and going home involves risk, right? So there's risk everywhere. But the goal of information security, specifically, is to reduce to acceptable levels the risk we face inside the operational environment: inside the organization, within the networks we manage, around the infrastructure we control. We want to control what we can. We want to know what is knowable. And we want to take action against the things that will interrupt the logical flow of what we're trying to accomplish, controlling as much of that environment as we can.

But there's always unknown risk. There are always things we are unequipped for, that we don't have the knowledge or the situational awareness to understand, or we simply don't engage in the activities that would expose the risk. So it hasn't been identified, because nobody understands that it is a risk; it hasn't acted, up until now, in a way that would indicate it is. It lies dormant, and at some point it may become active, when somebody or something figures out a way to trigger it, or trips across it in such a way that it now becomes a threat, a concern to us. And if that threat can take advantage of a vulnerability, a weakness in our system, we may see it turn into a risk: a likelihood that something bad will happen. And that's really what risk is.
It's the opportunity a bad actor, what we call a threat actor or a threat source, has: the attack coming from that threat actor or threat source, the thing they put into the system, the actions they take, will take advantage of some sort of weakness, what we traditionally call a vulnerability. That action, targeted at a vulnerability they've identified and know they can take advantage of, leads to exposure. The likelihood that that exposure will be negative and have a direct impact on us equals the amount of risk we face.

So when we think about how risks come about, they come about because systems, or individuals, or some combination of the two are used in an incorrect way, or are asked to do something that under normal circumstances might not present a problem, but under this special circumstance, in this particular context, will. Whoever is behind that request, behind that ask, has figured out a way to create a situation that will let them exploit a weakness they've identified. If they can exploit that weakness, that vulnerability, the threat is there. And if the threat becomes realized, becomes actual, we are likely to face risk, because the risk is the negative impact of that particular threat action against the vulnerability being exploited.

So the cost of the controls we put in place to offset risk is something we have to struggle with. When we, as security practitioners, think about how to stop an attack on our systems, we may say: if we just block that IP address, then the person on the other end of that IP can't attack us. So we'll go to our firewall.
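The chain described above (a threat exploiting a vulnerability, producing exposure, yielding risk) is often quantified, in its simplest form, as likelihood times impact. A minimal sketch, with hypothetical numbers that are not from the source:

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Illustrative risk quantification: the likelihood (0..1) that a
    threat successfully exploits a vulnerability, multiplied by the
    impact (here, in dollars) if it does."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability between 0 and 1")
    return likelihood * impact

# A threat with a 20% chance of exploiting a known vulnerability,
# causing $5,000 of damage if realized, carries $1,000 of expected risk:
print(risk_score(0.2, 5000))  # 1000.0
```

Real risk-assessment frameworks use richer scales and qualitative factors, but this expected-loss framing is what drives the cost-of-controls discussion that follows.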
We'll program in the IP address they're coming from, and we'll put it on a blacklist or a watch list, some sort of block list that says "do not allow," as opposed to a whitelist, which would typically allow that IP address in, right? So block all traffic from that IP, don't let them in. Okay, the bad actor may be stymied for a minute or two, but then they'll say, well, what if I change IP addresses? Maybe that will work. So they change IP addresses, they come back and knock on the door again, and sure enough, the new IP address is not on the list, so they get in. So now maybe we have to block another one.

So now we get smart. We say, we're not just going to block one or two IPs; they seem to be operating in a range, a subnet of IP addresses. Even though they're choosing them randomly, they're still in a general range, maybe the 192.168.16 address block. So we're going to go ahead and block all of those addresses, the whole range. That way it's going to be much tougher for them to come in, and we're not going to have to keep going back to the well every 20 minutes and blocking another IP as it pops up on the radar. So we may do that, and we'll go back and forth.

My point is that we'll take countermeasures, and choose those countermeasures, based on the kind of risk we face. Now, manually intervening and blocking one, then two, then maybe a whole subnet of IP addresses is not going to be very costly. But it does take time, and time equals money in our world. We can ascribe a value to how much time it took every time we had to go block an IP address, if we choose to. We have to think about that and be aware of it, because it's something we can measure. And what we're saying here is that the cost of the control, the countermeasure, should never exceed the loss. Say the attack, if successful, would cause us to lose $1,000.
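The escalation described above, from blocking single IPs to blocking the whole range, can be sketched with Python's standard `ipaddress` module. The block list here is hypothetical; I've assumed the 192.168.16 range from the example is a /24, which a real firewall rule would state explicitly.

```python
import ipaddress

# Hypothetical block list: after chasing individual IPs, we escalate
# to blocking the whole subnet the attacker is hopping around in.
blocklist = [
    ipaddress.ip_network("192.168.16.0/24"),  # the whole example range
]

def is_blocked(addr: str) -> bool:
    """Return True if the address falls inside any blocked network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in blocklist)

print(is_blocked("192.168.16.77"))  # True: inside the blocked /24
print(is_blocked("192.168.17.5"))   # False: one subnet over, still gets in
```

Blocking the range trades a little over-blocking for not having to "go back to the well every 20 minutes," which is exactly the time-cost tradeoff the next part of the discussion prices out.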
And the amount of time it took us to block one, then two IPs, and ultimately an IP subnet was maybe, I don't know, 20 minutes. If 20 minutes of time in our world equals $5 a minute, that's $100 of invested time and energy we have spent to provide a countermeasure, a control, that will prevent or block $1,000 worth of loss if the risk is realized. The control costs substantially less than the loss, ten times less, as a matter of fact, in my example: $100 for the control, $1,000 for the potential impact. In other words, if we say, hey, we spent $100 to stop this, but it would've cost us $1,000 if we hadn't, that's a good deal, and we should do it.

But if we invert the figures and say the risk is $100 worth of damage, but it took us $1,000 worth of invested time to prevent the damage, that's not so good. The cost of the control was ten times more than what the actual damage would have been if we'd just let the attack take place; it would have cost us less money to fix. So we have to make sure we understand, when we're dealing with risk, that we always want to be on the right side of that equation. We always want to make sure we're spending the right amount of time, energy, and resources, hard dollars, right? We can ascribe a cost to things. We want to make sure the cost of the control, the countermeasure, does not outweigh the potential liability the risk presents. Because if the risk costs less than the control, as silly as this may sound, it actually makes sense to allow the risk potentially to occur, because it winds up being cheaper for us than implementing the control. So we want to think about that and understand it as we begin our conversations around risk.