This is about the fundamental formula for assessing risk. I saw a post the other day in a LinkedIn Group where about 39,000 of my closest colleagues and I (more on them later) exchange ideas around IT Governance and related issues. I made a comment, which led to a discussion, which brought up something that got me thinking, which got me writing. And here it is:
Let’s start with the simplest formulation of risk. Whether you use qualitative or quantitative measures, risk is, as defined in NIST SP 800-30 Rev. 1, Guide for Conducting Risk Assessments, “typically a function of the degree of harm and likelihood of harm occurring”.
http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf
“Opportunity” is its opposite: the likelihood and impact of something “good” happening.
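To make that concrete, here is a minimal sketch of the formula in code. The 1-to-5 scales and the multiplication are my own illustrative assumptions, not anything prescribed by NIST SP 800-30:

```python
# A minimal, illustrative sketch of "risk = f(likelihood, impact)".
# The 1-5 scales and the multiplication are assumptions for illustration,
# not the method prescribed by NIST SP 800-30 Rev. 1.

LIKELIHOOD = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}
IMPACT = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Combine qualitative ratings into a single risk score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Example: a high-impact event we consider unlikely
print(risk_score("low", "very high"))  # 10
```

The same shape works whether the inputs are ordinal ratings like these or dollar figures and probabilities.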
Do we have trouble communicating about risk? First we need to understand the question. It can mean a number of things. Let’s look at them individually:
1. Do we have trouble communicating about how we control risks? The answer is a resounding “no”. With the help of the insurance industry and all the vendors that sell controls, there is no end to the ways we have for talking about controls. No end to the number of conversations we have all had. If you printed out all the White Papers that are on-line about controlling one risk or another, the Sierra Club would want to have a word with you about preserving our forests.
2. Do we have trouble communicating about the impact of a given exploit/vulnerability? Certainly not. SQL injection attack? Heartland Payment Systems. Weak wireless encryption? TJ Maxx. Insider threat, sub-category: works for you? Pfc. Manning. Insider threat, sub-category: contractor? E. Snowden. Insecure endpoints? Stuxnet (maybe). Unencrypted mobile devices? NASA. The point is that there are some big impacts you can point to when talking about specific vulnerabilities. So in evaluating the risk an organization faces from being vulnerable due to a specific set of circumstances, the impact can always be discussed. Perhaps even quantified. Which brings us to where Chicken Little went wrong.
3. Do we have any success in talking about likelihood? We do, but it is limited to the two extremes:
A. “This will happen to everyone” (Advanced Persistent Threats)
B. “Never gonna happen” (The Sky is Falling)
With the exception of “certainly will” and “certainly won’t” happen, likelihood is really hard to talk about. Whether or not it is hard to measure depends on what you think of the discipline of probability and the quality of the data available, so I will save that part of the discussion for another time.
Consider the two examples we have in Western folklore of individuals who, in the parlance of risk management, were detective controls. The boy who cried “wolf” was pretending to detect a high-impact (wolf attacking the sheep), high-likelihood (there are always wolves who are trying to attack the sheep) event. That his cries amounted to what we call a “false positive” is not the point. His credibility eroded slowly precisely because both the impact and the likelihood of the event he claimed to detect were well known and considered to be “high”. It was a risk that all the shepherds considered themselves to be tasked with controlling.
Chicken Little is an entirely different story. There is no version of the story where someone says to Chicken Little: “Who cares if the sky is falling? That’s no big deal.” In fact, the impact is a given throughout the story. Everyone in every version of Chicken Little accepts that the sky falling would be a cataclysmic disaster. But the likelihood is just so far from what people can believe that readers laugh at Chicken Little and her friends for believing it could be true. We have the same impression of the risk of being invaded by predatory space aliens. Hollywood has provided us with lots of illustrations of the impact of alien invasion: The Day the Earth Stood Still (both of them), War of the Worlds (both of them), Independence Day, Signs, Skyline, etc. These outlandish examples illustrate that while we can describe impact fairly well, in graphic terms even, we are not nearly as good at describing likelihood. And that is where we lose our audience when talking about risk.
To compensate for this, we have increasingly described negative outcomes as “when, not if”, in other words as “certainly will” happen. Should we? Well, as Information Security/Risk Management professionals, there’s not a lot of downside in overstating risk this way. If it leads to better controls and the negative outcome is never realized, then either you were right and the controls were important, or you were “too cautious”, and that’s not the worst thing you can be accused of.
And if everything is almost certain in terms of likelihood, except those things that are “never gonna happen”, then perhaps likelihood should no longer be part of the equation for calculating information security risk. It’s worth considering.
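As a thought experiment (mine, not anything from the NIST guidance), here is what that would look like in the same sketch as above, with made-up impact ratings: if every plausible threat is “when, not if”, likelihood pins at 1 and risk collapses to impact alone.

```python
# Thought experiment (an illustrative assumption, not NIST guidance):
# if every plausible threat is treated as "when, not if", likelihood
# is effectively 1 for everything, and the ranking is driven by impact.

scenarios = {
    "SQL injection": 5,        # impact ratings are made-up examples
    "weak wireless crypto": 4,
    "lost unencrypted laptop": 3,
}

# risk = likelihood * impact, with likelihood pinned at 1 for all scenarios
ranked = sorted(scenarios.items(), key=lambda kv: kv[1], reverse=True)
for name, impact in ranked:
    print(f"{name}: risk = {1 * impact}")
```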
Speaking of things that are maybe not what they should be anymore, consider a LinkedIn “group” of 39,000. Maybe call it a “small city” or a “district” or even a “cohort”, but I think that as we continue to communicate in on-line collections of people, once we get past 500 individuals we need a different word than “group”.
Finally, what about those things that we classify as “never gonna happen”? There are stories from the past 100 years of things that should never have happened that did. I believe the best risk management professionals have a special list in their heads of the things they would never say they need to build controls for, but they have an eye out for them just the same. Some of us still sleep a little better at night because we have memorized the phrase “Klaatu barada nikto”.