Troubles with quantified risk

Ryan McGeehan
10 min read · May 31, 2021


Risk quantification can confuse and derail groups and decision makers.

The following points are areas of pain that come up when working through quantitative models with others. These areas of friction cause bad experiences, and bad experiences change our approaches in the future.

We’ll talk about the following topics:

  1. Security return-on-investment does not guarantee investment.
  2. Assuming a breach, or proving the absence of one?
  3. Multiple definitions of the word: “Risk.”
  4. Competing beliefs for the meaning of probability.
  5. Systems are too complex to predict quantitatively.

Let’s walk through them. Hopefully, you’ll still be excited (as I am) about the remaining usefulness of probabilistic methods.

1. Security return-on-investment does not guarantee investment.
“Just because there’s ROI doesn’t mean I should approve your budget.”

Security team leadership is often concerned with requests for resources: headcount, budget, and prioritization. Risk quantification methods can demonstrate some ROI or value for a security investment. Hopefully, this convinces leadership to increase resources for the organization:

“See? If we spend money on security, we save money!”

That is the lure of quantifying risk: it is supposed to appeal to leadership.

However, there’s a gotcha:

Misunderstanding: All opportunities with positive ROI will secure resources.

Compare this with rocket-ship startups pursuing fundraising: they (hopefully) won’t make this assumption. A clear product story demonstrating positive ROI might still be shot down by their preferred investors.

Investors still compare your story with other investment opportunities, even when there are multiple profitable options on the table, and they weigh plenty of non-quantified data along the way.

This is difficult to accept when a quantitative risk measurement seems to demonstrate an obvious decision, to you at least.

The issue is that leadership will likely have numerous other opportunities to invest in more proven areas with very efficient ROI. In a healthy business, any security team proposal will likely lose a head-to-head match-up on ROI alone. The money will always make more money elsewhere.

In fact, other business areas don’t even need to prove their ROI with the same rigor you may have. By default, most leadership is resistant to any pitch for resources that pulls from core business drivers.

A model is not entitled to influence. Quantification may assist with a pitch for resources, but it will not force hands. It must effectively communicate.

Lesson: Resource conversations must be influential. Quantification offers tools that might be influential in your organization, but they offer no guarantee, and they do not “solve” the decision.

2. Assuming a breach, or proving the absence of one?
“You can’t prove to me that we’re not hacked right now.”

We cannot easily prove the absence of some things.

We begin seeing breakdowns when this statement is extended to argue against the usefulness of building evidence. For example: why bother logging if the adversary can delete logs?

Let’s talk about assume breach in terms of the concept of absence.

Proof of absence (a certainty) is very different than evidence of absence (a suggestion).

I argue: We are in the business of building evidence of absence.

We build and manage strong evidence for others to believe there is an absence of compromise.

Similarly, we build evidence that the impact will be minor if that were to turn out to be false.

We see confusion and friction when arguments arise about proof and certification. This is impossible in practical environments: We cannot certify with proof that a breach will never happen, is not currently happening, or that there will be no impact when it does.

That is the difficulty with creating a proof of absence. We are limited to creating evidence that we are not breached.

This mindset conflicts with the assume breach mental model, which is quite popular to bring up and can be a distraction from reasonable discussion.

The assume breach model does not pass simple absurdity tests. You would not tell a customer or your boss: “We’re breached, every system is breached, every employee is an insider, we have to disclose to every regulator, every day, your data and our data is everywhere, and we’re all going to jail.”

We don’t do that. The assume breach model is not used this way by reasonable practitioners. Reasonable security leadership believes that a breach is possible and uses it as a creative device for scenario analysis.

Put more simply, anything is possible. But everything is not. Focus!

The possibility of a breach is probabilistic phrasing. Assume breach then makes more sense as a threat modeling approach: assume various breaches and build defense in depth where it is most appropriate from those perspectives.

So while assume breach is a useful mental model, the evidence that suggests no breach is what builds trust in an environment.

Lesson: Quantification suits different needs than “Assume Breach”.

3. Multiple definitions of the word: Risk
“Your definition of risk is wrong!”

The presentation of quantified risk models to a fresh audience is always difficult. There’s almost always confusion.

A risk, when quantified, is strictly defined. Everywhere else, it is loosely defined. Those of us who want to quantify risk prefer the following risk definition:

Risk = P * I: The product of impact and probability of an undesirable event.
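As a minimal sketch of this definition, with invented numbers purely for illustration (nothing here estimates a real probability or loss), the arithmetic looks like this:

    # Risk as the product of probability and impact (illustrative numbers only).
    probability_of_breach = 0.05   # assumed annual probability of the undesirable event
    impact_in_dollars = 2_000_000  # assumed loss if the event occurs

    expected_annual_loss = probability_of_breach * impact_in_dollars
    print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # $100,000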

The word “risk” is not regularly used this way in mainstream settings. Here are some everyday examples:

  • The lack of obvious mitigations (“Driving without a seatbelt is a risk”)
  • An amount of property at stake (“We have too much risk in volatile investments”)
  • General expressions of fear. (“This feels pretty risky…”)

Academics and industry strictly define risk as Risk = P * I without much exception. However, research shows that people use “risk” very differently in everyday language and, most often… non-quantitatively:

The three nouns risk, safety, security, and the two adjectives safe and secure have widespread use in different senses. Their polysemy will make any attempt to define them in a single unified manner extremely difficult. *

This may seem an obvious point, but a probabilistic model still has the burden of being communicated. Enough professionals in security and risk are unfamiliar with specific probabilistic language, or opposed to the thought of the exercise being one of prediction altogether.

This ultimately becomes another point about the burden of risk communication. The rigor involved with risk quantification may make the models useless to the audience you’re supporting.

Lesson: Consider alternative communication styles with other professionals that strictly avoid probabilistic language.

4. Competing beliefs about the nature of probability
“You can’t make predictions if you don’t have data.”

Some people absolutely hate the word “prediction” due to the baggage it carries and are resistant to tools related to it.

This idea isn’t new. A centuries-long war has been waged among academics over the word probability and whether it relates to future beliefs or the observable past. Some have gone so far as to bury and hide the methods associated with subjectivist probability — measuring the beliefs of future events in probabilistic terms.

The term probability itself is debated from mathematical, metaphysical, philosophical, and even religious positions! These are strict schools of thought, with vocal and sometimes well-credentialed proponents holding immovable beliefs about what is correct.

At a high level, probabilistic interpretations have organized themselves into two major groups. Or eight. Or ten. For simplicity, we’ll review the two.

Frequentist interpretations view probability as frequency of observed trials.
Subjective interpretations view probability as the belief of future events.

The sunrise problem is a classical example that frequentist approaches stumble on. Will we see the sunrise tomorrow? A strict frequentist result might suggest that it is impossible for the sun not to rise tomorrow (the sun has risen on 100% of our observable days).

But… suns explode! We require tooling to model with all information available (expert belief, reference classes, and confidence) when data is lacking or expensive to observe. Subjectivist methods compensate for the weaknesses of measurement. Our particular sun has never exploded, but it is still a sun.
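One classical subjectivist patch for the sunrise problem is Laplace’s rule of succession, which reserves a little probability for the outcome we have never observed. A minimal sketch, with an invented observation count:

    # Laplace's rule of succession: P(success on the next trial) = (s + 1) / (n + 2).
    # The naive frequentist estimate s / n reaches exactly 1.0 when every trial succeeded.
    def rule_of_succession(successes: int, trials: int) -> float:
        return (successes + 1) / (trials + 2)

    observed_days = 10_000      # illustrative: days on which we watched for a sunrise
    sunrises = observed_days    # the sun rose on every one of them

    print(sunrises / observed_days)                     # 1.0 -> "the sun cannot fail to rise"
    print(rule_of_succession(sunrises, observed_days))  # ~0.9999 -> leaves a sliver of room for surprise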

Of particular concern to frequentist mindsets is the use of expert forecasts: humans “guessing” probabilities (forecasting) to represent the odds of a future event. I view forecasting as a necessary process that is expected to eventually be replaced, and forecasting panels help encourage that replacement: reliable breach data would always be more useful than a panel estimating a breach probability.

This criticism of subjectivism is most pronounced where reliable breach data is lacking. Or data on any other risk we care about.

Ironically, the strength of subjectivist methods comes from missing data. Similarly, subjectivist methods diminish as data becomes more available or objective methods are strengthened.

It makes little sense to dig our heels into a belief system when it is useful to swing between approaches as our tools and targets for analysis change. Be flexible.

Lesson: Frequentist and subjective viewpoints offer tools for different contexts. We prefer frequentist approaches but would never exclude subjective ones.

5. Systems are too complex to predict quantitatively.
“How can you measure what you can barely comprehend?”

There are schools of thought that treat rare, high-impact events (nuclear meltdowns, aerospace failures, and computer errors) as products of complex systems. A key factor in complex systems is the causal unpredictability of their failures. Specifically: small problems inevitably lead to disasters in tightly coupled environments when the potential for harm exists.

Individuals who strictly believe this are resistant to talking about causes. They focus on impacts. This perspective is healthy, but diverse viewpoints may clash.

The foremost, glaring issue in risk assessment is the possibility of events you would have loved to measure… but can only measure with hindsight after a disaster.

Example: The Three Mile Island disaster. The investigation didn’t result in a simple explanation with a simple fix. Rather, it is better explained as a system accident. The cause could not be predicted, while the consequence was one everyone wanted to avoid.

This suggests a limit on how deep quantitative methods can go in modeling a risk. They cannot model all possible causes. A quantitative model may prove useful in studying the known frequency and expectation of a nuclear meltdown through Probabilistic Risk Assessment (PRA), but there are strong reasons to believe that the decomposition of underlying causes might not enumerate and surface the right scenarios for study.
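To make that limitation concrete, here is a toy, fault-tree style calculation. The two-safeguard structure and every number are invented for illustration; the point is that the decomposition only covers the causes someone thought to write down, so an unmodeled common cause never shows up in the modeled result.

    # Toy PRA-style decomposition (invented structure and numbers).
    # The modeled top event requires both independent safeguards to fail.
    p_pump_fails = 1e-3
    p_backup_fails = 1e-3
    p_top_event_modeled = p_pump_fails * p_backup_fails  # 1e-6 under the independence assumption

    # An unmodeled common cause (say, one power supply feeding both safeguards)
    # defeats that assumption and was never in the tree.
    p_shared_power_fails = 1e-4
    p_top_event_roughly = p_top_event_modeled + p_shared_power_fails  # approximate, ignoring overlap

    print(p_top_event_modeled)  # 1e-06
    print(p_top_event_roughly)  # ~1e-04, dominated by the cause nobody modeled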

Probabilistic methods do not prevent failures of imagination — rather, they are limited by them.

A more modern take comes from MIT, described in Nancy Leveson’s STAMP approach. In her writing, Leveson does not hold back from criticizing quantitative risk measurement approaches from the perspective of complex systems risk, and she has been doing so throughout her career. A snippet:

In performing a probabilistic risk assessment (PRA), initiating events in the chain are usually assumed to be mutually exclusive. While this assumption simplifies the mathematics, it may not match reality.

Among her criticisms:

5a. Experts have difficulty reasoning with tiny probabilities.

Well supported and agreeable. Anecdotes from other industries are easy to come by. The Rogers Commission Report that details the Challenger disaster is rife with examples. Numerous studies of “words of estimative probability” show how translations between numbers and words introduce error.
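As a toy illustration of that translation problem (the specific numbers below are invented, not drawn from any particular study), consider how differently a group of readers might translate the same estimative word into a probability:

    # Invented example: six readers translate the word "likely" into numbers.
    readers_interpretation_of_likely = [0.55, 0.60, 0.70, 0.75, 0.85, 0.90]

    low = min(readers_interpretation_of_likely)
    high = max(readers_interpretation_of_likely)
    print(f"'likely' spans {low:.0%} to {high:.0%} across readers")
    # "A breach is likely" can mean a 55% chance to one reader and a 90% chance
    # to another; the same words encode very different risks.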

However, in those accounts the issues with probabilistic assessments always come bundled with organizational factors like groupthink and management influence.

An additional criticism:

Complexity makes it difficult to surface failure scenarios.

Nassim Taleb is also a vocal hater of nuanced aspects of probabilistic measurement, and rightfully so. Taleb and Leveson share a similar disdain for wasteful over-modeling, or any attempt to closely model a complex system in order to certify it. Such an attempt, of course, will fail eventually. Model failures will eventually become disasters.

A perspective called Normal Accident Theory comes from an influential book in this space, Charles Perrow’s Normal Accidents. Once systems become tightly coupled, they become accident-prone and more difficult to predict. The book is filled from cover to cover with obscure disaster causes.

Nassim’s book series, beginning with Fooled by Randomness, is highly articulate in describing a mindset to hold around complex systems. In these books and on Twitter, he rails against those who get caught up in their own predictions and expose themselves to ruin. One of the most repeated of his many observations is a failure by “experts” to recognize systems that behave with harmful distributions.

For instance, assuming a normal distribution for the output of a system that actually behaves with a fat tail. More simply: never assume records can’t be broken when anticipating potential failures. Instead, expect complex systems to break records… for the worse.
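A small simulation can make the record-breaking point without assuming anything about a particular real system. It compares the worst outcome seen in repeated samples from a thin-tailed (normal) distribution against a fat-tailed (Pareto) one:

    import random

    random.seed(42)

    # Thin tail: losses roughly normal around 100 (illustrative units).
    normal_maxima = [max(random.gauss(100, 15) for _ in range(10_000)) for _ in range(5)]

    # Fat tail: Pareto-distributed losses, scaled to the same ballpark.
    pareto_maxima = [max(100 * random.paretovariate(1.1) for _ in range(10_000)) for _ in range(5)]

    print([round(m) for m in normal_maxima])  # worst cases cluster in a narrow band
    print([round(m) for m in pareto_maxima])  # worst cases vary wildly; records keep falling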

Next:

5b. Assuming there is a root cause gives us an illusion of control.

This quote from Leveson is damning when considering how risk assessment (quantitative or not) requires the formulation of scenarios. When causes are predicted, it can give a false sense that they have been adequately enumerated. Causes in complex environments are rarely “just fix it” issues. For instance, the “root cause” of the Facebook View As incident was actually an interaction between three separate vulnerabilities. Interactions between vulnerabilities are arguably impossible to enumerate when individual vulnerabilities are already difficult to surface.

The impact, rather than the cause, was predictable. Both Leveson and Taleb focus on ruin, rather than cause, and rightfully so.

Lesson: Quantitative risk modeling bolsters an illusion of understanding. Design systems to be resilient against outcomes with unknown causes.

Summarizing lessons

I still view probabilistic risk methods as a fascinating area to improve information security, but difficulties with collaborative quant will hold it back. Here’s a summary of lessons learned:

  • Organizational influence is not solved by quantifying risk.
  • Quantification will force you to re-examine your assume breach mentality.
  • Quantified risk language clashes with everyday risk language.
  • We prefer frequentist approaches. Never exclude subjective ones.
  • Design systems to be resilient against consequences with unknown causes.

Other:

I’ve written about a few other areas of criticism. Instead of repeating, I’ll link them here.

  • “You can’t predict adversaries.” — (1)
  • “If it’s not falsifiable, it’s not scientific.” — (1, 2, 3)
  • “Why can’t I measure performance with risk reduction?” — (1)

Ryan McGeehan writes about security on scrty.io
