Quantifying your unknown risks.
The security industry's current standard of risk measurement is holding it back. It has forced us to rely heavily on intuition and arbitrary metrics in our attempts to mitigate risk. I want to provide a simple guide to how estimation and forecasting are taken seriously in other industries that manage risk with probabilistic methods.
If you have ever moved furniture, you’ve already done this yourself.
If so, you may at some point have done that awkward maneuver where you hold your arms in a static position to estimate the size of something you'll eventually move.
Some people use their knuckles to measure an inch, or use their shoe size to pace out several feet, etc. These are all forms of estimation, which leads me to my first point.
All measurement is an approximation.
This concept is simple and pretty well explored. It is best explained with some anecdotes.
You may believe that a measurement made with, say, a wooden ruler is not an approximation. If the wooden ruler says a box is 4x4 feet, then you would feel that is the size of the box.
At the risk of sounding pedantic… you are actually making an estimation that is supported by the ruler’s measurement. You trust that the ruler is correct, and you are mostly deferring your judgement to that ruler. You estimate that the box is 4x4 feet, and you’re not going to bother arguing with a ruler’s output.
Now take, for instance, an engineer at NASA who is verifying a part to install on a new planetary rover. Do you think they would double-check their work with a wooden ruler? Likely not. They would use something more precise, like a laser designed for industrial use.
What's the difference between these two measurement instruments? Uncertainty. You would trust one measurement more than the other, even though they are measuring the same thing.
Uncertainty, and its role in decision making is the focus of this article.
Each measurement method, whether extended arms, a ruler, or a laser, reduces the uncertainty of a question we want to answer: "How big is that thing?"
What happens when a measurement tool doesn’t exist?
The ability to approximate something without a measurement tool is closer to our typical understanding of an estimate. Maybe you weren't able to find a ruler. Or you were too lazy to go get one.
Or, maybe, a “ruler” doesn’t even exist yet.
So, you ended up using your arms. Now you’re forced to make an estimation, and you’re homebrewing a method (your arms) to support your estimate.
As it turns out, estimation is an important skill. Do you want to know someone who is really good at estimation? Shigeru Miyamoto. Yes, the creator of Mario Brothers. In the video below, you’ll see him estimate the size of several objects just by looking at them, within a couple of centimeters, and once, exactly.
While this is a funny example, many fields have experts who are very effective at estimation and prediction: intelligence analysis, weather forecasting, and aerospace, for instance. This operational ability to estimate can be fully scored and quickly improved over time as failures occur. That feedback loop is a feature of most forms of quality engineering.
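That feedback loop is simple to implement. One common way to score probabilistic forecasts against what actually happened is the Brier score; here is a minimal sketch, using invented forecasts and outcomes for illustration:

```python
# A minimal sketch of scoring probabilistic forecasts with the Brier score.
# Lower is better; a well-calibrated forecaster trends toward low scores.
# The forecasts and outcomes below are hypothetical examples.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three past forecasts (probability the event occurs) and what actually happened.
forecasts = [0.90, 0.20, 0.60]
outcomes  = [1,    0,    0]    # 1 = event occurred, 0 = it did not

print(round(brier_score(forecasts, outcomes), 4))  # → 0.1367
```

Tracking this number over many forecasts is what lets an individual or a team see whether their estimates are actually improving.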
In security, we are extremely reliant on estimation techniques and intuition. We don't have a convenient "wooden ruler" to estimate the probability of a future scenario.
The prediction of a future likelihood is the “forecast”.
We can all recite an equation of risk. While there are many variations, they all involve a future likelihood: risk is the product of a likelihood and its impact.
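That equation is simple enough to write down directly. A minimal sketch, where the scenario and dollar figure are invented for illustration:

```python
# A minimal sketch of the risk equation from the text: risk = likelihood x impact.
# The 15% likelihood and $2M impact below are hypothetical numbers.

def expected_loss(likelihood, impact):
    """Expected loss: probability of the event times its cost if it occurs."""
    return likelihood * impact

# "There is a 15% chance of a breach this year, costing roughly $2M if it happens."
print(round(expected_loss(0.15, 2_000_000)))  # → 300000
```

The hard part is not the multiplication; it is producing a likelihood anyone should trust, which is where forecasting comes in.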
When forecasting the likelihood of something taking place, you are generally estimating a probability, for instance, a 50% chance of rain.
In security, we should be able to say something like:
“There is a 15% chance that we’ll suffer a similar root cause as Equifax this quarter.”
But a phrase like this terrifies most security people! Why?
The issue is that we don’t understand how rigor is involved in a forecast. For example, NASA enforces standards around how probabilistic values are formed in risk assessments. When a probabilistic forecast is involved in decision making, like, “Should we launch this spacecraft towards Mars today?”, they make sure that “We’re 99% confident this thing won’t blow up right away” wasn’t made off the cuff.
Based on this table, a random intern engineer shouting "Uhh, 90% chance of failure, I guess?" is not used in critical decisions. It scores zero in every category, and is only appropriate for day-to-day discussion.
However, a fully decomposed ensemble forecast, supported by competing mathematical models, made by experienced and qualified individuals, producing high-confidence, tight intervals that agree with simulations, would meet a far higher bar for a decision to be made.
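The simplest version of an ensemble is just combining competing estimates of the same probability. A toy sketch, with invented model outputs (a real ensemble would weight models by track record rather than averaging naively):

```python
# A minimal sketch of an ensemble forecast: several independent estimates of
# the same probability are combined with a simple mean. The three estimates
# below are hypothetical.

def ensemble(forecasts):
    """Combine competing probability estimates with an unweighted mean."""
    return sum(forecasts) / len(forecasts)

# Three models / experts estimating "probability the launch fails":
estimates = [0.010, 0.020, 0.015]
print(round(ensemble(estimates), 3))  # → 0.015
```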
This “rigor” creates a greatly reduced level of uncertainty, and allows for high risk behavior, like attaching people to a rocket and sending them into space for the first time.
Rigor reduces uncertainty, and expands decision making capability.
While your knuckles might be less reliable than a ruler, and a ruler may be less reliable than a laser… each still has its place in everyday decision making. With an agreeable amount of rigor, we trust the uncertainties involved with our decision making in pursuing our goals.
What are our goals? We have many, in all areas of security.
- Incident response: “After extensive effort, we believe our adversary is unlikely to reappear elsewhere.” (95%)
- Software: “No bugs that bypass Same Origin will appear this year.” (95%)
- Intelligence: “North Korea was involved with an incident at Sony.” (95%)
- Red Teams: “A red team is unlikely to succeed in this environment.” (95%)
Now imagine if the rigorous process we used, instead, output some likelihoods that were very low for the cases above. We would act very differently.
- We wouldn’t announce sanctions.
- We wouldn’t declare an incident to be over with.
- We would drastically increase investment in application security.
Decision making is all that matters when looking at risk, and risk is based on the likelihood of future events. So we must have rigorous forecasting capabilities.
How do we develop rigor in cyber security?
This is something I would argue we are already very good at. However, most security methods and techniques are disjointed and don't speak a common probabilistic language. We rarely compound all of our knowledge into single, informative, probabilistic scenarios, but we already use all the methods that would help.
Here is an example. Let's say I hand you, a master forensicator, a laptop. Then I ask you: "Is this laptop compromised?"
You would likely start with complete uncertainty before you have a chance to ask questions. Fifty / Fifty (50%) is a usual place to start when you are totally uncertain. You ask: “Do you think it’s compromised?”
“I don’t know. The security team asked me to bring this to you. Goodbye!”
At this point, you might begin forensics. You would capture and analyze RAM. You would timeline the filesystem. You would look at known startup locations. You would run hashes through VirusTotal. You'd have a teammate review your results. After coming up with nothing, you'd even look for attempts at anti-forensic techniques.
Now, after all of this work, you still find nothing. Do you, or your team, still believe the odds of compromise are 50%? Probably not. You would likely hold a much stronger belief that there is no adversary on the laptop.
Although, there is still room for a more exotic compromise. Maybe there was an upstream hardware compromise. Maybe the adversary was sophisticated, and bypassed your capabilities. “Assume breach” means that there is always a chance we missed something. In this example, this likely means we have to leave some room for our own error.
When the person returns, you and your teammate may express a high likelihood that this laptop is not compromised. Maybe even 90–99%. That is, in roughly one out of ten similarly informed situations, or one out of one hundred, or fewer, our level of rigor would prove inadequate.
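The story above is essentially Bayesian updating: you start at 50%, and each "found nothing" result shifts belief toward clean, without ever reaching zero. A rough sketch, where every likelihood number is invented for illustration rather than drawn from real forensic statistics:

```python
# A hedged sketch of the laptop example as Bayesian updating. All likelihood
# numbers here are invented for illustration, not real forensic statistics.

def update(prior, p_clean_result_if_compromised, p_clean_result_if_clean):
    """Bayes' rule: revise P(compromised) after one 'found nothing' result."""
    numerator = prior * p_clean_result_if_compromised
    denominator = numerator + (1 - prior) * p_clean_result_if_clean
    return numerator / denominator

belief = 0.5  # total uncertainty before any analysis
# Each forensic step finds nothing. Assume (hypothetically) a real compromise
# would evade a given check 30% of the time, while a clean machine passes 99%.
for step in ["memory", "timeline", "autoruns", "hashes", "anti-forensics"]:
    belief = update(belief, 0.30, 0.99)

print(round(belief, 4))  # belief in compromise drops well below 50%
```

Note the belief never hits zero: that residual probability is exactly the room "assume breach" tells us to leave for our own error.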
Defining a level of rigor for decision making.
If you ask a question frequently, for instance, "Is this laptop compromised?", you will notice that you apply similar analysis each time it is asked. Many of the forensic methods suggested above were repeatable, and including other people applied another layer of quality.
So, a forecast that is used for active incident response could possibly include:
- Certain techniques or analysis runbooks must be involved.
- Specific leadership must review the work.
- All participants must exceed a specific bar. (Hiring criteria)
- External IR firms must double check the work.
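A checklist like this can even be encoded as data, so a forecast's documented process can be mechanically checked against the bar its decision requires. A sketch, where the level names and criteria are invented examples, not any standard:

```python
# A hedged sketch of encoding rigor requirements as data. The level names and
# criteria below are hypothetical examples, not an established standard.

RIGOR_LEVELS = {
    "casual":   {"runbooks": 0, "reviewers": 0, "external_review": False},
    "incident": {"runbooks": 2, "reviewers": 1, "external_review": False},
    "board":    {"runbooks": 3, "reviewers": 2, "external_review": True},
}

def meets_bar(forecast, level):
    """Does a forecast's documented process satisfy a named rigor level?"""
    bar = RIGOR_LEVELS[level]
    return (forecast["runbooks"] >= bar["runbooks"]
            and forecast["reviewers"] >= bar["reviewers"]
            and (forecast["external_review"] or not bar["external_review"]))

f = {"runbooks": 2, "reviewers": 1, "external_review": False}
print(meets_bar(f, "incident"), meets_bar(f, "board"))  # → True False
```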
Varying levels of rigor, similar to the NASA table, can be defined for any security subject matter and for any granularity of decision.
This way, probabilistic forecasts can serve everything from Shigeru Miyamoto-style YouTube videos to a board-level "Is this M&A target compromised, and should we buy them anyway?"
This also works backwards. Leadership could say, “I need you to be 90% sure about this forecast” and you can then define the amount of rigor you’ll need to obtain that level of certainty, or as close as possible.
This all, of course, has its place. Rigor is expensive. You’ll want to apply it appropriately. But practicing security in terms of uncertainty, estimation, and rigor is a fantastic opportunity to align multiple subject areas into proper engineering against risk.
This is not an essay advocating "Add All Of The Rigor" to "All Of The Decisions". It simply advocates probabilistic outputs for all security efforts, with rigor proportionate to the decision being made. Usually you won't need much, as forecasting is pretty quick.
Only then can we answer, confidently:
“What is the likelihood that we’ll be breached this year?”
If we can improve the security industry's understanding of probabilistic methods around forecasting, estimation, and rigor… we can unify efforts industry-wide towards a far more effective practice of "Risk Engineering". I think we are most of the way there.
I write about this subject often, in “Risk”.
Ryan McGeehan writes about security on Medium.