
Prediction of Adversarial Behavior

Exploring the quantitative characteristics of your adversary.

A crucial question in risk measurement is whether an intelligent adversary’s behavior can be forecasted with quantitative methods similar to those other fields apply to risk.

It’s a good question! Can you treat an intelligent adversary like a meteorological phenomenon, a critical component failure, or a space launch?

This essay was written to support a frequently asked question.

An intelligent adversary is a source of uncertainty.

There are obvious differences between a hurricane and a hacker, but they are both sources of uncertain behavior.

A hurricane does not make decisions based on new information. You can safely assume that a hurricane will not emotionally aim for specific locations or change direction out of spite.

An adversary is different for these reasons. Their actions are highly dependent on the context they operate in, and they make decisions based on the information available to them. This is often described as “unpredictable”, but that is not a useful use of the term.

Measurement of the “bad guys”.

This is a brief version of a longer essay on the subject.

It’s important to remember that the word “prediction” does not assume perfect knowledge. A shrug of your shoulders, a feeling that you’re making a wild guess, the “IDK” acronym, or the (🤷) emoji can all be represented as valid predictions from a decision science standpoint.

This is explored in the principle of indifference. Given several options and a complete lack of information, belief in an outcome would be divided equally (1/N) across those options.

Example: Pick a door! Any door! There are three doors. Pick the right door and win a new car.

Without other information, your belief is likely split evenly between them, with 33.3% assigned to each door. You have no idea where that car is. Your decision will be made under uncertainty.
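This state of complete indifference is easy to write down. A minimal sketch, in which the scratch-on-a-door hint and its weighting are entirely invented for illustration:

```python
# A state of complete indifference: with no information,
# belief is split evenly (1/N) across all options.
doors = ["door_1", "door_2", "door_3"]
belief = {door: 1 / len(doors) for door in doors}

for door, p in belief.items():
    print(f"{door}: {p:.4f}")  # each door gets 0.3333

# Any information at all moves us away from uniformity.
# Suppose (hypothetically) a scratch near door_3 makes it
# twice as likely to hide the car; renormalize the weights.
weights = {"door_1": 1, "door_2": 1, "door_3": 2}
total = sum(weights.values())
belief = {door: w / total for door, w in weights.items()}
print(belief["door_3"])  # 0.5
```

The point is that both states, total ignorance and a weak hunch, are expressible as the same kind of object: a probability distribution that always sums to 1.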

With this in mind, a complete indifference between your adversary’s options can be represented quantitatively. If we can express a complete lack of information, we’re closer to measuring the information we do have.

So how often are you truly uncertain about your risks that include an adversary?

Do you really act with zero knowledge of all adversarial behavior day to day in security?

We predict adversarial behavior regularly.

It doesn’t take much evidence to prove that we continuously make predictions about adversarial behavior, whether it’s a decision to enter or withdraw from a market, play a game a certain way, or secure a system a certain way.

Simply setting a strong password is an example of a prediction. You believe compromise is probable enough to justify a strong password, or to bother enforcing one as policy across your company.

Do you believe that specific threats are more likely to attack you remotely, or show up physically on premises? You’ve made a prediction about that threat even if you believe it’s 50/50.

This scrutiny is a valuable aspect of prediction. Without it, threat models would enumerate infinitely with no regard for whether they’re reasonable. We are rarely indifferent about risks. Best practices are drawn from this knowledge; otherwise, they would be arbitrarily chosen.

Prioritization is a part of everything we do in risk. We make subtle predictions continuously. It’s the reason we have opinions about why one mitigation is better than another.
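Those subtle predictions can be made explicit. A hedged sketch, where every scenario name, probability, and loss figure is invented, showing one common way to turn beliefs into a priority order, ranking by expected loss:

```python
# Hypothetical sketch: making "subtle predictions" explicit.
# Each scenario pairs a believed probability (over, say, the
# next year) with an estimated loss; ranking by expected loss
# is one way to prioritize mitigations. All figures invented.
scenarios = {
    "remote credential stuffing": {"p": 0.30, "loss": 250_000},
    "physical intrusion":         {"p": 0.02, "loss": 1_000_000},
    "phishing-led compromise":    {"p": 0.45, "loss": 400_000},
}

# Sort scenarios by p * loss, largest expected loss first.
ranked = sorted(
    scenarios.items(),
    key=lambda item: item[1]["p"] * item[1]["loss"],
    reverse=True,
)

for name, s in ranked:
    print(f"{name}: expected loss ${s['p'] * s['loss']:,.0f}")
```

Even a crude ranking like this forces the beliefs behind a mitigation roadmap out into the open, where they can be argued about and updated.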

So, can we quantify the important predictions we make about adversarial behavior in risk?

Yes. It’s a thoroughly explored field philosophically, and well practiced in industry.

Prediction of an adversary: Philosophically

Quantitative prediction of an adversary lies in the field of game theory.

Game theorists develop models that show how difficult situations become to reason about when unreliable or restricted information is shared between parties. Like nearly everything in adversarial forecasting, you can trace the roots of this problem to RAND in the 1950s, with the formulation of the prisoner’s dilemma.

RAND also led the development of methods to combat complexity in game-theoretic problems.

Game theory models are great at demonstrating the difficulty of game-like situations where information is restricted or asymmetric. The mathematical approaches to these problems eventually advanced into Nobel Prize-winning research on decision-making under incomplete information.
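A toy example makes the difficulty concrete. The sketch below is an invented two-player attacker/defender game (all strategy names and payoffs are hypothetical) that checks every strategy pair for a pure-strategy Nash equilibrium, an outcome neither player would deviate from:

```python
# Toy attacker/defender game (all payoffs invented), showing
# the kind of reasoning game theory formalizes. Payoffs are
# (defender, attacker); higher is better for that player.
payoffs = {
    ("harden_web",   "attack_web"):   (2, -1),
    ("harden_web",   "attack_phish"): (-3, 3),
    ("harden_phish", "attack_web"):   (-2, 2),
    ("harden_phish", "attack_phish"): (1, -1),
}
defender = ["harden_web", "harden_phish"]
attacker = ["attack_web", "attack_phish"]

def is_equilibrium(d, a):
    """True if neither player gains by unilaterally deviating."""
    d_pay, a_pay = payoffs[(d, a)]
    best_d = all(payoffs[(d2, a)][0] <= d_pay for d2 in defender)
    best_a = all(payoffs[(d, a2)][1] <= a_pay for a2 in attacker)
    return best_d and best_a

equilibria = [(d, a) for d in defender for a in attacker
              if is_equilibrium(d, a)]
print(equilibria)  # [] — no pure-strategy equilibrium exists here
```

This particular game has no pure-strategy equilibrium at all: whatever the defender commits to, the attacker prefers the other target, and vice versa. The only rational play is to randomize, which is exactly why probabilistic beliefs, not fixed answers, are the natural language for adversarial problems.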

Security suffers greatly from these problems. We don’t quite get to know what our adversaries want to accomplish or need, and they don’t quite know what we have, or how we are protecting it.

It turns out that you don’t need complete certainty: an adversary or a blue team just needs a reasonable belief to start making offensive or defensive decisions.

Everything we do in security is based on some reasonable beliefs about our adversaries, even if we defer our judgment to checklists of best practices written by others who hold those beliefs.

Otherwise, why bother studying the enemy?

Prediction of an adversary by industry

Our industry loves to cite Sun Tzu and opine about the usefulness of threat intelligence.

One cannot, at the same time, excuse failures by saying “our adversaries are intelligent and unpredictable, sorry!”

If you are in security, you are burdened with operational prediction. You must focus on being better at it.

Proclaiming otherwise simply hides our profession’s failures behind non-measurement. Other industries measure anyway.

Here are familiar examples that deal with adversaries routinely, with reliance on subjective probability:

Intelligence: If it were impossible to predict adversarial behavior, the entire field of intelligence analysis would not be attempting to increase its certainty about an adversary’s next moves. We would not invest in counterintelligence to ensure the predictions within our strategies were not exposed and thus made useless.

The CIA’s “father of intelligence analysis”, Sherman Kent, pioneered quantitative approaches to estimative probability. Much of his influential work is now declassified, and the CIA operates a school for intelligence analysis in his name.

Operations Research: Modern military approaches rely on Operations Research (OR) when statistical approaches are fair game for an adversarial problem. There are many examples of probabilistic OR approaches to adversarial problems; one of note is how to create an effective anti-submarine search strategy. These draw heavily on intelligence gathered about an adversary, much of which requires approximation.

You may have heard of other founding feats of Operations Research. Modern examples of its wins are undoubtedly classified, but the discipline continues to shape how scarce defensive resources are best used.

Do these fields inform their practitioners about an adversary? Absolutely.

Could they change quantified beliefs about future outcomes? Of course.

Can our adversaries be studied empirically through forecasting? Yep!
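Forecasts about adversarial events are testable, because once outcomes resolve, the forecasts can be scored. A minimal sketch using the standard Brier score, with entirely invented forecast/outcome pairs:

```python
# Forecasts about adversarial events can be scored once the
# outcome resolves. The Brier score is the mean squared error
# between forecast probability and the 0/1 outcome; lower is
# better. The forecast/outcome pairs below are invented.
forecasts = [0.9, 0.2, 0.7, 0.1]   # believed P(event occurs)
outcomes  = [1,   0,   1,   1]     # what actually happened

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(round(brier, 4))  # 0.2375

# An uninformed 50/50 forecaster always scores 0.25; beating
# that baseline is evidence of real predictive knowledge.
baseline = sum((0.5 - o) ** 2 for o in outcomes) / len(outcomes)
print(baseline)  # 0.25
```

This is what “studied empirically” means in practice: a panel’s beliefs about an adversary are only as good as their score against what the adversary actually did.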

So how does this look in the cyber security space?

See my other essays on forecasting in security.

With several forecasting panels, we’ve measured “in the wild” vulnerability likelihood for several widely used products.



Adversarial risk is very different from other quantified industries in terms of the subject matter being predicted.

But, as demonstrated, it lives within the same realm of subjective, probabilistic approaches employed by those industries. The concepts of operational security, threat intelligence, and defense in depth are more pronounced as a result, but the fact that we can make quantifiable predictions about behavior is not eliminated.

Ryan McGeehan writes about security!

