A topic described as subjective is often considered non-scientific. Risk is a subjective topic, yet it is surely one that science should have a hand in. Can we pursue a science of risk within these limitations?
An exploration of subjectivity and risk in the context of science forces us to confront the scientific method itself and its relationship to risk. The entire practice of cyber security shows its roots in the scientific method. It also shows how the scientific method is, itself, limited by the subjectivity of its operators.
With this essay, we’ll tie these concepts together towards a more comfortable view of our role in the scientific method as security engineers.
A Startup Example
Two VCs are considering an investment in a startup, ACME Co.
VC One: I expect the company will generate $100M in revenue in 5 years.
VC Two: This company is misunderstood. I expect $1B in revenue in 5 years.
Neither VC will actually solve for how valuable this company is. The valuation is based on how much the VC has available to invest, any insider knowledge about the market, and their beliefs about the risks and rewards in ACME Co’s future. Both VCs are likely to be wrong in the end.
Even when ACME Co. goes public, the public marketplace (NYSE, NASDAQ) is merely a pseudo-equilibrium between the perceptions of buyers and sellers. Someone may buy a stock when their personal measurement is higher than the market’s, and may sell when the market’s beliefs are higher than their own.
A valuation is a (hopefully!) scientific or actuarial discussion of subjective probability in terms of belief and value. Risk is the same discussion, except reward is replaced with undesirable outcomes: the probability of an unwanted impact.
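To make the VC example concrete, here is a minimal sketch (with invented probabilities and dollar figures) of the arithmetic each VC is doing implicitly: an expected value over their own subjective beliefs, and the same calculation for risk with reward swapped for loss.

```python
def expected_value(outcomes):
    """outcomes: list of (subjective_probability, dollar_value) pairs.

    Probabilities should sum to 1.0; the result is the belief-weighted
    average outcome under those beliefs."""
    return sum(p * v for p, v in outcomes)

# Hypothetical beliefs: VC One expects modest success, VC Two a breakout.
vc_one = expected_value([(0.7, 100e6), (0.3, 0.0)])  # roughly $70M
vc_two = expected_value([(0.2, 1e9), (0.8, 0.0)])    # roughly $200M

# Risk is the same discussion with reward replaced by unwanted impact:
# say, a 5% chance of a $2M breach loss this year.
expected_loss = expected_value([(0.05, -2e6), (0.95, 0.0)])  # roughly -$100k
```

Different beliefs, same arithmetic: the numbers diverge because the subjective probabilities do, not because either VC computed anything incorrectly.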
1. The Scientific Method and Cyber Security
Cyber security risk involves a quickly fluctuating set of hypotheses.
Those of us working in security desire a principled approach to identifying and mitigating risk. Not checklists, green lock icons, or certifications.
Risk is generally accepted by industry and academia as the probability and impact of unwanted future events. It is a testable measurement. Well-crafted probabilistic forecasts can be retrospectively inspected for error and calibration.
The scientific method is not generally the go-to model in security. This is reasonable! The scientific method is very much focused on producing confirmation or rejection of a hypothesis.
But we can be great at science. We are already great at developing hypotheses!
- Induction: Others are compromised by spear phishing, and we might be, too.
- Deduction: They bypassed encryption! They must have the private key.
- Abduction: They must have found domain admin with lateral movement.
We also follow up on these hypotheses through the scientific method:
- Hypothesis: At least 1 incident (SEV0) this year will involve a remote adversary.
- Experiments: External vuln scans, network segmentation, bastion auth.
- Measurement: Expert forecast in probability (%) of occurrence / year.
- Test: There was (or wasn’t) a SEV0 incident meeting these criteria this year. (Brier score)
- Confirmation: “We would feel stronger about these results with better network telemetry, experimentation, and detection. But, this experiment was useful and we have ideas for the next one.”
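The Test step above can be scored. Here is a minimal sketch (the forecasts are hypothetical) of scoring a forecaster’s track record with the Brier score, where 0.0 is a perfect score and always guessing 50% earns 0.25:

```python
def brier_score(forecasts):
    """forecasts: list of (predicted_probability, occurred) pairs.

    Mean squared difference between each forecast and the 0/1 outcome.
    Lower is better: 0.0 is perfect, 0.25 is what always saying 50% earns."""
    return sum((p - (1.0 if occurred else 0.0)) ** 2
               for p, occurred in forecasts) / len(forecasts)

# Hypothetical track record: yearly forecasts of "at least one SEV0
# incident involving a remote adversary", followed by the actual outcome.
history = [
    (0.70, True),   # forecast 70%, incident happened
    (0.20, False),  # forecast 20%, no incident
    (0.90, True),   # forecast 90%, incident happened
]
score = brier_score(history)  # about 0.047: well-calibrated so far
```

This is what makes the forecast a testable measurement: the error is computable after the fact, and it should shrink as the forecaster improves.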
There is an obvious criticism to apply: Who works like this?! We don’t treat our work like this in practice. End-to-end security engineering meets many of the criteria of science. Yet we don’t define scenarios rigorously, we lean on color coding, letter grades, and non-standard experiments, and we don’t retrospectively review ourselves for error, review our forecasts, or even make forecasts at all.
Practical security work that reduces risk can absolutely fit into scientific methods. We need to learn how to better introduce risk and science topics into our field, like risk-based hypotheses, forecasts, and error.
2. The Importance of Hypothesis in Risk Mitigation
The formulation of a risk hypothesis is just as important as its measurement.
Our primary export in security may very well be the formulation of risk-based hypotheses. This takes the form of abducing, deducing, or inducing bad things that can happen, suggesting an associated belief that they could materialize, along with how bad the damage could be. Risk assessments, vulnerability reports, and threat actor disclosures are all often a subtle form of risk hypothesis in disguise. They (should) suggest the plausibility of certain bad outcomes in order to be useful.
Our industry has demonstrated that we are really good at this. The entire practice of threat modeling, risk assessment, and security research is heavily loaded into asking what can go wrong. Our effectiveness at measuring and mitigating known scenarios (spear phishing, watering holes, credential attacks) is balanced with our ability to abduct unknowns into knowns by way of rigorous logic and research (i.e., the discovery of zero days, undetected breaches, and unattributed adversaries).
The extreme productivity of the security industry’s what-if mentality, and our ability to test and mitigate scenarios, raises the question of whether we can produce these scenarios automatically. Is this where we go? Can we ever produce an objective system that tells us where the risks are, without the assistance of a human expert?
Is this the direction of a risk science? Do we fully automate the discovery of risk?
I would argue no, but that may be OK. Let’s dive deeper.
3. The “Science Device”
Identifying limitations of the scientific method in risk mitigation.
Imagine you come across a Science Device. You can submit a hypothesis into it, and it responds with omniscient authority, delivering the test result:
Penicillin kills bacterial infection. (Science Device: True).
You sit and think: Oh, wow. This is free science! You proceed to measure everything like a mad scientist and you probably become very wealthy.
However, you would soon find yourself with a limitation to your (undoubtedly immense!) powers. This is because you’d only be receiving half of the scientific method’s value. The Science Device is incomplete!
The Scientific Method lays the obligation on the scientist to brainstorm the next hypothesis. Free and unlimited objective measurements only go so far. The hypothesis is not a given, and it is not produced by the measurements themselves.
You may have experienced this limitation while troubleshooting a complex technical problem and losing sight of the right question to ask. Error messages or debugging output in a terminal never tell you what to do next. Hypotheses still have to come from somewhere: you, a person. Even with unlimited measurement capability, you would still need an educated expert to input a series of educated hypotheses to, say, cure cancer.
With such a device, I’d probably test P ≠ NP immediately, wouldn’t I? Consider how you even knew it was worth asking.
This quickly leads to the question: Well… could I solve this with a hypothesis device?
Since we’re already working hypothetically, sure!
4. The “Hypothesis Device”
Discussing limitations in the scientific method due to subjectivity.
A “Hypothesis Device” would theoretically take input about the operator’s goals. It would hypothetically return the next best hypothesis to test in pursuit of the operator’s goals. This is difficult to grasp, even as fiction.
As fiction, it’s simple to imagine the “Science Device” operating and providing True or False outcomes to its owner. It’s along the lines of having a Magic 8 Ball and is well-traveled fiction.
A “Hypothesis Device” has much more to figure out. What would it suggest to the operator as the right hypothesis? Would it suggest experiments?
Many hard questions come to mind, if it did.
Would the device need to respect the limitations of the operator?
The operator would normally require the resources or measurement capability to experiment with a hypothesis. Their lab has limitations. Would such a machine hold back so the operator could actually pursue the hypothesis? Would it suggest less valuable hypotheses?
There are more questions. Does the machine respect the operator’s current intellect? Would it suggest a hypothesis in a more robust language that the operator isn’t familiar with? Or require a workforce that the operator doesn’t have to perform the experiment? Does the machine already know what the operator knows about the problem, or would it have them start from fundamental knowledge?
Given these questions, it follows that the “Hypothesis Device” would have to be limited by the knowledge, desires, and capability of the human subject operating it. Otherwise, the device wouldn’t be useful.
It follows that the device itself is dependent on, or shares information with, the subject. This indicates a subjective result. On the other hand, the “Science Device” may produce the same results, regardless of who is holding it, indicating an objective result.
That’s it. By way of thought experiment, we have suggested that the formation of hypotheses within the scientific method is founded on subjectivity.
This isn’t quite new, but it’s important to accept in dealing with risk: Experts produce and prioritize the hypotheses subjected to the scientific method. Risks are formulated as hypotheses about future undesirable outcomes.
In risk, the brain rules. Let’s bring it back to security.
5. Applicability of Subjective Hypothesis to Cyber Security Risk
Accepting the role of subjectivity in risk mitigation.
There is no way to remove expert talent or experience from risk measurement in its rawest, most principled forms. Doing so would require literally solving the general intelligence milestone in artificial intelligence research and creating a form of artificial subjectivity: an actual, thoughtful being with shared needs and a desire to avoid bad outcomes as we do.
The general intelligence milestone literally aspires to create a subject that operates independently of ourselves to accept and process general information. Among many things, this system would have to be curious, identify opportunities, and suggest, describe, and prioritize hypotheses in order to compete with our ability to assess risk.
This suggests that all approaches to identifying risk cannot evade the brainstorm. Risk assessment can never be complete so long as it is limited by human knowledge. It can only be improved by including the views of diverse, numerous experts, because the scenarios are inherently subjective and these opinions can be shared. However, this direction of improvement faces well-understood limitations: we must make these assessments with limited time, money, and expertise. We can’t create a committee for everything.
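One cheap way to include diverse views without convening a committee for everything is to pool probability forecasts. A minimal sketch (the expert numbers are hypothetical), using a simple linear opinion pool, i.e. a weighted average of each expert’s subjective probability:

```python
def linear_pool(probabilities, weights=None):
    """Combine several experts' probabilities for the same event into one.

    With no weights given, every expert counts equally. Weights can encode
    trust in each expert's track record (e.g., from past Brier scores)."""
    if weights is None:
        weights = [1.0 / len(probabilities)] * len(probabilities)
    return sum(w * p for w, p in zip(weights, probabilities))

# Three experts' beliefs about the same bad outcome occurring this year.
experts = [0.10, 0.25, 0.40]
combined = linear_pool(experts)  # averages to roughly 0.25
```

The pooled number is still subjective, because its inputs are, but it spreads the brainstorm across more than one mind at very low cost.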
Conclusion: In risk assessment, expertise is important. We need to protect and improve the subjective expertise of human minds in order to advance a science of risk. We need to accept the role of subjectivity in the study of risk and subject hypotheses about bad future outcomes to the scientific method. As security engineers, we need to adopt this as our domain of expertise.