Forecasting: Bloomberg’s “The Big Hack”

Measuring the uncertainties of headline cybersecurity journalism

Ryan McGeehan
Oct 5, 2018
From Bloomberg: “The Big Hack”

Note: This forecast closed, and the follow-up is here.

Today, we saw an incredible discussion surrounding Bloomberg’s disclosure about supply chain hardware implants, attributed to China. This spurred lots of investigation and confusion in many directions.

How does a security team interpret such a complicated subject, and measure the risks associated with it?

I organized a forecast to help unravel all of the twists and turns, and to create something measurable that we can better understand together.

Let’s build a scenario we can create forecasts against.

We care about risk, nothing else. What information is being reported that influences our opinions about risk? We want to measure how our knowledge has changed as a result of this journalism.

There are many aspects of this report that a security team doesn’t care about in isolation. For instance, even if not all of the facts in the article are correct, we may still learn from it, and we may still prioritize risks differently.

The journalists may have misunderstood or dropped details about the attack’s specifics. This also doesn’t matter much: if parts of the intrusion story are even roughly accurate, they may apply to us.

The bad actor identified (China) is interesting, but not necessary. While useful as threat intelligence, the journalism retains value so long as any bad actor is eventually identified. Even poor attribution has subtle value, since identifying any actor would confirm most of the article.

All of this complicates the scenario we want to measure against. So, for this forecast, we broadened the scenario’s conditions to capture the essence of our goals:

Should we care?

The conditions in this scenario.

There were several important factors that would influence how most security teams would prioritize the risks related to this article. You may prefer a different scenario, but this one was designed to be broadly adoptable; if you disagree with it, you can write your own.

The title of the scenario read:

Will the supply chain server hardware attacks described in the Bloomberg article be confirmed by Jan 1, 2020?

It notes the conditions that would resolve this future scenario as a “Yes”:

  • Official, on-the-record confirmation of the incident from any Amazon, Apple, or SuperMicro representative.
  • Official, on-the-record confirmation from one of the unnamed victim companies that Bloomberg linked to the attack.
  • Indictment or confirmation of the described incident from a government institution.
  • An officially published hardware forensic analysis from a security vendor confirming the described chip.

Should any of these occur, it would substantially influence how we prioritize risk, given the observation of a recent in-the-wild attack.

Ultimately, though, there’s a catch-all: I would interpret these conditions in the event of corner cases. While forecasting, we quickly ran into some!

For instance, the reporters may have somehow confused hardware with firmware. While unlikely, I would ultimately judge whether this counts as a “hardware” attack, even if only software (firmware) was modified. For purposes of risk management, I think we’d still care, and not bicker over the judgment rules.

This sort of “someone plays judge” aspect is a convenience in forecasting. A stricter environment would do well to improve on a single judge if that were important.

We shot wide, designing this scenario to conclude by Jan 1, 2020.

A forecast of “about even” odds that attacks will be confirmed.

The panel was 44.82% certain that we’ll see some confirmation of the events described above. This was the highest-uncertainty forecast I’ve run so far. There was not much debate, just significant confusion in both directions. Compare this to the Chrome Security forecast, which was 98.36% certain of a “no attacks” outcome last month: this panel knows how to express certainty when it believes it should.

Our adorable panelists

Despite all of the information available, we seem to have no idea what could happen.
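For intuition, a headline number like 44.82% is just an aggregation of individual panelist probabilities. Here’s a minimal sketch, assuming a simple unweighted average; the actual aggregation method isn’t described in this post, and the individual forecasts below are made up:

```python
# Hypothetical panelist probabilities (made up for illustration).
panelist_forecasts = [0.20, 0.35, 0.40, 0.45, 0.50, 0.60, 0.65, 0.72]

# Assume the simplest aggregation: an unweighted average.
panel_probability = sum(panelist_forecasts) / len(panelist_forecasts)
print(f"Aggregated panel forecast: {panel_probability:.2%}")  # ~48.38%
```

Note how individually confident panelists (20%, 72%) can still wash out into a near coin-flip aggregate when they disagree in both directions.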

The discussion had several aspects. A diverse group of 20+ people spanned various areas of security and journalism, split across two Slack channels with different conversations, plus outside participants who were part of neither. A goal of mine is to encourage diversity in the panel and to gather many approaches to the problem.

I’ve taken most of this group through several forecasting exercises over the past months. They have become familiar with the practice of forecasting, tend to think all of the information through, and are open to being convinced of different outcomes. Here’s what we discussed:

The outrageously explicit denials from every target mentioned (Apple, Amazon, SuperMicro, China), with no real accusations of weasel words.

The journalism’s strong claims of anonymous sources at each vendor and in law enforcement, with illustrative language about the actions that took place.

Several technical details about the viability of the attacks as described, and whether this was a firmware hack mistaken for something more substantial. (ServeTheHome, ErrataRob, DaringFireball, qrs, riskybusiness, SecuringHardware)

Lastly, and unfortunately, the journalists themselves were reviewed for credibility. Many panelists mentioned having reviewed their work in the past, and other headline stories by these authors have seen debate and refutation: for instance, a new “Cyberwar” that may have been refuted (wikipedia), and the claim that the NSA used Heartbleed for years before disclosure, which also drew criticism from several sources. There was discussion about whether their work should be dismissed entirely, or whether corrections would still be impactful.

Yet, on the other hand, there was strong support for Bloomberg’s fact-checking capability, which would guard against massive errors.

In the forecasting discussions, the point was not to measure journalists; it was to measure the risks we care about, and to design a scenario around those risks. If 99% of these issues are overhyped, but 1% of the facts are confirmed and change our perspective on risk, then there was value for us, and it’s important to capture it.

What happens next?

This is the most interesting aspect of forecasting: we can only apply judgment to the information we have.

While the forecast shot long and closes more than a year from now, as new information comes in we can forecast again and measure the value of that additional information. It seems this may already be the case.

With all forecasting, the goal is to be “less wrong” or “more right”. It’s meant to structure wild conversations that use non-numerical terms to discuss a numerical subject: uncertainty. If we can measurably begin reducing our uncertainty, we can start turning a corner as an industry.
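One common way to make “less wrong” measurable is a proper scoring rule. This post doesn’t prescribe one, so take this as an illustrative sketch using the standard Brier score, where lower is better and an uninformative 50% forecast always scores 0.25:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

# Scoring the panel's 44.82% forecast under either resolution:
print(brier_score(0.4482, outcome=1))  # ~0.3045 if the attack is confirmed
print(brier_score(0.4482, outcome=0))  # ~0.2009 if it never is
print(brier_score(0.5000, outcome=0))  # 0.2500: the coin-flip baseline
```

A near-even forecast scores close to the coin-flip baseline no matter how the scenario resolves, which is what “we have no idea” looks like once it’s made numerical.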

Ryan McGeehan writes about security on Medium and scrty.io
