Learning from cryptocurrency breaches

Analyzing two years and 62 entries in the Blockchain Graveyard.

Ryan McGeehan
8 min read · Sep 24, 2018

I maintain the Blockchain Graveyard to capture my favorite property of cryptocurrency: Its incidents are usually public and often reveal some clues towards their root causes. These root causes help us perform risk assessments elsewhere, allowing us to forecast the probability that a similar scenario could occur to other organizations with cryptocurrency.

A critical tool used in forecasting a probability is access to historic data… even if sparse and anecdotal. This helps reduce prediction bias and ground an individual’s decision making skills using known reference classes instead of “inside views” based on ephemeral gut feelings.

In our industry’s terms, it is a form of rigor that battles FUD. Here are some observations from the last two years of cryptocurrency breaches, which should help security-minded folks prioritize risks in the space.

Root cause estimations for cryptocurrency breaches

Here are some observations based on all cases in the graveyard so far. If you observe these qualities in an organization that holds cryptocurrency, they should raise your estimate of the likelihood that it will suffer losses similar to those in the graveyard.

First: Dangerous implementation of “hot wallets” and “cold storage”.

Nearly every cryptocurrency startup begins engineering with the use of a “hot wallet”. The risk grows greater and greater until it is breached. (In the wild: Coincheck, Youbit, Bitcoinica, Bitcoin Central)

There is a spectrum of risk in between “Hot” and “Cold” that does not have a lot of common, consistent language. These terms are poorly defined, and this ordinal spectrum of temperature can be highly misleading, which will become apparent as I write.

I tend to use “hot wallet” for circumstances when raw key material is exposed to an application for convenient transaction building. I reserve “cold storage” for non-electronic storage, like paper. But some consider offline, dedicated commodity hardware wallets or air-gapped systems to be “cold”, and so on. The debate is largely moot, as the complexity of any of these options encourages high-risk hot wallets and dangerous practices with most varieties of “cold” solutions.

To illustrate the hot wallet: if you were to withdraw funds from a product you use, the application would immediately create a transaction, sign it with the key material (the hot wallet) stored alongside it, and ship it off to the network for mining.

In practice: prototype engineers generally leave key material in environment variables, at rest on disk, in repository-managed configuration files, in build/launch scripts, in CI/CD, etc., so an application can conveniently sign transactions.
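To make the pattern concrete, here is a minimal Python sketch of this kind of prototype code. The `HOT_WALLET_PRIVATE_KEY` variable and the placeholder signing step are hypothetical stand-ins for whatever wallet library a real application would use; the point is only where the secret lives.

```python
import hashlib
import json
import os

# Hypothetical hot-wallet pattern: the raw signing key lives in an
# environment variable, readable by the same process that serves users.
HOT_WALLET_KEY = os.environ.get("HOT_WALLET_PRIVATE_KEY", "")

def sign_withdrawal(destination: str, amount: int) -> dict:
    """Build and 'sign' a withdrawal entirely inside the application.

    The hash below is only a placeholder for a real signature scheme;
    what matters is that the secret sits in the application's memory,
    so any compromise of the app exposes the key itself.
    """
    tx = {"to": destination, "amount": amount}
    payload = json.dumps(tx, sort_keys=True).encode()
    tx["signature"] = hashlib.sha256(HOT_WALLET_KEY.encode() + payload).hexdigest()
    return tx

# A withdrawal handler calls this directly and broadcasts the result,
# with nothing outside the application standing between it and the funds.
```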

This is opposed to “Cold” storage, where larger amounts of funds can be kept on physical paper, dedicated hardware (Trezor / Ledger), or other means.

Better yet, some infrastructure takes advantage of secure hardware (HSMs for instance) that makes secret data entirely inaccessible to an application, allowing for operations without key exposure at a speed closer to a “Hot” wallet.
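As a hedged sketch of the interface shape this implies (not the API of any particular HSM or vendor), the application holds only a handle to a signer and never the key itself:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    to: str
    amount: int

class RemoteSigner:
    """Stand-in for secure hardware (an HSM or signing service).

    The private key never leaves the device or service; the application
    holds only an opaque key handle and receives signatures back.
    """

    def __init__(self, key_handle: str):
        self.key_handle = key_handle  # a slot identifier, not key material

    def sign(self, tx: Transaction) -> bytes:
        # In a real deployment this would be a PKCS#11 call or a request
        # to a signing service; here it is only a placeholder.
        raise NotImplementedError("signing is delegated to secure hardware")

# The application can build transactions but can never extract the key:
signer = RemoteSigner(key_handle="hot-wallet-slot-1")
```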

However — Cold storage is extremely hard to build in practice when an application is expected to draw from it often, which generally leads to larger hot wallets. Any manual restoration can be high risk (not in theory, but in practice) when done repeatedly.

And, ultimately, it’s never one wallet method or the other. “Cold” storage is supposed to mitigate the majority of the exposure, leaving a small exposure in “Hot”. Many reasons, discussed below, make “Hot” increasingly high risk.

Hot wallets are the preferred developer experience.

  1. Hot wallets tend to get larger and larger, since any sudden usage of your application can deplete your hot wallet. Larger hot wallets reduce service outages due to lack of funds, where you’d need to restore from cold.
  2. Thus, while cold storage deposits are easy, withdrawals are difficult by design. You learn very quickly that you may need to move funds from cold to hot often during a business surge, and that you can’t be expected to manually move funds back and forth without reducing the security of cold storage.
  3. Likewise, modeling “hot is too large!” can become a difficult problem too. Deciding when to offload funds into cold storage may become a complicated business decision to automate (see the sketch after this list). If hot is growing, it may be because demand has changed, and offloading it too quickly might result in an outage.
  4. Cold storage deposit addresses need to be secured, too. An application that has a “cold storage address” wired into it somehow can be manipulated by an attacker, forcing cold storage deposits to go to an attacker or back to the hot wallet. (In the wild: Gatecoin)
  5. Cold storage creation or restoration ceremonies can become extremely expensive and cumbersome if you want to plan them correctly. A botched cold storage restoration is not unheard of (In the wild: Coinsecure).
  6. A lack of skill, or the extra engineering cost, prevents the adoption of secure-hardware-based approaches that reduce the exposure of key material. For every 10,000 Rails developers, there are maybe a couple of competent engineers who are familiar with the interface standards needed to operate secure hardware like an HSM.
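As a hedged illustration of points 3 and 4 above, here is a small Python sketch of a sweep policy that only moves the excess above a target hot balance and refuses any destination other than a pinned cold address. The thresholds, names, and `send_funds` callable are assumptions for illustration, not recommended values.

```python
# Illustrative thresholds only; the real values are a business decision.
TARGET_HOT_BALANCE = 100.0   # float needed to survive a withdrawal surge
SWEEP_TRIGGER = 150.0        # sweep once the hot wallet exceeds this
PINNED_COLD_ADDRESS = "cold-address-from-offline-ceremony"  # hypothetical

def maybe_sweep_to_cold(hot_balance: float, cold_address: str, send_funds) -> float:
    """Move only the excess above the target, and only to the pinned address.

    Refusing any other destination mitigates the Gatecoin-style failure
    where an attacker redirects "cold storage" deposits elsewhere.
    """
    if cold_address != PINNED_COLD_ADDRESS:
        raise ValueError("refusing sweep: destination does not match pinned cold address")
    if hot_balance <= SWEEP_TRIGGER:
        # Growth may simply mean demand has changed; don't starve the hot wallet.
        return 0.0
    excess = hot_balance - TARGET_HOT_BALANCE
    send_funds(to=cold_address, amount=excess)
    return excess
```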

As a result, a cryptocurrency startup that is moving fast usually skips straight to a mostly or entirely hot-wallet-based approach. There are not enough options catering to early startups that address the hot/cold problems above safely while a product is being prototyped quickly, as demonstrated by the lack of adoption and the resulting frequency of incidents we see today.

Conclusion: We will not see a reduction in hot wallet incidents until prototype scale options exist (and become prolific) to mitigate hot wallet breaches. Options taking advantage of “colder” approaches that protect key material are one of two major mitigations needed to defend against wholesale breaches.

Second: Applications being manipulated to drain funds.

A wide variety of exploitation can be observed with the intent to cause applications to misbehave, especially with exchanges. This is another issue with the rapid prototyping of early cryptocurrency companies.

Regardless of how key material is secured, an application is still expected to leverage a wallet to sign transactions and move funds as it directs.

Even if key material is “safe”, a system may sign malicious transactions anyway. While something like an HSM may give your key material a very low risk of exposure, it may still result in a breach if an application is allowed to ask the HSM to sign anything it wants.

For this reason, it’s hard to say that an “HSM” is “Hot” or “Cold” unless you really understand the policies that are being enforced before a transaction is signed.

Thus, as demonstrated in the wild on a frequent basis, if an application misbehaves, funds can be stolen or depleted without secret material ever being compromised. Race conditions, SQLi, insecure direct object references, and session and user authentication issues are all present in attacks against applications as the root cause of stolen or destroyed funds. (In the wild: Flexcoin, Cryptoine, MyBitcoin)

This is another significant cost for a company looking to begin a cryptocurrency product: building an infrastructure layer that distrusts the behavior of an application before signing a transaction. For instance: variable rate limits, requests for human intervention, and complete lockdowns when a percentage of funds goes missing or when off-chain ledgers fall out of sync with the blockchain.

A quick litmus test: If an application is preparing a transaction that would deplete an entire hot wallet — should it succeed? Or, if thousands of small transactions are on track towards depleting the wallet, should that succeed? What if those transactions occur simultaneously: Will they win a race against rate limits?
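To make that litmus test concrete, here is a hedged Python sketch of the kind of policy check that could sit between an application and transaction signing. The class, the limits, and the rolling one-hour window are assumptions for illustration, and a real enforcement point would have to live outside the application it distrusts.

```python
import threading
import time
from collections import deque
from typing import Optional

class SigningPolicy:
    """Illustrative guardrail consulted before any transaction is signed."""

    def __init__(self, max_single_tx: float, max_per_hour: float):
        self.max_single_tx = max_single_tx
        self.max_per_hour = max_per_hour
        self._recent = deque()          # (timestamp, amount) pairs
        self._lock = threading.Lock()   # simultaneous requests can't race past the limits

    def allow(self, amount: float, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        with self._lock:
            if amount > self.max_single_tx:
                return False  # one transaction draining the whole wallet should fail
            # Drop entries older than an hour, then check the rolling total.
            while self._recent and now - self._recent[0][0] > 3600:
                self._recent.popleft()
            spent = sum(spent_amount for _, spent_amount in self._recent)
            if spent + amount > self.max_per_hour:
                return False  # thousands of small withdrawals hit the same ceiling
            self._recent.append((now, amount))
            return True
```

Under this sketch, a single transaction that would deplete the hot wallet fails the first check, and a flood of small simultaneous withdrawals serializes on the lock and runs into the same rolling ceiling.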

Far too many breaches seemed to entirely deplete whatever the application was allowed to sign transactions against.

When these issues are stopped far before extreme losses, it’s often due to some type of per-application policy management that exists near transaction signing. This supports an outcome-centric approach that targets the following scenario:

Our application was forced to misbehave. It moved a regrettable volume of money.

This mindset helps in understanding defenses that don’t depend on a piecemeal approach of enumerating every bug that could cause a malicious transaction, and instead insulate against all of them. Policy enforcement as a component of transaction signing is just one approach, but many other approaches can help mitigate wholesale theft via application error.

Conclusion: An assumption that applications will be forced into misbehavior needs to be common among cryptocurrency entrepreneurs. It is currently too expensive and complicated to apply policy enforcement that controls transaction signing outside of an application. There are not enough vendors or open source infrastructure projects demonstrating that this is appropriate to implement in a prototype.

Again, this needs to exist at prototype stages for cryptocurrency development before we’ll see a reduction of incidents in the wild.

Third: Cryptocurrency employees are in the crosshairs.

Engineers and executives are often targeted in attacks against cryptocurrency companies. Password reuse, spearphishing, and social engineering with a focus on specific individuals are seen quite frequently in the graveyard. (In the wild: Vircuex, Bitinstant, Allcrypt, Bitstamp)

Also seen is lateral movement by attackers through cloud accounts and email inboxes to obtain access to hosting (Linode, AWS, etc.) or network (registrar, DNS) infrastructure, usually starting from a single credential they’ve gained, typically from an employee’s personal or work accounts online.

The right employee victim at a vulnerable cryptocurrency company can bypass almost every traditional security control protecting funds. Only institutions that practice reliably enforced and consensus-driven (multiple-approval) security policies can stand up to an engineer being entirely compromised. Nearly all early companies have some number of engineers with widespread root access or the ability to build and deploy applications or infrastructure at will.

Until a company can eliminate its reliance on everything-authorized employees, it should concern itself with reducing the likelihood of employees being compromised.

Conclusions: Cryptocurrency companies need to deeply concern themselves with the online account security of their employees. Unfortunately, this must include their personal accounts. This means rigorous onboarding/offboarding procedures that scrutinize password and multifactor hygiene in corporate and personal accounts, and perhaps adopting single sign-on products sooner than most organizations would.

Lastly, trends: More frequent incidents from smart contract failures.

I’m always asked about upcoming trends, and my bet is on smart contracts.

Smart contract related issues were high impact and infrequent in the earliest data in the graveyard: Mostly from the DAO.

These issues seem to be occurring more frequently, and they are also beginning to involve cases where the contract owner, or the wallets that administer contracts, are breached rather than the contract itself. So while an immense amount of opinion and effort is applied to the reliability of how contracts are written, it’s becoming obvious that failures also occur when the wallets that maintain these contracts are compromised. Not only can those wallets operate a contract in a regrettable way, contracts can be “upgraded” to behave in a different, malicious way. These are all natural next steps just barely outside the blast radius of the smart contract failures we’ve seen so far.

Conclusions: A cryptocurrency flavor of application security has been born with the advent of smart contracts, and the burden of wallet security has increased for those who manage them. Breaches of smart contracts will rise in proportion to their success until we figure out how to bring better wallet security down to the prototype developer. Standardization of secure boilerplate contracts will help reduce the need for DIY contract development.

Conclusion

To greatly reduce the rate of cryptocurrency incidents in the wild, cryptocurrency developers at the earliest stages need access to tooling that mitigates wallet breaches from private key exposure and application risks while still allowing development speed in a prototype phase.

The incidents will continue unless industry support bolsters these areas:

  • Prototype-friendly APIs leveraging secure hardware.
  • Transaction policy engines that mitigate against application misbehavior.
  • Startup vigilance against targeted employee risks.

It will also benefit VCs and angels to ask their investments about these areas.

Otherwise, you may find the value of your investment stored in the environment variables of a web server.

Ryan McGeehan writes about security on scrty.io.
