Starting Up Security: From Scratch

Ryan McGeehan
May 9, 2016

Did you just get asked to own “security” for your company?

We can mimic what other companies do and copy together a similar security program of our own. This is certainly better than nothing, though it won’t take into account how your risks differ from everyone else’s.

With some work, we can build something better. This article is about exploring risks before we start building.

Consider this a prequel to Starting Up Security.

Security is more of an attribute of a company than an organization within it. A solid understanding of risk will help you build a program that is inherently cross-functional, with less resistance to change.


To truly start up security, you should be able to articulate a strategy based on the risks and threats around you.

A good Chief Security Officer can use this approach to arrive at a program like the one described in Starting Up Security. This article covers their first month on the job: selecting and prioritizing risks to mitigate.

Let’s design a security program from scratch.


Start with a series of conversations around risks, threats, and fears, followed by a complete enumeration of the technology we depend on. This is called a risk assessment and very thorough frameworks exist to perform them.

Careful: Don’t limit security to a single risk, product, or technology. Narrow programs are formed this way.

A risk is the potential that something of value will be lost. This is sometimes calculated by considering the impact of a loss and the probability of a loss.

“The lock to the vault could be broken.”

A threat is the thing that will cause you to lose it.

“A cat burglar with a set of lock picks and plenty of time could break that lock.”

A fear is a perception of risk. Fears may help you discover risks, but they might not lead you to the right threats; they may instead reveal a vulnerability.

“I’m afraid someone could rob the vault because sometimes we forget to lock it.”


Interview your veterans and leaders and ask them about close-call incidents and war stories so far. Ask about any embarrassing practices that keep them up at night.

Find out who has the most privileged access in the company. Interview them. Ask them who else has what access, how often they actually use it, and who shouldn’t be trusted with the same. How could they abuse it, and what would be abused? Who would notice?

Ask executives the same questions. Ask them what risks they think you’re supposed to be working on. What do they think are existential risks to the business? You’ll probably find that “Security’s job” is totally dependent on who you ask. It will end up being close to a sum of these views.

Who hires and fires people at your company? How is access given and taken away? These roles influence security in a growing company pretty significantly.

For interviews with communications or PR: swap your company’s name into the headlines about breached companies on Brian Krebs’s blog, or into an FTC enforcement action. Once they see what a bad headline looks like, have a conversation about what you could or could not recover from.

What do your users view as your biggest risks? How could you poll your users? They will broadly vote for “duh, their data” as your biggest risk, but which bit of data specifically? Is data inconsistently stored and secured? Would a criminal hacker say your risks are any different? What about a nation state in conflict? What would it want to breach?

Output: You’ll be informed of how your company perceives risk.


Spend time enumerating the anatomy of all technology at your company. If you can interview every major technology owner, from IT to software engineering to infrastructure, do it. Get a high-level view of how data (as an abstract concept) moves from place to place, in and out of your systems. This movement of data around your architecture is how a good security mind thinks.

When do employees gain and lose the authorization to access which data?

Where is that data geographically located when at rest? Is it encrypted? Where are the keys? Who has access to those keys? Who made those keys?

As you go about this, keep an eye on the perception of monetary value in the company. Try to imagine what would be sold first if you were to liquidate. You can imagine the monetary loss from sporadic power outages and hardware failures, people getting hit by buses, or a fatal bug in just the right bit of code. What would be valuable to a competitor?

How is new code written? How are bugs fixed? How is code released and shipped? How are updates pulled down to a client?

Who are the vendors that our company is dependent on? Your IT team will know of a few. A contracts lawyer may know more. Your finance team may know most. Then figure out what vendors are most important to the business or to your users and customers.

At this point the theme of these questions should become clear. You simply want to know how data gets around.

Output: You’ll begin to understand the company as a set of risks and moving parts.


Once you’ve become more familiar with the internals of the company and its risks, you can model the threats to each risk.

Enumerate your probable, practical threats. There are many threat modeling approaches, some designed for startups. What (or who) are the most probable ways your biggest risks will be threatened?

  • Risk: Unauthorized access to our customer database
  • Threat: Remote intruder of engineer laptop with production SSH keys
  • Threat: Engineer leaving our company for a competitor with the DB
  • Threat: Thief who steals a Sales Rep laptop while traveling with Excel data

A way to expedite this is to become familiar with common types of attackers. Most modern threats are weaponized to take advantage of common opportunities:

  • Everyday password leaks
  • Breached employee laptops
  • Breached infrastructure you expose to the internet
  • They are already trusted! (insider threat)
  • They are network omniscient (MITM)

These very common threats result in very common mitigations. This is why a copycat approach to security is still viable, though incomplete.

Consider how Target likely appreciates vendor security very differently after attackers got in through their HVAC vendor; that area would get far more consideration in their threat modeling. Or how Bitcoin companies worry about cryptographic threats to key material more than others do.

Consider your own oddities that could surface unique threats. This is when your security program starts to look different than everyone else’s.


Now that your risks have some well understood threats, organize this global set of risks for priority. Start with some example groupings:

  1. Risks that should never ever be threatened. These should include very agreeable risks that are existential to the business. You’ll eventually make these your biggest areas of concern. You’ll seek overwhelming buy in for support for projects, budget, and hiring. (“Credit Card data will never ever be accessed.”, “A backdoor will never ever be added to our codebase”, “An anonymous user will never ever be identified”)
  2. Risks that we should never ever be unable to respond to. These may be areas where you are willing to trade off preventative controls as long as you have strong visibility for a retroactive response. Example: “We allow employees ‘admin’ access to laptops, but only if”:
  • “OSQuery packs are installed”
  • “It’s receiving remote updates from IT”
  • “We can remotely image them”
  3. Everything else. Write these down, then intentionally ignore them. We’re not going to focus on our employees being shoulder surfed, or on a boogeyman leaving a malicious USB key on a desk. Those are big-company security problems.

Present the results from #1 and #2 to your leadership, your peers, and anyone involved with this process. You want consensus that these risks are worth mitigating and that cooperation will happen. Security is a horizontal responsibility, not an organization. You depend on this consensus to keep interest in security from being temporary.


With our input of risks, we need to decide on an output of mitigations. We’ll choose the mitigations that most deeply reduce impact and probability across as many identified risks as possible, and put them earliest on the roadmap.

Example: If an engineer’s laptop could get owned by a drive-by exploit… should I focus on hardening their host to prevent exploitation, detecting misbehavior on the network after exploitation, or multi-factoring their authentication methods on the assumption that their credentials have been stolen?

Answer: Multi-factor will defend against so many other threats we’ve predicted, that it has become the highest priority mitigation.

One approach is to consider the “kill chain” for your threats, and where many of them overlap.


Most attacks (even “sophisticated” ones) are preceded by a series of events that can be detected or prevented. For instance, they all want some level of persistence (ex: malware), some method to move laterally (ex: domain / account takeover), some way to discover employee targets (ex: Searching LinkedIn).

If you consider hundreds of varying attacks, there may be very common steps that all of them would implement. A stolen key for SSH authentication, for instance, would be required for many other attacks that pivot deeply into your infrastructure. Thus, multi-factor authentication becomes highly valuable as it would prevent a wide constellation of attacks involving that form of lateral movement.
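This overlap can be made concrete with a simple count: map each candidate mitigation to the threats it would block, then rank by coverage. A toy sketch (the mitigation names and threat mappings here are illustrative assumptions, not a real threat model):

```python
# Which enumerated threats does each candidate mitigation block?
# (Illustrative mappings only; a real model comes from your own threat list.)
mitigation_coverage = {
    "multi-factor authentication": {
        "stolen SSH key", "password leak", "phished credentials",
    },
    "host hardening": {"drive-by exploit"},
    "network anomaly detection": {"lateral movement", "drive-by exploit"},
}

# Rank mitigations by how many distinct threats they address.
ranked = sorted(
    mitigation_coverage.items(),
    key=lambda item: len(item[1]),
    reverse=True,
)

for name, threats in ranked:
    print(f"{name}: blocks {len(threats)} threats")
```

With these toy numbers, multi-factor authentication ranks first, which mirrors the reasoning above: it sits on a step that many distinct attacks share.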


Security teams at competing companies usually want to work with each other and share information on known adversaries they’ve caught red-handed. This is a great way to avoid speculative threat modeling.

Collaborate with other security teams.

Example: If a different security team in your industry just suffered a product leak to a common social engineering gang, it may be a great time for a tactical email to employees: a heads up to be suspicious of email, plus a place to report it. If the gang reuses a domain or technique, you may as well attempt to block it.

Generally, though, find security teams near your company in technology or industry, and take them out for pizza.


Nearly everything in the articles I’ve written will slow down the company. End of story. There are bright spots every so often where a security team speeds things up, but that’s rare.

No one wants to go for a run before work, or eat their vegetables, or see the doctor once a year, or regularly review their credit card statements. It’s all friction against how we want to spend our day. The very same goes for people trying to do their jobs when security comes around with a bunch of changes.

But, in hindsight, these preventative measures are often worth it.

That means when you ask engineers to change their workflow for this vague and nebulous “security” thing… consider what you sound like. If you arrive at a set of risks as a group, this process is smooth. When you can describe the risk and threats to the person impacted by the change, it becomes an implementation challenge and hardly a political one.


If our roadmap generates logs, who looks at the alerts we raise? Do we hire a team of junior analysts to watch logs all day?

First, avoid solutions that centralize alerts at all. Try to ship alerts directly to the employees who are impacted and let them figure it out. Or announce high-value alerts to a Slack channel where a group can casually discuss them.
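Decentralized dispatch can be as simple as a lookup from alert type to the person or channel that should see it. A hypothetical sketch (the alert types, destinations, and routing convention are all invented for illustration):

```python
# Route each alert to the impacted employee when possible;
# fall back to a shared channel for group discussion.
# (Hypothetical alert types and destinations.)
ROUTES = {
    "new_ssh_key_added": "direct:key_owner",          # the affected engineer
    "admin_login_from_new_country": "direct:account_owner",
    "osquery_pack_missing": "channel:#security",      # group triage in Slack
}

def route_alert(alert_type: str) -> str:
    """Return the destination for an alert, defaulting to the shared channel."""
    return ROUTES.get(alert_type, "channel:#security")

print(route_alert("new_ssh_key_added"))     # direct:key_owner
print(route_alert("something_unexpected"))  # channel:#security
```

The point of the default is that nothing silently disappears: anything without an obvious owner lands somewhere a group will see it.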

For the rest: the best answer I’ve seen here is a rotation. Lots of small engineering teams do bug rotations, support rotations, etc., on a weekly basis. If there are no active alerts or fires for the current rotation, that person makes the rotation easier for the next person.
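A weekly rotation needs nothing fancier than the ISO week number modulo the team size. A minimal sketch (the teammate names are placeholders):

```python
import datetime

ROTATION = ["alice", "bob", "carol"]  # placeholder teammates

def on_call(team: list[str], date: datetime.date) -> str:
    """Pick this week's alert owner from the ISO week number."""
    week = date.isocalendar()[1]
    return team[week % len(team)]

# May 9, 2016 falls in ISO week 19, and 19 % 3 == 1.
print(on_call(ROTATION, datetime.date(2016, 5, 9)))  # bob
```

Because the week number is derived from the date, everyone can compute the schedule independently; no one has to maintain a spreadsheet.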


An early security team will own a fire hose of incidents, bugs, risks, and opportunities for improvement. Host infrequent meetings with the right leadership, as part of your roadmap, to discuss the top 1% of nasty risks you’ve uncovered. For a wider audience, newsletters, or just a regular drip of “Hey @here, check out this nasty bug someone found!” in Slack will go a long way.

One approach is to avoid committing to share this content formally, and instead commit to social time with the teams that carry continuous risk. This content will share itself through proximity between teammates. Take ’em out to lunch.


Security carries a burden of needing to be constantly secure. However, high performance groups and individuals (product teams, athletes, etc.) usually “peak” at designated times to meet a demand. Security teams awkwardly peak after an incident, or after a red team.

A boxer has the advantage of knowing when their fights are, and can plan for them. They’d still be pretty nasty if they had to throw down in the meantime.

Scheduling “incidents” into a roadmap will have a similarly positive effect.


The only time you truly know how you’re doing is if you’ve been hacked. This sucks. Metrics in security have been heavily explored, but are borderline useless compared to the leaps that come from incidents.

The answer is to introduce red teams, tabletop exercises, or some other similar “hostile” event, as a milestone near the end of a roadmap. Challenge the controls you’ve just built somehow. Let this be a chance to fire the new guns at something. Otherwise, it’s a serious bummer to build security infrastructure without a guaranteed attacker.

It’s almost unfair that product teams get instantaneous feedback through growth metrics and revenue and all sorts of glorious engagement data. Product teams also work off of non-arbitrary deadlines based on a market, user demands, competition, etc. It makes security project deadlines look made up, or made with a continuous “we needed this yesterday” attitude.

A scheduled attack that is timed with a roadmap helps motivate a true production mentality and improve focus. A team will know that the mitigation they are building could catch a bad guy right away, which is a very satisfying aspect of working in security.

Afterward, you have a perfect time for a Hackathon, a pet project, or a tabletop exercise to revisit risk and threat models. It is a great way to iterate while still hitting peaks as a team.

It is greatly preferable to plan for peaks instead of expecting greatness 365 days a year.


It’s important to start a program with a consensus on the wide view of risks and threats. We learn how it all moves, we decide on our risks, and we agree on the fixes as a group. This approach helps avoid cold feet or negligence once it’s time to actually do the hard security work. Once you’ve put this momentum into a roadmap, it should be easier to run a security program that goes to war on a schedule.


I’m a security guy: former Facebook and Coinbase, currently an advisor and consultant for a handful of startups like HackerOne. Incident response and security team building are generally my thing, but I’m mostly all over the place.