Risk and Performance Management

Ryan McGeehan
9 min read · May 5, 2020


Risk measurement quickly raises questions about management… but not about risk management. Rather, about managing the performance of the people who manage complex risks.

My writing on risk measurement often gets attention from people in management roles. That audience wants methods to manage the performance of defenders with risk-based measurement. I decided to write out my views on that thoughtfully.

This will only partially read like a cybersecurity essay. Most management and measurement concepts are not specific to our sector — however, the nature of risk makes performance management even trickier. Let’s explore why!

We’ll start with the premise that organizations want to perform well. Some function within an organization must hire, maintain, develop, and lead a team towards a mission or goal. This leads to questions like these:

  • Are we making progress towards our goals?
  • Do we need to make changes to our team?

Measurements become a highly sought-after source of influence for management as they answer those questions. However, the subject matter of risk has a treacherous relationship with employee performance.

In short: this essay will argue that we should keep risk measurement separate from performance measurement.

1. Confirming the quality of performance over time
The quality of confirmation differs depending on the work being done.

Some work has measurement opportunities. These measurements support a manager’s opinion of performance. Some examples:

Frequent: Produces its objective repetitively or continuously. A group of salespeople closing multiple deals on a quarterly basis.

Expected: Success or failure has a time frame. A political campaign’s candidate is expected to succeed or fail by a certain date.

Direct: Produces an objective record. A baseball pitcher’s strikeouts are directly written into a record without debate of how they were classified.

Observable: Data clearly suggests success or failure. A building can be observably completed. If it collapses, it observably failed.

Work that fits these criteria invites less debate about whether it is performing well or not.

Classifying security work for performance

Let’s discuss blue teams focused on avoiding large data breaches. This security work sits at the other end of all four of these classifications. It is infrequent, unexpected, indirect, and often unobservable.

Infrequent: Each day that passes without a breach is not a victory that suggests a breach won’t happen in the future, or hasn’t happened.

Unexpected: We aren’t signed up for prearranged match-ups against our adversaries. There are no expectations of a pass/fail moment at any time.

Indirect: It is impossible to directly measure the probability of unwanted events; that would require omniscience. While our beliefs are testable… we’re limited to approximate, subjective measurements.

Unobservable: The only method we have to suggest the absence of a breach is to develop evidence of that absence. A breach can happen silently and unobserved… and yet it relates to our performance, whether we observed it or not.

These qualities lead to difficulties in assessing performance. Managers who are more familiar with types of work that produce lovely measurements may struggle with areas of work that offer sparse opportunities for measurement.

Infrequent, unexpected, indirect, and unobservable.

These classifications put us knee-deep in the performance debate. We are set up to continue! How can a security organization discuss performance if these classifications are so resistant to measurement?

To progress with this discussion, we have to bring up knowledge work.

2. Manual Work and Knowledge Work
Knowledge workers plan their work and define “success”.

All work is some combination of knowledge work and manual work.

This draws heavily from Drucker’s interpretation of knowledge work.

A manager delegates decisions to a knowledge worker with a wealth of expertise or knowledge and grants them flexibility to operate on that knowledge. A knowledge worker is expected to understand a problem and recommend solutions. Knowledge work requires substantial trust and subject matter expertise. Some example tasks for a knowledge worker:

  1. Articulate a problem
  2. Suggest a budget
  3. Propose a solution

Manual work, however, operates on tasks that are output by knowledge work.

For instance:

  1. Write code to a specification.
  2. Follow a runbook for a procedure.
  3. Fulfill a delivery contract of goods from point A to point B.

Everyone performs some ratio of both. However, knowledge work defines what success is and creates the steps towards achieving it. This often results in selecting a KPI or other success criteria.

Trust between a knowledge worker and their organization is crucial. After all, a knowledge worker could define destructive tasks like “set all of our products on fire” with a KPI of “warehouse temperature.” These examples, of course, are outlandish. But we’ve all worked with non-performers before. A poorly performing knowledge worker can prioritize damaging work. Trust is key.

A knowledge worker is also trusted to change a KPI in the face of new information. In reality, these decisions come from team conversations and some amount of consensus with management. Knowledge workers simply have an increased influence in these discussions.

This suggests that a knowledge worker is responsible for how they are evaluated. They are relied on to suggest the goals they will pursue.

So how do we manage performance of a knowledge worker, if they’re the ones defining success for their role? This is a hard problem that has been written about by management authors for decades.

3. Experts define what they set out to accomplish.
The Objective and Key Result for knowledge workers.

A knowledge worker should routinely define and accomplish the tasks that lead to their stated goals. In this way, they effectively manage themselves. The knowledge worker produces realistic approximations of what they’re capable of and what is possible, ultimately followed by the execution of those tasks within the time frames they estimated.

Reasonably making progress with OKRs

The Objective and Key Result (OKR) plays well with the knowledge worker model. It allows the knowledge worker to define their key results, which may fluctuate depending on how they want to attack their objective. It also requires some form of testable measurement for the overall effort to be considered successful. In addition to defining these OKRs, a knowledge worker should also behave like an expert.

Also see Measure What Matters for a discussion on OKR philosophy.

  • An expert should not require substantial supervision in making their objectives.
  • An expert should not constantly fluctuate their Key Results, backed by conveniently available pseudo-justifications for what they couldn’t foresee.
  • An expert should demonstrate that their knowledge is still developing from one performance cycle to the next.

From the OKR perspective, management has an easier time observing a knowledge worker’s progress through their commitments, which provides insight to draw upon in a performance conversation.
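To make the idea of a testable key result concrete, here is a minimal sketch in Python of one way an OKR could be recorded so that progress is checkable rather than debatable. The structure, objective, and numbers are hypothetical illustrations, not a prescribed tool or method from this essay:

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str  # the testable measurement
    target: float     # what "done" looks like
    current: float    # where we are today

    def progress(self) -> float:
        # Fraction of the target achieved, capped at 100%.
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    statement: str
    key_results: list[KeyResult]

# Hypothetical example: a blue team objective with measurable key results.
okr = Objective(
    statement="Reduce the likelihood of credential-theft breaches",
    key_results=[
        KeyResult("Employees enrolled in hardware security keys (%)", 100, 62),
        KeyResult("Production services behind single sign-on", 40, 31),
    ],
)

for kr in okr.key_results:
    print(f"{kr.description}: {kr.progress():.0%} of target")
```

The point of the sketch is only that each key result carries a measurement anyone can verify; whether the objective was the right one to pursue remains a knowledge-work judgment.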

However, a strict view of the OKR philosophy still avoids mapping OKRs directly to performance. OKRs should be inherently ambitious and tolerate failure. By design, an OKR should never get an A+, or even attract a grade to begin with. I agree with this. But it means there’s more to understanding the performance of knowledge workers.

OKRs provide a manager with a map from which they can pursue other tools, but they are not a performance framework in themselves. They’re only useful alongside other evidence.

Organizational Feedback

The need for peer review is elevated due to the friction involved with measuring a blue team’s performance. Blue team work being infrequent, unexpected, indirect, and unobservable means that the role of knowledge work is maximized. The Bay Area has some frameworks that focus on performance assessment of knowledge work.

Lots of discussion of Facebook’s approach, and similarly Google’s, is publicly available. Both are called the Performance Summary Cycle or Performance Review Cycle.

Peer review at scale is hard. I can’t pitch it without caveats. Peer review produces toxic cultural waste: it is noisy, biased, time-consuming, and often political. However, having been a part of it myself, I believe it’s the best available tool when the work being produced resists straightforward measurement.

Peer feedback is impossible to get right for everyone. When it works, feedback is extremely valuable in any context where learning is valued.

Sidenote: If you can suffer through the self-congratulatory tone of the book, Work Rules has a thorough discussion of Google’s PRC.

So, why the heck are we OK with something toxic? Instead, why can’t risk organizations be strictly measured against some objective, unbiased risk measurement? Can’t we just say less risk is better performance? And if so, just measure the risk, reduce it, and move on with our lives?

Let’s talk about where any numbers associated with risk come from to begin with.

4. Risk experts maximize the role of knowledge work.
Subjective measurement pushes knowledge work to its limits.

How much did we sell? How was our uptime? How much did we grow?

These are objective measurements. Their trends suggest that the decisions of experts were good, in hindsight. This may increase trust. Example: If we observe increased uptime… it suggests that the OKRs that our SRE team decided to pursue were sound. Very good!

As we discussed, risk-based knowledge work is concerned with future events. Oftentimes, we are valuable due to our beliefs regarding risks in an environment. Risk is ultimately a subjective topic.

Though it is subjective, there is a measurement we care about… it’s just not historical. It’s best represented by a subjectively assigned probability, like a forecast. As an example, we may assign a percentage-based belief that a regulatory disclosure event will happen within a certain time frame… for instance, the next year.
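To make that concrete, here is a minimal sketch of recording such forecasts and scoring them once outcomes are known. The essay doesn’t name a scoring method; the Brier score used here is one standard option for probability forecasts, and the scenarios and numbers are purely illustrative:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.
    Lower is better; an uninformative 50% forecast always scores 0.25."""
    return (forecast - outcome) ** 2

# Illustrative forecasts: (scenario, assigned probability, actual outcome)
forecasts = [
    ("Regulatory disclosure event within the next year", 0.05, 0),
    ("Lost laptop containing customer data within the next year", 0.30, 1),
]

for scenario, p, outcome in forecasts:
    print(f"{scenario}: forecast={p:.0%}, outcome={outcome}, "
          f"Brier={brier_score(p, outcome):.4f}")
```

Over many forecasts, a consistently lower average score suggests better-calibrated beliefs, though the individual probabilities remain subjective judgments.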

This makes our form of knowledge work very special: it maximizes the role of trust. Similar to a lawyer or a doctor, we require a great deal of trust to function correctly in our roles. Our assessments of future outcomes often rest on our own word.

This reality places us in a surprising position of self-assessment. We are often responsible for mitigating risks at the same time as rigorously measuring them.

Problems grow when we couple risk measurement and performance together. Let’s talk about how this goes awry in the performance conversation. We’ve all experienced it.

Risk has little to do with the failures of metrics-based performance management.

5. Measurement idolization corrupts knowledge work.
When incentives harm the value of expertise.

The loan shark tells their debtor: “Bring me the money, I don’t care how!”

This workplace terror may be described as toxic performance management of KPIs. Books and blogs dive deep into the issue. Television has made it interesting. It is often in the headlines. It’s often blamed for speeding tickets.

Goodhart’s Law states:

When a measure becomes a target, it ceases to be a good measure.

Organizations already have a hard time deciding how to translate measurement into overall performance… risk or not. As we discussed previously… the collection and dissemination of feedback through performance cycles is tough and expensive. The performance cycle at Bay Area tech companies is a famously overwhelming time sink for employees, and a severe emotional drain. Managers desire objective measures of performance to save time and reduce debate.

Any sane manager would love to trade this cognitive terror for an easy, defensible, numeric dashboard they can sort employees by. The draw towards metrics-based management makes total sense.

The challenge I set out to explore in this essay is now fully laid out… the conflict can be examined with three truths:

  • Knowledge workers are paid to provide risk measurements.
  • Knowledge workers also measure the risks they mitigate.
  • Performance management can corrupt risk measurement.

When risk measurement and performance are tightly coupled, we get a circle of absurdity: hire risk experts to tell us good news, and reward them increasingly for doing so.

I’ll wrap this up with a final reference that helps model the point. Bill Walsh has written in this arena of distancing performance from measurement. He succinctly phrases the problem as a football coach would: the score will handle itself. In his perspective, a standard of performance is core to success. Specific activities are under the direct control of the team and should meet that standard. Winning and losing cannot be controlled, only made more or less probable. Focusing on wins, losses, or the scoreboard diminishes the value of the activities that contribute to the probability of victory.

This is similar to an OKR-centric approach to risk. Focus on the objectives and key results, and the risk will handle itself.

Conclusions

  1. Risk based knowledge work resists simple performance measurement.
  2. Knowledge workers are trusted to measure and manage themselves.
  3. OKRs and peer reviews are crucial for evaluating a knowledge worker.
  4. Objective measurement is efficient, but risk is a subjective concern.
  5. Overly quantitative management becomes subject to Goodhart’s Law.

Ryan McGeehan writes about security on scrty.io
