
Beyond spreadsheets: Driving developer productivity improvements using goals, signals, and metrics

So many developer productivity journeys start with a spreadsheet.

Quickly or eventually, that spreadsheet evolves into a tool for tracking metrics across teams. Next thing you know, you’re tracking a dozen metrics per team, mostly manually, discussing them regularly without a clear end in sight, and… Well, let’s just say I haven’t seen this end particularly well.

If you’re trying to solve an engineering productivity problem with a spreadsheet, no judgment. Metrics-driven opportunity discovery is a super-common and super-valid technique for lots of problems — if you were trying to reduce cloud spend, the only place to start would be to look at spend metrics and slice and dice them in a lot of different ways to find broad or particularly impactful opportunities.

This approach can seem equally rational when you’re trying to improve an engineering organization’s productivity. But in this case, casting a wide metrics net scatters your energy and attention, making you less likely to succeed. When you are looking at a dozen metrics across multiple teams, you’re going to see a lot of interesting-but-conflicting information, and a lot of information that requires nuanced context to understand fully.

Trying to identify productivity opportunities via spreadsheet is like asking a product manager to build a product without ever talking to a customer. Dashboards and spreadsheets are great for driving visibility and accountability — once you know the goals you’re trying to pursue — but they’re pretty poor for discovering specific opportunities at any level. The insight you need to drive productivity change won’t come from columns and rows; it will come from people talking to each other, at the team level and across the engineering organization.

What brought you here?

By the end of this post, I want you to have a framework for setting goals that will drive real, org-wide productivity improvements. But first, I need to understand how you got here.

Maybe you feel like things aren’t running as smoothly as they used to. Maybe it’s harder to know whether work is flowing well in the new hybrid office. Maybe your board feels like you’re spending a lot of money on software engineers and is struggling to understand the value you’re getting in return.

For each of these scenarios, we can think of metrics we might measure to prove or disprove that your concern is valid — a whole spreadsheet of them, in fact, if we try hard enough and ask enough people to contribute their ideas. But what if we reframe the question? What if, rather than tracking a slew of numbers to affirm the problem, you restated the problem as an objective? All of the problems above can be restated something like this:

We need to demonstrably increase the productivity of the software engineering organization.

This doesn’t say what you’ll do to address the problem, how you’ll measure “productivity,” how the problem came to be, or where it will fall in the grander prioritization process — none of that matters right now. Right now, fuzzy as it is, the problem statement does its job by stating the problem: we need to improve productivity, and we need to make that improvement demonstrably visible to senior leadership.

How will you do it? Buckle up: it’s time to put on your product manager hat (you have one, somewhere, in the back of your closet, right?) and talk to your users. Yes, your productivity effort has users: the software engineers at your company.

The wisdom of the crowd

Your software engineer users are one of the best sources of insight you have. Set the spreadsheet aside for a moment, and go talk to them.

Have as many in-person conversations with small groups of engineers — including veterans and new hires, and both product and backend or platform teams — as you can manage. You could do this via a survey, but I strongly recommend having at least some of these conversations in person with a few teams, because that environment tends to generate usefully divergent ideas.

Use prompts like the following:

  • If we could improve one thing about the tools you use, what would it be?
  • What’s an annoyance for developers today that could become a real risk in the future?
  • What would help the company learn more quickly?

It should go without saying, but do not frame any of the questions in terms of squeezing more work out of engineers while all other things remain equal. Whatever your goal is, it’s not that.

Many or even most of the ideas you’ll hear will have technical solutions, but don’t tune out people, process, and political challenges that you wouldn’t necessarily solve with code. Increasing engineering leverage without spending engineering time could be a huge win.

Goals, signals, metrics

Once you have a collection of ideas … pick one? It doesn’t have to be the “best” one, just one that gets a sufficient number of head nods when you’re talking about it. It might be some small part of some bigger effort. If this is your first time taking this approach, prefer ideas that can be executed on quickly and with minimal disruption [she says as though this is easy to figure out]; avoid ideas that will require un-incentivized work by product teams, or that will require more than two quarters of execution.

Even the best idea won’t survive very long without a metrics-based story for how you’ll prove the idea was good. Metrics are necessary and important, but they should be the output of a thoughtful and deliberative process about what “better” would look like.

The goals, signals, metrics framework is useful here — and note that “metrics” comes last.

  • Goals focus on outcomes, not the anticipated implementation.
  • Signals are things that humans can watch for to know if you’re on track.
  • Metrics are the actual things you will measure and report on to track progress toward the goal.

In this framework, first, you agree there’s a problem worth solving. Then, you set a goal that, if achieved, would be clearly understood as progress toward solving the problem. Next, you have the “I know it when I see it” conversation — what statements, if true, would have everyone nodding in agreement that you were making progress? These are your signals.
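To make the distinction between the three layers concrete, here’s a minimal sketch in Python. Everything in it — the class, the goal statement, the signals, the metrics — is a hypothetical illustration, not a recommendation for your organization:

```python
from dataclasses import dataclass, field


@dataclass
class ProductivityGoal:
    """One outcome-focused goal, the human-observable signals that
    suggest progress toward it, and the metrics (defined last!) that
    track that progress."""
    goal: str
    signals: list[str] = field(default_factory=list)
    metrics: list[str] = field(default_factory=list)


# Every value below is a made-up example.
same_day_shipping = ProductivityGoal(
    goal="Engineers can ship a small change the same day they finish it",
    signals=[
        "People stop pinging 'can someone merge my PR?' in team channels",
        "New hires describe their first deploy as a non-event",
    ],
    metrics=[
        "p80 time from pull request opened to merged",
        "Deploys per week per team (watch the trend, not a target)",
    ],
)
```

Notice that the goal says nothing about tooling or implementation, the signals are things a human would recognize in conversation, and only the metrics are things you’d instrument and chart.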

Finally, you arrive at the metrics, but a word of caution: don’t knock yourself out trying to measure something when broad agreement that a clear “signal” exists would be sufficient to declare success, or when the change has another, more notable business impact. There’s a ton of accruing value in instrumenting your dev process, but not all aspects of productivity can be measured conveniently, if at all.

In many cases, work on a goal will start by establishing a baseline for the current reality, and that reality might not match your anecdotal expectations. Again, stay focused on the desired outcome, not the metric or the tactic. Keep your focus on making things easier for engineers, use that focus to motivate increased observability of processes, execute on the opportunities you find, and know that quantitative data will sometimes disagree with the stories you’ve been told.

You may have a hard time setting a specific target for the metrics at first, and that’s not just OK, it’s expected. When you first start to tackle a problem, focus on trend — up or down and to the right as appropriate. If you decide to continue to focus on the problem over subsequent quarters, you’ll have more information to set targets or acceptable thresholds.
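If a hard target feels premature, checking the direction of the trend is straightforward. Here’s a small sketch — with made-up weekly numbers — that fits an ordinary least-squares slope to recent values of a metric you want to reduce:

```python
# Hypothetical weekly values of a metric you want to drive down,
# e.g. median hours from "review requested" to "approved".
values = [30, 28, 29, 26, 24, 25, 22, 21]
weeks = list(range(len(values)))

# Ordinary least-squares slope: change in the metric per week.
mean_x = sum(weeks) / len(weeks)
mean_y = sum(values) / len(values)
slope = sum(
    (x - mean_x) * (y - mean_y) for x, y in zip(weeks, values)
) / sum((x - mean_x) ** 2 for x in weeks)

# A negative slope means the trend is heading the right way for a
# reduction goal, even before anyone commits to a specific target.
print(f"{slope:.2f} per week")  # prints "-1.27 per week"
```

Once you’ve watched the trend for a quarter or two, you’ll have a much better sense of what an ambitious-but-achievable target would look like.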

If you find yourself stating your goal in terms of a metric, stop. Think instead about the celebratory Slack message you'll send when you succeed, and center your goal around that.

Using the goals, signals, metrics framework for developer productivity

Federated vs centralized?

One of the most effective efforts I’ve been a part of was measured and tracked centrally but achieved in a profoundly federated way. The business believed in the importance of the problem enough, and trusted the teams enough, to set an ambitious goal: reduce company-wide average cycle time by days, by the end of the year. The CEO provided air cover to prioritize this ahead of many product needs. The obvious move for any individual team was to get below the overall average. The teams figured out how to make it so.
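The mechanic behind that goal is simple enough to sketch. With made-up team names and numbers (and an unweighted average of team averages, for simplicity), each team can immediately see whether it’s the one with work to do:

```python
# Hypothetical per-team average cycle times, in days.
team_cycle_time_days = {
    "checkout": 5.2,
    "platform": 2.1,
    "mobile": 7.8,
    "growth": 3.9,
}

# Company-wide figure: a simple (unweighted) mean of team averages.
company_average = sum(team_cycle_time_days.values()) / len(team_cycle_time_days)

# Teams above the company average know exactly where to focus.
above_average = sorted(
    team
    for team, days in team_cycle_time_days.items()
    if days > company_average
)
print(company_average)  # 4.75
print(above_average)    # ['checkout', 'mobile']
```

Central tracking makes the average visible; federated execution lets each team above the line find its own way below it.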

I mention this because it’s up to you whether to federate or centralize your effort for a given goal: some goals will lend themselves to central execution, while others will be most successful if the responsibility is federated. The right answer will depend a lot on your organization and the amount of air cover you can secure for people working on developer productivity improvements.

Communicating with leadership

The metrics you decide to track to validate progress on a particular initiative may not be the metrics you’ll use when talking to the board or senior leadership. It’s in your interest to tell them what to care about so they don’t make an uninformed decision on their own. You may be inclined to gravitate toward time-saved-waiting metrics, but some time-saving changes — especially related to process — can be very hard to measure with any confidence.

Resist the desire for activity-based metrics — X actions per dev per day — because your goal is not to make your engineers type more; your goal is to learn and deliver value for the business, faster. Resist sentiment-based goal setting, too: you simply do not have the levers to prevent a dip in satisfaction that had nothing to do with you. Resist the requests to report upwards on tactical execution metrics more than a couple of times a quarter.

So what should you do? Leading metrics like an individual team’s cycle time and production incidents will help you manage the work, but they can lead to overreaction or over-correction if they’re being monitored too closely at senior levels. Consider “the p80 developer experience” when you come up with the metrics you’ll use to show aggregate progress over longer time frames: it acknowledges that you’re never going to make everything perfect for everyone, but often helps point you to meaningful improvements you could make, even if the p50 experience doesn’t change much. Don’t chase these metrics week-to-week, and be cautious in setting targets for them, as attribution for change can be quite challenging.
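As a rough sketch of what “the p80 developer experience” means in practice, here’s a standard-library Python example with hypothetical cycle times. The p80 is the value that 80% of experiences fall at or below, so it surfaces the long tail that a median hides:

```python
import statistics

# Hypothetical cycle times, in hours, for recently merged pull requests.
cycle_times_hours = [4, 6, 7, 9, 12, 15, 18, 26, 40, 72]

# statistics.quantiles with n=5 returns the 20/40/60/80th percentile
# cut points; the last one is the p80.
p80 = statistics.quantiles(cycle_times_hours, n=5)[-1]
median = statistics.median(cycle_times_hours)

# The median looks healthy, but one change in five takes far longer —
# that tail is what the p80 experience points you at.
print(median)  # 13.5
print(p80)     # 37.2
```

Improving the p80 here might mean attacking whatever makes the slowest fifth of changes slow (flaky CI, a gatekept review step), even if the median barely moves.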

You will struggle mightily if you don’t have access to your audience, directly or indirectly, to know how they are thinking about the problem. A sufficiently senior sponsor for the work needs to represent concepts like SPACE to senior, non-technical leadership. (Maybe you are that person! Lucky you!) That person may not be the person running the actual projects, but they should be technically familiar with them, and trusted by those non-technical leaders too.


If your leadership is determined to have a single “productivity” metric, I hate to tell you this: You are not set up for success, because that’s just not how this works. A single All Things Productivity metric will have you chasing your tail even more than a spreadsheet full of too many metrics. That doesn’t mean you can’t track a single metric for some period of time — just make sure everyone acknowledges what specifically that metric is tracking.

Beyond spreadsheets: Talk to your users to drive real change

If you’re at the start of your productivity journey, it can be tempting to cast a wide metrics net in search of problems worth solving. You’ll be more successful with an approach that is connected with specific pain and ideas about how to address it, leading to specific goals to reduce or eliminate that pain. This may be a different way of working than you’re accustomed to, and that’s to be expected: you’re dealing with a fundamentally human system here, not computers.

If you’re new to this way of working, make sure you have room to iterate on metrics and targets for them; you will learn as you go, and you shouldn’t stick with a bad decision just because it was the last one you made. This goes doubly for tactical execution: be open to creative, innovative, and delightfully simple ideas for how to attain the goal. The outcome-based goal is the one thing that should be considered somewhat fixed over a longer period of time.

Remember, again, your goal is not more lines of code or more story points or pull requests or deploys: all of these can be indicators of progress, but you risk failure if they are the things you incentivize. Your goal is to learn and deliver value to the business, faster. Your essential task is to keep the humans themselves top of mind as you work toward achieving it.

Do you want to learn more about how to measure productivity? Otto Hilska wrote a broad overview of productivity metrics and how to use them to drive valuable change — it's a great guide on what to measure once you have a grasp on what you’re trying to improve.

Rebecca Murphey
Rebecca Murphey helps Swarmia customers navigate people, process, and technology challenges in pursuit of building an effective engineering organization.
