There's no such thing as 'developer ROI'
Engineering productivity is real, but trying to quantify it in dollar terms is a losing, disastrous battle.
Everyone is talking about developer productivity - ROI, utilization, velocity. The reason is simple: after decades of non-stop hiring, the belt’s tightening on engineering teams. In 2023 alone, Big Tech slashed 250,000 jobs.
For CTOs, this is new territory. For as long as they can remember, the focus was on attracting and retaining talent. Engineering leaders rarely - if ever - had to justify the existence of their engineering organizations. It’s such a 180° that most tech leaders, when asked about ‘productivity’ (aka: a potential downsizing), earnestly believe that they are being asked about actual productivity.
Chaos ensues: CTOs point to frameworks like DORA (how do we improve deployment efficacy?) or DevEx (how do we make ICs more empowered and effective?). They introduce developer sentiment surveys and new dashboards to find bottlenecks in their deployment processes. They post on LinkedIn and debate amongst themselves re: the best way to conduct design reviews and run post-mortems.
The conversation is so engaging that they've completely missed the point. A developer sentiment survey is not going to answer the CFO's question re: utilization because:
Engineering isn’t a cost center. You can’t quantify ROI in $ values.
This isn’t to say that you can’t meaningfully understand, track, and improve engineering performance and productivity. Just that the traditional productivity metrics that a CFO or COO might need to rightsize - % utilization, $ ROI - don’t apply to engineering organizations. Trying to measure them is at best wasteful, and at worst dangerously misleading.
Fundamentally infeasible
To understand why it’s not possible, let’s look at the basic dimensions you’d need to quantify in order to understand productivity in dollar values:
Standardized units of work: work broken down into fungible, interchangeable units
Capacity: the total amount of work that can be done
ROI: being able to track and assign dollar values to the impact of work done
In engineering, you can’t track any of this. In the next few sections, I’ll break down precisely why you can’t - and also why it’s so tempting to try.
1. Standardized units of work
Big Agile has us describe engineering work in story points, so it’s easy to think of engineering work as fungible, standardized units. I’ve written an entire tirade on how we got here, but the TL;DR is that there are no standard units of work in engineering.
A junior engineer can be 100% utilized on a project that takes 2 weeks. That same task could take a senior engineer a few hours. Another equally senior engineer, with the ‘wrong’ set of skills, might take 3 days. There’s no way to tell - it depends on the individual engineer’s seniority and skillset and what the work requires.
The work itself is also a moving target. Near-constant updates from dozens of dependencies change your system every day, shifting the amount of effort tasks will take in unpredictable ways. Ex: An innocuous update to a language might break something in the library you use for graphs. Suddenly, a simple change to the colors of a pie chart becomes much, much harder.
Even if you could perfectly match this constantly changing work with the ideal engineers to do it, you still can’t clear this fundamental hurdle:
Writing code is a creative activity.
Software engineering is not an assembly line. Doing it well requires ingenuity (you're building new, complex things by trying to get multiple other complex systems to play nice with one another) and empathy (you're thinking about how other engineers will be able to continue building/maintaining what you've built in the future). It's creative, and like any other creative activity, you need to be in the right headspace to do it well.
You can slash through hard problems quickly if you’re in flow state, but even ‘easy’ things can feel downright impossible in the wrong environment.
You can set policies that encourage creativity, but nothing guarantees it; there’s just no ‘fix’ for human emotions and life events.
This has gotten lost in the $200B a year we spend on ‘productivity’ dashboards: your developers are human beings doing creative work, and creative output - by definition - can’t be standardized.
2. Capacity
Let's move on to capacity, i.e. the total amount of measurable productivity you can expect from your team. Not only is this near impossible to quantify for engineering, it's often irrelevant.
At a glance, it feels natural to gauge capacity. You already have some history of 'velocity' - i.e. the number of story points each engineer typically completes each sprint. Now, you just need to account for the fact that new hires join, people go on vacation, processes are updated, etc.
This mental model is wrong for two major reasons:
Story points are made up
Engineering work is interconnected
Story points are estimates. There are endless tools and methodologies for getting story points 'right', but - at the end of the day - they're self-reported and aspirational. They can be inflated to show higher velocity, or omitted when work is being done behind the scenes (ex: usually in the form of 'unapproved' maintenance). Even in the best of circumstances, they are a categorically bad way to track work.
Then, there’s the uniquely interconnected nature of software engineering. If you work in a call center, it’s relatively straightforward to quantify how one line worker leaving will impact capacity. Their job has standard tasks and their work is largely independent. If Jane doesn’t pick up the phone, it doesn’t mean that John can’t.
In engineering, capacity works differently. When an engineer leaves, they take their institutional knowledge with them. One IC leaving can have a network effect of slowing down everyone’s capacity - especially if they were the only person to hold deep expertise in a critical area. These individuals get lost in Big Agile frameworks - they don’t look particularly ‘productive’ because they’re too busy playing air-traffic controller helping everyone else complete their story points.
Combine this with the fact that the work itself is creative, unpredictable, and open-ended, and mapping the precise impact of losing an engineer becomes virtually impossible. You can (and should) assess and understand this impact, but trying to squeeze a dollar figure into a dashboard is just not feasible.
Then, there’s the completely separate but much more important question of capability. Not all engineers are created equal; the team that delivered successfully last quarter may not be capable of delivering your future roadmap.
Capacity ≠ capability.
Case in point: virtually every company is trying to transform into an AI company right now. But if you don't have the relevant infrastructure and skillset, your capacity to roll out an AI strategy is effectively zero - regardless of your headcount and how successfully you delivered last quarter. In these cases, diligently measuring the amount of work your team could potentially do is a waste of time and a massive distraction: it's not the work you actually need done.
3. ROI
Last but not least, there’s the question of ROI itself - being able to quantify the $ input and output of engineers and their work. This one is tricky; while you can and should track ROI on some specific features, it’s virtually impossible to track ROI on individual engineers or the engineering organization at large.
Let’s start with what’s trackable: the return on simple, user-facing new features. When I say simple, I mean features that don’t really add to your platform's maintenance burden. They require a certain upfront investment, but there’s a negligible impact on the overall carrying cost.
This is the area where commercial Agile really shines - the frameworks force you (rightfully) to articulate how many users you’re building for and what you expect the business outcome to be (more retention? more revenue? engagement?). You pay in ‘story points’ and see if you get what you paid for.
It works so well that it becomes tempting to apply the same ideas to everything else engineering does. After all, everything is conveniently laid out in story points already.
This is where Big Agile shits the bed. Whether you're using Kanban, SAFe, Scrum, or XP - there is virtually no commercial framework today that actually accounts for maintenance - i.e. the carrying cost of keeping your platform alive.
The ‘ROI’ on updating your platform after a major dependency update is potentially your entire revenue base.
You’re most likely not going to die by skipping one update, but skip too many and you’re playing revenue Jenga. There is no way to tell exactly how many “too many” is - you’re not in charge of when your dependencies make updates and what they decide to change when they do.
So what do you do in the face of this? Assume that the ROI on all maintenance activities is 5 years of projected ARR? Zero it all out? There’s just no good way to do this because it’s an ongoing carrying cost - not a one-off ‘project’.
It’s also worth noting that the “simple” features I described earlier are the minority. The vast majority of roadmap items will incur some amount of carrying cost by virtue of expanding the platform. A one-off Salesforce integration might net you a new contract, but it also expands the amount of expertise you need on your team, the surface area of maintenance to be done, and your overall platform risk (more stuff = more stuff that can break).
You can understand and assess this heuristically and intuitively, but pinning down a specific number is impossible. You just don’t know what Salesforce might throw at you down the line, so you can’t precisely estimate the total cost of maintenance.
To try and sidestep the issue of assigning ROI to individual work units, some orgs have started to measure things like “Revenue per Engineer” or the cost of engineering as a proportion of total revenue. To understand why these don’t work, we just need to extend what we’ve already covered. Because there is no way to standardize software engineering work, there is no way to standardize engineering organizations.
WhatsApp famously had 32 engineers when it was acquired by Facebook for $19B. But if you’re writing software for the Department of Defense, the layers of redundancy and failsafes would require thousands of engineers. Neither is wrong. Engineering is infrastructure - there’s a fixed cost to keeping things alive and that fixed cost depends on your risk tolerance and business context. Your revenue is irrelevant.
So how do you assess engineering teams?
To measure engineering, you need to measure it for what it is - critical infrastructure fueled by creative work.
This sounds like a huge task, but it’s actually how we manage most corporate functions - finance, marketing, HR, or even the C-suite. You’re not measuring percent utilization or tracking individual ROI, but this doesn’t stop you from setting clear goals, staffing appropriately, and tracking success.
The key is that you adapt KPIs and dashboards to your goals, not the other way around. Depending on whether your HR team is trying to improve culture or hire fast, you’re tracking very different things.
This is common sense we've lost when it comes to engineering. Big Agile has made it feel natural to try and measure engineering like a factory where work is standard, interchangeable, and compartmentalized.
At best, you waste time. At worst, you’ve laid off key staff because you couldn’t identify the ROI on life-preserving maintenance.
It's not only bleak, but boring. It misses the richer, bigger picture: there's so much we can do to understand and improve engineering teams.
There's a reason why engineering leaders are downright giddy to talk productivity. Engineering is creative work. And it turns out that enabling creativity is not only fiscally and emotionally rewarding; it's also fun.
So stop trying to stuff engineering into a box that doesn’t fit, and join in. It might not be the conversation you expected, but it’s the one you need.
Godfrey helps you build a mental map of your engineering org by tracking maintenance, resourcing, and architecture in a way that anyone can understand. It helps you heuristically and intuitively assess your team’s load, capabilities, and hiring so you can meaningfully improve your engineering org.
A huge thank you to Alvaro A. for reading earlier versions of this.