I’m looking for testimonies and knowledge:
- How is productivity measured in your team if you’re an engineer? Across different teams if you’re at a management level? Conversely, is it not measured, and do you know why?
- How is productivity measured in companies you know, either through firsthand experience or well-documented sources (company articles, books)?
- Are you aware of research in this area?
As a final side note, I would leave the research presented in Accelerate out of the productivity question. The DORA metrics don’t delve into the complex details of “what is value for my team” and focus on the release level. The books and research are great, but arguably not about engineering productivity.
Still, we can consider some interesting thought experiments. Say you start with a purely technical team: there are no marketers, no HR, no managers, etc., and the team gets a product going in the market. In such a limited case, one can characterise productivity as "revenue per engineer" (essentially total revenue / number of engineers).
However, in reality, if the marketers, the HR staff, and the various managers, leaders, and salespeople don't do their jobs, then despite having a great product, revenue will trend toward zero over time. Conversely, if the marketers do their jobs perfectly but there are no engineers delivering the product, again there won't be any profits.
Despite all these caveats, let me share why I think "revenue per employee" seems like the most useful metric here. Consider the "average revenue per employee" for these three companies:
ADP: $284,453
Apple: $2,560,571
Google: $2,020,329
Apple and Google are roughly 7-9x ADP. I'd argue a good part of this difference in output is primarily due to differences in engineering sophistication. The engineers in these more profitable orgs are doing qualitatively different "types" of engineering (hardware? prototyping? deep research? higher standards?). Increasing engineering prowess and taking on more sophisticated projects seem like the biggest boost to the bottom line.
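As a quick sanity check, here's a minimal Python sketch that recomputes those multiples from the figures above (Apple comes out at roughly 9x ADP, Google at roughly 7x):

```python
# Per-employee revenue figures quoted above, in USD.
revenue_per_employee = {
    "ADP": 284_453,
    "Apple": 2_560_571,
    "Google": 2_020_329,
}

baseline = revenue_per_employee["ADP"]
for company, rpe in revenue_per_employee.items():
    # Express each company as a multiple of the ADP baseline.
    print(f"{company}: ${rpe:,} per employee ({rpe / baseline:.1f}x ADP)")
```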
Well, specifically, you can measure onboarding, which is a proxy:
How quickly can someone get started on a feature and ship it to production? How obvious is it for them to follow your team's procedures, guardrails, tests, review processes, etc., and go?
Maximizing this means maximizing the number of colleagues who can work on your codebase, and it sets up a positive culture of enablement. It also means your code is not an inscrutable mess to maintain.
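To make that proxy measurable, here's a minimal sketch; the onboarding records below are hypothetical (in practice you might derive them from your HR system and deploy logs), not anything prescribed in the comment above:

```python
from datetime import date
from statistics import median

# Hypothetical records: (engineer, start date, first production ship).
onboardings = [
    ("alice", date(2023, 1, 9), date(2023, 1, 20)),
    ("bob", date(2023, 2, 6), date(2023, 3, 3)),
    ("carol", date(2023, 3, 13), date(2023, 3, 24)),
]

# Days from joining to first ship; the median damps outliers.
days_to_first_ship = [
    (first_ship - start).days for _, start, first_ship in onboardings
]
print(f"Median days to first production ship: {median(days_to_first_ship)}")
```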
Done that way, it was fairly easy to give each feature a relevant score.
The engineering team was doing this to measure engineering productivity alone, but the product team could do the same to measure the value added to the company.
By also tracking the start and end dates of each feature, you can then measure productivity at any time scale you want.
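Here's a minimal sketch of that idea; the feature names, scores, and the choice to credit each score to the month the feature shipped are illustrative assumptions, not the original team's actual system:

```python
from collections import defaultdict
from datetime import date

# Hypothetical scored features: (name, score, start date, end date).
features = [
    ("search filters", 5, date(2023, 1, 3), date(2023, 1, 27)),
    ("csv export", 2, date(2023, 1, 30), date(2023, 2, 10)),
    ("sso login", 8, date(2023, 2, 6), date(2023, 3, 17)),
]

# Credit each feature's score to the month it shipped.
score_by_month = defaultdict(int)
for _, score, _, end in features:
    score_by_month[(end.year, end.month)] += score

for (year, month), total in sorted(score_by_month.items()):
    print(f"{year}-{month:02d}: {total} points shipped")
```

Crediting the whole score to the ship month is only one choice; you could just as well prorate each score across the feature's start/end range to smooth out the trend.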
You already know where you are. A measurement doesn’t help your team ship better or faster.
If you’re shipping product improvements at a regular cadence, and the customers are happy, then there’s no problem with productivity.
If you’re shipping poor quality, identify why and improve that.
If you’re shipping so slowly that it’s negatively impacting customers, identify why and improve that.
Shoot me an email; I’d be happy to offer more perspective or advice.
If you build this way, I feel more comfortable than when building something fast that ends up with close to zero usage.
A team of 2-7 engineers is a small one. It means you can just put people on top of things, and things should develop faster than in an average team at any corporation.
This approach/mindset might not work in corporations, etc., since they want to have "reports".
Not counting refactoring as output makes sense, because it's more of a repositioning for future output, and/or recreation.
But if fixing bugs doesn't bring value, why do you do it?