HACKER Q&A
📣 koragan

What are useful OKRs and KPIs for SWEs


I know Objectives and Key Results (OKRs) and Key Performance Indicators (KPIs) can be specific to the domain in which they're defined; however, I was hoping the community could help me define some useful, generically defined OKRs and KPIs. Generic in the sense that one could rework them to apply to their own domain, not necessarily that they lack a specific goal.


  👤 wschroed Accepted Answer ✓
I prefer studies over anecdotes and found this: https://www.amazon.com/Accelerate-Software-Performing-Techno...

According to this study, you can measure a team's progress and ability with:

* number of production deployments per day (want higher numbers)

* average time to convert a request into a deployed solution (want lower numbers)

* average time to restore broken service (want lower numbers)

* average number of bugfixes per deployment (want lower numbers)

I am curious about other studies in this area and whether there is overlapping or conflicting information.
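The four metrics above (often called the DORA metrics) are easy to compute once deployments are recorded. A minimal sketch, assuming a hypothetical list of deployment records with request/deploy timestamps and failure info (the field names are made up for illustration):

```python
from datetime import datetime

# Hypothetical deployment records: when the work was requested, when it
# shipped, whether it broke production, and how long restoring service took.
deployments = [
    {"requested": datetime(2023, 1, 1, 9), "deployed": datetime(2023, 1, 2, 9),
     "failed": False, "restore_minutes": 0},
    {"requested": datetime(2023, 1, 1, 10), "deployed": datetime(2023, 1, 2, 15),
     "failed": True, "restore_minutes": 45},
    {"requested": datetime(2023, 1, 2, 8), "deployed": datetime(2023, 1, 3, 8),
     "failed": False, "restore_minutes": 0},
]

days_observed = 3

# Deployment frequency: deployments per day (want higher numbers)
deploy_frequency = len(deployments) / days_observed

# Lead time: mean hours from request to deployed solution (want lower numbers)
lead_time_hours = sum(
    (d["deployed"] - d["requested"]).total_seconds() / 3600 for d in deployments
) / len(deployments)

# Mean time to restore, averaged over failed deployments (want lower numbers)
failures = [d for d in deployments if d["failed"]]
mttr_minutes = sum(d["restore_minutes"] for d in failures) / len(failures)

# Change failure rate: share of deployments that broke things (want lower numbers)
failure_rate = len(failures) / len(deployments)
```

The hard part in practice is not the arithmetic but recording "requested" and "restored" timestamps consistently.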


👤 dfcowell
It’s impossible to give a rational answer for this question.

There aren’t any general OKRs or KPIs for software engineers (or for most roles in a business), because there’s no business where the raw work product of an individual is an indicator or key result in and of itself, barring perhaps primary or secondary production industries.

The business has objectives and the individuals in the business work towards achieving them.

Key results depend entirely upon the objectives of the business.

As far as KPIs go, a SWE in a 10-person startup might be evaluated on getting the damn MVP out the door, while the same SWE in a 100-person startup may be evaluated on the automation they implement to increase the speed at which their 20-person team can ship.

OKRs and KPIs are tools you can use as a manager to focus and direct the efforts of your team in the right direction. They’re not really an evaluation tool in and of themselves.


👤 harterrt
The other posters are right: we're not going to be able to give you general OKRs that work everywhere. It's important that your goals ladder up into your company's goals.

I just wrote a piece on how to break down corporate goals into something that's meaningful for you and your team [1]. If you're having a hard time figuring out what your goals should be, it might be that your company's goals are too broad and need to be broken down further.

[1] https://blog.harterrt.com/cascading_metrics.html


👤 somehnrdr14726
These business metrics require objective measurement to be useful, so one of the best things engineers can do is make the unseen seen.

You can bootstrap the objectivity by going meta. For example, an objective might be "create a performance/reliability baseline for the application," with a key result of "N service-level agreements adopted by the org."

All the hard work is in avoiding statistical folly. All the busy work is in instrumentation.

When you're done, it should be hard for product or sales types (whoever controls your roadmap) to feature factory the app without tripping an SLA.
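One concrete form that SLA trip-wire can take is an error-budget check over request outcomes. A minimal sketch with a made-up 99.9% availability target and invented numbers (nothing here is from the original post):

```python
# Hypothetical availability SLO check: given request outcomes over a window,
# compute the remaining error budget and flag when it is exhausted.

SLO_TARGET = 0.999  # 99.9% of requests must succeed

def error_budget_report(total_requests: int, failed_requests: int) -> dict:
    # The error budget is the number of failures the SLO permits.
    allowed_failures = total_requests * (1 - SLO_TARGET)
    budget_remaining = allowed_failures - failed_requests
    return {
        "availability": 1 - failed_requests / total_requests,
        "budget_remaining": budget_remaining,
        "slo_breached": budget_remaining < 0,
    }

# 99.9% of 1,000,000 requests allows ~1,000 failures; 1,200 breaches the SLO.
report = error_budget_report(total_requests=1_000_000, failed_requests=1_200)
```

Once this report is wired into a dashboard or CI gate, "feature factory" changes that burn the error budget become visible instead of anecdotal.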


👤 Jugurtha
I don't think there are any, or it could just be that I don't understand the question.

Software engineering is an activity to produce an output, to fulfill a "job to be done". The goal is to focus on the things with the biggest impact, and those change and are specific to each project.

I know what "general" actions I would do if I were to start a new project based on pain points we have suffered through many projects, and that saved us a bunch of time and morale down the road:

- Include monitoring for errors and system's health, and analytics for user behavior: I shouldn't dig through logs to see an exception (Sentry). I should look at one dashboard and have an idea on which systems are up or down, metrics, load, latency, etc. (Prometheus and Grafana). I should know how users are interacting with the product, where they are failing and succeeding (PostHog).

- Build communication channels with users into the product (issue template, Slack, email, something): ask users about the ways the product failed to solve their problem.

- Use a plug-in architecture in order to easily add and, more importantly, remove functionality. It makes testing ideas easier: introduce a feature easily, and as easily kill it without rewriting the whole codebase.

- Make functionality accessible through APIs (one guideline we have is that everything should be accessible through an API call). Have an SDK for the main language first to simplify this.

- Document everything. An issue is closed if there has been a test written for the bug, or it's documented somewhere so anyone has access to it.

- Align everyone: through written communication accessible anytime by anyone. Communication should tie objectives to actions, penetrate several abstraction levels, and be consumable by advisors, board members, non-technical people, and technical people. Call it "fractal communication": it still makes sense from any abstraction level you read it at. This enables everyone on the team to chime in at the abstraction level they're comfortable with. You write an email that ties objectives on, say, revenue, in a language that speaks to some, and then tie it back to specific activities, an issue in the issue tracker, and a line of code, so that everyone sees how things fit together and are intertwined. Everyone knows what to do and why they're doing what they're doing, and they can come up with ideas or correct assumptions and mistakes.

- Have issue templates for incidents, features, improvements, and bugs: this reduces the "activation energy" to writing good issues. It beats having a blank screen and reduces variance in issue quality.

These are just a few points.
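The plug-in point above can be sketched as a minimal registry; all names here are hypothetical, not from the original post:

```python
from typing import Callable, Dict

class PluginRegistry:
    """Minimal plug-in registry: features register themselves and can be
    added or removed without touching the core codebase."""

    def __init__(self) -> None:
        self._plugins: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str):
        def decorator(func: Callable[[str], str]):
            self._plugins[name] = func
            return func
        return decorator

    def unregister(self, name: str) -> None:
        # Killing an experimental feature is a one-line removal.
        self._plugins.pop(name, None)

    def run_all(self, payload: str) -> Dict[str, str]:
        return {name: func(payload) for name, func in self._plugins.items()}

registry = PluginRegistry()

@registry.register("shout")
def shout(text: str) -> str:
    return text.upper()

@registry.register("echo")
def echo(text: str) -> str:
    return text

results = registry.run_all("hello")  # {'shout': 'HELLO', 'echo': 'hello'}
registry.unregister("shout")         # feature killed without a rewrite
```

The core code never names individual features, which is what makes introducing and killing them cheap.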


👤 wbharding
I wrote about this a few months ago https://www.gitclear.com/five_best_engineering_kpis_and_how_...

Most KPIs stop working when they become a target, but there are a few that resist it.