Risks of Using KPIs as Measures of Effectiveness

Yury Niño
4 min read · Jul 8, 2018

I have worked as a software developer using Waterfall and Agile methodologies for several years. From my experience working with Scrum, I have to admit that “agile” methodologies, strategies, and people are far from perfect. However, this entry is not about the pains of Scrum, but about KPIs, PBIs, and velocity metrics. Below, I review some common KPIs, discuss their value for evaluating development teams, and describe some dangerous behaviors that emerge in practice.

Taken from [8]

Agile Metrics

It is not surprising that most of us have a love-hate relationship with metrics. Still, metrics are a tool for tracking and sharing a team's progress: they help identify bottlenecks and can reduce the probability of failed projects.

A key performance indicator (KPI) is a measure of performance, and in Scrum velocity is a key planning tool. Velocity is the average amount of work a Scrum team completes during a sprint [1]; it lets the team forecast when new versions can be delivered and check for consistent performance over time (a short sketch of the calculation follows the list below). However, velocity is not the only metric used to measure a Scrum team; managers use other measures to assess the competence, success, quality, and quantity of a development team's work. Some of the most common include [2]:

Stories Completed vs. Committed Stories
Technical Debt Management
Quality Delivered
Team Enthusiasm
Retrospective Process Improvement
Communication
Understanding of Sprint Scope and Goal
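
To make the velocity idea concrete, here is a minimal Python sketch. The sprint numbers and the backlog size are invented for illustration; they do not come from any real team or tool. It computes velocity as the average of completed story points over recent sprints, the completed-vs-committed ratio from the list above, and a rough forecast of how many sprints a remaining backlog would need.

```python
import math

# Illustrative data: (committed, completed) story points per sprint.
# These numbers are made up for the example.
sprints = [
    {"committed": 34, "completed": 30},
    {"committed": 32, "completed": 28},
    {"committed": 30, "completed": 31},
]

# Velocity: the average amount of work the team completes per sprint [1].
velocity = sum(s["completed"] for s in sprints) / len(sprints)

# Completed vs. committed: how much of the plan the team actually delivered.
commitment_ratio = (
    sum(s["completed"] for s in sprints) / sum(s["committed"] for s in sprints)
)

# Rough forecast: sprints needed to clear a hypothetical remaining backlog.
remaining_backlog = 120
sprints_needed = math.ceil(remaining_backlog / velocity)

print(f"Velocity: {velocity:.1f} points/sprint")
print(f"Completed vs. committed: {commitment_ratio:.0%}")
print(f"Estimated sprints to clear the backlog: {sprints_needed}")
```

Numbers like these are useful as a planning signal, but as the rest of this entry argues, they become dangerous the moment they are used to reward or punish people.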

Risks of using KPIs as an incentive

Taken from [8]

KPIs make it possible to monitor activities and quantifiable outputs, but operating with poor KPIs, or using them as the basis for incentives, bonuses, and promotions, can be counterproductive. It not only gives an illusion of progress but also increases the risk of failure. In my experience, badly implemented KPIs distract the organization from the real measures of progress. Here are some risky behaviors that can emerge as a consequence of badly used KPIs:

1. To show big numbers and full capacity, some teams assign story points to everything: going to meetings, helping others, reading documentation, supporting other teams, and so on. Although these tasks represent effort, they do not add value to the product, customer satisfaction, or return on investment.

2. To avoid blockers and dependencies that could put the KPIs at risk, some teams become less strict with the “Definition of Done” of their stories. In spite of having an established checklist as the definition of done, they adjust the definition of a story so that items on the checklist can be skipped. An example is marking stories as finished without QA or security tests, with the excuse that the story has many dependencies or is too big.

3. If everyone on a team (scrum master, product owner, developers, and testers) is not measured in the same way, conflicts of interest can appear. An example is the ego war between developers and testers documented in [11]; in some teams they have an unhealthy relationship.

4. Some teams tend to vote large estimates for simple or short stories because they are afraid of the uncertainty associated with estimation. To avoid working long hours (which, in reality, they never face) and to impress management, they show beautiful numbers that add no value to the product. This situation is a common topic of discussion in forums [9, 10].

5. Although I have not seen it myself, teams could start a sprint by planning stories they have already implemented. This carryover of done tasks guarantees that the next sprint goal will be achieved. It is a dishonest practice that can delay the release of a product and create a false relationship of trust between managers and developers.

6. Some teams include stories that they plan to develop in a hidden way: if they finish such a story, they add it in the middle of the sprint; if they do not, nothing happens. Although the intention may be generous, it is a risky practice because it hides blockers and bottlenecks from management.

Taken from [7]

Conclusions

KPIs are the vital signs of an engineering team. Management has to measure its teams, but it is important to be aware that the desire to measure people and their performance can be risky.

I want to close my entry with this phrase: “Bad usage of KPIs is like flying blind in a hailstorm without instruments, attempting to land on a postage stamp airstrip atop a cliff. The only solution is to be completely reactive and instinctive, trusting your judgment rather than data” [6].

References

[1] https://www.atlassian.com/agile/project-management/metrics
[2] http://tracks.roojoom.com/r/1197#/trek?page=3
[3] http://marcos-pacheco.com/patterns-for-a-useful-kpi/
[4] http://ryeok.com/blog/2014/2/1/5-key-metrics-for-engineering-departments
[5] https://www.devteam.space/blog/how-to-measure-developer-productivity/
[6] https://www.changefactory.com.au/our-thinking/articles/common-mistakes-with-kpis/
[7] https://www.cartoonstock.com/directory/m/metrics.asp
[8] http://www.tom-gray.com/2013/07/02/incentive-plan-for-small-business/
[9] https://softwareengineering.stackexchange.com/questions/368658/how-do-you-cope-with-a-team-who-tends-to-underestimate-time-needed-to-complete-t
[10] https://softwareengineering.stackexchange.com/questions/159502/how-to-prevent-intentional-over-estimation-in-user-stories
[11] https://www.virtualforce.io/blog/ultimate-bug-war-developer-vs-tester/

