The story behind this post
I work for a large company attempting an agile transformation. Our fearless leader, let's call him Jim, who also happens to manage the agile coaches, asked the coaches to present metrics about the teams we were coaching. I think the intent was for us to be less subjective in our coaching and to validate the assumptions behind our approaches. I agreed with this, until one day Jim pointed out that each time I presented metrics about the teams I was coaching, the metrics were different. He asked, “How do I know if what I am doing is working if I look at something different each time?” I kindly explained that this was intentional and valuable, and he agreed. Everyone else was presenting the same metrics each week, which I secretly considered boring and dull. In the end, both Jim and I were left puzzled: why were some of the other coaches presenting the same metrics each week while I changed mine each week, and yet both approaches were valuable?
After weeks of introspection and subconscious processing, I realized there are different types of metrics that coaches and agile teams can use to help them be successful, and each type has a different purpose. I shared these ideas with fellow coaches, and all seemed intrigued; no one had thought of it this way before, yet everyone seemed to agree there was truth in it. So here I am, posting this article, not only to offer what I think we learned, but also to get feedback and further validate it against the experiences of others. Please share your feedback.
The three types of agile metrics
Trailing indicator metrics
We collect this information on a regular cadence in order to show a trend, which indicates some type of pattern in our system.
This type of metric answers the question: am I making progress toward a specific goal? Some examples would be # of defects per release, velocity, % of stories accepted during an iteration, # of blockers in an iteration, etc. These metrics are mostly shown as a trend and help to validate whether something we did in the more distant past is having a positive impact on the recent past. Teams use these metrics to show whether they are on the right path toward their goals. They are great at validating whether changes we made during a retrospective are having the intended impact on our system.
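To make this concrete, here is a minimal sketch of how a trailing indicator such as % of stories accepted might be tracked as a trend across iterations. The `acceptance_rate` helper and all the numbers are hypothetical, purely for illustration:

```python
from statistics import mean

def acceptance_rate(accepted, committed):
    """Percent of committed stories accepted in an iteration."""
    return round(100 * accepted / committed, 1)

# (accepted, committed) per iteration, oldest first -- hypothetical data.
iterations = [(6, 10), (7, 10), (8, 10), (9, 10)]
trend = [acceptance_rate(a, c) for a, c in iterations]

# The trend is the metric: compare the recent past against the more
# distant past to see whether a change is having a positive impact.
improving = mean(trend[-2:]) > mean(trend[:2])
print(trend, improving)
```

A single data point says very little here; it is the comparison between earlier and later iterations that makes this a trailing indicator.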
Diagnostic metrics
This type of information often, but not always, already exists and can reveal where a potential problem might exist within our system.
This information is usually collected ad hoc, based on a hunch about a specific problem we are currently dealing with. An example would be the % of defects that occurred in module A of our codebase. In that example, we might notice that defects are occurring but not know why. So we might collect all the defects and attribute each one to a specific part of the codebase. If, while doing this, we discover that 90% of the defects occur in a small area of the code, and that the cyclomatic complexity of that code is 3x higher than average, we might conclude that the high complexity is causing these defects and needs to be reduced. This type of metric is the best data on which to base a change in a retrospective, because it has very low subjectivity. Of course, there is always room for interpretation, which is why teams should approach solutions as experiments.
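The defect-attribution example above can be sketched in a few lines. The module names, defect counts, and complexity figures below are all hypothetical stand-ins for data a team would collect from its own codebase:

```python
from collections import Counter

# Hypothetical defect log: one entry per defect, tagged with the module
# it was traced to.
defects = ["module_a"] * 18 + ["module_b"] + ["module_c"]

counts = Counter(defects)
total = len(defects)
share = {module: round(100 * n / total) for module, n in counts.items()}

# Hypothetical average cyclomatic complexity per module.
complexity = {"module_a": 24, "module_b": 8, "module_c": 7}
others = [v for m, v in complexity.items() if m != "module_a"]
ratio = complexity["module_a"] / (sum(others) / len(others))

# module_a carries 90% of the defects and is ~3x more complex than
# the average of the other modules -- a hypothesis worth testing.
print(share["module_a"], round(ratio, 1))
```

The output is a hypothesis, not a conclusion, which is exactly why changes based on it should still be run as experiments.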
Transformational metrics
This data is usually collected and made highly visible in order to cause a change in a system.
Usually, by measuring something and making that measurement very visible, we are saying that we want that thing to change. Rarely do we measure something and radiate it because we want it to stay the same, although that is plausible. Examples of transformational metrics are: # of tests (to increase the # of tests), time in process (to reduce it), complexity of code (to reduce the # of defects), # of stories accepted (to reduce the size of the stories), etc. The last one is a good example. Measuring the # of stories accepted is interesting because our goal is not necessarily to increase throughput, although it might look like that. Increasing throughput is difficult, so if you measure the # of stories completed and encourage teams to increase that number, the size of the stories will normally get smaller (because that is easier than increasing throughput). Although transformational metrics could be used by a team to help cause a change in itself, I more often see them used by scrum masters and coaches. These metrics are strategic in nature, and the behavioral change they cause can normally be predicted. There are usually some unintended consequences, but as long as those consequences can be managed, or are outweighed by the behavioral change the metric creates, the metric is effective. Leaders in a team may encourage the team to improve on one of these metrics as part of a retrospective change commitment, as a way to cause behavioral change in the team.
Here is an example of where I used transformational metrics to cause change. It has been said that teams should not be compared to each other. In my opinion, it depends. In one case, I was working with a program of about 150 people, and all the teams were regularly missing their iteration commitments. So I created a radiator that showed how far below or above each team was from its iteration burn-down, with each team displayed next to the others. Red was bad, green was good. In the short term, teams started paying attention to their burn-downs, which they had not before, and which was largely related to their inability to finish their user stories on time. The side effect in this case was that teams became more interested in other teams' burn-downs than in their own. While this was slightly distracting, all the teams showed a significant improvement in meeting their iteration commitments from that point forward. As soon as I felt the learning had occurred and the behavioral change was sticky, I stopped radiating the information.
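The radiator logic from that story can be sketched roughly as below. The team names, point totals, and the choice of a straight-line ideal burn-down are assumptions for illustration, not the actual program data:

```python
def burndown_delta(committed, remaining, day, length):
    """How far a team's remaining work sits above (+) or below (-)
    the ideal straight-line burn-down on a given day."""
    ideal_remaining = committed * (1 - day / length)
    return remaining - ideal_remaining

# Hypothetical snapshot on day 5 of a 10-day iteration:
# team name -> (committed points, remaining points).
teams = {"Team A": (40, 30), "Team B": (40, 18)}

for name, (committed, remaining) in teams.items():
    delta = burndown_delta(committed, remaining, day=5, length=10)
    color = "red" if delta > 0 else "green"  # behind vs. on/ahead of pace
    print(f"{name}: {delta:+.0f} points ({color})")
```

Radiating the red/green result side by side for every team is what makes this a transformational metric: the visibility, not the number itself, drives the behavioral change.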
How these relate to my learning story
Picking up from the original story that caused me to write this post: Jim was confused because I was presenting diagnostic metrics while other coaches were presenting trailing indicator metrics. I presented diagnostic metrics because I wanted to collaborate on the potential causes of the problems I was seeing in the teams. I would also present transformational metrics when I wanted to show the type of change I was trying to cause in the teams. Although I would also look at trailing indicators, I personally did not see them as very valuable in a room full of coaches, because I perceived the intention of the meeting to be more of a collaborative session than a way to prove whether the coaching I was doing was working. I had subconsciously sensed a difference for a while, but only recently could I distinguish the difference in purpose between the different types of metrics used by agile teams and coaches.
So from now on, I will be more mindful of the type of metric I am presenting and what my expected outcome from sharing those metrics is. And if Jim wants, I will also show trailing indicator metrics. Thanks for the learning Jim!