I’m not a big fan of metrics, measures, charts, reporting and data collection. I’m not terribly impressed by dashboards with 20 little graphs showing loads of detailed information. When I’m involved in projects I want to know 3 simple things:
- How quickly are we doing stuff?
- Are we on track or not?
- Is the stuff good enough quality?
There can be some deep science behind the answers to those questions but at the surface that’s all I want to see.
Organisations need to know that teams are delivering quality products at the right pace to fit the business need. To achieve this goal teams need to be able to demonstrate that their product is of sufficient quality and that they can commit to delivering the required scope within the business time scales. If the project goal looks unlikely to be achieved then the business or the team needs to change something (such as scope, resources or time scales). This feedback mechanism, and the open, transparent communication of this knowledge, is key to the success of agile delivery.
The goal of delivering quality products at the right pace can be measured in many complex ways. However, when designing the Project Forum agile at scale practice we looked at just 3 measures. In fact I should probably call them 2.5 measures, as throughput and the release burnup are effectively alternatives: you’ll use one or the other depending on whether you work in continuous flow or iterations. The most important measure is people’s opinions when you go and talk to your team.
Note: in the measures section I often refer to “requirements” as a simple number, this could be a count, a normalised count, magnitude, points, etc. it doesn’t matter what’s used so long as it’s consistent.
Throughput is a simple measure of how many requirements the team is working through within a time period. The y-axis shows the number of completed requirements and the x-axis represents time. Of course requirements are going to be of different sizes but tracking the average of this line, or indeed a moving average, is useful to see if the trend is steady, increasing or decreasing.
If the trend is increasing then the team is completing more requirements per day, meaning they are taking less time. This could be due to efficiency or the level of requirements changing. More importantly, if the trend is downwards then the team is taking longer to deliver requirements, which may put the project schedule at risk. A lower limit can be applied to provide an indicator of when corrective action might need to be taken.
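The moving-average trend and lower limit described above can be sketched in a few lines. This is an illustrative example, not part of the original practice; the window size, history and limit values are all assumptions.

```python
def moving_average(completed_per_period, window=3):
    """Moving average of requirements completed per period
    over the most recent `window` periods."""
    if len(completed_per_period) < window:
        return None  # not enough history to establish a trend yet
    recent = completed_per_period[-window:]
    return sum(recent) / window

def throughput_alert(completed_per_period, lower_limit, window=3):
    """True when the throughput trend has dropped below the lower
    limit, signalling that corrective action may be needed."""
    avg = moving_average(completed_per_period, window)
    return avg is not None and avg < lower_limit

# Requirements completed in each of the last six periods (illustrative).
history = [8, 7, 9, 6, 5, 4]
print(moving_average(history))       # average of the last 3 periods: 5.0
print(throughput_alert(history, 6))  # True: the trend is below the limit
```

The same throughput series plotted over time gives the graph described above; the alert is simply the numeric form of the lower-limit line.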
The Release Burnup is an established measure in agile methodologies as it shows the planned work (the 100% scope that needs to be reached for a release) and the progression towards it over time. The gradient of this line is driven by the team throughput, so the Release Burnup is based on the team velocity and can be used to extrapolate a likely time when the team will reach the full release scope.
One of the nice things about a burnup rather than a burndown is it allows for real life intervening in plans and the scope changing either up or down. In terms of perception the team can feel more positive about achieving something as the line goes up rather than reducing remaining work as a line goes down in a traditional burndown chart.
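The extrapolation behind the burnup line is straightforward: divide the remaining scope by the current velocity. A minimal sketch (the numbers are illustrative, and in practice you’d use a velocity averaged over recent periods):

```python
def periods_to_complete(scope, completed, velocity):
    """Periods remaining until the burnup line reaches the release
    scope, assuming the current velocity (requirements per period)
    holds. Returns 0.0 once the scope has been reached."""
    remaining = scope - completed
    if remaining <= 0:
        return 0.0
    return remaining / velocity

# A 100-requirement release, 40 done, completing 6 per iteration.
print(periods_to_complete(100, 40, 6))  # 10.0 iterations to go

# Scope change is handled naturally: if 10 requirements are added,
# the target line moves up and the forecast simply moves out.
print(periods_to_complete(110, 40, 6))
```

Because the target is a line the burnup reaches rather than a count it reduces to zero, scope growth just raises the line, which is the real-life flexibility described above.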
Quality confidence is an indicator of how sure we are that the current product development state is of sufficient quality. It is calculated by examining the current release scope requirements, their coverage by tests and the results of those tests (including their historic stability), combining a lot of data in one simple measure.
This measure allows the team and customer to say what “good enough” quality actually means and to see if the quality of the product is high enough, and has been stable enough, to release. If all requirements in the current release had test coverage and all tests passed successfully for the last few test runs then the quality confidence would be 100%. Of course this doesn’t mean the quality is really 100% as there will always be escaped defects; how many escaped defects there are over time can be used to calibrate this confidence measure.
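The post doesn’t give an exact formula, but one plausible sketch of the idea is below: each requirement contributes according to whether it has test coverage and how stably its tests have passed over the recent runs. The data shape and weighting are my assumptions, purely for illustration.

```python
def quality_confidence(requirements):
    """Hypothetical quality confidence calculation.

    Each requirement is a dict with 'covered' (bool) and
    'recent_results' (list of bools, one per recent test run).
    An uncovered or untested requirement contributes nothing;
    a covered one contributes the fraction of recent runs passed.
    """
    if not requirements:
        return 0.0
    score = 0.0
    for req in requirements:
        results = req["recent_results"]
        if not req["covered"] or not results:
            continue
        score += sum(results) / len(results)  # stability over recent runs
    return 100.0 * score / len(requirements)

reqs = [
    {"covered": True, "recent_results": [True, True, True]},   # stable pass
    {"covered": True, "recent_results": [True, False, True]},  # flaky
    {"covered": False, "recent_results": []},                  # no coverage
]
print(round(quality_confidence(reqs), 1))  # 55.6
```

Full coverage with consistently passing tests gives 100%; flaky tests or coverage gaps pull the confidence down, which matches the behaviour described above.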
The current quality confidence can also be expressed as a simple dial indicating the current value with an indicator of the current trend. Using these three measures together means a team can track how much work it can process, whether that means they’re on track for the next release, and what the likely quality of that release will be.
For more on Quality Confidence see here.
The great thing about these measures is that they can be applied recursively throughout a system-of-systems environment and allow for different working patterns in the detail of the contributing teams. Cycle times can be varied without affecting the meaningfulness of these simple measures (although a really small cycle time will make the x-axis of the throughput graph rather short and the quality confidence line rather smooth).
I use these measures when scaling agility in an organisation; see What does “Agile at Scale” mean? and Scaled Agility: The Project Forum.