Within the world of projects there is a phrase that says “what gets measured gets managed”. Whilst such one-liners can be used as a lazy justification for poorly thought-through management techniques, if we explore the meaning more deeply then I think it reveals some interesting points.
The first thing I’d observe is that the saying is broadly true. It’s like shining a torch beam on something: when others can see the areas being highlighted, they will focus their attention there too. This can be both a good thing and a bad thing. Are we sure that we are shining our torch on the right areas, and are we confident that those areas left unilluminated aren’t adding unacceptable risk? Is there a monster growing in the shadows that will leap out and bite you further on in the project?
An important thing to think about, then, is what you are measuring. Projects are frequently driven by the achievement of milestones, and this is taken as a measurement of progress. But I have seen many projects where the milestone is simply the delivery of a design artefact, e.g. “have we delivered the system requirements specification?”, and the milestone on its own tells us nothing about the quality of the deliverable. It is the quality of the deliverable that is the true measure of progress (i.e. the maturity of the design), not the existence of the deliverable itself.
Significant design milestones, of course, tend not to be just the delivery of a document; they are also attended by an in-depth review. Provided the review is well conducted by staff competent in the area, poor quality in the design should be trapped and actioned for correction. But the risk is that we become reliant on the review to perform this function rather than building it into the design process. And of course, if we only detect poor quality at the review then we are already late. One way of ‘building it in’ can be through appropriate measurement.
So what is ‘appropriate measurement’ if it’s not just about delivering ‘things’? I believe it’s about thinking through what the desired outcome of the ‘thing’ is, and trying to develop measures that can monitor progress towards that outcome. Measuring the quality of ‘things’ is much harder than measuring their mere existence, but quality measures are the only really meaningful ones.
So, an example may help. In a previous role I was responsible for delivering the Strategy and Process Improvement programme for our department, and this required a rigorous approach to measurement (for which we happened to have selected the balanced scorecard approach, but I won’t dwell on that in this post).
One area of improvement was in our relationship with our customer, for which we developed a series of training materials to develop this skill in our staff. The leader of the improvement activity proposed measuring the delivery of the pack of materials and the number of staff who had been through the training. I argued that this would tell us nothing about our progress towards actually improving our relationship with the customer. That outcome is of course much harder to measure, and the data that is available (e.g. the number of customer accolades) may be the result of a number of actions that the business has taken, but it is still this outcome that the business really cares about.
This example illustrates the tension between those who are responsible for delivering products (whether individual process products or the overall project), and who may want to be measured on delivery of the ‘thing’, and those who understand that the real responsibility is delivering a ‘thing’ that meets the stakeholders’ needs.
By way of concluding this post, I will mention that the International Council on Systems Engineering (INCOSE) has developed a product called the “SE Leading Indicators Guide”, which offers some possible ways of measuring the activities of a complex engineering project and provides plenty of food for thought.