Let’s now look at how to approach measurement of progress in the different contexts.
I contracted a builder to convert our loft into a bedroom. He’d done this before in very similar houses. The high-level outcome I wanted was space and value (since house prices go up with an additional bedroom). I could have achieved the same outcome by building a basement, an extension, a conservatory or a cabin in the garden. I had options. The option I chose was a loft conversion. The builder performed actions to create the output, which realised the option: the converted loft.
I needed to keep track of both the state of the option (completion of the conversion) and the outcome (whether this would give me space and value). If the loft doesn’t get converted then I have no outcome, so if I don’t follow progress how do I know that any work is happening? You may think you would just wait for him to finish the job. But would you wait indefinitely, or pay any price? I had to measure progress to some extent (I hadn’t used the builder before, and I’m not what the cowboys would call a ‘sucker’). In this case, I could measure progress by the builder checking in with me to tell me what he expected to have done at a given moment, and where he actually was.
Tracking the output is pretty easy in this linear and predictable example. But then it usually is in Simple contexts and even at the lighter end of Complicated. Progress on the option I chose was important, but still subservient to the outcome I was looking for. If at any time it seems the option you’ve chosen won’t deliver the outcome, then you need to change tack. In this case I knew the space and value I wanted would be achieved if we got the loft. But imagine if the reason I wanted the space was to take up the drums? My wife and neighbours wouldn’t put up with that upstairs. If I discovered this early on, I might be able to choose a different option - maybe soundproofing the room, or building a cabin down the garden. The outcome is what guides the option you choose. It doesn’t matter how fast you’re driving; you’ll still be late if you’re going in the wrong direction.
An option is a combination of actions or outputs that together contribute to the outcome.
In my example, the builder could estimate the cost accurately because he knew all the steps needed to carry out the work, how long they’d take and the materials required. This type of project is linear and predictable. He could tell for himself, and show me, how things were progressing by looking at the schedule. I could also see progress by looking at the physical space, but I trusted his plan on a page because he knew what should have been done at any point in time for the project to be on track. This won’t be adequate for most of you reading this book. If you’re working in an organisation creating new products, developing and integrating software, or in any domain where work is complex and innovation and experimentation are needed, following the linear approach of planning all the work up front and tracking against the plan is a bad way to measure progress. Why? Because the plan fools you into thinking you know what’s going on and what the end result will be, when in fact it’s impossible to predict.
Whatever your desired outcome - more market share, lower customer churn, and so on - to achieve it, actions will need to be taken and usually some kind of output produced. Often, in business and software development, it is not evident at first what will deliver the outcome or how technology will be used to achieve it. That’s what makes the context complex, requiring a test-and-learn approach.
Let’s take the example of a telco looking to increase market share as its outcome. The teams have a list of ranked options that they believe will achieve this. Top of the list are several features which will bring more customers to the site. This is more than a hunch: they’ve been looking at the data they’ve got and used it to create a hypothesis for each option. The work is complex; they don’t know for sure how the options will perform until they test them with real users. Expecting a fully estimated plan from them is asking for the unknowable. Worse, held to a plan they will be pressured into delivering options (probably in the form of outputs - functionality) that don’t deliver the outcome. They’ll have done what you wanted, but it’ll have wasted everyone’s time. My builder could provide a plan because his work largely involved things he’d done before or, at worst, known unknowns. The teams creating something new face unknown unknowns.
In this example, the effect of the options (in this case functionality) on the outcome needs to be measured often, because there is no clear cause-and-effect relationship. The teams need to test often to make sure they’re on the right path and tweak accordingly. Hypotheses may be created, but they need to be tested.
In a complex context the only true measure of progress is the effect of the option on the outcome. In the software example, this would be measuring the effect of a new feature (the option) on market share. In development of a new antiviral drug, it would be the effect of the drug on the virus. This is how we can really tell we’re doing well.
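To make that concrete, here is a rough sketch - my illustration, not a prescription from the book or any particular team - of how an option’s effect on an outcome might be quantified. The metric, names and figures are all hypothetical; the real measurement would be tied to whatever your outcome actually is.

```python
# A rough sketch, not from the book: quantifying the effect of an option
# (a shipped feature) on an outcome metric. All names and figures are
# hypothetical illustrations.

def market_share(our_signups: int, total_market_signups: int) -> float:
    """Outcome proxy: the share of the market's new sign-ups that we captured."""
    return our_signups / total_market_signups if total_market_signups else 0.0

# Hypothetical figures gathered while the feature was shown to half of all visitors.
without_feature = market_share(our_signups=1_200, total_market_signups=40_000)
with_feature = market_share(our_signups=1_450, total_market_signups=40_000)

effect = with_feature - without_feature
print(f"Estimated effect of the option on the outcome: {effect:+.2%}")

# Progress is judged on the outcome, not on the fact that the feature shipped.
if effect > 0:
    print("The option is moving the outcome - continue or expand it")
else:
    print("No measurable effect - tweak, pivot or ditch, and treat the result as learning")
```

The point of the sketch is that the decision hangs on the movement of the outcome metric, not on whether the feature was delivered.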
Other measures are at best weak proxies, and at worst fool us into thinking progress has been made when in fact we’re standing still. You measure what matters, so if the outcome is what’s important then that’s what you should be measuring. If you are only measuring the implementation of your options (as many in business do), then you are merely focussed on output. This often happens from the top down: a senior manager focuses on an output (mistakenly equating it with the outcome), holds their people accountable for that output and measures them against it, and before you know it the only thing that matters is delivering the output. That’s perfect if there is a 1:1 relationship between the output and achieving your outcome - a predefined route; with an untried route, however, you’re simply looking at progress in mileage without checking that you’re closer to your destination.
Management should be focussed on the destination, but there is every reason to check the mileage if it allows for better support of the teams. Shortly we’ll see how that works. But first let’s look at how breaking options down benefits delivery in a Complex context.
Let’s say that you have a series of options that independently affect the outcome. Their independence is important. If an option only affects the outcome when combined with another then it’s possible to deliver a whole lot without having any realisable value. This creates waste and contributes to many failed programmes[1].
The teams break the option down in a way that suits their work. You could break an option down ad infinitum into smaller options; the constraint is that each must still affect the outcome. Breaking work down into smaller batches leads to faster feedback, so smaller options can be turned around, explored, tweaked or ditched faster as teams test and learn their way towards their destination. Smaller options matter most in Complex contexts, but Complicated contexts also benefit from smaller batch sizes, since a completed option produces benefit in the form of an affected outcome or learning. The earlier we get this the better: we start realising the benefit sooner and also reduce some of the risk.
Here are some examples of completed options: working software that can be used, a drug that can be trialled, the loft converted into a usable bedroom.
What does completed mean? The Agile Manifesto articulates the value of “working software over comprehensive documentation” because clearly you are closer to delivering value with working software than with a document. The best measure of mileage is the completed option: working software, a drug that can be tested, the bedroom in my loft. In the Simple and Complicated contexts, where the road is known, mileage is actually a pretty good measure. Not so much for the Complex, but it’s still the next best - and many organisations would fare much better if they stuck to this as a measure of progress rather than putting confidence in the current position in relation to a plan.
So how do we measure when the option is “complete”? Isn’t it subjective? It’s obvious that my bedroom wouldn’t be accepted as complete if the builder hadn’t connected the wiring so that electricity was available in the room. In the same way, working software isn’t complete until it integrates with other components that enable it to create value. A drug being developed isn’t complete until it can be trialled. Think of it this way: for an option to be complete, it needs to be at the point where it can be used or executed, because anything less requires further work, possibly a lot more - in the complex domain we can’t be sure. Be conscious of what a completed option should look like - the more complex the context, the more concerned you should be with anything less than an executable state.
In Agile, teams talk about a Definition of Done to avoid doubt. It simply makes explicit everything that we think should happen but that may be assumed by some and missed by others. This may include automated tests being in place and documentation being updated. Things that are unique to a team, or ambiguous, belong in a Definition of Done. A working, integrated solution should be viewed as being as important as connecting the bedroom up to the electricity. In the latter case, it is so obvious that I don’t need to make it explicit to the builder; in the former case, it would be an advanced organisation that recognises it as obvious - and they will do well from it!
A completed option is executable.
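As a rough sketch of the idea - my illustration only, with hypothetical checklist items that each team would replace with its own - a Definition of Done can be made explicit enough to check an option against:

```python
# A rough sketch, not from the book: a Definition of Done made explicit as a
# checklist. The items are hypothetical examples a team might agree; an option
# only counts as done - i.e. executable - when every item holds.

DEFINITION_OF_DONE = (
    "automated tests in place and passing",
    "integrated with the components it depends on",
    "documentation updated",
    "deployable, so the option is executable",
)

def is_done(checks: dict) -> bool:
    """True only when every agreed item in the Definition of Done is satisfied."""
    return all(checks.get(item, False) for item in DEFINITION_OF_DONE)

# Example: a feature that works locally but isn't integrated is not a completed option.
feature_checks = {
    "automated tests in place and passing": True,
    "integrated with the components it depends on": False,
    "documentation updated": True,
    "deployable, so the option is executable": False,
}
print(is_done(feature_checks))  # False
```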
Can you see why management shouldn’t be interested in number of story points[2], number of stories completed or hours worked? They are too far removed from delivery of verifiable value.
What should management be looking at?
Imagine the following options as measures of progress.
a. Each iteration I get a car that I can drive around.
b. Each iteration I get a list of completed work that’s been done to a car.
c. Each iteration I see how many hours or work units the team has managed to do.
If your life depended on getting out a working product that affects the outcome, which would you use as a measure of progress?
In Simple and Complicated contexts (b.) will suffice if the right people are working on it, whilst (a.) is clearly the better option. (c.) is rarely useful even in these contexts - if you focus on this you will surely have everyone busy, but producing value? Management should be leaving (b.) and (c.) to the teams.
What we should aim for is the following:
The team decides to go with X options which fit into a given time box. X could be 1, could be 10.
At the end of the time box, a number of these options have been implemented. Some may not have been finished.
The team and relevant stakeholders, including management, review the implemented options and their effect on the outcome. This is the progress. Management asks the 5 questions of themselves. If it’s too soon to measure an option’s effect on the outcome, it will need to be measured as soon as the data is available.
Feedback, learning and continue, pivot or ditch decisions are made.
The process loops back around (sketched in code below).
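Here is the promised sketch of that loop - my illustration only, with hypothetical names, fields and a simplistic decision rule - to show the shape of a review that looks at implemented options and their effect on the outcome:

```python
# A rough sketch, not from the book, of the review loop above. The Option class,
# its fields and the decision rule are hypothetical; the point is that each time
# box ends with implemented options reviewed against their effect on the outcome.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Option:
    name: str
    implemented: bool = False
    outcome_effect: Optional[float] = None  # None means it's too soon to measure


def review(time_box_options):
    for option in time_box_options:
        if not option.implemented:
            print(f"{option.name}: not finished - carry over, rework or ditch")
        elif option.outcome_effect is None:
            print(f"{option.name}: measure its effect as soon as the data is available")
        elif option.outcome_effect > 0:
            print(f"{option.name}: moved the outcome - continue")
        else:
            print(f"{option.name}: no effect - tweak, pivot or ditch (we learned something)")


# One pass around the loop for a single time box.
review([
    Option("guest checkout", implemented=True, outcome_effect=0.012),
    Option("price comparison widget", implemented=True, outcome_effect=None),
    Option("loyalty scheme", implemented=False),
])
```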
Management should be reviewing progress by objectively looking at the implemented options on a given cadence. What you see above may look somewhat like a Sprint Review used in Scrum - but we can be framework-agnostic. This isn’t rocket science - everyone reviews progress - the difference is in using the right measures. That means looking at things of verifiable value (the option) rather than tenuous links to the things we care about. Management shouldn’t be looking at anything other than options and outcomes. No proxies, no tenuous links, no self-deception.
Management tracking the work at a task level can show lack of trust. You’re better off finding great people that you don’t feel the need to task track.
For Simple and some Complicated work, task-level tracking, done with a regularity appropriate to the situation, will give a fairly accurate view of where you are in relation to the destination. You can see how much there is to do in relation to what’s been done and take action accordingly.
For Complex work this is futile. More than in any other domain, you need people who can make decisions on their own and escalate only when something is outside their sphere of influence. If they’re not on track to deliver an option in a given time period, allow them to work out what to do without breathing down their necks. Constraints and difficulties can result in creative solutions. Aside from distracting the teams, looking at the tasks required to deliver the option will, as we have seen, be misleading, because in a Complex context the tasks may change.
Remember:
Most organisations are operating in Complex contexts. In this situation the effect of the option on the outcome is like your SatNav telling you how much closer you are to your destination. It is the only true measure of progress.
If you cannot measure the outcome at a given point, a completed option indicates you are making progress - it will either affect the outcome, or you will learn something.
Summary of domains: table based on the Cynefin framework [3]
The Good (more of this)
The Bad (can be misused or produce negative results)
The Ugly (bad practice in most organisational contexts)
[1] As an example: ‘Abandoned NHS IT System Has Cost £10bn so Far’. The Guardian, 18 September 2013.
[2] Story pointing is an Agile technique whereby User Stories are given a point value. This represents the amount of work, the complexity, and any risk and uncertainty around the story. The stories are pointed relative to one another. It’s a heuristic, not a science, and for a dedicated, stable team it can make the process of deciding what to work on, and how much work to pull in, faster. It’s a better, more systemic representation of the work than time-based estimation. The latter takes longer, is generally less accurate (as it is less holistic) and encourages managers to hold people to their estimates.
[3] Snowden, D. J., and M.E. Boone. ‘A Leader’s Framework for Decision Making’. Harvard Business Review 85, no. 11 (2007): 68–76.