High, High, High: Why prioritisation goes bad

Prioritisation is simply the act of deciding which things are most important. We’ve all done it in one form or another and it doesn’t seem very complicated. Usually, we create a list and juggle the items around to put the most important ones at the top. Or if the list is long, we rate the priority of each item, for example using ‘High’, ‘Medium’ and ‘Low’, often discussing the ratings as a team.

However, in my experience, although it seems simple enough, prioritisation never quite works the way you expect it to. For example:

  • You end up with a set of High priorities that don’t feel right – they don’t fit together, or don’t add up to delivering something interesting.
  • You realise that you need to deliver some of the Low priority items in order to deliver some of the High priority items.
  • You notice that some of the items are too large to be delivered in one go and need to be broken up.
  • You find that, although you have captured a reasonable sense of what is important, this doesn’t seem to help determine what should go into each of the releases that you have planned.
  • You realise that you have no idea what some of the items in the list mean – they are poorly written, the team has forgotten what they refer to, or they take the form of a general aim rather than a specific deliverable (‘make checkout quicker’).
  • You end up with everything rated ‘High’.

Having come up against issues like these again on a recent project, I decided to take some time to analyse the problem and try to come up with a framework for thinking about prioritisation so I don’t fall into the same traps again.


Typically when the need for prioritisation arises, you are presented with a single list of items and asked to determine which are the most important. When looking through the list, some issues may already be apparent, for example:

  • They vary significantly in size: some require only a few days of work, whilst others may take months.
  • Some items represent complete pieces of work, whilst others are just small tasks within bigger projects.
  • Some items need to be done before others, rather than prioritised against them.

However, these problems are usually only symptoms of a larger problem: that you have one list, when you should have several.

Any prioritisation exercise is just one part of a larger problem space: one level in a hierarchy of options and choices. Making optimal choices requires that you first understand the bigger picture.


At the very top of the hierarchy, any activity is governed by a set of overarching aims that explain why the work is being done and what it is hoped will be achieved. Although they are very important, the aims are usually either considered too obvious to be explicitly stated, or simply forgotten in the course of discussions. Clearly formulating and stating the aims can help to promote a common understanding amongst stakeholders, and referring back to the aims during priority discussions can break down disagreements over the rating of particular items.


The items are the individual activities that are being considered and are intended to work towards the overarching aims. For simplicity, we can see these activities as being made up of three levels:

  • Streams: At the top come general streams of activity. At this level, the items are typically large in scale and will take considerable time to achieve. They will often have the property of delivering incremental value – i.e. they can be partially delivered, providing progressively more value as the level of completeness increases. Prioritisation decisions at this level are about focus and investment. Certain items may be excluded completely, but more common is that the discussion is around how much investment to give each one. Examples might be: improving performance of the backend; fixing bugs reported by end users; and building a new user interface.
  • Deliverables: In the middle level are large deliverables that will provide a specific recognisable benefit and contribute to progress on the overall streams. Typically they are all-or-nothing deliverables – they provide no value until they are complete, although it may be possible to vary the overall quality or quantity of delivery. At this level, prioritisation is generally about choosing which of these deliverables to pursue and which to exclude, or postpone until later. Examples might be: upgrading the servers; creating a new datacentre in Asia; and building a new homepage.
  • Tasks: At the lowest level are specific tasks that must be completed in order to make progress on deliverables. Typically, completion of individual tasks does not provide value – this only occurs when the deliverable they are a part of is completed. At this level prioritisation is largely about sequence – which order to do things in so as to maximise efficiency; address dependencies between tasks; address risks and uncertainties; and overall to achieve the deliverable as soon as possible. Examples might be: buying new servers; installing the servers; deploying software to the new servers; testing their operation etc.

A list that contains a mix of items from these different levels cannot be effectively prioritised. To make progress, it is necessary to separate the list into streams, deliverables and tasks, creating three separate lists.

Prioritisation can then start with the top level, defining which streams are most important and how to divide effort between them, bearing in mind the overall aims. Prioritisation can then move on to the middle level, defining which of the deliverables will contribute the most to the selected streams. Finally, prioritisation of individual tasks can take place – working just with those tasks that are related to the selected deliverables.
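As a rough illustration, the separation into three lists can be sketched in a few lines of Python. The item names and level tags here are hypothetical examples, not taken from any real backlog:

```python
# A minimal sketch of separating a mixed backlog into the three levels
# described above. Items and their level tags are invented examples.
from collections import defaultdict

backlog = [
    ("Improve backend performance", "stream"),
    ("Upgrade the servers", "deliverable"),
    ("Buy new servers", "task"),
    ("Fix bugs reported by end users", "stream"),
    ("Build a new homepage", "deliverable"),
    ("Install the servers", "task"),
]

# Partition the single mixed list into one list per level.
lists = defaultdict(list)
for name, level in backlog:
    lists[level].append(name)

# Prioritise top-down: streams first, then deliverables, then tasks.
for level in ("stream", "deliverable", "task"):
    print(level, "->", lists[level])
```

In practice the hard part is the classification itself; the point of the sketch is simply that each level ends up with its own list, prioritised separately.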


Having fixed the list, we are faced with a set of separate, but related, prioritisation events. The best way of prioritising each one depends on its particular characteristics. The prioritisation scenario describes the structure of the situation that you are prioritising for. There are three main types:

  • One-off opportunities: In a one-off opportunity, the prioritisation is determining what will happen in the one and only run of a situation (or at least, the only opportunity that is currently foreseeable and under discussion). The opportunity is also limited in duration, either because there is a specific time limit, or because there is a benefit in completing sooner rather than later. Examples include: preparing for a birthday party; deciding what to include in an annual product release; and choosing which new car to buy.
  • Repeated plays: In a repeated play, the opportunity that you are prioritising for will be repeated several times. Anything that is not done this time around can be rolled over to the next time. This changes the nature of the prioritisation, since the overall question becomes more about timing than an absolute decision whether or not to do something. Examples include: where to go for dinner on a night out; what to put into the next monthly release; even where to go on holiday this year. In each case, it is conceivable for an item to be highly desirable, but to be postponed to a later play: “we’ll go for Thai food next time”.
  • Ongoing activity: The third scenario is one in which there is no specific end point and in which there are no real intermediate deadlines either. There is a steady state within which progress can be made, but no absolute timescale or points for review. Examples include: the general running of a business; planning upgrades to software with a daily deployment cycle; and maintaining a home.


Once the list has been prepared and the characteristics of the particular prioritisation event are understood, all that remains is to choose an appropriate method.

Often we attempt to assign priorities by marking each item in the list as ‘High’, ‘Medium’ or ‘Low’, or rating using numbers 1, 2 and 3. However, these approaches usually prove to be ineffective. A big part of the problem is that the levels do not have a clearly defined meaning: although it is clear that ‘High’ is more important than ‘Medium’, there is nothing to define where the boundary lies, leaving each individual involved in the discussion to form their own impression. This is what often leads to disagreements and problems like everything ending up ‘High’. As a result these approaches are best avoided.

Here are four alternatives to consider:


MoSCoW prioritisation uses four levels: ‘Must have’, ‘Should have’, ‘Could have’ and ‘Won’t have’. These labels work better because they have an explicit meaning, which is clear to all participants and can be constantly referred back to. “Is this really a ‘Must have’ for the next release? Can we imagine releasing without it?” This gives a much sharper edge to the decisions to be made.

This works well for one-off scenarios dealing with deliverables. It can also be used for repeated plays, if it is focused on the priority of each item for the next release.

It is not effective in ongoing scenarios, because the levels lose their meaning. MoSCoW is ineffective when dealing with streams or tasks, as it doesn’t help to express proportional investment, or deal with sequencing.
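As a sketch, MoSCoW ratings for a one-off release could be recorded and sorted like this. The items and their ratings are invented for illustration:

```python
# A minimal sketch of recording MoSCoW ratings for a one-off release.
# Items and ratings are hypothetical examples.
MOSCOW_ORDER = ["Must have", "Should have", "Could have", "Won't have"]

ratings = {
    "User login": "Must have",
    "Password reset": "Should have",
    "Dark mode": "Could have",
    "Offline support": "Won't have",
}

# Sort the backlog so the firmest commitments come first.
by_priority = sorted(ratings, key=lambda item: MOSCOW_ORDER.index(ratings[item]))
print(by_priority)
```

The explicit label set is what does the work: any item claimed as a ‘Must have’ can be challenged against the shared definition rather than a private impression of ‘High’.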


Forcing every item into a single rank order is a useful method where it is necessary to get a more precise prioritisation between items, or to avoid stakeholders rating everything as ‘must have’. The method works because it breaks down the complex problem of what is important into lots of paired comparisons: is this item more or less important than that one?

It is a useful approach for deliverables, particularly in repeated or ongoing scenarios, where the rank order sets out a roadmap of action, from the highest-priority items down to the lowest, that can be executed over time.

It can also be useful for tasks, as it can be used to capture the sequencing of the items, provided that blockers are taken into account when creating the ranking. However, more complex methods would be needed if we wanted to fully express the dependencies between items; which items could be done in parallel; and to analyse the critical path.
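The paired-comparison idea maps directly onto a comparison sort. In the sketch below the comparison is driven by a hypothetical importance score; in a real workshop each comparison would be a stakeholder judgement (“is this item more or less important than that one?”):

```python
# Forcing a single rank order via paired comparisons.
# The items and importance scores are invented examples.
from functools import cmp_to_key

items = ["New homepage", "Server upgrade", "Asia datacentre"]
importance = {"New homepage": 3, "Server upgrade": 5, "Asia datacentre": 4}

def compare(a, b):
    # Each call answers exactly one paired comparison:
    # negative result means a ranks above b.
    return importance[b] - importance[a]

ranked = sorted(items, key=cmp_to_key(compare))
print(ranked)
```

A comparison sort needs only O(n log n) such questions, which is why ranking a modest backlog by repeated paired comparisons is feasible in a single session.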


In many situations, rating each item according to two factors can be an effective method of identifying the best opportunities that exist in a list. For example, rating items by potential value and likely cost of implementation.

This method is often represented diagrammatically as placing items into a grid made up of four sections, with one factor on the horizontal axis and one factor on the vertical axis. With value and cost, this yields the following sectors:

  • Low value / high cost – to be avoided
  • Low value / low cost – of limited interest
  • High value / high cost – of potential interest, if the return on investment is sufficient
  • High value / low cost – the best areas for investment / low-hanging fruit

This kind of analysis is useful for both streams and deliverables. It can be applied in all scenarios, but is particularly relevant to ongoing scenarios.

It is useful where the stakeholders in the discussion appear to have different, or competing, ideas about what is important, since it forces a more detailed analysis and an explicit discussion about costs and benefits. However, although it helps to identify the best areas for investment, it doesn’t necessarily produce a clear prioritisation decision as an output, so it may be necessary to supplement it with another method, taking into account what has been learned as a result of the analysis.
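A minimal sketch of the quadrant classification, assuming hypothetical 1–10 scores with 5 as the midpoint between low and high:

```python
# Placing items into the four value/cost sectors.
# Items and (value, cost) scores are invented examples on a 1-10 scale;
# "high" means strictly above the midpoint of 5.
items = {
    "Rewrite billing": (8, 9),
    "Fix typo on homepage": (2, 1),
    "Cache search results": (7, 2),
    "Redesign admin panel": (3, 8),
}

def sector(value, cost):
    v = "High value" if value > 5 else "Low value"
    c = "high cost" if cost > 5 else "low cost"
    return f"{v} / {c}"

for name, (value, cost) in items.items():
    print(name, "->", sector(value, cost))
```

The useful output is not the labels themselves but the conversation they force: stakeholders must justify the value and cost scores before any item can land in the ‘best investment’ quadrant.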


In some cases, when the complexity of the domain and the importance of the prioritisation decision are high, a more detailed method may be needed. This is particularly true where there is a requirement for an audit trail to explain how the final choice was arrived at.

This can be done by scoring each item according to a number of factors (anything from 2 upwards) and then combining those factors, using weights if necessary, to arrive at a single score for each item. These scores can then be used to sort the list, giving the final priority.

For example, items could be scored according to their likely benefit, the potential penalty of not doing them, the cost of implementation and the perceived implementation risks.
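A sketch of weighted scoring using those four factors. The weights, items and scores are all hypothetical; note that cost and risk count against an item, so their weights are negative:

```python
# Weighted scoring across several factors. All weights, items and
# factor scores are invented examples. Cost and risk reduce the total,
# hence the negative weights.
weights = {"benefit": 0.4, "penalty": 0.3, "cost": -0.2, "risk": -0.1}

items = {
    "Upgrade servers": {"benefit": 8, "penalty": 6, "cost": 5, "risk": 3},
    "New homepage":    {"benefit": 6, "penalty": 2, "cost": 4, "risk": 2},
    "Asia datacentre": {"benefit": 9, "penalty": 3, "cost": 9, "risk": 7},
}

# Combine the factor scores into a single weighted score per item.
scores = {
    name: sum(weights[f] * factors[f] for f in weights)
    for name, factors in items.items()
}

# Highest score = highest priority; the per-factor scores and weights
# form the audit trail for how the decision was reached.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Keeping the factor scores and weights alongside the result is what provides the audit trail: anyone can later re-run the calculation, or challenge an individual score rather than the whole decision.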

This approach is useful for streams and deliverables, particularly where the priorities are not obvious, or where there is disagreement about the right course of action.


When the need for prioritisation crops up next, take a step back and approach it systematically.

Establish the overall aims; break the list up into relevant streams, deliverables and tasks; and then work through the hierarchy, picking an appropriate prioritisation method for each list.

Contribute to the discussion: What prioritisation methods have you found useful? What pitfalls have you experienced?


