Collaborative Decision-Making - The Ultimate Guide

Written by Dan Dures


AHP (Analytic Hierarchy Process) has two core use cases for supporting better decision making: ‘build a portfolio’ and ‘pick a winner’. Both are premised on structured collaborative decision making and share many common features, but this guide will focus on the latter.

If you are interested in ‘Build a Portfolio’ decisions, please look at our other guides, which cover the solutions we work on most frequently.

In principle ‘Pick a Winner’ can be applied to many scenarios, but these are the most popular applications:

  • Vendor Selection
  • Value engineering / Process design
  • Go / No-Go Decisions
  • Public Policy development
  • Candidate selection for recruitment

We will explore these use cases throughout this guide.

There are many excellent academic reviews that validate this approach (for example this, from the automobile sector), or you can turn to the case studies of Professor Saaty, AHP’s creator. This guide, however, will focus on the practical application of AHP as experienced by our clients here at TransparentChoice. For more background on AHP, look here.

Let’s dive in.

 

 

Why You Need Collaborative Decision Making

The most common reason we hear is that ‘something isn’t quite right’ - clients realise there’s a better way to make decisions, and here’s why:

  • The opportunity cost of poor decisions is vast. McKinsey’s research tells us only 20% of organizations believe they excel at decision-making, and given the clear correlation with financial performance, getting decision making right is one of the biggest edges you can build.

  • The waste of human effort from ineffective decision making is huge. McKinsey estimates that the Fortune 500 alone loses $250m a year in labour costs to ineffective decision making.

  • Speed matters. Slow, ambiguous decisions sap the energy of an organisation, while creating capability to make effective decisions at pace will drive both performance and morale.

  • Transparent decisions are better. Autocratic edicts from anonymous committees do not build trust either within or outside an organisation. Being able to justify an outcome builds trust and therefore compliance.

 

When to Use Collaborative Decision Making

Not all decisions lend themselves to AHP, but here are the ones that do:

  • One-way decisions - If a decision is effectively irreversible then you need to get it right. Non-software engineering, capital investments, hiring, and public policy are all good examples.

  • Big Bets - If a decision is important then it deserves to be well made. Selecting a core component in your solution, deciding where to put a new airport, deciding on whether to stop a €200m project are all classic examples.

  • Complex Decisions – If a decision has complex criteria that make it too large for a small group to assess effectively, then AHP will decompose it into smaller, addressable components.

  • Divided Stakeholders – If you have conflict within your stakeholder group, then AHP provides a mechanism for reconciling seemingly opposed points of view. The ability to quantify and document also adds surety to the outcome – often key for public policy.

  • Best Practice – Using structured collaborative decision making for smaller everyday decisions makes sense too, provided it’s baked into your operating model. Users develop the muscle memory to make fast effective decisions, teams build collaboration into their DNA.

 

Setting Up a Selection Process

  1. Defining Criteria
  2. Scoring Alternatives
  3. Picking the winner – analysing the data

Before defining the goals of the decision, it is important to identify the qualification hurdles which alternatives must meet. This has 3 key benefits:

  • Keep qualification criteria out of the selection criteria, so the model does not get distorted by hygiene factors

  • Apply hygiene factors to alternatives to remove unsuitable alternatives from full assessment

  • Check you have enough credible alternatives (i.e. more than 1) to warrant detailed analysis

For example, if you are running a military procurement process there may be legislation relating to where your suppliers come from, or an IT procurement may be limited to choices with a specific level of security certification. Removing non-compliant options simplifies the decision.
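
To make the gating step concrete, here is a minimal sketch in Python. The vendor data, the approved-country list and the certification requirement are illustrative assumptions drawn from the procurement examples above, not a prescribed implementation:

```python
# A minimal sketch of applying gating factors before full AHP assessment.
# Vendors and gate values are hypothetical.
vendors = [
    {"name": "Vendor A", "country": "UK", "cert": "ISO 27001"},
    {"name": "Vendor B", "country": "XX", "cert": "ISO 27001"},
    {"name": "Vendor C", "country": "US", "cert": None},
]

APPROVED_COUNTRIES = {"UK", "US", "CA"}   # assumed legislative constraint
REQUIRED_CERT = "ISO 27001"               # assumed security hurdle

# Hygiene factors: remove unsuitable alternatives before any scoring.
qualified = [v for v in vendors
             if v["country"] in APPROVED_COUNTRIES and v["cert"] == REQUIRED_CERT]

print([v["name"] for v in qualified])     # ['Vendor A']

# Check there are enough credible alternatives to warrant full analysis.
if len(qualified) < 2:
    print("Fewer than 2 credible alternatives - full analysis may not be warranted.")
```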

Agreeing the role of Cost & Risk is part of this step. They can be treated as pre-conditions, selection criteria, parallel models… or a mix. Having this clearly thought through is key to producing a clear, effective model. We will touch on this later.

1. Defining Criteria

The core of an AHP model is a clearly defined goal, broken down into measurable weighted criteria. We recommend 5 steps:

 

Engage with Stakeholders (a lot)

A great decision is one that people buy into, so building engagement from the outset is key. Here are our recommended steps to get there:

  • Start with the high-level goal: what is your objective from this decision? Be clear and precise about what you want to achieve, breaking this into criteria for the model. This will surface differences in opinion, driven by conflicting points of view. Choosing the location of an airport might mean accessible jobs to one group, or minimising environmental impact to another. Capturing this breadth of opinion is how your decision becomes inclusive.

  • Work closely with your leadership team: failure to align at this point will undermine the whole project, so it’s critical not to rush. Our clients often talk about investing time up front to structure criteria, to save time later with a much smoother selection process.

  • Bring in Subject Matter Experts to develop sub-criteria, using their knowledge to create precise measurable definitions. Avoid ‘woolly’ generalisations.

  • Be prepared to iterate. These things are rarely right first time, so ensure you have scope to integrate feedback before committing a model to weighting.

 

Build a well-structured model (with the right level of precision)

For a well-built ‘pick a winner’ model we recommend the following steps:

  • If you have run a similar process previously consider the outcomes: what mattered most? Link these learnings to your new model.

  • Develop the model before you have alternatives. This will make the model fairer and help inform the choice of alternatives.

  • Apply AHP best practice: limit the number of criteria at any given level to 7 to keep pairwise comparison effective. Where you have more than 7, consider how to cluster.

  • Clustering should be an objective, iterative process to combine the contributions of the leadership team and experts. Look for themes in the detail to cluster more effectively.

  • Do not fragment measurable high-level goals unnecessarily as this adds noise to the model. For example if “time to complete” is a criterion this can be measured and scored. Breaking this into “time to complete steps A/B/C” will distort the result. It doesn’t matter if step A is slow, if it facilitates far faster performance in B & C – what matters is when the whole job is done.

  • If a sub-criterion is not measurable then consider a ‘sub-sub’ criterion as it may simply be too high level to score.

  • Often ‘less is more’. For highly technical specifications detail may be needed, but if a sub-criterion is worth less than (say) 1% of the total model weight then consider if it is really material.

  • Include clear descriptions that make criteria unambiguous. If respondents can interpret the same criterion in multiple ways, it will introduce noise into the model.

 

Apply Weights (with a pairwise review)

Pairwise comparison is the mechanism for making seemingly hard-to-compare criteria comparable (a short sketch follows this list):

  • For simple criteria it is ideal for leadership to complete this step: effectively defining what matters to the organization as a whole.

  • For more complex criteria SMEs should provide specific insight on any branch of the tree which is too specialist to be scored by the leadership.

  • In both cases the key principle is don’t guess. If a respondent has no value to add on a decision they should not attempt to fake a point of view.

  • Use the weighting process to share perspectives, pooling knowledge to reach a collective outcome with far less noise than an individual would produce.

  • ‘Gut check’ the outcome of your review. Do these weights make intuitive sense? For complex decisions look especially closely at local weights.
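
To show the pairwise mechanics, here is a minimal sketch in Python using the row geometric mean, a standard approximation to Saaty’s principal-eigenvector method, plus the classic consistency check. The three criteria and the judgment values are illustrative assumptions, not how any particular tool computes weights:

```python
import numpy as np

# Hypothetical pairwise judgments on Saaty's 1-9 scale for three criteria.
# A[i][j] = how much more important criterion i is than j; A[j][i] = 1/A[i][j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Approximate the principal eigenvector with the row geometric mean,
# then normalise so the weights sum to 1.
geo_means = A.prod(axis=1) ** (1 / A.shape[0])
weights = geo_means / geo_means.sum()

# Consistency check: compare the principal eigenvalue to n.
# A consistency ratio below ~0.10 is conventionally treated as acceptable.
n = A.shape[0]
lambda_max = (A @ weights / weights).mean()
CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random indices
CR = CI / RI

print("weights:", weights.round(3))       # ~[0.648 0.23 0.122]
print("consistency ratio:", round(CR, 3)) # ~0.003 - well under 0.10
```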

 

Work out how to deal with Risk (there is no single right answer!)

Risk is usually a key factor in selecting the best option – but that does not always make it a good criterion for an AHP model. While the rules are not fixed, here are our recommendations:

  • Some risks are hygiene factors. They should be set as Gating Factors outside the model, eliminating any alternatives that cannot get to an acceptable standard.

  • ‘Value’ signifies the benefit that an investment will bring with respect to delivering your goal. If that value includes the reduction of risk (e.g. a cyber upgrade) then it belongs in the criteria model (just remember to invert it, so the criterion would read something like “To what extent will this option reduce cyber risk?”).

  • Where Risk relates to the probability of not delivering on the value, it belongs in its own model as a separate dimension in the decision analysis. Timeliness may well be a criterion if your procurement has a deadline, but risk of missing this deadline should be considered separately.

  • Risk modelling is specific to alternatives, and unlike value criteria should be completed after you have specific candidates ready for final evaluation. This will limit the amount of work (assuming some options drop due to low value score) and enable you to capture specific concerns that arise during the assessment process.

  • Ultimately, when designing your decision you should revert to the conversations you want to be able to have at the end of your analysis – this could come down to “do we want the best option?” or “do we want the safest option?”

 

Work out how to deal with Cost (there is no single right answer!)

Like risk, cost is always a key consideration, but often not a good criterion for an AHP model. These are the key factors we have learned:

  • “Cost”, like risk, is a broad term, so think about the types of cost and how to arrange them in your data collection process.

  • Often people are looking for Value for Money as their ‘north star’. As such having a distinct numerator for Value and denominator for Cost makes this analysis more transparent.

  • To calculate cost, think about what’s in your formula: is it just capital outlay, or do you need to add in an FTE cost for people? Combining them into a single value will help make this usable as a metric. Parallel overlays are confusing.

  • Cost may be a criterion regarding ongoing costs. Maintenance cost would be a typical example of an important selection criterion in a procurement contract.

  • Cost can also be a Gating Factor, eliminating choices which are beyond the budget available. This can save time scoring something you cannot afford.

  • Keeping cost out of the selection criteria also increases the flexibility of a complex decision. If you need to buy 10 components, selecting the cheapest option (that meets minimum criteria) for one component may free up budget for better choices across the other nine – in other words, it enables optimization.

  • Applying consistency checks is important. People are sometimes irrational beings, and their answers can lack internal logic, so always review, and re-score where answers do not make mathematical sense.

  • Again, always revert to the decision you are making. If you are launching a satellite, cost may not be the dominant constraint, so it can become a criterion (contributing to the numerator) while the denominator becomes physical weight – making ‘value per kg’ your North Star metric.

 

2. Scoring Alternatives

You have a weighted criteria model with broad organisational buy-in, and a considered solution for risk & cost… congratulations! You’ve eliminated the obvious non-starters with gating factors. Now how can you score your alternatives?

 

Decide how you will collect measurement data

There are 3 ways to measure alternatives. Consider which works best for each criterion. Don’t be afraid to mix and match to optimise the model:

Surveys built on scales give respondents the means of calibrating alternatives against sub-criteria. Here are our tips for creating scales:

  • Quantifiable scales offer more precision, with less room for (mis)interpretation by scorers.

  • Qualitative scales should have clear descriptions which discriminate levels (i.e. be clear what makes an option “Good” vs. “Very Good”).

  • Each level in your scale carries a weight from 0 to 100, which determines the score. Think about how you apply these weights: a flat 0-25-50-75-100 distribution, or a skewed weighting?

  • Consider the maximum requirement, beyond which being further above the threshold adds no extra value. This level should be weighted at 100. So for example, if “Years in Operation” is a criterion, it may be that 30 years is no more credible than 15 years. In this case you simply score 100 for “>15 years”. Over-delivery on a criterion should not be rewarded.
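
As a minimal sketch of how scale levels translate into scores, here is one possible setup in Python; the scale labels, the skewed weights and the 15-year cap are illustrative assumptions:

```python
# Qualitative scale with a skewed (non-flat) 0-100 weighting per level.
support_scale = {"Poor": 0, "Adequate": 40, "Good": 70, "Very Good": 90, "Excellent": 100}

def score_years_in_operation(years: float) -> float:
    """Quantified scale with a capped maximum: anything over 15 years
    scores the full 100 - over-delivery is not rewarded."""
    cap = 15.0
    return 100.0 * min(years, cap) / cap

print(support_scale["Good"])           # 70
print(score_years_in_operation(30))    # 100.0 - no bonus beyond the cap
print(score_years_in_operation(9))     # 60.0
```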

Data entry means taking an ‘objective’ data source and plugging it into the model directly. Here’s how you can do it:

  • Use the output of a model as a source of measurement. For example if you’re choosing a car you’ll have data on emissions, so you don’t need to survey people to determine the greenest option.

  • Understand the two key caveats which may mean you need to manipulate this data. Firstly, AHP models cannot work with negative numbers; secondly, numbers must be ascending (i.e. the higher the better). So in our example of carbon emissions we would need to ‘reverse’ this data point so that the greenest solution had the highest numerical value.

  • Set a maximum value. If using Geometric Normalization, the highest value in the data set scores “100”, with other options’ scores based on a ratio. As such, a high-performing outlier could make the other options score very poorly. Having no fixed maximum also creates instability, as a new high score would lower those of other alternatives. Setting a maximum also means you will not reward over-delivery: alternatives that exceed the highest value you need.
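
Here is a minimal sketch of those data-entry steps in Python, using the car-emissions example; the figures and the fixed maximum are illustrative assumptions following the ratio approach described above:

```python
# Preparing an 'objective' data column for the model.
emissions_g_per_km = {"Car A": 95, "Car B": 120, "Car C": 180}

# 1. Reverse the data so that higher = better (the greenest car scores
#    highest) and no value is negative.
worst = max(emissions_g_per_km.values())
greenness = {car: worst - value for car, value in emissions_g_per_km.items()}

# 2. Normalise against a fixed maximum rather than the best in the data
#    set, so a single outlier cannot crush everyone else's scores and a
#    late high score cannot destabilise the model.
fixed_max = 100  # assumed cap: the most 'greenness' we need to reward
scores = {car: 100 * min(value, fixed_max) / fixed_max
          for car, value in greenness.items()}

print(scores)  # {'Car A': 85.0, 'Car B': 60.0, 'Car C': 0.0}
```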

Pairwise is the ‘classic’ AHP approach for both criteria weights and scoring alternatives. While this has proven issues for dynamic portfolio selection, it can work well for ‘Pick One’ given these conditions:

  • All the criteria are relevant to all the alternatives. There is no “zero” in Pairwise.

  • No more than 9 (ideally 7) alternatives. Pairwise is ratio based and becomes too cumbersome with too many options.

  • No alternatives join the process late. Scores are relative, so adding new options alters the ratios and can create unintended changes to existing scores (see the sketch after this list).

  • Scorers must be able to cover all alternatives, whereas a scale model allows scoring to be split between different experts.

  • For more read this blog.
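
To illustrate why late joiners are a problem, here is a tiny Python sketch of relative (distributive) normalisation; the raw scores are illustrative assumptions:

```python
# With relative scoring, each option's share depends on everyone in the set.
def normalise(raw):
    total = sum(raw.values())
    return {k: round(100 * v / total, 1) for k, v in raw.items()}

raw = {"A": 8, "B": 6}
print(normalise(raw))   # {'A': 57.1, 'B': 42.9}

raw["C"] = 7            # a late joiner...
print(normalise(raw))   # {'A': 38.1, 'B': 28.6, 'C': 33.3} - existing scores shift
```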

 

Use the wisdom of the team to create better data

Having a single person put scores into a spreadsheet is rarely the best way to judge alternatives. Here’s why:

  • Bias & politics are classic banana skins for good decisions. This spans everything from downright cynical choices made to favour things for the wrong reasons (why do you think there are legal limits on corporate gifts?) to sub-conscious preferences with relatively innocent causes (perhaps favouring a choice with which you’re familiar).

    Whatever challenges you face, they will be there, and collaboration can help mitigate them. It’s far easier to manipulate a business case than it is to divert a structured review with your peers. It’s also more likely that you can think beyond your subliminal bias in an environment set up to invite challenge. This is especially true if you’re the boss. And don’t forget that game-players love ambiguity so a well-built set of criteria will also cut scope for their nonsense.

  • Noise is everywhere and creates serious inconsistency in human judgements. This is the key theme of Daniel Kahneman’s recent book on the topic, and it’s exactly what we see every time a client starts scoring. Two rational people voting honestly with similar levels of knowledge still come up with radically different opinions. Indeed, the same person might differ from one end of the day to the other. By pooling judgements we help cancel out that error.

  • Joint problem solving helps focus teams on solutions, not conflict. Adam Grant’s podcast “The Science of Productive Conflict” puts this into perspective brilliantly. Listen to a couple discussing over-ordering pizza: from one perspective it’s about waste… from another it’s about nourishment. Both are valid points of view, but nominally in conflict. Left unresolved, this becomes entrenched and deeply personal. Listen to each side’s motivation and it’s easier to find a solution (in this case, order freely but commit to eat the leftovers!).

    These skirmishes exist across all organizations, and they distort people’s ability to collaborate effectively. By breaking decisions down by criteria you force people to look at choices from all directions. By inviting people to share their point of view you unpack the outcome and focus on the rationale, where far greater alignment can be achieved.

    Read about how one client benefited from this alignment to halve the time taken to make a Go/No-Go decision on a major public infrastructure project.

  • A recent study in the UK valued the knowledge-based economy at £95bn per year. That vast economic value is based on the output of skilled teams working together to innovate at pace. The Oxford-AZ vaccine story captures this paradigm perfectly.

    For scoring, we see the opportunity to shift decision making from the “loudest person in the room” (as one of our clients recounted) to a team sport, where all the experts in the room can take part. Embed AHP into the DNA of how you make decisions, and you will find that your skilled teams can work together more effectively… and faster. Here is another client talking about that momentum.

  • Collaboration means involving the right people. When you bring scorers together, look for breadth of opinion, diversity, subject matter expertise and a willingness to challenge. Cheerleaders have their place, but it is not in a collaborative decision-making process. Ensure that neither the loudest nor the most senior person dominates.

 

Use Broader Stakeholder Participation to build support for the decision

Giving people a voice, a chance to be heard, is something we often hear as a key benefit of deploying AHP.

  • Some decisions are not universally popular. When one client was tasked with locating a new airport, it was never going to be popular with local environmentalists. However, by listening to their concerns, and weighing them alongside those of the local community, who were keen for new jobs, the client was able to forge an acceptable consensus.

    Similarly, when looking at taxation systems in a US state: there are always ‘losers’ who will resent an extra burden, but integrating their concerns and demonstrating the needs of others helped build acceptance.

  • A well-made decision has a far greater chance of delivering long-term political acceptance. Engaging broadly, and demonstrating that views were included in the process, makes it harder for decisions to be reversed, as sometimes happens when political leadership changes.

    We heard about this from a project to build a long-term transport policy in Belo Horizonte – where including the votes of 150 people from an array of backgrounds helped make the outcome robust enough to last (5 years and counting!).

 

3. Picking the winner – analysing the data

The value of the data built up in your AHP model is realised through the quality of the final review. Here’s how to complete your project brilliantly:

 

Fix your ‘North Star’

When setting up the model you may have ‘parked’ some key considerations with a view to bringing them back alongside value. Again, it’s key to get your leadership on board for this step:

  • Value for Money is the most popular metric. It only works if you keep cost out of the value equation, and it comes highly recommended by numerous clients (a short sketch follows this list).

  • Risk, as discussed above, is typically a hygiene factor rather than a motivator (terms adapted from Herzberg’s Two-Factor Theory). That means you are looking for an acceptability threshold: the point at which a choice is not too risky. Knowing this enables you to focus on the option providing the greatest value.
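
Here is a minimal sketch in Python of this kind of ‘North Star’ review: value for money with a risk gate. The alternatives, scores, costs and the 0.6 risk threshold are illustrative assumptions:

```python
alternatives = [
    # (name, AHP value score 0-100, total cost in £k, risk 0-1 where lower is safer)
    ("Vendor A", 82, 400, 0.35),
    ("Vendor B", 74, 250, 0.20),
    ("Vendor C", 91, 900, 0.70),
]

RISK_THRESHOLD = 0.6  # hygiene factor: anything riskier than this is out

# Apply the acceptability threshold, then rank survivors by value for money.
acceptable = [(name, value / cost) for name, value, cost, risk in alternatives
              if risk <= RISK_THRESHOLD]

for name, vfm in sorted(acceptable, key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {vfm:.3f} value per £k")
# Vendor B: 0.296 value per £k
# Vendor A: 0.205 value per £k
# (Vendor C is excluded by the risk gate despite the highest raw value.)
```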

 

Explore your model

In a ‘Pick One’ model your outcome, at its most basic level, is a set of scores. Choose the highest one and you’re done. But life is rarely that simple:

  • Understand the sources of value to show ‘pros’ and ‘cons’. While the weights have already done this mathematically, it makes the final call more accessible.

  • Drill into sensitivities to see what happens if you change a given weight (a sketch follows this list). This can be vital if you have a close call between a couple of front-runners.

  • Create scenarios based on alternative weight sets. Work through some ‘what if’ outcomes. Are some options more robust given multiple potential futures?
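
As a minimal sketch of a one-way sensitivity check, here is one way to nudge a criterion’s weight, renormalise, and see whether the ranking flips. The weights and scores are illustrative assumptions:

```python
weights = {"quality": 0.50, "delivery": 0.30, "support": 0.20}
scores = {  # alternative -> score per criterion (0-100)
    "Option X": {"quality": 90, "delivery": 60, "support": 70},
    "Option Y": {"quality": 70, "delivery": 85, "support": 80},
}

def totals(w):
    """Weighted total score for each alternative."""
    return {alt: round(sum(w[c] * s[c] for c in w), 2) for alt, s in scores.items()}

print(totals(weights))  # {'Option X': 77.0, 'Option Y': 76.5} - a close call

# What if 'delivery' mattered a little more? Shift the weight and renormalise.
shifted = dict(weights, delivery=weights["delivery"] + 0.10)
norm = sum(shifted.values())
shifted = {c: w / norm for c, w in shifted.items()}

print(totals(shifted))  # Option Y edges ahead (77.27 vs 75.45) - a sensitive call
```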

 

Be Transparent

The decision-making process, the scoring, the North Star, the sensitivity analysis: together these give you a wealth of data to validate the eventual decision you make. Make this data a point of record. Even better, follow the example of this client and use it to replace that 200-page document nobody would ever read. Following an AHP process means that once you’ve picked your winner, you can be confident that your selection represents a quality decision.

 

Why Software Makes Collaboration Work

If you’re still reading then you’ve probably got a real-life challenge you need to deliver, and you can see the value in an AHP-led collaborative approach.

But do you really need software to do this?

If you have a relatively simple decision, and just need to apply more structure and inclusiveness then you may be confident in applying these principles yourself. You may feel that 80% of the benefit can be captured in a well-built spreadsheet. If this is you, then that’s great.

But before you commit to this route it’s worth weighing up the hidden costs. Read our blog on the dangers of the magic spreadsheet. Calculate the savings you can make through better use of people’s time. Look at the value of the decision and check what a 2% improvement in outcome would be worth.

If you’re starting to think that software might be a good idea we’d love to talk.

TransparentChoice: Our Take on Collaborative Decision-Making Software

In addition to the core AHP benefits we’ve outlined above we also offer 5 key points of difference to our clients:

  • The ability to define a "decision process" - the steps to take your alternatives through before, during and after the "AHP bit". Use this for your gating factors, risk reviews & sign off steps to help organise decision flow & analyse data.

  • A robust audit-trail. See who entered cost data and when, who changed that project risk estimate and why… All of this is laid out in a nice timeline that means you can spend public money with confidence and transparency.

  • When running surveys you can "break out" respondents into groups. This is key to knitting together SMEs’ knowledge, and managing people’s time.

  • The ability to have multiple AHP models as part of an evaluation process. This lets you break apart "risk" and "value," for example, and have a model for each. This may sound geeky, but it's extremely powerful.

  • Usability is our North Star, as we believe collaborative decision making should be the choice for every forward-thinking leader, not just AHP gurus.

 

 

