Collaborative Decision Making - The Ultimate Guide

Analytic Hierarchy Process (AHP) has two core use cases for supporting better decision making: ‘build a portfolio’ and ‘pick a winner’. Both are premised on structured, collaborative decision making and share many common features, but this guide will focus on the latter.

There are a lot of fantastic academic reviews out there which validate this approach (example from the automobile sector), or you can revert to the case studies of Professor Saaty, AHP’s creator. This guide, however, will focus on the practical application of AHP as experienced by our clients here at TransparentChoice.

The cost of poor decision making

  • The upside of good decisions is vast. McKinsey’s research tells us only 20% of organizations excel at decision making, with a clear correlation to financial outperformance demonstrating the value of building this capability.
  • Conversely, the human cost of ineffective decision making is also huge. McKinsey estimates the Fortune 500 wastes $250m annually on labor costs linked to it.
  • The time cost of poor decision making is another source of waste in many organizations. There are lost hours in meetings that go nowhere during planning, then knock-on effects of delays in starting projects. It all adds up.
  • The disruption cost of disputed outcomes can be vast. For a public organization this could mean a legal challenge; for a business, it could mean non-compliance and political maneuvering. Time, money, and energy are all at risk.

When to use collaborative decision making

  • Vendor selection – Watch this micro-demo to see how to select a supplier for a new software solution using an AHP model as part of a selection process.
  • Site selection – Notably for controversial public infrastructure projects, such as picking the location for a new airport.
  • Engineering design projects – See how this Systems Integrator made AHP part of their design development best practice.
  • Build vs Buy decisions – See how one client analyzed the relative merits of in-house vs. external development of a new ticketing solution for a metro system.
  • Go / No-Go decisions – Check out this case study to see how one client cut down the time to make a key decision in a public infrastructure project.
  • Public Policy development – AHP can help make policy more transparent and more inclusive. Find out more here.
  • … or all of the above & more – Best-in-class use of AHP means putting data science into multiple decisions (and prioritization).

Are you ready to make a collaborative decision? 3-point checklist

  1. Ensure you have senior stakeholder alignment to define the objectives of your model and support the outcome of your review.
  2. Check you have the data you need – AHP can suffer from “Garbage in-Garbage Out” much as any other analytical solution.
  3. Identify gating factors (more on them to come) and remove alternatives that do not meet them. Make sure you have more than one choice left.

Assuming you’ve passed these checks, you’re ready to build a model. Here’s how.

1. Engage with Stakeholders (a lot)

  • Identify what different stakeholders are looking for in the selection process. Experience, role, and psychological preference mean every stakeholder has a unique perspective to bring into the group, so diversity will add value.
  • Work closely with your leadership team to build alignment. Our clients often talk about investing time up front to save time later.
  • Bring in Subject Matter Experts to develop sub-criteria, using their knowledge to create precise scoreable definitions. Avoid ‘woolly’ generalizations.
  • Be prepared to iterate. These things are rarely right first time, so ensure you have time to integrate feedback before jumping into weighting and scoring.

2. Define well-structured, unambiguous criteria (with the right level of precision)

  • If you have run a similar process previously, consider the outcomes: what mattered most? Link these learnings to your new model.
  • Develop the model before you have alternatives. This will make the model fairer and help inform the choice of alternatives.
  • Apply AHP best practice: limit the number of criteria on a given level to 5 to keep pairwise comparison effective. Where you have more, consider how to cluster them.
  • Clustering should be an objective exercise in identifying themes and breaking them down to the level where there is no ambiguity in their meaning. Turn fluffy words (“develop culture”) into precise outcomes (“drive retention”).
  • Do not fragment measurable high-level goals unnecessarily, as this adds noise to the model. For example, if “time to complete” is a criterion, it can be measured and scored. Breaking it into “time to complete steps A/B/C” will distort the result: it doesn’t matter if step A is slow if it facilitates far faster performance in B & C – what matters is when the whole job is done.
  • If a sub-criterion is not measurable, consider a ‘sub-sub’ criterion: the original may simply be too high level to score.
  • ‘Less is more’. For technical specifications detail may be needed, but otherwise, if a sub-criterion is worth less than c.1% of the model, consider whether it’s adding enough value.
  • Include clear descriptions that make criteria unambiguous. If respondents can interpret the same criterion in multiple ways, it will generate noise.

3. Work out how to deal with Risk (there is no single right answer!)

  • Some risks are hygiene factors. Set them as Gating Factors outside the model, eliminating alternatives that cannot get to an acceptable standard.
  • ‘Value’ signifies the benefit that an investment will bring with respect to delivering your goal. If that value includes the reduction of risk (e.g. a cyber upgrade) then it belongs in the criteria model (just remember to invert it, so the criterion would be something like “To what extent will this option reduce cyber risk?”).
  • Where Risk relates to the probability of not delivering a project to plan, it belongs in its own model as a separate dimension in the decision analysis. Timeliness may well be a criterion if your procurement has a deadline, but risk of missing this deadline should be considered separately.
  • Risk modelling is specific to alternatives, and unlike value criteria should be completed after you have specific candidates ready for final evaluation. This will limit the amount of work (assuming some options drop due to low value score) and enable you to capture specific concerns that arise during the assessment process.
  • Ultimately, when designing your decision, revert to the conversations you want to be able to have at the end of your analysis. This could come down to “do we want the best option that is safe enough?” or “do we want the safest option that is good enough?” (see the sketch after this list).
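
To make those two decision rules concrete, here is a minimal sketch in Python. The alternatives, scores, and thresholds are all invented for illustration; in practice the value scores would come from your AHP model and the risk scores from your separate risk model.

```python
# A minimal sketch of keeping Value and Risk as separate dimensions.
# All scores and thresholds below are invented for illustration.
alternatives = {
    "Option A": {"value": 0.62, "risk": 0.70},  # risk: higher = riskier
    "Option B": {"value": 0.55, "risk": 0.30},
    "Option C": {"value": 0.40, "risk": 0.20},
}

# "Best option that is safe enough": filter on risk, then maximize value.
safe_enough = {k: v for k, v in alternatives.items() if v["risk"] <= 0.5}
best_safe = max(safe_enough, key=lambda k: safe_enough[k]["value"])

# "Safest option that is good enough": filter on value, then minimize risk.
good_enough = {k: v for k, v in alternatives.items() if v["value"] >= 0.5}
safest_good = min(good_enough, key=lambda k: good_enough[k]["risk"])

print(best_safe)    # Option B
print(safest_good)  # Option B
```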

4. Work out how to deal with Cost (there is no single right answer!)

  • As a broad rule cost should not be in your Value model, but this doesn’t mean it isn’t a key factor in your selection process.
  • Cost is often a Gating Factor, eliminating choices which are beyond the budget available. This can save time scoring something you cannot afford.
  • Value for Money can be a great KPI. As such having a distinct numerator for Value and denominator for Cost makes this analysis possible.
  • Keeping cost out of the selection criteria also increases the flexibility of a complex decision. If you need to buy 10 components, perhaps selecting the cheapest (that meets minimum criteria) for one enables you to include better choices for the other nine – i.e. delivering optimization.
  • Cost may be a criterion regarding ongoing costs. Maintenance costs would be a typical example of an important selection criterion in a procurement contract.
  • To calculate cost, think about what’s in your formula: is it just capital outlay, or do you need to add in an FTE cost for people? Combining them into a single value will help make cost usable as a metric (see the sketch after this list).
  • Again, always revert to the decision you are making. If you are launching a satellite, cost may not be the key consideration, so it can become a criterion (driving the numerator) while the denominator is physical weight – therefore making ‘Value per kg’ your main selection goal.
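
To make the numerator/denominator idea concrete, here is a minimal sketch in Python. The cost components, the 5-year horizon, and the vendor figures are all invented for illustration; your own cost formula will depend on your procurement.

```python
# A minimal sketch of a single cost metric and a Value for Money KPI.
# All figures and the 5-year horizon are invented for illustration.

def total_cost(capital_outlay, annual_fte_cost, annual_maintenance, years=5):
    """Combine one-off and ongoing costs into a single comparable number."""
    return capital_outlay + years * (annual_fte_cost + annual_maintenance)

# Value scores would come from your AHP model; costs from your data plan.
alternatives = {
    "Vendor A": {"value": 0.52, "cost": total_cost(200_000, 80_000, 15_000)},
    "Vendor B": {"value": 0.48, "cost": total_cost(120_000, 90_000, 25_000)},
}

# Value for Money: AHP value as the numerator, cost as the denominator.
for name, a in alternatives.items():
    print(f"{name}: {1_000_000 * a['value'] / a['cost']:.2f} value per $1m")
```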

5. Apply Weights (with a pairwise review)

  • Trust the decision science. Scoring the relative importance of criteria pairwise is the best way to build a weight set. Checking consistency then forces you to address points where you disagree with yourself (see the sketch after this list).
  • Apply moderation: a “Very Strong” vote in favor of one option will leave the other with a very low weight in the model, which may not be intended.
  • For simple criteria get leadership to vote, so the outcome represents policy.
  • For complex criteria SMEs can provide specific insight on a given branch.
  • Never guess. An ill-informed opinion is noise. Let those who know vote.
  • Share perspectives to reach a collective outcome – more knowledge, less noise.
  • ‘Gut check’ the results. Do they make sense?
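
For the curious, here is a minimal sketch of the decision science involved. This is the standard AHP calculation (not TransparentChoice’s own code), with invented criteria and judgements: weights come from the principal eigenvector of the pairwise matrix, and the consistency ratio flags where judgements disagree with themselves.

```python
# A minimal sketch of standard AHP weighting; the criteria and judgements
# below are invented for illustration.
import numpy as np

# Pairwise matrix: entry [i][j] says how much more important criterion i
# is than criterion j, on Saaty's 1-9 scale.
criteria = ["cost to serve", "ease of use", "scalability"]
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Weights = principal eigenvector of the matrix, normalized to sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
weights = eigenvectors[:, k].real
weights = weights / weights.sum()

# Consistency ratio: how far the judgements are from perfect consistency.
n = len(criteria)
CI = (eigenvalues.real[k] - n) / (n - 1)
RI = 0.58  # Saaty's random index for a 3x3 matrix
CR = CI / RI

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.2f}")
print(f"consistency ratio: {CR:.3f} (aim for < 0.10)")
```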

1. Scoring alternatives means testing credible choices against the criteria you have just built. Here’s how:

  • Remove any options which fail your gating factors. Save time by doing this before a comprehensive review.
  • Build a data plan and map out where your sub-criterion scores will come from. There are 3 options, detailed below, and you can pick any for a given sub-criterion. Check out this micro-demo to learn more.
  • Don’t get overwhelmed with mountains of data: work out which signals add value to your criteria and focus on them.
  • Check out our handy template if you’re ready to start planning.

2. Use a scale built into a survey. Here’s how:

  • Quantifiable scales offer more precision, with less room for (mis)interpretation by scorers, so where possible we recommend them.
  • Qualitative scales should have clear descriptions which discriminate levels (i.e. be clear what makes an option “Good” vs. “Very Good”)
  • Each level in your scale carries a 0-100 weight which determines the score. Think about how you apply these weights: a flat 0-25-50-75-100 distribution or a skewed weighting? (See the sketch after this list.)
  • Best practice is to always have a zero, in case an alternative has no value to a given sub-criterion.
  • Ideally, target data collection at a small group of experts. Too many voters may slow down the decision, while having only one will reduce the value of the process. Around 5 is ideal.
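
Here is a minimal sketch of the flat vs. skewed choice, with invented level names, weights, and votes. A skewed weighting rewards top performance more steeply than a flat one.

```python
# A minimal sketch of flat vs. skewed level weights on a qualitative scale.
# Level names, weights, and votes are invented for illustration.
scale_flat = {"None": 0, "Poor": 25, "Good": 50, "Very Good": 75, "Excellent": 100}
scale_skewed = {"None": 0, "Poor": 10, "Good": 35, "Very Good": 65, "Excellent": 100}

def score(votes, scale):
    """Average the level weights chosen by a small group of expert scorers."""
    return sum(scale[v] for v in votes) / len(votes)

votes = ["Good", "Very Good", "Good"]  # e.g. three expert scorers
print(score(votes, scale_flat))    # 58.33...
print(score(votes, scale_skewed))  # 45.0
```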

3. Pairwise can also be used for scoring alternatives (as well as criteria) but only in specific circumstances. Here’s when to use it:

  • All criteria are relevant to all alternatives. There is no “zero” in Pairwise.
  • No more than 9 (ideally 7) alternatives. Pairwise is ratio based and becomes too cumbersome with too many options.
  • No alternatives join the process late. Scores are relative, so adding new options alters ratios and so changes all scores.
  • Scorers must be able to cover all alternatives, while a scale-based model enables scoring to be broken up between different experts.
  • For more, read this guide.

4. Data entry means plugging ‘hard data’ into the model. Here’s how:

  • Use facts. For example, if you’re buying a house plug in the number of bedrooms or new commute time.
  • Use the output of a model. For example, if you’re choosing a car you’ll have data on emissions, so you don’t need to survey people to determine the greenest option.
  • A model can also serve to roll-up several inter-linked sub-criteria into a single driver. For example, one ROI data point could factor in revenue, cost to service and timing of cash flows.
  • Numbers must be ascending (i.e. the higher, the better). So, in our examples of carbon emissions and commute time, we would need to ‘reverse’ the data point so that the best solution has the highest numerical value.
  • AHP cannot work with negative numbers, so shift or rescale any data that falls below zero.
  • Set a maximum threshold above which there is no extra value, and weight it at 100. For example, if “Years in Operation” is a criterion, 30 years is no more credible than 15 years; in this case you simply score 100 for “>15 years”. Over-delivery on a criterion should not be rewarded. (See the sketch after this list.)
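
Here is a minimal sketch of both transformations – reversing a ‘lower is better’ metric and capping over-delivery – with invented thresholds and data points.

```python
# A minimal sketch of turning raw 'hard data' into ascending 0-100 scores.
# Thresholds and data values are invented for illustration.

def score_lower_is_better(value, worst, best):
    """Reverse a metric like commute time or emissions so higher = better."""
    value = min(max(value, best), worst)  # clamp into the scoring range
    return 100 * (worst - value) / (worst - best)

def score_with_cap(value, cap):
    """Cap a 'more is better' metric: beyond the cap adds no extra value."""
    return 100 * min(value, cap) / cap

# Commute time in minutes: 60 is the worst acceptable, 10 the best.
print(score_lower_is_better(25, worst=60, best=10))  # 70.0
# Years in operation, capped at 15: 30 years scores the same as 15.
print(score_with_cap(30, cap=15))  # 100.0
print(score_with_cap(9, cap=15))   # 60.0
```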

1. Get respondents to vote independently

  • Ensure respondents understand the process, especially if this is their first time using an AHP review.
  • Add good descriptions, so everyone has the same understanding of the alternatives, sub-criteria, and scales.
  • Encourage people to add comments to capture their insight.
  • Tell people not to guess. Having no opinion is better than faking one.

2. Set clear guidelines for outcomes

  • Set clear deadlines aligned with your broader decision-making cadence.
  • Set the minimum level of compliance required and chase accordingly.
  • Decide how differences in voting will be resolved. This could be a simple ‘use the average’ rule or a tougher approach that says voters must align.

3. Get respondents together for a facilitated discussion

  • Set out rules for a productive meeting: how much time to spend on each question, what order to tackle questions in, whether votes should be anonymous or named, etc.
  • Prepare people to listen, consider re-voting, and value outliers’ insight.
  • Invite people to speak where votes are highly opposed, to foster a debate about the specifics of that alternative in the context of that sub-criterion.
  • Use real time scoring to settle on an acceptable outcome straight away and avoid complex ‘next steps’ that delay decisions.

4. Be ready to learn

  • A new process won’t be perfect first time, but building this capability has the potential to be a competitive advantage for your organization.

BENEFITS OF COLLABORATIVE DECISIONS

This all sounds great… but what benefits can you expect to get from doing all this work?

  1. Trade politics and noise for teamwork and empathy
  2. Make your decisions faster
  3. Reduce the risk of U-turns
  4. Implement dynamic decision-governance

1. Trade politics and noise for teamwork and empathy

  • Bias and politics are the nemesis of good decision making. This spans from cynical choices (‘playing games’) to subconscious preference (e.g. favoring a choice with which you’re familiar). Whatever challenges you face, collaboration helps. It’s easier to manipulate an Excel business case than a structured review with your peers. It’s also more likely that you can think beyond your subliminal bias in an environment set up to invite challenge. This is especially true if you’re the boss.
  • Expertise matters. If you have a project to build a highly technical marketing solution you cannot rely on either engineers or marketeers as the sole source of knowledge. Targeting data collection by sub-criteria and structuring debate in areas of shared understanding means you can make better trade-offs faster.
  • Noise is everywhere and creates serious inconsistency in human judgements. This is the key theme of Daniel Kahneman’s recent book on the topic, and it’s exactly what we see every time a client starts scoring. Two rational people voting honestly with similar levels of knowledge still come up with radically different opinions. Indeed, the same person might differ from one end of the day to the other. Through syndicating judgements, we help cancel out error.
  • Joint problem solving helps focus teams on solutions not conflict. Adam Grant’s Podcast “The Science of Productive Conflict” puts this into perspective brilliantly. Listen to a couple discussing over-ordering pizza. From one perspective it’s about waste… from another it’s about nourishment. Both valid points of view, but nominally in conflict. Left unresolved this becomes entrenched and deeply personal. Listen to respective motivation and it’s easier to find a solution (in this case order freely but commit to eat leftovers).

2. Make your decisions faster

  • Silo-based politics within a decision-making process makes outcomes too slow. Getting competing teams to talk, listen and learn from one another breeds empathy and enables them to focus on the one thing they all agree on: that the decision needs to be made. Read about how one client benefited from this alignment to halve the time taken to make a Go / No-Go decision on a major public infrastructure project.
  • Meetings spent agreeing with one another are pleasant, but pointless. Meetings that don’t yield clear outcomes are frustrating, and cause delays. Through voting up front you find areas of disagreement and use your time to find resolution. Re-vote if you change your mind or use an average if you agree to differ. No issue is left unresolved.
  • High performance organizations develop muscle memory for how to make decisions. Using AHP will feel odd to start with, but as familiarity builds then you pick up speed and confidence. This is how one client talks about getting 10x faster with TransparentChoice.

3. Reduce the risk of U-turns

  • Have you ever been in a meeting dominated by the “loudest person in the room” (as one of our clients recounted), only to realize later that you missed a key consideration? Again, this is especially true if the loudest person is also the most senior. Who wants to tell the boss she’s missed the point? Similarly, round-table discussions can become subject to group-think, where you talk one another into aligned views without due consideration of the alternatives.

    In both cases AHP provides a solution, by collecting views prior to team discussion, making opinions a matter of record not a test of how brave you feel today in a group dynamic.
  • Some decisions are not universally popular. When a client had to locate a new airport, this was never going to sit well with environmentalists. However, listening to their concerns alongside those of the local community helped forge an acceptable consensus.

    Similarly with taxation systems in a US state: there are always ‘losers’ who will resent an extra burden, but integrating their concerns and demonstrating the needs of others helped deliver acceptance.
  • A well-made decision has a far greater chance of delivering long-term political acceptance. Engaging broadly up front makes it harder for decisions to be reversed, as sometimes happens when political leadership changes.

    This happened with a project to build a long-term transport policy in Belo Horizonte – where 150 people from an array of backgrounds helped make the outcome robust enough to last.

4. Implement dynamic decision-governance

  • Decision science generates an audit trail that can be used to address any challenges that outcomes face. Through quantifying each component and recording who voted for what, it is possible to document why a preference was made with no extra admin. Read how one client replaced a 200-page report nobody would ever read with an AHP model.
  • Stress testing a decision helps create readiness in case things change, as they so often do. For close calls, apply Sensitivity Analysis to understand what happens as criteria weights flex, or build alternate weight sets for “What If” outcomes (see the sketch after this list).
  • Reduce risk through identifying potential weak spots. Identify key learnings from the losing choices, where they scored better than the winner. Better still, make acceptance of the preferred choice contingent on improvements in their proposal that will enhance the score and grow its margin of victory.
  • Develop a culture of organizational accountability that asks: “did our choice deliver against the selection criteria?” Creating this level of scrutiny will not only identify areas that need escalation but will also add discipline to how the next selection process runs.
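
Here is a minimal sketch of a simple sensitivity check: flex one criterion’s weight and watch whether the winner changes. The weights and scores are invented, and a full tool would sweep every criterion, but the principle is the same.

```python
# A minimal sketch of a sensitivity check; all numbers are invented.
criteria_weights = {"value_for_money": 0.5, "usability": 0.3, "support": 0.2}
scores = {  # per-criterion scores (0-1) for each alternative
    "Vendor A": {"value_for_money": 0.8, "usability": 0.4, "support": 0.5},
    "Vendor B": {"value_for_money": 0.5, "usability": 0.8, "support": 0.7},
}

def winner(weights):
    """Return the alternative with the highest weighted score."""
    totals = {a: sum(weights[c] * s[c] for c in weights) for a, s in scores.items()}
    return max(totals, key=totals.get)

# Flex 'usability' from 10% to 50%, re-scaling the other weights so the
# total stays at 1, and see whether the preferred vendor flips.
for w in (0.1, 0.2, 0.3, 0.4, 0.5):
    rest = (1 - w) / (criteria_weights["value_for_money"] + criteria_weights["support"])
    flexed = {"usability": w,
              "value_for_money": criteria_weights["value_for_money"] * rest,
              "support": criteria_weights["support"] * rest}
    print(f"usability weight {w:.0%}: winner = {winner(flexed)}")
```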


5 (MORE) REASONS WE SHOULD TALK

AHP sounds like the solution to the challenges my organization is facing. Do I really need a separate piece of software to do it?

  1. AHP can be built in Excel, but it comes with multiple hidden costs that mean it is rarely a good idea to do so. Read our blog on the dangers of the magic spreadsheet to learn more, or just trust us – it’s almost never a good idea and certainly isn’t “free”.
  2. Easy-to-use software. We believe collaborative decision making should be the choice for every forward-thinking leader, not just AHP gurus. No PhD? No problem. See what our customers say about us.
  3. The Kanban automates your "decision process". Use it for gating factors, data collection & sign-off to organize decision flow & analyze results. Make your decision a process, not an event.
  4. Decision Log provides a robust audit trail. Track votes, data entry and approval stages that make all steps in the process accountable and transparent. Combine with easy-to-build visualizations to make quality governance achievable.
  5. Build multiple AHP models in one process. Differentiate Value and Delivery Risk for example. This may sound geeky, but it's extremely powerful.