Analytic Hierarchy Process (AHP) is one of the most popular decision-making methodologies. It is intuitive and easy to use. However, if you try to use it without dedicated software, the underlying math can be challenging, especially as the number of participants, alternatives, and criteria grows.
Over the last few years I have been engaged in developing a number of AHP-based decision-making software packages, including:
AHPproject, which was used by thousands of users around the globe. It was available for free, and I think it may have been one of the most widely used decision-making packages ever. Ironically, its popularity combined with its “free” pricing killed it – we simply didn’t have enough resources to support it.
MakeItRational which is still very popular, especially the “Desktop version.” MakeItRational Desktop delivers a unique way of supporting offline collaboration behind a firewall, which allows you to collect votes from evaluators without sending data to external servers. This level of data security is something our defense-sector customers, among others, love!
Each project brought us tons of feedback about users’ needs. In this article I have collected the features that I believe are “must-haves” for any effective AHP-based decision-making software. AHP software should be much more than just an AHP math calculator.
1. Easy hierarchy building

Building the criteria hierarchy can be a challenge. Before your hierarchy is ready to use, you will probably need to reorganize it a couple of times as the team works through the “creative” process.
Things like “drag and drop” are essential here. Of course, you can draw your hierarchy on a piece of paper and then model it in software, but what if you need to change it later?
Sometimes participants report new criteria during evaluation. Some software will not handle this very well, losing data or forcing you to restart the data collection process.
We sometimes hear of users leaving important criteria out simply because they can't face "restarting" their project. Good software should support such changes easily and carry your already-collected judgments across seamlessly.
2. Reducing the number of comparisons

Pairwise comparisons are at the heart of AHP, but when their number grows, providing them all is time-consuming. AHP software should help you reduce that number.
The total number of comparisons is n(n-1)/2 (where n is the number of compared elements). To get results from AHP, however, this number can be reduced to as few as n-1.
So, for example, if you compare 10 elements:
Total number of comparisons: 10 × (10 − 1) / 2 = 45
Minimal number: 10 − 1 = 9.
You get 9 comparisons instead of 45. That makes a difference: if each judgment takes one minute, you have just saved over half an hour for each participant in your project – nice job! But it's not quite that simple – see the note on consistency below.
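The arithmetic above can be sketched in a few lines of Python. The helper names are hypothetical, and the chain-based weight derivation is just one way to use a minimal comparison set:

```python
def total_comparisons(n: int) -> int:
    """Full set of pairwise comparisons for n elements: n(n-1)/2."""
    return n * (n - 1) // 2

def minimal_comparisons(n: int) -> int:
    """Bare minimum needed to derive weights (a single chain): n-1."""
    return n - 1

def weights_from_chain(ratios):
    """Derive normalized weights from a minimal chain of judgments,
    where ratios[i] says how many times element i outweighs element i+1."""
    raw = [1.0]
    for r in reversed(ratios):
        raw.insert(0, raw[0] * r)
    total = sum(raw)
    return [w / total for w in raw]

print(total_comparisons(10))     # 45
print(minimal_comparisons(10))   # 9
# A is 2x B, B is 3x C  ->  raw weights 6:3:1, normalized:
print(weights_from_chain([2, 3]))  # [0.6, 0.3, 0.1]
```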
AHP software should help you provide the minimum data needed to perform the calculations in the shortest time.
Remember, though, that redundant comparisons are what allow you to check consistency: the more comparisons you provide, the more reliable a consistency check you can perform. If you limit the number of comparisons to the absolute minimum, you won't be able to check consistency at all.
3. Consistency checking

When you, or your evaluators, provide tens of pairwise comparisons, you won't be totally consistent. AHP software should check the consistency of the entered data and warn you if the inconsistency is too high.
Often your evaluators will not be familiar with AHP. They (and everyone else) will need something more than a “you are inconsistent – fix it” message. What can help here:
An inconsistency metric. It is good to know whether your adjustments improve consistency or make it worse. You change the values of comparisons and observe the inconsistency metric; if it is decreasing, you know you are on the right track.
Identification of the most inconsistent comparisons. Imagine that you have provided 45 comparisons and got an inconsistency warning. You want to fix it, but where do you start? The software should be able to suggest a starting point – the most inconsistent comparisons.
Identification of contradictions. Some comparisons are not just inconsistent but downright contradictory. In most cases these are errors that have to be fixed, but they are very hard to find without support from the software.
Guidance on how to resolve inconsistency. It is possible to guide an evaluator on how to modify comparisons to improve consistency by "suggesting an answer". This becomes a problem, however, if people overuse the suggestions instead of rethinking their judgments. In other words, you don't want people relying on the software's suggestions; you want them thinking it through themselves.
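The helps above rest on a small amount of standard AHP math. Here is a minimal sketch, assuming Saaty's classic consistency ratio (CR = CI/RI, with CI = (λmax − n)/(n − 1) and the usual "warn above 0.10" threshold) and eigenvector-derived priorities; the matrix and function names are illustrative:

```python
# Saaty's random consistency index (RI) for matrix sizes 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def priorities(A, iters=100):
    """Approximate the principal eigenvector (the priority weights)
    of a pairwise-comparison matrix by power iteration."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

def consistency_ratio(A):
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1).
    CR > 0.10 is the usual 'too inconsistent' warning threshold."""
    n = len(A)
    w = priorities(A)
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    return (lam - n) / (n - 1) / RI[n]

def most_inconsistent(A):
    """The pair (i, j) whose judgment deviates most from the ratio
    implied by the derived weights -- a good place to start fixing."""
    w = priorities(A)
    n = len(A)
    def err(i, j):
        implied = w[i] / w[j]
        return max(A[i][j] / implied, implied / A[i][j])
    return max(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda p: err(*p))

# A[0][3] = 1 clashes with the rest of the matrix, which implies ~8
A = [[1,   2,   4,   1],
     [1/2, 1,   2,   4],
     [1/4, 1/2, 1,   2],
     [1,   1/4, 1/2, 1]]
print(consistency_ratio(A) > 0.10)  # True -> warn the evaluator
print(most_inconsistent(A))         # (0, 3) -- revisit this judgment first
```

In production you would likely use a linear-algebra library for the eigenvector, but the idea is the same.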
4. Group evaluation

AHP is often used for collaborative decision making. The biggest value of AHP software comes during evaluation: weighting criteria and "scoring" alternatives against those criteria.
Why? Well, the math behind collecting and reviewing votes is complex, and there is no good way to do it without dedicated software.
People should be able to make their own judgments, and then the software should allow the team to quickly identify areas of disagreement. This process becomes unmanageable very quickly without the aid of software.
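As a sketch of what that math involves: one standard way to combine a team's judgments is the element-wise geometric mean of their comparison matrices (the "AIJ" approach), plus a simple spread measure to flag pairs the team disagrees on. The names and numbers below are made up for illustration:

```python
import math

def aggregate_judgments(matrices):
    """Combine individual pairwise-comparison matrices into one group
    matrix via the element-wise geometric mean (the common AIJ approach)."""
    n = len(matrices[0])
    k = len(matrices)
    return [[math.prod(m[i][j] for m in matrices) ** (1 / k)
             for j in range(n)]
            for i in range(n)]

def disagreement(matrices, i, j):
    """Spread of the team's judgments on pair (i, j), as a max/min ratio.
    Large values mark comparisons worth discussing before moving on."""
    vals = [m[i][j] for m in matrices]
    return max(vals) / min(vals)

alice = [[1, 2], [1/2, 1]]   # Alice: first criterion twice as important
bob   = [[1, 1/2], [2, 1]]   # Bob: exactly the opposite
group = aggregate_judgments([alice, bob])
print(group[0][1])                       # 1.0 -- opposing views cancel out
print(disagreement([alice, bob], 0, 1))  # 4.0 -- but the spread is large
```

Note how the aggregated value alone hides the conflict; this is why the software should surface disagreement explicitly rather than just averaging it away.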
5. Sensitivity analysis
Sensitivity analysis is another important feature. Without it, you simply don’t know how stable your results are. Maybe a small change in priorities would lead to a totally different result?
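A basic sensitivity check can be as simple as nudging one criterion's weight and watching whether the ranking of alternatives flips. This toy sketch uses hypothetical weights and ratings purely to illustrate the idea:

```python
def scores(weights, ratings):
    """Weighted-sum score per alternative; ratings[a][c] is the score
    of alternative a against criterion c."""
    return [sum(w * r for w, r in zip(weights, row)) for row in ratings]

def perturb(weights, c, delta):
    """Shift criterion c's weight by delta, rescaling the other weights
    proportionally so everything still sums to 1."""
    new = list(weights)
    new[c] = weights[c] + delta
    other = sum(w for i, w in enumerate(weights) if i != c)
    for i in range(len(new)):
        if i != c:
            new[i] = weights[i] * (1.0 - new[c]) / other
    return new

weights = [0.6, 0.4]            # criteria priorities
ratings = [[0.8, 0.2],          # alternative A
           [0.3, 0.7]]          # alternative B
base = scores(weights, ratings)                       # A ranks first
shifted = scores(perturb(weights, 0, -0.3), ratings)  # now B ranks first
print(base, shifted)
```

If a shift this small reverses the ranking, the result is unstable and deserves more scrutiny; good AHP software shows this interactively rather than making you recompute by hand.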