If you frequent the blog, you have probably heard that we at ArgonDigital recently published a tool study. The second phase of this study evaluated 22 requirements management tools against 207 criteria, grouped into nine categories. The breakdown of criteria is as follows:
| Category | Number of Criteria |
| --- | --- |
| Plan Requirements | 35 |
| Elicit Requirements | 3 |
| Analyze Requirements | 36 |
| Specify Requirements | 21 |
| Validate Requirements | 15 |
| Manage Requirements | 38 |
| User Experience | 25 |
| Administer Tool | 23 |
| Misc | 11 |
To me, these numbers make sense. There is little a tool can do to actually help you elicit requirements, so the “Elicit Requirements” category has very few criteria. Conversely, there is a lot a tool can do to help you plan, analyze, and manage requirements.
I wanted to look at the average score of the criteria within each category, using the scoring scale from the study (a small example of that calculation follows the scale below).
| Score | Definition |
| --- | --- |
| 0 | Cannot do in tool |
| 1 | Can do it, but there’s a manual workaround |
| 2 | Can do it without any workarounds, but it’s still not that easy to use |
| 3 | Can do it in the tool and it’s user friendly |
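For the curious, here is a minimal sketch of that calculation in Python. The scores below are hypothetical, and the exact aggregation used in the study (per tool, then across tools) is an assumption here; the point is only that each criterion gets a 0–3 score and the category average is the mean of those scores.

```python
from statistics import mean

# Hypothetical 0-3 scores for the 15 "Validate Requirements" criteria,
# as assessed for a single made-up tool. These are NOT figures from the study.
validate_scores = [3, 2, 2, 3, 1, 2, 3, 2, 2, 3, 2, 1, 3, 2, 3]

# The category average is simply the mean of the per-criterion scores.
category_average = mean(validate_scores)
print(f"Validate Requirements average: {category_average:.2f}")
```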
I hypothesized that the categories with fewer criteria would have lower average scores, because most tools would concentrate their functionality on the categories where a tool is more likely to assist. Looking at the results, however, this is not the case.
| Category | Average Score |
| --- | --- |
| Plan Requirements | 2.14 |
| Elicit Requirements | 2.17 |
| Analyze Requirements | 1.96 |
| Specify Requirements | 2.08 |
| Validate Requirements | 2.30 |
| Manage Requirements | 1.96 |
| User Experience | 2.21 |
| Administer Tool | 2.34 |
| Misc | 2.31 |
As you can see, these findings actually go against what I originally thought would happen. “Analyze Requirements” and “Manage Requirements” scored the lowest of the categories, while “Elicit Requirements” and “Misc” scored relatively high.
The most likely reason for these scores is simply that the categories where a tool is most relevant had more criteria to evaluate, so there were more opportunities for low scores. What I take from this, though, is that for the less tool-relevant categories, the tools adequately fulfilled the criteria we identified.
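If you want to put a number on that relationship, a rough sketch is below. It only reuses the figures from the two tables above and computes a simple correlation between criteria count and category average; the study itself does not report this statistic, so treat it as illustrative.

```python
from statistics import correlation  # requires Python 3.10+

# Figures copied from the two tables above, in the same category order.
criteria_counts = [35, 3, 36, 21, 15, 38, 25, 23, 11]
average_scores = [2.14, 2.17, 1.96, 2.08, 2.30, 1.96, 2.21, 2.34, 2.31]

# Pearson correlation between the number of criteria and the average score.
print(f"Correlation: {correlation(criteria_counts, average_scores):.2f}")
```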
Download your free copy of the Evaluation Report.