If you frequent the blog, you have probably heard that we at ArgonDigital recently published a tool study. The second phase of this study evaluated 22 requirements management tools against 207 criteria, grouped into 8 categories. The breakdown of criteria is as follows:
| Category | Number of Criteria |
| --- | --- |
To me, these numbers make sense. There is little a tool can do to actually help you elicit requirements, leading to the “Elicit Requirements” category having very few criteria. Likewise, there is a lot that a tool can do to help you plan, analyze, and manage requirements.
I wanted to look at the average score of each criterion within these categories, using the scale from the study:
| Score | Meaning |
| --- | --- |
| 0 | Cannot do in tool |
| 1 | Can do it, but there’s a manual workaround |
| 2 | Can do it without any workarounds, but it’s still not that easy to use |
| 3 | Can do it in the tool and it’s user friendly |
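Computing a category average is straightforward: average the 0–3 scores of every criterion in the category. A minimal sketch, using made-up scores purely for illustration (the category names are from the study, but the numbers are not the study's actual data):

```python
from statistics import mean

# Hypothetical per-criterion scores on the study's 0-3 scale.
# These values are illustrative only, not the published results.
scores = {
    "Elicit Requirements": [3, 3, 2],
    "Analyze Requirements": [2, 1, 0, 2, 1, 3, 1],
    "Manage Requirements": [1, 2, 0, 1, 2],
}

# Average each category's criterion scores.
averages = {cat: round(mean(vals), 2) for cat, vals in scores.items()}

for cat, avg in averages.items():
    print(f"{cat}: {avg}")
```

With real data, each category's list would hold one score per criterion per tool, but the averaging step is the same.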
I hypothesized that the categories with fewer criteria would have lower category averages, because most tools would concentrate their functionality in the categories where a tool is more likely to assist. Looking at the results, however, this is not the case.
As you can see, these findings actually go against what I originally thought would happen. “Analyze Requirements” and “Manage Requirements” scored the lowest of the categories, while “Elicit Requirements” and “Misc” scored relatively high.
The reason for these scores is simply that the more tool-relevant categories had more criteria to evaluate, and therefore more opportunities for low scores. What I take from this, though, is that the tools seemed to adequately fulfill the criteria we identified for the less tool-relevant categories.
Download your free copy of the Evaluation Report.