

Measuring the Effectiveness of a Feature

There are two possible outcomes: if the result confirms the hypothesis, then you’ve made a measurement. If the result is contrary to the hypothesis, then you’ve made a discovery.
Enrico Fermi

There are four key factors that determine how effective any feature is in achieving its design objectives. They are: Discovery, Comprehension, Execution and Performance. The combination of these factors will give you a very good idea of how useful (or not) a feature is once it has been deployed. I will explain each of these briefly.

Discovery is a measure of how easy or difficult it is to learn of the existence of a feature. Simply put, do your users even know that the application has certain capabilities? At first glance that sounds like an outrageous question. If you think of a feature as the ability to Print or Save data, then it probably is. But if you think of a feature as the ability to permit or disallow the sale of specific configurations of a system in different cities or municipalities, then the notion of Discovery is no longer absurd. Practically every enterprise application out there is chock full of these kinds of features that most users have never heard of, let alone used.

Comprehension means the ability of users to understand HOW to use a given feature. Continuing with the example above, most users can easily figure out how to Save or Print. But the odds are not very good that they can figure out how to manage their product catalog across different cities without some form of assistance such as training, a user manual or online tutorials.

Execution refers to the ease with which users are able to perform the necessary steps to execute the feature on an ongoing basis within a predefined unit of time. Just because they have finally figured out HOW to do something does not automatically mean they can perform those tasks easily, day in and day out, without assistance.

Performance refers to how responsive the system is to user input, data loads and screen refreshes. This is the final piece of the puzzle and comes into play once the user has successfully navigated the previous three steps. If users have to take a coffee break while waiting for the application to do something, they are likely to seek alternatives to the feature, even if they know every step involved in executing it.

The combination of these factors determines how effective a feature is in the real world. First, assign scores to each factor on a scale of 1 to 10. Then simply multiply these numbers to get an overall score. Our Print example may get a perfect score of 10,000 (10 x 10 x 10 x 10): everyone knows it exists and where to find it (Discovery), knows how to use it (Comprehension), can use it repeatedly without any problems (Execution) and gets printed copies as fast as the printer can churn them out (Performance). But the product management feature may score a miserable 120 (1 x 3 x 5 x 8): only 1 in 10 users know it even exists (Discovery), about 3 in 10 could figure out how to use it (Comprehension), about half of those who used it could do so repeatedly (Execution), and the feature was very responsive when it was used (Performance).
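To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The function name and the range check are illustrative choices, not part of any particular tool.

```python
def feature_effectiveness(discovery, comprehension, execution, performance):
    """Multiply the four factor scores (each 1-10) into an overall
    effectiveness score ranging from 1 to 10,000."""
    scores = {"Discovery": discovery, "Comprehension": comprehension,
              "Execution": execution, "Performance": performance}
    for name, score in scores.items():
        if not 1 <= score <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {score}")
    return discovery * comprehension * execution * performance

print(feature_effectiveness(10, 10, 10, 10))  # Print example -> 10000
print(feature_effectiveness(1, 3, 5, 8))      # Product management example -> 120
```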

We want a high score for the features we develop when measured against our user community. Simple surveys, questionnaires or basic observation can give us concrete numbers for each of the factors. As long as the samples are statistically significant and representative of the user community, we can figure out these scores very quickly.

The scores themselves provide two very useful and actionable pieces of information. First, how effective is the feature as a whole? A score of 4,000 or higher (roughly an average of 8 across all four factors) typically indicates a feature that is working as designed. A lower score indicates a feature that needs improvement to become more effective and useful.
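As a rough illustration of that cutoff (the 4,000 threshold comes from the paragraph above; the function itself is just a sketch):

```python
def is_working_as_designed(overall_score, threshold=4000):
    """Return True when the overall score (the product of the four
    factor scores) meets the 4,000 cutoff described above."""
    return overall_score >= threshold

print(is_working_as_designed(10000))  # True: the Print example
print(is_working_as_designed(120))    # False: the product management example
```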

Second, the individual scores for each factor give us areas to focus on for improvement. For example, a very low Discovery score can be rectified with education, awareness building, UI changes or menu modifications. A low Performance score would indicate the need to upgrade the hardware, the network or the underlying software architecture.
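One way to turn the per-factor scores into a prioritized improvement list is sketched below. The Discovery and Performance remedies come from the examples above, the Comprehension entry echoes the earlier discussion of training and tutorials, and the Execution entry is purely illustrative.

```python
# Remediation hints per factor. Discovery and Performance mirror the examples
# above; Comprehension echoes the training/manual/tutorial discussion;
# Execution is an illustrative guess.
REMEDIES = {
    "Discovery": "education, awareness building, UI changes or menu modifications",
    "Comprehension": "training, a user manual or online tutorials",
    "Execution": "streamlining the steps users must perform",
    "Performance": "upgrading the hardware, network or software architecture",
}

def improvement_priorities(scores, cutoff=8):
    """List factors scoring below the cutoff, weakest first, with a suggested remedy."""
    low = [(value, name) for name, value in scores.items() if value < cutoff]
    return [(name, value, REMEDIES[name]) for value, name in sorted(low)]

# The product management feature from the example above (1 x 3 x 5 x 8 = 120):
for name, value, remedy in improvement_priorities(
        {"Discovery": 1, "Comprehension": 3, "Execution": 5, "Performance": 8}):
    print(f"{name} scored {value}: consider {remedy}")
```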

By using this simple methodology, it becomes easy to determine which of our features have problems and how best to address them. Without it, we go through endless upgrade cycles governed more by hope, and by fixing whatever is obviously broken, than by facts and by what is genuinely useful to our users.
