Measuring product manager performance on internal system products


Last fall, Austin PMM hosted a discussion about how to measure product manager (PdM) performance. Much of the conversation centered on products developed for external customers and focused on sales or profit metrics. My experience, however, has mostly been in product management for internal systems, where there are no clear-cut external metrics like sales or profit.

I believe the best way to evaluate internal product development is to consider the major problems that often derail IT projects. Many of these can be tied back to PdM performance, so measuring against them can directly motivate improvement in those areas.
Some of these issues include:

1. Scope creep due to poor project definition and/or control

2. Missed requirements due to poor analysis and/or documentation

3. Requirements change due to poor analysis and/or documentation

4. Low user adoption due to dissatisfaction with solution

5. Lack of ROI due to non-realized benefits

For each of these issues, some related metrics against which to measure PdM performance include:

1. Scope creep – Track the requirements that are added beyond the original project scope.

2. Missed requirements – Track the requirements that are added and still within the original project scope.

3. Requirements change – Track the requirements that are changed and still within the original project scope.

4. Low user adoption – Measure user adoption rates, or satisfaction levels if users have no choice about adopting the system.

5. Lack of ROI – Require the PdM to analyze the costs and benefits of the system before the project starts. At some point after deployment, measure the actual benefit realized, then track the variance between the planned and actual benefits.
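As a rough illustration of the last metric, the planned-versus-actual comparison can be reduced to a single variance figure. The function and the dollar amounts below are hypothetical, a minimal sketch rather than a prescribed method:

```python
def roi_variance(planned_benefit: float, actual_benefit: float) -> float:
    """Shortfall between planned and realized benefit, as a fraction of plan.

    Positive values mean the project under-delivered; negative values mean
    it beat the plan.
    """
    if planned_benefit <= 0:
        raise ValueError("planned benefit must be positive")
    return (planned_benefit - actual_benefit) / planned_benefit

# Hypothetical example: $200k of planned annual savings, $150k realized.
shortfall = roi_variance(200_000, 150_000)  # 0.25, i.e. 25% under plan
```

In practice the inputs would come from the pre-project cost/benefit analysis and a post-deployment benefits review, however your organization runs those.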

For the first three metrics above, the team should agree on a baseline state of the requirements. After that point, an unbiased party could determine whether any added/changed requirements were actually missed, added beyond the original scope defined for the project, or changed after baseline. All of these metrics should be tracked through development and early deployment. These requirement issues should be tracked in an issue tracking system along with other defects.

For each of the metrics noted above, you should set a target allowance as a percent of total requirements. As an example, “The total missed requirements shall be no more than 1% of the total requirements of the project.” You could also take the priority of requirements into account by agreeing on a weighting system up front. If your organization has not tracked these metrics before, avoid setting arbitrary targets. If you set a goal of “less than 1% missed requirements” but your last project was at 25%, you are most likely going to fail.
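To make the target-allowance check concrete, here is a minimal sketch of a priority-weighted missed-requirements rate. The requirement records, the weight values, and the field names are all hypothetical examples, not a standard scheme:

```python
# Hypothetical priority weights agreed on up front in the project.
PRIORITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}

def missed_requirement_rate(requirements: list[dict]) -> float:
    """Weighted fraction of requirements flagged as missed after baseline."""
    total = sum(PRIORITY_WEIGHT[r["priority"]] for r in requirements)
    missed = sum(PRIORITY_WEIGHT[r["priority"]] for r in requirements
                 if r["missed"])
    return missed / total if total else 0.0

reqs = [
    {"id": "R1", "priority": "high",   "missed": False},
    {"id": "R2", "priority": "low",    "missed": True},
    {"id": "R3", "priority": "medium", "missed": False},
]
rate = missed_requirement_rate(reqs)  # 1/6, about 17% by weight
```

The same shape works for the scope-creep and changed-requirements metrics; only the flag being counted differs.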

While the first three metrics above (scope creep, missed requirements, requirements change) motivate getting the right requirements the first time, the fourth metric (user adoption) specifically encourages the PdM to talk with end users when gathering requirements. The final metric (ROI) motivates the PdM to really understand the benefit of the project before signing up for it. Another benefit of tracking ROI is that it encourages a positive relationship between the business and IT, since IT is where most of the project cost comes from.

These metrics all play nicely off each other. If you do the activities to ensure high user adoption, you minimize your risk of missed or changing requirements. However, you do run a higher risk of scope creep because the users will be excited that you are listening to their desires. This risk can, in turn, be mitigated by tracking against ROI as an incentive to keep scope in control.

None of these are perfect metrics; all require significant thought and customization before being applied to an organization. In particular, it is important to realize that PdMs need some level of control (or at least significant influence) over the factors that contribute to these results. For example, they need to be the final decision makers on the benefits, they need to be able to trust the cost estimate, they must be allowed to control scope, and they must be given access to end users. In the end, though, these metrics form a great starting point for a discussion around measuring the value that PdMs can bring to an internally focused development group, and therefore allow an organization to determine a course for potentially increasing that value.
