Prioritizing and identifying requirements that get developed in a release cycle can be a tricky proposition. It is one of the most important things we do as Product Managers. It is also one of the most challenging.
Most organizations use some means of categorizing requirements. Two common schemes I have seen are Must Have / Important / Nice to Have and High / Medium / Low. There are far better ways of prioritizing requirements, but the purpose of this post is to deal with another level of complexity altogether: how do we decide which functional unit's or Department's features get built on a large-scale development project that spans multiple groups and functional areas within a company?
I worked on a very large and complex implementation of factory-control software where we had to confront and overcome this very problem. The software being developed would essentially control all the manufacturing activities of the factory. As such, the application had functionality for all the different aspects of manufacturing – product movement between processing machines, product storage in the factory, maintenance of equipment, sampling of output, manufacturing processes, manufacturing instructions for each step in the process and so on.
There were well-established Departments that handled each of these different aspects of the manufacturing process. For example, there were Departments for Maintenance, Quality Control, Materials, Inventory, Production and so on. Each Department had its own specific set of requirements – functionality it needed from the application to perform its specific and specialized tasks.
When deciding which features to develop for a given release, we encountered the following problems:
1. A prioritized list of features (across all Departments) was not very useful since we ended up with too many critical features that made prioritization for a release extremely difficult and in many cases meaningless.
2. Critical dependencies that existed between Departments were missed. For example, developing some sampling functionality was quite useless unless certain manufacturing instructions and material movement capabilities were already in place.
3. The sum of individual parts did not quite add up to a desired total of functionality. This was largely because critical dependencies between Departments were missed.
4. Individual Departments graded the success or failure of development efforts based on the “quantity” of features they got in a release and not how the overall manufacturing process as a whole fared.
5. The political wrangling, infighting and shenanigans got totally out of control as each Department tried to get more features into each release, regardless of whether it made sense to the overall operations or not.
6. The overall development process was perceived by all the users as political, arbitrary and lacking in transparency.
We solved the problem by using a Group methodology to decide the features that got included in a release. The way the teams were constituted and the manner in which they functioned are detailed below.
1. Every Department for which functionality was being developed was included in the Core Team.
2. Each Department was asked to provide three members to the Core Team – one Manager and two technical members who represented the Department. These members were authorized to speak and vote on behalf of their Departments. Collectively, these individuals were referred to as the Core Team.
3. Attendance at the Core Team meetings was not restricted to the Core Team members. Any number of attendees from each Department were permitted to attend. The only restrictions were as follows:
A. Only the Core Team members were allowed to speak on behalf of their respective Departments.
B. Any amount of consultation was permitted between the Department representatives and attendees, so long as they did not disrupt proceedings.
4. Senior Managers, Vice Presidents and other senior executives were explicitly excluded from the Core Team. The output of the Core Team was later presented to them for final review and approval. They were welcome to attend any of the Core Team meetings but could not be one of the three approved participants of the meetings who were authorized to represent their Department.
The Ground Rules
1. Each Department had one vote in determining the priority of requirements.
2. Each Department's vote carried the same weight as every other Department's.
3. Each Department was required to vote on the prioritization of all requirements, including its own.
4. The decisions of the Core Team were binding on all Departments. The sole exception would have been if Senior Management rejected the Core Team's proposal and ordered a change in priorities. (Incidentally, this never happened.)
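These ground rules amount to an equally weighted ranked vote. The post does not specify the exact ballot mechanics, so here is a minimal sketch assuming a simple Borda-style tally: each Department submits one ranked ballot covering all Departments (including its own), and every ballot carries identical weight. All Department names and the scoring scheme are illustrative.

```python
from collections import defaultdict

def rank_departments(ballots):
    """Aggregate equally weighted ranked ballots into a priority order.

    ballots: dict mapping each voting Department to its ranked list of
    Departments, highest priority first. Every Department ranks all
    Departments, including its own, and every ballot counts the same.
    """
    scores = defaultdict(int)
    for ranking in ballots.values():
        n = len(ranking)
        for position, dept in enumerate(ranking):
            scores[dept] += n - position  # top slot earns the most points
    # Higher total score means higher priority in the release
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative ballots from three Departments
ballots = {
    "Maintenance": ["Production", "Materials", "Maintenance"],
    "Materials":   ["Materials", "Production", "Maintenance"],
    "Production":  ["Production", "Materials", "Maintenance"],
}
print(rank_departments(ballots))  # ['Production', 'Materials', 'Maintenance']
```

Note that each Department votes on its own position too (ground rule 3), which is what keeps the tally honest: a Department that ranks itself first on its own ballot still only contributes one vote out of many.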
How It Worked
With the Core Team in place, we prioritized requirements for a release in the following manner.
We first prioritized Departments for a release, and then prioritized the specific requirements that would be implemented. This was a key change from the way we had functioned in the past. In prior releases, all Departments were in theory given equal weighting and the requirements prioritized for development; in practice, the number of features each Department got determined its effective priority in a release. We decided to take the politics and guesswork out by making Department prioritization the first explicit decision the Core Team made for each release.
The prioritizing of Departments was not arbitrary. The first pass at prioritizing Departments for a release was done by the System Architect. He took inputs from Senior Management to ensure the list aligned with the Business Goals and Objectives. For example, if Management had defined "Reduction of Scrapped Materials" as a key initiative going forward, Sampling would move to the top of the Department list for the next release.
The Core Team was then provided the list and the rationale behind it, and given a chance to vote on it. Each Department was given 15 minutes to make a case for why it should stay where it was slotted, move up or move down. (Yes, believe it or not, once the process was locked in, Departments actually were willing to move down and were not shy about saying so!) After the presentations were done, the Team voted on each position on the list and decided which Department would be slotted in where. There were a total of 8 Departments, and if you made it into the top 5, you were guaranteed to get a meaningful number of features in a release. The bottom 3 invariably got features too, but their features were lighter and typically done to ensure that no dependencies with other Departments were missed.
Once we had decided on the Departmental priorities, each Department provided its own prioritized list of features. We usually restricted these to about 5 to 7 per Department for the first pass and iterated until we felt we had no more development cycles left in the release. In presenting its list, each Department had to justify why each requested feature was important and how it aligned with management priorities. Votes were taken immediately and superfluous features were eliminated quite efficiently. We typically emerged with a list of candidate features for a release within 2 or 3 meetings. In all these meetings, a few representatives from Development were always on hand to advise the team on the effort and difficulty of implementing certain features, so that the final list of required features was realistic.
This prioritized list was provided to Development for a final sanity check on time, manpower and other resources – whether it was feasible to execute within a release cycle. Based on their feedback, some additional fine-tuning was done, adding or removing features, and the final list was generated. This final list was voted on by the Core Team and submitted to management for final approval.
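The selection logic the Core Team applied – walk the list in priority order, pull in cross-Department dependencies first, and stop adding features once the release cycle is full – can be sketched in a few lines. We used no such tooling; this is just an illustration, and the feature names, effort units and capacity number are all hypothetical.

```python
def select_features(features, capacity):
    """Pick features for a release in priority order within an effort budget,
    pulling in any not-yet-selected dependencies along with each feature.

    features: list of dicts in Core Team priority order, each with
    'name', 'effort', and 'deps' (names of features it requires).
    capacity: total development effort available in the release cycle.
    """
    by_name = {f["name"]: f for f in features}
    selected, remaining = [], capacity

    def cost_with_deps(f, seen):
        # Effort of a feature plus all dependencies not already selected
        if f["name"] in seen or f["name"] in selected:
            return 0
        seen.add(f["name"])
        return f["effort"] + sum(cost_with_deps(by_name[d], seen) for d in f["deps"])

    def take(f):
        # Select dependencies before the feature that needs them
        for d in f["deps"]:
            if d not in selected:
                take(by_name[d])
        selected.append(f["name"])

    for f in features:
        if f["name"] in selected:
            continue
        cost = cost_with_deps(f, set())
        if cost <= remaining:
            take(f)
            remaining -= cost

    return selected

# Hypothetical example: sampling is useless without manufacturing
# instructions, echoing the missed-dependency problem described earlier.
features = [
    {"name": "sampling",         "effort": 3, "deps": ["mfg-instructions"]},
    {"name": "mfg-instructions", "effort": 2, "deps": []},
    {"name": "maintenance-log",  "effort": 4, "deps": []},
]
print(select_features(features, capacity=6))  # ['mfg-instructions', 'sampling']
```

The point of the sketch is the ordering: because dependencies are costed and selected together with the feature that needs them, a high-priority feature can never enter a release without its prerequisites, which is exactly the failure mode the old per-Department process kept producing.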
Once we instituted this method, we saw the following benefits.
1. Features were implemented that made sense to the whole and not just individual parts of the overall application. These features were in alignment with management objectives and priorities.
2. A significant reduction in the number of missed dependencies across Departments.
3. A dramatic improvement in the satisfaction with the overall process by which features were prioritized and implemented for a release.
4. Development deliveries, quality and schedules improved. The features were frozen and did not change unless there was some unexpected business or technical development that dictated a change in features and schedules. These were for the most part minimal and when they did occur were always accompanied by an adjustment to the schedules.
5. A better quality product, since every Department was better able to plan the time and availability of its key resources to support the requirements and development process.
6. Higher quality requirements, since there was a much sharper focus on what was needed and what was going to be delivered.
The above methodology can be replicated easily and successfully in complex development environments where key stakeholders span different functional areas in the company.