Note: this blog builds upon a previous blog published several years ago on allocation and traceability.
We were recently asked the following question: “I’m trying to improve the way our requirements are leveled and allocated. One nuance that we have in our group is that the product has different owners for different components, often in different levels of the assembly hierarchy and/or requirement level (example below). How would you structure the allocation for such a setup? And how would you structure modules in our Requirements Management Tool (RMT) to distinguish ownership? Some of the components have “joint” ownership, which makes the verification discussions challenging. For example:
L0: customer needs (we own)
> L1: product (we own)
>> L2: sub-system_1 (SS1)(we own)
>> L2: sub-system_2 (SS2)(customer owns)
>>> L3 (?): sub-component_3 (SC3) (another vendor owns)
>>> L3 (?): sub-component_4 (SC4) (we own, but is assembled within sub system_2 and supports its total functionality)
Another nuance is that we do not necessarily want the customer or other vendor to know the requirements for sub-component_4 for security/proprietary reasons. In that case, how would we handle allocation “through” sub-system_2? Currently we handle that through an owner attribute in our RMT, and it seems to work well.”
This is becoming a common problem in many organizations, and it is a complex one with no short, easy answer.
Taking a Systems View: Managing Interactions and Dependencies
Many organizations are still practicing systems engineering using 20th Century methods even though we are 19 years into the 21st century. Two key issues of concern are:
- using a document-centric approach to developing and managing requirements, and
- siloing off parts of the architecture to different organizations as if the parts are independent of each other.
In the development of today’s increasingly complex, software-centric systems, these practices are proving problematic. As noted in the question above, allocation is one of the problem areas. Stepping back and taking a systems view, the behavior of the system is really about the interactions between the parts and their inherent dependencies.
As Gentry Lee from the Jet Propulsion Laboratory (JPL) states: “A good Systems Engineer knows the partial derivative of everything with respect to everything!” That means that the systems engineer needs to understand the impact of a change made anywhere in the system on all other parts of the system.
In order for this to happen for today’s complex systems, the system must be managed from the top within a data and information model that ties all data and information together across all lifecycle stages – including requirements.
Traditional approach to allocation
Traditionally, allocating requirements from one level of the architecture to another involves defining requirements at one level and then flowing them down to the next level of the architecture. The receiving parts of the architecture then develop child requirements in response to the allocated parent via either decomposition or derivation. Once the child requirements are written, they are traced back to their parent requirement. In fact, managing allocation and traceability is a primary reason many requirement management tools (RMTs) were developed!
Using the traditional document-centric approach to developing and managing requirements, allocation has historically been managed within allocation matrices. Often these matrices are developed in a spreadsheet and included in the document as an appendix. Using an RMT, relationships between requirements within a system of interest are managed via links between parents and children. In the RMT this also forms a link from the children back to the parent – bidirectional traceability.
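As a minimal sketch of this idea (a hypothetical data model, not any particular RMT’s API), parent–child allocation links with bidirectional traceability might look like:

```python
from dataclasses import dataclass, field


@dataclass
class Requirement:
    """A single requirement with bidirectional parent/child trace links."""
    req_id: str
    text: str
    parents: list["Requirement"] = field(default_factory=list)
    children: list["Requirement"] = field(default_factory=list)

    def allocate_to(self, child: "Requirement") -> None:
        # Creating the downward (parent -> child) link also creates the
        # upward (child -> parent) link: bidirectional traceability.
        self.children.append(child)
        child.parents.append(self)


# An L1 parent requirement allocated to two L2 subsystems (illustrative text)
parent = Requirement("L1-001", "The product shall weigh no more than 100 kg.")
ss1 = Requirement("SS1-001", "Sub-system_1 shall weigh no more than 60 kg.")
ss2 = Requirement("SS2-001", "Sub-system_2 shall weigh no more than 40 kg.")
parent.allocate_to(ss1)
parent.allocate_to(ss2)

# Tracing works in both directions
assert ss1.parents[0].req_id == "L1-001"
assert [c.req_id for c in parent.children] == ["SS1-001", "SS2-001"]
```

Note that each link is stored twice (once on each side), which is exactly what lets a tool answer both “where did this requirement come from?” and “what did this requirement flow down to?”.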
The problem with this approach is that it inherently treats allocations as independent of each other.
A modern view of allocation from a dependency perspective
In reality, if you allocate a functional requirement to several parts, each part has to work with the others to meet the intent of the parent. This often results in an interaction (interface) between the parts that needs to be defined in some sort of interface definition document (Interface Control Document (ICD), data dictionary, Interface Definition Document (IDD), etc.) and agreed to. Interface requirements that invoke those definitions must then be defined and documented for each system involved in the interaction across the interface.
Another type of allocation deals with resources (mass, power, etc.) and performance (accuracy, precision, reliability, time, etc.). In this case the allocations actually form an equation of dependent variables: the parent requirement is on the left of the equals sign, and the allocations are on the right. See the figure below. A change to one variable will most likely require a change to one or more of the other variables. This gets even more complex when a dependency is several levels lower, as shown. With a document-centric, siloed approach to SE, managing these dependencies is very difficult.
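A mass budget is a simple way to see this (the numbers below are purely illustrative). The parent allocation sits on the left of the equals sign, the child allocations on the right, and a change to any one dependent variable forces a change somewhere else:

```python
# Hypothetical mass budget, written as an equation of dependent variables:
#   product_mass >= ss1_mass + ss2_mass + margin
PRODUCT_MASS_KG = 100.0

allocations = {"SS1": 55.0, "SS2": 35.0, "margin": 10.0}


def budget_is_closed(parent: float, allocs: dict[str, float]) -> bool:
    """True if the child allocations still satisfy the parent requirement."""
    return sum(allocs.values()) <= parent


assert budget_is_closed(PRODUCT_MASS_KG, allocations)

# A change to one dependent variable breaks the equation...
allocations["SS2"] = 45.0   # SS2's design matured and grew by 10 kg
assert not budget_is_closed(PRODUCT_MASS_KG, allocations)

# ...so another variable must change to re-close the budget.
allocations["margin"] = 0.0
assert budget_is_closed(PRODUCT_MASS_KG, allocations)
```

The point of the sketch is the dependency: no allocation can be changed in isolation, which is why someone at the parent level must own and manage the whole equation.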
Levels of Requirements: Design Inputs vs. Design Outputs
Another key consideration is levels as pointed out in the question. I have reached the conclusion that organizations often develop too many requirements, especially at the lower levels.
A concept I have recently become aware of (via the medical device world) is that of design inputs and design outputs. Design inputs focus on the “what” rather than the “how”. Design-to “what” requirements are design inputs – they both drive and constrain the design solution. Design outputs include the actual design and all design artifacts including the build-to specifications, drawings, algorithms, code, as well as the system itself. The figure below illustrates this concept:
A key problem with many of today’s requirement sets is the inclusion of design output “hows” in the design input “whats”, i.e., information that belongs below the line (design output) is communicated in the requirements above the line as a design input requirement. This can constrain innovation as well as result in unnecessary changes to the requirements.
The reason I bring this up is that once we have the various design input requirements defined for the system architecture and have made the buy, build, code, or reuse decision, everything else is a design output.
Back to the Example
L0: customer needs (we own)
> L1: product (we own)
>> L2: sub-system_1 (SS1)(we own)
>> L2: sub-system_2 (SS2)(customer owns)
>>> L3 (?): sub-component_3 (SC3) (another vendor owns)
>>> L3 (?): sub-component_4 (SC4) (we own, but is assembled within sub system_2 and supports its total functionality)
I am assuming the parts are being developed from scratch. The approach would be different if one of the lower-level parts already exists, for example L3 SC3 or SC4.
In this example, L1 would own and control the allocations to all the L2 subsystems. L2 SS1 and SS2 would each have their own set of design input requirements addressing the L1 parent requirements allocated to them. Each of the L2 subsystems would define an architecture and then allocate their L2 requirements to L3 sub-components. Design input requirements for L2 SS2 would be allocated to both L3 sub-components (SC3 and SC4). Some of these allocations could be functions; others, performance or resources. Most likely there will be dependencies, as discussed above.
In response to these allocations, child requirements would be developed for both SC3 and SC4. If these sub-components need to work together to meet a parent requirement allocated from SS2, this interaction would be defined, and each sub-component would have corresponding interface requirements pointing to the common definition.
Concept of defining the “boundary”
What is needed is to clearly define the design input requirements for both SC3 and SC4, along with identifying the dependencies and interfaces between them. These two sets of design input requirements define a “boundary” around each sub-component, as if each sub-component were a black box, with no indication of what is happening inside. It is these boundary requirements that the completed sub-component will be verified to.
With this boundary, black-box perspective, everything inside the boundary (lower-level requirements and the design to meet those requirements) can be invisible to those outside the box. Those responsible for each sub-component only have to show compliance with the boundary requirements for their sub-component and can keep all the details inside the box hidden (classified/proprietary). Another advantage of this “boundary” perspective is that, inside the boundary, the organization can use less formal means of communicating requirements (rather than formal “shall” statements), allowing more agile approaches to developing its system of interest.
What if the design inputs need to be kept hidden from the owners of other components of the architecture?
However, if you also need to keep the set of design inputs for one of the components hidden from another vendor or the customer, you would have to identify the dependencies and interfaces between the parts, copy those requirements from both parts, and put them in a separate document managed at the sub-system_2 level.
I can use the International Space Station (ISS) as an example. When an experiment/payload is going to fly on the ISS, an analysis is done of the dependencies and interfaces associated with integrating the experiment/payload with the ISS. The payload has its own set of design input needs and requirements and design outputs, as does the ISS. NASA pulls out (copies) the experiment/payload requirements dealing with any interactions with the ISS, as well as all ISS interface and other requirements affecting the experiment/payload from the ISS documentation, and puts both sets of requirements into a separate, standalone, ISS Program-controlled document. Everything else in either party’s system requirement documents can be kept hidden from the other. The resulting new document is then what is made part of the contract, configuration managed, and verified to.
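The ISS pattern can be sketched as filtering each side’s full requirement set down to only the interface-related requirements and copying those into a shared, standalone document (the flag and record fields below are hypothetical, not NASA’s actual schema):

```python
# Hypothetical requirement records with an "interface" flag marking the
# requirements that deal with interactions across the boundary.
payload_reqs = [
    {"id": "PL-001", "text": "Payload shall draw no more than 500 W.", "interface": True},
    {"id": "PL-002", "text": "Payload proprietary algorithm detail.", "interface": False},
]
iss_reqs = [
    {"id": "ISS-101", "text": "ISS shall provide payload power at 120 VDC.", "interface": True},
    {"id": "ISS-102", "text": "ISS internal design detail.", "interface": False},
]

# The standalone, program-controlled document contains only the copies of
# the interface-related requirements from both sides; everything else in
# either party's full set stays hidden from the other.
shared_document = [r for r in payload_reqs + iss_reqs if r["interface"]]

assert [r["id"] for r in shared_document] == ["PL-001", "ISS-101"]
```

The shared document is then the only artifact both parties see, contract to, and verify against.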
Ownership of Requirements and Their Verification
As indicated in the question, ownership of a single requirement can be managed or identified as an attribute of a requirement in the RMT. I don’t really understand or support the concept of joint ownership. If a parent requirement at L2 is allocated to two sub-components at L3, then the parent requirement belongs to L2 and each child requirement belongs to its L3 sub-component. The organization responsible for each L3 sub-component verifies its child requirements as a precondition to integrating the parts to form L2, and the organization responsible for L2 then verifies the parent requirement.
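A sketch of the attribute-based approach the questioner describes (field names are illustrative): each requirement carries exactly one owner, and verification responsibility simply follows ownership, level by level, with no “joint” ownership needed.

```python
# Each requirement has exactly one owner. "Joint ownership" is replaced by
# a parent owned at L2 plus a distinct child owned by each L3 organization.
requirements = [
    {"id": "SS2-010", "level": "L2", "owner": "customer", "parent": None},
    {"id": "SC3-001", "level": "L3", "owner": "vendor", "parent": "SS2-010"},
    {"id": "SC4-001", "level": "L3", "owner": "us", "parent": "SS2-010"},
]


def verifier_of(req: dict) -> str:
    """The organization that owns a requirement is the one that verifies it."""
    return req["owner"]


# Each L3 owner verifies its own child as a precondition to integration;
# the L2 owner then verifies the parent against the integrated sub-system.
assert verifier_of(requirements[1]) == "vendor"
assert verifier_of(requirements[2]) == "us"
assert verifier_of(requirements[0]) == "customer"
```

With a single-owner attribute like this, filtering the database by owner immediately answers “who must verify what, and in what order” without any ambiguity at verification time.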
In Conclusion
So now we have come full circle. The initial allocations of performance and resources are approximations, subject to change as the design of each part matures. A change to one allocation could result in a change to other allocations, so the owner at each level must manage these changes throughout the development life cycle. Thus, allocations must be managed from the top, whether within a tool (preferred) or within a separate integrated document as discussed in the ISS example above. Whoever is responsible for implementing a requirement is also responsible for verifying that their system of interest meets that requirement.
Everything I talked about above is part of our new systems engineering requirements training. If you would like to hear more about this training, send me an email at info@argondigital.com and I will be happy to provide you with more information.
Comments to this blog are welcome.
If you have any other topics you would like addressed in our blog, feel free to let us know via our “Ask the Experts” page and we will do our best to provide a timely response.