Ratio of Developers to Analysts in Agile

Conventional wisdom generally holds that the ratio of developers to analysts in an Agile framework is about 4:1 or 3:1. In essence, one analyst can support the work of three or four full-time developers on a project. For Waterfall projects, the ratio is higher, typically around 8:1. In practice, I have found the Agile ratio to be accurate for development work, but it tends to fall apart once we take testing into consideration.
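
To make those ratios concrete, here is a quick back-of-envelope sketch; the 12-developer headcount is an illustrative assumption, not a recommendation.

```python
import math

def analysts_needed(developers: int, devs_per_analyst: int) -> int:
    """Back-of-envelope analyst headcount for a given developer count."""
    return math.ceil(developers / devs_per_analyst)

developers = 12  # hypothetical headcount, for illustration only

print("Agile (3:1):    ", analysts_needed(developers, 3))  # 4 analysts
print("Agile (4:1):    ", analysts_needed(developers, 4))  # 3 analysts
print("Waterfall (8:1):", analysts_needed(developers, 8))  # 2 analysts
```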

In a traditional Waterfall model, the requirements are done up front and handed off to the development team. While development is underway, there is a hiatus of sorts for the analyst, and this time has typically been used to bring the testing team up to speed: detailed review sessions are conducted, data needs are defined, test data is created, and automated scripts are developed, so the team is ready to begin testing by the time development is done. However, this luxury of time and preparation is not available in Agile.

When working on the two- or three-week sprint cycles common in many Agile shops, I find there is tremendous strain on both the test teams and the analysts supporting them. Simply put, there is not enough time for the analyst to support the test team adequately. While the sprint is underway, the analyst is doing two things that are critical to development. First, they are actively supporting ongoing development, working closely with the developers to answer questions and flesh out details. Second, they are working on requirements for upcoming sprints to keep the backlog healthy and the wheels turning. What little time is left over is devoted to testing.

In practice, I find the whole process disjointed, with the test teams getting short shrift. They end up with the bare minimum of support and input from the analyst. The problem is particularly acute in teams transitioning from Waterfall to Agile: the test teams are used to a level of analyst support that is suddenly and drastically reduced, yet the expectations for test coverage are not reduced. In essence, they are being asked to do the same level of testing with significantly less analyst support. This has resulted in more problems late in the release cycle as launch approaches and, ironically, a lot more fire drills than I have experienced on Waterfall projects.

Agile is very developer-friendly but puts tremendous strain on analysts and testers. The problem becomes much more acute with complex applications that have multiple dependencies and a legacy code base. While unit testing a specific piece of functionality is relatively straightforward, understanding all the upstream and downstream implications becomes extremely challenging under time constraints. The problem is further exacerbated when the analyst can only give the test teams minimal support.

So how do we get around this problem? There are a couple of approaches worth experimenting with. The first is to consciously increase the length of the sprints to accommodate testing needs. Adding a week (an increase of 33 to 50 percent, depending on the starting length) goes a long way toward freeing up time for the analyst to adequately support testing while still tending to the needs of three or four developers on a team. For example, if the initial plan called for two-week sprint cycles, move to three-week cycles without increasing the scope of each cycle. This means a lower development velocity overall, but it gives adequate time for better testing. In the long run, I believe this leads to a better final product.
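
As a rough illustration of that trade-off, the sketch below compares a two-week and a three-week sprint with fixed scope. The scope and hour figures are assumptions chosen for illustration, not measurements from any real team.

```python
# Rough arithmetic behind the longer-sprint option. All figures below are
# assumptions for illustration, not measurements.
SPRINT_SCOPE_POINTS = 30       # scope held constant per sprint
ANALYST_HOURS_PER_WEEK = 40

def velocity_per_week(sprint_weeks: int) -> float:
    """Story points delivered per calendar week when scope stays fixed."""
    return SPRINT_SCOPE_POINTS / sprint_weeks

def test_support_hours(sprint_weeks: int,
                       dev_support_hours: int = 50,
                       backlog_hours: int = 25) -> int:
    """Analyst hours left for test support after supporting developers
    and preparing requirements for upcoming sprints."""
    return ANALYST_HOURS_PER_WEEK * sprint_weeks - dev_support_hours - backlog_hours

for weeks in (2, 3):
    print(f"{weeks}-week sprint: "
          f"{velocity_per_week(weeks):.0f} points/week, "
          f"{test_support_hours(weeks)} analyst hours for test support")
# 2-week sprint: 15 points/week, 5 analyst hours for test support
# 3-week sprint: 10 points/week, 45 analyst hours for test support
```

Velocity per week drops, but the analyst's slack for test support grows from almost nothing to a meaningful block of time.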

A second approach would be to have analysts who are dedicated to supporting the test teams. These analysts can work across multiple sprint teams, but their sole focus is test support. From practical experience, I think one such analyst can support the work of two or three sprint teams. Because they look at the overall solution from a testing perspective, they are better positioned to examine interlocks, dependencies, and impacts across the board. They can also give the test teams the time and attention they need to create test data, understand the scope of testing, and develop automated scripts.

In short, I am advocating that we treat testers the same way we treat developers in an Agile framework: size the analyst effort needed to support testers and staff accordingly. By focusing solely on the ratio of developers to analysts in our staffing models, I believe we are missing a significant chunk of effort and unintentionally introducing risk into our projects.
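
A minimal sketch of what such a staffing model might look like, combining the developer-to-analyst ratio with a dedicated test-support analyst covering two or three sprint teams. The team counts and default ratios are assumptions drawn from the discussion above, not a prescription.

```python
import math

def staffing_model(sprint_teams: int,
                   devs_per_team: int = 4,
                   devs_per_analyst: int = 4,
                   teams_per_test_analyst: int = 3) -> dict:
    """Size analyst effort for development support and test support
    separately. All ratios here are illustrative defaults, not rules."""
    developers = sprint_teams * devs_per_team
    dev_facing_analysts = math.ceil(developers / devs_per_analyst)
    test_facing_analysts = math.ceil(sprint_teams / teams_per_test_analyst)
    return {
        "developers": developers,
        "dev-facing analysts": dev_facing_analysts,
        "test-facing analysts": test_facing_analysts,
    }

print(staffing_model(sprint_teams=6))
# {'developers': 24, 'dev-facing analysts': 6, 'test-facing analysts': 2}
```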
