“‘How did you go bankrupt?’ Bill asked.
‘Two ways,’ Mike said. ‘Gradually and then suddenly.’”
– Ernest Hemingway, The Sun Also Rises
As I’ve written in earlier posts, legacy replacement projects need software requirements that take performance into account. And the Ernest Hemingway quote above pretty much sums up the way I have seen application performance issues overwhelm these projects: gradually, and then suddenly.
It starts off with a little bit of grumbling here and there. A trouble ticket or two after launch about general performance issues. A concerned project sponsor. Worried emails from project champions. A manager asking why her team’s output has dropped suddenly and unexpectedly since the new solution was deployed.
Things fester for a while. Performance concerns start showing up more and more frequently in Project Status Reports. The issue keeps showing up on escalated items seen by senior management, maybe even a Vice President or two.
More questions are asked, and now the CIO is asking them. Project Managers are worried. Frequent status meetings are held. Then suddenly, this “performance issues” thing is a runaway freight train.
While the situation “gradually” builds to a boiling point, how do the project teams react? There is a rough chronology of reactions that starts with ignoring the problem and culminates in an honest-to-goodness, all-out panic.
Sequentially, this is how it plays out:
1. Ignore.
When the first reports of performance problems start trickling in, quick inquiries are made and tests performed to ensure that performance is “acceptable” and the application is not fundamentally compromised. Once the quick and dirty investigations are completed, the issue is gently put to bed after following established “processes.”
Defects with appropriately low priorities are logged. Reassuring emails go out to a few key stakeholders: the team is “hard at work looking at the sporadic reports of poor performance.” Then everyone forgets about the problem. Any new complaints of poor performance are logged and then simply ignored.
2. Rationalize.
But this “performance issues” issue simply will not go away. It keeps showing up every couple of days in the form of some new complaint, “user whining,” and comments reflecting a general feeling of malaise among our user base. Time to move on to rationalization. Rationalization attempts to dismiss the users’ problem as imaginary, merely perceptual, and without real-world consequences.
“Who cares if it takes an extra minute or two for a salesperson to create an order?”
“Are you seriously telling me that taking an extra couple of minutes to review a price approval request is going to break the entire sales cycle?”
“In the grander scheme of things, is it such a horrible thing if policy validation takes an extra 30 seconds whenever an underwriter changes the terms?”
A few more pithy emails are sent out. Eyes are rolled knowingly. A wink. A nod. A reassuring smile.
“Let us all forget about this and move on, shall we? It has no practical significance whatever. Trust me.”
And in the process of rationalization, we further lose our users’ trust.
3. Defend.
Not surprisingly, we find that after all of this, our users do not trust us very much.
One month after launch, users are still complaining. Time to start vigorously defending the solution (i.e., ourselves); time to remind users how great our solution really is.
“A modern application that brings them out of the early 1980s to the here and now.”
“The latest database technology.”
“State of the art programming languages and techniques.”
“A truly scalable solution.”
“Web-enabled.”
“A user interface designed by the gods themselves.”
And yet all everyone wants to talk about is that tiny bit of a performance hit.
“Given all the advantages of the new solution, is that tiny loss of performance not a tradeoff you will make?”
4. Ridicule.
Inevitably, the response to the tradeoff question is a resounding “No.”
Time to take off the gloves. No more Ms. Nice Gal. The project team goes on the offensive, and the missives begin to fly.
“It’s just a few disgruntled users.”
Things just keep getting worse and a bunker mentality sets in. Every barb and insult results in more complaints. Communication between the project team and users pretty much breaks down. The issue keeps getting escalated until it finally attracts the attention of someone pretty high up the food chain.
That is when things really start to go south…
5. Panic.
As a general rule of thumb, users will eventually win every single argument about the suitability of a solution for performing their tasks. Especially when the thing you are trying to replace worked quite well for the better part of a quarter century.
They will win the argument every time.
EVERY SINGLE TIME.
Repeat that to yourself. Memorize it. Remember it. Never, ever forget it.
So after further review, the decree comes down from on high: “Improve Performance. Or else…”
The general feeling of unease that pervaded the project team is now replaced by a thorough, complete, all-encompassing, all-consuming, honest-to-goodness, good old-fashioned collective panic attack.
“We must stop whatever it was we were working on and fix this performance thing once and for all, because if we don’t, we are all…”
Any sequence of actions arising from a personal and collective panic attack is unlikely to lead to a desirable outcome. But otherwise excellent project teams contrive to maneuver themselves into this position for a few reasons:
a. They did not truly understand the implications of seemingly small decreases in performance relative to the product they were replacing.
b. They did not properly define the performance parameters for specific features in their application (see the sketch after this list).
c. They did not manage the problem properly once it was identified.
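To make point (b) a little more concrete: a performance parameter is most useful when it is tied to a specific feature and written down as something checkable. The Python sketch below shows one possible shape for that; the feature names, thresholds, and helper function are hypothetical examples for illustration, not anything prescribed in this post.

```python
import time

# Hypothetical performance targets, in seconds, for specific features.
# The feature names and thresholds are illustrative only.
PERFORMANCE_TARGETS = {
    "create_order": 3.0,       # e.g., a salesperson creating an order
    "price_approval": 5.0,     # e.g., reviewing a price approval request
    "policy_validation": 2.0,  # e.g., revalidating terms on a policy change
}

def check_performance(feature: str, operation) -> float:
    """Time a single operation and compare it against the agreed target."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start

    target = PERFORMANCE_TARGETS[feature]
    status = "PASS" if elapsed <= target else "FAIL"
    print(f"{status}: {feature} took {elapsed:.2f}s (target {target:.2f}s)")
    return elapsed

if __name__ == "__main__":
    # Stand-in for a real feature call, purely for illustration.
    check_performance("create_order", lambda: time.sleep(0.5))
```

A check along these lines, run against the new solution before launch (and ideally compared with timings from the legacy system it replaces), helps turn “performance” from a matter of opinion into a pass/fail conversation.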
In this series of posts to date, I have defined a common legacy replacement project problem, identified reasons why performance in legacy replacement projects is a non-trivial issue, and described how teams typically react to this issue, often with disastrous consequences.
In my next posts, I will touch on how to define requirements that give the development team clear guidelines and performance targets to shoot for, and how to manage these situations properly when they do arise.