This is a guest post by Erik Dietrich.
The answer is exactly 23 person-days.
When you clicked on the title to read more, I’m sure you didn’t think the answer would be this simple. And, of course, it isn’t. But I’ll bet at least a tiny part of you was hoping that there might actually be some reductionist, magic answer that you could take to heart.
Well, I can’t offer you that, but I can offer you business insight into what “enough” means in terms of quality assurance.
Quality Assurance and the “Enough” Conundrum
One of the hardest things for me during my school days was a different version of the question “how much is enough?” Back then, though, it related to studying for exams, like midterms.
What did it mean to be “ready” for an exam? Would an hour of study suffice? Five? Twenty? How would I know when I could stop?
In theory, I could spend every waking hour between the start of the semester and the midterm studying. So there was always more that I could do to be prepared.
So it goes with testing and releasing software. But my hours as a poor student were a much scarcer resource than the thousands or millions of dollars an IT department can throw at QA for a release. How much is enough, in terms of money, people, time, and effort?
There’s no magic figure. You could put your organization squarely in the red and still have no guarantee of an issue-free release. And worse, you may find your testing timeline cut short by delays in development. All of this uncertainty makes the issue of “enough” incredibly nebulous.
Backlogs to the Rescue
As with students and midterms, the solution isn’t one of extremes and absolutes. Instead, the solution is one of probabilities and pragmatism. You do your best within a window, and then you simply accept some uncertainty and risk. Your fate will be tied to the vagaries of an uncertain future.
But this isn’t to say that the answer to “how much QA is enough” comes from the empty advice to “be pragmatic.” We can certainly do better than that.
Instead, the answer lies in prioritizing your efforts. Agile software teams maintain a prioritized backlog of user stories. This helps them answer the question “how do we know we’ve made enough software” in a world of uncertain scope.
Do the same thing with your QA efforts. “Enough” becomes “what we’re able to do, given reality,” and you use the prioritized backlog to make sure you’re completing the highest value items within that window.
Here’s what that prioritized backlog might look like, at a high level.
1. Verify Your Mission Critical Functionality to Avoid Complete Faceplants
Do you have a major feature in your upcoming release? One that the company has advertised to the board of directors and to all users alike? The kind of feature that makes or breaks careers?
If you do, this should probably go at the top of the backlog. You want to be absolutely sure that the high-profile centerpiece of the upcoming release does what it says, and that it doesn’t suffer from weird problems or tons of bugs.
More generally, you’ll want to sketch out a top priority plan to make sure that your mission-critical functionality doesn’t underwhelm or completely fail.
2. Verify All Delivered Functionality to Avoid Unreliability
With a plan in place to prevent a complete disaster, you might want to move on to verifying all new functionality. After all, delivering the high-profile feature successfully with broken minor features wouldn’t be the worst outcome. But it’s certainly not the best, either.
So as priority 2, you might give the rest of the features the same treatment you gave the headliners. Make sure that everything works as advertised, lacks major problems, and delivers on what you’ve promised. The software should do what it says it’s going to do.
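To make that concrete, here’s a minimal sketch of what automated verification of a delivered feature might look like, in Python with pytest. The “bulk discount” feature and the DiscountService class are hypothetical stand-ins for whatever your release actually promises.

```python
# test_discounts.py -- hypothetical acceptance tests for a new
# "bulk discount" feature; run with: pytest test_discounts.py
import pytest


class DiscountService:
    """Hypothetical stand-in for the feature under test."""

    def price_for(self, unit_price: float, quantity: int) -> float:
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        total = unit_price * quantity
        # Advertised behavior: 10% off orders of 100 units or more.
        return total * 0.9 if quantity >= 100 else total


def test_bulk_orders_get_advertised_discount():
    # The headline promise: big orders cost 10% less.
    assert DiscountService().price_for(2.00, 100) == pytest.approx(180.00)


def test_small_orders_pay_full_price():
    assert DiscountService().price_for(2.00, 10) == pytest.approx(20.00)


def test_invalid_quantity_is_rejected():
    with pytest.raises(ValueError):
        DiscountService().price_for(2.00, 0)
```

The point isn’t these specific assertions; it’s that every advertised behavior gets at least one check that fails loudly if the promise is broken.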
3. Regression Test to Avoid Looking Silly
From there, you might reason, the next thing to do is make sure you’re not breaking other stuff. This is known as regression testing, and it means that you’re checking to make sure implementing your fancy new features didn’t break old ones that people rely upon.
As a business, you might decide that this takes something of a backseat to new functionality in terms of optics. But it’s still not a good look to put out a release and have bug reports flood in about stuff that used to be fine. It’s like taking your car in for an oil change and somehow leaving with a flat tire.
Thus, your backlog might involve verifying all new functionality and then, as the absolute next priority, doing a deep dive into your existing functionality (hopefully in an automated fashion).
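What might that automated deep dive look like? One common approach, sketched below in Python with pytest, is to tag tests that pin down pre-existing behavior and run them as a dedicated suite on every build. The marker name and the legacy_tax_for function here are hypothetical.

```python
# test_regression.py -- hypothetical regression suite; the "regression"
# marker lets you run just these checks with: pytest -m regression
import pytest

# Registering the marker (normally done in pyproject.toml):
#   [tool.pytest.ini_options]
#   markers = ["regression: checks on pre-existing functionality"]


def legacy_tax_for(subtotal: float) -> float:
    """Hypothetical pre-existing behavior the release must not break."""
    return round(subtotal * 0.07, 2)


@pytest.mark.regression
def test_existing_tax_calculation_still_holds():
    # Pin down behavior users already rely on, so a new feature
    # can't quietly change it.
    assert legacy_tax_for(100.00) == 7.00


@pytest.mark.regression
def test_another_existing_calculation_still_holds():
    assert legacy_tax_for(50.00) == 3.50
```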
4. Performance Test to Anticipate Future Use
At this point, your group might feel pretty good about the software’s functionality, both new and existing. So what next, if you have some remaining time and budget?
Perhaps you schedule some performance testing, such as stress and load tests. When you deliver the software, all might be fine and users might rave about the slick new features while continuing to take existing ones for granted. That’s true initially, but will it be true after a week or a month of use?
With performance testing, you can start to have a data-driven answer to that question. How does the system perform after running continuously for a while? What happens when you throw a lot of stress at it?
Answering these questions as part of QA can help you get out ahead of things that might occur later.
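Dedicated load-testing tools are the usual choice here, but as an illustrative sketch, here’s what a crude concurrency probe might look like in Python using only the standard library. The endpoint, request count, and concurrency level are all hypothetical.

```python
# load_probe.py -- a crude, hypothetical load probe: fire N concurrent
# requests at an endpoint and report latency percentiles. Real-world
# load testing usually relies on purpose-built tools instead.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "http://localhost:8080/health"  # hypothetical endpoint
REQUESTS = 200
CONCURRENCY = 20


def timed_request(_: int) -> float:
    start = time.perf_counter()
    with urlopen(TARGET, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start


def main() -> None:
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))
    print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")


if __name__ == "__main__":
    main()
```

Watching how the median and p95 latencies move as you increase CONCURRENCY gives you a first, data-driven glimpse of how the system behaves under stress.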
5. Perform Exploratory Testing to Reveal Unknown Unknowns
If you’ve exhausted all other testing strategies that you’d like to execute, then you’re in possession of the proverbial “good problem to have.” So you might put something at the bottom of the priority list to occupy any remaining time you have. And undirected exploratory testing is a good bet.
This is basically a strategy where you throw knowledgeable QA folks at the software and say, “exercise this in strange, unexpected ways to see what breaks.” Every other form of testing covered to this point starts from a hypothesis that you’re trying to confirm or disprove.
But with exploratory testing, you don’t necessarily set out to look for specific things. This can lead you down beneficial rabbit holes where you find issues you might never have thought to look for. This is, of course, a good way to round out the bottom of a prioritized testing backlog.
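Exploratory testing is fundamentally a human activity, so it doesn’t reduce to a script. That said, property-based testing, a different but complementary technique sketched below with Python’s Hypothesis library, automates a bit of the same spirit: generate strange, unexpected inputs and see what breaks. The normalize_username function is a hypothetical stand-in.

```python
# test_fuzz.py -- property-based fuzzing with the Hypothesis library
# (pip install hypothesis), a tool-assisted complement to manual
# exploratory testing; run with pytest.
from hypothesis import given, strategies as st


def normalize_username(raw: str) -> str:
    """Hypothetical function under test."""
    return raw.strip().lower()[:32]


@given(st.text())
def test_weird_inputs_never_crash(raw):
    # In the exploratory spirit, we assert no specific answer -- only
    # that strange inputs (emoji, control characters, very long
    # strings) don't blow up and basic invariants hold.
    result = normalize_username(raw)
    assert isinstance(result, str)
    assert len(result) <= 32
```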
Only You Can Build Your Backlog, and You Need To
Of course, I’m talking here in very hypothetical terms about how I’d approach testing for a generic, hypothetical organization. Given the specifics of your organization, team, and software, your backlog may wind up looking radically different.
And that’s fine. In fact, that’s kind of the point.
It’s really hard to know how much testing is necessary for any organization. So you need to examine your particular wants and needs and determine which things are most critical for you to verify. As you work your way through your prioritized backlog, you’ll gain confidence along the way, just as you did when studying for midterms. And, at some point, you’ll probably say that you feel pretty good about shipping the software, knowing that you can’t really know.
In the end, the answer to “how much testing is enough” isn’t “23 days.” But if someone did force your hand, saying, “you have 23 days,” at least you’d know exactly how to spend those days. Still not sure what enough is? Check out this blog post on 7 ways to define enough testing.