When Do I Stop Testing?

Most of the software projects I have worked on come along with either a specification or a set of acceptance criteria. To the product owner, this is a list of what to build. To me, it also looks like a list of test ideas. A tester can take each point in the specification, or each line item in the user story, perform that as a test, and be done. That creates a nice tidy picture of testing that is easy to describe to non-technical people, and is easy to get done in a time box.

As a strategy, it can be a good start, but it’s a bit like a swimming pool with two inches of water – too shallow.

Abandoning the idea that the specification is the start and end of the testing work is an improvement, but it also brings more challenges. For example, without a “list” of “criteria”, how do we know when testing is done – or how to get there?

Why is Complete Testing Impossible?

Imagine a very simple web page to help someone approximate how much shipping will cost. That single page app has the following controls:

  • A set of radio buttons where one is for < 10 miles and the other > 10 miles
  • A check box if the delivery person has to walk up flights of stairs
  • A check box for deliveries that occur on the weekend
  • A submit button to perform the calculation
  • A label to display the calculated rate

How many tests do you need to perform to completely test this page? One? A hundred? A thousand? You could build a truth table and see that there are 8 possible configurations of radio button and checkbox selections (2 × 2 × 2), or 9 if you count submitting with nothing selected at all. But is that really complete testing? I’m sure you have guessed the answer is no, so let’s dig into some other important test ideas.
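
That truth table can be sketched in a few lines. This is just an illustration of the counting argument; the value names are hypothetical stand-ins for the page’s controls, not taken from a real implementation:

```python
from itertools import product

# Hypothetical stand-ins for the shipping page's controls described above.
distances = ["< 10 miles", "> 10 miles"]   # radio buttons: exactly one selected
stairs = [True, False]                     # "flights of stairs" checkbox
weekend = [True, False]                    # "weekend delivery" checkbox

# Every configuration with a radio button selected: 2 x 2 x 2 = 8.
configurations = list(product(distances, stairs, weekend))
print(len(configurations))  # 8; add one more if you count submitting nothing
```

Eight configurations is the easy part; as the rest of this section argues, it is nowhere near the whole testing problem.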

There are a few questions I would ask, starting with design. What happens when a delivery is exactly 10 miles, and is miles the right unit of measure here? What counts as a flight of stairs? I see that there is an up-charge for deliveries made on the weekend, but what about holidays or evenings, or rush deliveries?

Those are design questions, and a lot of them could be answered by learning more about where in the development process this product is, who the person using it is, and what they expect. Next I would attack some ideas that involve interacting with the product.

This is a big one: what platforms are supported? One platform is easy, but this is 2017 and one platform is rarely the answer. On the desktop, we have Windows, macOS, possibly some flavours of Linux, and likely several versions of each. Mobile devices have Android, iOS, and a number of minor players. iOS generally has one main version of the operating system in use at a given time, while Android may have hundreds or even thousands because of the developer community. The product may behave just a little differently in each of these environments.

What about data storage, what about emailing quotes, what if a user spams the submit button — does that create false orders or make the page crash? This isn’t an article on test ideas, but we could come up with some — five or ten even — and just be scratching the surface. Knowing that there are an infinite number of tests that could be performed for any one field, let alone a whole page, we have to decide when to stop.
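
The submit-spamming concern is worth a quick sketch. This is a hypothetical illustration of one way a duplicate-submission guard could work, not the actual page’s behavior; the handler and request ids here are invented for the example:

```python
# Hypothetical duplicate-submission guard: the server remembers request ids,
# so spamming the submit button creates at most one order per request.
seen_request_ids = set()
orders = []

def handle_submit(request_id, payload):
    """Process a submit, ignoring repeats of the same request id."""
    if request_id in seen_request_ids:
        return "duplicate ignored"
    seen_request_ids.add(request_id)
    orders.append(payload)
    return "order created"

print(handle_submit("req-1", {"distance": "< 10 miles"}))  # order created
print(handle_submit("req-1", {"distance": "< 10 miles"}))  # duplicate ignored
print(len(orders))  # 1
```

A test for this idea would click submit rapidly and then check that exactly one order exists, which is a very different test from anything in the truth table.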

Stopping Heuristics

Every time I test a new code change, I also make a decision about when I should stop and move on to the next task. Usually, that decision gets made using a rule of thumb or heuristic, like the ones below:

I’m out of time
Software development always happens on a time scale. Moving from idea to production takes a couple of hours, or a couple of weeks. Arguably, the most popular way to develop software is within a two week sprint. By the time it is Friday on the second week, there is some set of features the team agrees are done, maybe some that have not yet been started, and some that are still being worked on. At that point, we might decide to either move the in-progress work to a new branch and release it in the future, or to release with the knowledge that there might be some problems. But, for now, testing on those changes is done.

In the context of a sprint, the idea of “out of time” gets a little fuzzy. If testing is becoming a bottleneck because it’s taking too long, we might be out of time. If testing this one story is taking so long the entire sprint might be late, we might be out of time. If I am testing the first story of the sprint, I may need to guesstimate how long all the rest of my testing might take, in order to determine whether we are pushing the limits of time availability, or not. We rarely use a stopwatch and math to decide if a story is taking too long. Instead, these things are decided with a conversation, often at a standup meeting.

I’m not finding bugs anymore
Testing is like wringing out a soaked cloth sometimes. You fold it a couple of times and give it a twist, and a lot of water falls out. Fold it once more, twist again, and less water comes out. Fold it one last time, twist, and you might get a drop if you are lucky. Bugs are often easy to find in the beginning of a test session, and more difficult later on until finally you are doing everything you can but not learning anything new or important about that part of the product.

Sometimes, this means it is time to move on. Be careful, though: sometimes this is a sign that your approach isn’t paying off and it is time to try a new one.

My manager said I should stop now
Just like writing code, testing can be a game of managing both interruptions and context switching. I might be in the middle of testing a new code change, when my manager gets a message that support needs help. They have a call from a high profile customer about a data persistence problem. My new task is to get an export of that customer database, build an environment on the version of the product they are using, and configure it so that I can reproduce the problem as accurately as possible.

That manager interruption may be permanent, meaning someone else takes over the testing task while I work on the support issue. Or it may be temporary, and I return to the testing work once my role in the support issue is complete.

I don’t know what to do next
Sometimes, I get a code change, and do everything I can and then start to question the mission. I have tested the text fields and understand what happens when I enter unexpected data, I know that the date fields only accept one format and in a particular date range, but what now? Not knowing what to do next is a good heuristic for a temporary pause. It gives me time to talk with developers to better understand their concerns, product managers to get a clear idea of who the customer is and what they value, or other testers I’m working with to generate more fruitful test ideas. If nothing comes from all of that, maybe it is time to move on to the next task.

The software isn’t ready for testing yet
Not that often, but every once in a while, I will get a product that is either dead on arrival, or has so many problems that testing it is like stumbling across a gravel beach in sandals. Sometimes in these cases I will decide to pause testing temporarily and return that feature to the developer so they can make some fixes. This is strictly a matter of efficiency when there are easy-to-find, low-hanging-fruit bugs. Each bug discovered represents time consumed: the bug has to be investigated, fixed with a code change, merged into a new build, and tested again. When I decide to stop testing here, it is because the developer can quickly and efficiently find and fix bugs like forms that don’t submit, or a decimal value in a discount field that throws a JavaScript error.

I used the word heuristic, a rule that doesn’t always work, in the title of this section on purpose. We might stop testing for any one of these reasons, or for several at once, but there are also scenarios where we can break through them. For example, if you run out of time in the regular schedule but are still finding important problems in the software, your manager might be OK with giving you some more time. The more reasons you encounter at once (I’ve run out of time, I’m not finding any more bugs, and management said to stop) the clearer it is that you can stop testing.

Building a Test Approach

Knowing that there is an infinite amount of testing to be done, and also that it probably has to end soon, we have to select our test techniques and approaches carefully. It is hard to give specific ideas here about where to take your approach, but I can point to two areas where people often get stuck.

  1. Too much focus on things that aren’t testing: The role of a tester is filled with a lot of work that doesn’t involve interacting with new software: an assortment of meetings, email, documentation, waiting for the next build, and sometimes writing code. Teams that aren’t careful can cross a threshold where they spend more time not testing than testing. Sometimes these activities are helpful and teach something new about the software; sometimes they are done because “we have always done that”. Weeding out the aspects of your process that aren’t helping you learn about the software you are testing can be a good exercise.
  2. Leaning almost exclusively on one technique: Domain testing is arguably the most popular test technique in the world. We use this strategy to analyze variables, text fields on a web page for example, and decide which values we need to test and which values might not uncover anything interesting. It is easy to get in the habit of entering some data into a field, observing what happens, and then declaring testing is done. There are a wide variety of test techniques out there, each designed to teach specific lessons about your software. Get friendly with them, learn how to use a technique and how it might be useful at a particular point in time. This can help you become a more effective tester in the small amounts of time that we usually get.
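
To make the domain-testing idea above concrete, here is a minimal sketch. The field name, partitions, and representative values are assumptions invented for illustration (imagine the shipping page accepted a free-form distance instead of radio buttons), not a prescription:

```python
# Hypothetical equivalence partitions for a "delivery distance" text field.
# Domain testing picks a few representative values per partition rather than
# attempting the infinite set of possible inputs.
partitions = {
    "valid, short haul": ["0.1", "5", "9.9"],
    "boundary at 10": ["9.99", "10", "10.01"],
    "valid, long haul": ["11", "500"],
    "invalid": ["-1", "abc", ""],
}

def representatives(partitions):
    """One value per partition: a small test set with broad coverage."""
    return {name: values[0] for name, values in partitions.items()}

for name, value in representatives(partitions).items():
    print(f"{name}: test with {value!r}")
```

The point is not this particular partitioning, but the habit: name the regions of the input domain, then spend your limited time on one or two values from each.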

There is always too much testing to be done, and not nearly enough time to do it all. Having some ideas about what it means to be done, or when you might want to take a temporary pause can be helpful. Pair that with making your testing more powerful in the time that is available, and you’re on the right track.

This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.
