Tips for Writing Effective Test Cases

This is a guest post by Nishi Grover Garg.

When they’re learning the fundamentals, new testers are hardly ever taught to focus on test case design. The result is testers spending very little time thinking about their tests before diving in: they sketch the bare minimum on a sheet, proceed straight to testing, and end up spending most of their time clicking away at the application.

Writing test cases is an art that requires effective test design techniques mixed with experience, creativity, and thinking outside the box.

Here are some ideas to help you think through and write better, more effective test cases before you begin testing.

Test design techniques

Yes, we know about boundary value analysis and equivalence class partitioning. And we might have thought about some edge cases related to our tests. But how much thought have we given these test design techniques in real projects? 
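
For instance, here is a minimal sketch of how boundary value analysis can be captured directly in a parameterized test. The `validate_age` function and its 18–65 range are hypothetical stand-ins for whatever rule your own application enforces.

```python
import pytest

# Hypothetical rule under test: ages 18 through 65 (inclusive) are accepted.
def validate_age(age):
    return 18 <= age <= 65

# Boundary value analysis: exercise values just below, on, and just above each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) == expected
```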

Decision tables are a good way to represent requirements more clearly, inform the design of the code, and plan tests for every possible combination of conditions. If you look for areas of the application where you can create decision tables, you will be helping the product manager and developers as well as yourself.
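
One way to keep a decision table actionable is to store it as data and drive a parameterized test from it, so every combination of conditions gets exercised. The discount rule below is purely illustrative; substitute your own conditions and expected outcomes.

```python
import pytest

# Hypothetical pricing rule under test.
def discount(is_member, has_coupon):
    if is_member and has_coupon:
        return 20
    if is_member or has_coupon:
        return 10
    return 0

# Decision table: each row is one combination of conditions plus the expected action.
DECISION_TABLE = [
    # is_member, has_coupon, expected_discount
    (True,  True,  20),
    (True,  False, 10),
    (False, True,  10),
    (False, False,  0),
]

@pytest.mark.parametrize("is_member, has_coupon, expected_discount", DECISION_TABLE)
def test_discount_decision_table(is_member, has_coupon, expected_discount):
    assert discount(is_member, has_coupon) == expected_discount
```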

You can also look at creating a state diagram, either for the entire application or for a certain complex part of the application you are trying to test. This may uncover potential problems, unreachable states, or unhandled areas.
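
A state model can also be checked programmatically. The sketch below assumes a made-up set of states for a report job and verifies that every state in the model is reachable from the initial one; the transition map is an assumption you would replace with your system's real states.

```python
# A minimal sketch of a state model for a hypothetical report job.
# The transition map itself is an assumption; replace it with your system's states.
TRANSITIONS = {
    "draft":     {"submitted"},
    "submitted": {"running", "cancelled"},
    "running":   {"succeeded", "failed"},
    "succeeded": set(),
    "failed":    {"submitted"},   # retry path
    "cancelled": set(),
}

def reachable_states(start):
    """Walk the transition map and return every state reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        state = stack.pop()
        if state not in seen:
            seen.add(state)
            stack.extend(TRANSITIONS.get(state, ()))
    return seen

def test_no_unreachable_states():
    # Every state defined in the model should be reachable from the initial state.
    assert reachable_states("draft") == set(TRANSITIONS)
```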

When designing tests, start looking beyond the UI options for what the system expects or rejects. Think broader; zoom out first, and then look into each intricacy. Leverage tried and tested design techniques for your test design.

History and experience

Leverage your experience with the software to design new tests. If you have worked with the software for a while, you have probably learned what tends to cause issues and how the software has failed in the past.

If you are new to the project, try to learn the history of the software by reading past issues and their resolutions in the bug tracking system. This will help you understand common problem areas, high bug-density features, and usual issue types. 

Before designing test cases for a new feature or enhancement, make it a point to look at related issues and consider the feature’s history. These will generally be the test cases that help you locate bugs during test execution.

Related areas

Tests for a feature tend to focus on the feature itself and seldom on its integration with other existing features.

For example, we added a new feature in our application that lets users download their run reports as Excel files, where previously the reports were only displayed in the app. As we tested the new function, we verified that the reports downloaded, opened correctly, and were saved to the desired location.

But what if we change the report settings and look at the downloaded reports with different parameters? Previously, the reports had no limit on things like the number of rows. Now that we are exporting them to Excel, what happens if we generate and download a report that exceeds Excel’s maximum row count?
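
A sketch of what such a test might look like, assuming a hypothetical `split_for_excel` helper in the export code (Excel worksheets cap out at 1,048,576 rows):

```python
EXCEL_MAX_ROWS = 1_048_576  # hard per-worksheet row limit in modern Excel formats

def split_for_excel(rows, limit=EXCEL_MAX_ROWS):
    """Hypothetical helper: split report rows into Excel-sized worksheets."""
    return [rows[i:i + limit] for i in range(0, len(rows), limit)]

def test_report_larger_than_excel_row_limit():
    # Simulate a report one row larger than a single worksheet can hold.
    oversized_report = list(range(EXCEL_MAX_ROWS + 1))
    sheets = split_for_excel(oversized_report)
    # The export should not silently truncate: every row must land somewhere,
    # and no single sheet may exceed Excel's limit.
    assert sum(len(sheet) for sheet in sheets) == EXCEL_MAX_ROWS + 1
    assert all(len(sheet) <= EXCEL_MAX_ROWS for sheet in sheets)
```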

For any new feature that gets added to a product, you need to look at its integration with other existing areas. Try to create tests that verify the new feature works correctly while existing functionality remains intact. To make this easier, you can also create a list of related or impacted areas when a new feature is designed, and use it as a reference for better test design.

Creativity

Finally, some of your tests are simply going to depend on your thinking and creativity. Time spent exploring the application may surface some out-of-the-box scenarios to test, but if you apply that same thinking early, during test design, you can capture those creative test cases before execution even begins.

For example, if you are going to test a feature that allows users to upload a file, you will surely have tests for uploading supported file types, rejecting unsupported ones, warning about files that are too large, and so on.

What about trying to upload a read-only, write-denied, or read-denied file? How about trying to upload a blank file (0 MB) or a file that is hidden? How does the system behave in those scenarios? Are the files accepted, or do they cause unexpected failures?
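
Here is a rough sketch of how a couple of those scenarios could be automated with pytest. The `upload` function is a hypothetical stand-in for your application's real upload API, and the expected outcomes depend on your own specification.

```python
import os
import pytest

# Hypothetical stand-in for the application's upload handler: it reads the file
# and reports success or a clear error. Replace it with your real upload API.
def upload(path):
    try:
        data = path.read_bytes()
    except OSError as err:
        return {"accepted": False, "error": str(err)}
    if not data:
        return {"accepted": False, "error": "file is empty"}
    return {"accepted": True, "error": None}

def test_zero_byte_file_is_rejected_cleanly(tmp_path):
    empty = tmp_path / "empty.csv"
    empty.touch()                           # a valid but 0-byte file
    result = upload(empty)
    assert result["accepted"] is False
    assert result["error"]                  # the user gets a reason, not a crash

@pytest.mark.skipif(os.name == "nt", reason="relies on POSIX permission bits")
def test_read_denied_file_is_rejected_cleanly(tmp_path):
    locked = tmp_path / "locked.csv"
    locked.write_text("a,b,c\n")
    os.chmod(locked, 0o000)                 # strip read permission (assumes a non-root test user)
    try:
        result = upload(locked)
        assert result["accepted"] is False  # unreadable files must not crash the app
    finally:
        os.chmod(locked, 0o644)             # restore so pytest can clean up tmp_path
```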

These are the test cases that will take your testing a notch above the usual. Thinking like this will make you much more aware of the system, its weaknesses, and its internal workings.

Conclusion

Take time to design your tests. The more thought you put into them, the more valuable each one becomes. These are the test cases that will find the most unusual bugs, enhancing both your experience and the software’s quality.

Nishi is a corporate trainer, an agile enthusiast, and a tester at heart! With 13+ years of industry experience, she currently works with Trifacta as a Community Enablement Manager. She is passionate about training, organizing community events and meetups, and has been a speaker at numerous testing events and conferences. Check out her blog where she writes about the latest topics in Agile and Testing domains.
