This is a guest post by Cameron Laird.
Software testing theory recognizes different kinds of tests: unit, integration, load, and so on. However, tests that look beyond the software itself, while still being about the software, are also important.
Here are a few of these other kinds of tests that testers need to know how to perform.
Functionality
Most of what testers test is functionality. For example, if a user tries to register a new account with a name that is already in use, then the system under test should display a message such as, “Username $NAME is already in use.”
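A check like that is easy to automate. Here is a minimal sketch in pytest style; the register() function is a hypothetical stand-in for the real registration call:

```python
# Hypothetical stand-in for the system under test; a real test would
# drive the actual registration endpoint and capture its message.
TAKEN_USERNAMES = {"cameron"}

def register(username: str) -> str:
    if username in TAKEN_USERNAMES:
        return f"Username {username} is already in use."
    TAKEN_USERNAMES.add(username)
    return f"Welcome, {username}!"

def test_duplicate_username_shows_in_use_message():
    assert register("cameron") == "Username cameron is already in use."
```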
A slightly more elaborate requirement might specify the behavior of, say, a shopping cart. In this case, testing responsibilities include correct computation of taxes and shipping, inclusion of discounts, confirmation of customer intent, and transmission of results to Fulfillment, Accounting, and other systems.
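Requirements like these reduce, in part, to arithmetic that tests can pin down exactly. A minimal sketch, with assumed (purely illustrative) tax, shipping, and discount figures:

```python
from decimal import Decimal

# Hypothetical cart-total computation; the rates and rules here are
# illustrative assumptions, not any particular system's logic.
def cart_total(subtotal: Decimal, tax_rate: Decimal,
               shipping: Decimal, discount: Decimal) -> Decimal:
    taxed = subtotal * (1 + tax_rate)
    return (taxed + shipping - discount).quantize(Decimal("0.01"))

def test_cart_total_includes_tax_shipping_and_discount():
    # $100.00 at 8.25% tax, plus $5.00 shipping, minus a $10.00 discount.
    total = cart_total(Decimal("100.00"), Decimal("0.0825"),
                       Decimal("5.00"), Decimal("10.00"))
    assert total == Decimal("103.25")
```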
Functionality of this sort is where application testers have traditionally concentrated their efforts. Testing skills can pay off in plenty of other ways, though.
Accessibility
Lighthouse is an open-source automated tool Google publishes to help teams verify that web applications meet specific standards for accessibility, performance, SEO, and other dimensions.
Accessibility is crucial for effective delivery to all demographics of people, whatever their age, ability, language, or location. It’s all about creating high-quality sites and tools that can be used by anybody, so no one is excluded from your product.
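One way to put Lighthouse to work is to run it from a test script and assert on the score it reports. A sketch, assuming the Lighthouse CLI is installed (for example, via npm install -g lighthouse) and reading the 0-to-1 accessibility score from its JSON report; the URL and threshold are illustrative:

```python
import json
import subprocess

# Run Lighthouse's accessibility audit against a page and fail if the
# score drops below a chosen threshold.
def accessibility_score(url: str) -> float:
    subprocess.run(
        ["lighthouse", url, "--only-categories=accessibility",
         "--output=json", "--output-path=report.json",
         "--chrome-flags=--headless"],
        check=True,
    )
    with open("report.json") as f:
        report = json.load(f)
    return report["categories"]["accessibility"]["score"]  # 0.0 to 1.0

def test_homepage_accessibility():
    assert accessibility_score("https://example.com/") >= 0.9
```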
Security
Information technology (IT) security is, of course, an immense and ever-expanding subject. Testing specific security aspects is a full-time job, so for a non-specialist to try to keep up is quixotic.
Still, it’s valuable for testers to learn basic application security principles. Not only can they watch for security defects early, but awareness of security technologies can also help them design tests for potential functional weaknesses.
Here’s a small example: A particular system might be designed to use HTTPS, rather than HTTP, for sensitive operations. Suppose its use of HTTPS is entirely secure (a good thing, of course!), but certain hyperlinks are constructed by blindly replacing “http” with “https”. If end-users request a URL that is already based on HTTPS rather than HTTP, they might eventually be directed to httpss://$SITE/…
While a request for service through httpss://… should introduce no security vulnerability, it also won’t return a useful result to the end-user. That’s definitely an error of functionality, and it’s much better to detect it during in-house testing, before it reaches users.
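Here is a sketch of how that defect might arise and a regression test that catches it; force_https() is a hypothetical name, and example.com stands in for the real site:

```python
# The naive link rewrite described above, and a test that catches it.
def force_https(url: str) -> str:
    # Buggy: blind string replacement corrupts URLs that are already HTTPS.
    return url.replace("http", "https")

def force_https_fixed(url: str) -> str:
    # Only rewrite URLs that actually begin with the insecure scheme.
    return "https" + url[4:] if url.startswith("http://") else url

def test_already_secure_urls_are_left_alone():
    assert force_https_fixed("https://example.com/") == "https://example.com/"
    # The buggy version produces the unusable httpss:// scheme:
    assert force_https("https://example.com/") == "httpss://example.com/"
```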
Compliance
Imagine a perfect application: It does everything end-users want of it in an adequately timely fashion, its security is impregnable, and it scores high for accessibility. It’s just what it should be, except that it depends on a third-party library whose license doesn’t permit that use.
Kiuwan Insights and similar license scanning solutions can help prevent such failures. Other specialized tools can help ensure compliance with such regulations or standards as Sarbanes-Oxley, HIPAA, PCI DSS, SOC 2, COBIT, and so on. Testers would do well to at least be familiar with which of these regulations apply in their particular marketplace.
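Short of a dedicated scanner, even a small script can flag obvious surprises. A minimal sketch using Python’s standard importlib.metadata; the allowlist is an assumption standing in for whatever your legal team has actually approved, and note that package license metadata is often free-form or missing:

```python
from importlib.metadata import distributions

# Flag installed packages whose declared license isn't on an approved
# list. The allowlist here is illustrative; license metadata is often
# absent or inconsistent, so treat this as a first-pass screen only.
APPROVED = {"MIT", "MIT License", "BSD", "Apache 2.0", "Apache Software License"}

def unapproved_licenses() -> list[tuple[str, str]]:
    flagged = []
    for dist in distributions():
        name = dist.metadata.get("Name", "?")
        license_name = dist.metadata.get("License") or "UNKNOWN"
        if license_name not in APPROVED:
            flagged.append((name, license_name))
    return flagged

if __name__ == "__main__":
    for name, license_name in unapproved_licenses():
        print(f"Review needed: {name} ({license_name})")
```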
Texts
Imagine a user pushes a Help icon and sees, “This page intentionally left blank.” That’s not good enough.
It might not be a failure of the software in a narrow sense; the application likely displayed, accurately, the content it received from a help subsystem, which is of course a good thing. Testing should report that the software behaved correctly.
However, the application as a whole failed, and testing needs to report that, too, as quickly as possible. Even if content (or documentation, or however it is labeled) is managed separately from software development, the testers’ responsibility is to advocate for end-users and ensure the end-user experience is everything it can be.
Content too easily slips through the cracks and ends up missing, incomplete, poorly edited, inaccurately localized, or otherwise more a distraction than a complement to the larger application. Even such small details as words spelled differently in British vs. American English can degrade the end-user experience.
What can testing do about content embedded in applications? As with most test responsibilities, only a combination of approaches will give the organization confidence that it’s adequately ensured the accuracy of its textual content:
- Automated verification of many outputs or displays
- Expert testers spot-checking selected pages
- Static scans of source directed to spell-checkers, grammar-checkers, and so on (a sketch of this approach follows the list)
- Bounties or other active outreach with end-users to identify errors
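As an illustration of the static-scan item, here is a minimal sketch that flags empty or placeholder strings, under the assumption (hypothetical for your project) that user-facing text lives in flat key-to-string JSON locale files:

```python
import json
from pathlib import Path

# Flag user-facing strings that are empty or still contain placeholder
# text. Assumes flat JSON locale files under a locales/ directory.
PLACEHOLDERS = {"", "TODO", "TBD", "This page intentionally left blank."}

def suspect_strings(locale_dir: str = "locales") -> list[tuple[str, str]]:
    flagged = []
    for path in Path(locale_dir).glob("*.json"):
        for key, text in json.loads(path.read_text(encoding="utf-8")).items():
            if isinstance(text, str) and text.strip() in PLACEHOLDERS:
                flagged.append((f"{path.name}:{key}", text))
    return flagged

if __name__ == "__main__":
    for location, text in suspect_strings():
        print(f"Suspect content at {location}: {text!r}")
```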
Photographs, diagrams, and other pictures
Non-textual content is like text, except harder. The right photograph goes a long way toward making the impression you want for your application or site; a wrong one tells end-users that you can’t be trusted to know a skateboarding champion from a dead screenwriter.
Perhaps such an error isn’t something software developers can fix. Until it is corrected, though, customers or readers won’t give the software the chance it deserves, so testers should be alert to non-textual content and the message it conveys.
Other automations
Many organizations limit the responsibility of testing: Testers might not be specifically assigned to test for accessibility, for instance. Even in the most extreme cases, where testing’s scope is at its narrowest, it’s valuable for testers to know what tools are available.
Specialized tools help manage several other dimensions of the quality of software applications. Here are some with broad applicability:
- Tools that compare results across a range of web browsers (Chrome vs. Safari vs. Edge, desktop vs. mobile, etc.); a sketch of this follows the list
- Dashboards to ensure that source test coverage, source stylistic quality, and source churn stay within bounds
- Tools to automate mutation testing
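For the first of those, here is a minimal cross-browser spot check using Playwright, one of several tools in this space. It assumes Playwright and its browsers are installed (pip install playwright, then playwright install), and example.com stands in for the page under test:

```python
from playwright.sync_api import sync_playwright

# Load the same page in three browser engines and compare a simple
# signal (the page title); real comparisons might diff screenshots
# or rendered DOM instead.
def titles_across_browsers(url: str) -> dict[str, str]:
    titles = {}
    with sync_playwright() as p:
        for name, browser_type in [("chromium", p.chromium),
                                   ("firefox", p.firefox),
                                   ("webkit", p.webkit)]:
            browser = browser_type.launch()
            page = browser.new_page()
            page.goto(url)
            titles[name] = page.title()
            browser.close()
    return titles

def test_title_is_consistent_across_browsers():
    titles = titles_across_browsers("https://example.com/")
    assert len(set(titles.values())) == 1, titles
```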
To return to those dashboards: Suppose you notice that a new feature has been implemented in a source file whose coverage score is 30%, while most of your application runs at 85%. That file was a hot spot of activity during the last three sprints, and it just squeaks by in “linting” measures. Even if the feature passes its formal functionality specifications, those other signals suggest it merits extra attention.
Invest a little extra time, and perhaps a surprise will turn up — one that’s better to find before the application’s general users stumble into it.
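Here is a sketch of how that kind of cross-check might be automated, assuming coverage.py’s JSON report (produced by coverage json) and a git checkout; the thresholds are illustrative:

```python
import json
import subprocess
from collections import Counter

# Flag files that are both poorly covered and heavily churned: the
# "hot spot" pattern described above. Thresholds are illustrative.
def hot_spots(coverage_path: str = "coverage.json",
              min_coverage: float = 60.0, min_commits: int = 5) -> list[str]:
    with open(coverage_path) as f:
        files = json.load(f)["files"]
    # Count how often each file appeared in recent commits.
    log = subprocess.run(
        ["git", "log", "--since=90 days ago", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter(line for line in log.splitlines() if line)
    return [path for path, data in files.items()
            if data["summary"]["percent_covered"] < min_coverage
            and churn[path] >= min_commits]

if __name__ == "__main__":
    for path in hot_spots():
        print(f"Extra attention suggested: {path}")
```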
Conclusion
It’s easy for most of an organization to take a narrow view of testing’s contribution. Don’t let that limit you. You speak for users! That’s crucial, and you do it better when you think through the different kinds of correctness your application should exhibit, as well as the tools and practices that help guarantee them.
Cameron Laird is an award-winning software developer and author. Cameron participates in several industry support and standards organizations, including voting membership in the Python Software Foundation. A long-time resident of the Texas Gulf Coast, Cameron’s favorite applications are for farm automation.