This is a guest posting by Justin Rohrman.
Salesforce.com is in the process of eliminating the testing role in parts of its organization. Some people have the option to go through an interview process to see if they are a fit for a development job, but nothing is promised. Yahoo did the same thing a few years ago. These organizations are taking the stance that testing is something anyone can do, and that it is best done as close to development, in both time and space, as possible.
Some organizations will be very successful having no one in a testing role; others will quietly rehire a testing department in a few years and try to rebuild the quality practices that disappeared after the layoffs.
Do You Have Handoffs?
You have a pretty good idea of what handoffs are if you’ve ever heard the phrase “throw it over the wall.” There is a pause in time and a transfer of information that has to happen every time a piece of work is passed between roles in the organization. The handoff flow usually sees product managers collecting user stories months before they are developed. Developers take those stories once they are at the top of the queue, maybe ask a few questions and then start coding them. Some unit tests might be written, but in my experience, the culture of developer testing isn’t very strong in organizations that have lots of handoffs.
Testers have probably been waiting around for a while, maybe “building test plans,” but now is their time to shine. The tester’s first encounter with the new code at this point is in a new build. They might also have some ideas of what to expect from the bug tracker card describing the change.
When I’m in this scenario, I start with the easy-to-find bugs: fields that take incorrect formats and cause errors when the page is submitted, layout and user experience problems on the different supported platforms, JavaScript errors in the developer console from using the product, and so on. After this I move on to harder-to-find problems that stem from using the product longer and with increasingly complex scenarios that are more representative of what a person would really do. These tests sometimes lead to discovering problems with performance, how the product handles data load, data persistence and usability.
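Those "easy-to-find" format bugs are also the easiest to pin down with a quick automated check once a tester has found one. The sketch below is hypothetical (`validate_date` is an assumed helper, not part of any real product); it shows the kind of malformed-input probing described above, turned into a repeatable script.

```python
# Hypothetical sketch: probing a date field with the malformed inputs a
# tester would try first. validate_date is an assumed stand-in for
# whatever the product does when the page is submitted.
import re

def validate_date(value: str) -> bool:
    """Accept only ISO-style YYYY-MM-DD dates; reject anything else."""
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
        return False
    _, month, day = (int(part) for part in value.split("-"))
    return 1 <= month <= 12 and 1 <= day <= 31

# A tester's first pass: hammer the field with formats it should reject.
malformed = ["31/12/2025", "2025-13-01", "2025-1-1", "", "not a date"]
for candidate in malformed:
    assert not validate_date(candidate), f"accepted bad input: {candidate}"

# And confirm the happy path still works.
assert validate_date("2025-12-31")
```

A handful of checks like this catches regressions in the obvious cases, which frees the tester's session time for the longer, more realistic scenarios that follow.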
All of that sounds like a normal testing session, but the key factor is that it takes a long time, and it all piles up right before a release is supposed to happen. Throwing work over a wall usually means lost communication: parts of the development process were never observed by the next person who has to follow up on that work. By the time the change gets to testers, it carries a whole lot of incorrect assumptions, and each of those assumptions and miscommunications adds risk to the product.
Organizations that develop in this style not only need testers, but need more of them than other organizations, because so many unknown risks (aka problems) in the product are left to be discovered in the last days before a release.
Do You Have High-Stakes Software?
A handful of projects — regulated fields and mobile projects, for example — work on slower cycles, regardless of how good the development practices are. Mobile projects have a slow cycle to get into the app stores and then an unpredictable deployment schedule. Both Apple and Google have a review process that an app must pass before it can appear in the store. That process can be expedited, but even then it usually takes a few days. Once your new version is in the store, you have little control over whether or when your customers will actually install it.
Software projects that need approval from regulatory bodies, such as the FDA or SEC, have layers of process built into every release that also add time to the release cycle.
In either scenario, problems discovered after a release add unpredictable amounts of time to the next one. Testers are useful in both cases, but they’re required in regulated spaces.
Do You Invest in Making It Easy to Recover?
Software bugs happen; this is just a fact of life. A customer will eventually find a problem, no matter how many development practices we use to design good code or how much testing we do. The question is, how long does it take to recover from important problems when they are found?
A waterfall organization might push a release to a customer and then start working on the next release. Over the course of that release cycle, customers start using the current production version and notice a problem. They report that problem, and it gets triaged into the next release along with a backlog of 10 or 15 other bugs. That bug fix will most likely get sent out in the next release a month or more out. If the problem is really bad, it might get included in a patch release, but that’s probably a week off at best.
Agile organizations that are releasing every two weeks are on a similar cycle, but instead of months, the bug fix is a week or two out. They might be able to release sooner, but this would disrupt the current development cycle and cause a feature or a different bug fix to get pushed out. (Or maybe someone has to work nights to get the fix done.)
Companies that work in these models need testers because finding problems in production disrupts the business sales cycle. Every release is a promise to a customer of some new feature they want, or to a potential customer of a change they require to go live. Adding unplanned bug fixes to a release means something else has to get kicked out — for companies working at a sustainable pace, at least.
Another option is investing heavily in the ability to recover. This includes practices like test-driven development to design better code, using feature flags so that changes can be added or removed as needed, sophisticated monitoring to quickly discover when problems happen, and the ability to build and deploy changes in a matter of minutes. That is not a trivial set of process and tooling, but neither is the amount of work it takes companies to recover from a bug in production.
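To make the feature-flag part of that toolkit concrete, here is a minimal in-process sketch. The flag names and the `checkout_total` function are invented for illustration; real teams typically use a dedicated flag service, but the recovery idea is the same: a misbehaving change is switched off in seconds rather than rolled back over hours.

```python
# Minimal feature-flag sketch (assumed design, not a real library).
# A risky change ships guarded by a flag; if it misbehaves in
# production, operators disable the flag instead of redeploying.
class FeatureFlags:
    def __init__(self) -> None:
        self._flags: dict[str, bool] = {}

    def enable(self, name: str) -> None:
        self._flags[name] = True

    def disable(self, name: str) -> None:
        self._flags[name] = False

    def is_enabled(self, name: str) -> bool:
        # Unknown flags default to off, so new code paths ship dark.
        return self._flags.get(name, False)

def checkout_total(prices: list[float], flags: FeatureFlags) -> float:
    total = sum(prices)
    # Hypothetical new discount logic, guarded by a flag.
    if flags.is_enabled("bulk-discount") and len(prices) >= 3:
        total *= 0.9
    return total

flags = FeatureFlags()
assert checkout_total([10, 10, 10], flags) == 30      # flag off: old behavior
flags.enable("bulk-discount")
assert checkout_total([10, 10, 10], flags) == 27.0    # flag on: new code path
flags.disable("bulk-discount")                        # instant "recovery"
assert checkout_total([10, 10, 10], flags) == 30
```

The point of the pattern is that turning the flag off restores the old behavior without a build or deploy, which is exactly the fast-recovery investment described above.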
Do You Need Testers?
One easy way to answer this question is to fire all of your testers and see what happens. If your product goes down the tubes and you lose customers, you probably needed those testers after all.
I don’t recommend that method. Take a look at the stakes of your software and your ability to recover from a failure. Do your customers absolutely need your product to work on the first try? Are release cycles on the scale of weeks rather than hours? Do you have the ability to shut off a change that is causing problems and deploy a fix minutes after the code is written? If you answered yes, yes and no, then you probably want to keep a few testers around in your organization.
Justin Rohrman has been a professional software tester in various capacities since 2005. In his current role, he is a consulting software tester and writer working with Excelon Development. Outside of work, he serves as President on the Association for Software Testing's board of directors, helping to facilitate and develop various projects.