Was the testing you did today beautiful? You know, that testing where you copy-pasted Lorem Ipsum into every text box in your application in turn over three long hours? Hmm. But have you ever seen any beautiful testing? I mean, can testing even be beautiful?
Tim Riley and Adam Goucher reckon so and, back in 2010, collected 20-odd essays from software development practitioners including Scott Barber, Alan Page, Karen Johnson, Rex Black, Lisa Crispin, and Matt Heusser describing good-looking work they’d seen or done themselves.
Riley and Goucher have their own take on what comprises beauty: testing which is “fun, challenging, experiential, thoughtful and valuable”. But, of course, beauty is in the eye of the stakeholder, so there’s an abundance of perspectives in these pieces, loosely grouped into three categories: testers, process, and tools.
Some of the writing here hasn’t aged as well as it might have, and some of the ideas about what makes a thing beautiful don’t align closely with mine. But what I still get from it, what’s still valuable to me as a tester, is the variety of viewpoints.
It’s such a handy skill in testing, to be able to identify and take an alternative position, and then another, and then another. These outlooks are a set of lenses through which to gaze on, and assess, problems, solutions, and implementations. So, even if I’m reluctant to agree with their assertions of beauty, I’m happy to allow that the authors find positive aspects in the stories they’re telling, and perhaps that can help me to arrive at positive outcomes for myself.
I’ll talk about one essay in each stream and then describe some other aspects of testing that to my mind could constitute beauty, but which you’re at liberty to regard in any way you see fit.
People
The title of Scott Barber’s Collaboration Is the Cornerstone of Beautiful Testing gives the game away somewhat. Although Barber factors out a number of properties of beautiful testing (including that it is desired, deliberate, useful, technical, social, respectful, and challenging), what he prizes above all else is collaboration. He describes performance testing projects in which it is working with colleagues from Dev and IT, and with customers, that gets the business to the right result.
In one example, a user interface change provokes complaints from the call centre team carrying out user acceptance testing that the updated application is slow. Rounds of fixes strip functionality out in an attempt to claw back performance, but have no effect. It’s only when Barber notices a pattern in the field reports and asks to sit with one of the users, to watch them at work on the original system and its replacement, that the true issue is revealed.
The problem wasn’t latency, or a slow client machine, or a bad version of some library deep in the stack, but that the revised system penalised expert users, efficient users, users who had internalised and optimised the old workflow using keyboard shortcuts, by forcing them to do more with the mouse.
Armed with this information, Barber can set the development team barking up the right tree, deploy a fix to the “slow” application, and leave his users delighted. “For a performance tester there are few things as beautiful as call-center representatives who are happy with an application you have tested.”
Process
The beauty in Karen Johnson’s medical software project was prompted by her team’s hunch that something was missing from the testing, that there were issues to find that weren’t being found. Seasoned testers will pay careful attention to their gut, and this particular crew took it as their cue to broaden the range of approaches they’d taken.
Strict regulations in her domain required that there was a certain amount of pre-approved scripted testing, but this was augmented by a team of exploratory testers and feedback loops. Potential issues found while executing cases could direct the exploration and exploration could result in new cases being added. A communal test lab created energy and momentum, while a growing team forced additional collaboration, its format evolving with the team and the facilities. The nature of the software – and the potential worst-case scenario of patient death – provided additional motivation and focus, in case it was needed.
It was crucial to simulate real-life interactions and usage patterns, so the test lab became a simulated hospital, with genuine equipment, realistic layout, and with testers representing staff and selflessly executing prescribed roles and interactions with the various hardware and software components.
Johnson reflects on what she found so appealing in this scenario: “I am fortunate to have seen integrity in action, which is clearly one of the more beautiful sights to see … just because it is beauty that cannot be seen on a canvas nor heard at a symphony does not make it less beautiful.”
Tools
Mutation testing is a way of assessing the coverage of a test suite. Mutants are created by making small changes to the program code (say, replacing one arithmetic operator with another, or swapping two variables), then the test suite is run against each mutant to see which ones it detects. Failure to detect a mutant suggests a gap in the test suite.
Andreas Zeller and David Schuler describe this basic concept and then list a set of practical problems: modifying the source code with a single mutant and then recompiling is time-consuming; running all tests when a mutant might only affect a single code path is inefficient; and choosing the most effective mutants is non-trivial. They describe solutions too, for instance working on Java bytecode directly rather than on the source.
Their analysis includes thoughts on risk, and specifically a concept they call risk distribution, which relates roughly to the proportion of the codebase a single mutation is believed to be able to impact. They provide a couple of different ways of approximating this, including comparing code paths taken during test suite runs before and after mutation.
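As I understand it, the path-comparison approximation might look something like the following Python sketch. This is my own illustration of the idea, not Zeller and Schuler’s implementation; `classify`, its mutant, and the tracing helper are all invented:

```python
import sys

# Trace which lines a run executes before and after mutation, and treat
# the difference in paths as a rough proxy for the mutation's impact.

def traced_lines(fn, *args):
    """Run fn(*args), returning the set of relative line numbers executed."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            # Offset from the def line, so two functions are comparable.
            lines.add(frame.f_lineno - frame.f_code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return lines

def classify(x):
    if x >= 0:
        return "non-negative"
    return "negative"

def classify_mutant(x):      # '>=' mutated to '<'
    if x < 0:
        return "non-negative"
    return "negative"

before = traced_lines(classify, 3)
after = traced_lines(classify_mutant, 3)
# The larger the path difference, the wider the mutation's likely impact.
impact = len(before ^ after) / len(before | after)
print(f"impact: {impact:.2f}")
```

Here the mutated comparison sends the same input down a different branch, so the executed-line sets diverge; a mutation whose runs traced identical paths would score zero on this crude measure.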
There’s a lot of sweat and probably no little blood and tears in this kind of meta-engineering work, but for them mutation testing “is a way to test a test, a way so simple and straightforward that it can easily qualify as beautiful.”
And The Rest
Three stories, each with an idea of beauty for the authors and yet each with other interesting characteristics, perhaps even loveliness, for me:
- The act of stepping back and then following up led to a successful reframing of the problem for Barber.
- The willingness to recognise the possibility of shortcomings and then act on them led to increased confidence in the application under test in Johnson’s test lab.
- The thoughtful and data-driven approach of Zeller and Schuler to the analysis of the tool which is testing the tests for a tool attempts to optimise a cost/benefit ratio.
But what other factors might constitute beauty? At least these could be considered:
- Elegance: an elegant test might exercise the system with precision, touching only what needs to be touched, exploiting that which is already there. It might be conceptually very clean, keeping focus on a single concern, or working around some confounding aspects of the system to enable the work that needs doing to be done.
- Efficiency: an efficient test approach might free up testers to do other work, or make more of the same work possible, or simply reduce the amount of housekeeping. Efficient testing might mean keeping resource usage down for the test harness so that it doesn’t interfere with the system under test.
- Effectiveness: an effective test might be one that uses the ugliest, most mindless and inefficient brute force methods to obtain a highly desirable outcome, like running all of your test suites all weekend in parallel against a single instance of your application to smoke out that intermittent memory leak.
And who might appreciate these things? Amongst others, I’d suggest:
- Stakeholders: beauty to a stakeholder can come in many forms. Delivering on time and to the desired standard will sometimes elicit gasps of delight, but beauty can also be found in research that makes a decision point clearer.
- End Users: beauty to an end user may be aesthetic, and a tester’s efforts can contribute to making the user experience as pleasant as possible. But users are also value-focused, and testing which aligns itself with benefits to end users can make the product more attractive to them.
- Testers: Testers? Yes, don’t write yourselves out of the picture. You’re entitled to want to see something beautiful in your work, your team, your product too!
Reflections
When you start a piece of work you might consider what would make it beautiful for the people who matter. When you’ve finished, and are wondering whether you did a good job, you might remind yourself that “good”, like beauty, is relative to the measurement scale and the person doing the measuring.
Here’s a final example that illustrates this point perfectly. Alice Waters, in an extract from her book, Coming to My Senses, says “beauty is the language of care”. The language of care. Wow. A new angle from which to watch the world unexpectedly revealed while I was reading the paper. And now this piece can finish in a different way than I’d planned: to answer my question at the top, was your testing today beautiful? Yes, to some people, if you did it for the right reasons, you took care while doing it, and you care about why you did it.
James read Beautiful Testing, edited by Tim Riley and Adam Goucher and published by O’Reilly. Thanks to Karo for the loan and for bringing the book to the Linguamatics test team reading group. Adam Goucher’s own chapter from the book has just been made available for free at Test Huddle: Beautiful Testing is Efficient Testing.
This is a guest post by James Thomas. James is one of the founders of Linguamatics, the world leader in innovative natural language-based text mining. Over the years he’s had many roles in the company and is currently the test manager, a position in which he strives to provide an environment where his testers have an opportunity to do their best work. He’s on Twitter as @qahiccupps and blogs at Hiccupps.