This is a guest post by Peter G. Walen.
Since the concept of Whole Team Quality became popular, plenty of people have made grand pronouncements about what the term means. It has almost become a buzzword in its own right, with people making money from it. Let’s take another look, as if we were thinking about it for the first time.
Organizations claiming some form of “Agile” have heard of “Whole Team” and “Whole Team Quality.” People being people, they tend to interpret the idea in ways that line up with their preconceived notions rather than its actual intent. It seems to me that, for the most part, there is little critical thought about what the concept means and how it can benefit teams and organizations.
For many, “whole team” translates to one of a pair of absolute positions:
1. All testers must write production code. Code supporting automation is not “production” code and so does not count as “real” code.
2. Since everyone is responsible for quality, there is no need for people who do testing.
Granted, loads of people will go to great lengths to justify their behavior or decisions. Sadly, there are legions of examples where decisions are made and direction is set, and only then are information, data, and “research” gathered to validate those decisions.
I’m reminded of an expression I heard many years ago, “My mind is made up, don’t confuse me with facts.”
The problem
The single greatest issue with the above “definitions” is that they have nothing at all to do with the meaning or intent of the most cited “authorities” on Whole Team Quality, Lisa Crispin and Janet Gregory. In their first book together, Agile Testing, they address the question of what role testing plays in an Agile environment.
While many organizations will interpret “whole team” as one of those definitions, they tend to do so out of convenience, using the term to lend an air of reasoning to their actions. The point of “whole team quality” is found in neither idea.
The point is that everyone on the team can play a role in improving the quality of the product the team is working on.
Both definitions conflate the idea of “quality” with “testing.” This is often a means to an end: among the organizations that have adopted these definitions, the most common outcome is reduced headcount.
My concern is not so much the tautological error itself as the lack of understanding of the relationship between quality and testing, and of how testing specialists can help teams and larger organizations deliver good quality products to their customers.
Development and testing
As I see it, part of the issue comes from the schism between “development” and “testing.” The trouble lies in splitting the roles so that they are completely independent of each other.
I understand that many people working in software today have backgrounds different from what might be considered “typical.” I also understand that this has been the case for at least the last 35 years; it is not a new phenomenon. For my first many years developing software, developers were expected to test their own work and the work of their colleagues.
We applied due diligence by working from detailed design specs, which were themselves written and “tested” by developers (we called those sessions reviews and walk-throughs). The specs were developed from requirements that developers recorded based on notes from conversations with business experts, users, and customers.
There were also testing workshops that had nothing to do with creating test plans or writing test scripts; they focused on communication and collaboration. Any developer who refused to test software or to engage in these other practices was not employed long, at least at the shops where I worked.
As I write this, I realize that my experience is significantly different from that of junior and entry-level developers today. It leaves me with a conundrum.
The “cost of testing” conundrum
Several years ago, more than ten at least, there was much talk in software circles, and a concerted push, to “reduce the cost of software testing.” There was a book on the very topic, written and edited by testers: How to Reduce the Cost of Software Testing. I participated in the project, reviewing chapters and helping writers with some of the copy editing.
From the start, I was sympathetic to the problem and aware of its causes. I also rejected many of the arguments around it. My reasoning was simple: why look at the cost of testing software as distinct from the cost of requirements analysis, or project management, or coding the software?
If one aspect of software development is examined in isolation, why not the others? And if one necessary aspect is considered of lesser value than the others, why is that?
As near as I can tell it is because there is a sense that testers, by reveling in “breaking” software, have rather missed the point. Instead of working as a team to make amazing software, a fair number of our colleagues apparently take great joy in undermining the work of other members of the team.
Does this answer the question of how testing became separated from development? No. Not in the least. Nor does it explain why testing is not considered in the same light as design and analysis functions.
Does this explain how it is that “whole team” quality means that testing specialists are not needed? Not that I can tell.
However, it might explain why teams that had testers with that mindset likely don’t see an issue with that idea.
Beyond the conundrum
I’m not sure if there is a single way to reconcile the problem of mindsets. Early in my testing life, I admit I had a certain amount of smug surety that if I found a bug in the code it was a win for me, as a tester.
Now I see a bug as my failure. It means that, when working with developers on my team, I failed to anticipate issues. In my role as a testing specialist, it means I failed to make clear the issues and concerns that needed to be exercised. It means I failed to adequately coach the team in the test and investigation practices that would have headed off problems.
Maybe that is a way to address “whole team” and testing.
Everyone contributes to the project, bringing their own expertise to the effort. Skilled testing specialists work alongside specialists in design, in writing customer-facing code, and in other aspects of software development. Everyone contributes throughout the project. Everyone is involved and informed.
The entire team shapes and builds the product.
And still
There are organizations and teams who insist there is no reason to have testing specialists. There are entire companies that proudly proclaim they “don’t need” testers: they have automated tests to replace the work that testers did.
They might have a point
If the testers are not engaged with their teams, sharing information and ideas with developers, learning from them, and in turn explaining and demonstrating test techniques, then automated test tools are likely a cost-effective option.
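To make that trade-off concrete, here is a minimal sketch of the kind of automated check such companies lean on. The function and test values are hypothetical, invented for this illustration (written in Python, pytest style); the point is that a check like this guards only the expectations someone already thought to encode.

    def apply_discount(price: float, percent: float) -> float:
        """Hypothetical function under test, invented for this sketch."""
        return round(price * (1 - percent / 100), 2)

    def test_twenty_percent_off():
        # Verifies one case someone already anticipated.
        assert apply_discount(100.00, 20) == 80.00

    def test_zero_percent_off():
        # Another anticipated case, nothing more.
        assert apply_discount(50.00, 0) == 50.00

    # Questions no check in this file asks:
    # - Should a discount over 100% be possible at all?
    # - What about negative prices, or currencies without decimals?
    # An engaged testing specialist raises these with the team before
    # the code is written; a suite like this only guards answers the
    # team has already settled on.

A suite like this is cheap to run and genuinely useful. What it cannot do is ask the questions nobody has asked yet, and that is precisely the work an engaged testing specialist contributes.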
If testers have built their careers on triumphing over developers’ errors, they are likely their own biggest problem and enemy. They have not been working to build up others on the team. I understand that some people can make themselves hard to work with. That does not excuse testers from their responsibility to try.
If people are not working together to learn and make the product better, then there is no team. There is no reason to worry about “whole team” because there isn’t one. They are a collection of individuals.
Quality depends on a team working together as a whole. In my experience, if that is not happening, the presence of testing specialists will not make things better or worse.
Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and with the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.