This is a guest post by Peter G. Walen.
People are talking about how AI is coming and how it will change everything about how we interact with and test software. They might be missing something important: AI is already impacting our lives and our work.
People who know me often label me as an “anti-automation” person. This is interesting to me because I use various “automation” tools to assist me in my work nearly every day. I prefer to think of myself as someone whose focus is “do what makes sense, given the best information and the best tools for the job available at the time the decision is made.”
What I am opposed to is the idea of the new, cool, buzzword-laden solution being hailed as the next great thing that will make the world a better place. Years ago, the early automation tools that did record and playback were hailed in just such a manner. I was skeptical of them. The next iterations of test automation tools were likewise hailed as fixing or avoiding the problems of the earlier tools.
I want to see evidence, real, repeatable evidence, not hand-wavy advertisements posing as “solid research,” before I’m willing to consider something “new.” I suspect that is because I have seen too many people, teams, and companies burned by trusting these reports.
Which makes my writing this all the more interesting.
AI, Artificial Intelligence, from HAL in “2001,” Skynet in the Terminator movies, and VIKI in “I, Robot,” has been the boogeyman countering the “technology makes everything better” trope in popular culture. Robots, and by extension AI, will take away everyone’s jobs, from assembly line workers to call center staff and now, apparently, to knowledge workers in software.
The scary dystopian future many people fear colors all of us. From a zombie apocalypse to a robot/machine apocalypse, we somehow use these unsettling images as “entertainment.” The (original) Godzilla movies were based on the fear of what technology would do; these others are not very different.
And yet, we embrace technology all around us and convince ourselves that the Luddites were wrong and that technology is pretty cool. That is where I generally land. Yes. There are things we must be aware of. We as technology workers and members of the broader society do have a responsibility.
AI in Software Testing
What does that have to do with AI and software testing?
Everything.
Mostly because it is all around us. We are using its fledgling forms to do our jobs better, and to shape and hone our own application of this new-ish technology: from internet searches on the correct way to structure a query we aren’t familiar with, or one we know isn’t working, to working out ideas to help our teams work better and more efficiently. We use AI.
But, Pete, search engines aren’t really AI. What’s your point?
What Makes AI, AI?
AI is driven by the combination of large volumes of data, significant (massive might be a more appropriate word) amounts of computing power and the underlying algorithms that power and drive the actual processing. This also describes a search engine. It describes smaller things like automatic braking functions in “smart” cars and autonomous vehicles. It describes the calculations that allow aircraft to fly reliably with little interaction with human beings.
When we humans get it wrong, it can go horribly wrong. When we get it right, no one notices after the first couple of encounters. It becomes mundane and the glossy newness fades. And we forget the amazement we first had because we expect it to work each and every time, without question.
What AI Can Look Like
(even when we don’t think that is what it is)
We can write scripts to test specific scenarios. We can have them make calls as needed to other pieces of software. We can anticipate what the responses should be, at least most of the time. To accommodate test environment limitations, we can mock those calls and responses, returning the correct response for the normal, usual calls. If we observe and trap the responses to less normal or unusual calls, we can verify how the systems behaved and confirm what the correct response should be for those unusual conditions. We can then build in conditional logic to handle those conditions, so we can mock the responses from the called systems.
We have a crude form of AI when we do so.
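To make that concrete, here is a minimal sketch in Python using unittest.mock. The shipping-rate service, its responses, and the wrapper function are all hypothetical, invented purely for illustration:

```python
from unittest.mock import MagicMock

def lookup_shipping_rate(client, destination):
    """The code under test: a thin wrapper around an external service."""
    response = client.get_rate(destination)
    if response["status"] == "ok":
        return response["rate"]
    return None  # the response we confirmed is correct for unusual conditions

def fake_get_rate(destination):
    # Canned responses for the normal, usual calls...
    canned = {"US": {"status": "ok", "rate": 4.99}}
    # ...and built-in conditional logic for the unusual calls we have
    # observed, trapped, and verified.
    return canned.get(destination, {"status": "unavailable"})

def test_rates():
    client = MagicMock()
    client.get_rate.side_effect = fake_get_rate
    assert lookup_shipping_rate(client, "US") == 4.99   # usual call
    assert lookup_shipping_rate(client, "ZZ") is None   # unusual call
```

The fake returns canned responses for the usual calls and falls back to the verified “unavailable” response for everything else; that fallback is the conditional logic described above.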
This gives us the opportunity to control the responses we receive from the external applications, the ones we are not directly responsible for testing, and to make sure the predicted responses, at least, work in our environment. This then allows us to isolate the conditions we need to look for, and gives us a focal point when we actually run tests against those same external systems without any mocks.
We have another form of AI here.
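One way to get that control is a single switch that decides whether the tests talk to a mock or to the real external system. This sketch continues the hypothetical shipping-rate example; the environment variable name is my own invention:

```python
import os

class MockRateClient:
    """Returns the predicted responses so we fully control the environment."""
    def get_rate(self, destination):
        canned = {"US": {"status": "ok", "rate": 4.99}}
        return canned.get(destination, {"status": "unavailable"})

class LiveRateClient:
    """Talks to the real external system (placeholder for the real call)."""
    def get_rate(self, destination):
        raise NotImplementedError("the real HTTP call would go here")

def make_client():
    # Mocked by default; the same tests run against the real external
    # system only when we deliberately flip the switch.
    if os.environ.get("USE_LIVE_SYSTEMS") == "1":
        return LiveRateClient()
    return MockRateClient()
```

Because both clients answer the same `get_rate` call, the tests themselves do not change; only the source of the responses does.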
When we have conditions to trap “exceptions,” even without any mocks to external systems, and we define what “pass” looks like and what “fail” looks like, then the more broadly we can define those conditions, the more likely we are to find interesting things to investigate. We can leave the interesting work of investigation and review to humans, so that human senses and observations are not dulled by the massive amount of mundane, uninteresting information that would otherwise flow past us.
This is another level of AI.
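A sketch of that idea: trap every exception, sort each outcome into pass, fail, or “a human should look at this,” and surface only that last bucket for investigation. The three-way classification is my framing, not a standard API:

```python
def run_and_trap(fn, *args):
    # Trap exceptions instead of letting them end the run, so surprising
    # conditions are captured for later review.
    try:
        return fn(*args)
    except Exception as exc:
        return exc

def classify(result, expected):
    """Sort each outcome so humans only see the interesting cases."""
    if isinstance(result, Exception):
        return "INVESTIGATE"  # an unexpected condition worth human attention
    return "PASS" if result == expected else "FAIL"

# Example run: only the surprising outcome needs a person to look at it.
cases = [("42", 42), ("41", 42), ("forty-two", 42)]
for raw, expected in cases:
    print(raw, "->", classify(run_and_trap(int, raw), expected))
# 42 -> PASS, 41 -> FAIL, forty-two -> INVESTIGATE
```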
With these things stated, then, what does the future of AI in testing really hold? That, I’m afraid, I must leave to Doc Emmett Brown from the “Back to the Future” movies: “The future hasn’t been written yet! It can be whatever we make it!”
We can free the humans to do creative work in testing: investigating and understanding the conditions and causes we have not yet accounted for.
This level of deep introspection is extremely hard, non-linear, and ill-defined, even within most human brains. Teaching software to do it will be a challenge. Until that happens, we are safe from AI, and all those horrific dystopian ends will not come to pass.
Or at least, not because of AI.
Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.