Where and Why We Automate

I see more test automation projects that are started and later abandoned than projects that are started and then celebrated for what they contribute to the software. It usually begins with a complaint that the QA group takes too long to perform regression testing. Managers demand test automation, testers have a knee-jerk reaction and begin building tests with WebDriver, and inevitably those WebDriver tests still take too long and are so unstable that the staffing cost is higher than if the team had performed regression testing without automation.

In my experience, software professionals tend to talk about automation without context. Managers want automation, so testers jump to building frameworks, tests, and processes without analyzing the real problem. Developers want faster feedback on their code changes, so we get suites of unit and service tests wrapped up in continuous integration. But what is the actual purpose of automation, and for whom?

Ideally, automation is sprinkled throughout the development process and across the different roles on a team. I want to talk about automation in context: which roles on a technical team are using it, how they are using it, and exactly why.

In the Code

Tests in the code are usually written as unit tests or service tests. Unit tests cover a line or two of code at a time and check discrete pieces of functionality, such as authentication. This is probably the most common form of test automation, even though it isn't what comes to mind for most testers when the topic of automation comes up.
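To make that concrete, here is a minimal sketch of what such a test can look like, written with Python's built-in unittest module. The password rule it checks is purely hypothetical and not from any product mentioned in this article; the point is how small and fast the feedback is.

```python
import unittest


def is_valid_password(password: str) -> bool:
    """Hypothetical rule: at least 8 characters and at least one digit."""
    return len(password) >= 8 and any(ch.isdigit() for ch in password)


class PasswordRulesTest(unittest.TestCase):
    # Each test covers one narrow behavior, so a failure points at
    # a specific piece of logic rather than a whole workflow.
    def test_accepts_password_meeting_both_rules(self):
        self.assertTrue(is_valid_password("hunter2hunter2"))

    def test_rejects_short_password(self):
        self.assertFalse(is_valid_password("abc1"))

    def test_rejects_password_without_digits(self):
        self.assertFalse(is_valid_password("longbutnodigits"))


if __name__ == "__main__":
    unittest.main()
```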

In the early 2000s I was working on a product that performed analytics on index prices and the discounts given for commodity items (tires, for example) during a sale. The product was new and not on the market yet. The developers working on it were the craftspeople of the company. These weren't the type of developers to write code, check it in, get an alert that unit test coverage had dropped too low, and then add a bunch of assert.true()'s to a test file. They did TDD before it was cool. These two developers would write a test, and then write some code. The tests provided information about what their code was doing and about how a refactor was going. This is testing at a very granular level. The information that each test provides guides what the developer does next, and the faster they can get that feedback, the better.

In my experience, regardless of who writes these tests in the first place, the development group should probably own them. Let's say you have a test suite that runs against your product's REST API, and a test starts failing because a status flag is returning false when it should be true after an order is placed in your e-commerce product. If the API tests are owned by the test group, someone has to open the CI dashboard, see which test failed, run it locally to investigate the change, and then gather enough information about the failure to report the problem to a developer. The developer then has to schedule the work, investigate the problem on their own, and finally make a code fix. When developers own the tests, they have the agency to act on failures immediately: a developer can see the test failure in CI, find the person who broke the test, and that person can fix their code and run a new build without any work hand-offs.
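A sketch of what one of those API tests might look like, assuming a pytest-style runner and the Python requests library; the base URL, endpoint paths, payload, and the "placed" field are hypothetical stand-ins for whatever status flag your product actually exposes:

```python
# Service-level test for the order-status scenario described above.
# The endpoint, payload, and field names are hypothetical placeholders.
import requests

BASE_URL = "https://test-env.example.com/api"  # hypothetical test environment


def test_order_status_flag_is_true_after_placing_order():
    # Place an order through the public API.
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"sku": "TIRE-205-55R16", "quantity": 4},
        timeout=10,
    )
    assert response.status_code == 201
    order_id = response.json()["id"]

    # Read the order back and check the status flag the failing test caught.
    order = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10).json()
    assert order["placed"] is True, f"expected placed=True, got {order!r}"
```

A failure here names a specific endpoint and field, which is exactly the kind of information a developer can act on without a hand-off.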

The name of the game when you are testing close to the code is getting small bits of information about your product as quickly as possible. These tests point to specific problems in lines or small pieces of code that can be acted on without question.

In the User Interface

This is what many testers automatically think of when they hear the phrase 'test automation'. I see people performing impressive feats of technology with UI automation: running headless, parallelized, in the cloud, with complex architecture underneath to make it all work. You can get UI automation running this way, but I feel the value is diminished, and you give up important bug-finding abilities, each time you add another layer to the technology stack. Most of the high-performing UI automation projects I see run in headless browsers on virtual machines in some cloud. In my experience, there are only a handful of contexts where UI automation provides good information in a reasonable amount of time.

One client I work with has a product that would accurately be described as legacy. The back end is mostly SQL stored procedures. The front end is built on some older JavaScript libraries, CSS, and HTML. The tech stack makes the product an evil combination of difficult to unit test and occasionally fragile when changes are made. The user interface is fairly baked-in at this point; most of the changes are refactoring to improve performance, new feature additions, and custom configuration for new customers. I built and ran a test automation suite to get the testing and test coverage they were having a hard time achieving otherwise.

These tests run in the browser and take about 2.5 hours for a full run against one test environment. In one night, the tests will run against two to four different environments. The majority of the tests represent some sort of scenario a user might perform; some people might refer to that as an end-to-end test. These tests return more general information, in a slower feedback loop, than something like a unit or API test, and they provide that information in both obvious and hidden ways. The most obvious is a failing test.

Let's say my test is programmed to save a row of data and then check that a status field in that row changes from New to Saved. The test will fail when anything other than Saved is in that field after the save has occurred. In those cases, I notice the failure, write up a bug, and reference the test that failed. UI tests are limited in scope, though: things can go wrong and the test won't fail, because we only program it to look at very specific things. Part of my process for UI automation is to either review the application logs visually after a test run, or use a monitoring tool to send an alert when there is an error or exception. Those errors can be traced back to a particular test based on the time they occurred, and then you can isolate the bug by re-running and observing that test.
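A minimal Selenium WebDriver sketch of that save-and-check scenario might look like the following. The URL and element IDs are hypothetical and would need to match your application's actual markup; the point is that the test asserts only one specific thing.

```python
# Sketch of the save-a-row, verify-status scenario.
# The page URL and element IDs are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def test_saved_row_shows_saved_status():
    driver = webdriver.Chrome()
    try:
        driver.get("https://test-env.example.com/orders")  # hypothetical page

        # Fill in one row of data and save it.
        driver.find_element(By.ID, "row-1-description").send_keys("All-season tires")
        driver.find_element(By.ID, "row-1-save").click()

        # The test only checks this one thing: the status field should
        # move from "New" to "Saved". Anything else that goes wrong on
        # the page will not make this test fail, which is why reviewing
        # application logs after a run is still part of the process.
        WebDriverWait(driver, 10).until(
            EC.text_to_be_present_in_element((By.ID, "row-1-status"), "Saved")
        )
    finally:
        driver.quit()
```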

UI automation can be a sign that your product is hard to test in other ways, and that the test team is detached from the development process. These tests are usually written by a tester who is interested in writing code or a developer who is focused on testing; in my experience, the people writing production code are rarely involved. In the right context, though, this type of automation can be a crucial feedback loop.

Quality, Not ROI

I have heard managers ask about the return on investment of test automation projects. Granted, this usually comes six months into a UI automation project, when half of the tests come back failing because of bugs in the tests or the test framework. I prefer to talk about what automation adds to the development process. Each piece of test code, and each person who writes it and performs the testing, adds value to a project. Unit tests deliver information about lines of code and give developers near-instant feedback about design and the changes they are making. Tests against an API can expose information about the state of your product's workflows, how it handles data, and how fast or slow your product is performing. Tests at the UI level offer broad swaths of coverage.

Think about what automation will add to your development process before diving in. That will guide where to start, and who should be doing the work.

This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President, helping to facilitate and develop various projects.
