When I first started working in test automation, I asked for help. It was ten years ago; the developers I worked with sent me references and links to articles. The articles covered programming, how browsers work, details on JavaScript, an overwhelming list of libraries for programmatic testing, and the test automation pyramid.
That test automation pyramid makes everything look so … clean. There is a strong base of very small unit tests that run in fractions of a second, a layer of service tests on top of that, and at the very top a small slice of UI automation.
Our implementation never looked like that pyramid. But here are some automation strategies I learned that worked for us at that time.
Strategy With the Pyramid
Years ago I was working on a pricing science product. The backend took in a year or more of data and performed filtering and calculations in order to price commodity products like airplane seats or barrels of crude oil. Once the calculations were complete, the software displayed the results and helped sales people figure out the best price by considering taxes, extra charges like shipping, and different discounts. All this was taking place when Agile was just breaking through. We delivered software once a quarter on average, but had recently started doing Scrum, and were driving towards releasing software more often. Despite our old-school methods, there was a definite air of software craftsmanship.
Developers on my team had begun experimenting with Test Driven Development (TDD). Programmers would begin any new code change with a very small test. After that test was written, they would write product code to satisfy it and make it pass. TDD is an exploratory development process with a few benefits and side effects. The process of creating new tests acted as a guide for design and a way for the developers to know when they were done writing code. Over time, we had a large battery of tests that ran with new builds. The CI system would alert developers through email when they made a change that broke a test.
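The rhythm looked roughly like this sketch. The class, numbers, and test are hypothetical, not our actual product code; they only illustrate the test-first loop described above.

```ruby
require 'minitest/autorun'

# Hypothetical pricing class, written only after the test below existed.
class PriceCalculator
  def discounted_price(base_price, discount_rate)
    base_price * (1 - discount_rate)
  end
end

class PriceCalculatorTest < Minitest::Test
  # The failing version of this test came first; the method above was
  # written to make it pass, and then both were refactored.
  def test_applies_a_percentage_discount
    calc = PriceCalculator.new
    assert_in_delta 90.0, calc.discounted_price(100.0, 0.10), 0.001
  end
end
```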
Our unit tests weren’t optimal; they touched databases, required product services to be running, and took a long time to complete. A purist would say they “aren’t unit tests at all!” — but they did the job.
We did have an API, but this was well before the time of microservices. At that point, a developer and I had started to experiment with FitNesse. This required creating wrappers around each method we wanted to test with the tool, so that it would be accessible externally. Building service layer tests was very slow going and never really caught on because of the amount of time it took to expose and write code to test the services.
We bridged this gap by automating tests at the user interface layer.
Each development sprint was accompanied by a set of acceptance tests. Developers used these acceptance tests to help them know when they were done writing code. The test group used acceptance tests to discover ways our software might fall short. We also used them for UI automation. Using Ruby and a library called Watir, we built a few automated UI tests each release. These were designed to cover core workflows of each feature based on acceptance criteria. Sometimes we would modify an existing test to cover a feature change rather than developing a new one. The idea was to maximize coverage without introducing a new burden.
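A typical test looked something like the following sketch. The URL, element IDs, and assertion are hypothetical, and this uses the current Watir API rather than the version we ran at the time.

```ruby
require 'watir'

# Hypothetical smoke test for a core workflow: log in, build a quote,
# and confirm that a price appears. Locators and the URL are made up.
browser = Watir::Browser.new :chrome
begin
  browser.goto 'https://test-server.example.com/login'
  browser.text_field(id: 'username').set 'qa_user'
  browser.text_field(id: 'password').set 'secret'
  browser.button(id: 'log-in').click

  browser.link(text: 'New Quote').click
  browser.select_list(id: 'product').select 'Aisle Seat'
  browser.button(id: 'calculate').click

  # Wait for the asynchronous price calculation to finish before checking.
  price = browser.span(id: 'final-price')
  price.wait_until(&:present?)
  raise 'Expected a non-empty price' if price.text.empty?
ensure
  browser.close
end
```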
We had an official product build that ran overnight. If the unit tests were green, that new build was deployed to a test server, a new database export was loaded, and the automated UI test suite was kicked off. The test group got an email notification listing any failed tests about an hour later.
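The nightly sequence amounted to a few chained jobs. As a rough sketch only, the commands, hostnames, and scripts below are placeholders rather than our real pipeline, expressed here as a Rakefile.

```ruby
# Rakefile — simplified sketch of the nightly sequence described above.
task :deploy_build do
  sh 'scp build/product.war test-server:/opt/app/'    # hypothetical deploy step
end

task :load_database do
  sh 'ssh test-server ./load_export.sh nightly.dump'  # hypothetical data refresh
end

task :run_ui_suite do
  sh 'ruby run_all_ui_tests.rb'                       # hypothetical suite runner
end

# `rake nightly` runs the steps in order; a failed unit-test build
# would stop the chain before anything was deployed.
task nightly: [:deploy_build, :load_database, :run_ui_suite]
```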
The pyramid at this company was squished. We had a large base of unit tests, a very thin slice of service layer tests, and then another thick layer of UI automation. Each piece was there because it fit in with our skill sets, how we developed software, and how we worked together as a team.
The Pyramid, Flipped
Another company I worked with had a testing strategy built mostly on UI automation.
Their product was built on stored SQL procedures, JavaScript, and CSS. The company staff tended to work remotely. Business people worked in an office on the east coast, but the development group was located throughout the US, in South America, and in India. We all worked from home. The release cadence was unpredictable, but generally new software went to customers about twice a year. We worked from one main development branch, but there were also customer-specific branches for customers that needed customization.
The architecture and technical stack made unit testing a challenge. We had some tests built in SQL, but coverage was limited. Our lack of automation led to something called failure demand: work created because something didn’t go quite right the first time around. A performance upgrade might break references to data that were needed in parts of the product. A feature refactor might break a management workflow that lets customers know which person in their company needs to take action next. Big and important bugs appeared in this software in a variety of surprising ways.
I was in charge of the testing effort for this company, which included testing feature changes, bug fixes, and developing a suite of about 150 automated UI tests. The number of tests went up and down. Sometimes tests were no longer useful and I would cull them from the suite. Sometimes we would adapt a test to new product functionality rather than building a completely new one. And sometimes we just had to build new tests. Every few months, ideally, we refreshed our picture of UI automation coverage and set a new target going forward.
This set of automated UI tests ran against four different test environments each night. Each test run took just under 3 hours, so as one was finishing, the next one was getting started. In the morning, I had four different emails that alerted me to any failed tests.
Having a lot of UI tests introduces new problems along with the benefits. Timing problems are inevitable when you use a real browser (as opposed to a headless browser), so we would get the occasional failure from a script trying to click a button or type into a text field that didn’t exist yet. There is also the maintenance burden: any change to the product means refactoring some tests. Good abstraction can make this easier to deal with.
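One common way to keep that maintenance burden down is to hide locators and waits behind a page object, so a UI change means fixing one class instead of dozens of scripts. A rough sketch of the idea, again with hypothetical element IDs and URL:

```ruby
require 'watir'

# Hypothetical page object for a login screen. Tests talk to this class
# instead of to raw locators, so a changed ID is a one-line fix here.
class LoginPage
  URL = 'https://test-server.example.com/login'

  def initialize(browser)
    @browser = browser
  end

  def open
    @browser.goto URL
    self
  end

  def log_in_as(username, password)
    @browser.text_field(id: 'username').set username
    @browser.text_field(id: 'password').set password
    @browser.button(id: 'log-in').click
    # Built-in waiting guards against the timing failures mentioned above.
    @browser.div(id: 'dashboard').wait_until(&:present?)
  end
end

# Usage in a test script:
browser = Watir::Browser.new :chrome
LoginPage.new(browser).open.log_in_as('qa_user', 'secret')
browser.close
```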
The test automation pyramid at this company had a base of UI automation, and a layer of unit tests on top of it. Since unit testing was slow and difficult, we put a lot of effort into building a UI test suite and were careful about building tests that mattered, and that covered the right areas of the product.
Finding The Fit
The test automation pyramid is a heuristic: a rule of thumb. It is what test automation might look like if you were making software in a vacuum.
If developers write good unit tests that run in seconds, if your product is built on an API or microservices so that most data testing and some workflow testing can be done through code, and if you can keep business logic out of the user interface, then you won’t need much UI checking. But that is a lot of “if”s.
In my experience, that perfect pyramid is extremely rare. In the real world, we probably need to take a closer look at our development environment, asking: What testing is being done now? What is missing, and what goals do we have?
These questions should help you discover where UI automation fits in your strategy, and how much of it you really need.
This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.