When I was working on a project several years ago, the development team was making a case for addressing technical debt. There were some parts of the architecture they wanted to update because it would make future changes easier, some parts they wanted to address because changes in those areas usually resulted in surprising problems, and some third-party libraries that were going to be deprecated and needed to be replaced with suitable alternatives. The entire conversation revolved around how this “debt” affected development.
My experience is that technical debt usually affects testing efforts just as much as it affects development.
Build Time and Feedback Loops
One tenet of modern software development is the layering of automation. Developers will often write tests at the unit level. These start out running in milliseconds, until eventually the definition of a unit blurs and tests start reaching into the database or third-party systems. Developers and testers together might write tests that check aspects of the API such as response times, the structure of the JSON returned, and the values and data types it contains. On top of that sits automation against the user interface.
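As a hedged illustration of that middle layer, here is what an API-level check might look like in Python with pytest and requests. The endpoint, field names, and one-second threshold are placeholders, not details from any project mentioned here.

```python
import requests


def test_price_endpoint_contract():
    # Hypothetical endpoint; swap in a real URL for an actual suite.
    response = requests.get("https://example.test/api/prices", timeout=5)

    # Response time and status
    assert response.status_code == 200
    assert response.elapsed.total_seconds() < 1.0

    # Structure, values, and data types of the returned JSON
    body = response.json()
    assert isinstance(body, list) and body, "expected a non-empty list of prices"
    first = body[0]
    assert {"symbol", "price", "updated_at"} <= set(first)
    assert isinstance(first["symbol"], str)
    assert isinstance(first["price"], (int, float))
    assert first["price"] > 0
```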
I worked on a project in the early 2000s that built this layering of automation from the ground up. We started with a build that would return feedback in about 10 minutes. A year later, our build took three hours, and the UI automation had to be split out into a separate virtual machine that ran overnight. Eventually, the development team got tired of waiting on builds. Developers stopped writing new tests and stopped maintaining the old ones. Our UI automation took about eight hours to run and was left to rot on the vine.
Some people will shake their heads at how terrible this is, and others will shake their heads because they are currently experiencing this.
One purpose of automation is to provide a feedback loop. Ideally, a developer making a code change can write a test, then write code to satisfy that test. Once things are green, they can refactor the new code to make it more readable, more maintainable, or more performant. Making changes to software creates new risk. We don’t always know exactly how the code we write will behave, and we can never imagine all the ways a new line of code will interact with everything else that is already there.
Tests and builds are the feedback loop that helps us find problems earlier.
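A minimal sketch of that cycle in Python, with a made-up apply_discount function standing in for real product code:

```python
def test_discount_is_applied_to_list_price():
    # Step 1: write the test first; it fails until the code below exists.
    assert apply_discount(list_price=200.0, percent=25) == 150.0


def apply_discount(list_price: float, percent: float) -> float:
    # Step 2: write just enough code to make the test pass.
    # Step 3: once the build is green, refactor for readability or speed,
    # re-running the test after every change to catch surprises early.
    return list_price * (1 - percent / 100)
```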
When I start new automation projects today, I start with performance in mind. Unit tests are built as small units and don’t cross product boundaries, and user interface automation is run in parallel from the start. When I work on old projects with a slow build, that is the first piece of technical debt I lobby to get solved.
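Running UI automation in parallel from day one is mostly a matter of early tooling choices. A hedged sketch, assuming pytest, Selenium, and the pytest-xdist plugin (installed separately, then run with `pytest -n auto` to spread tests across workers); the URLs and page titles are invented:

```python
import pytest
from selenium import webdriver


@pytest.fixture
def browser():
    # Each worker gets its own browser instance, so tests stay isolated
    # when the suite is distributed across processes.
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_login_page_loads(browser):
    browser.get("https://example.test/login")
    assert "Log in" in browser.title


def test_pricing_page_loads(browser):
    browser.get("https://example.test/pricing")
    assert "Pricing" in browser.title
```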
Testability
About a decade ago, I was working on a pricing science product used by salespeople to help them find an ideal price for a product. A price is ideal when the profit margin is maximized and the customer is still happy (or at least willing) to pay. The front end of the product was bloated and complicated. The back end used Bayesian math to calculate prices and was even worse to test.
The first few sprints on this product were a nightmare. The development group was just learning about the product domain and was under extreme time pressure. One of the first features I worked on was taking a feed of commodity prices — from NYMEX, for example — and displaying them in our product as the feed updated. We were also supposed to display price trends. The feature was intended to take in large amounts of data every day and then to accumulate more and more over time. My first question was, “How do I test this?” The immediate answer was, “I don’t know.”
Our developers had taken a requirement and turned it into a feature with minimal testing. My job was multi-pronged: I wanted to make sure that our product could consume data from a feed, and I wanted that data to be representative of what might come from a price index. I wanted to see how we handled large quantities and somehow define what a large quantity of data was for our product. And I wanted to discover what happens when we get data we weren’t expecting. That was just for starters. But we didn’t have any test data, we didn’t know what to expect when we encountered bad data, and we didn’t have any tools or test harnesses to trigger data capture and processing.
When someone on the development team says they don’t know how to test something, that means they don’t understand the code they wrote.
We solved this problem over the course of a week or so by writing a command line tool that would call our API and trigger it to consume a file. We handled the data situation through a combination of taking actual index data and creating test files with a mixture of good and bad data. Over time we simulated a reasonable data volume, discovered how our product handled that data, and found some load and performance problems related to an ever-growing data set.
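In today's tooling, that harness might look something like the sketch below: a short Python script that writes a feed file mixing valid rows with deliberately malformed ones, then calls the API to consume it. The endpoint, field names, and volumes are illustrative assumptions, not details of the actual product.

```python
import argparse
import csv
import random

import requests


def write_test_feed(path, rows=10_000, bad_ratio=0.05):
    """Generate a feed file mixing realistic rows with deliberately bad ones."""
    with open(path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["symbol", "price", "timestamp"])
        for i in range(rows):
            if random.random() < bad_ratio:
                writer.writerow(["NG", "not-a-number", ""])  # malformed row
            else:
                writer.writerow(["CL", round(60 + random.random() * 20, 2),
                                 f"2010-01-01T00:{i % 60:02d}:00Z"])


def trigger_ingest(base_url, path):
    """Call the (hypothetical) feed endpoint and tell it to consume the file."""
    with open(path, "rb") as handle:
        return requests.post(f"{base_url}/feeds/commodity-prices",
                             files={"feed": handle}, timeout=60)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Feed ingestion test harness")
    parser.add_argument("--base-url", required=True)
    parser.add_argument("--rows", type=int, default=10_000)
    parser.add_argument("--out", default="test_feed.csv")
    args = parser.parse_args()

    write_test_feed(args.out, rows=args.rows)
    response = trigger_ingest(args.base_url, args.out)
    print(response.status_code, response.text[:200])
```

A harness like this earns its keep quickly: once the trigger is scriptable, growing the data volume or skewing the ratio of bad rows is just a command-line flag.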
Fixing testability problems means your development team can find important problems faster and more efficiently.
Fragility
I was working for a health care company in 2010. We had a product in the early stages of moving to software as a service. The software was built on a proof of concept using PL/SQL and some impolite usage of CSS and HTML. Over time, different JavaScript libraries were added to make the monstrosity less monstrous. The technology stack was difficult, and our development team used this as an excuse to avoid most code quality practices, such as test automation. We would occasionally do code reviews, but you can only catch so much through visual code inspection.
Our development flow usually looked something like this: A developer would pick up a specification that might be six or more months old. The specification might be wrong, out of date or out of sync with the product, but that’s what they had to go on. That developer would go off on their own and make the feature change to the best of their ability. Some time later, I would see the new code in a queue of things that could be built into our test environment.
Once I built the change, I might find a few things — the build might be dead on arrival, I might log in and navigate to the new change and find bugs all over the place, or I might log in and wonder if the person who wrote the spec had ever looked at our software before.
Changes to our product were dangerous. There was a good chance a new code change would break the test environment, stopping testing of not only that new feature, but also anything else that was in play. There was also the much harder-to-detect risk that something surprising or obscure had broken as a result of the change, and that we might not find out until an angry customer called.
Fragility translates into a near-impossible testing problem. One small change could mean 10 minutes of testing, or it could mean hours of waiting and false starts, followed by testing and then discovering problems introduced in strange places. No testing strategy works when the product is this fragile.
Some companies try to take this problem on through massive rewrites. My experience is that this usually results in a new product with different problems.
My preferred way to approach this is what I like to call “Boy Scouting”: Each time I walk into a new area of product code, I like to leave it slightly cleaner. That might mean better test coverage, a refactor for readability and maintainability, or just fixing something that is broken. Little by little, we can improve a product with this ethic.
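"Better test coverage" can be as small as pinning existing behavior before a cleanup. A minimal sketch, with invented names: write a characterization test that records what the legacy code does today, then refactor with that safety net in place.

```python
def legacy_price_label(price):
    # Existing code, left functionally unchanged for now; the test below
    # records its current behavior so a later refactor can't silently change it.
    if price is None:
        return "N/A"
    return "$" + str(round(price, 2))


def test_legacy_price_label_current_behavior():
    assert legacy_price_label(None) == "N/A"
    assert legacy_price_label(19.999) == "$20.0"
```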
Debt Reduction
Technical debt is usually framed around the negative effects it has on writing new code. Each new change accumulates a little more debt and slows down the development process.
I see debt as something that prevents us from learning important things about our product. The next time you see technical debt — fragility, slow builds, poor testability or something else — take that first step to talk about the problem. Getting technical debt fixed translates to better software down the road.
This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving as President on the Association for Software Testing Board of Directors, helping to facilitate and develop various projects.