Shifting Left: Performance Testing at the Unit Testing Level

Unit testing is important. Performance testing is important. You’ll get little argument from the people with boots on the ground, actually doing software development, that these tests matter a lot.

Unit testing is widely practiced in development circles, particularly by programmers who have embraced test-driven development (TDD). Unit testing, by definition, is conducted as the code is being written. Now a similar trend is emerging with regard to performance testing—to move it as close as possible to when the code is being written, into the hands of the developer. This is known as the shift-left movement. However, there’s a problem.

As appealing as it is to have developers run performance tests on their code as readily as they run unit tests, real-world performance testing is not a simple matter of “write it and run it.” Much more is involved, particularly when unit and performance tests are part of a continuous integration and delivery process. Unless proper test planning is in place, shifting performance testing left will not only hinder the work of the developer, but also produce test results of questionable reliability. The devil really is in the details.

The first step is to understand the difference between unit testing and performance testing.

Unit Testing vs. Performance Testing


The purpose of a unit test is to ensure that a unit of code works as expected. The typical unit of code is a function. A unit test submits data to a function and verifies the accuracy of the result of that function.

Unit testing is conducted using a tool such as JUnit (Java), unittest (Python), Mocha (Node.js), PHPUnit (PHP) or GoogleTest (C++), among many others. Using a unit testing tool allows the tests to run automatically within a CI/CD process, under a test management system such as TestRail.
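For illustration, below is a minimal sketch of a unit test written with Python's unittest module. The calculate_discount() function is a hypothetical unit of code; the test simply submits known data to it and verifies the accuracy of the result.

import unittest

def calculate_discount(price, percent):
    """Hypothetical unit of code: return the price after a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestCalculateDiscount(unittest.TestCase):
    def test_applies_discount(self):
        # Submit data to the function and verify the accuracy of the result.
        self.assertEqual(calculate_discount(100.00, 20), 80.00)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()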

Performance testing, on the other hand, is the process of determining how fast a piece of software executes. Some performance tests are system-wide. Some exercise a part of the system. Some performance tests can be quite granular, to the component or even the function level, as in the case of a microservice.

The types of performance tests vary. Some performance tests focus on database efficiency. They’ll execute a set of predefined SQL queries against a database of interest, using a tool such as JMeter. JMeter runs the queries and measures the time it takes each query to run.
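To make the idea concrete outside of JMeter itself, here is a rough Python sketch of what such a database performance test does. The in-memory SQLite table and the queries are placeholders standing in for a real database server.

import sqlite3
import time

# In-memory stand-in for a real database; JMeter's JDBC sampler plays
# this role against an actual server in practice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

QUERIES = [
    "SELECT COUNT(*) FROM orders",
    "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id",
]

for sql in QUERIES:
    start = time.perf_counter()
    conn.execute(sql).fetchall()            # run the query and pull all rows
    elapsed = time.perf_counter() - start
    print(f"{sql[:50]:50} {elapsed * 1000:8.2f} ms")

conn.close()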

JMeter also can be used to measure the performance of the endpoints in an API. A test engineer configures JMeter to make HTTP calls against the URLs of interest. JMeter then executes an HTTP request against each endpoint and measures the response time. This process is very similar to performance testing a database.
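The same idea can be sketched in a few lines of Python using the requests library instead of JMeter; the endpoint URLs below are placeholders.

import requests

ENDPOINTS = [
    "https://api.example.com/v1/users",    # placeholder URLs
    "https://api.example.com/v1/orders",
]

for url in ENDPOINTS:
    response = requests.get(url, timeout=10)
    # response.elapsed covers the time from sending the request to
    # receiving the response headers.
    print(f"{url}: {response.status_code} in "
          f"{response.elapsed.total_seconds() * 1000:.1f} ms")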

Web pages can also be subject to performance testing. GUI tests can be implemented using a tool such as JMeter or Selenium to exercise various web pages according to a pre-recorded execution script. Timestamps are registered as the script runs, and these timestamps are then used to measure how long each step takes to execute.
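As a simplified sketch of that approach, the following Python snippet uses Selenium to time page loads. A recorded script would normally drive many more steps, and the page URLs here are placeholders.

import time
from selenium import webdriver

PAGES = [
    "https://www.example.com/",           # placeholder pages
    "https://www.example.com/pricing",
]

driver = webdriver.Chrome()  # assumes a Chrome driver is available
try:
    for url in PAGES:
        start = time.perf_counter()   # timestamp before navigation
        driver.get(url)               # blocks until the page load event fires
        elapsed = time.perf_counter() - start
        print(f"{url}: {elapsed:.2f} s")
finally:
    driver.quit()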

Networks are subject to performance tests, too. Running jitter tests that measure the variation in latency over a network is becoming routine as companies such as Facebook, Slack and Atlassian build more video conferencing capabilities into their products.
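A production-grade jitter test measures packet-level timing over UDP streams, but the rough Python sketch below conveys the idea: time repeated connections to a placeholder host and report how much the latency varies.

import socket
import statistics
import time

HOST, PORT, SAMPLES = "example.com", 443, 20   # placeholder target

latencies = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass                                   # measure connection setup time only
    latencies.append((time.perf_counter() - start) * 1000)
    time.sleep(0.1)

print(f"mean latency: {statistics.mean(latencies):.1f} ms")
print(f"jitter (std dev): {statistics.stdev(latencies):.1f} ms")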

The important thing to understand is that unit testing is about ensuring a unit of code produces the expected logical results. Performance testing is about ensuring the code executes in the time expected. The difference may seem obvious, but there are definite implications when performance testing shifts left, moving it closer to the beginning of the software development lifecycle and near, if not into, the hands of the developer.

There is a fundamental dichotomy in play. Unit tests are intended to run fast so as not to slow down the work of the developer. A developer might write and run dozens of unit tests a day. One slow unit test is a productivity nightmare. Performance tests can require a good amount of time to set up, run and measure accurately. If initializing large datasets or provisioning multiple virtual machine instances is required, a performance test can take minutes or even hours to set up.

The rule of thumb is that unit tests need speed and performance tests need time. Therefore, unless they’re planned properly, one can get in the way of the other and impede the effectiveness and efficiency of the entire testing process in the CI/CD pipeline.

The Need for Speed vs. the Need for Time


As mentioned above, unit tests are intended to run fast. A unit test that takes more than a few seconds to run is a rarity. Performance tests can take time to set up and run, particularly when the test executes a number of tasks during a testing session—for example, performance testing a large number of pages on a website.

The conflict between a need for speed and the need for time is a definite challenge as testers try to move performance testing closer to the unit test paradigm. The further testing shifts left toward the beginning of the test cycle, the faster the testing needs to execute. A long-running performance test will actually create a performance bottleneck in the test process.

Thus, as you consider shifting performance testing left, discretion is the better part of valor. Not every performance test is well suited for execution early in the testing process. The need for speed trumps the need for time.

Short-running performance tests, such as those measuring the time it takes for a single query to run or a series of nested forEach loops to execute, are appropriate for shifting left, provided they can run in under a second. Performance tests that take longer need to come later in the test plan.
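As a sketch of what a shift-left-friendly performance test might look like, the following Python test times a small, hypothetical batch-processing function and fails if it exceeds its budget. The process_batch() function and its 50 ms budget are illustrative assumptions, not a prescription.

import time
import unittest

def process_batch(records):
    """Hypothetical unit of work: a nested loop over record fields."""
    return [field.strip().lower() for record in records for field in record]

class TestProcessBatchPerformance(unittest.TestCase):
    def test_runs_within_budget(self):
        records = [("  Alpha ", " Beta ", " Gamma ")] * 1000
        start = time.perf_counter()
        process_batch(records)
        elapsed = time.perf_counter() - start
        # Keep the budget well under a second so the test stays shift-left friendly.
        self.assertLess(elapsed, 0.05, f"took {elapsed * 1000:.1f} ms")

if __name__ == "__main__":
    unittest.main()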


Performance Testing Is All About Apples to Apples


Another thing to take into account when making some aspects of performance testing the responsibility of the developer is ensuring that the physical test environments in which the performance tests run are consistent. It’s an apples-to-apples thing. A performance test running on a developer laptop with a 4-core CPU is going to behave much differently than the same test running on a VM equipped with 32 CPUs. The physical environment matters. Thus, in order for performance testing to be accurate and reliable, the physical test environment must reflect real-world conditions, beyond the typical configuration of a developer’s workstation.

Making part of performance testing the responsibility of developers requires them to change the way they work. The CI/CD process also needs to be altered somewhat. Again, the test environment needs to be consistent and reliable.

One way to ensure consistency is to have developers execute performance tests by moving code from their local machines over to a standardized performance testing environment before the code is committed to a repository. Modern development shops configure their CI/CD pipeline so that code enters an automated test and escalation process once it’s committed. It’s up to the developer to decide when the code is performant enough to commit to the repo. If the code performs to expectation, the developer makes the commit.

Testing code in a dedicated test environment before committing it is a different way of doing business for many developers, but it’s necessary when shifting part of performance testing left.

Putting It All Together


There is a lot of value in implementing performance testing as close as possible to the beginning of the software development lifecycle. Testing early and often is the most cost-effective way to ensure well-performing, quality software. However, not all performance testing can be put in the hands of the programmer.

Performance tests that measure system-wide behavior such as network latency, database throughput, and integration efficiency are best done in dedicated environments later in the testing process. In fact, a realistic test plan will reflect the understanding that the more complex a performance test is, the farther toward the end of the CI/CD process it needs to be. In other words, simple, short-running performance tests are best run toward the beginning of the CI/CD process; long-running, complex tests are best run toward the end.

Comprehensive unit testing at the developer level is essential for creating software that is as bug-free as possible. Performance testing software appropriately throughout the development lifecycle, even at the developer level, increases the overall efficiency and reliability of the testing process.

Making software requires a considerable investment of time, money, and expertise. Companies that implement thorough testing throughout the software development lifecycle will realize a better return on that considerable investment. Testing early, testing often and testing appropriately are hallmarks of companies with the dedication necessary to make software that matters.

Article by Bob Reselman; nationally known software developer, system architect, industry analyst and technical writer/journalist. Bob has written many books on computer programming and dozens of articles about topics related to software development technologies and techniques, as well as the culture of software development. Bob is a former Principal Consultant for Cap Gemini and Platform Architect for the computer manufacturer Gateway. In addition to his software development and testing activities, Bob is in the process of writing a book about the impact of automation on human employment. He lives in Los Angeles and can be reached on LinkedIn at www.linkedin.com/in/bobreselman.
