This is a guest post by Cameron Laird
In today’s rapidly evolving tech world, quality is the key differentiator between products on the market. If testing is seen as an afterthought in your team’s software development lifecycle (SDLC), or even viewed as a single step in your SDLC, then most likely your product will not be well-tested, and QA will be seen as a barrier to releases. As a leader on your QA team, you can drive cultural change as your team migrates to more modern practices, including DevOps, Continuous Testing, CI/CD, distributed collaboration, and analytics-savvy functionality. Organizations need conceptual change even more than technological boosts, and QA has crucial strengths to effect that change.
Here are three winning action plans to change your QA culture and integrate it more closely with the rest of your SDLC:
- Review & update your team’s KPIs
- Analyze your department’s existing culture
- Eliminate obstacles to “green”
No one else in your organization has the knowledge you do about QA’s power to improve the SDLC. Concepts that are routine to you are unknown or surprising to others. Think through these three strategies for success, and others will come to understand that there’s more to QA than holding up releases.
Do these three things to build QA into your SDLC
While your team can adjust these three points to specific situations, you’ll make plenty of progress even if you adopt them as written here.
Each of the paragraphs below can become a SMART goal: specific, measurable, achievable, realistic, and timely. The KPIs below aren’t meant as textbook concepts that someone could measure in principle; the idea is that you pick a few, put them into practice, post daily updates, talk them over with others, and give the organization a chance to experience what they teach on a daily basis.
1. Review & update your team’s KPIs
Bug counts are essential. Continue to report them. That said, whenever possible, focus on key performance indicators (KPIs) expressed as costs, values, or risks. “Our testers identified 53 defects in the last week” reinforces that QA is a cost center. When you reframe the same reality as “we identified and were able to resolve 3 critical defects with a high risk for customers last week,” “we have lowered the average time-to-detect from 87 to 11 days,” or, even better, “the bugs we found in the last week represent $33,420 in customer-support savings,” you help the organization understand that QA is an investment, not simply a cost center.
A huge range of KPIs is in use in organizations worldwide. As an introduction to the possibilities, consider which of these your organization might be able to calculate and appreciate (the formula-based entries are worked through in a short code sketch after the list):
- Requirements coverage
- This KPI measures the percentage of requirements that are targeted by at least one test case. A test manager verifies that every feature requirement links to its respective test cases and vice versa. This practice ensures that all requirements have a sufficient number of tests. The goal is to keep the mapping of requirements to test cases at 100%.
- Authored tests
- This KPI measures the number of test cases designed during a defined interval of time. The test manager or a peer can review authored tests to analyze test-design activity, measure the test cases against requirements, and evaluate the designed test cases for inclusion in the regression test suite or an ad hoc test suite.
- Defect leakage
- This KPI lets software testers analyze the efficiency of the testing process before the user acceptance testing (UAT) phase of a product. Defects that the team leaves undetected and that users then find are known as defect leakage or bug leakage.
- Defect Leakage = (Total Defects Seen in UAT / Total Defects Found Before UAT) x 100
- Module-specific defect density
- Defect density helps the team determine the total number of defects found in the software during a specific period of time, whether in operation or in development. That count is then divided by the size of the particular module, which lets the team decide whether the software is ready for release or requires more testing. Defect density is typically counted per thousand lines of code (KLOC).
- Defect Density = Defect Count/Size of the Release or Module
- Change risk analysis
- This KPI helps you achieve greater productivity by combining qualitative and quantitative criteria to assess the risk level associated with a change. Assessing risks with a consistent, standard process makes change decisions more accurate.
- Unresolved defects
- This KPI measures the number of defects that have been registered in the defect-tracking system but not yet resolved by the team as of a given date.
- Unresolved severe defects
- This KPI aims to keep the number of severe defects open in an application at any given time under a set limit; if there are more severe defects than that, immediate action is needed. Before using this KPI, though, the testing team needs to be properly trained to classify severe defects correctly.
- Rejected defects
- This KPI measures the percentage of defects that are rejected as compared to the total defects reported. If the percentage is higher than a set threshold, then the underlying issue, whether in the documentation of intended product behavior or in the definition of a defect, needs to be identified and acted upon. This could mean more training for software testers or better requirements documentation.
- Rejected Defects = (Number of Defects Rejected by the Development Team / Total Defects Reported) x 100
- Test case quality
- Written test cases are evaluated and scored against defined criteria. The objective of this metric is to gauge the effectiveness of the test cases the testing team executes during every testing phase and to produce quality test scenarios by applying those criteria to every test you write.
- Percentage of automated tests
- This KPI measures the percentage of automated test cases relative to the total number of test cases in your test suite. Usually, a higher percentage means a better chance of catching critical and high-priority defects introduced into the software delivery stream.
- Defect Detection Efficiency (DDE)
- Defect Detection Efficiency, also known as Defect Detection Percentage, is the percentage of the total defects detected during the testing phase of the software. The formula to calculate DDE is:
- DDE = (Number of Defects Detected in a Phase / Total Number of Defects) x 100
- Defect resolution time
- This KPI measures the time it takes the team to find bugs in the software and to verify and validate fixes. It also tracks resolution time while gauging testers’ responsibility for, and ownership of, their bugs.
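Several of the KPIs above reduce to simple arithmetic. Here is a minimal sketch of the formula-based entries in Python; the function names are mine, and every count is made up purely for illustration:

```python
# Minimal sketch of the formula-based KPIs above.
# All inputs are hypothetical counts for illustration only.

def defect_leakage(defects_in_uat: int, defects_before_uat: int) -> float:
    """Defect Leakage = (Total Defects Seen in UAT / Total Defects Found Before UAT) x 100."""
    return defects_in_uat / defects_before_uat * 100

def defect_density(defect_count: int, kloc: float) -> float:
    """Defect Density = Defect Count / Size of the Release or Module (in KLOC)."""
    return defect_count / kloc

def rejected_defect_ratio(rejected: int, total_reported: int) -> float:
    """(Defects Rejected by the Development Team / Total Defects Reported) x 100."""
    return rejected / total_reported * 100

def defect_detection_efficiency(found_in_phase: int, total_defects: int) -> float:
    """DDE = (Defects Detected in a Phase / Total Number of Defects) x 100."""
    return found_in_phase / total_defects * 100

if __name__ == "__main__":
    print(f"Leakage:  {defect_leakage(4, 50):.1f}%")            # 8.0%
    print(f"Density:  {defect_density(12, 8.5):.2f} per KLOC")  # 1.41
    print(f"Rejected: {rejected_defect_ratio(6, 60):.1f}%")     # 10.0%
    print(f"DDE:      {defect_detection_efficiency(48, 50):.1f}%")  # 96.0%
```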
Why these twelve? They’re a mix of KPIs that have proven their worth in QA: measurable, historically effective, and diverse enough to illuminate your department’s biggest challenges.
Are you just starting your KPI program? Start small and learn what KPIs work for you. These three KPIs, in particular, are crucial to start tracking early:
- Requirements coverage: In general, it’s realistic to aim for every product requirement appearing in at least one documented test, even if some of the tests are manual; target 100% for your requirements coverage.
- Defect Detection Efficiency (DDE): DDE compares the number of defects found before and after release. If QA finds 48 defects before release and customers report two new ones, the DDE is 96%, or the ratio of 48 to 48 + 2. DDE is beneficial for planning both QA efforts and product releases.
- Percentage of automated tests: While all KPIs need to be tracked through time, the percentage of automated tests is almost entirely about comparison rather than an absolute level. If it regresses, that is, if the percentage of automated tests persistently declines during a product’s lifetime, you’ve learned that the product needs immediate structural attention to rework it into a more sustainably testable form (see the sketch after this list).
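To make that trend-watching concrete, here is a minimal sketch; the release versions and test counts are invented for illustration:

```python
# Hypothetical per-release counts: (automated test cases, total test cases).
releases = {
    "1.0": (120, 300),
    "1.1": (150, 340),
    "1.2": (155, 410),  # suite grew faster than automation did
}

previous = None
for version, (automated, total) in releases.items():
    pct = automated / total * 100
    trend = "" if previous is None else (" (up)" if pct > previous else " (down; investigate)")
    print(f"{version}: {pct:.1f}% automated{trend}")
    previous = pct
```

Run against these made-up numbers, release 1.2 shows the percentage declining even though the absolute count of automated tests grew, which is exactly the regression the KPI is designed to surface.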
You’ll review your KPI portfolio repeatedly; there are always new insights to learn and adjustments to make. Your judgment needs to be at its best when assessing KPIs. Rely entirely on legacy KPIs familiar to the organization, for example, and you limit discoveries; introduce too many new KPIs, and you risk overwhelming your organization and letting confusion or even rejection wipe out potential gains. At all times, look for KPIs that reinforce the point that properly executed QA helps create more valuable products faster.
KPIs themselves are business investments. Practice with a dozen or so, like the ones above, and learn which ones are meaningful to other departments, good guides to action, and motivational (while inexpensive to measure). If your KPIs are private affairs, known only to you and your supervisor, their impact is limited. When your team or morning scrum starts to have conversations about the specific measured KPIs you’ve chosen, though, you’ll know you’re making a difference.
2. Analyze your QA department’s existing culture
The most challenging and most consequential work a QA Director does is to promote a winning QA culture. If you want to raise the standard of your quality culture, or establish a new one, the first thing you need to do is make sure that your company values are clearly defined. Think about the values your company already has in place – do these speak to quality? Are these values ones that you want? Is there room for improvement?
Are your QA employees eager to automate tests that can be automated and proud of their skill in reliably executing tests that are not yet automated? Do they understand that the point of meetings is to decide on actions that are actually undertaken and completed? Are QA workers able to communicate effectively with peers and with team members outside QA? Does your QA staff have the attitude that it makes progress every workday, wherever in the product development cycle it happens to fall? If your employees don’t have these instincts now, what are you doing to cultivate them? Do you communicate your vision for QA so that it’s easy for them to understand what they should do? How do you remove the barriers to doing the right thing and encourage the expectation that the whole team will execute effectively every single workday?
With the right culture in place, the QA team takes the initiative and looks for ways to involve QA throughout the SDLC. Analyze your department’s existing culture, and think about what decisions you’ll make that can bring that culture closer to the one you want.
3. Eliminate obstacles to “green”
Too many organizations regard QA as a barrier to common goals: QA is a hurdle between a product and its customers, QA lengthens time-to-market, and so on. Departments focused on delivering value to buyers should be skeptical of anything that stands in their way. The great opportunity, then, is to demonstrate that QA is part of the solution.
QA can be a barrier when it’s managed as something that happens only at the end of product development. When an SDLC confines QA to what happens after a product is otherwise finished, not only is QA a bottleneck to product release, but QA is so distant from the product that scheduling or estimating QA becomes an exercise in speculation.
With QA fully integrated into the SDLC, you should begin to contribute from the earliest moments of product development. Participation in the first stages provides perspective on testability and schedule impact when decisions are the cheapest to make and pay off best. “Continuous QA” applied from the first days of development or enhancement lowers the risk that large, unpleasant surprises turn up only as a product nears release.
“Continuous QA” goes beyond implementing Continuous Testing; you can integrate QA more fully across the SDLC in a variety of ways. Set tests for various steps in your product release process itself to help integrate QA earlier in the SDLC. If you can, take part in requirements development and pair with developers on merge request review so that you can give QA input even before you can get your hands on actual testable code.
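As one concrete possibility, here is a minimal sketch of a KPI “gate” a release pipeline could run at a given step. The metric names, thresholds, and snapshot values are all illustrative assumptions to adapt, not a prescribed standard:

```python
# Hypothetical quality gate for a release-process step.
# Thresholds are assumptions; tune them to your own KPI program.
GATES = {
    "requirements_coverage_pct": 100.0,  # every requirement mapped to a test
    "unresolved_severe_defects": 0,      # no open severe defects at release
    "defect_leakage_pct": 5.0,           # keep leakage into UAT low
}

def check_gates(metrics: dict) -> bool:
    """Return True only if every KPI is within its threshold."""
    ok = True
    if metrics["requirements_coverage_pct"] < GATES["requirements_coverage_pct"]:
        print("FAIL: requirements coverage below 100%")
        ok = False
    if metrics["unresolved_severe_defects"] > GATES["unresolved_severe_defects"]:
        print("FAIL: unresolved severe defects remain")
        ok = False
    if metrics["defect_leakage_pct"] > GATES["defect_leakage_pct"]:
        print("FAIL: defect leakage above threshold")
        ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical snapshot, as if pulled from your test-management tooling.
    snapshot = {
        "requirements_coverage_pct": 100.0,
        "unresolved_severe_defects": 0,
        "defect_leakage_pct": 3.2,
    }
    raise SystemExit(0 if check_gates(snapshot) else 1)
```

A gate like this turns KPIs from a report into an enforced step: the pipeline exits non-zero when a threshold is breached, so the build goes “red” immediately instead of at release time.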
Doing this keeps the partial product “green” (it passes pertinent tests), or nearly so, from the start. Product development is much easier for the development staff when the product always stays within a handful of corrections of the green state. It’s easier to remedy ten minor discrepancies at a time than to wait until the end of product development and have to solve hundreds of interacting defects.
Everyone involved in product development has a more straightforward job when a product starts on a “green” path and returns to it swiftly after every misstep. We don’t insist that QA integrate fully into the SDLC to be nice to QA or somehow equitable. Instead, early integration makes the most of QA’s efforts, benefits the product, and reduces the load on everyone else involved.
If QA is siloed in your organization, it’s time to make a change. You can do so by looking for KPIs that reinforce the point that properly executed QA helps create more valuable products faster. Analyze your department’s existing culture, think about what decisions you’ll make to bring that culture closer to the one you want, and eliminate obstacles to “green.” Do these three things to build QA into your SDLC and see your product quality improve!