Every bug tells a story, and it usually goes deeper than the problem itself and its surface-level cause or fix. There’s almost always a root reason the defect was introduced in the first place, one that sometimes traces back to before the functionality was even implemented. As a result, when the bugs that emerge over the course of a project are looked at in aggregate, they begin to paint a picture of overall project health and can provide clues about how to improve it going forward.

QA Traceability: What, Why, and How?

As part of our efforts to continuously improve the effectiveness of our QA practices at Velir, we’ve developed a process for categorizing bugs that we refer to as QA Traceability. Under this process, every bug must have a QA Traceability field completed when it is logged, identifying the likely root cause of the issue. For example, if a requirement is missed in implementation, we would select the “Not Delivered to Specification” option in the QA Traceability field (we’ll dive into the various options we’ve identified, and what they mean, shortly).

When the bug is evaluated by the developer, they can change this value based on the troubleshooting they do to resolve the ticket. Maybe in our example the requirement wasn’t actually missed but was purposely removed, and the spec ticket was simply never updated. The developer can then resolve the bug and change the traceability field value to “Specification Not Updated”. Following this, QA has one more opportunity to review the traceability field before closing the bug.
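To make the hand-offs concrete, this lifecycle can be sketched in code. This is a hypothetical model for illustration only, not our actual JIRA configuration; the stage names and category strings are assumptions:

```python
class Bug:
    """Hypothetical bug ticket whose traceability value can be revised
    at each hand-off in the workflow described above."""

    def __init__(self, traceability):
        self.stage = "Reported"          # QA sets the initial value when logging
        self.traceability = traceability

    def resolve(self, traceability=None):
        # The developer may revise the value based on their troubleshooting.
        if traceability:
            self.traceability = traceability
        self.stage = "Resolved"

    def close(self, traceability=None):
        # QA gets one last chance to review the value before closing.
        if traceability:
            self.traceability = traceability
        self.stage = "Closed"


# The example from the text: reported as a missed requirement, then
# reclassified by the developer when the real cause turns out to be a spec gap.
bug = Bug("Not Delivered to Specification")
bug.resolve("Specification Not Updated")
bug.close()
```

The point of the model is simply that the field is mutable at every stage, so the final value reflects the best understanding of the root cause, not the first guess.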

Using JIRA, we track this field across the entirety of the project to make it easy to identify trends in the root causes of bugs. We do this by setting up each project with a project health dashboard that is maintained and monitored by the project manager. This dashboard includes a pie chart of all bugs logged for the project, broken down by their traceability values. This allows us to quickly spot trends, such as one root-cause category being larger than expected or disproportionately large compared to the others. We can then click into that category on the chart to pull up the corresponding list of bugs to review. The QA team can also bring this chart to the team’s attention in status meetings to discuss its findings.

Keeping it Valuable

There are a few things to keep in mind to make sure this type of classification stays useful:

  • First, you want your data to be comprehensive. Complete this field on every bug, every time, to maintain accuracy. If your bug tracking system allows you to specify required fields, making this field required is an easy way to enforce that.
  • Next, you need buy-in from everyone on the team. If your bug reporters aren’t putting thought into the option they select, or the bug resolvers aren’t confirming that the value is correct, you won’t have accurate data.
  • Finally, you’ll want to make sure this information is highly visible so that it can be easily monitored and reported on. I have described how we leverage JIRA functionality to accomplish this, but it could be as simple as having your QA engineer put together a report that is shared regularly.

Velir’s QA Traceability Categories

Now that we’ve established how this data is used, let’s dig into the specific categorizations we have identified and look at potential solutions. Keep in mind that the traceability value can be changed at later points in the bug’s life cycle, so there are categories you may not use when initially reporting a bug but might find appropriate when resolving it. You’ll also want to use your judgment about when and how to correct any identified trends. No project will be bug-free, and you’ll likely have at least a couple of bugs in most categories. It’s up to you to decide what a reasonable threshold for each group is.

Not Delivered to Specification

What it means – The deliverables don’t match the specification.  It could be something small like a label not exactly matching what’s documented, or a missed requirement out of a long list.  It could also be a larger issue where a component isn’t functioning properly or has incorrect logic.  We usually find that most bugs fit into this category.

Possible causes and solutions – Since there could be a number of plausible options here, you’ll want to review the bugs in this category and see if there are any trends. Are they mostly minor issues? Maybe the team isn’t re-reading the spec before flipping the tickets to QA, and you’ll need to find a way to encourage increased attention to detail. Or are they larger misses? This could be because the specs aren’t clear or the concepts are difficult to understand. In this case, you could try having more in-depth ticket handoff meetings or see if there are ways to add additional detail to the specs.

Functionality Not Documented

What it means – An important requirement or piece of information is not captured in the ticket.  Perhaps something is taken for granted as common knowledge and missed in development, or the bug reporter finds an issue with a scenario that was unaccounted for.

Possible causes and solutions – Are you seeing a lot of issues with functionality that is typically standard across projects? It could be that your developers are unaware of these standards, in which case you could consider writing these details down somewhere instead of taking them for granted as common knowledge. You could even consider having centralized documentation for things that are commonly seen across multiple projects. Are you finding new scenarios and use cases too late in the game? You might want to consider having more depth to your discovery phase discussions, or think about using mind maps soon after tickets are completed to catch any gaps. If the issue is a discussed requirement that never made it into the ticket, it may be more appropriate for the next category.

Specification Not Updated

What it means – A change was made to the requirements that was not updated in the ticket and/or communicated to the team.

Possible causes and solutions – Do you have a lot of offline conversations where decisions are made?  Make it a point at the end of these discussions to set an action item for someone to update the documentation.  Or you can look into ways of automatically documenting them such as chat room integration with your specs.  Know what all of your sources of information are, and be sure there’s consistency in keeping them up to date.

Front End/Style Issues

What it means – The issue is purely cosmetic and not based in functionality.  Examples include issues around font (size, color, etc.), images not appearing correctly, spacing around components, or problems with responsive behavior.

Possible causes and solutions – Consider having QA begin testing the front-end assets as soon as they are complete, before they’re integrated with the back-end, with the aim of getting a jump on bug fixes.  You can also have a designer do a quick review to note any changes that need to be made.  If you’re seeing a lot of display issues immediately after the back-end code is integrated, make sure that your back-end developers have access to the front-end assets and are able to look at the completed product before sending the entirety to QA.

Browser/Device Inconsistency

What it means – A piece of the solution looks or functions properly in some browsers, devices, or environments but behaves differently in others.

Possible causes and solutions – Is there a specific browser that’s a repeat offender?  Make sure your front-end team smoke tests that one before passing along their deliverables for testing.  If the browser is really problematic, you can have your project manager follow up with the client to see if it is worth supporting or if the effort saved in dropping it would be more valuable spent elsewhere.  You’ll also want to verify that any browser emulators the team may be using are producing accurate results. 

Testing Blocked

What it means – There’s an overarching issue that’s preventing testing.  For example, it could be that QA is unable to add a component to a page or that a page is throwing an error that’s preventing them from proceeding.

Possible causes and solutions – There could be an issue with the build or merging between environments.  An effective way to prevent this is to make sure that everything is sanity checked on the environment where testing will happen before tickets are handed off.  A blocking issue should be obvious enough to be caught quickly.

Cannot Be Reproduced

What it means – The person reviewing the bug isn’t able to recreate the problem.

Possible causes and solutions – There could have been an update that unexpectedly corrected the issue between when the bug was logged and when it was reviewed.  If you’re running into scenarios where things seem to be magically fixed soon after being reported, it could be worth taking note of the bugs but waiting to officially report them until they can be retested a short time later.  Another cause could be that the steps to reproduce aren’t being followed exactly, in which case you might want your reporter and reviewer to review the issues together and make sure everyone is on the same page.

Regression

What it means – A piece of the solution that used to function properly is no longer working correctly.

Possible causes and solutions – One piece of code can have unforeseen effects on other pieces of code. You’ll want to schedule regular regression tests to catch these issues. Areas that are particularly problematic should be checked more frequently, possibly by developers as soon as they check in code that affects those parts of the solution.

Will Not Fix

What it means – The bug is acknowledged as an issue, but will not be corrected at the current time.

Possible causes and solutions – Some bugs may be considered too low priority to be addressed.  Too many of these can slow down the pace of the project.  If this is happening, you’ll want to set some guidelines for your testers about the level of granularity they should use in their testing.  You may also want to review test cases and get rid of any that will only result in the discovery of trivial issues.

By Design

What it means – This is not an issue, and it’s likely the bug was opened in error.

Possible causes and solutions – Besides bugs that are accidentally opened and quickly closed by the reporter, a common root cause here is a misunderstanding of the tickets or the functionality they represent.  If this happens a lot, review your processes.  Are the specs clear?  If not, do they need more detail or could a handoff meeting help clear things up?  You’ll also want to be sure that QA’s test cases are being reviewed, so that any incorrect tests can be corrected.

You’ll notice that there is no “Other” or “General” category.  This was a purposeful choice on our part as we’ve found that if these options are available, they get overused, diluting our data and making the process less useful.  Omitting these choices forces the team to select the other, more specific options that can then be reviewed and acted upon as necessary.
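One way to keep the list closed is to validate the field against a fixed set of values, so a catch-all can’t creep back in through tooling or manual entry. A sketch using the categories above (the exact strings are assumptions about how they’d be stored in your tracker):

```python
from enum import Enum


class QATraceability(Enum):
    """Closed set of root-cause categories; deliberately no 'Other' catch-all."""
    NOT_DELIVERED_TO_SPEC = "Not Delivered to Specification"
    FUNCTIONALITY_NOT_DOCUMENTED = "Functionality Not Documented"
    SPEC_NOT_UPDATED = "Specification Not Updated"
    FRONT_END_STYLE = "Front End/Style Issues"
    BROWSER_DEVICE = "Browser/Device Inconsistency"
    TESTING_BLOCKED = "Testing Blocked"
    CANNOT_REPRODUCE = "Cannot Be Reproduced"
    REGRESSION = "Regression"
    WILL_NOT_FIX = "Will Not Fix"
    BY_DESIGN = "By Design"


def validate_traceability(value):
    # Raises ValueError for anything outside the closed set (e.g. "Other"),
    # forcing the reporter to pick a specific, actionable category.
    return QATraceability(value)
```

Because the enum rejects unknown values outright, any attempt to log a vague “Other” fails immediately rather than silently diluting the data.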

These categories are ones that we’ve identified over time and with experience testing hundreds of projects and solutions. We’re constantly thinking about ways to update them to collect more accurate data and improve the process.  You may find a different set that will work better for your organization.  Again, the idea is to have a way to easily identify trends in the root causes of issues, so that you’re able to improve your project health and deliver a better, more robust product.

Join the conversation by commenting below to let us know what your experience has been with introducing traceability to your QA processes and if you’ve identified additional categories to add to the list.

