How to Measure Product Quality


Introduction

Every Engineering organization faces the same question at some point in its existence – “What is the best way to measure the quality of our software product?” I believe that most organizations arrive at the wrong answer. In this blog I’ll lay out what I believe is the right answer, and how to generate the associated metric. Note that throughout this blog I’m equating product quality with product robustness (i.e., freedom from defects in product functionality).

Defect Removal Model

Defect removal consists of three levels:

  • Fix: This is the lowest level of sophistication and largely involves fixing incoming defects as quickly as possible in order to provide good customer support and service.
  • Detection: This is the next level up, where the investment is mainly in catching all “reasonable” software defects in the code before the customer finds them. Defects that do make their way out to the customer are fixed quickly.
  • Prevention: This is the highest level, where the investment is in removing defect generators at the root with tools and techniques such as code review, pre-flight testing, and refactoring of buggy methods. You still invest in defect detection and fixing, but the rate at which defects occur should be significantly reduced.

Most Common Measure of Product Quality

In my experience, most Engineering and Quality Assurance organizations use “open defects” as their measure of quality. The rationale is that these are known defects in the product, and thus defects customers could run into. The lower this count, the less likely customers are to run into issues, and hence the better the quality of the product.

I think that using the open defect count as your measure of product quality is flawed for several reasons:

  • It is an internal metric that can be manipulated by Engineering without affecting the customer experience. For example, consider two Engineering teams, both receiving 10 new defects each day. Team 1 fixes some percentage of the incoming defects; Team 2 fixes every incoming defect. Is the quality of product 2 better than product 1? I’d say “no,” because the customers of both teams are still finding and reporting defects at the same rate when using the product.
  • In most software companies, defects come in faster than they can be fixed, so there is always a tradeoff between bug fixing and new feature development. Your product quality may be improving, yet due to priorities set by Product Management, the open defect count could still be rising.
  • Executives generally want to improve product quality for two reasons: one is to provide a better customer experience; the second is to improve internal productivity – the more time your Engineering team spends fixing defects, the less time it has for developing new functionality. Measuring the open defect count doesn’t indicate any improvement in support costs. In the example above, Team 1 is actually more productive at producing new functionality than Team 2, yet its open defect count is likely to be rising.

A Better Measure of Product Quality

I believe strongly that the correct way to measure product quality is to measure the rate at which customers are filing defects. This metric correctly represents how often your customers are experiencing defects in your product, and the measure is independent of internal influences such as bug fix rates etc.

Metric Calculation

Simply measuring raw incoming defect counts is not accurate enough, since we all know that defects have different severities, and a defect found by only one customer is less important than a defect that many customers are tripping over. So it is important to adjust the incoming defects by:

  • The severity of the defect. For example, assign a weight of 20 to sev1, 10 to sev2, 5 to sev3, 2 to sev4, and 1 to sev5.
  • Whether the defect was reported by a customer. The metric should count only customer-reported defects, not internally-found ones.
  • The number of customers that report the defect. Multiply the severity weight by the number of customers reporting the defect.

This gives a weighted incoming defect rate that can be measured as:

Quality(period) = Σ over all defects reported during the period of ( SeverityWeight(defect) × NumCustomersReporting(defect) )
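To make the calculation concrete, here is a minimal Python sketch. The Defect fields, the function name, and the sample data are illustrative assumptions on my part, not a prescribed schema – adapt them to however your bug tracker exposes this data:

```python
from dataclasses import dataclass

# Severity weights from the bullet list above: sev1 = 20 ... sev5 = 1.
SEVERITY_WEIGHTS = {1: 20, 2: 10, 3: 5, 4: 2, 5: 1}

@dataclass
class Defect:
    severity: int            # 1 (most severe) through 5
    num_customers: int       # number of customers who reported this defect
    customer_reported: bool  # internally-found defects are excluded

def quality_for_period(defects):
    """Weighted incoming defect rate for one period (e.g., one week)."""
    return sum(
        SEVERITY_WEIGHTS[d.severity] * d.num_customers
        for d in defects
        if d.customer_reported
    )

# Example: defects filed during one week (hypothetical data).
week = [
    Defect(severity=2, num_customers=3, customer_reported=True),
    Defect(severity=4, num_customers=1, customer_reported=True),
    Defect(severity=1, num_customers=1, customer_reported=False),  # internal find, excluded
]
print(quality_for_period(week))  # (10 * 3) + (2 * 1) = 32
```

Note that the internally-found sev1 contributes nothing – the metric deliberately tracks only what customers experience.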

Typically, I like to track this metric weekly, but you need to find the cadence that works best for your business. I also calculate a 6-week rolling average to help flatten out the peaks and troughs in the weekly rate, and to provide a better indication of longer-term trends.
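The rolling average is straightforward to compute. Below is one way to sketch it in Python; treating the first few weeks as a shorter trailing window (rather than leaving them blank) is an assumption of mine, and the sample numbers are invented:

```python
def rolling_average(weekly_scores, window=6):
    """Trailing rolling average; early weeks average whatever data exists so far."""
    averages = []
    for i in range(len(weekly_scores)):
        window_slice = weekly_scores[max(0, i - window + 1): i + 1]
        averages.append(sum(window_slice) / len(window_slice))
    return averages

# Hypothetical weekly weighted defect rates.
weekly = [120, 95, 140, 110, 80, 60, 75, 50]
print(rolling_average(weekly))
```

Plotting the raw weekly values alongside this smoothed series gives you both the noise and the trend at a glance.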

Below is an example of what the metric might look like in practice. Blue represents the weekly rate and red indicates the rolling average. By viewing the red line, you can see how quality initiatives put in place during the last year of the program significantly reduced the incoming defect rate.

[Figure: Quality Metric – weekly weighted defect rate (blue) with 6-week rolling average (red)]

Notes:

  • In fast-growing companies, I have often heard the argument that the quality metric should also be normalized by the number of customers, since incoming defect rates will likely rise with the volume of new customers. I don’t believe this is a good practice – with the right quality initiatives in place, it is reasonable to expect that you can reduce incoming defects *while* the number of customers rises. At the very least, the incoming rate should rise much more slowly than the new customer rate. In the above example, the business was growing 30% to 50% year-over-year, yet we still brought the quality metric down. You can too!

Conclusion

To correctly measure product quality, you should use a metric that measures the weighted incoming rate of customer-found defects. This provides the best indication of how your customers perceive the quality of your product, and it’s the best indication of the potential support workload placed on your Engineering team as a result of poor product quality.
