Version 1.1. May 5, 2009
Abstract
The oldest metric used in the economic study of software quality is that of “cost per defect.” While there may be earlier uses, the metric was certainly used within IBM by the late 1960’s for software, and probably as early as the 1950’s for hardware.
As commonly calculated, the cost-per-defect metric takes the hours associated with defect repairs, multiplies those hours by burdened costs per hour, and divides the result by the number of defects repaired.
The cost-per-defect metric has developed into an urban legend, with hundreds of assertions in the literature that early defect detection and removal is cheaper than late defect detection and removal by more than 10 to 1. This is true mathematically, but there is a problem with the cost-per-defect calculation that is discussed in this article. As will be shown, cost per defect is always cheapest where the greatest numbers of defects are found. As quality improves, cost per defect rises, until at zero defects the cost-per-defect metric goes to infinity.
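The arithmetic behind this pattern can be illustrated with a small sketch (not taken from the article itself). The Python fragment below assumes, purely for illustration, a hypothetical fixed amount of effort for preparing and running tests at each stage plus a variable repair effort per defect found; dividing the resulting total cost by the number of defects makes the metric look cheapest when defects are plentiful and drives it toward infinity as the defect count approaches zero.

# Illustrative sketch only; all figures are hypothetical, not measured data.
BURDENED_RATE = 75.0           # assumed burdened cost per staff hour ($)
FIXED_TEST_HOURS = 200.0       # assumed fixed hours to write and run tests
REPAIR_HOURS_PER_DEFECT = 4.0  # assumed variable repair hours per defect

def cost_per_defect(defects_found: int) -> float:
    # Total stage cost divided by defects repaired, as the metric is
    # conventionally computed; the ratio diverges as defects_found -> 0.
    total_hours = FIXED_TEST_HOURS + REPAIR_HOURS_PER_DEFECT * defects_found
    total_cost = total_hours * BURDENED_RATE
    return total_cost / defects_found

for defects in (500, 100, 20, 5, 1):
    print(f"{defects:>4} defects -> ${cost_per_defect(defects):,.2f} per defect")

With these illustrative figures, 500 defects work out to roughly $330 per defect while a single defect works out to more than $15,000 per defect, even though total repair effort has fallen sharply as quality improved.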
More importantly, the cost-per-defect metric tends to ignore the major economic value of improved quality: shorter development schedules and reduced development costs outside of explicit defect repairs.
Capers Jones, President, Capers Jones & Associates LLC
Email: CJonesiii@cs.com.
Copyright © 2009 by Capers Jones & Associates LLC. All rights reserved.
INTRODUCTION
The cost-per-defect metric has been in continuous use since the 1970’s for examining the economic value of software quality. Hundreds of journal articles and scores of books include stock phrases such as “it costs 100 times as much to fix a defect after release as during early development.”
Typical data for cost per defect varies from study to study but resembles the following pattern circa 2009:
Defects found during requirements = $250
Defects found