
3. Software product quality metrics

The quality of a product: the "totality of characteristics that bear on its ability to satisfy stated or implied needs".

Metrics of the external quality attributes:
• producer's perspective: "conformance to requirements"
• customer's perspective: "fitness for use" (the customer's expectations)


Some of the external attributes cannot be measured directly: usability, maintainability, installability.

Two levels of software product quality metrics:
• Intrinsic product quality
• Customer-oriented metrics: overall satisfaction, and satisfaction with specific attributes: capability (functionality), usability, performance, reliability, maintainability, and others.


Intrinsic product quality metrics:
• Reliability: the number of hours the software can run before a failure
• Defect density (rate): the number of defects contained in the software, relative to its size

Customer-oriented metrics:
• Customer problems
• Customer satisfaction


3.1. Intrinsic product quality metrics

Reliability vs. defect density:
• Correlated but different!
• Both are predicted values.
• Estimated using static and dynamic models.

Defect: an anomaly in the product (a "bug").
Software failure: an execution whose effect does not conform to the software specification.


3.1.1. Reliability

Software reliability: the probability that a program will perform its specified function, for a stated time period, under specified conditions.

Usually estimated during system tests, using statistical tests based on the software usage profile.

Reliability metrics:
• MTBF (Mean Time Between Failures)
• MTTF (Mean Time To Failure)


MTBF (Mean Time Between Failures):
• the expected time between two successive failures of a system
• expressed in hours
• a key reliability metric for systems that can be repaired or restored (repairable systems)
• applicable when several system failures are expected.

For a hardware product, MTBF decreases with its age.


For a software product, MTBF is not a function of time! It depends on the quality of the debugging.


MTTF (Mean Time To Failure):
• the expected time to failure of a system
• a reliability-engineering metric for non-repairable systems
• non-repairable systems can fail only once; for example, a satellite is not repairable.

MTTR (Mean Time To Repair): the average time to restore a system after a failure.

When there are no delays in repair: MTBF = MTTF + MTTR

Software products are repairable systems! Reliability models neglect the time needed to restore the system after a failure; with MTTR = 0, MTBF = MTTF.

Availability = MTTF / MTBF = MTTF / (MTTF + MTTR)
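The relations above can be sketched in a few lines of Python; the hour figures are invented for illustration:

```python
# Availability from MTTF and MTTR (hypothetical figures, in hours).
# For software, reliability models often take MTTR = 0, so MTBF = MTTF.
def availability(mttf, mttr):
    """Fraction of time the system is operational."""
    mtbf = mttf + mttr  # expected time between two successive failures
    return mttf / mtbf

print(availability(950.0, 50.0))  # hardware-style case with repair time: 0.95
print(availability(950.0, 0.0))   # software model with MTTR = 0: 1.0
```

Note how setting MTTR = 0 makes availability equal to 1 and collapses MTBF into MTTF, as the slide states.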


Is software reliability important?
• company's reputation
• warranty (maintenance) costs
• future business
• contract requirements
• customer satisfaction


3.1.2. Defect rate (density)

Number of defects per KLOC or per number of Function Points, in a given time unit.

Example: "The latent defect rate for this product, during the next four years, is 2.0 defects per KLOC."

A crude metric: a defect may involve one or more lines of code.

Lines Of Code (LOC):
• different counting tools exist
• the defect rate metric has to be accompanied by the counting method used for LOC!
• it is not recommended to compare defect rates of two products written in different languages.
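A minimal sketch of the computation behind the example above, with invented counts:

```python
# Defect density: defects per thousand lines of code (KLOC).
# Illustrative numbers only; remember that the LOC counting method
# must always be stated alongside the metric.
def defects_per_kloc(defects, loc):
    return defects / (loc / 1000.0)

# 40 latent defects in a 20 000-LOC product:
print(defects_per_kloc(40, 20_000))  # 2.0 defects per KLOC
```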


Defect rate for subsequent versions (releases) of a software product.

Example: LOC counts instruction statements (executable statements + data definitions, not comments).

Size metrics:
• Shipped Source Instructions (SSI)
• New and Changed Source Instructions (CSI)

SSI (current release) = SSI (previous release) + CSI (for the current release) - deleted code - changed code (to avoid double counting in both SSI and CSI)


Defects after the release of a product:
• field defects: found by the customer (reported problems that required bug fixing)
• internal defects: found internally

Post-release defect rate metrics:
• Total defects per KSSI
• Field defects per KSSI
• Release-origin defects (field + internal) per KCSI
• Release-origin field defects per KCSI

Defect rate using the number of Function Points: if the number of defects per unit of function is low, the software should have better quality, even though the defects per KLOC value could be higher when the functions were implemented in fewer lines of code.
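The SSI bookkeeping and the per-KSSI/per-KCSI rates can be sketched as follows; every figure is invented for illustration:

```python
# Size metrics for successive releases (SSI / CSI), as defined above.
def ssi_current(ssi_prev, csi, deleted, changed):
    # changed code is subtracted once so it is not counted
    # in both SSI and CSI
    return ssi_prev + csi - deleted - changed

def per_thousand(defects, instructions):
    # defect rate per KSSI or per KCSI, depending on the denominator
    return defects / (instructions / 1000.0)

ssi = ssi_current(ssi_prev=100_000, csi=20_000, deleted=5_000, changed=3_000)
print(ssi)                      # 112000 shipped source instructions
print(per_thousand(224, ssi))   # total defects per KSSI: 2.0
print(per_thousand(50, 20_000)) # release-origin defects per KCSI: 2.5
```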


Reliability or defect rate?

Reliability:
• often used with safety-critical systems such as airline traffic control systems and avionics (the usage profile and scenarios are better defined)

Defect density:
• used in many commercial systems (systems for commercial use):
  - there is no typical user profile
  - development organizations use the defect rate for maintenance cost and resource estimates
  - MTTF is more difficult to implement and may not be representative of all customers.


3.2. Customer-oriented metrics

3.2.1. Customer problems metric

Customer problems when using the product: valid defects, usability problems, unclear documentation, user errors.

Problems per user month (PUM) metric:
PUM = TNP / TNM
TNP: total number of problems reported by customers for a time period
TNM: total number of license-months of the software during the period = number of installed licenses of the software x number of months in the period
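The PUM formula above is straightforward to compute; the counts below are invented for illustration:

```python
# PUM = TNP / TNM, where TNM = installed licenses x months in the period.
def pum(total_problems, licenses, months):
    tnm = licenses * months  # total license-months
    return total_problems / tnm

# 120 reported problems, 1000 installed licenses, over a 3-month period:
print(pum(total_problems=120, licenses=1000, months=3))  # 0.04 problems per user-month
```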


3.2.2. Customer satisfaction metrics

Often measured on the five-point scale:
1. Very satisfied
2. Satisfied
3. Neutral
4. Dissatisfied
5. Very dissatisfied

IBM: CUPRIMDSO (capability/functionality, usability, performance, reliability, installability, maintainability, documentation/information, service, and overall)

Hewlett-Packard: FURPS (functionality, usability, reliability, performance, and supportability)


Metrics:
• percent of completely satisfied customers
• percent of dissatisfied customers
• percent of non-satisfied customers

Net Satisfaction Index (NSI), with the scale:
• Completely satisfied = 100% (all customers are completely satisfied)
• Satisfied = 75% (e.g. all customers are satisfied, or 50% are completely satisfied and 50% are neutral)
• Neutral = 50%
• Dissatisfied = 25%
• Completely dissatisfied = 0% (all customers are completely dissatisfied)
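The NSI is a weighted average over the five-point scale; a minimal sketch, with the weights taken from the scale above and invented customer counts:

```python
# Net Satisfaction Index: weighted average of the five-point scale,
# from completely satisfied = 100% down to completely dissatisfied = 0%.
WEIGHTS = {
    "completely_satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely_dissatisfied": 0.00,
}

def nsi(counts):
    """counts: mapping from satisfaction level to number of customers."""
    total = sum(counts.values())
    return sum(WEIGHTS[level] * n for level, n in counts.items()) / total

# 50% completely satisfied and 50% neutral averages out to 75%,
# the same index as all customers being merely satisfied:
print(nsi({"completely_satisfied": 50, "neutral": 50}))  # 0.75
```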