As the ISTQB defines it, Quality is “the degree to which a component or system satisfies the stated and implied needs of its various stakeholders”. Seen through both Cunningham’s and the ISTQB’s approaches, any Technical Debt is in fact a lack of Software Quality.
There are many ways in which Software Quality is (in)advertently compromised
It is common knowledge that the wide adoption and pervasive presence of software in our everyday lives (frequently in extremely critical contexts), together with the fast pace of software releases (driven mainly by time-to-market decisions for new features, but also by security issues and defect fixing), has established “Quality” as a critical process within the Software Development Life Cycle (SDLC). Nonetheless, this critical “Quality” is often compromised in the rush to push software “out into the wild” by:
- Not giving enough time to define the product’s quality criteria, i.e., to formalize its compliance specification,
- Not testing the product enough, i.e., not allowing enough time for tests, not achieving enough testing coverage (more on this later), or relying on a team that is too small, inexperienced, or lacking the proper skill set to carry out the job properly,
- Neglecting general (or industry-standardized) quality processes, best practices, and quality-related deliverables.
These deliberate decisions are erroneous in at least 4 ways:
- They prevent teams from being aware of non-compliance issues in the features being released, which over time turns into new Technical Debt (see an example in the Rutherford County Judicial System migration),
- They assume that the current functionality of a product will still work under new releases, potentially overlooking defects, which accumulates Technical Debt from both new and previous requirements (see the Ariane 5 rocket launch failure analysis),
- Depending on the situation, they under- or over-utilize the capacity of QA Engineers, which wastes resources or increases costs, respectively, and overall stresses the team (see “Failing at Requirements-Based Software Testing?”),
- They reduce the certainty of a deployment by de facto dismissing risks, thus lowering confidence in the release to the detriment of product stability and the company’s reputation (see “What’s really changed three years after Equifax breach”).
It’s a numbers game
One way to know how well a team is doing at assuring the Quality of a Software product (aside from actual customer reviews) is to introduce Quality Metrics. Both industry and academia are continuously improving the definition of Software Quality Metrics. These metrics account for the testing being done (or the lack of it) as a snapshot of the current software state, but, when analyzed over the evolution of the project, they can also give insights into quality trends and help identify and partly explain pain points. This is true for any type of SDLC.
However, the Software industry has not really agreed on a standardized and comprehensive set of Quality Metrics, and some projects have even developed custom values derived from intricate formulas, embedded in easy-to-read dashboards with composite indicators for near real-time reporting of such measurements.
Aside from the role that these continuous measurements are playing in new approaches like Site Reliability Engineering (especially regarding Automated Test Execution), it is sometimes really hard to explain quality metrics themselves to non-technical people, let alone the aforementioned custom and overcomplicated numbers (even with pretty, animated graphs).
To simplify things, every project should at least introduce the following 3 complementary metrics:
- A Testing Coverage metric that reflects the percentage of requirements being covered by at least one test (one would think this metric should be close to 100% in software projects, but you would be surprised)
- An Executed Tests measurement (the percentage of planned tests that were actually executed),
- A Tests Pass Ratio (the percentage of executed tests that met the expected success criteria).
These 3 metrics combined should give a team a broad picture of the Quality work. There are many more quality-related metrics, but for the sake of brevity and simplicity we are purposely leaving out test automation and defect-related metrics.
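To make the three metrics concrete, here is a minimal sketch of how they could be computed from raw counts; the function name and the example numbers are purely illustrative, not part of any standard tooling:

```python
# Minimal, illustrative sketch (hypothetical names and numbers).
def quality_metrics(total_requirements, covered_requirements,
                    planned_tests, executed_tests, passed_tests):
    """Return the three complementary quality metrics as percentages."""
    return {
        "Testing Coverage": 100.0 * covered_requirements / total_requirements,
        "Executed Tests": 100.0 * executed_tests / planned_tests,
        "Tests Pass Ratio": 100.0 * passed_tests / executed_tests,
    }

# Example: 120 requirements (96 covered), 300 planned tests, 270 executed, 243 passed.
print(quality_metrics(120, 96, 300, 270, 243))
# -> {'Testing Coverage': 80.0, 'Executed Tests': 90.0, 'Tests Pass Ratio': 90.0}
```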
Introducing the concept of Quality Debt
For the purpose of this series of posts, any deviation from 100% in any of the above 3 metrics will be labeled as “Quality Debt”. While there are other definitions of Quality Debt (e.g., see Hammerslag, D., 2013), they mostly focus on defects instead of the quality of software as a whole. It must be noted that Quality Debt should be formally stated at the end of each Quality Cycle.
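Under this definition, the Quality Debt of a cycle can be expressed simply as each metric’s gap to 100%. The sketch below assumes the hypothetical metric values from the previous example:

```python
# Illustrative sketch: Quality Debt as each metric's gap to 100% (0 means no debt).
def quality_debt(metrics):
    return {name: round(100.0 - value, 2) for name, value in metrics.items()}

metrics = {"Testing Coverage": 80.0, "Executed Tests": 90.0, "Tests Pass Ratio": 90.0}
print(quality_debt(metrics))
# -> {'Testing Coverage': 20.0, 'Executed Tests': 10.0, 'Tests Pass Ratio': 10.0}
```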
By this simple yet grounded definition, Quality Debt may be realized as 4 actionable items or artifacts:
- A defect log: as one might expect, a failed test can be interpreted as a “disagreement between the requirements and the deployed functionality” (or as noncompliance with pre-defined criteria), and it can be directly translated into a defect that is recorded in a defect log.
- New requirements or backlog items: even when a test passes, it might reveal previously unnoticed aspects of the functionality, which can lead to new requirements or acceptance criteria. Common backlog items are those related to security, privacy, or user experience (especially when they are not considered from the design phases or are afterthoughts).
- Augmented testware: it is often the case that experienced Quality Engineers identify whole new sets of test scenarios, perhaps related to a new feature or to new data inputs tied to new states, conditions, or expected results. Another example of augmented testware is the evolution of current test cases into their automated versions, or the incorporation of these automated tests into Continuous Integration or Continuous Delivery (CI/CD) pipelines, not just for nightly runs (i.e., pursuing continuous testing) but up to achieving “live” application monitoring.
- Quality Effort Gap Report: this document must contain the results of (at least) the 3 Quality metrics mentioned earlier; it helps development managers and stakeholders make informed decisions and better assess the risk of a given deployment based on the differences between what the Quality Engineers envisioned and what actually happened during the quality cycle (a minimal sketch of such a report follows this list).
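As an illustration only (not a prescribed format), a Quality Effort Gap Report could start as a small record that keeps the measured metrics, the resulting debt, and notes for stakeholders; the class and field names below are hypothetical:

```python
# Hypothetical, minimal structure for a Quality Effort Gap Report.
from dataclasses import dataclass, field

@dataclass
class QualityEffortGapReport:
    quality_cycle: str                        # e.g. a sprint or release identifier
    metrics: dict                             # measured values, as percentages
    debt: dict = field(default_factory=dict)  # gap to 100% per metric
    notes: str = ""                           # risks, context, recommendations

    def __post_init__(self):
        if not self.debt:
            self.debt = {name: round(100.0 - value, 2)
                         for name, value in self.metrics.items()}

report = QualityEffortGapReport(
    quality_cycle="Release 2.3",
    metrics={"Testing Coverage": 80.0, "Executed Tests": 90.0, "Tests Pass Ratio": 90.0},
    notes="Coverage gap concentrated in one newly added module.",
)
print(report.debt)
# -> {'Testing Coverage': 20.0, 'Executed Tests': 10.0, 'Tests Pass Ratio': 10.0}
```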
Thinking about Software Technical Debt from the perspective of software quality reveals a clearer path for understanding and measuring it, and makes it easier to grasp its implications for various aspects of the project. Quality Debt aims to be a more tangible and easier-to-estimate concept than somewhat “abstract” or incomplete requirements or feature definitions (see, for example, the feature value hypothesis).
In the next post we will address how to tackle these four actionable items to help reduce your Quality Debt.