Clarification of the definitions of SQA and SQC

Further clarification, with examples, of the terms SQA and SQC.

Article Purpose

The purpose of this article is to define the term Software Quality Control, SQC, in relation to Software Quality Assurance, SQA.
This article also positions the SQA role within the continuous process improvement framework, in order to expand on the SQA definition.
Although these software quality terms are defined and used in their correct context in other parts of SQA.net, they are so similar that a separate, example-based definition is warranted.

The formal definitions that are used within this article (and website) are:-

Software Quality Assurance
The function of software quality that assures that the standards, processes and procedures are appropriate for the project and are correctly implemented.

Software Quality Control
The function of software quality that checks that the project follows its standards, processes and procedures, and that the project produces the required internal and external (deliverable) products.

These terms appear similar, but a simple example highlights the fundamental difference.
Consider a software project that includes requirements, user interface design and an SQL database implementation.
The SQA team would produce a quality plan that specifies any standards, processes and procedures that apply to the example project. These might include, by way of example, the IEEE xyz specification layout (for the requirements), the Motif style guide abc (for the user interface design) and the Open SQL standards (for the SQL implementation). All of the standards, processes and procedures that should be followed are identified and documented in the quality plan; this is done by SQA.
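As an illustration only (a minimal sketch in Python; the mapping format and the sqc_check helper are assumptions, while the standard names are the hypothetical ones from the example above), the quality plan can be pictured as a simple mapping from each work product to the standard SQA has identified, which SQC then checks against:

quality_plan = {
    "requirements": "IEEE xyz specification layout",
    "user interface design": "Motif style guide abc",
    "SQL implementation": "Open SQL standards",
}

def sqc_check(work_product, standard_used):
    # SQC: check that the project followed the standard SQA identified.
    return standard_used == quality_plan.get(work_product)

# e.g. the requirements were written against IEEE abc rather than IEEE xyz,
# so the check fails and SQC would raise a non-conformance.
print(sqc_check("requirements", "IEEE abc specification layout"))  # False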

When the requirements are produced (in this example), the Software Quality Control team would check that the requirements did in fact follow the documented standard (in this case IEEE xyz). SQC would undertake the same task for the user interface design and the SQL implementation, that is, confirm that both followed the standards identified by SQA. Later, the SQA team could carry out audits to verify that IEEE xyz, and not IEEE abc, was indeed used as the requirements standard.

In this way a clear distinction can be drawn between "correctly implemented" (the concern of SQA) and "followed" (the concern of SQC).



In addition, the SQC definition implies software testing, since checking that "the project produces the required internal and external (deliverable) products" is part of the SQC definition. The term "required" refers not only to the functional requirements but also to non-functional aspects such as supportability, performance and usability. All of the requirements are verified or validated by SQC. For the most part, however, it is the distinction between "correctly implemented" and "followed" for standards, processes and procedures that causes the most confusion between the SQA and SQC definitions. Testing is normally clearly identified with SQC, although it is usually associated only with functional requirement testing.

SQA also plays a different role with regard to continuous process improvement, and this is discussed in the next section.

The SQA role in continuous process improvement

SQA is also responsible for gathering and presenting software metrics.

For example, the Mean Time Between Failures (MTBF) is a common software metric (or measure) that tracks how often the system fails. This metric is relevant to the reliability software characteristic and, by extension, to the availability software characteristic.
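As a minimal sketch (the figures below are invented purely for illustration), MTBF is simply the total operating time divided by the number of failures observed during that time:

operating_hours = 720       # e.g. one month of continuous operation (assumed)
failures_observed = 3       # failures logged in that period (assumed)

mtbf_hours = operating_hours / failures_observed
print("MTBF =", mtbf_hours, "hours")    # MTBF = 240.0 hours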

SQA may gather these metrics from various sources, but note the important pragmatic point of associating an outcome (or effect) with a cause. In this way SQA can measure the value or consequence of having a given standard, process or procedure. Then, as part of continuous process improvement, feedback can be given to the various process teams (analysis, design, coding etc.) and a process improvement can be initiated.
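One way to picture this cause-and-effect bookkeeping (a sketch only; the record format, metric names and values are assumptions, not a prescribed SQA format) is to tag each measurement with the process team and standard it is intended to evaluate, so that SQA can report it back to the owning team:

measurements = [
    {"metric": "MTBF (hours)", "value": 240,
     "process_team": "Coding", "standard": "Open SQL standards"},
    {"metric": "task completion time (s)", "value": 95,
     "process_team": "Design", "standard": "Motif style guide abc"},
]

def feedback_for(process_team):
    # Collect the measures SQA would report back to one process team.
    return [m for m in measurements if m["process_team"] == process_team]

print(feedback_for("Design"))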

Consider a user interface standard that made no distinction between "open new window" and "replace current window", in that the "Go To.." button did both. If, during usability testing, it was discovered that this lack of distinction left users searching for the correct window (or caused some other issue), the SQA team could feed this information (a usability metric) back to the user interface designers, and an amendment to the user interface standard could be proposed.

An important aspect of SQA's involvement in process improvement is that SQA only takes the measures back to the process owners; it is the process owners (e.g. the design team) that suggest a new standard or procedure. This new standard or procedure is then used in future projects (i.e. the SQA team does not come up with standards and procedures in isolation). The use of the appropriate standard or procedure is, however, verified by SQA.
SQA also takes the relevant measurements in order to give feedback on the value of using the standards, processes and procedures.



In summary, SQA not only ensures the use of the identified standards, processes and procedures but also collects software measures relevant to evaluating their use. In this way SQA assures that the agreed standards, processes and procedures are used, and measures the consequences of their use. The value of the standards, processes and procedures is measured only by their consequences, thereby making SQA a pragmatic methodology rather than one driven by first principles or unchallenged theories or practices.

Evaluating software, via software measures or metrics, is in itself both an art and a science in which SQA personnel should have expertise.


