Software Quality Assurance Fundamentals
1 Definition of Quality, Quality Assurance, Quality Control, Difference between QA and QC,
2 Software Quality Assurance, SQA Planning & Standards
3 SQA Activities
4 Building Blocks of SQA
5 Software Quality factors
6 Software Quality Metrics: Process Metrics & Product Metrics
7 Software Reliability & Reliability Measurement Factors: ROCOF, MTTF, MTTR, MTBF, POFOD,
Availability
(1) Definitions:
Quality: As per the IEEE Glossary, quality is the degree to which a system, component or process meets specified requirements and customer or user needs or expectations.
Quality Testing:
Failure: A failure is said to occur whenever the external behavior of a system does not conform
to that prescribed in the system specification.
The quality plan should define the quality assessment process. It should set out which organizational
standards should be applied and, where necessary, define new standards to be used.
Test planning:
Establish test objectives
Design test cases
Write test cases
Test ‘Test cases’
Execute test cases
SQA standards:
Importance of standards:
(i) Encapsulation of best practices – avoids repetition of past mistakes.
(ii) They provide a framework for the QA process.
(iii) They provide a basis for checking compliance with standards.
(iv) They provide continuity.
Review software engineering activities to verify compliance with the defined process.
Audit designated software work products to verify compliance with those defined as part
of the software process.
The SQA group has responsibility for quality assurance planning, record keeping, analysis
and reporting. Its activities include:
i) Prepare an SQA plan for the project
ii) Participate in the development of the project's software process.
3|P ag e [email protected]
iii) Review software engineering activities to verify compliance with defined software
process.
iv) Audit / Non conformity reporting
v) Metric calculation
vi) Monitoring and improving process.
vii) Prevention / detection / corrections
viii) Prepare SQA plan
ix) Feasibility study
x) Document review
xi) Performance review
xii) Product evaluation
xiii) Process monitoring
The main objective of testing is to find defects in the requirements, design, documentation and code as
early as possible.
The test process should be such that the software product delivered to the customer is
defect-free.
Test cases must be written for invalid and unexpected as well as for valid and expected input/output
conditions.
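As a sketch of this principle, a test for a hypothetical `divide` function can cover both a valid, expected input and an invalid, unexpected one (the function and its test cases are illustrative, not from any particular project):

```python
def divide(a, b):
    """Return a / b, rejecting a zero divisor explicitly."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Valid, expected input: normal division.
assert divide(10, 2) == 5

# Invalid, unexpected input: a zero divisor must raise a clear error,
# not crash or return a wrong result silently.
try:
    divide(10, 0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for a zero divisor")
```

Writing the invalid-input case first often exposes missing error handling before it reaches the customer.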
i) Management commitment
ii) Customer focus
iii) Process focus
iv) Continuous improvements
v) Benchmarking
vi) Team
vii) Customer – supplier approach
viii) Employee involvement
ix) Training of employees
x) Communications
Testing Technique:
For different types of errors (data errors, exception-handling errors, I/O errors, storage errors, control
errors), different testing techniques are used.
[Figure: software testing process. The software configuration and test configuration feed into testing; test results are compared with expected results during evaluation; errors found lead to debugging and corrections.]
5  Usability        The effort required to learn, operate, prepare input for, and interpret output of a program.
6  Maintainability  The effort required to locate and fix a defect in an operational program.
7  Testability      The effort required to test a program to ensure that it performs its intended functions.
8  Flexibility      The effort required to modify an operational program.
9  Portability      The effort required to transfer a program from one hardware or software environment to another.
10 Reusability      The extent to which parts of the software can be reused in other applications.
11 Interoperability The effort required to couple one system with another.
Measuring Quality:
[Figure: quality measurement hierarchy. Level 1 properties: Reliability, Complexity, Usability.]
1.7 Software Reliability & Reliability Measurement Factors: ROCOF, MTTF, MTTR, MTBF, POFOD,
Availability
Software reliability is defined as the probability of failure-free operation of a computer program
in a specified environment for a specified time. It can be measured directly and estimated. One
measure of software reliability is the mean time between failures, i.e.
MTBF = MTTF + MTTR
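A minimal sketch of computing these values from a failure log, assuming each (hypothetical) log entry records the hours of operation before a failure and the hours spent repairing it:

```python
# Hypothetical failure log: each entry is (time_to_failure, time_to_repair)
# in hours for one observed failure cycle.
failure_log = [(120.0, 4.0), (200.0, 6.0), (160.0, 2.0)]

mttf = sum(t for t, _ in failure_log) / len(failure_log)  # mean time to failure
mttr = sum(r for _, r in failure_log) / len(failure_log)  # mean time to repair
mtbf = mttf + mttr                                        # MTBF = MTTF + MTTR

print(mttf, mttr, mtbf)  # 160.0 4.0 164.0
```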
Reliability is one of the metrics used to measure quality. A system without faults is considered
to be highly reliable, and constructing a correct system is a difficult task. The key concepts in
discussing reliability are fault, failure and time (a defective block of code is a fault).
Time is a key concept in the formulation of reliability: if the time gap between two successive
failures is short, we say the system is less reliable. Three kinds of time intervals are used:
MTTR, MTTF and MTBF.
MTTF: Mean Time to Failure
MTTR: Mean Time to Repair
MTBF: Mean Time Between Failures
[Figure: failure-cycle timeline. MTTF is the operating time up to a failure and MTTR the repair time that follows; MTBF spans both, so MTBF = MTTF + MTTR.]
Measuring software reliability is a difficult problem because we do not have a good understanding of the
nature of software.
The goal of this metric is to determine when the software is approaching failure-free execution. Both
the number of faults found during testing (before delivery) and the failures reported by users after
delivery are collected, summarized and analysed to achieve this goal. The failure data collected are
then used to calculate failure density.
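As an illustration, failure density over fixed observation intervals might be computed like this (the failure counts and interval length are made up):

```python
# Hypothetical failure data: failures reported in each 100-hour interval,
# combining faults found in testing and failures reported after delivery.
failures_per_interval = [12, 7, 4, 2, 1]
interval_hours = 100.0

# Failure density (failure intensity) per interval: failures / time.
densities = [f / interval_hours for f in failures_per_interval]
print(densities)
```

A steadily decreasing density suggests the software is approaching failure-free execution.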
Software reliability is an important attribute of software quality, together with functionality, usability,
performance, etc.
Software reliability is hard to achieve because the complexity of software tends to be high, and any
system with a high degree of complexity, including software, is hard to bring to a given level of
reliability.
Software faults are design faults which are harder to visualize, classify, detect and correct.
[Figure 1: hardware failure-rate ("bathtub") curve over time, with a useful-life period and three phases A, B, C.]
Periods A, B and C stand for:
A  Burn-in phase
B  Useful-life phase
C  End-of-life phase
Software reliability, however, does not show the same characteristics as hardware.
[Figure 2: software failure-rate curve over time. The failure rate drops during test/debug, spikes at each upgrade, and levels off as the software approaches obsolescence.]
If we project software reliability on the same axes, there are two major differences between the
hardware and software curves.
One difference is that in the last phase software does not show an increase in failure rate as hardware
does. In this phase the software is approaching obsolescence; there is no motivation for upgrades or
changes to the software, and therefore the failure rate does not change.
The second difference is in the useful-life phase: software experiences a drastic increase in failure rate
each time an upgrade is made. The failure rate then levels off gradually, partly because the defects
found after the upgrade are fixed. The upgrades in figure 2 are feature upgrades, so the complexity of
the software is likely to increase as its functionality is enhanced; even bug fixes may be a cause of
more software failures, if a fix introduces other defects into the software.
ROCOF (Rate of Occurrence of Failure)
ROCOF is the number of failures appearing in a unit time interval: the number of unexpected events
over a specified time of operation, i.e. the frequency with which failures are likely to appear. A ROCOF
of 0.02 means that two failures are likely to occur in every 100 operational time units. It is also called
the failure intensity metric.
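A small sketch of the ROCOF calculation, using made-up numbers chosen to reproduce the 0.02 example above:

```python
# Hypothetical: 4 failures observed over 200 operational time units.
n_failures = 4
operational_time = 200.0

# ROCOF = failures observed / operational time.
rocof = n_failures / operational_time
print(rocof)  # 0.02, i.e. two failures expected per 100 time units
```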
POFOD (Probability of Failure on Demand)
POFOD is the probability that the system will fail when a service is requested: the number of system
failures divided by the number of service demands. A POFOD of 0.1 means that one out of ten service
requests may fail. POFOD is an essential measure for safety-critical systems, and it is relevant for
protection systems where services are demanded only occasionally.
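A minimal sketch of computing POFOD from a demand log (the log itself is hypothetical):

```python
# Hypothetical demand log: True = service request succeeded, False = it failed.
demands = [True] * 9 + [False]  # 10 requests, 1 failure

# POFOD = failed demands / total demands.
pofod = demands.count(False) / len(demands)
print(pofod)  # 0.1, i.e. one out of ten service requests may fail
```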
Availability (AVAIL)
Availability is the probability that the system is available for use at a given time. It takes into account
the repair time and the restart time of the system. An availability of 0.995 means that in every 1000
time units, the system is likely to be available for 995 of them. It is the percentage of time that a system
is available for use, taking into account planned and unplanned downtime. If a system is down an
average of four hours out of every 100 hours of operation, its AVAIL is 96%.
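The 96% figure from this example can be reproduced with a small calculation:

```python
# Figures from the example: 4 hours down per 100 hours of operation.
downtime_hours = 4.0
total_hours = 100.0
uptime_hours = total_hours - downtime_hours

# Availability = uptime / (uptime + downtime).
avail = uptime_hours / (uptime_hours + downtime_hours)
print(f"{avail:.0%}")  # 96%
```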