Software Quality Assurance Fundamentals


Chp-1: Software Quality Assurance Fundamentals

1 Definition of Quality, Quality Assurance, Quality Control, Difference between QA and QC,
2 Software Quality Assurance, SQA Planning & Standards
3 SQA Activities
4 Building Blocks of SQA
5 Software Quality factors
6 Software Quality Metrics: Process Metrics & Product Metrics
7 Software Reliability & Reliability Measurement Factors: ROCOF, MTTF, MTTR, MTBF, POFOD,
Availability

1.1 Definition of Quality, QA, QC, SQA

(1) Definitions:
Quality: As per the IEEE Glossary, quality is the degree to which a product meets its
specification.

a) Customer Quality requirements


 Efficiency
 Reliability

b) Developer Quality requirements


 Maintainability, reusability
 Reliability, portability
Quality Assurance (QA):

 Auditing & reporting functions of management


 It concentrates on the process of producing quality
 Defect prevention oriented
 This is usually a staff function
 Reviews, audits, inspections
 Performed throughout the development process, not only after the product is built
QA can be seen as QC applied to QC itself,
because Quality Assurance evaluates whether Quality Control is working.

Software Quality Assurance (SQA):


 A set of systematic activities of software development used to produce a software
product that is fit for use.
 Software quality assurance is an umbrella activity that applies throughout the
software process.

Quality Control (QC):


1 | Page [email protected]
It is a series of inspections, reviews, and tests used throughout the development cycle to
ensure that each work product meets the requirements placed upon it.
 Concentrates on the specific product
 Defect detection and correction oriented
 This is usually a line function, e.g. software testing at various levels
 Done throughout the life cycle

Quality Testing:

It is an assessment of the extent to which a test object meets given requirements.


Error: Errors are a part of our daily life. Humans make errors in their thoughts, actions,
and in the products that result from their actions. Errors occur wherever
humans are involved in taking actions and making decisions.
An error is a state of the system; it can lead to failure.
Bug: A flaw in the program due to which it fails to perform its intended function correctly.

Defect: Another name for a bug – roughly speaking, a fault.

Fault: The adjudged cause of an error.

Failure: A failure is said to occur whenever the external behavior of a system does not conform
to that prescribed in the system specification.

1.2 SQA planning and standards:

The quality plan should define the quality assessment process. It should set out which organizational
standards should be applied and, where necessary, define new standards to be used.

Test planning:
 Establish test objectives
 Design test cases
 Write test cases
 Test ‘Test cases’
 Execute test cases
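The planning steps above end in executable test cases. A minimal sketch in Python, assuming a hypothetical function under test (`divide`) as a stand-in for any work product; the test objectives cover both valid/expected and invalid/unexpected input, as required later in this chapter:

```python
# Hypothetical unit under test -- a stand-in for any work product.
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero is an invalid input")
    return a / b

# Test cases for valid/expected and invalid/unexpected inputs,
# mirroring the planning steps above.
def run_test_cases():
    results = {}
    # Valid, expected input: check the actual result against the expected one.
    results["valid input"] = (divide(10, 2) == 5.0)
    # Invalid, unexpected input: the unit must reject it, not misbehave silently.
    try:
        divide(1, 0)
        results["invalid input rejected"] = False
    except ValueError:
        results["invalid input rejected"] = True
    return results
```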

Software Quality Assurance Plan

Project Name    Module Name    Developer    Reviewer    Date

ERP – Dristiee  MMS            Sunil        Vaibhave    13/12/2012

Quality Plan Structure:


 Product introduction
 Product plan
 Process description
 Quality goals
 Risk and Risk Management

SQA standards:

(i) Standards are key to effective quality management.


(ii) They may be international, national, organizational or product standards.
(iii) Product standards define characteristics that all components should exhibit – e.g. a common
programming style.
(iv) Process standards define how the software process should be enacted.

Importance of standards:
(i) Encapsulation of best practice – avoids repetition of past mistakes.
(ii) They provide a framework for the QA process.
(iii) They provide a basis for checking compliance with standards.
(iv) They provide continuity.

Product Standards                        Process Standards

 Design review form                     Design review conduct
 Requirements documentation             Submission of documents to project
   structure                               management
 Project plan format                    Project plan approval process
 Change request form                    Test recording process

1.3 SQA group activities:


 Participation in the development of the project software process description.

 Review software engineering activities to verify compliance with the defined process.

 Audit designated software work products to verify compliance with those defined as
part of the software process.

 The SQA group has responsibility for Quality Assurance planning, record keeping, analysis
and reporting.
i) Prepare the SQA plan for the project
ii) Participate in the development of the project’s software process.

iii) Review software engineering activities to verify compliance with defined software
process.
iv) Audit / Non conformity reporting
v) Metric calculation
vi) Monitoring and improving process.
vii) Prevention / detection / corrections
viii) Prepare SQA plan
ix) Feasibility study
x) Document review
xi) Performance review
xii) Product evaluation
xiii) Process monitoring

1.4 Building blocks of SQA:

The main objective of testing is to find defects in the requirements, design, documentation, and code as
early as possible.

The test process should be such that the software product delivered to the customer is
defect-free.

All test processes should be traceable to the customer requirements.

Test cases must be written for invalid and unexpected as well as for valid and expected input/output
conditions.
i) Management commitment
ii) Customer focus
iii) Process focus
iv) Continuous improvements
v) Benchmarking
vi) Team
vii) Customer – supplier approach
viii) Employee involvement
ix) Training of employees
x) Communications

Testing Technique:

For different types of errors (data errors, exception-handling errors, I/O errors, storage errors, control
errors) different testing techniques are used.

[Figure: testing flow – the software configuration and the test configuration
feed the testing step; actual test results are compared with the expected
results; the evaluation identifies errors, which are then debugged and
corrected.]

a) Software Configuration: It includes the software requirement specification, design and
source code
b) Test Configuration: It includes the test plan and test procedures
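The testing flow in the figure above can be sketched in a few lines of Python. All names here are illustrative: `square` stands in for the software under test, and the list of (input, expected result) pairs stands in for the test configuration:

```python
def square(x):          # stand-in for the software under test
    return x * x

test_cases = [          # test configuration: (input, expected result) pairs
    (2, 4),
    (3, 9),
    (-4, 16),
]

def evaluate(cases):
    """Run each test case and collect deviations from the expected results."""
    errors = []
    for given, expected in cases:
        actual = square(given)
        if actual != expected:   # a deviation from the expected result is an error
            errors.append((given, expected, actual))
    return errors                # errors feed the debug-and-correct step
```

An empty result list means the evaluation found no errors to debug.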

1.5 Software Quality Factors:


 It represents behavioral characteristics.
 We need to know the software quality factors upon which the quality of the software produced is
evaluated.
 McCall, Richards, and Walters studied the concept of software quality in terms of two
key concepts (Quality Evaluation):

Quality Factors                          Quality Criteria / Metrics

(Like properties or characteristics)     (Attributes of quality factors)

Quality Factors Definitions


1 Correctness The extent to which a program satisfies its specifications
and fulfills the user’s mission / objectives.
2 Reliability The extent to which a program can be expected to perform
its intended functions with required precision.
3 Efficiency The amount of computing resources and code required
by a program to perform a function.
4 Integrity The extent to which access to software or data by un-
authorized persons can be controlled.

5 Usability The effort required to learn, operate, prepare input for, and
interpret output of a program.
6 Maintainability The effort required to locate and fix defects in an
operational program.
7 Testability The effort required to test a program to ensure that it
performs its intended functions.
8 Flexibility The effort required to modify an operational program.
9 Portability The effort required to transfer a program from one hardware
or software environment to another.
10 Reusability The extent to which parts of software can be reused in other
applications.
11 Interoperability The effort required to couple one system with another.

1.6 Software Quality Criteria / Metrics:


A software metric is any measurement which relates to a software system, process, or related
documents. A quality criterion is an attribute of a quality factor that is related to software
development.
e.g. modularity – an attribute of the architecture of a software system.
How can software quality be measured?
e.g. a chair – how would you measure its quality?
- Construction quality – strength of joints
- Fitness for purpose – e.g. comfort

There is no absolute scale for measuring quality.


e.g. we can say ‘A’ is better than ‘B’, but it is usually hard to say how much better.

Measuring Quality:

Level 1 (Properties):            Reliability          Complexity           Usability

Level 2 (Quantities –            Mean time to         Information flow     Time taken to
metrics/criteria):               failure              between modules      learn how to use

Level 3 (Realization             Run and count        Count procedure      Minutes taken
of metrics):                     crashes per hour     calls                for some user task
Quality Criteria        Definitions

1  Hardware             The degree to which the software is dependent on the
   Independence         underlying hardware.
2  Access Audit         The ease with which software and data can be checked for
                        compliance with standards.
3  Access Control       The provision for control and protection of software and
                        data.
4  Accuracy             The precision of computations and output.
5  Error Tolerance      The degree to which continuity of operation is ensured
                        under adverse conditions.
6  Expandability        The degree to which storage requirements or software
                        functions can be expanded.
7  Completeness         The degree to which a full implementation of the required
                        functionality has been achieved.
8  Training             The ease with which new users can use the system.
9  Simplicity           The ease with which the software can be understood.
10 Operability          The ease of operation of the software.


1.7 Software Reliability & Reliability Measurement Factors: ROCOF, MTTF, MTTR, MTBF, POFOD,
Availability

Software reliability is defined as the probability of failure-free operation of a computer program
in a specified environment for a specified time. It can be measured directly and estimated. A
measure of software reliability is the mean time between failures, i.e.
MTBF = MTTF + MTTR
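The formula above can be checked with a small Python sketch that derives MTTF, MTTR, and MTBF from a hypothetical failure/repair log; all figures are illustrative, in hours:

```python
# Hypothetical log: hours of failure-free operation before each failure,
# and hours spent repairing after each failure.
uptimes = [90.0, 110.0, 100.0]
repairs = [2.0, 4.0, 3.0]

mttf = sum(uptimes) / len(uptimes)   # mean time to failure
mttr = sum(repairs) / len(repairs)   # mean time to repair
mtbf = mttf + mttr                   # mean time between failures

print(mttf, mttr, mtbf)  # 100.0 3.0 103.0
```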

Reliability is one of the metrics that are used to measure quality. A system without faults is considered
to be highly reliable.
Constructing a correct system is a difficult task. Key concepts in discussing reliability are
fault, failure, and time (a defective block of code is a fault).
Three kinds of time intervals are MTTR, MTTF and MTBF.
Time is a key concept in the formulation of reliability.
If the time gap between two successive failures is short, we say that the system is less reliable.
MTTF: Mean Time to Failure
MTTR: Mean Time to Repair
MTBF: Mean Time Between Failures

[Figure: a timeline in which an MTTF interval (operation) is followed by an
MTTR interval (repair); together they make up one MTBF interval.]

The figure shows the relationship between MTBF, MTTF, and MTTR.

Measuring software reliability is a difficult problem because we do not have a good understanding of
the nature of software.

Fault and Failure metrics:

The aim of these metrics is to determine when the software is approaching failure-free execution. Both
the number of faults found during testing (before delivery) and the failures reported by users after
delivery are collected, summarized, and analysed to achieve this goal. The failure data collected are
then used to calculate the failure density.
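Failure density is commonly expressed as failures per thousand lines of code (KLOC); a minimal sketch under that assumption, with illustrative figures:

```python
# Failure density: failures collected (from testing and user reports)
# divided by product size in KLOC. Both figures are illustrative.
def failure_density(failures_found, size_kloc):
    return failures_found / size_kloc

# 45 failures collected for a 30 KLOC product
print(failure_density(45, 30.0))  # 1.5 failures per KLOC
```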

Software reliability is an important attribute of software quality, together with functionality, usability,
performance, etc.

Software reliability is hard to achieve because the complexity of software tends to be high, and any
system with a high degree of complexity, including software, is hard to bring to a given level of
reliability.

Hardware faults are mostly physical faults

Software faults are design faults which are harder to visualize, classify, detect and correct.

Bathtub curve for hardware reliability

[Figure: failure rate vs. time forms a bathtub curve – high during the burn-in
phase (A), low and flat during the useful-life phase (B), and rising again in
the wear-out / end-of-life phase (C). Errors can occur without warning.]

Periods A, B, C stand for:
A  Burn-in phase
B  Useful-life phase
C  End-of-life phase

Software reliability, however, does not show the same characteristics as hardware.

[Figure 2: the revised curve for software reliability – the failure rate drops
during the test/debug phase, jumps at each upgrade during the useful-life
phase and then levels off again, and remains flat as the software approaches
obsolescence at end of life.]

If we project software reliability on the same axes, there are two major differences between the
hardware and software curves.

One difference is that in the last phase software does not show the increase in failure rate that
hardware does. In this phase the software is approaching obsolescence; there is no motivation for
upgrades or changes to the software, so the failure rate does not change. The second difference is
that in the useful-life phase, software can experience a drastic increase in failure rate each time an
upgrade is made. The failure rate then levels off gradually, partly because the defects introduced by
the upgrade are found and fixed. The upgrades in figure 2 imply feature upgrades, under which the
complexity of the software is likely to increase, since its functionality is enhanced; even bug fixes
may be a reason for more software failures, if the fix introduces other defects into the software.

ROCOF: Rate of Occurrence of Failures

It is the number of failures appearing in a unit time interval – the number of unexpected events over a
specific period of operation. ROCOF is the frequency with which unexpected behavior is likely
to appear. A ROCOF of 0.02 means that two failures are likely to occur in each 100 operational time
units. It is also called the failure intensity metric.
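A sketch of the ROCOF calculation, using the worked figures from the text (2 failures in 100 operational time units):

```python
# ROCOF: failures observed per unit of operational time.
def rocof(num_failures, operational_time_units):
    return num_failures / operational_time_units

# 2 failures in 100 operational time units -> ROCOF of 0.02
print(rocof(2, 100))  # 0.02
```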

Probability of Failure on Demand (POFOD)

POFOD is the probability that the system will fail when a service is requested – the number of
system failures given a number of system inputs.

A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential measure
for safety-critical systems, and is relevant for protection systems where services are demanded
occasionally.
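POFOD can be estimated as the fraction of service requests that failed; a sketch using the worked figures from the text (one failure in ten requests):

```python
# POFOD: estimated as failed service requests over total service requests.
def pofod(failed_requests, total_requests):
    return failed_requests / total_requests

# 1 failed request out of 10 -> POFOD of 0.1
print(pofod(1, 10))  # 0.1
```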

Availability (AVAIL)

Availability is the probability that the system is available for use at a given time. It takes into account
the repair time and the restart time for the system. An availability of 0.995 means that in every 1000
time units, the system is likely to be available for 995 of them. It is the percentage of time that a system
is available for use, taking into account planned and unplanned downtime. If a system is down an
average of four hours out of every 100 hours of operation, its AVAIL is 96%.
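Availability follows directly from the time intervals defined earlier, as MTTF / (MTTF + MTTR); a sketch using the worked figures from the text (96 hours up, 4 hours down per 100 hours of operation):

```python
# Availability as the fraction of total time the system is operational.
def availability(mttf, mttr):
    return mttf / (mttf + mttr)

# 96 hours up, 4 hours down in every 100 hours of operation -> 0.96 (96%)
print(availability(96, 4))  # 0.96
```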
