Software Testing - Lecture 4


Software Testing

Introduction & Fundamentals


What is Quality?
What is Software Testing?
Why is testing necessary?
Who does the testing?
What has to be tested?
When is testing done?
How often to test?
What is the cost of Quality?
What are Testing Standards?
Most Common Software problems

• Incorrect calculations
• Incorrect or ineffective data edits
• Incorrect matching and merging of data
• Data searches that yield incorrect results
• Incorrect processing of data relationships
• Incorrect coding / implementation of business rules
• Inadequate software performance
Most Common Software problems

• Confusing or misleading data
• Poor software usability for end users
• Obsolete software
• Inconsistent processing
• Unreliable results or performance
• Inadequate support of business needs
• Incorrect or inadequate interfaces with other systems
• Inadequate performance and security controls
• Incorrect file handling
Objectives of testing

• Executing a program with the intent of finding an error.
• To check whether the system meets the requirements and can be executed successfully in the intended environment.
• To check whether the system is “fit for purpose”.
• To check whether the system does what it is expected to do.
Objectives of testing

• A good test case is one that has a high probability of finding an as-yet undiscovered error.
• A successful test is one that uncovers an as-yet undiscovered error.
• A good test is not redundant.
• A good test should be “best of breed”.
• A good test should be neither too simple nor too complex.
Objectives of testing

• Find bugs as early as possible and make sure they get fixed.
• Understand the application well.
• Study the functionality in detail to find where bugs are likely to occur.
• Study the code to ensure that each and every line of code is tested.
• Create test cases in such a way that testing uncovers hidden bugs, and also ensure that the software is usable and reliable.
VERIFICATION & VALIDATION
Verification - typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.

Validation - typically involves actual testing and takes place after verifications are completed.

The verification and validation processes continue in a cycle until the software becomes defect free.
Software Development Process Cycle (Plan - Do - Check - Action)

PLAN (P): Devise a plan. Define your objective and determine the strategy and supporting methods required to achieve that objective.

DO (D): Execute the plan. Create the conditions and perform the necessary training to execute the plan.

CHECK (C): Check the results. Determine whether work is progressing according to the plan and whether the expected results are obtained.

ACTION (A): Take the necessary and appropriate action if the check reveals that the work is not being performed according to plan or the results are not as anticipated.
Responsibilities of QA and QC

QA improves the process that is applied to all the products that will ever be produced by that process. QA personnel should not perform quality control, unless it is done to validate that quality control is working.

QC improves the development of a specific product or service. QC personnel may perform quality assurance tasks if and when required.
SOFTWARE TESTING LIFECYCLE - PHASES

• Requirements study
• Analysis & Planning
• Test Case Design and Development
• Test Execution
• Test Closure
• Test Process Analysis

Requirements study

The testing cycle starts with the study of the client’s requirements.

Understanding the requirements is essential for testing the product.
Analysis & Planning

• Test objective and coverage
• Overall schedule
• Standards and methodologies
• Resources required, including necessary training
• Roles and responsibilities of the team members
• Tools used
Test Case Design and Development

• Component Identification
• Test Specification Design
• Test Specification Review

Test Execution

• Code Review
• Test execution and evaluation
• Performance and simulation
Test Closure

• Test summary report
• Project de-brief
• Project documentation

Test Process Analysis

Analysis is done on the test reports, and the application’s performance is improved by implementing new technology and additional features.
DIFFERENT LEVELS OF TESTING
Testing Levels

Unit testing

Integration testing

System testing

Acceptance testing
Unit testing

The most ‘micro’ scale of testing.

Tests are done on particular functions or code modules.

Requires knowledge of the internal program design and code.

Done by programmers (not by testers).

Unit testing

Objectives:
• To test the function of a program or unit of code such as a program or module
• To test internal logic
• To verify internal design
• To test path & condition coverage
• To test exception conditions & error handling

When: After modules are coded

Input:
• Internal Application Design
• Master Test Plan
• Unit Test Plan

Output:
• Unit Test Report

Who: Developer

Methods:
• White Box testing techniques
• Test Coverage techniques

Tools:
• Debug
• Re-structure
• Code Analyzers
• Path/statement coverage tools

Education:
• Testing Methodology
• Effective use of tools
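To make this concrete, here is a minimal unit test sketch using Python’s built-in unittest module; the apply_discount function and its rules are hypothetical, invented only to illustrate testing internal logic, boundary conditions, and exception handling as listed above.

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical unit under test: reduce price by percent, reject bad input."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("price must be >= 0 and percent in [0, 100]")
    return price * (1 - percent / 100)


class ApplyDiscountTest(unittest.TestCase):
    def test_internal_logic(self):
        # Normal case: 20% off 100.0 is 80.0
        self.assertAlmostEqual(apply_discount(100.0, 20), 80.0)

    def test_boundary_conditions(self):
        # Path/condition coverage: both ends of the valid percent range
        self.assertAlmostEqual(apply_discount(100.0, 0), 100.0)
        self.assertAlmostEqual(apply_discount(100.0, 100), 0.0)

    def test_error_handling(self):
        # Exception conditions: invalid input must raise, not return garbage
        with self.assertRaises(ValueError):
            apply_discount(-1.0, 20)
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```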
• Incremental integration testing

➢ Continuous testing of an application as and when new functionality is added.

➢ The application’s functional aspects are required to be independent enough to work separately before completion of development.

➢ Done by programmers or testers.


Integration Testing

• Testing of combined parts of an application to determine their functional correctness.

• ‘Parts’ can be:
  • code modules
  • individual applications
  • client/server applications on a network
Types of Integration Testing

• Big Bang testing
• Top Down Integration testing
• Bottom Up Integration testing
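A minimal sketch of Top Down Integration testing (an illustration, not from the slides): the high-level module is exercised first while a lower-level pricing module, not yet integrated, is replaced by a stub; Bottom Up testing would instead exercise the low-level module through a test driver. OrderService and PricingStub are hypothetical names.

```python
class PricingStub:
    """Stand-in for the real pricing module, which is not yet integrated."""

    def price_of(self, item):
        return 10.0  # fixed, predictable answer for the test


class OrderService:
    """High-level module under test; depends on a pricing component."""

    def __init__(self, pricing):
        self.pricing = pricing

    def total(self, items):
        return sum(self.pricing.price_of(item) for item in items)


def test_order_total_with_stubbed_pricing():
    # Verifies the interface between OrderService and its pricing
    # dependency before the real pricing module exists.
    service = OrderService(PricingStub())
    assert service.total(["pen", "book"]) == 20.0


if __name__ == "__main__":
    test_order_total_with_stubbed_pricing()
    print("top-down integration test passed")
```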
Integration testing

Objectives:
• To technically verify proper interfacing between modules, and within sub-systems

When: After modules are unit tested

Input:
• Internal & External Application Design
• Master Test Plan
• Integration Test Plan

Output:
• Integration Test Report

Who: Developers

Methods:
• White and Black Box techniques
• Problem / Configuration Management

Tools:
• Debug
• Re-structure
• Code Analyzers

Education:
• Testing Methodology
• Effective use of tools
System Testing

Objectives:
• To verify that the system components perform control functions
• To perform inter-system tests
• To demonstrate that the system performs both functionally and operationally as specified
• To perform appropriate types of tests relating to Transaction Flow, Installation, Reliability, Regression etc.

When: After Integration Testing

Input:
• Detailed Requirements & External Application Design
• Master Test Plan
• System Test Plan

Output:
• System Test Report

Who: Development Team and Users

Methods:
• Problem / Configuration Management

Tools:
• Recommended set of tools

Education:
• Testing Methodology
• Effective use of tools
Acceptance Testing

Objectives:
• To verify that the system meets the user requirements

When: After System Testing

Input:
• Business Needs & Detailed Requirements
• Master Test Plan
• User Acceptance Test Plan

Output:
• User Acceptance Test Report

Who: Users / End Users

Methods:
• Black Box techniques
• Problem / Configuration Management

Tools:
• Compare, keystroke capture & playback, regression testing tools

Education:
• Testing Methodology
• Effective use of tools
• Product knowledge
• Business Release Strategy
TESTING METHODOLOGIES AND TYPES
Testing methodologies

Black box testing

White box testing


• Black box testing
  • No knowledge of internal design or code required.
  • Tests are based on requirements and functionality.

• White box testing
  • Knowledge of the internal program design and code required.
  • Tests are based on coverage of code statements, branches, paths, and conditions.
BLACK BOX - TESTING TECHNIQUE

Black box testing attempts to find:

• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Performance errors
• Initialization and termination errors
Black box / Functional testing

• Based on requirements and functionality
• Not based on any knowledge of internal design or code
• Covers all combined parts of a system
• Tests are data driven
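Because black box tests are data driven, they are often expressed as tables of inputs and expected outputs taken from the requirements. A small sketch using Python’s standard calendar.isleap (the leap-year example is an illustration, not from the slides):

```python
import calendar

# Data-driven black box cases: (input, expected output), derived from the
# requirement ("divisible by 4, except centuries not divisible by 400")
# with no reference to how calendar.isleap is implemented internally.
CASES = [
    (2000, True),   # century divisible by 400
    (1900, False),  # century not divisible by 400
    (2024, True),   # ordinary leap year
    (2023, False),  # ordinary non-leap year
]


def test_isleap_black_box():
    for year, expected in CASES:
        assert calendar.isleap(year) == expected, year


if __name__ == "__main__":
    test_isleap_black_box()
    print("all black box cases passed")
```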
White box testing / Structural testing

• Based on knowledge of the internal logic of an application’s code
• Based on coverage of code statements, branches, paths, and conditions
• Tests are logic driven
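A sketch of what statement, branch, and condition coverage mean in practice (the classify function is hypothetical): two or three inputs may execute every statement, but a white box tester keeps adding cases until every branch outcome and sub-condition has been exercised.

```python
def classify(score):
    """Hypothetical function under white box test."""
    if score < 0 or score > 100:   # compound condition: two sub-conditions
        return "invalid"
    if score >= 50:
        return "pass"
    return "fail"


# Statement and branch coverage: these three inputs execute every
# statement above and take each 'if' both ways.
assert classify(-5) == "invalid"
assert classify(70) == "pass"
assert classify(30) == "fail"

# Condition coverage: classify(-5) only exercises 'score < 0', so a
# white box tester adds a case that makes 'score > 100' the true condition.
assert classify(150) == "invalid"
print("statement, branch, and condition coverage achieved")
```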


Functional testing
• Black box type testing geared to the functional requirements of an application.
• Done by testers.

System testing
• Black box type testing that is based on overall requirements specifications; covers all combined parts of the system.

End-to-end testing
• Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use.
Sanity testing
Initial effort to determine if a new software version is performing well enough to accept it for a major testing effort.

Regression testing
Re-testing after fixes or modifications of the software or its environment.

Acceptance testing
Final testing based on specifications of the end-user or customer.

Load testing
Testing an application under heavy loads, e.g. testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
Stress testing
Testing under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database, etc. The term is often used interchangeably with ‘load’ and ‘performance’ testing.

Performance testing
Testing how well an application complies with performance requirements.
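Real load and stress tests use dedicated tools, but the core idea can be sketched in plain Python: submit many concurrent requests and observe how total time (and so per-request latency) changes as the load ramps up. The handle_request stand-in and the worker counts here are hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request(_):
    """Hypothetical stand-in for the operation under load."""
    time.sleep(0.01)  # simulate a fixed amount of work
    return True


def measure(concurrency, requests=200):
    """Run 'requests' calls with the given concurrency; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(handle_request, range(requests)))
    return time.perf_counter() - start


if __name__ == "__main__":
    # Ramp the load up and watch how elapsed time changes; a real load
    # test would fail the run once response time passes a threshold.
    for concurrency in (1, 10, 50):
        print(concurrency, "workers:", round(measure(concurrency), 3), "s")
```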
Install/uninstall testing
Testing of full, partial, or upgrade install/uninstall processes.

Recovery testing
Testing how well a system recovers from crashes, hardware failures, or other problems.

Compatibility testing
Testing how well software performs in a particular hardware/software/OS/network environment.
Exploratory testing / ad-hoc testing
• Informal software testing that is not based on formal test plans or test cases; testers learn the software in its totality as they test it.

Comparison testing
• Comparing software strengths and weaknesses to those of competing products.
• Alpha testing
  • Testing done when development is nearing completion; minor design changes may still be made as a result of such testing.

• Beta testing
  • Testing done when development and testing are essentially complete and final bugs and problems need to be found before release.
Mutation testing

• Determines whether a set of test data or test cases is useful, by deliberately introducing various bugs (mutations) into the code.
• The software is then re-tested with the original test data/cases to determine whether the bugs are detected.
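A hand-rolled sketch of this idea (tools such as mutmut automate it for Python): plant a deliberate bug in a copy of the code, re-run the original test cases, and check that at least one fails, i.e. the mutant is “killed”. The max_of function and its cases are hypothetical.

```python
def max_of(a, b):
    """Original code under test."""
    return a if a >= b else b


def max_of_mutant(a, b):
    """Deliberately introduced bug: '>=' mutated into '<='."""
    return a if a <= b else b


# The original test data/cases: (arguments, expected result)
CASES = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]


def passes_all(func):
    """True if every case passes, i.e. the mutant would survive."""
    return all(func(*args) == expected for args, expected in CASES)


if __name__ == "__main__":
    assert passes_all(max_of)             # original passes every case
    assert not passes_all(max_of_mutant)  # (3, 5) and (5, 3) kill the mutant
    print("mutant killed: this test data detects the introduced bug")
```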
