Software Development Lifecycle Models
1. (Spring 1997) Discuss the activities (steps or actions) that are performed during the
requirements engineering phase of the software life cycle. Be sure to address the
outputs from each activity.
Concept phase:
The initial phase of a software development project, in which the user needs are
described and evaluated through documentation.
Input: Customer consultation
Output: Customer requirements
Requirements phase:
The period of time in the software life cycle during which the requirements for a
software product are defined and documented
Input: Customer requirements
Output: Requirements specification
Design:
The process of defining the architecture, components, interfaces, and other
characteristics of a system or component.
Input: Requirement document
Output: Design description
Implementation:
All design decisions should have been made. The period of time in the software life
cycle during which a software product is created from design documentation.
Input: Design description
Output: Source code and executable program
Test:
The period of time in the software life cycle during which the components of a
software product are evaluated and integrated, and the software product is
evaluated to determine whether or not requirements have been satisfied.
Input: Test plan
Output: Test summary report
Installation and checkout phase:
The period of time in the software life cycle during which a software product is
integrated into its operational environment and tested in this environment to ensure
that it performs as required.
Input: Software system (including documentation)
Output: Customer acceptance and feedback
Retirement:
The period of time in the software life cycle during which support for a software
product is terminated.
There are four common development tasks associated with all software lifecycle
models. These tasks are requirements analysis and specification, design,
implementation, and testing. Explain how these tasks are accomplished in the linear
sequential, incremental, and spiral models (for each task, address when it begins,
when it ends, and whether or not the task is revisited). (Exam1 Fall 99)
Compare and contrast the linear sequential model with the incremental and spiral
life cycle models.
♦ Linear Sequential (Waterfall) Model:
Pros:
-- Easy to identify which stage development is in
-- Each development team is working with "full" information
-- Good for systems in which all requirements are known at the outset
Cons:
-- Customer doesn't see the program until late in the life cycle
-- Requirements must be complete at the outset of the project
-- Doesn't accommodate change easily
♦ Prototyping Model:
Full system requirements are not always available at the outset of a project. The
customer may be unsure of some functionality. The developer may be unsure of the
"best" solution. There may be some human-computer interaction issues. The
prototyping model is designed for these situations.
(Picture)
In most projects, the first system built is barely usable. It may be too slow, too big,
awkward to use, or all three. There is no alternative but to start again, smarting
but smarter, and build a redesigned version... The management question here is
whether to plan in advance to deliver a throwaway to customers.
Mock-ups can be story-boards (paper drawing), interactive (hyper-text, web pages),
or partially functional (visual tools).
Pros:
-- good for eliciting requirements for systems where customer is unsure of what
should/can be built
-- customer and developer have a baseline to work against
-- customers get early interaction with system
-- reduced likelihood of problems due to miscommunication
Cons:
-- the real system should then be built using a more formal life cycle model
-- customer may expect "look and feel" of the prototype
-- customer may want to have prototype delivered rather than waiting for full, well
engineered version
-- quality can suffer
♦ RAD (Rapid Application Development) Model:
(picture)
Pros:
-- can deliver a full product in a short time period
-- promotes code reuse
Cons:
-- can require more people
-- requires a system that can be properly modularized
-- can fail if reusable components are not available
♦ Evolutionary:
• Incremental:
It combines elements of the linear sequential model and prototyping.
Requirements of the system are prioritized. The first increment delivers
the core product, providing only the highest priority requirements.
Subsequent increments expand on this core (as in enhancement maintenance).
(picture)
Pros:
-- delivers a usable working product quickly
-- keeps all teams working
Cons:
-- can "design yourself into a corner"
-- works best with an even distribution of different priority features
-- requires good planning and design
• Spiral:
It incorporates the iterative nature of prototyping with aspects of the linear sequential
model. Typical framework activities or task regions: customer
communication, planning, risk analysis, engineering, construction & release,
and customer evaluation. Development proceeds in increments. Each
increment contains all task regions. Early increments produce a paper model or
prototype. Later increments produce more complete versions. All work
products are produced this way.
Picture (page 40, textbook)
Pros:
-- includes risk analysis
-- customer communication at all stages
-- entry points into the process for maintenance activities
Cons:
-- May require unrealistic overhead for small projects
-- requires considerable risk assessment expertise
-- hasn't been used as widely as linear sequential
• Component Assembly:
Essentially a spiral model that relies on object technologies to provide reusable
software components. Object-oriented analysis is used to identify candidate
classes; existing classes are reused where possible, and new classes are created
otherwise (a sketch of this reuse decision follows the pros and cons below).
(Picture)
Pros:
-- high reuse of existing code where components are available
-- development proceeds quickly when reusable components exist
-- reused components tend to have proven reliability
Cons:
-- requires determining which classes already exist; new classes must be created when
none are suitable
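A minimal sketch of that reuse decision (the class and library names are hypothetical): candidate classes identified during object-oriented analysis are looked up in a component library, reused if present, and engineered and stored for future reuse otherwise.

    # Hypothetical component library of previously engineered classes.
    class Account:
        def __init__(self, owner):
            self.owner = owner

    component_library = {"Account": Account}

    def obtain_class(name):
        """Reuse an existing class when available; otherwise create a stub class
        and add it to the library so later projects can reuse it."""
        if name in component_library:
            return component_library[name]            # reuse path
        new_class = type(name, (object,), {})         # engineer a new class (stub)
        component_library[name] = new_class
        return new_class

    AccountClass = obtain_class("Account")   # found in the library, reused
    InvoiceClass = obtain_class("Invoice")   # not found, created and stored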
• Concurrent Development:
The concurrent process model represents the major activities, tasks, and their states.
Each activity can be represented by a state chart that defines a series of events
triggering transitions from state to state for that activity (a sketch follows the pros
and cons below). It is often used as the paradigm for development of client/server
applications.
(Picture)
Pros:
-- Allows management to assess the status of a project in an evolutionary model.
-- Has been used in the development of client/server applications.
Cons:
-- More of a state chart for activities than a development model.
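A rough sketch of the state-chart idea (the states and triggering events below are illustrative examples, not a prescribed set): each activity, such as analysis, is modeled as a small state machine whose transitions are triggered by events coming from other concurrent activities.

    # State chart for one activity in a concurrent process model (illustrative).
    TRANSITIONS = {
        ("none", "start_analysis"): "under_development",
        ("under_development", "draft_complete"): "awaiting_changes",
        ("awaiting_changes", "change_request"): "under_revision",
        ("under_revision", "revision_complete"): "awaiting_changes",
        ("awaiting_changes", "baseline_approved"): "done",
    }

    def next_state(state, event):
        """Return the activity's new state; irrelevant events leave it unchanged."""
        return TRANSITIONS.get((state, event), state)

    state = "none"
    for event in ("start_analysis", "draft_complete", "change_request",
                  "revision_complete", "baseline_approved"):
        state = next_state(state, event)
    print(state)  # -> done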
2. (Fall 1997) There are five principal phases to the linear sequential or "waterfall"
life cycle model. Please discuss each of these phases, addressing the activities that
are performed during each phase. Also identify the inputs and outputs
associated with each phase. Compare and contrast the linear sequential model
with the incremental and spiral life cycle models.
The five principal phases of the linear sequential or "waterfall" life cycle model:
Requirements Analysis:
The period of time in the software life cycle during which the requirements for a
software product are gathered, analyzed, and documented.
Input: Customer requirements
Output: Requirements specification
Design:
The process of defining the architecture, components, interfaces, and other
characteristics of a system or component.
Input: Requirement document
Output: Design description
Implementation/Code Generation:
All design decisions should have been made. The period of time in the software life
cycle during which a software product is created from design documentation.
Input: Design description
Output: Source code and executable program
Testing:
The period of time in the software life cycle during which the components of a
software product are evaluated and integrated, and the software product is
evaluated to determine whether or not requirements have been satisfied.
Input: Test plan
Output: Test summary report
Maintenance:
The period of time in the software life cycle during which a software product is
employed in its operational environment, monitored for satisfactory performance,
and modified as necessary to correct problems or to respond to changing
requirements.
Capability Maturity Model
The CMM has five levels. Key practices are required at each level, along with practices
needed to advance to the next level. Level 1 is the least mature level and Level 5 is the
most mature level. An organization cannot skip a level or stage in its maturation.
Level 1 -- Initial:
• The key practices:
The software process is ad hoc (code & fix) or just chaotic. Few processes are
defined, fewer are followed. Requirements and Design may be nonexistent.
Success depends on individual effort.
• Practices needed to advance to Level 2:
-- Project management. It is focused on control of commitments (schedule, resources,
cost).
-- Management oversight. It means review and approval of all major development
plans prior to commitment.
-- Quality assurance. It is responsible for making sure that work is being done the way
it is supposed to be done.
-- Change control. It applies to both management documents and software work
products. This includes changes to requirements documents, design and code.
Level 2 -- Repeatable:
• The key practices:
Basic project management processes are in place to track cost, schedule and
functionality. Process discipline exists so that previous successes can be repeated
for similar projects. Plans and commitments are made in a controlled manner.
Everything depends on prior experience with similar work. Process tends to be
abandoned when schedule or budget gets tight. The organization has achieved a
stable process with a repeatable level of statistical control by initiating rigorous
project management of commitment, costs, schedules, and changes.
• Practices needed to advance to Level 3:
-- Establish a process group. A process group is a group of people that focus
exclusively on improving the software process.
-- Establish a software development process architecture. A software development
process architecture is synonymous with a development life cycle. It describes the
development phases and associated work products.
-- Introduce a family of software engineering methods and technologies. Software
engineering methods and technologies include design and code inspections,
formal design methods, configuration management tools, testing methods, and
prototyping and modern languages.
Level 3 -- Defined:
• The key practices:
Processes for both management and engineering activities are documented and
standardized. All projects use a documented and approved version of the process
for developing and maintaining systems. It includes all activities defined for level
2. The organization has defined the process as a basis for consistent
implementation and better understanding. At this point advanced technology can
usefully be introduced.
• Practices needed to advance to level 4:
-- Establish a set of process measurements.
-- Create a process database to maintain a history.
-- Allocate resources to maintain this information and to train project members to use
it.
Level 4 -- Managed:
• The key practices:
Detailed measures of software process and product quality are collected. Both process
and product are quantitatively understood. Quality control is based on
measurement. It includes activities defined for level 3. The organization has
initiated comprehensive measurement and analysis procedures. This is when the
most significant quality improvement begins.
• Practices needed to advance to level 5:
-- Automated gathering of process data to avoid error and omission.
-- Use of process data to analyze and modify the process to prevent problems and
improve efficiency.
Level 5 -- Optimizing:
• The key practices:
Continuous process improvement based on quantitative feedback. Testing of
innovative ideas and technologies. It includes all activities defined for level 4.
The organization now has a foundation for continuous improvement and
optimization of the process.
1. Why are there five levels to the Capability Maturity Model (CMM)? (exam2 Fall
99)
Software Testing
Testing is the development phase that follows implementation and precedes installation
and maintenance. The purpose of this phase is to assess the correctness of the system that
was produced.
There are four levels of testing:
• Unit test
• Integration test
• Validation test
• System test
These four levels of testing reflect back to four different software development products:
• system requirements -- system test
• software requirements -- validation test
• design documentation -- integration test
• source code -- unit test
Unit testing
--Unit testing focuses on the source code of a specific software unit (module).
--The purpose of unit testing is to verify that a particular software unit performs its
assigned function correctly.
--This is the lowest level of testing.
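As an illustration only (the function and test names below are hypothetical, not from the course notes), a unit test written with Python's standard unittest framework exercises one module in isolation and checks that it performs its assigned function correctly:

    import unittest

    def apply_discount(price, rate):
        """Hypothetical unit under test: return the discounted price."""
        if not 0 <= rate <= 1:
            raise ValueError("rate must be between 0 and 1")
        return round(price * (1 - rate), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.0, 0.25), 75.0)

        def test_invalid_rate_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 1.5)

    if __name__ == "__main__":
        unittest.main()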
Integration Testing
Integration testing follows unit testing and it combines the tested units into subsystems
and verifies that, when the units work together, the subsystem operates properly.
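A minimal sketch of the same idea (the units and values are hypothetical): after two modules have each passed unit testing, an integration test checks the subsystem that combines them.

    import unittest

    # Two separately unit-tested modules (hypothetical examples).
    def apply_discount(price, rate):
        return round(price * (1 - rate), 2)

    def add_tax(price, tax_rate):
        return round(price * (1 + tax_rate), 2)

    def final_price(price, discount, tax_rate):
        # Subsystem under integration test: combines the two units above.
        return add_tax(apply_discount(price, discount), tax_rate)

    class CheckoutIntegrationTest(unittest.TestCase):
        def test_discount_then_tax(self):
            # 100 -> 80 after a 20% discount -> 84 after 5% tax.
            self.assertEqual(final_price(100.0, 0.20, 0.05), 84.0)

    if __name__ == "__main__":
        unittest.main()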
Validation Testing
Once the entire system is integrated, using the design as a blueprint, it must be validated.
At this point, the goal is to determine if the system that was constructed satisfies the
user's requirements. The input to validation testing is the software requirements
document.
System Testing
If the software is to become part of a larger system, system testing must be performed.
The purpose of system testing is to verify and validate that the software product operates
correctly within the confines of the larger system.
Regression Testing re-tests the changed modules at the unit level; integration testing is
then performed to ensure that the subsystem still works (usually bottom-up).
Statistical Testing (traditional) is the name for one approach to finding out how
many faults are likely to be present in a software system. The first step in
statistical testing is to randomly inject errors into the software prior to testing it. The
software is then tested and faults are found and reported. The intentionally injected
faults and actual faults are categorized. A ratio is calculated that represents how
many real faults are found per injected fault. This is used to predict the number of
real faults.
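A rough worked sketch of that calculation, with made-up numbers: if 10 faults are injected and testing finds 8 of them along with 40 real faults, the same detection rate (8/10) applied to the real faults predicts about 40 / 0.8 = 50 real faults in total, so roughly 10 remain undetected. In code, under the (strong) assumption that injected and real faults are equally easy to find:

    def estimate_total_real_faults(injected, injected_found, real_found):
        """Predict the total number of real faults from the injected-fault
        detection ratio (hypothetical helper, not from the notes)."""
        if injected_found == 0:
            raise ValueError("no injected faults found; cannot estimate")
        detection_ratio = injected_found / injected
        return real_found / detection_ratio

    total = estimate_total_real_faults(injected=10, injected_found=8, real_found=40)
    print(total)          # 50.0 predicted real faults
    print(total - 40)     # ~10 real faults still undetected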
Mean Time To Failure is a metric held over from hardware testing, where a
hardware device can fail as an artifact of stress from the physical environment as
well as design limitations. Software does not fail as an artifact of time; failure is
instead related to how the software is used. Software failures may occur when the
software transitions among its modules at blinding speed, at a different level of
granularity than the one in which time is measured, or in modules that are not
themselves faulty.
Black box testing is the term applied to the process of testing a software system, or
subsystem, without looking at its internal representation. Testing involves providing
the software some inputs, collecting the outputs, and verifying that the outputs are
correct.
White Box Testing (clear box testing) involves looking at the internal construction
of the software. It involves testing all statements, decisions, function calls, etc.
Gray Box Testing is the term used to describe everything between white box and
black box testing. Primarily it involves using some amount of white box
information to guide black box testing. Software measurements can be used to
decide how much black box testing to perform on various subsystems.
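To make the distinction concrete (a hypothetical example, not from the notes): for the small function below, black-box tests are derived from the specification alone, while white-box tests are chosen by inspecting the code so that every branch of the decision is exercised; a gray-box approach might use branch-coverage measurements to decide where more black-box tests are needed.

    def classify_age(age):
        # Specification: ages under 18 are "minor", all others are "adult".
        if age < 18:
            return "minor"
        return "adult"

    # Black-box tests: inputs and expected outputs only, no knowledge of internals.
    assert classify_age(5) == "minor"
    assert classify_age(30) == "adult"

    # White-box tests: written with the code in view, covering both branches
    # and the boundary of the decision.
    assert classify_age(17) == "minor"   # true branch, just below the boundary
    assert classify_age(18) == "adult"   # false branch, at the boundary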
Object-Oriented Testing Strategies
• Object-Oriented Testing
Assessing the needs of OO testing requires that three things be done:
1) The definition of testing must be broadened to include error discovery techniques
applied to OOA and OOD models.
2) The strategy for unit and integration testing must change significantly.
3) The design of test cases must account for the unique characteristics of OO
software.
3. (Spring 1998) Contrast structured analysis and design methods and object-oriented
analysis and design methods. Focus your discussion on the different
views (models) of the problem domain and the ability of each of these methods to
express these views.
Entity-Relationship Diagram (ERD)
Object-oriented analysis:
• The intent of object-oriented analysis is to define all classes (and the relationships and
behavior associated with them) that are relevant to the problem to be solved.
• Object-oriented analysis focuses on encapsulation, classification and inheritance of
objects.
• Tools:
Data Model -- A data model consists of data objects, attributes, and relationships. A data
object is described by its attributes. Relationships can be represented using an Entity-
Relationship Diagram (ERD). (A minimal sketch of these ingredients appears just below.)
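The sketch below shows those three ingredients with hypothetical entities: data objects become classes, their attributes become fields, and a one-to-many relationship (a customer places orders) is held as a reference from one object to the others.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Order:                      # data object
        order_id: int                 # attribute
        total: float                  # attribute

    @dataclass
    class Customer:                   # data object
        name: str                     # attribute
        orders: List[Order] = field(default_factory=list)  # 1-to-many relationship

    alice = Customer("Alice")
    alice.orders.append(Order(order_id=1, total=42.50))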
* Cohesion:
A cohesive module performs a single well-defined task, has little interaction with other
modules when accomplishing its specified task, and becomes a self-contained unit that
can be reused wherever its particular service is needed.
Types (lowest to highest): coincidental, logical, temporal, procedural, communicational,
sequential, functional.
* Coupling:
A measure of the interconnection between modules, determined by the dependencies
that a module has on other modules; it indicates how difficult the module is to
understand, reuse, and modify. Types (lowest to highest): data, stamp, control, external,
common, content.
* A well-designed module strives for high cohesion and low coupling, making it easier
to reuse, understand, and maintain.
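A small illustrative sketch of the terms above (module and function names are hypothetical): the first function shows high cohesion and data coupling, since it performs one task and receives everything it needs as parameters; the second shows common coupling, since it depends on shared global data and is therefore harder to understand, reuse, and modify in isolation.

    # High cohesion, data coupling: one well-defined task, and all collaboration
    # with other modules happens through explicit parameters.
    def compute_invoice_total(line_item_prices, tax_rate):
        subtotal = sum(line_item_prices)
        return subtotal * (1 + tax_rate)

    # Common coupling: the function silently depends on shared global state,
    # so changing CURRENT_TAX_RATE elsewhere changes this module's behavior.
    CURRENT_TAX_RATE = 0.05

    def compute_invoice_total_coupled(line_item_prices):
        return sum(line_item_prices) * (1 + CURRENT_TAX_RATE)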
Object-oriented design:
Object-oriented design (OOD) transforms the analysis model created using OOA into a
design model that serves as a blueprint for software construction. OOD draws on four
important design concepts: abstraction, information hiding, functional independence, and
modularity.
There are four design layers: subsystem layer, class and object layer, message layer, and
responsibility layer.
1. Subsystem layer represents each of the subsystems that enable the software to achieve
its customer-defined requirements.
2. The class and object layer contains the class hierarchies that enable the use of
generalization and specialization; it contains the design representations of each
object.
3. The message layer contains the details of object communications and establishes the
external and internal interfaces.
4. The responsibilities layer contains the data structure and algorithmic design for all
attributes and operations for each object.
These four layers (subsystem, class & object, message, and responsibilities) form the
OOD design pyramid. The modeling
components for OOD methods include: representation of hierarchy of modules,
specification of data definitions, specification of procedural logic, indication of end-
to-end processing sequences, representation of object states and transitions, definition
of classes and hierarchies, assignment of operations to classes, detailed definition of
operations, specification of message connections, identification of exclusive services.
The design issues of OOD include decomposability, composability, understandability,
continuity, and protection. The OOD models include Booch's, Coad & Yourdon's,
Jacobson's, Rumbaugh's, and Wirfs-Brock's. The four design components of OOD are the
problem domain, human interaction, task management, and data management
components.