Documentation UDBI


Index

1. INTRODUCTION
2. IDENTIFICATION OF NEED
3. FEASIBILITY STUDY
4. SOFTWARE ENGINEERING PARADIGM APPLIED
5. SOFTWARE AND HARDWARE SPECIFICATIONS
6. SYSTEM DESIGN
7. CODE EFFICIENCY
8. CODE OPTIMIZATION
9. SYSTEM TESTING & IMPLEMENTATION
10. SYSTEM SECURITY MEASURES
11. COST ESTIMATION OF THE PROJECT
12. SCREEN LAYOUT
13. PERT CHART, GANTT CHART
14. CONCLUSION
15. BIBLIOGRAPHY

1. INTRODUCTION & OBJECTIVES


Database Interface is an industry-standard tool for application
development. Using Database Interface, developers can interact with any
back-end database (e.g. Oracle, MS-SQL Server, MS-Access, MySQL).
The database interface can be used to build, test, and debug PL/SQL
packages, procedures, triggers, and functions. Database Interface users
can create and edit database objects such as tables, views, indexes,
constraints, and users. Database Interface's SQL Editor provides an easy
and efficient way to write and test scripts and queries, and its powerful
data grids provide an easy way to view and edit data in any
DBMS/RDBMS tool.
The requirements of a database application developer will vary from
project to project. On a large team where DBAs manage the DDL, a
developer may spend 90% of development time coding and testing
SELECT queries to issue from 3GL or 4GL application code. In such an
environment, a developer might be concerned only with viewing the DDL
and database code. On smaller teams, a developer might be responsible
for maintenance of the development schema, movement of test data
between schemas, writing procedure code, populating tables from legacy
sources, and more. Database Interface facilitates all of these needs.
For example, if you are working with Oracle, you don't have to be a
PL/SQL expert to access database objects with Database Interface. You
can view the Oracle Dictionary, tables, indexes, stored procedures, and
more, all through a multi-tabbed browser. Database Interface utilizes
direct Oracle OCI calls for full access to the Oracle API.
Advanced editing features save time and increase productivity.
Code can be created from shortcuts and code templates. You can even
create your own code templates.
Use Database Interface to:
- Create, browse, or alter objects (tables, views, indexes, etc.), including Oracle8 TYPE objects
- Graphically build, execute, and tune queries
- Edit and debug PL/SQL and profile stored procedures, including functions, packages, and triggers
- Search for objects
- Find and fix database problems with constraints, triggers, extents, indexes, and grants
Project Advantages:
Flexibility. The end-user should be able to use any major commercial (or open-source) database on the market. Moreover, the end-user should be able to customize the look-and-feel of the front end; this includes the ability to internationalize the application.
Maintenance. Both external documentation (like this) and inline Javadocs should explain how things work. I've tried to write clean Java code so that developers who build on Database Interface will have a clear, easily extensible starting point.
Ease of Use. Installation is simple; you just have to create a .war file and drop it in the right directory/folder. The Java search engine interface is text-based, and intended to provide easy access to search operations.

2. i. IDENTIFICATION OF NEED
To clearly identify the need for this application, I will try to exemplify
one situation.
A project manager is dealing with an enterprise application which is
in the development phase. This application has different faces, such as
a desktop interface, a web interface, and a mobile interface. Initially these
applications are not integrated. Once all these interfaces are working
well, they are going to be integrated, and at that point the database
being used must be the same. But while designing these individual
modules, creating a sample database for each module and performing the
tests is not practically possible. For that reason, there must be a kind
of tool with which we can carry the database to another location and
use it there very easily.
The present application fulfills this requirement. The current
application can be used as a briefcase for the database.

2. ii. PRELIMINARY INVESTIGATION

The present application can be differentiated into the following
modules, which are closely integrated with one another.
1. Structure: It gives the list of all tables which are present in the current/selected user. This module is used to browse and view the structure of an existing database object (a JDBC sketch of this table listing appears after this list).
2. Properties: The properties module enables us to modify the data types, sizes, and constraints assigned to each field.
3. SQL: Using this module, we can use and implement different kinds of SQL statements. The result of an SQL statement is displayed immediately after executing it.
4. Import: This module gives an exclusive feature of importing existing SQL scripts into the existing schema.
5. Export: This module gives the feature of exporting existing schema object(s) to an SQL file. There is a facility to export only tables, only a single user, or the entire schema.
6. Operations: This module is used to create new database objects.
7. Search: The search module provides facilities for searching the given column names, the data, or both.
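
As an illustration of how the Structure module can obtain its list of tables, the sketch below uses the standard JDBC DatabaseMetaData API. It is a minimal example rather than the project's actual code; the driver class, URL, schema, and credentials are placeholder assumptions.

    import java.sql.*;

    public class ListTables {
        public static void main(String[] args) throws Exception {
            // Placeholder driver and URL; in the application these come from the login screen.
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");

            // DatabaseMetaData describes the schema in a vendor-neutral way.
            DatabaseMetaData md = con.getMetaData();
            ResultSet rs = md.getTables(null, "SCOTT", "%", new String[] {"TABLE"});
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_NAME"));
            }
            rs.close();
            con.close();
        }
    }

Because DatabaseMetaData is part of JDBC itself, the same call works unchanged against any back end for which a driver is loaded.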

3. FEASIBILITY STUDY
All projects are feasible given unlimited resources and infinite
time! Unfortunately, the development of a computer-based system or
product is more likely plagued by a scarcity of resources and difficult
delivery dates. It is both necessary and prudent to evaluate the feasibility
of a project at the earliest possible time. Months or years of effort,
thousands or millions of dollars, and untold professional embarrassment
can be averted if an ill-conceived system is recognized early in the
definition phase.
Feasibility and risk analysis are related in many ways. If project risk
is great, the feasibility of producing quality software is reduced. During
product engineering, however, we concentrate our attention on the
following primary areas of interest:
Technical Feasibility
This application is going to be used in an Internet environment
called the WWW (World Wide Web), so it is necessary to use a technology
that is capable of providing networking facilities to the application. We can
deploy and use it on any operating system.
The GUI is developed using HTML to capture information from the
customer. HTML is used to display content in the browser, uses the
TCP/IP protocol, and is an interpreted language. It is very easy to develop a
page/document using HTML, and some RAD (Rapid Application Development)
tools are provided to quickly design/develop our application. Many
objects such as buttons, text fields, and text areas are provided to capture
the information from the customer.
We can use this application on any OS. Each OS has its own
security and transactional advantages, but we are responsible for
selecting a suitable and secure OS for our application.
Economic Feasibility
Economic analysis is the most frequently used technique for
evaluating the effectiveness of a proposed system. It is more commonly
known as cost/benefit analysis. The procedure is to determine the benefits
and savings that are expected from a proposed system and compare them
with the costs. If the benefits outweigh the costs, a decision is taken to design
and implement the system. Otherwise, further justification or alterations to the
proposed system will have to be made if it is to have a chance of being
approved.
An automation system that is technically sound and used after
installation is a good investment for the organization. The financial benefit must
equal or exceed the cost. The amount being spent on system study,
processing hardware, and software development is reasonable considering the
loss of revenue owing to the prevailing loopholes in the system. Benefits come
in the form of reduced costs: a client side with minimal configuration,
compatibility with any hardware, and a time-effective manner of
allocating systems and registering complaints.

Operational Feasibility:
In our application, the front end is developed using a GUI, so it is very
easy for the customer to enter the necessary information. However, the
customer should have some knowledge of using web applications before
using our application.

4. SOFTWARE ENGINEERING PARADIGM APPLIED

DESIGN SPECIFICATION
Design of software involves conceiving, planning out, and specifying
the externally observable characteristics of the software product. We have
data design, architectural design, and user interface design in the design
process. These are explained in the following sections. The goal of the design
process is to provide a blueprint for implementation, testing, and
maintenance activities.
DATA DESIGN
The primary activity during data design is to select logical
representations of the data objects identified during requirements analysis and
software analysis. A data dictionary explicitly documents the elements of the data
structure. A data dictionary should be established and used to define both
data and program design.
DESIGN METHODOLOGY
The two basic modern design strategies employed in software design are
1. Top-Down Design
2. Bottom-Up Design
Top-down design is basically a decomposition process which
focuses on the flow of control; only at later stages does it concern itself with
code production. The first step is to study the overall aspects of the task
at hand and to break it into a number of independent modules. The
second step is to break each one of these modules further into
independent sub-modules. The process is repeated until modules are
obtained which are small enough to grasp mentally and to code in a
straightforward manner. One important feature is that at each level the
details of the design at the lower levels are hidden; only the necessary
data and control that must be passed back and forth over the interface
are defined.
In bottom-up design, one first identifies and investigates the parts of
the design that are most difficult, and the necessary design decisions are
made; the remainder of the design is then tailored to fit around the design
already chosen for the crucial part. It vaguely represents a synthesis
process, as explained in the previous section.
One strong point of the top-down method is that it postpones
detailed decisions until the last stages of the design, which allows making
small design changes even when the design is halfway through. There is a
danger, however, that the specifications will be incompatible, and this will not be
discovered until late in the design process. By contrast, the bottom-up
strategy first focuses on the crucial parts, so that the feasibility of the design is
tested at an early stage.
In mixing top-down and bottom-up design, it often appears that we
start in the middle of the problem and work our way both up and down from
there. In a complex problem, it is often difficult to decide how to
modularize the various procedures; in such cases, one might consider a list
of system inputs and decide what functions are necessary to process
these inputs. This is called back-to-front design. Similarly, one can start
with the required outputs and work backwards, evolving so-called
front-back design. We have applied both the top-down and bottom-up
approaches in our design.

DATABASE DESIGN
Databases are normally implemented by using a package called a
Database Management System (DBMS). Each particular DBMS has
somewhat unique characteristics, and as such, general techniques for the
design of databases are limited. One of the most useful methods of
analyzing the data required by the system for the data dictionary was
developed from research into relational databases, particularly the work of
E. F. Codd. This method of analyzing data is called normalization.
Unnormalized data are converted into normalized data in three stages.
Each stage has a procedure to follow.
NORMALIZATION
The first stage of normalization is to reduce the data to first
normal form, by removing repeating items and showing them as separate
records that include the key fields of the original record. For example, an
order record that repeats an item line for every product ordered is split into
separate order-line records, each carrying the order number as part of its key.
The next stage, reduction to second normal form, is to check
that in each record which is in first normal form, all the items
are entirely dependent on the key of the record. If a data item is not
dependent on the key of the record, but on another data item, then it is
removed with its key to form another record. This is done until each
record contains data items which are entirely dependent on the key of
their record.
The final stage of the analysis, the reduction to third normal form,
involves examining each record which is in second normal form to
see whether any items are mutually dependent. If there are, those items
are removed to a separate record, leaving one of the items behind in the
original record and using it as the key in the newly created record.
BUSINESS MODELING:
The information flow among business functions is modeled in a way
that answers the following questions: What information drives the business
process? What information is generated? Who generates it? Where does
the information go? Who processes it?
DATA MODELING:
The information flow defined as part of the business modeling
is refined into a set of data objects that are needed to support the
business. The characteristics (called attributes) of each object are
identified, and the relationships between these objects are defined.
PROCESS MODELING:
The data objects defined in the data-modeling phase are
transformed to achieve the information flow necessary to implement a
business function. Processing descriptions are created for adding,
modifying, deleting, or retrieving a data object.

THE LINEAR SEQUENTIAL MODEL:

The linear sequential model for software engineering, sometimes
called the classic model or the waterfall model, suggests a systematic,
sequential approach to software development that begins at the system
level and progresses through analysis, design, coding, testing, and
maintenance.
The linear sequential model is the oldest and the most widely used
paradigm for software engineering. Modeled after the conventional
engineering cycle, the linear sequential model encompasses the following
activities:
1) SYSTEM/INFORMATION ENGINEERING AND MODELLING:
Because software is always part of a larger system (or business), work
begins by establishing requirements for all system elements and then
allocating some subset of these requirements to software. This system
view is essential when software must interface with other elements
such as hardware, people, and databases.
System engineering and analysis encompasses requirements
gathering at the system level with a small amount of top-level analysis
and design. Information engineering encompasses requirements
gathering at the strategic business level and at the business area level.
2) SOFTWARE REQUIREMENTS ANALYSIS:
The requirements gathering process is intensified and focused
specifically on software. To understand the nature of the programs to
be built, the software engineer must understand the information
domain for the software, as well as the required function, behavior,
performance, and interfacing. Requirements for both the system
and the software are documented and reviewed with the customer.
3) DESIGN:
Software design is actually a multi-step process that focuses on four
distinct attributes of a program: data structure, software architecture,
interface representations, and procedural detail. The design process
translates requirements into a representation of the software that can
be assessed for quality before code generation begins. Like the
requirements, the design is documented and becomes part of the
software configuration.
4) CODE GENERATION:
The design must be translated into a machine-readable form. The
code generation step performs this task. If design is performed in a
detailed manner, code generation can be accomplished mechanistically.
5) TESTING:
Once code has been generated, program testing begins. The testing
process focuses on the logical internals of the software, ensuring that all
statements have been tested, and on the functional externals; that is,
conducting tests to uncover errors and ensure that defined input will
produce actual results that agree with the required results.


6) MAINTENANCE:
Software will undoubtedly undergo change after it is delivered to the
customer. Change will occur because errors have been encountered,
because the software must be adapted to accommodate changes in its
external environment (e.g., a change required because of a new
operating system or peripheral device), or because the customer
requires functional or performance enhancements. Software
maintenance reapplies each of the preceding phases to an existing
program rather than a new one.

5. SOFTWARE AND HARDWARE SPECIFICATIONS

The following hardware and software were used for developing the project.
(i) Hardware requirements for development
1. Pentium III processor @ 800 MHz
2. 128 MB RAM
3. 10 GB HDD
4. 14" color monitor
5. 1.44 MB FDD
6. CD-ROM drive
7. 105-key keyboard
8. 4 serial + 4 parallel ports
9. Mouse
10. Printer
11. Ethernet card
12. Internet modem
(ii) Software requirements for development
1. Windows 2000 operating system
2. J2SE 1.4
3. Tomcat Web Server 5.0.25

INTERNET TERMINOLOGY
What is the Internet?
The Internet is a worldwide network of computer networks. People
use the Internet to send electronic mail, participate in discussion forums,
and search for information; there is no single authority that controls or
regulates the Internet. Currently more than 30 million people use the
Internet, and the number is growing at a rate of one million new users
per month.
What is an Intranet?
An intranet is an internal network owned and managed by a company or
organization that uses the same kinds of software you would use to
explore the Internet, but only for internal use. An intranet enables a
company to share its resources with its employees without confidential
information being made available to everyone with Internet access.
What is a web browser?
A web browser is a program run on a client workstation used to
navigate the World Wide Web.
What is the WWW (World Wide Web)?
The WWW is a collection of resources (made up of hypertext, graphics,
sound files, etc.) located on globally networked web/Internet servers that
can be accessed on the Internet by using HTTP, FTP, Telnet, Gopher, and
some other tools.

What is TCP/IP?
This is the suite of protocols that defines the Internet. Originally
designed for the UNIX operating system, TCP/IP software is now available
for every major computer operating system. A TCP/IP stack is
required for a computer that wants to access the Internet.
What is a URL (Uniform Resource Locator)?
A URL is the standard way to give the address of any resource on the
Internet that is part of the World Wide Web (WWW). A URL looks like this:
http://www.example.com/index.html. The most common way to use a URL
is to enter it into a WWW browser program, such as Internet Explorer.
Java and JavaScript:
Although the names are almost the same, Java is not the same as
JavaScript. These are two different techniques for Internet
programming. Java is a programming language; JavaScript is a scripting
language (as the name implies). The difference is that we can create
real programs with Java, but often we just want to make a nice effect
without having to bother about real programming, and JavaScript is meant
to be easy to understand and easy to use. JavaScript authors should not
have to care too much about programming.
We could say that JavaScript is rather an extension to HTML than a
separate computer language. Of course, this is not the official definition,
but I think it makes it easier to understand the difference between Java
and JavaScript, which share a similar name and syntax.


Advantages of Java:
Creation of Java:
Java was conceived by James Gosling, Patrick Naughton, Chris Warth, Ed
Frank, and Mike Sheridan at Sun Microsystems Incorporated in 1991. It
took 18 months to develop the first working version. The language was
initially called Oak in 1992 and was renamed Java at its public
announcement in 1995; in between, many more people contributed to the
design and evolution of the language.
Java is Portable:
One of the biggest advantages Java offers is that it is portable. An
application written in Java will run on all the major platforms. Any
computer with a Java-based browser can run the applications or applets
written in the Java programming language. A programmer no longer has
to write one program to run on a Macintosh, another program to run on a
Windows machine, and still another to run on a UNIX machine. In
other words, with Java, developers write their programs only once.
The virtual machine is what gives Java its cross-platform capabilities.
Rather than being compiled into machine language, which is different for
each operating system and computer architecture, Java code is
compiled into byte codes. With other languages, the program code is
compiled into a language that the computer can understand. The
problem is that other computers with a different machine instruction set
cannot understand that language. Java code, on the other hand, is
compiled into byte codes rather than a machine language. These byte
codes go to the Java virtual machine, which executes them directly or
translates them into the language that is understood by the machine
running it.
In summary, this means that with the JDBC API extending Java, a
programmer writing Java code can access all the major relational
databases on any platform that supports the Java virtual machine.
Java is Object Oriented:
Java is object oriented, which makes program design focus on
what you are dealing with rather than on how you are going to do
something. This makes it more useful for programming sophisticated
projects, because one can break things down into understandable
components. A big benefit is that these components can then be
reused.
Object-oriented languages use the paradigm of classes. In simplest
terms, a class includes both the data and the functions to operate on the
data. You can create an instance of a class, also called an object,
which will have all the data members and functionality of its class.
Because of this, you can think of a class as being like a template, with
each object being a specific instance of a particular type of class.
The class paradigm allows one to encapsulate data, so that those
using specific data values cannot see the function implementation
behind them. Encapsulation makes it possible to make changes
in code without breaking other programs that use that code. If, for
example, the implementation of a function is changed, the change is
invisible to another programmer who invokes that function, and it does
not affect his/her program, except hopefully to improve it.
Java includes inheritance, the ability to derive new classes from
existing classes. The derived class, also called a subclass, inherits all the
data and the functions of the existing class, referred to as the parent
class. A subclass can add new data members to those inherited from
the parent class. As far as methods are concerned, the subclass can
reuse the inherited methods as they are, change them, or even add its
own new methods.
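
The hypothetical fragment below illustrates both ideas; the class and member names are invented for this example. The balance field is encapsulated behind methods, and the subclass inherits and extends the parent's behavior.

    // Parent class: the data member is hidden behind methods.
    class Account {
        private double balance;                    // encapsulated state

        public void deposit(double amount) { balance += amount; }
        public double getBalance()         { return balance; }
    }

    // Subclass: inherits deposit() and getBalance(), adds its own method.
    class SavingsAccount extends Account {
        private double rate = 0.04;                // new data member

        public void addInterest() { deposit(getBalance() * rate); }
    }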
Java Makes It Easy:
In addition to being portable and object oriented, Java facilitates
writing correct code. Programmers spend less time writing Java code
and a lot less time debugging it. In fact, developers have reported
slashing development time by as much as two thirds.
Java automatically takes care of allocating and deallocating
memory, a huge potential source of errors. If an object is no longer being
used (has no references to it), then it is automatically removed from
memory, or garbage collected, by a low-priority daemon thread called
the garbage collector.
Java's lack of pointer support eliminates a big source of errors. By using
object references instead of memory pointers, problems with pointer
arithmetic are eliminated, and problems with inadvertently accessing the
wrong memory address are greatly reduced.

Java's strong typing cuts down on runtime errors. Because Java
enforces strong type checking, many errors are caught when code is
compiled. Dynamic binding is possible and often very useful, but static
binding with strict type checking is used when possible.
Java keeps code simple by having just one way to do something
instead of having several alternatives, as in some languages. Java also
stays lean by not including multiple inheritance, which eliminates the
errors and ambiguity that arise when you create a subclass that inherits
from two or more classes. To replace the capabilities multiple inheritance
provides, Java lets you add functionality to a class through the use of
interfaces, as the sketch below shows.
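
A hedged sketch of that idea (all names here are illustrative): instead of inheriting from two parent classes, a class implements two interfaces.

    interface Printable  { void print(); }
    interface Exportable { void export(String fileName); }

    // At most one parent class, but capabilities from any number of interfaces.
    class Report implements Printable, Exportable {
        public void print()                 { System.out.println("printing report"); }
        public void export(String fileName) { System.out.println("exporting to " + fileName); }
    }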
Java is Extensible:
A big plus for Java is the fact that it can be extended. It was purposely
written to be lean, with the emphasis on doing what it does very well;
instead of trying to do everything from the beginning, it was written so
that extending it is very easy. The Java platform includes an extensive
class library, so that programmers can use already existing classes as
they are, create subclasses to modify existing classes, or implement
interfaces to augment the capabilities of classes.
Java is Secure:
It is important that a programmer not be able to write subversive
code for applications or applets. This is especially true with the Internet
being used more and more extensively for services such as electronic
commerce and electronic distribution of software and multimedia
content.
One safeguard is the way memory is allocated and laid out. In Java, an
object's location in memory is not determined until runtime, as opposed to C
and C++. As a result, a programmer cannot look at a class definition
and figure out how it might be laid out in memory. Also, since Java has
no pointers, a programmer cannot forge pointers to memory.
The Java Virtual Machine (JVM) doesn't trust any incoming code
and subjects it to what is called byte code verification. The byte code
verifier, part of the virtual machine, checks that:
- the format of incoming code is correct;
- incoming code doesn't forge pointers;
- it doesn't violate access restrictions;
- it accesses objects as what they are.
The Java byte code loader, another part of the JVM, checks whether
classes loaded during program execution are local or from across a
network. Imported classes cannot be substituted for built-in classes, and
built-in classes cannot accidentally reference classes brought in over a
network.
The Java security manager allows users to restrict untrusted Java
applets so that they cannot access the local network, local files, and other
resources.

Java Performs Well:

Java performance is better than one might expect. Java's many
advantages, such as having built-in security and being interpreted as well
as compiled, do have a cost attached to them. However, various
optimizations have been built in, and the byte code interpreter can run
very fast when it doesn't have to do any checking. As a result, Java has
done quite respectably in performance tests. Its performance numbers
for interpreted byte codes are usually more than adequate to run
interactive graphical end-user applications.
For situations that require unusually high performance, byte codes
can be translated on the fly, generating the final machine code for the
particular CPU on which the application is running at run time. Java
offers good performance with the advantages of high-level languages but
without the disadvantages of C and C++. In the world of design trade-offs,
you can think of Java as providing a very attractive middle ground.
Java is Robust:
The multi-platformed environment of the Web places extraordinary
demands on a program, because it must execute reliably in a variety of
systems. Thus the ability to create robust programs was given a high
priority in the design of Java. To gain reliability, Java restricts you in a
few key areas to force you to find your mistakes early in program
development. At the same time, Java frees you from having to worry
about many of the most common causes of programming errors.
Because Java is a strictly typed language, it checks your code at compile
time. However, it also checks your code at run time. In fact, many
hard-to-track-down bugs that often turn up in hard-to-reproduce runtime
situations are simply impossible to create in Java. Knowing that what
you have written will behave in a predictable way under diverse
conditions is a key feature of Java.
Java is Multithreaded:
Multithreading is simply the ability of a program to do more than one
thing at a time. For example, an application could be faxing a document
at the same time it is printing another document. Or a program could
process new inventory figures while it maintains a feed for current
prices. Multithreading is particularly important in multimedia: a
multimedia program might often be running a movie, playing an audio
track, and displaying text all at the same time.
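
A minimal sketch of this capability with the standard java.lang.Thread API (the tasks are placeholders; J2SE 1.4 syntax, so anonymous inner classes rather than lambdas):

    public class TwoTasks {
        public static void main(String[] args) {
            // Each Runnable is an independent flow of control.
            Thread fax = new Thread(new Runnable() {
                public void run() { System.out.println("faxing a document..."); }
            });
            Thread print = new Thread(new Runnable() {
                public void run() { System.out.println("printing another document..."); }
            });
            fax.start();      // both tasks now proceed concurrently
            print.start();
        }
    }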
Java Scales Well:
The Java platform is designed to scale well, from portable consumer
electronic devices to powerful desktop and server machines. The
virtual machine takes a small footprint, and Java byte code is optimized
to be small and compact. As a result, Java accommodates the need for
low storage and for low-bandwidth transmission over the Internet. In
addition, the Java operating system offers a standalone Java platform
that eliminates host operating system overhead while still supporting the
full Java Platform API. This makes Java ideal for low-cost network
computers whose sole purpose is to access the Internet.


Java and the Internet:

The Internet helped catapult Java to the forefront of programming,
and Java in turn has had a profound effect on the Internet. The reason
is simple: Java expands the universe of objects that can move about
freely in cyberspace. In a network, there are two broad categories of
objects transmitted between the server and your personal computer:
passive information and dynamic, active programs. An object that
can be transmitted to your computer and then executes itself is a
dynamic, self-executing program. Such a program would be an active
agent on the client computer, yet the server would initiate it. As
desirable as dynamic, networked programs are, they also present
serious problems in the areas of security and portability. Prior to Java,
cyberspace was effectively closed to half the entities that now live
there. Java addresses these concerns and, in doing so, has opened the
door to an exciting new form of program.
The rise of server-side Java applications is one of the latest and
most exciting trends in Java programming. Java was first hyped as a
language for developing elaborate client-side web content in the form of
applets. Now, Java is coming into its own as a language ideally suited
for server-side development. Businesses in particular have been quick
to recognize Java's potential on the server: Java is inherently suited for
large client/server applications. The cross-platform nature of Java is
extremely useful for organizations that have a heterogeneous collection
of servers running various flavors of the UNIX and Windows operating
systems. Java's modern, object-oriented, memory-protected design
allows developers to cut development cycles and increase reliability. In
addition, Java's built-in support for networking and enterprise APIs
provides access to legacy data, easing the transition from older
client/server systems.
Java Servlets are a key component of server-side Java
development. A servlet is a small, pluggable extension to a server
that enhances the server's functionality. Servlets allow developers to
extend and customize any Java-enabled server - a web server, a mail
server, an application server, or any custom server - with a hitherto
unknown degree of portability, flexibility, and ease.
JAVA SERVER PAGES (JSP):
Java Server Pages is a simple yet powerful technology for creating and
maintaining dynamic-content web pages. Based on the Java programming
language, Java Server Pages offers proven portability, open standards, and a
mature, re-usable component model.
PORTABILITY:
Java Server Pages files can be run on any web server or web-enabled
application server that provides support for them. Dubbed the JSP engine, this
support involves recognition, translation, and management of the Java Server
Pages lifecycle and its interaction with associated components.
The JSP engine for a particular server might be built in or might be
provided through a third-party add-on. As long as the server on which you plan to
execute the Java Server Pages supports the same specification level as that to
which the file was written, no change should be necessary as you move your files
from server to server. Note, however, that instructions for the setup and
configuration of the files may differ between servers.
COMPOSITION:
It was mentioned earlier that the Java Server Pages architecture can
include reusable Java components. The architecture also allows for the
embedding of a scripting language directly into the Java Server Pages file. The
components currently supported include Java Beans and servlets. As the default
scripting language, Java Server Pages uses the Java programming language. This
means that scripting on the server side can take advantage of the full set of
capabilities that the Java programming language offers.
PROCESSING:
A Java Server Pages file is essentially an HTML document with JSP
scripting or tags. It may have associated components in the form of .class, .jar,
or .ser files - or it may not. The use of components is not required.
The Java Server Pages file has a .jsp extension to identify it to the server
as a Java Server Pages file. Before the page is served, the Java Server Pages
syntax is parsed and processed into a servlet on the server side. The servlet that
is generated outputs real content in straight HTML for responding to the
customer. Because it is standard HTML, the dynamically generated response
looks no different to the customer's browser than a static response.
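
For instance, a minimal .jsp file of the kind described here might look as follows; this is a sketch, not a page from this project, and the parameter name is an assumption:

    <%-- hello.jsp: parsed into a servlet by the JSP engine before it is served --%>
    <html>
      <body>
        <%-- a scriptlet: ordinary Java executed on the server side --%>
        <% String visitor = request.getParameter("name"); %>
        <p>Hello, <%= (visitor == null) ? "guest" : visitor %>!</p>
        <p>Served at: <%= new java.util.Date() %></p>
      </body>
    </html>

The browser receives only the resulting HTML; none of the Java source reaches the client.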
ACCESS MODELS:
A Java Server Pages file may be accessed in at least two different ways:
1. A client request comes directly into a Java Server Page.
[Figure: the browser's request goes directly to the JSP, which uses Beans and returns the response.]
In this scenario, suppose the page accesses reusable Java Bean
components that perform particular well-defined computations, like accessing
a database. The results of the Bean's computations, called result sets, are stored
within the Bean as properties. The page uses such Beans to generate
dynamic content and present it back to the client.
2. A request comes through a servlet.
[Figure: the browser's request goes to the servlet, which uses JDBC to query the database, stores the results in a Bean, and invokes the JSP, which sends the response.]
The servlet generates the dynamic content. To handle the response to
the client, the servlet creates a Bean and stores the dynamic content
(sometimes called the result set) in the Bean. The servlet then invokes a Java
Server Page that will present the content along with the Bean containing the
data generated by the servlet.
There are two APIs to support this model of request processing using
Java Server Pages. One API facilitates passing context between the invoking
servlet and the Java Server Page. The other API lets the invoking servlet
specify which Java Server Page to use.
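
One plausible realization of this flow with the standard servlet API is sketched below; ResultBean, the attribute name, and the page path are assumptions made for this example. The request attribute passes context, and the RequestDispatcher selects the page.

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class QueryServlet extends HttpServlet {
        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            // Generate the dynamic content and wrap it in a bean (hypothetical class).
            ResultBean result = new ResultBean();

            // One API passes context between the servlet and the page...
            req.setAttribute("result", result);

            // ...the other lets the servlet specify which Java Server Page to use.
            RequestDispatcher rd = req.getRequestDispatcher("/result.jsp");
            rd.forward(req, res);
        }
    }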

In both of the above cases, the page could also contain any valid Java
code. The Java Server Pages architecture enables the separation of content
from presentation; it does not mandate it.
JDBC requires that SQL statements be passed as Strings to Java
methods. For example, our application might present a menu of database
tasks from which to choose. After a task is selected, the application presents
prompts and blanks for filling in the information needed to carry out the selected
task. With the requested input typed in, the application then automatically
invokes the necessary commands.
In this project we have implemented the three-tier model: commands are
sent to a middle tier of services, which then sends SQL statements to the
database. The database processes the SQL statements and sends the results
back to the middle tier, which then sends them to the user. JDBC is important
in allowing database access from a Java middle tier.
What Is JDBC™?
JDBC™ is a Java™ API for executing SQL statements. (As a point of
interest, JDBC is a trademarked name and is not an acronym; nevertheless,
JDBC is often thought of as standing for "Java Database Connectivity".) It
consists of a set of classes and interfaces written in the Java programming
language. JDBC provides a standard API for tool/database developers and
makes it possible to write database applications using a pure Java API.
Using JDBC, it is easy to send SQL statements to virtually any relational
database. In other words, with the JDBC API, it isn't necessary to write one
program to access a Sybase database, another program to access an Oracle
database, another program to access an Informix database, and so on. One can
write a single program using the JDBC API, and the program will be able to send
SQL statements to the appropriate database. And, with an application written in
the Java programming language, one also doesn't have to worry about writing
different applications to run on different platforms. The combination of Java and
JDBC lets a programmer write it once and run it anywhere.
Java being robust, secure, easy to use, easy to understand, and
automatically downloadable on a network, is an excellent language basis for
database applications. What is needed is a way for Java applications to talk to a
variety of different databases. JDBC is the mechanism for doing this.
JDBC extends what can be done in Java. For example, with Java and the
JDBC API, it is possible to publish a web page containing an applet that uses
information obtained from a remote database. Or an enterprise can use JDBC to
connect all its employees (even if they are using a conglomeration of Windows,
Macintosh, and UNIX machines) to one or more internal databases via an
intranet. With more and more programmers using the Java programming
language, the need for easy database access from Java is continuing to grow.
MIS managers like the combination of Java and JDBC because it makes
disseminating information easy and economical. Businesses can continue to use
their installed databases and access information easily even if it is stored on
different database management systems. Development time for new applications
is short. Installation and version control are greatly simplified. A programmer can
write an application or an update once, put it on the server, and everybody has
access to the latest version. And for businesses selling information services,
Java and JDBC offer a better way of getting out information updates to external
customers.


CONNECTION:
A Connection object represents a connection with a database. A
connection session includes the SQL statements that are executed and the
results that are returned over that connection. A single application can have one
or more connections with a single database, or it can have connections with
many different databases.
OPENING A CONNECTION:
The standard way to establish a connection with a database is to call the
method DriverManager.getConnection. This method takes a string containing a
URL. The DriverManager class, referred to as the JDBC management layer,
keeps a list of registered Driver classes; when the method getConnection is
called, it checks each driver in the list until it finds one that can connect to the
database represented by the URL, and that driver then uses the URL to actually
establish the connection.
A JDBC URL has the form jdbc:<subprotocol>:<subname>.
<subprotocol> - usually the driver or the database connectivity
mechanism, which may be supported by one or more drivers. A prominent
example of a subprotocol name is oracle, which has been reserved for URLs
that specify thin-style data source names.
<subname> - a way to identify the database. The subname can vary,
depending on the subprotocol, and it can have any internal
syntax the driver writer chooses. The point of a subname is to give enough
information to locate the database.
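
Putting these pieces together, a hedged sketch of opening a connection follows; the host, port, SID, and credentials are placeholders, while the oracle subprotocol and thin driver are as described above.

    import java.sql.*;

    public class OpenConnection {
        public static void main(String[] args) throws Exception {
            // Register the driver class, then let DriverManager match the URL to it.
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:orcl",   // jdbc:<subprotocol>:<subname>
                    "scott", "tiger");
            System.out.println("Connected: " + !con.isClosed());
            con.close();
        }
    }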
SENDING STATEMENTS:
Once a connection is established, it is used to pass SQL statements to its
underlying database. JDBC does not put any restrictions on the kinds of SQL
statements that can be sent; this provides a great deal of flexibility, allowing the
use of database-specific statements or even non-SQL statements. It requires,
however, that the user be responsible for making sure that the underlying
database can process the SQL statements being sent, and suffer the
consequences if it cannot.
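
Continuing the sketch above, a statement is sent over the open connection; the table and column names are illustrative, and con is the Connection obtained earlier.

    // Assumes 'con' is the Connection opened in the previous sketch.
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT empno, ename FROM emp");
    while (rs.next()) {
        System.out.println(rs.getInt("empno") + "  " + rs.getString("ename"));
    }
    rs.close();
    stmt.close();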
DRIVER MANAGER:
The DriverManager class is the management layer of JDBC, working
between the user and the drivers. It keeps track of the drivers that are available
and handles establishing a connection between a database and the appropriate
driver. In addition, the DriverManager class attends to things like driver login time
limits and the printing of log and tracing messages. The only method in this class
that a general programmer needs to use directly is
DriverManager.getConnection. As its name implies, this method establishes a
connection to a database.
ORACLE 8i:
INTRODUCTION TO ORACLE:
Any programming environment used to create containers to manage
human data can be conceptualized as a Data Management System.
Traditionally, the block of human data being managed is called a database.
Hence, in very simple terms, these programming environments can be
conceptualized as Database Management Systems, in short DBM systems.
All Database Management Systems (that is, Oracle is a DBMS) allow
users to create containers for data storage and management. These containers
are called cells. The minimum information that has to be given to Oracle for a
suitable container to be constructed, which can hold free-form human data, is:
1. the cell name
2. the cell length
Another name that programming environments use for a cell is a field.
These can the conceptualized as follows.
BASIC DATABASE CONCEPTS:
A database is a corporate collection of data with some inherent meaning,
designed, built, and populated with data for a specific purpose. A database stores
data that is useful to us. This data is only a part of the entire data available in the
world around us.
To be able to successfully design and maintain databases we have to do
the following:
- Identify which part of the world's data is of interest to us.
- Identify what specific objects in that part of the world's data are of interest to us.
- Identify the relationships between the objects.
Hence the objects, their attributes, and the relationships between them that
are of interest to us are stored in the database that is designed, built, and
populated with data for a specific purpose.
CHARACTERISTICS OF A DATABASE MANAGEMENT SYSTEM:
- It represents complex relationships between data.
- It keeps tight control over data redundancy.
- It enforces user-defined rules to ensure the integrity of table data.
- It has a centralized data dictionary for the storage of information pertaining to data and its manipulation.
- It ensures that data can be shared across applications.
- It enforces data access authorization and has automatic, intelligent backup and recovery procedures for data.
- It has different interfaces via which users can manipulate data.

RELATIONAL DATABASE MANAGEMENT:

A relational database management system uses only its relational
capabilities to manage the information stored in its databases.
INFORMATION REPRESENTATION:
All information stored in a relational database is represented only by data
item values, which are stored in the tables that make up the database.
Associations between data items are not logically represented in any other way,
such as by the use of pointers from one table to another.
LOGICAL ACCESSIBILITY:
Every data item value stored in a relational database is accessible by
stating the name of the table it is stored in, the name of the column under which
it is stored, and the value of the primary key that defines the row in which it is
stored.
REPRESENTATION OF NULL VALUES:
The database management system has a consistent method for
representing null values. For example, null values for numeric data must be
distinct from zero or any other numeric value, and for character data they must be
different from a string of blanks or any other character value.
CATALOGUE FACILITIES:
The logical description of the relational database is represented in the same
manner as ordinary data. This is done so that the facilities of the relational
database management system itself can be used to maintain the database
description.
DATA LANGUAGE:
The relational database management system may support many types of
languages for describing data and accessing the database. However, there must
be at least one language that uses ordinary character strings to support the
definition of data, the definition of views, the manipulation of data, constraints on
data integrity, information concerning authorization, and the boundaries for
recovery units.
VIEW UPDATABILITY:
Any view that can be defined using a combination of basic tables that is
theoretically updateable is capable of being updated by the relational
database management system.
INSERT, UPDATE AND DELETE:
Any operation that describes the results of a single retrieval operation is
capable of being applied to an insert, update, or delete operation as well.
PHYSICAL DATA INDEPENDENCE:
Changes made to the physical storage representation or access methods do
not require changes to be made to application programs.
LOGICAL DATA INDEPENDENCE:
Changes made to tables that do not modify any data stored in them
do not require changes to be made to application programs.

INTEGRITY CONSTRAINTS:
Constraints that apply to entity integrity and referential integrity are
specifiable by the data language implemented by the database management
system, and not by statements coded into the application program.
DATABASE DISTRIBUTION:
The data language implemented by the relational database management
system supports the ability to distribute the database without requiring changes
to be made to application programs. This facility must be provided in the data
language, whether or not the database management system itself supports
distributed databases.
NON-SUBVERSION:
If the relational database management system supports facilities that
allow application programs to operate on the tables a row at a time, an
application program using this type of access is prevented from bypassing
the entity integrity or referential integrity constraints that are defined for the
database.

6. SYSTEM DESIGN

6. i. Data Flow Diagrams

Context Level Diagram
[Figure: the user supplies the Driver, URL, Login ID, and Password to the Database Interface, which connects to an MS-SQL Server, MySQL, Oracle, or MS-Access back end.]

Level 1 DFD
[Figure: a driver-selection decision routes the Driver, URL, Login ID, and Password to the Oracle driver or the MySQL driver, which passes the ID and password on to the ORACLE or MY-SQL database.]

DFD for STRUCTURE Module
[Figure: the login details pass through JDBC to the database; from the list of tables the user supplies a table name, and the structure of that table is displayed.]

DFD for PROPERTIES Module
[Figure: after login through JDBC, the user supplies a table name, the number of records per page, and the fields to be displayed; the matching records are displayed.]

DFD for SQL Module
[Figure: after login through JDBC, the user enters an SQL query and the results are displayed.]

DFD for IMPORT Module
[Figure: after login through JDBC, the user selects the source database and the source table, and the data is imported from the source DB.]

DFD for EXPORT Module
[Figure: after login through JDBC, the user selects the table and the export format, and the exported table details are produced.]

DFD for Creation of Table
[Figure: after login, the user chooses Create Table in the DBI, specifies the fields and their sizes, and the table is created in the database.]

DFD for Search Module
[Figure: after login, the user supplies the search data and specifies the tables; the DBI returns the results.]

6. ii. UML ANALYSIS

UML Diagrams

Use Case Diagram
[Figure: the User actor is connected to the use cases Select Driver, Structure, Properties, SQL, Export, Import, Operations, and Search.]

Sequence Diagrams
[Figure: login - the User sends (driver, url, username, pwd) to Login Validate, which runs validate().]
[Figure: structure - after login validation, the User selects a table name, clicks on Structure, and the table structure is displayed.]
[Figure: properties - after login validation, the User selects Browse and chooses the columns to be displayed; the table settings are displayed.]
[Figure: SQL - after login validation, the User selects SQL, enters a query, and clicks Run.]
[Figure: import - after login validation, the User selects Import and imports from an SQL file.]
[Figure: export - after login validation, the User selects Export and supplies the table name and export format.]
[Figure: operations - after login validation, the User selects Operation and enters the table name and field properties to create a table.]
[Figure: search - after login validation, the User selects Search, enters a keyword, and selects the search options.]

Class Diagram
[Figure: GUIComponent is associated with Menu, InputScreen, Options, DataStore, and Report; it uses and instantiates DataManipulator, whose operations include Desc tab, Create tab, Edit form, Rename tab, Browse form, and Empty column, built on DBOperations and List DB.]

State Diagram
[Figure: from the unauthenticated state, (driver, username, pwd) goes to validation; if valid, the authenticated state presents the DB Menu; if invalid, the system returns to unauthenticated.]

7. CODE EFFICIENCY
MEASURES OF CODE EFFICIENCY
The code is designed with the following characteristics in mind.
1. Uniqueness: The code structure must ensure that only one value of the code, with a single meaning, is correctly applied to a given entity or attribute.
2. Expandability: The code structure is designed in a way that allows for growth of its set of entities or attributes, thus providing sufficient space for the entry of new items within each classification.
3. Conciseness: The code requires the fewest possible number of positions to include and define each item.
4. Uniform size and format: Uniform size and format are highly desirable in a mechanized data processing system. The addition of prefixes and suffixes to the root code should not be allowed, especially as it is incompatible with the uniqueness requirement.
5. Simplicity: The codes are designed to be simple to understand and simple to apply.
6. Versatility: The code can be modified easily to reflect necessary changes in the conditions, characteristics, and relationships of the encoded entities. Such changes must result in a corresponding change in the code or coding structure.
7. Sortability: Reports are most valuable for user efficiency when sorted and presented in a predetermined format or order. Although data must be sorted and collated, the representative code for the data does not need to be in a sortable form if it can be correlated with another code that is sortable.
8. Stability: Codes that do not require frequent updating also promote user efficiency. Individual code assignments for a given entity should be made with a minimal likelihood of change, either in the specific code or in the entire coding structure.
9. Meaningfulness: The code is meaningful. Code values should reflect the characteristics of the coded entities, such as mnemonic features, unless such a procedure results in inconsistency and inflexibility.
10. Operability: The code is adequate for present and anticipated data processing, both for machine and human use. Care is taken to minimize the clerical effort and computer time required for continuing the operation.


8. CODE OPTIMIZATION
Introduction:
A good program is not one that merely solves the intended problem,
but one that does so efficiently. An ideal compiler should produce
target code that is as good as code hand-crafted meticulously
to run on the target machine in the most efficient manner, both in terms of
time of execution and memory requirements. The reality, however, is that
this goal is achieved only in limited cases, and that too with difficulty.
Nonetheless, the code produced by straightforward compiling algorithms
can often be made more space- and time-efficient. This is accomplished by
applying transformations to the produced code. These transformations,
aiming at optimization of compiled code, are known as code optimization,
and compilers that apply code-improving transformations are called
optimizing compilers.
The optimization may be machine dependent or machine
independent. A machine-independent optimization is a set of program
transformations that improve the target code without taking into
consideration any properties of the target machine. Machine-dependent
optimizations, such as register allocation and the utilization of special machine
instruction sequences, on the other hand, depend on the target machine.
The overall performance of a program can be effectively improved if
we can identify the frequently executed parts of a program and then make
these parts as efficient as possible. According to the Pareto principle,
most programs spend ninety percent of their execution time in ten percent
of the code. While the actual percentages may vary, it is often the case
that a small fraction of a program accounts for most of the running time.
Profiling the run-time execution of a program on representative input data
accurately identifies the heavily traveled regions of a program.
Unfortunately, a compiler does not have the benefit of sample input data,
so it must make its best guess as to where the program hot spots are.
In practice, a program's inner loops are good candidates for
improvement. In a language that emphasizes control constructs like while
and for statements, the loops may be evident from the syntax of the
program; in general, a process called control-flow analysis identifies loops
in the flow graph of a program.
The best technique for deciding what transformations are worthwhile
to put into a compiler is to collect statistics about the source programs and
evaluate the benefit of a given set of optimizations on a representative
sample of real source programs.
Organization of an optimizing compiler
A program can be improved at several levels: the algorithm level,
the source program level, the intermediate code level, or the target code
level. Since the techniques needed to analyze and transform a program do
not change significantly with the level, this section concentrates on the
transformation of intermediate code. In the code optimizer, programs are
represented by flow graphs, in which edges indicate the flow of control
and nodes represent basic blocks. Unless otherwise specified, a program
here means a single procedure.
Sources of Optimization
Let us look at some of the most useful code-improving transformations.
If a transformation can be performed by looking only at the statements in a
basic block, it is called local; otherwise, it is called global. Many
transformations can be performed at both the local and global levels.
Local transformations are usually performed first.
Function-preserving Transformations
There are a number of ways in which a compiler can improve a
program without changing the function it computes. Common
sub-expression elimination, copy propagation, dead-code elimination, and
constant folding are common examples of such function-preserving
transformations.
Frequently, a program will include several calculations of the same
value, such as an offset in an array. The programmer cannot avoid some
of these duplicate calculations because they lie below the level of detail
accessible within the source language.
Common sub-expressions
An occurrence of an expression E is called a common subexpression if E was previously computed, and the values of variables in E
have not changed since the previous computation. We can avoid
recomputing the expression if we can use the previously computed value.
Removing such command sub-expressions may optimize the code.
COPY PROPAGATION
The idea behind the copy-propagation transformation is to use g for

59

f, wherever possible after the copy statement f: =g.


Dead Code
A variable is live at a point in a program if its value can be used
subsequently; otherwise, it is dead at that point. A related idea is dead or
useless code: statements that compute values that never get used. While
the programmer is unlikely to introduce any dead code intentionally, it may
appear as the result of previous transformations. Deducing at compile time
that the value of an expression is a constant, and using the constant
instead, is known as constant folding.
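Both ideas can be seen in one short Java sketch (the DEBUG flag and the method are illustrative, not from the project code):

    static final boolean DEBUG = false;    // illustrative compile-time constant

    static int secondsPerDay() {
        if (DEBUG) {                       // the condition folds to false at compile time
            System.out.println("tracing"); // dead code: it can never execute
        }
        return 60 * 60 * 24;               // constant folding evaluates this to 86400
    }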
LOOP OPTIMIZATIONS
Loops are a very important place for optimization because programs
tend to spend the bulk of their time in them. If we decrease the number of
instructions in an inner loop, even if we increase the amount of code
outside that loop, the running time of a program may be improved
considerably. Three important techniques for loop optimization are code
motion, which moves code outside a loop; induction-variable elimination,
which we apply to eliminate loop indices from the inner loops; and
reduction in strength, which replaces an expensive operation by a cheaper
one, such as a multiplication by an addition. Some of these loop
optimization techniques are discussed below:

Code Motion
Code motion is an important modification that decreases the amount of
code in a loop. It takes an expression that yields the same result
independent of the number of times a loop is executed (i.e. a loop-invariant
computation) and places the evaluation of that expression before the loop.
The assumption made here is that an entry for the loop exists. For
instance, consider the evaluation of a loop-invariant computation in the
following while-statement:
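(A representative Java sketch; limit, i, and the trivial loop bodies are illustrative.)

    static void beforeMotion(int limit) {
        int i = 0;
        while (i <= limit - 2) {   // limit - 2 is re-evaluated on every iteration
            i = i + 1;
        }
    }

    static void afterMotion(int limit) {
        int i = 0;
        int t = limit - 2;         // loop-invariant computation hoisted before the loop
        while (i <= t) {
            i = i + 1;
        }
    }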
Clearly, the code motion technique has reduced the number of
computations performed inside the loop.
Induction Variables And Reduction In Strength
Code motion may not be applicable in all situations. Loops are
usually processed from inside to outside. For example, consider the
following loop:
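(A hedged Java sketch; n is an illustrative bound, and use is a hypothetical consumer of the computed value, included only so the sketch compiles.)

    static void beforeReduction(int n) {
        int j = n;
        while (j > 0) {
            j = j - 1;
            int t2 = 5 * j;        // j and t2 move in lock step
            use(t2);
        }
    }

    static void afterReduction(int n) {
        int j = n;
        int t2 = 5 * n;
        while (j > 0) {
            j = j - 1;
            t2 = t2 - 5;           // same value as 5 * j, without the multiplication
            use(t2);
        }
    }

    static void use(int value) { } // hypothetical consumer of t2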
Note that the values of j and t2 remain in lock step; every time the
value of j decreases by 1, that of t2 decreases by 5, because 5 * j is
assigned to t2. Such identifiers are called induction variables.
By the process of induction-variable elimination, when more than one
induction variable is present in a loop, it may be possible to get rid of all
but one.

Basic Blocks Optimization
Many of the structure-preserving transformations can be
implemented by constructing a directed acyclic graph (DAG) for a basic
block. There is a node in the DAG for each of the initial values of the
variables appearing in the basic block, and there is a node n associated
with each statement s within the block. The children of n are those nodes
corresponding to statements that are the last definitions, prior to s, of the
operands used by s. Node n is labeled by the operator applied at s, and
also attached to n is the list of variables for which it is the definition within
the block. We also note those nodes, if any, whose variables are live on
exit from the block; these are the output nodes.
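As a minimal structural sketch in Java (the class and field names are invented for this illustration, not taken from any particular compiler):

    import java.util.List;

    class DagNode {
        String operator;           // operator applied at the statement; null for an initial value
        List<DagNode> children;    // last definitions, prior to this statement, of its operands
        List<String> definedVars;  // variables for which this node is the current definition
        boolean liveOnExit;        // true for output nodes, live on exit from the block
    }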


9. SYSTEM TESTING & IMPLEMENTATION


SOFTWARE TESTING TECHNIQUES:
Software testing is a critical element of software quality assurance
and represents the ultimate review of specification, design, and coding.
TESTING OBJECTIVES:
1. Testing is the process of executing a program with the intent of finding
an error.
2. A good test case is one that has a high probability of finding an as yet
undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
These objectives imply a dramatic change in viewpoint: testing
cannot show the absence of defects; it can only show that software errors
are present.
TEST CASE DESIGN:
Any engineering product can be tested in one of two ways:
1. White Box Testing: This testing is also called glass box testing. In
this testing, knowing the internal operation of a product, tests can be
conducted to ensure that "all gears mesh", that is, that the internal
operation performs according to specification and that all internal
components have been adequately exercised. It is a test case design
method that uses the control structure of the procedural design to derive
test cases. Basis path testing is a white box testing technique.
Basis Path Testing:
i. Flow graph notation
ii. Cyclomatic complexity (see the sketch below)
iii. Deriving test cases
iv. Graph matrices
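As a hedged illustration of cyclomatic complexity (the method below is invented for this example): with one while and one if, the method has two decision points, so V(G) = 2 + 1 = 3, and basis path testing needs at most three independent paths to cover it.

    static int sumOfPositives(int[] values) {
        int sum = 0;
        int i = 0;
        while (i < values.length) {   // decision point 1
            if (values[i] > 0) {      // decision point 2
                sum = sum + values[i];
            }
            i = i + 1;
        }
        return sum;
    }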

Control Structure Testing:
i. Condition testing
ii. Data flow testing
iii. Loop testing

2. Black Box Testing: In this testing, knowing the specified function that a
product has been designed to perform, tests can be conducted that
demonstrate each function is fully operational while at the same time
searching for errors in each function. It fundamentally focuses on the
functional requirements of the software.
The steps involved in black box test case design are:
i. Graph-based testing methods
ii. Equivalence partitioning
iii. Boundary value analysis (both ii and iii are illustrated in the sketch below)
iv. Comparison testing
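A hedged JUnit 4 sketch of equivalence partitioning and boundary value analysis; the Grades class and the pass mark of 40 are inventions for this illustration, not part of the project code.

    import org.junit.Test;
    import static org.junit.Assert.*;

    class Grades {                 // hypothetical class under test
        static boolean isPassing(int marks) { return marks >= 40; }
    }

    public class GradesBlackBoxTest {

        @Test
        public void boundaryValuesAroundThePassMark() {
            assertFalse(Grades.isPassing(39));  // just below the boundary
            assertTrue(Grades.isPassing(40));   // on the boundary
            assertTrue(Grades.isPassing(41));   // just above the boundary
        }

        @Test
        public void oneRepresentativePerEquivalenceClass() {
            assertFalse(Grades.isPassing(20));  // failing partition
            assertTrue(Grades.isPassing(75));   // passing partition
        }
    }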


SOFTWARE TESTING STRATEGIES:


A software testing strategy provides a road map for the software
developer. Testing is a set of activities that can be planned in advance and
conducted systematically. For this reason, a template for software testing,
a set of steps into which we can place specific test case design methods,
should be defined for the software engineering process. Any software
testing strategy should have the following characteristics:
1. Testing begins at the module level and works outward toward the
integration of the entire computer-based system.
2. Different testing techniques are appropriate at different points in time.
3. Testing is conducted by the developer of the software and, for larger
projects, by an independent test group.
4. Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
Unit Testing: Unit testing focuses verification efforts on the smallest unit of
software design (the module). Its two aspects are:
1. Unit test considerations
2. Unit test procedures
Integration Testing: Integration testing is a systematic technique for
constructing the program structure while conducting tests to uncover
errors associated with interfacing. The principal strategies are:

1. Top-Down Integration: Top-down integration is an incremental approach
to the construction of program structure. Modules are integrated by
moving downward through the control hierarchy, beginning with the
main control module.
2. Bottom-Up Integration: Bottom-up integration, as its name implies,
begins construction and testing with atomic modules, those at the
lowest levels of the program structure.
3. Regression Testing: In the context of an integration test strategy,
regression testing is the re-execution of some subset of tests that have
already been conducted, to ensure that changes have not propagated
unintended side effects.
VALIDATION TESTING:
At the culmination of integration testing, software is completely
assembled as a package, interfacing errors have been uncovered and
corrected, and a final series of software tests, validation testing, may
begin. Validation can be defined in many ways, but a simple definition is
that validation succeeds when the software functions in a manner that can
be reasonably expected by the customer.
Reasonable expectation is defined in the software requirements
specification, a document that describes all user-visible attributes of the
software. The specification contains a section titled Validation Criteria.
Information contained in that section forms the basis for a validation
testing approach.


VALIDATION TEST CRITERIA:


Software validation is achieved through a series of black-box tests
that demonstrate conformity with requirements. A test plan outlines the
classes of tests to be conducted, and a test procedure defines the specific
test cases that will be used in an attempt to uncover errors. Both the plan
and the procedure are designed to ensure that all functional requirements
are satisfied, all performance requirements are achieved, documentation
is correct and human-engineered, and other requirements are met.
After each validation test case has been conducted, one of two
possible conditions exists: (1) the function or performance characteristics
conform to specification and are accepted, or (2) a deviation from
specification is uncovered and a deficiency list is created. Deviations or
errors discovered at this stage in a project can rarely be corrected prior to
scheduled completion. It is often necessary to negotiate with the customer
to establish a method for resolving deficiencies.
CONFIGURATION REVIEW:
An important element of the validation process is a configuration
review. The intent of the review is to ensure that all elements of the
software configuration have been properly developed, are catalogued, and
have the necessary detail to support the maintenance phase of the
software life cycle. The configuration review is sometimes called an audit.


Alpha and Beta Testing:


It is virtually impossible for a software developer to foresee how the
customer will really use a program. Instructions for use may be
misinterpreted; strange combinations of data may be regularly used; and
output that seemed clear to the tester may be unintelligible to a user in the
field.
When custom software is built for one customer, a series of
acceptance tests are conducted to enable the customer to validate all
requirements. Conducted by the end user rather than the system
developer, an acceptance test can range from an informal test drive to a
planned and systematically executed series of tests. In fact, acceptance
testing can be conducted over a period of weeks or months, thereby
uncovering cumulative errors that might degrade the system over time.
If software is developed as a product to be used by many
customers, it is impractical to perform formal acceptance tests with each
one. Most software product builders use a process called alpha and beta
testing to uncover errors that only the end user seems able to find.
A customer conducts the alpha test at the developer's site. The
software is used in a natural setting, with the developer looking over the
shoulder of the user and recording errors and usage problems. Alpha
tests are conducted in a controlled environment.
The beta test is conducted at one or more customer sites by the end
users of the software. Unlike alpha testing, the developer is generally not
present. Therefore, the beta test is a live application of the software in an
environment that cannot be controlled by the developer. The customer
records all problems that are encountered during beta testing and reports
these to the developer at regular intervals. As a result of problems
reported during beta testing, the software developer makes modifications
and then prepares for release of the software product to the entire
customer base.
IMPLEMENTATION:
Implementation is the process of having systems personnel check
out and put new equipment into use, train users, and install the new
application.
Depending on the size of the organization that will be involved in
using the application and the risk associated with its use, systems
developers may choose to test the operation in only one area of the firm,
say in one department or with only one or two persons. Sometimes they
will run the old and new systems together to compare the results. In still
other situations, developers will stop using the old system one day and
begin using the new one the next. As we will see, each implementation
strategy has its merits, depending on the business situation in which it is
considered. Regardless of the implementation strategy used, developers
strive to ensure that the system's initial use is trouble-free.
Once installed, applications are often used for many years.
However, both the organization and the users will change, and the
environment will be different over weeks and months. Therefore, the
application will undoubtedly have to be maintained; modifications and
changes will be made to the software, files, or procedures to meet
emerging user requirements. Since organization systems and the
business environment undergo continual change, the information systems
should keep pace. In this sense, implementation is an ongoing process.
Evaluation of the system is performed to identify its strengths and
weaknesses. The actual evaluation can occur along any of the following
dimensions:
Operational Evaluation: Assessment of the manner in which the
system functions, including ease of use, response time, suitability of
information formats, overall reliability, and level of utilization.
Organizational Impact: Identification and measurement of benefits to
the organization in such areas as financial concerns, operational
efficiency, and competitive impact. Includes impact on internal and
external information flows.
User Manager Assessment: Evaluation of the attitudes of senior
managers and user managers within the organization, as well as of
end users.
Development Performance: Evaluation of the development
process in accordance with such yardsticks as overall development time
and effort, conformance to budgets and standards, and other project
management criteria. Includes assessment of development methods and
tools.

Unfortunately, system evaluation does not always receive the
attention it merits. Where properly managed, however, it provides a great
deal of information that can improve the effectiveness of subsequent
application efforts.


10. SYSTEM SECURITY MEASURES


Security in software engineering is a broad topic. This section limits its
scope to defining and discussing software security, software reliability,
developer responsibility, and user responsibility.
COMPUTER SYSTEMS ENGINEERING
Software security applies information security principles to software
development. Information security is commonly defined as "the protection
of information systems against unauthorized access to or modification of
information, whether in storage, processing or transit, and against the
denial of service to authorized users or the provision of service to
unauthorized users, including those measures necessary to detect,
document, and counter such threats."
Many questions regarding security are related to the software life
cycle itself. In particular, the security of code and software processes must
be considered during the design and development phase. In addition,
security must be preserved during operation and maintenance to ensure
the integrity of a piece of software.
The mass of security functionality employed by today's networked
world might deceive us into believing that our jobs as secure system
designers are already done. However, computers and networks are
incredibly insecure. The lack of security stems from two fundamental
problems. Systems that are theoretically secure may not be secure in
practice. Furthermore, systems are increasingly complex, and complexity
provides more opportunities for attacks. It is much easier to prove that a
system is insecure than to demonstrate that one is secure: to prove
insecurity, one simply exploits a particular system vulnerability. On the
other hand, proving a system secure requires demonstrating that all
possible exploits can be defended against (a very daunting, if not
impossible, task).
GOOD PRACTICE
Security is more about managing and mitigating risk than it is about
technology. When developing software, one must first determine the risks
of a particular application. For example, today's typical web site may be
subject to a variety of risks, ranging from defacement, to distributed denial
of service (DDoS) attacks, to transactions with the wrong party.
Once the risks are identified, identifying appropriate security
measures becomes tractable. In particular, when defining requirements, it
is important to consider how the application will be used, who will be using
the application, and so on. With that knowledge, one can decide whether
or not to support complex features like auditing, accounting,
non-repudiation, etc. Another potentially important issue is how to support
naming. The rise of distributed systems has made naming increasingly
important. Naming is typically handled by rendezvous: a principal
exporting a name advertises it somewhere, and someone wishing to use
that name searches for it (phone books and directories are examples). In
a system such as a resource discovery system, both the resources and
the individuals using those resources must be named. Often there are
tradeoffs with respect to naming: while naming can provide a level of
indirection, it can also create additional problems if the names are not
stable. Names can allow principals to play different roles in a particular
system, which can also be useful.


11. COST ESTIMATION OF THE PROJECT


For a given set of requirements, it is desirable to know how much it
will cost to develop the software to satisfy the given requirements, and
how much time development will take. These estimates are needed before
development is initiated. The primary reason for cost and schedule
estimation is to enable the client or developer to perform a cost-benefit
analysis and to support project monitoring and control. A more practical
use of these estimates is in bidding for software projects, where the
developers must give cost estimates to a potential client for the
development contract.
For a software development project, detailed and accurate cost and
schedule estimates are essential prerequisites for managing the project.
Otherwise, even simple questions like "Is the project late?", "Are there
cost overruns?", and "When is the project likely to complete?" cannot be
answered. Cost and schedule estimates are also required to determine
the staffing level for a project during different phases. It can be safely said
that cost and schedule estimates are fundamental to any form of project
management and are almost always required for a project.
Cost in a project is due to the requirements for software, hardware,
and human resources. Hardware resources are such things as the
computer time, terminal time, and memory required for the project,
whereas software resources include the tools and compilers needed
during development. The bulk of the cost of software development is due
to the human resources needed, and most cost estimation procedures
focus on this aspect. Most cost estimates are determined in terms of
person-months (PM). By properly including the overheads (i.e. the cost
of hardware, software, office space, etc.) in the dollar cost of the
person-month, besides the direct cost of the person-month, most costs
for a project can be incorporated by using PM as the basic measure.
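As a worked illustration (the figures are purely illustrative and not taken from this project): if development is estimated at 12 person-months, and the fully loaded cost of one person-month, overheads included, is taken as $5,000, then the total cost estimate is 12 x 5,000 = $60,000.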
Estimates can be based on the subjective opinion of some person or
determined through the use of models. Though there are approaches to
structuring the opinions of persons to achieve a consensus on the cost
estimate, it is generally accepted that it is important to have a more
scientific approach to estimation, through the use of models.
Uncertainties in cost estimation:
One can perform cost estimation at any point in the software life
cycle. As the cost of the project depends on the nature and characteristics
of the project, at any point the accuracy of the estimate will depend on the
amount of reliable information we have about the final product. Clearly,
when the product is delivered, the cost can be accurately determined, as
all the data about the project and the resources spent will be fully known
by then. This is cost estimation with complete knowledge about the
project. On the other extreme is the point when the project is being
initiated or during the feasibility study. At this time, very little is known
about the final product, so the uncertainty in the estimate is at its highest.


12. SCREEN LAYOUT


13. PERT CHART, GANTT CHART


PERT CHART:
Program Evaluation Review Technique (PERT) can be both a cost
and a time management system. PERT is organized by events and
activities or tasks. PERT has several advantages over bar charts and is
likely to be used with more complex projects. One advantage of PERT is
that it is a scheduling device that also shows graphically which tasks must
be completed before others are begun.
Also, by displaying the various task paths, PERT enables the
calculation of a critical path. Each path consists of combinations of tasks
which must be completed. PERT controls time and cost during the project
and also facilitates finding the right balance between completing a project
on time and completing it within the budget.
PERT CHART (milestone dates):
START: 20 Jan 2006
ANALYSIS: 23 Jan 2006
I/O DESIGN: 28 Jan 2006
CODING: 2 Feb 2006
WRITE MANUAL: 8 Feb 2006
INTEGRATION AND TESTING: 12 Feb 2006
FINISH: 15 Feb 2006

Gantt Chart (Bar Chart):
A bar chart is perhaps the simplest form of formal project
management. The bar chart is also known as a Gantt chart. It is used
almost exclusively for scheduling purposes and therefore controls only the
time dimension of projects.
Gantt charts are a project control technique that can be used for
several purposes, including scheduling, budgeting, and resource planning.
A Gantt chart is a bar chart with each bar representing an activity. The
bars are drawn against a time line. The length of each bar is proportional
to the length of time planned for the activity.
GANTT CHART (figure): a timeline from Jan 15, 06 to Feb 15, 06, with one
bar per task: START, ANALYSIS, I/O DESIGN, CODING, WRITE
MANUAL, and INTEGRATION AND TESTING. The white part of each bar
shows the length of time each task is estimated to take; the remainder
marks slack time, i.e., the latest time by which a task must be finished.

14. CONCLUSION
This application can be extended to give maximum performance,
including in its security-related aspects, with the goal of releasing it as a
product in the open market. As it has a very user-friendly look and feel, it
should succeed in the market as a product.

101

15. BIBLIOGRAPHY
The knowledge required for developing this project was extracted from
the following books:
1. The Java 2 Complete Reference, Patrick Naughton
2. The Java 2 Tutorial, Sun Microsystems
3. Core Java 2, Volume I: Fundamentals, SunSoft Press; Cay S. Horstmann & Gary Cornell
4. Core Java 2, Volume II: Advanced Features, SunSoft Press; Cay S. Horstmann & Gary Cornell
5. Professional Java Server Programming, Wrox Publications
6. Java Foundation Classes, O'Reilly Publications
7. System Analysis and Design, James A. Senn
8. Software Engineering, Roger S. Pressman
