Laboratory Manual Data Warehousing and Mining Lab: Department of Computer Science and Engineering
B.TECH
(IV YEAR – I SEM)
(2016-17)
DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING
Vision
To acknowledge quality education and instill high patterns of discipline, making the
students technologically superior and ethically strong, thereby improving the quality of
life of the human race.
Mission
To achieve and impart holistic technical education using the best of infrastructure and
outstanding technical and teaching expertise, to establish the students into competent
and confident engineers.
To evolve into a centre of excellence through creative and innovative teaching-learning
practices, promoting academic achievement so as to produce internationally accepted,
competitive and world-class professionals.
PROGRAMME EDUCATIONAL OBJECTIVES
(PEOs)
2. To facilitate the graduates with the technical skills that prepare them for immediate
employment and for certifications that provide a deeper understanding of the technology in
advanced areas of computer science and related fields, thus encouraging them to pursue
higher education and research based on their interests.
3. To facilitate the graduates with the soft skills that include fulfilling the mission, setting
goals, showing self-confidence by communicating effectively, having a positive attitude,
getting involved in team-work, being a leader, and managing their career and their life.
PROGRAMME OUTCOMES (POs)
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis
of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant
to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and
need for, sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give
and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one's own work, as a member
and leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
MALLA REDDY COLLEGE OF ENGINEERING &
TECHNOLOGY
Maisammaguda, Dhulapally Post, Via Hakimpet, Secunderabad – 500100
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
1. Students are advised to come to the laboratory at least 5 minutes before the starting time;
those who come more than 5 minutes late will not be allowed into the lab.
2. Plan your task properly well before the commencement and come prepared to the lab with
the synopsis / program / experiment details.
3. Student should enter into the laboratory with:
a. Laboratory observation notes with all the details (Problem statement, Aim,
Algorithm, Procedure, Program, Expected Output, etc.,) filled in for the lab
session.
b. Laboratory Record updated up to the last session's experiments, and any other material
(if any) needed in the lab.
c. Proper Dress code and Identity card.
4. Sign in the laboratory login register, write the TIME-IN, and occupy the
computer system allotted to you by the faculty.
5. Execute your task in the laboratory, and record the results / output in the
lab observation note book, and get certified by the concerned faculty.
6. All the students should be polite and cooperative with the laboratory staff, must
maintain the discipline and decency in the laboratory.
7. Computer labs are established with sophisticated and high end branded
systems, which should be utilized properly.
8. Students / Faculty must keep their mobile phones in SWITCHED OFF mode during the lab
sessions. Misuse of the equipment, or misbehaviour with the staff, systems, etc., will attract
severe punishment.
9. Students must take the permission of the faculty in case of any urgency to go out; anybody
found loitering outside the lab / class without permission during working hours will be
treated seriously and punished appropriately.
10. Students should LOG OFF/ SHUT DOWN the computer system before he/she
leaves the lab after completing the task (experiment) in all aspects. He/she must
ensure the system / seat is kept properly.
COURSE OBJECTIVES:
1. Learn how to build a data warehouse and query it (using open source tools like
Pentaho Data Integration Tool, Pentaho Business Analytics).
2. Learn to perform data mining tasks using a data mining toolkit (such as open source
WEKA).
3. Understand the data sets and data preprocessing.
4. Demonstrate the working of algorithms for data mining tasks such as association rule mining,
classification, clustering and regression.
5. Exercise the data mining techniques with varied input values for different parameters.
6. Obtain practical, hands-on experience working with real data sets.
COURSE OUTCOMES:
(Mapped against programme outcomes PO1–PO11.)
A. Build Data Warehouse/Data Mart (using open source tools like Pentaho Data
Integration Tool, Pentaho Business Analytics; or other data warehouse tools like
Microsoft SSIS, Informatica, Business Objects, etc.)
So now we are going to create the 3 tables in HireBase database: Customer, Van, and Hire. Then
we populate them.
Customer table:
Van table:
Hire table:
-- Create database
create database HireBase
go
use HireBase
go
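-- The CREATE TABLE statements for Customer, Van and Hire appear only as screenshots
-- in the source manual, and the loops that populate Customer and Van are likewise not
-- reproduced. The following is a hypothetical DDL sketch: the column names are taken
-- from the population scripts and dimension tables in this exercise, while the data
-- types are assumptions and may differ from the original design.
create table Customer
( CustomerId   varchar(20) primary key,   -- e.g. 'N01', 'N02', ...
  CustomerName varchar(100)               -- assumed column
)
go
create table Van
( RegNo  varchar(10) primary key,         -- e.g. 'Reg1' ... 'Reg20'
  Make   varchar(30),
  Model  varchar(30),
  [Year] varchar(4),
  Colour varchar(20),
  CC     int,
  Class  varchar(20)
)
go
create table Hire
( HireId       varchar(10) primary key,   -- e.g. 'H0001'
  HireDate     date,
  CustomerId   varchar(20),
  RegNo        varchar(10),
  NoOfDays     int,
  VanHire      money,
  SatNavHire   money,
  Insurance    money,
  DamageWaiver money,
  TotalBill    money
)
go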
-- Populate Customer
truncate table Customer
go
declare @i int, @si varchar(10), @DaysFrom1stJan int, @CustomerId int, @RegNo int, @mi int
set @i = 1
while @i <= 1000
begin
set @si = right('000'+convert(varchar(10), @i),4) -- string of i
set @DaysFrom1stJan = (@i-1)%200 --The Hire Date is derived from i modulo 200
set @CustomerId = (@i-1)%100+1 --The CustomerId is derived from i modulo 100
set @RegNo = (@i-1)%20+1 --The Van RegNo is derived from i modulo 20
set @mi = (@i-1)%3+1 --i modulo 3
insert into Hire (HireId, HireDate, CustomerId, RegNo, NoOfDays, VanHire, SatNavHire,
Insurance, DamageWaiver, TotalBill)
values ('H'+@si, DateAdd(d, @DaysFrom1stJan, '2011-01-01'),
left('N0'+CONVERT(varchar(10),@CustomerId),3), 'Reg'+CONVERT(varchar(10), @RegNo),
@mi, @mi*100, @mi*10, @mi*20, @mi*40, @mi*170)
set @i += 1
end
go
So now we are going to create the 3 dimension tables and 1 fact table in the data warehouse:
DimDate, DimCustomer, DimVan and FactHire. We are going to populate the 3 dimensions but
we'll leave the fact table empty. The purpose of this exercise is to show how to populate the fact
table using SSIS.
Date Dimension:
Customer Dimension:
Van Dimension:
And then we do it. This is the script to create and populate those dim and fact tables:
declare @i int, @Date date, @StartDate date, @EndDate date, @DateKey int,
@DateString varchar(10), @Year varchar(4),
@Month varchar(7), @Date1 varchar(20)
set @StartDate = '2006-01-01'
set @EndDate = '2016-12-31'
set @Date = @StartDate
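-- The part of the script that creates the dimension/fact tables and populates DimDate
-- (and DimCustomer) is cut off by page breaks in the source. A hypothetical sketch of
-- the missing DimDate loop follows; the DimDate column names are assumptions.
while @Date <= @EndDate
begin
  set @DateKey    = convert(int, convert(varchar(8), @Date, 112))  -- e.g. 20060101
  set @DateString = convert(varchar(10), @Date, 120)               -- 'yyyy-mm-dd'
  set @Year       = convert(varchar(4), year(@Date))
  set @Month      = left(@DateString, 7)                           -- 'yyyy-mm'
  insert into DimDate (DateKey, [Date], [Year], [Month])
  values (@DateKey, @DateString, @Year, @Month)
  set @Date = dateadd(d, 1, @Date)
end
go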
insert into DimVan (RegNo, Make, Model, [Year], Colour, CC, Class)
select * from HireBase.dbo.Van
go
A.(ii). Design multi-dimensional data models namely Star, Snowflake and Fact
Constellation schemas for any one enterprise (e.g. Banking, Insurance, Finance,
Healthcare, Manufacturing, Automobiles, Sales, etc.).
Ans:
Schema Definition
Multidimensional schema is defined using Data Mining Query Language (DMQL). The two
primitives, cube definition and dimension definition, can be used for defining the data warehouses
and data marts.
Star Schema
• The following diagram shows the sales data of a company with respect to the four
dimensions, namely time, item, branch, and location.
• There is a fact table at the center. It contains the keys to each of four dimensions.
• The fact table also contains the attributes, namely dollars sold and units sold.
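The same example can be written down as a small SQL sketch: one fact table whose foreign keys
reference the four dimension tables. The table and column names below are illustrative
assumptions, not prescribed by the syllabus.

create table DimTime     (time_key int primary key, [day] int, [month] int, quarter int, [year] int)
create table DimItem     (item_key int primary key, item_name varchar(50), brand varchar(30), [type] varchar(30), supplier_type varchar(30))
create table DimBranch   (branch_key int primary key, branch_name varchar(50), branch_type varchar(30))
create table DimLocation (location_key int primary key, street varchar(50), city varchar(50), province varchar(50), country varchar(50))
create table FactSales
( time_key     int references DimTime(time_key),
  item_key     int references DimItem(item_key),
  branch_key   int references DimBranch(branch_key),
  location_key int references DimLocation(location_key),
  dollars_sold money,          -- measure
  units_sold   int             -- measure
)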
Snowflake Schema
• Unlike the star schema, the dimension tables in a snowflake schema are normalized. For
example, the item dimension table of the star schema is normalized and split into two
dimension tables, namely the item and supplier tables.
• Now the item dimension table contains the attributes item_key, item_name, type, brand,
and supplier-key.
• The supplier key is linked to the supplier dimension table. The supplier dimension table
contains the attributes supplier_key and supplier_type.
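Continuing the illustrative SQL sketch above, snowflaking the item dimension simply moves the
supplier attributes into a table of their own (again, names are assumptions):

create table DimSupplier (supplier_key int primary key, supplier_type varchar(30))
create table DimItemSnowflake
( item_key     int primary key,
  item_name    varchar(50),
  brand        varchar(30),
  [type]       varchar(30),
  supplier_key int references DimSupplier(supplier_key)  -- normalized out of the item table
)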
Fact Constellation Schema
• A fact constellation has multiple fact tables. It is also known as a galaxy schema.
• The following diagram shows two fact tables, namely sales and shipping.
• The shipping fact table has the five dimensions, namely item_key, time_key, shipper_key,
from_location, to_location.
• The shipping fact table also contains two measures, namely dollars sold and units sold.
• It is also possible to share dimension tables between fact tables. For example, time, item,
and location dimension tables are shared between the sales and shipping fact table.
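In the same illustrative sketch, adding a second fact table that shares the existing dimension
tables turns the star into a fact constellation; the shipper dimension below is a new, assumed
table, and the measure names follow the description above.

create table DimShipper (shipper_key int primary key, shipper_name varchar(50))
create table FactShipping
( item_key      int references DimItem(item_key),          -- shared with FactSales
  time_key      int references DimTime(time_key),          -- shared with FactSales
  shipper_key   int references DimShipper(shipper_key),
  from_location int references DimLocation(location_key),  -- shared with FactSales
  to_location   int references DimLocation(location_key),
  dollars_sold  money,
  units_sold    int
)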
A.(iii) Write ETL scripts and implement using data warehouse tools.
Ans:
ETL comes from Data Warehousing and stands for Extract-Transform-Load. ETL covers a process
of how the data are loaded from the source system to the data warehouse. Extraction–
transformation–loading (ETL) tools are pieces of software responsible for the extraction of data
from several sources, its cleansing, customization, reformatting, integration, and insertion into a
data warehouse.
Building the ETL process is potentially one of the biggest tasks of building a warehouse; it is
complex, time consuming, and consumes most of data warehouse project’s implementation efforts,
costs, and resources.
Building a data warehouse requires focusing closely on understanding three main
areas:
1. Source Area- The source area has standard models such as entity relationship
diagram.
2. Destination Area- The destination area has standard models such as star schema.
3. Mapping Area- The mapping area, however, does not yet have a standard model.
Abbreviations
• ETL-extraction–transformation–loading
• DW-data warehouse
• DM- data mart
• OLAP- on-line analytical processing
• DS-data sources
• ODS- operational data store
• DSA- data staging area
• DBMS- database management system
• OLTP-on-line transaction processing
• CDC-change data capture
• SCD-slowly changing dimension
• FCME- first-class modeling elements
• EMD-entity mapping diagram
• DSA-data storage area
ETL Process:
Extract
The Extract step covers the data extraction from the source system and makes it accessible for
further processing. The main objective of the extract step is to retrieve all the required data from
the source system with as little resources as possible. The extract step should be designed in a way
that it does not negatively affect the source system in terms of performance, response time or any
kind of locking.
• Update notification - if the source system is able to provide a notification that a record has been
changed and describe the change, this is the easiest way to get the data.
• Incremental extract - some systems may not be able to provide notification that an update has
occurred, but they are able to identify which records have been modified and provide an extract of
such records. During further ETL steps, the system needs to identify changes and propagate it
down. Note, that by using daily extract, we may not be able to handle deleted records properly.
• Full extract - some systems are not able to identify which data has been changed at all, so a full
extract is the only way one can get the data out of the system. The full extract requires keeping a
copy of the last extract in the same format in order to be able to identify changes. Full extract
handles deletions as well.
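As a concrete (hypothetical) illustration of an incremental extract, the staging area can keep the
timestamp of the last successful run and pull only the rows changed since then. All table and
column names in this sketch are assumptions:

-- Hypothetical incremental extract driven by a last-modified timestamp
declare @LastExtract datetime
select @LastExtract = LastExtractTime
from   ETL_Control
where  SourceTable = 'Customer'

insert into Staging_Customer (CustomerId, CustomerName, ModifiedDate)
select CustomerId, CustomerName, ModifiedDate
from   HireBase.dbo.Customer
where  ModifiedDate > @LastExtract            -- only new or changed rows

update ETL_Control
set    LastExtractTime = getdate()
where  SourceTable = 'Customer'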
Transform
The transform step applies a set of rules to transform the data from the source to the target. This
includes converting any measured data to the same dimension (i.e. conformed dimension) using the
same units so that they can later be joined. The transformation step also requires joining data from
several sources, generating aggregates, generating surrogate keys, sorting, deriving new calculated
values, and applying advanced validation rules.
Load
During the load step, it is necessary to ensure that the load is performed correctly and with as little
resources as possible. The target of the Load process is often a database. In order to make the load
process efficient, it is helpful to disable any constraints and indexes before the load and enable
them back only after the load completes. The referential integrity then needs to be maintained by
the ETL tool to ensure consistency.
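In SQL Server, for example, the "disable, load, re-enable" pattern can be sketched as follows;
the index and table names are illustrative assumptions:

alter index IX_FactHire_DateKey on FactHire disable      -- avoid index maintenance during the load
alter table FactHire nocheck constraint all              -- suspend foreign-key checking

-- ... the bulk insert / SSIS data flow loads FactHire here ...

alter table FactHire with check check constraint all     -- re-validate foreign keys
alter index IX_FactHire_DateKey on FactHire rebuild      -- rebuild (re-enable) the index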
ETL can also be implemented as scripts that are simply run on the database. These scripts must be
re-runnable: they should be able to be run without modification to pick up any changes in the
legacy data, and automatically work out how to merge the changes into the new schema.
1. INSERT rows in the new tables based on any data in the source that hasn’t already been created in
the destination
2. UPDATE rows in the new tables based on any data in the source that has already been inserted in
the destination
3. DELETE rows in the new tables where the source data has been deleted
Now, instead of writing a whole lot of INSERT, UPDATE and DELETE statements, I thought
“surely MERGE would be both faster and better” – and in fact, that has turned out to be the case.
By writing all the transformations as MERGE statements, I’ve satisfied all the criteria, while
also making my code very easily modified, updated, fixed and rerun. If I discover a bug or a
change
in requirements, I simply change the way the column is transformed in the MERGE statement, and
re-run the statement. It then takes care of working out whether to insert, update or delete each row.
My next step was to design the architecture for my custom ETL solution. I went to the dba with the
following design, which was approved and created for me:
1. create two new schemas on the new 11g database: LEGACY and MIGRATE
2. take a snapshot of all data in the legacy database, and load it as tables in the LEGACY schema
3. grant read-only on all tables in LEGACY to MIGRATE
4. grant CRUD on all tables in the target schema to MIGRATE.
LEGACY.BMS_PARTIES (
    par_first_name   VARCHAR2(100),
    par_last_name    VARCHAR2(100),
    ...              VARCHAR2(250),
    created_by       ...
    -- (excerpt: the remaining column names and types are cut off in the source)
)
In the new model, we have a new table that represents the same kind of information:
NEW.TBMS_PARTY (
    first_name       VARCHAR2(50),
    surname          VARCHAR2(100),
    ...
    db_created_by    VARCHAR2(50),
    db_created_on    DATE,
    db_modified_by   VARCHAR2(50),
    db_modified_on   DATE
    -- (excerpt: this listing is garbled across a page break in the source; the audit-column
    --  types are shown here as in the MIGRATE.TBMS_PARTY table below)
)
This was the simplest transformation you could possibly think of – the mapping from one to the
other is 1:1, and the columns almost mean the same thing.
MIGRATE.TBMS_PARTY (
    first_name       VARCHAR2(50),
    surname          VARCHAR2(100),
    date_of_birth    DATE,
    business_name    VARCHAR2(300),
    db_created_by    VARCHAR2(50),
    db_created_on    DATE,
    db_modified_by   VARCHAR2(50),
    db_modified_on   DATE,
    deleted          CHAR(1))
The second step is the E and T parts of “ETL”: I query the legacy table, transform the data right
there in the query, and insert it into the intermediary table. However, since I want to be able to
re-run this script as often as I want, I wrote this as a MERGE statement:
-- (reassembled excerpt: lines lost at page breaks in the source are marked with ...)
MERGE INTO MIGRATE.TBMS_PARTY dest
USING (
    SELECT par_id            AS old_par_id,   -- assumed: needed by the ON clause below
           par_id            AS party_id,
           CASE par_domain
             ...                               -- domain-to-party-type mapping (cut off in the source)
           END               AS party_type_code,
           par_first_name    AS first_name,
           par_last_name     AS surname,
           par_dob           AS date_of_birth,
           par_business_name AS business_name,
           created_by        AS db_created_by,
           creation_date     AS db_created_on,
           last_updated_by   AS db_modified_by,
           last_update_date  AS db_modified_on
    FROM   LEGACY.BMS_PARTIES s
    WHERE  NOT EXISTS (                        -- assumed: skip rows already migrated and unchanged
             SELECT null
             FROM   MIGRATE.TBMS_PARTY d
             WHERE  ...                        -- change-detection predicate (partly cut off)
                OR (d.db_modified_on IS NULL ...)
           )
) src
ON (src.OLD_PAR_ID = dest.OLD_PAR_ID)
WHEN MATCHED THEN UPDATE SET
    party_id        = src.party_id,
    party_type_code = src.party_type_code,
    first_name      = src.first_name,
    surname         = src.surname,
    date_of_birth   = src.date_of_birth,
    business_name   = src.business_name,
    db_created_by   = src.db_created_by,
    db_created_on   = src.db_created_on,
    db_modified_by  = src.db_modified_by,
    ...
-- (the rest of the SET list, the WHEN NOT MATCHED THEN INSERT clause and the end of the
--  statement are cut off in the source)
A.(iv) Perform various OLAP operations such as slice, dice, roll-up, drill-down and pivot.
An Online Analytical Processing (OLAP) server is based on the multidimensional data model. It
allows managers and analysts to gain insight into the information through fast, consistent, and
interactive access to it.
• Roll-up
• Drill-down
• Slice and dice
• Pivot (rotate)
Roll-up
Roll-up performs aggregation on a data cube in any of the following ways:
• Initially the concept hierarchy was "street < city < province < country".
• On rolling up, the data is aggregated by ascending the location hierarchy from the level of
city to the level of country.
• When roll-up is performed, one or more dimensions from the data cube are removed.
Drill-down
Drill-down is the reverse operation of roll-up. It is performed by either of the following ways:
• Drill-down is performed by stepping down a concept hierarchy for the dimension time.
• Initially the concept hierarchy was "day < month < quarter < year."
• On drilling down, the time dimension is descended from the level of quarter to the level of
month.
• When drill-down is performed, one or more dimensions from the data cube are added.
• It navigates the data from less detailed data to highly detailed data.
Slice
The slice operation selects one particular dimension from a given cube and provides a new sub-
cube. Consider the following diagram that shows how slice works.
• Here Slice is performed for the dimension "time" using the criterion time = "Q1".
Dice
Dice selects two or more dimensions from a given cube and provides a new sub-cube. Consider
the following diagram that shows the dice operation.
The dice operation on the cube based on the following selection criteria involves three
dimensions.
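These operations can also be expressed as SQL over the illustrative star schema sketched in
section A.(ii); the queries below are a sketch using those assumed table names and example
member values:

-- Roll-up: aggregate sales from the city level up to the country level
select l.country, sum(f.dollars_sold) as dollars_sold
from   FactSales f
       join DimLocation l on f.location_key = l.location_key
group by l.country

-- Slice: fix a single member of one dimension (time = Q1)
select f.*
from   FactSales f
       join DimTime t on f.time_key = t.time_key
where  t.quarter = 1

-- Dice: restrict two or more dimensions at once
select f.*
from   FactSales f
       join DimTime t     on f.time_key     = t.time_key
       join DimLocation l on f.location_key = l.location_key
       join DimItem i     on f.item_key     = i.item_key
where  t.quarter in (1, 2)
  and  l.city in ('Toronto', 'Vancouver')
  and  i.item_name in ('Mobile', 'Modem')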
A. (v). Explore visualization features of the tool for analysis like identifying trends etc.
Ans:
Visualization Features:
WEKA’s visualization allows you to visualize a 2-D plot of the current working relation.
Visualization is very useful in practice; it helps to determine the difficulty of the learning problem.
WEKA can visualize single attributes (1-D) and pairs of attributes (2-D), and rotate 3-D
visualizations (Xgobi-style). WEKA has a “Jitter” option to deal with nominal attributes and to
detect “hidden” data points.
• Access to visualization from the classifier, cluster and attribute selection panels is
available from a popup menu. Click the right mouse button over an entry in the result
list to bring up the menu. You will be presented with options for viewing or saving the
text output and, depending on the scheme, further options for visualizing errors,
clusters, trees, etc.
Select a square that corresponds to the attributes you would like to visualize. For example, let’s
choose ‘outlook’ for the X-axis and ‘play’ for the Y-axis. Click anywhere inside the square that
corresponds to ‘play’ on the left and ‘outlook’ at the top.
In the visualization window, beneath the X-axis selector there is a drop-down list,
‘Colour’, for choosing the color scheme. This allows you to choose the color of points based on
the attribute selected. Below the plot area, there is a legend that describes what values the colors
correspond to. In your example, red represents ‘no’, while blue represents ‘yes’. For better
visibility you should change the color of label ‘yes’. Left-click on ‘yes’ in the ‘Class colour’
box and select lighter color from the color palette.
To the right of the plot area there are series of horizontal strips. Each strip represents an
attribute, and the dots within it show the distribution values of the attribute. You can choose
what axes are used in the main graph by clicking on these strips (left-click changes X-axis, right-
click changes Y-axis).
The software sets X - axis to ‘Outlook’ attribute and Y - axis to ‘Play’. The instances are spread
out in the plot area and concentration points are not visible. Keep sliding ‘Jitter’, a random
displacement given to all points in the plot, to the right, until you can spot concentration points.
The results are shown below. But on this screen we changed ‘Colour’ to temperature. Besides
‘outlook’ and ‘play’, this allows you to see the ‘temperature’ corresponding to the
‘outlook’. It will affect your result because if you see ‘outlook’ = ‘sunny’ and ‘play’ = ‘no’ to
explain the result, you need to see the ‘temperature’ – if it is too hot, you do not want to play.
Change ‘Colour’ to ‘windy’, you can see that if it is windy, you do not want to play as well.
Selecting Instances
Sometimes it is helpful to select a subset of the data using visualization tool. A special
case is the ‘UserClassifier’, which lets you to build your own classifier by interactively
selecting instances. Below the Y – axis there is a drop-down list that allows you to choose a
selection method. A group of points on the graph can be selected in four ways [2]:
1. Select Instance. Clicking on an individual data point brings up a window listing the
attributes of the point. If more than one point appears at the same location, more than
one set of attributes is shown.
2. Rectangle. You can create a rectangle, by dragging, that selects the points inside it.
3. Polygon. You can select several points by building a free-form polygon. Left-click on the
graph to add vertices to the polygon and right-click to complete it.
4. Polyline. To distinguish the points on one side from the ones on the other, you can build a
polyline. Left-click on the graph to add vertices to the polyline and right-click to finish.
B.(i) Downloading and/or installation of WEKA data mining toolkit.
Ans:
1. Download the software as per your requirements from the link given below:
http://www.cs.waikato.ac.nz/ml/weka/downloading.html
2. Java is mandatory for the installation of WEKA, so if you already have Java on your
machine then download only WEKA; otherwise download the version bundled with the JVM.
3. Then open the file location and double-click on the file.
4. Click Next
5. Click I Agree.
6. Make the necessary changes to the settings as per your requirement and click Next. “Full”
and “Associate files” are the recommended settings.
8. If you want a shortcut then check the box and click Install.
9. The installation will start; wait for a while, it will finish within a minute.
11. Hurray! That's all. Click on Finish, take a shovel and start mining. Best of luck.
This is the GUI you get when WEKA is started. You have 4 options: Explorer, Experimenter,
KnowledgeFlow and Simple CLI.
B.(ii)Understand the features of WEKA tool kit such as Explorer, Knowledge flow interface,
Experimenter, command-line interface.
Ans: WEKA
The Weka GUI Chooser (class weka.gui.GUIChooser) provides a starting point for
launching Weka’s main GUI applications and supporting tools. If one prefers a MDI (“multiple
document interface”) appearance, then this is provided by an alternative launcher called “Main”
(class weka.gui.Main). The GUI Chooser consists of four buttons—one for each of the four major
Weka applications—and four menus.
• Explorer An environment for exploring data with WEKA (the rest of this Documentation
deals with this application in more detail).
• Experimenter An environment for performing experiments and conducting statistical tests
between learning schemes.
• Knowledge Flow This environment supports essentially the same functions as the Explorer but
with a drag-and-drop interface. One advantage is that it supports incremental learning.
• SimpleCLI Provides a simple command-line interface that allows direct execution of WEKA
commands for operating systems that do not provide their own command line interface.
1. Explorer
At the very top of the window, just below the title bar, is a row of tabs. When the Explorer
is first started only the first tab is active; the others are grayed out. This is because it is
necessary to open (and potentially pre-process) a data set before starting to explore the data.
The tabs are as follows:
Once the tabs are active, clicking on them flicks between different screens, on which the
respective actions can be performed. The bottom area of the window (including the status box, the
log button, and the Weka bird) stays visible regardless of which section you are in. The Explorer
can be easily extended with custom tabs. The Wiki article “Adding tabs in the Explorer”
explains this in detail.
2.Weka Experimenter:-
The Weka Experiment Environment enables the user to create, run, modify, and analyze
experiments in a more convenient manner than is possible when processing the schemes
individually. For example, the user can create an experiment that runs several schemes against a
series of datasets and then analyze the results to determine if one of the schemes is (statistically)
better than the other schemes.
The Experiment Environment can be run from the command line using the Simple CLI. For
example, the following commands could be typed into the CLI to run the OneR scheme on the Iris
dataset using a basic train and test process. (Note that the commands would be typed on one line
into the CLI.) While commands can be typed directly into the CLI, this technique is not
particularly convenient and the experiments are not easy to modify. The Experimenter comes in
two flavors’, either with a simple interface that provides most of the functionality one needs for
experiments, or with an interface with full access to the Experimenter’s capabilities. You can
choose between those two with the Experiment Configuration Mode radio buttons:
• Simple
• Advanced
Both setups allow you to set up standard experiments that are run locally on a single machine,
or remote experiments, which are distributed between several hosts. Distributing the
experiments cuts down the time they take to complete, but on the other hand the setup takes
more time. The next section covers the standard experiments (both simple and advanced),
followed by the remote experiments and finally the analysis of the results.
3. Knowledge Flow
Introduction
The Knowledge Flow provides an alternative to the Explorer as a graphical front end to
WEKA’s core algorithms.
The Knowledge Flow presents a data-flow inspired interface to WEKA. The user can select
WEKA components from a palette, place them on a layout canvas and connect them together in
order to form a knowledge flow for processing and analyzing data. At present, all of WEKA’s
classifiers, filters, clusterers, associators, loaders and savers are available in the Knowledge
Flow along with some extra tools.
The Knowledge Flow can handle data either incrementally or in batches (the Explorer
handles batch data only). Of course, learning from data incrementally requires a classifier that
can be updated on an instance-by-instance basis. Currently in WEKA there are ten classifiers
that can handle data incrementally.
The Simple CLI provides full access to all Weka classes, i.e., classifiers, filters, clusterers,
etc., but without the hassle of the CLASSPATH (it uses the one with which Weka was
started). It offers a simple Weka shell with separated command line and output.
Commands
• break
Stops the current thread, e.g., a running classifier, in a friendly manner.
• kill
Stops the current thread in an unfriendly fashion.
• cls
Clears the output area.
• capabilities <classname> <args>
Lists the capabilities of the specified class, e.g., for a classifier with its options.
• exit
Exits the Simple CLI.
• help [<command>]
Provides help on the available commands.
Invocation
In order to invoke a Weka class, one has only to prefix the class with ”java”. This
command tells the Simple CLI to load a class and execute it with any given parameters. E.g., the
J48 classifier can be invoked on the iris dataset with the following command:
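A typical invocation of this form (assuming iris.arff is in the current directory) is:
java weka.classifiers.trees.J48 -t iris.arff
Here -t names the training file; J48 then prints the pruned tree together with the evaluation
statistics.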
Command redirection
Note: the > must be preceded and followed by a space, otherwise it is not recognized as redirection,
but part of another parameter.
Command completion
Commands starting with java support completion for classnames and filenames via Tab
(Alt+BackSpace deletes parts of the command again). In case that there are several matches, Weka
lists all possible matches.
• Classname completion. Typing a prefix such as weka.cl and pressing Tab lists the
matching packages and classes, e.g.:
weka.classifiers
weka.clusterers
• Filename Completion
In order for Weka to determine whether the string under the cursor is a classname or a
filename, filenames need to be absolute (Unix/Linux: /some/path/file; Windows:
C:\Some\Path\file) or relative and starting with a dot (Unix/Linux: ./some/other/path/file;
Windows: .\Some\Other\Path\file).
An ARFF (= Attribute-Relation File Format) file is an ASCII text file that describes a list of
instances sharing a set of attributes.
ARFF files are not the only format one can load, but all files that can be converted with
Weka’s “core converters”. The following formats are currently supported:
• ARFF (+ compressed)
• C4.5
• CSV
• libsvm
• binary serialized instances
• XRFF (+ compressed)
Overview
ARFF files have two distinct sections. The first section is the Header information, which is
followed by the Data information. The Header of the ARFF file contains the name of the relation, a
list of the attributes (the columns in the data), and their types.
2. Sources:
@RELATION iris
@ATTRIBUTE sepal length NUMERIC
@ATTRIBUTE sepal width NUMERIC
@ATTRIBUTE petal length NUMERIC
@ATTRIBUTE petal width NUMERIC
@ATTRIBUTE class {Iris-setosa, Iris-versicolor, Iris-virginica}
The Data of the ARFF file looks like the following:
@DATA
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
The ARFF Header section of the file contains the relation declaration and attribute
declarations.
The relation name is defined as the first line in the ARFF file. The format is: @relation
<relation-name>
where <relation-name> is a string. The string must be quoted if the name includes spaces.
Attribute declarations take the form of an ordered sequence of @attribute statements. Each
attribute in the data set has its own @attribute statement which uniquely defines the name
of that attribute and its data type. The order in which the attributes are declared indicates the
column position in the data section of the file. For example, if an attribute is the third one
declared then Weka expects that all of that attribute's values will be found in the third
comma-delimited column.
The format for the @attribute statement is:
@attribute <attribute-name> <datatype>
where the <attribute-name> must start with an alphabetic character. If spaces are to be
included in the name then the entire name must be quoted.
The <datatype> can be any of the following:
• numeric
• integer is treated as numeric
• real is treated as numeric
• <nominal-specification>
• string
• date [<date-format>]
• relational for multi-instance data (for future use)
where <nominal-specification> and <date-format> are defined below. The keywords numeric,
real, integer, string and date are case insensitive.
Numeric attributes
Nominal attributes
String attributes
String attributes allow us to create attributes containing arbitrary textual values. This is very
useful in text-mining applications, as we can create datasets with string attributes, then
write Weka filters to manipulate strings (like the StringToWordVector filter). String
attributes are declared as follows:
Date attributes
Date attribute declarations take the form: @attribute <name> date [<date-format>] where
<name> is the name for the attribute and <date-format> is an optional string specifying how
date values should be parsed and printed (this is the same format used by
SimpleDateFormat). The default format string accepts the ISO-8601 combined date and
time format: yyyy-MM-dd’T’HH:mm:ss. Dates must be specified in the data section as the
corresponding string representations of the date/time (see example below).
Relational attributes
The ARFF Data section of the file contains the data declaration line and the actual instance
lines.
The @data declaration is a single line denoting the start of the data segment in the file. The
format is:
@data
Each instance is represented on a single line, with carriage returns denoting the end of the
instance. A percent sign (%) introduces a comment, which continues to the end of the line.
Attribute values for each instance are delimited by commas. They must appear in the order
that they were declared in the header section (i.e. the data corresponding to the nth
@attribute declaration is always the nth field of the attribute).
Missing values are represented by a single question mark, for example:
@data
4.4,?,1.5,?,Iris-setosa
Values of string and nominal attributes are case sensitive, and any that contain space or the
comment-delimiter character % must be quoted. (The code suggests that double-quotes are
acceptable and that a backslash will escape individual characters.)
An example follows:
@relation LCCvsLCSH
@attribute LCC string
@attribute LCSH string
@data
Dates must be specified in the data section using the string representation specified in the
attribute declaration.
For example:
@RELATION Timestamps
@ATTRIBUTE timestamp DATE "yyyy-MM-dd HH:mm:ss" @DATA
"2001-04-03 12:12:12"
"2001-05-03 12:59:55"
Relational data must be enclosed within double quotes ("). For example, an instance of the
MUSK1 dataset ("..." denotes an omission):
MUSK-188,"42,...,30",1
• contact-lens.arff
• cpu.arff
• cpu.with-vendor.arff
• diabetes.arff
• glass.arff
• ionosphere.arff
• iris.arff
• labor.arff
• ReutersCorn-train.arff
• ReutersCorn-test.arff
• ReutersGrain-train.arff
• ReutersGrain-test.arff
• segment-challenge.arff
• segment-test.arff
• soybean.arff
• supermarket.arff
• vote.arff
• weather.arff
• weather.nominal.arff
1. outlook
2. temperature
3. humidity
4. windy
5. play
1. sunny
2. overcast
3. rainy
sunny,hot,high,FALSE,no
sunny,hot,high,TRUE,no
overcast,hot,high,FALSE,yes
rainy,mild,high,FALSE,yes
rainy,cool,normal,FALSE,yes
rainy,cool,normal,TRUE,no
overcast,cool,normal,TRUE,yes
sunny,mild,high,FALSE,no
sunny,cool,normal,FALSE,yes
rainy,mild,normal,FALSE,yes
sunny,mild,normal,TRUE,yes
overcast,mild,high,TRUE,yes
overcast,hot,normal,FALSE,yes
rainy,mild,high,TRUE,no
A. Explore the various options available in Weka for preprocessing data and apply them (like
Discretization filters, Resample filter, etc.) on each dataset.
Ans:
Preprocess Tab
1. Loading Data
The first four buttons at the top of the preprocess section enable you to load data into
WEKA:
1. Open file.... Brings up a dialog box allowing you to browse for the data file on the local file
system.
2. Open URL.... Asks for a Uniform Resource Locator address for where the data is stored.
3. Open DB.... Reads data from a database. (Note that to make this work you might have to edit the
file in weka/experiment/DatabaseUtils.props.)
4. Generate.... Enables you to generate artificial data from a variety of Data Generators. Using the
Open file... button you can read files in a variety of formats: WEKA’s ARFF format, CSV
format, C4.5 format, or serialized Instances format. ARFF files typically have a .arff extension,
CSV files a .csv extension, C4.5 files a .data and .names extension, and serialized Instances objects
a .bsi extension.
Current Relation: Once some data has been loaded, the Preprocess panel shows a variety of
information. The Current relation box (the “current relation” is the currently loaded data,
which can be interpreted as a single relational table in database terminology) has three entries:
1. Relation. The name of the relation, as given in the file it was loaded from. Filters (described
below) modify the name of a relation.
Below the Current relation box is a box titled Attributes. There are four buttons, and
beneath them is a list of the attributes in the current relation.
The list has three columns:
1. No. A number that identifies the attribute in the order in which the attributes are specified in the data file.
2. Selection tick boxes. These allow you select which attributes are present in the relation.
3. Name. The name of the attribute, as it was declared in the data file. When you click on different
rows in the list of attributes, the fields change in the box to the right titled Selected attribute.
This box displays the characteristics of the currently highlighted attribute in the list:
1. Name. The name of the attribute, the same as that given in the attribute list.
3. Missing. The number (and percentage) of instances in the data for which this attribute is missing
(unspecified).
4. Distinct. The number of different values that the data contains for this attribute.
5. Unique. The number (and percentage) of instances in the data having a value for this attribute
that no other instances have.
Below these statistics is a list showing more information about the values stored in this
attribute, which differ depending on its type. If the attribute is nominal, the list consists of each
possible value for the attribute along with the number of instances that have that value. If the
attribute is numeric, the list gives four statistics describing the distribution of values in the data—
the minimum, maximum, mean and standard deviation. And below these statistics there is a
coloured histogram, colour-coded according to the attribute chosen as the Class using the box
above the histogram. (This box will bring up a drop-down list of available selections when
clicked.) Note that only nominal Class attributes will result in a colour-coding. Finally, after
pressing the Visualize All button, histograms for all the attributes in the data are shown in a
separate window.
Returning to the attribute list, to begin with all the tick boxes are unticked.
They can be toggled on/off by clicking on them individually. The four buttons above can
also be used to change the selection:
4. Pattern. Enables the user to select attributes based on a Perl 5 regular expression. E.g., .*_id
selects all attributes whose names end with _id.
Once the desired attributes have been selected, they can be removed by clicking the Remove
button below the list of attributes. Note that this can be undone by clicking the Undo button, which
is located next to the Edit button in the top-right corner of the Preprocess panel.
The preprocess section allows filters to be defined that transform the data in various
ways. The Filter box is used to set up the filters that are required. At the left of the Filter
box is a Choose button. By clicking this button it is possible to select one of the filters in
WEKA. Once a filter has been selected, its name and options are shown in the field next to
the Choose button. Clicking on this box with the left mouse button brings up a
GenericObjectEditor dialog box. A click with the right mouse button (or Alt+Shift+left
click) brings up a menu where you can choose, either to display the properties in a
GenericObjectEditor dialog box, or to copy the current setup string to the clipboard.
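For example, after choosing the unsupervised Discretize filter with its default options, the field
next to the Choose button shows a setup string similar to the following (exact option values may
vary with the WEKA version):
weka.filters.unsupervised.attribute.Discretize -B 10 -M -1.0 -R first-last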
The GenericObjectEditor Dialog Box
The GenericObjectEditor dialog box lets you configure a filter. The same kind
of dialog box is used to configure other objects, such as classifiers and clusterers
(see below). The fields in the window reflect the available options.
Right-clicking (or Alt+Shift+Left-Click) on such a field will bring up a popup menu, listing the
following options:
1. Show properties... has the same effect as left-clicking on the field, i.e., a dialog appears
allowing you to alter the settings.
2. Copy configuration to clipboard copies the currently displayed configuration string to the
system’s clipboard and therefore can be used anywhere else in WEKA or in the console. This is
rather handy if you have to setup complicated, nested schemes.
3. Enter configuration... is the “receiving” end for configurations that got copied to the
clipboard earlier on. In this dialog you can enter a class name followed by options (if the class
supports these). This also allows you to transfer a filter setting from the Preprocess panel to a
Filtered Classifier used in the Classify panel.
Left-Clicking on any of these gives an opportunity to alter the filters settings. For example,
the setting may take a text string, in which case you type the string into the text field provided. Or
it may give a drop-down box listing several states to choose from. Or it may do something else,
depending on the information required. Information on the options is provided in a tool tip if you
let the mouse pointer hover over the corresponding field. More information on the filter and its
options can be obtained by clicking on the More button in the About panel at the top of the
GenericObjectEditor window.
Applying Filters
Once you have selected and configured a filter, you can apply it to the data by pressing the
Apply button at the right end of the Filter panel in the Preprocess panel. The Preprocess panel will
then show the transformed data. The change can be undone by pressing the Undo button. You can
also use the Edit...button to modify your data manually in a dataset editor. Finally, the Save...
button at the top right of the Preprocess panel saves the current version of the relation in file
formats that can represent the relation, allowing it to be kept for future use.
The following screenshot shows the effect of discretization
B. Load each dataset into Weka and run the Apriori algorithm with different support and
confidence values. Study the rules generated.
Ans:
Association Rule:
An association rule has two parts, an antecedent (if) and a consequent (then). An antecedent is an
item found in the data. A consequent is an item that is found in combination with the antecedent.
Association rules are created by analyzing data for frequent if/then patterns and using the
criteria support and confidence to identify the most important relationships. Support is an indication
of how frequently the items appear in the database. Confidence indicates the number of times the
if/then statements have been found to be true.
In data mining, association rules are useful for analyzing and predicting customer behavior. They
play an important part in shopping basket data analysis, product clustering, catalog design and store
layout.
• Support count: The support count of an itemset X, denoted by X.count, in a data set T is the
number of transactions in T that contain X. Assume T has n transactions.
• Then,
support = (X ∪ Y).count / n
confidence = (X ∪ Y).count / X.count
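For example, if a data set T contains n = 5 transactions, the itemset {milk, bread} occurs in 3 of
them and {milk} occurs in 4, then for the rule milk → bread: support = 3/5 = 0.6 and
confidence = 3/4 = 0.75.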
C. Apply different discretization filters on numerical attributes and run the Apriori
association rule algorithm. Study the rules generated. Derive interesting insights and observe
the effect of discretization in the rule generation process.
Output : === Run information ===
Unit – III Demonstrate performing classification on data sets.
Classification Tab
Selecting a Classifier
At the top of the classify section is the Classifier box. This box has a text field that gives the
name of the currently selected classifier, and its options. Clicking on the text box with the left
mouse button brings up a GenericObjectEditor dialog box, just the same as for filters, that you can
use to configure the options of the current classifier. With a right click (or Alt+Shift+left click) you
can once again copy the setup string to the clipboard or display the properties in a
GenericObjectEditor dialog box. The Choose button allows you to choose one of the classifiers that
are available in WEKA.
Test Options
The result of applying the chosen classifier will be tested according to the options that are
set by clicking in the Test options box. There are four test modes:
1. Use training set. The classifier is evaluated on how well it predicts the class of the instances it
was trained on.
2. Supplied test set. The classifier is evaluated on how well it predicts the class of a set of
instances loaded from a file. Clicking the Set... button brings up a dialog allowing you to choose
the file to test on.
3. Cross-validation. The classifier is evaluated by cross-validation, using the number of folds that
are entered in the Folds text field.
4. Percentage split. The classifier is evaluated on how well it predicts a certain percentage of the
data which is held out for testing. The amount of data held out depends on the value entered in the
% field.
1. Output model. The classification model on the full training set is output so that it can be
viewed, visualized, etc. This option is selected by default.
2. Output per-class stats. The precision/recall and true/false statistics for each class are output.
This option is also selected by default.
3. Output entropy evaluation measures. Entropy evaluation measures are included in the output.
This option is not selected by default.
4. Output confusion matrix. The confusion matrix of the classifier’s predictions is included in
the output. This option is selected by default.
5. Store predictions for visualization. The classifier’s predictions are remembered so that they
can be visualized. This option is selected by default.
6. Output predictions. The predictions on the evaluation data are output. Note that in the case of
a cross-validation the instance numbers do not correspond to the location in the data!
7. Output additional attributes. If additional attributes need to be output alongside the
predictions, e.g., an ID attribute for tracking misclassifications, then the index of this attribute can
be specified here. The usual Weka ranges are supported, “first” and “last” are therefore valid
indices as well (example: “first-3,6,8,12-last”).
8. Cost-sensitive evaluation. The errors is evaluated with respect to a cost matrix. The Set...
button allows you to specify the cost matrix used.
9. Random seed for xval / % Split. This specifies the random seed used when randomizing the
data before it is divided up for evaluation purposes.
10. Preserve order for % Split. This suppresses the randomization of the data before splitting into
train and test set.
11. Output source code. If the classifier can output the built model as Java source code, you can
specify the class name here. The code will be printed in the “Classifier output” area.
The classifiers in WEKA are designed to be trained to predict a single ‘class’ attribute, which is
the target for prediction. Some classifiers can only learn nominal classes; others can only learn
numeric classes (regression problems); still others can learn both.
By default, the class is taken to be the last attribute in the data. If you want
to train a classifier to predict a different attribute, click on the box below the Test options box to
bring up a drop-down list of attributes to choose from.
Training a Classifier
Once the classifier, test options and class have all been set, the learning process is started by
clicking on the Start button. While the classifier is busy being trained, the little bird moves around.
You can stop the training process at any time by clicking on the Stop button. When training is
complete, several things happen. The Classifier output area to the right of the display is filled with
text describing the results of training and testing. A new entry appears in the Result list box. We
look at the result list below; but first we investigate the text that has been output.
A. Load each dataset into Weka and run the ID3 and J48 classification algorithms, study the
classifier output. Compute the entropy values and the Kappa statistic.
Ans:
Output:
=== Run information ===
Scheme:weka.classifiers.trees.J48 -C 0.25 -M 2
Relation: iris
Instances: 150
Attributes: 5
sepallength
sepalwidth
petallength
petalwidth
class
Test mode:evaluate on training data
Number of Leaves : 5
Correctly Classified Instances 147 98 %
Incorrectly Classified Instances 3 2 %
Kappa statistic 0.97
K&B Relative Info Score 14376.1925 %
K&B Information Score 227.8573 bits 1.519 bits/instance
Class complexity | order 0 237.7444 bits 1.585 bits/instance
Class complexity | scheme 16.7179 bits 0.1115 bits/instance
Complexity improvement (Sf) 221.0265 bits 1.4735 bits/instance
Mean absolute error 0.0233
Root mean squared error 0.108
Relative absolute error 5.2482 %
Root relative squared error 22.9089 %
Total Number of Instances 150
a b c <-- classified as
50 0 0 | a = Iris-setosa
0 49 1 | b = Iris-versicolor
0 2 48 | c = Iris-virginica
The Classifier Output Text
The text in the Classifier output area has scroll bars allowing you to browse
the results. Clicking with the left mouse button into the text area, while holding Alt
and Shift, brings up a dialog that enables you to save the displayed output
in a variety of formats (currently, BMP, EPS, JPEG and PNG). Of course, you
can also resize the Explorer window to get a larger display area.
The output is divided into the following sections:
1. Run information. A list of information giving the learning scheme options, relation name,
instances, attributes and test mode that were involved in the process.
2. Classifier model (full training set). A textual representation of the classification model that was
produced on the full training data.
3. The results of the chosen test mode are broken down thus.
4. Summary. A list of statistics summarizing how accurately the classifier was able to predict the
true class of the instances under the chosen test mode.
5. Detailed Accuracy By Class. A more detailed per-class break down of the classifier’s
prediction accuracy.
6. Confusion Matrix. Shows how many instances have been assigned to each class. Elements show
the number of test examples whose actual class is the row and whose predicted class is the column.
7. Source code (optional). This section lists the Java source code if one
chose “Output source code” in the “More options” dialog.
B. Extract if-then rules from the decision tree generated by the classifier. Observe the confusion
matrix and derive the Accuracy, F-measure, TP rate, FP rate, Precision and Recall values. Apply
the cross-validation strategy with various fold levels and compare the accuracy results.
Ans:
A decision tree is a structure that includes a root node, branches, and leaf nodes. Each internal
node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node
holds a class label. The topmost node in the tree is the root node.
The following decision tree is for the concept buy_computer that indicates whether a customer at a
company is likely to buy a computer or not. Each internal node represents a test on an attribute.
Each leaf node represents a class.
The benefits of having a decision tree are as follows −
IF-THEN Rules:
A rule-based classifier makes use of a set of IF-THEN rules for classification. We can express a
rule in the following form:
IF condition THEN conclusion
Points to remember −
• The antecedent part, the condition, consists of one or more attribute tests and these tests are
logically ANDed.
Note − We can also write rule R1 as follows:
Rule Extraction
Here we will learn how to build a rule-based classifier by extracting IF-THEN rules from a
decision tree.
Points to remember −
• One rule is created for each path from the root to the leaf node.
• The leaf node holds the class prediction, forming the rule consequent.
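For instance, the J48 tree built on the iris data in the previous experiment (5 leaves) typically
yields rules such as:
IF petalwidth <= 0.6 THEN class = Iris-setosa
IF petalwidth > 0.6 AND petalwidth <= 1.7 AND petallength <= 4.9 THEN class = Iris-versicolor
IF petalwidth > 1.7 THEN class = Iris-virginica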
Some of the sequential covering algorithms are AQ, CN2, and RIPPER. As per the general
strategy the rules are learned one at a time. Each time a rule is learned, the tuples covered by the
rule are removed and the process continues for the rest of the tuples. This is because the path to
each leaf in a decision tree corresponds to a rule.
Note − The Decision tree induction can be considered as learning a set of rules
simultaneously.
The Following is the sequential learning Algorithm where rules are learned for one class at a time.
When learning a rule from a class Ci, we want the rule to cover all the tuples from class C only
and no tuple form any other class.
Input:
D, a data set class-labeled tuples,
Att_vals, the set of all attributes and their possible values.
Output: A Set of IF-THEN rules.
Method:
Rule_set = { }; // initial set of rules learned is empty
for each class c do
repeat
Rule = Learn_One_Rule(D, Att_vals, c);
remove tuples covered by Rule from D;
until terminating condition;
Rule_set = Rule_set + Rule; // add the new rule to the rule set
end for
return Rule_set;
• The assessment of rule quality is made on the original set of training data. A rule may
perform well on the training data but less well on subsequent data; that is why rule pruning
is required.
• A rule is pruned by removing a conjunct (attribute test). Rule R is pruned if the pruned version
of R has greater quality, as assessed on an independent set of (pruning) tuples.
FOIL is one of the simple and effective methods for rule pruning. For a given rule R,
FOIL_Prune(R) = (pos − neg) / (pos + neg)
where pos and neg are the numbers of positive and negative tuples covered by R, respectively.
Note − This value increases with the accuracy of R on the pruning set. Hence, if the
FOIL_Prune value is higher for the pruned version of R, then we prune R.
1. Open the WEKA tool.
2. Click on WEKA Explorer.
3. Click on the Preprocess tab.
4. Click on the Open file button.
5. Choose WEKA folder in C drive.
6. Select and Click on data option button.
7. Choose iris data set and open file.
8. Click on classify tab and Choose decision table algorithm and select cross-validation
folds value-10 test option.
9. Click on start button.
Output:
=== Run information ===
Scheme:weka.classifiers.rules.DecisionTable -X 1 -S "weka.attributeSelection.BestFirst -D
1 -N 5"
Relation: iris
Instances: 150
Attributes: 5
sepallength
sepalwidth
petallength
petalwidth
class
Test mode:10-fold cross-validation
Decision Table:
Time taken to build model: 0.02 seconds
a b c <-- classified as
50 0 0 | a = Iris-setosa
0 44 6 | b = Iris-versicolor
0 5 45 | c = Iris-virginica
C. Load each dataset into Weka and perform Naïve Bayes classification and k-nearest neighbor
classification. Interpret the results obtained.
Ans:
Steps to run the Naïve Bayes and k-nearest neighbor classification algorithms in WEKA:
Output: Naïve Bayes
Scheme:weka.classifiers.bayes.NaiveBayes
Relation: iris
Instances: 150
Attributes: 5
sepallength
sepalwidth
petallength
petalwidth
class
Test mode:evaluate on training data
Class
Attribute Iris-setosa Iris-versicolor Iris-virginica
(0.33) (0.33) (0.33)
===============================================================
sepallength
mean 4.9913 5.9379 6.5795
std. dev. 0.355 0.5042 0.6353
weight sum 50 50 50
precision 0.1059 0.1059 0.1059
sepalwidth
mean 3.4015 2.7687 2.9629
std. dev. 0.3925 0.3038 0.3088
weight sum 50 50 50
precision 0.1091 0.1091 0.1091
petallength
mean 1.4694 4.2452 5.5516
std. dev. 0.1782 0.4712 0.5529
weight sum 50 50 50
precision 0.1405 0.1405 0.1405
petalwidth
mean 0.2743 1.3097 2.0343
std. dev. 0.1096 0.1915 0.2646
weight sum 50 50 50
precision 0.1143 0.1143 0.1143
a b c <-- classified as
50 0 0 | a = Iris-setosa
0 48 2 | b = Iris-versicolor
0 4 46 | c = Iris-virginica
Output: k-Nearest Neighbor (IBk)
Scheme:weka.classifiers.lazy.IBk -K 1 -W 0 -A "weka.core.neighboursearch.LinearNNSearch -A
\"weka.core.EuclideanDistance -R first-last\""
Relation: iris
Instances: 150
Attributes: 5
sepallength
sepalwidth
petallength
petalwidth
class
Test mode:evaluate on training data
TP Rate FP Rate Precision Recall F-Measure ROC Area Class
1 0 1 1 1 1 Iris-setosa
1 0 1 1 1 1 Iris-versicolor
1 0 1 1 1 1 Iris-virginica
Weighted Avg. 1 0 1 1 1 1
a b c <-- classified as
50 0 0 | a = Iris-setosa
0 50 0 | b = Iris-versicolor
0 0 50 | c = Iris-virginica
D.Plot RoC Curves.
Ans:
Scheme:weka.classifiers.trees.J48 -C 0.25 -M 2
Relation: iris
Instances: 150
Attributes: 5
sepallength
sepalwidth
petallength
petalwidth
class
Test mode:evaluate on training data
Number of Leaves : 5
Size of the tree : 9
a b c <-- classified as
50 0 0 | a = Iris-setosa
0 49 1 | b = Iris-versicolor
0 2 48 | c = Iris-virginica
Naïve-bayes:
=== Run information ===
Scheme:weka.classifiers.bayes.NaiveBayes
Relation: iris
Instances: 150
Attributes: 5
sepallength
sepalwidth
petallength
petalwidth
class
Test mode:evaluate on training data
=== Classifier model (full training set) ===
Naive Bayes Classifier
Class
Attribute Iris-setosa Iris-versicolor Iris-virginica
(0.33) (0.33) (0.33)
===============================================================
sepallength
mean 4.9913 5.9379 6.5795
std. dev. 0.355 0.5042 0.6353
weight sum 50 50 50
precision 0.1059 0.1059 0.1059
sepalwidth
mean 3.4015 2.7687 2.9629
std. dev. 0.3925 0.3038 0.3088
weight sum 50 50 50
precision 0.1091 0.1091 0.1091
petallength
mean 1.4694 4.2452 5.5516
std. dev. 0.1782 0.4712 0.5529
weight sum 50 50 50
precision 0.1405 0.1405 0.1405
petalwidth
mean 0.2743 1.3097 2.0343
std. dev. 0.1096 0.1915 0.2646
weight sum 50 50 50
precision 0.1143 0.1143 0.1143
a b c <-- classified as
50 0 0 | a = Iris-setosa
0 50 0 | b = Iris-versicolor
0 0 50 | c = Iris-virginica
Unit – IV: Demonstrate performing clustering on data sets
Clustering Tab
Selecting a Clusterer
By now you will be familiar with the process of selecting and configuring objects. Clicking
on the clustering scheme listed in the Clusterer box at the top of the window brings up a
GenericObjectEditor dialog with which to choose a new clustering scheme.
Cluster Modes
The Cluster mode box is used to choose what to cluster and how to evaluate
the results. The first three options are the same as for classification: Use training set, Supplied test
set and Percentage split (Section 5.3.1)—except that now the data is assigned to clusters instead of
trying to predict a specific class. The fourth mode, Classes to clusters evaluation, compares how
well the chosen clusters match up with a pre-assigned class in the data. The drop-down box below
this option selects the class, just as in the Classify panel.
An additional option in the Cluster mode box, the Store clusters for visualization tick box,
determines whether or not it will be possible to visualize the clusters once training is complete.
When dealing with datasets that are so large that memory becomes a problem it may be helpful to
disable this option.
Ignoring Attributes
Often, some attributes in the data should be ignored when clustering. The Ignore attributes
button brings up a small window that allows you to select which attributes are ignored. Clicking on
an attribute in the window highlights it, holding down the SHIFT key selects a range
of consecutive attributes, and holding down CTRL toggles individual attributes on and off. To
cancel the selection, back out with the Cancel button. To activate it, click the Select button. The
next time clustering is invoked, the selected attributes are ignored.
Learning Clusters
The Cluster section, like the Classify section, has Start/Stop buttons, a result text area and a
result list. These all behave just like their classification counterparts. Right-clicking an entry in the
result list brings up a similar menu, except that it shows only two visualization options: Visualize
cluster assignments and Visualize tree. The latter is grayed out when it is not applicable.
A.Load each dataset into Weka and run simple k-means clustering algorithm with different
values of k(number of desired clusters). Study the clusters formed. Observe the sum of
squared errors and centroids, and derive insights.
Ans:
Output:
kMeans
======
Number of iterations: 7
Within cluster sum of squared errors: 62.1436882815797
Missing values globally replaced with mean/mode
Cluster centroids:
Cluster#
Attribute Full Data 0 1
(150) (100) (50)
=================================================================
=
sepallength 5.8433 6.262 5.006
sepalwidth 3.054 2.872 3.418
petallength 3.7587 4.906 1.464
petalwidth 1.1987 1.676 0.244
class Iris-setosa Iris-versicolor Iris-setosa
Clustered Instances
0 100 ( 67%)
1 50 ( 33%)
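As an interpretive note (not part of the WEKA listing above): the within-cluster sum of squared errors is the sum, over all instances, of the squared Euclidean distance between each instance and its cluster centroid. For example, a hypothetical instance (5.0, 3.4, 1.5, 0.2) assigned to cluster 1 (centroid 5.006, 3.418, 1.464, 0.244) contributes about (5.0 − 5.006)² + (3.4 − 3.418)² + (1.5 − 1.464)² + (0.2 − 0.244)² ≈ 0.0036 to this sum. Since cluster 1's centroid lies close to the Iris-setosa attribute averages, k = 2 essentially separates Iris-setosa from the other two species; increasing k (for example to 3) typically splits the larger cluster and lowers the total within-cluster SSE.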
B.Explore other clustering techniques available in Weka.
WEKA's visualization allows you to visualize a 2-D plot of the current working relation.
Visualization is very useful in practice; it helps to determine the difficulty of the learning problem.
WEKA can visualize single attributes (1-D) and pairs of attributes (2-D), and rotate 3-D visualizations
(Xgobi-style). WEKA has a "Jitter" option to deal with nominal attributes and to detect "hidden"
data points.
Access to visualization from the classifier, cluster and attribute selection panels is available
from a popup menu. Click the right mouse button over an entry in the result list to bring
up the menu. You will be presented with options for viewing or saving the text output and,
depending on the scheme, further options for visualizing errors, clusters, trees, etc.
In the visualization window, beneath the X-axis selector there is a drop-down list,
‘Colour’, for choosing the color scheme. This allows you to choose the color of points based on
the attribute selected. Below the plot area, there is a legend that describes what values the colors
correspond to. In your example, red represents ‘no’, while blue represents ‘yes’. For better
visibility you should change the color of label ‘yes’. Left-click on ‘yes’ in the ‘Class colour’
box and select lighter color from the color palette.
Selecting Instances
Sometimes it is helpful to select a subset of the data using the visualization tool. A special
case is the 'UserClassifier', which lets you build your own classifier by interactively
selecting instances. Below the Y-axis there is a drop-down list that allows you to choose a
selection method. A group of points on the graph can be selected in four ways [2]:
1. Select Instance. Click on an individual data point. It brings up a window listing the
attributes of the point. If more than one point appears at the same location, more than
one set of attributes will be shown.
2. Rectangle. You can create a rectangle, by dragging, that selects the points inside it.
3. Polygon. You can select several points by building a free-form polygon. Left-click on
the graph to add vertices to the polygon and right-click to complete it.
4. Polyline. To distinguish the points on one side from the ones on the other, you can
build a polyline. Left-click on the graph to add vertices to the polyline and right-click
to finish.
Regression:
Regression is a data mining function that predicts a number. Age, weight, distance, temperature,
income, or sales could all be predicted using regression techniques. For example, a regression
model could be used to predict children's height, given their age, weight, and other factors.
A regression task begins with a data set in which the target values are known. For example, a
regression model that predicts children's height could be developed based on observed data for
many children over a period of time. The data might track age, height, weight, developmental
milestones, family history, and so on. Height would be the target, the other attributes would be
the predictors, and the data for each child would constitute a case.
In the model build (training) process, a regression algorithm estimates the value of the target as a
function of the predictors for each case in the build data. These relationships between predictors
and target are summarized in a model, which can then be applied to a different data set in which
the target values are unknown.
Regression models are tested by computing various statistics that measure the difference
between the predicted values and the expected values. See "Testing a Regression Model".
Regression modeling has many applications in trend analysis, business planning, marketing,
financial forecasting, time series prediction, biomedical and drug response modeling, and
environmental modeling.
You do not need to understand the mathematics used in regression analysis to develop quality
regression models for data mining. However, it is helpful to understand a few basic concepts.
The goal of regression analysis is to determine the values of parameters for a function that cause
the function to best fit a set of data observations that you provide. The following equation
expresses these relationships in symbols. It shows that regression is the process of estimating the
value of a continuous target (y) as a function (F) of one or more predictors (x1 , x2 , ..., xn), a set
of parameters (θ1 , θ2 , ..., θn), and a measure of error (e).
y = F(x,θ) + e
The process of training a regression model involves finding the best parameter values for the
function that minimize a measure of the error, for example, the sum of squared errors.
There are different families of regression functions and different ways of measuring the
error.
Linear Regression
The simplest form of regression to visualize is linear regression with a single predictor. A linear
regression technique can be used if the relationship between x and y can be approximated with a
straight line, as shown in Figure 4-1.
In a linear regression scenario with a single predictor (y = θ2x + θ1), the regression parameters
(also called coefficients) are:
• the slope of the line (θ2), which gives the change in y for a unit change in x, and
• the y-intercept (θ1), the value of y at the point where the line crosses the y-axis (x = 0).
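As a purely illustrative numerical example (the coefficients are made up): with θ2 = 0.5 and θ1 = 2 the fitted line is y = 0.5x + 2, so a case with x = 10 is predicted as ŷ = 7; if the observed value is y = 7.4, the error term for that case is e = 0.4.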
Nonlinear Regression
Often the relationship between x and y cannot be approximated with a straight line. In this case,
a nonlinear regression technique may be used. Alternatively, the data could be preprocessed to
make the relationship linear.
In Figure 4-2, x and y have a nonlinear relationship. Oracle Data Mining supports nonlinear
regression via the gaussian kernel of SVM. (See "Kernel-Based Learning".)
Multivariate Regression
Multivariate regression refers to regression with multiple predictors (x1 , x2 , ..., xn). For purposes
of illustration, Figure 4-1and Figure 4-2 show regression with a single predictor. Multivariate
regression is also referred to as multiple regression.
Regression Algorithms
Generalized Linear Models (GLM) is a popular statistical technique for linear modeling.
Oracle Data Mining implements GLM for regression and classification. See Chapter 12,
"Generalized Linear Models"
Support Vector Machines (SVM) is a powerful, state-of-the-art algorithm for linear and
nonlinear regression. Oracle Data Mining implements SVM for regression and other
mining functions. See Chapter 18, "Support Vector Machines"
Note:
Both GLM and SVM, as implemented by Oracle Data Mining, are particularly suited for mining
data that includes many predictors (wide data).
The Root Mean Squared Error and the Mean Absolute Error are statistics for evaluating the
overall quality of a regression model. Different statistics may also be available depending on the
regression methods used by the algorithm.
The Root Mean Squared Error (RMSE) is the square root of the average squared distance of a
data point from the fitted line. Figure 4-3 shows the formula for the RMSE.
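In the usual notation (this is the standard definition, not necessarily the exact form used in Figure 4-3), for n cases with actual values y_i and predicted values ŷ_i:
RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}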
The Mean Absolute Error (MAE) is the average of the absolute value of the residuals. The MAE
is very similar to the RMSE but is less sensitive to large errors. Figure 4-4 shows the formula for
the MAE.
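In the same notation (standard definition):
MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|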
A. Load each dataset into Weka and build a Linear Regression model, using the training set
option. Study the model obtained. Interpret the regression model and derive patterns and
conclusions from the regression results.
Ans:
Output:
Relation: labor-neg-data
Instances: 57
Attributes: 17
duration
wage-increase-first-year
wage-increase-second-year
wage-increase-third-year
cost-of-living-adjustment
working-hours
pension
standby-pay
shift-differential
education-allowance
statutory-holidays
vacation
longterm-disability-assistance
contribution-to-dental-plan
bereavement-assistance
contribution-to-health-plan
class
duration =
0.4689 * cost-of-living-adjustment=tc,tcf +
0.6523 * pension=none,empl_contr +
1.0321 * bereavement-assistance=yes +
0.3904 * contribution-to-health-plan=full +
0.2765
B. Use the cross-validation and percentage split options and repeat running the Linear
Regression model. Observe the results and derive meaningful conclusions.
Output: cross-validation
vacation
longterm-disability-assistance
contribution-to-dental-plan
bereavement-assistance
contribution-to-health-plan
class
duration =
0.4689 * cost-of-living-adjustment=tc,tcf +
0.6523 * pension=none,empl_contr +
1.0321 * bereavement-assistance=yes +
0.3904 * contribution-to-health-plan=full +
0.2765
contribution-to-dental-plan
bereavement-assistance
contribution-to-health-plan
class
Test mode: split 66.0% train, remainder test
C. Explore simple linear regression techniques that only look at one variable.
Description: The business of banks is making loans. Assessing the creditworthiness of an
applicant is of crucial importance. You have to develop a system to help a loan officer
decide whether the credit of a customer is good or bad. A bank's business rules
regarding loans must consider two opposing factors. On the one hand, a bank wants to
make as many loans as possible.
Interest on these loans is the bank's profit source. On the other hand, a bank cannot afford
to make too many bad loans. Too many bad loans could lead to the collapse of the bank.
The bank's loan policy must involve a compromise: not too strict and not too lenient.
To do the assignment, you first and foremost need some knowledge about the world of credit.
You can acquire such knowledge in a number of ways.
1. Knowledge engineering: Find a loan officer who is willing to talk. Interview her and try
to represent her knowledge in the form of rules.
2. Books: Find some training manuals for loan officers or perhaps a suitable textbook on
finance. Translate this knowledge from text form to production rule form.
3. Common sense: Imagine yourself as a loan officer and make up reasonable rules which
can be used to judge the creditworthiness of a loan applicant.
4. Case histories: Find records of actual cases where competent loan officers correctly
judged when to approve a loan application and when not to.
Actual historical credit data is not always easy to come by because of confidentiality
rules. Here is one such dataset, consisting of 1000 actual cases collected in Germany.
In spite of the fact that the data is German, you should probably make use of it for this
assignment (unless you really can consult a real loan officer!).
There are 20 attributes used in judging a loan applicant (7 numerical attributes and 13
categorical or nominal attributes). The goal is to classify the applicant into one of two
categories: good or bad.
1. Checking_Status
2. Duration
3. Credit_history
4. Purpose
5. Credit_amount
6. Savings_status
7. Employment
8. Installment_Commitment
9. Personal_status
10. Other_parties
11. Residence_since
12. Property_Magnitude
13. Age
14. Other_payment_plans
15. Housing
16. Existing_credits
17. Job
18. Num_dependents
19. Own_telephone
20. Foreign_worker
21. Class
1. List all the categorical (or nominal) attributes and the real valued attributes
separately.
3. Click on Invert.
4. Then we get all categorical attributes selected.
5. Click on Remove.
6. Click on Visualize All.
Categorical (nominal) attributes:
1. Checking_Status
2. Credit_history
3. Purpose
4. Savings_status
5. Employment
6. Personal_status
7. Other_parties
8. Property_Magnitude
9. Other_payment_plans
10. Housing
11. Job
12. Own_telephone
13. Foreign_worker
Real-valued (numeric) attributes:
1. Duration
2. Credit_amount
3. Installment_Commitment
4. Residence_since
5. Age
6. Existing_credits
7. Num_dependents
2. What attributes do you think might be crucial in making the credit assessment? Come
up with some simple rules in plain English using your selected attributes.
Ans) The following attributes may be crucial in making the credit assessment.
1. Credit_amount
2. Age
3. Job
4. Savings_status
5. Existing_credits
6. Installment_commitment
7. Property_magnitude
3. One type of model that you can create is a decision tree. Train a decision tree using
the complete dataset as the training data. Report the model obtained after training.
Ans) We created a decision tree by using the J48 technique, giving the complete dataset as the training data.
Output:
4. Suppose you use your above model, trained on the complete dataset, to classify credit as
good/bad for each of the examples in the dataset. What percentage of examples can you classify
correctly? Why do you think you cannot get 100% training accuracy?
Ans) If we use the above model (trained on the complete dataset) to classify credit as good/bad for
each of the examples in that same dataset, we do not get 100% training accuracy; only 85.5% of the
examples are classified correctly.
5. Is testing on the training set as you did above a good idea? Why or why not?
Ans) No, it is not a good idea to test on the training set, because the model has already seen those
examples; the resulting accuracy estimate is optimistically biased and says little about performance
on unseen data.
6. One approach for solving the problem encountered in the previous question is using
cross-validation? Describe what is cross validation briefly. Train a decision tree again
using cross validation and report your results. Does accuracy increase/decrease? Why?
Output:
Cross-Validation Definition: The classifier is evaluated by cross-validation, using the number of
folds entered in the Folds text field. The data is divided into k folds; each fold is held out once for
testing while the classifier is trained on the remaining k − 1 folds, and the k accuracy estimates are
averaged.
In the Classify tab, select the Cross-validation option with a folds value of 2 and press Start; then
change the folds value to 5 and press Start; then change it to 10 and press Start.
i) Fold Size-10
Stratified cross-validation ===
=== Summary ===
a b <-- classified as
588 112 | a = good
183 117 | b = bad
ii) Fold Size-5
a b <-- classified as
596 104 | a = good
163 137 | b = bad
iii) Fold Size-2
a b <-- classified as
624 76 | a = good
203 97 | b = bad
Note: With this observation we have seen that accuracy increases when the fold size is 5
and decreases when the fold size is 10.
7. Check to see if the data shows a bias against "foreign workers" or "personal-status".
One way to do this is to remove these attributes from the dataset and see if the decision
tree created in those cases is significantly different from the full-dataset case, which you
have already done. Did removing these attributes have any significant effect? Discuss.
Output:
Ans) We use the Preprocess tab in the Weka GUI Explorer to remove the attributes "Foreign_worker"
and "Personal_status" one by one. In the Classify tab, select the Use training set option and
press Start. When these attributes are removed from the dataset, we can compare the accuracy
with that obtained on the full dataset.
i) If Foreign_worker is removed
Evaluation on training set ===
=== Summary ===
Correctly Classified Instances 859 85.9 %
Incorrectly Classified Instances 141 14.1 %
Kappa statistic 0.6377
Mean absolute error 0.2233
Root mean squared error 0.3341
Relative absolute error 53.1347 %
Root relative squared error 72.9074 %
Coverage of cases (0.95 level) 100 %
Mean rel. region size (0.95 level) 91.9 %
Total Number of Instances 1000
a b <-- classified as
668 32 | a = good
109 191 | b = bad
ii) If Personal_status is removed
Evaluation on training set ===
=== Summary ===
Total Number of Instances 1000
a b <-- classified as
668 32 | a = good
102 198 | b = bad
Note: With this observation we have seen that when the "Foreign_worker" attribute is removed
from the dataset, the accuracy (85.9%) changes only marginally compared with the full dataset
(85.5%), so removing it has no significant effect.
8. Another question might be: do you really need to input so many attributes to get good
results? Maybe only a few would do. For example, you could try just having attributes
2, 3, 5, 7, 10, 17 and 21. Try out some combinations. (You had removed two attributes in
problem 7; remember to reload the ARFF data file to get all the attributes back before
you start selecting the ones you want.)
Ans) We use the Preprocess tab in the Weka GUI Explorer to remove the 2nd attribute (Duration).
In the Classify tab, select the Use training set option and press Start; with this attribute removed we
can see the change in accuracy compared with the full dataset.
a b <-- classified as
647 53 | a = good
106 194 | b = bad
Remember to reload the previously removed attribute (press the Undo option in the Preprocess tab).
We then use the Preprocess tab in the Weka GUI Explorer to remove the 3rd attribute (Credit_history).
In the Classify tab, select the Use training set option and press Start; with this attribute removed we
can see the change in accuracy compared with the full dataset.
a b <-- classified as
645 55 | a = good
106 194 | b = bad
Remember to reload the previously removed attribute (press the Undo option in the Preprocess tab).
We then use the Preprocess tab in the Weka GUI Explorer to remove the 5th attribute (Credit_amount).
In the Classify tab, select the Use training set option and press Start; with this attribute removed we
can see the change in accuracy compared with the full dataset.
=== Evaluation on training set ===
=== Summary ===
a b <-- classified as
675 25 | a = good
111 189 | b = bad
Remember to reload the previously removed attribute (press the Undo option in the Preprocess tab).
We then use the Preprocess tab in the Weka GUI Explorer to remove the 7th attribute (Employment).
In the Classify tab, select the Use training set option and press Start; with this attribute removed we
can see the change in accuracy compared with the full dataset.
a b <-- classified as
670 30 | a = good
112 188 | b = bad
Remember to reload the previously removed attribute (press the Undo option in the Preprocess tab).
We then use the Preprocess tab in the Weka GUI Explorer to remove the 10th attribute (Other_parties).
In the Classify tab, select the Use training set option and press Start; with this attribute removed we
can see the change in accuracy compared with the full dataset.
Remember to reload the previously removed attribute (press the Undo option in the Preprocess tab).
We then use the Preprocess tab in the Weka GUI Explorer to remove the 17th attribute (Job).
In the Classify tab, select the Use training set option and press Start; with this attribute removed we
can see the change in accuracy compared with the full dataset.
a b <-- classified as
675 25 | a = good
116 184 | b = bad
Remember to reload the previously removed attribute (press the Undo option in the Preprocess tab).
We then use the Preprocess tab in the Weka GUI Explorer to remove the 21st attribute (Class).
In the Classify tab, select the Use training set option and press Start; with this attribute removed we
can see the change in accuracy compared with the full dataset.
Note: With this observation we have seen that when the 3rd attribute is removed from the dataset, the
accuracy (83%) decreases, so this attribute is important for classification. When the 2nd and 10th
attributes are removed from the dataset, the accuracy (84%) stays the same, so we can remove either
one of them. When the 7th and 17th attributes are removed from the dataset, the accuracy (85%) stays
the same, so we can remove either one of them. If we remove the 5th and 21st attributes, the accuracy
increases, so these attributes may not be needed for the classification.
9. Sometimes, the cost of rejecting an applicant who actually has good credit might be
higher than accepting an applicant who has bad credit. Instead of counting the
misclassifications equally in both cases, give a higher cost to the first case (say, cost 5) and a
lower cost to the second case, by using a cost matrix in Weka. Train your decision tree again and
report the decision tree and cross-validation results. Are they significantly different from the
results obtained in problem 6?
Ans) In the Weka GUI Explorer, select the Classify tab and the Use training set option. Press the
Choose button and select J48 as the decision tree technique. Then press the More options button to
open the classifier evaluation options window, select Cost-sensitive evaluation and press the Set
button to open the Cost Matrix Editor. Change the number of classes to 2 and press the Resize
button to get a 2x2 cost matrix. Change the value at location (0,1) of the cost matrix to 5, so the
modified cost matrix is as follows:
0.0 5.0
1.0 0.0
Then close the cost matrix editor, press OK, and press the Start button.
=== Evaluation on training set ===
=== Summary ===
a b <-- classified as
669 31 | a = good
114 186 | b = bad
Note: With this observation we have seen that, of the 700 good customers, 669 are classified as good
and 31 are misclassified as bad; of the 300 bad customers, 186 are classified as bad and 114 are
misclassified as good.
10. Do you think it is a good idea to prefer simple decision trees instead of long,
complex decision trees? How does the complexity of a decision tree relate to the bias of the
model?
Ans) It is a good idea to prefer simple decision trees instead of complex ones: a deeper, more complex
tree has lower bias but higher variance and tends to overfit the training data, while a simpler (more
heavily pruned) tree has higher bias but usually generalizes better to unseen data.
11. You can make your decision trees simpler by pruning the nodes. One approach is to
use reduced-error pruning. Explain this idea briefly. Try reduced-error pruning while
training your decision trees using cross-validation and report the decision trees you
obtain. Also report your accuracy using the pruned model. Does your accuracy increase?
Ans) Reduced-error pruning holds back part of the training data as a pruning set and replaces a
subtree by a leaf whenever doing so does not decrease accuracy on the pruning set; this usually gives
a smaller tree that generalizes better. To apply it in Weka: in the GUI Explorer, select the Classify
tab and the Use training set option, press Choose and select J48, click on the "J48 -C 0.25 -M 2"
text beside the Choose button to open the Generic Object Editor, set the reducedErrorPruning
property to True, press OK, and then press the Start button.
12. Convert a decision tree consisting of 2-3 levels into a set of rules. There also exist different
classifiers that output the model in the form of rules; one such classifier in Weka is rules.PART.
Train this model and report the set of rules obtained. Sometimes just one attribute
can be good enough to make the decision, yes, just one! Can you predict which attribute
that might be in this dataset? The OneR classifier uses a single attribute to make decisions (it
chooses the attribute based on minimum error). Report the rule obtained by training a OneR
classifier. Rank the performance of J48, PART and OneR.
Ans)
Ans) In the Weka GUI Explorer, select the Classify tab and the Use training set option. There exist
different classifiers that output the model in the form of rules; such classifiers in Weka are
PART and OneR. Go to Choose, select Rules, select PART and press the Start button.
a b <-- classified as
653 47 | a = good
56 244 | b = bad
Then go to Choose and select Rules in that select OneR and press start Button.
== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances 742 74.2 %
Incorrectly Classified Instances 258 25.8 %
=== Confusion Matrix ===
a b <-- classified as
642 58 | a = good
200 100 | b = bad
Then go to Choose and select Trees in that select J48 and press start Button.
=== Evaluation on training set ===
=== Summary ===
Correctly Classified Instances 855 85.5 %
Incorrectly Classified Instances 145 14.5 %
=== Confusion Matrix ===
a b <-- classified as
669 31 | a = good
114 186 | b = bad
Note: With this observation we have seen the performance of the classifiers; the ranking is as follows:
1. PART
2. J48
3. OneR
Dimension: _name, _hierarchies
A dimension object consists of a set of levels and a set of hierarchies defined over those levels.
The levels represent levels of aggregation; hierarchies describe parent-child relationships among a
set of levels.
For example, a typical calendar dimension could contain five levels. Two hierarchies can
be defined on these levels:
H1: YearL > QuarterL > MonthL > DayL
H2: YearL > WeekL > DayL
The hierarchies are described from parent to child, so that Year is the parent of Quarter, Quarter
is the parent of Month, and so forth.
When you create a definition for a hierarchy, Warehouse Builder creates an identifier key for
each level of the hierarchy and a unique key constraint on the lowest (base) level.
TIME (day, month, year), PATIENT (patient_name, age, address, etc.), SUPPLIER (supplier_name,
medicine_brand_name, address, etc.)
If each dimension has 6 levels, decide the levels and hierarchies; assume the level names
suitably (one possible assignment is sketched below).
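One possible assignment (an illustrative sketch only; any consistent choice of level names is acceptable):
TIME: DayL < WeekL < MonthL < QuarterL < Half-yearL < YearL, with H1: YearL > Half-yearL > QuarterL > MonthL > DayL and H2: YearL > WeekL > DayL.
PATIENT: PatientL < Age-groupL < CityL < DistrictL < StateL < CountryL.
SUPPLIER: Medicine_brandL < SupplierL < CityL < DistrictL < StateL < CountryL.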
Design the hospital management system data warehouse using all the schemas, and give an example.
Data Preprocessing
The preprocess section allows filters to be defined that transform the data in various ways.
The Filter box is used to set up filters that are required. At the left of the Filter box is a
Choose button. By clicking this button it is possible to select one of the filters in Weka. Once
a filter has been selected, its name and options are shown in the field next to the Choose
button. Clicking on this box brings up a GenericObjectEditor dialog box, which lets you
configure a filter. Once you are happy with the settings you have chosen, click OK to return
to the main Explorer window.
Now you can apply it to the data by pressing the Apply button at the right end of the Filter
panel. The Preprocess panel will then show the transformed data. The change can be undone
using the Undo button. Use the Edit button to view your transformed data in the dataset editor.
• Use the filter AddExpression and add an attribute which is the average of attributes
M1 and M2. Name this attribute as AVG.
• Use the attribute filters Discretize and PKIDiscretize to discretize the M1 and
M2 attributes into five bins. (NOTE: Open the file afresh to apply the second filter.)
• Perform Normalize and Standardize on the dataset and identify the difference
between these operations.
• Use the attribute filter FirstOrder to convert the M1 and M2 attributes into a
single attribute representing the first differences between them.
• Add a nominal attribute Grade and use the filter MakeIndicator to convert the
attribute into a Boolean attribute.
• Try if you can accomplish the task in the previous step using the filter
MergeTwoValues.
• Try the following transformation functions and identify the purpose of each:
• NumericTransform
• NominalToBinary
• NumericToBinary
• Remove
• RemoveType
• RemoveUseless
• ReplaceMissingValues
• SwapValues
• Perform Randomize on the given dataset and try to correlate the resultant sequence
with the given one.
LINUX PROGRAMMING
LABORATORY MANUAL
B.TECH
(IV YEAR – I SEM)
(2016-17)
Department of
Computer Science and Engineering
MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY
(Autonomous Institution – UGC, Govt. of India)
Recognized under 2(f) and 12 (B) of UGC ACT 1956
Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC – ‘A’ Grade - ISO 9001:2008 Certified)
Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad – 500100, Telangana State, India
1. To facilitate the graduates with the ability to visualize, gather information, articulate,
analyze, solve complex problems, and make decisions. These are essential to
address the challenges of complex and computation intensive problems increasing
their productivity.
2. To facilitate the graduates with the technical skills that prepare them for immediate
employment and pursue certification providing a deeper understanding of the
technology in advanced areas of computer science and related fields, thus
encouraging to pursue higher education and research based on their interest.
3. To facilitate the graduates with the soft skills that include fulfilling the mission,
setting goals, showing self-confidence by communicating effectively, having a
positive attitude, get involved in team-work, being a leader, managing their career
and their life.
To facilitate the graduates with the knowledge of professional and ethical responsibilities by
paying attention to grooming, being conservative with style, following dress codes, safety
codes,and adapting themselves to technological advancements.
After the completion of the course, B. Tech Computer Science and Engineering, the
graduates will have the following Program Specific Outcomes:
PROGRAM OUTCOMES (POs)
Engineering Graduates will be able to:
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent
responsibilities relevant to the professional engineering practice.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a
member and leader in a team, to manage projects and in multi disciplinary environments.
12. Life- long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological
change.
1. Students are advised to come to the laboratory at least 5 minutes before (to the
starting time), those who come after 5 minutes will not be allowed into the lab.
2. Plan your task properly much before to the commencement, come prepared to the lab
with the synopsis / program / experiment details.
3. Student should enter into the laboratory with:
a. Laboratory observation notes with all the details (Problem statement, Aim,
Algorithm, Procedure, Program, Expected Output, etc.,) filled in for the lab
session.
b. Laboratory Record updated up to the last session experiments and other
utensils (if any) needed in the lab.
c. Proper Dress code and Identity card.
4. Sign in the laboratory login register, write the TIME-IN, and occupy the computer
system allotted to you by the faculty.
5. Execute your task in the laboratory, and record the results / output in the lab
observation note book, and get certified by the concerned faculty.
6. All the students should be polite and cooperative with the laboratory staff, must
maintain the discipline and decency in the laboratory.
7. Computer labs are established with sophisticated and high end branded systems,
which should be utilized properly.
8. Students / Faculty must keep their mobile phones in SWITCHED OFF mode during
the lab sessions. Misuse of the equipment or misbehaviour with the staff and systems,
etc., will attract severe punishment.
9. Students must take the permission of the faculty in case of any urgency to go out ; if
anybody found loitering outside the lab / class without permission during working
hours will be treated seriously and punished appropriately.
10. Students should LOG OFF/ SHUT DOWN the computer system before he/she leaves
the lab after completing the task (experiment) in all aspects. He/she must ensure the
system / seat is kept properly.
INDEX
Procedure to connect to LINUX (Steps)
Step 3: Provide the login and password (nothing is displayed on the screen while typing
the password).
Step 4: Change the default password at your first login.
EXPERIMENT NO: 1 Date:
Aim: Write a Shell Script that accepts a file name, starting and ending line numbers as
arguments and displays all lines between the given line numbers.
ALGORITHM:
Sed command:
stream editor for filtering and transforming text
1. Replacing or substituting string
Sed command is mostly used to replace the text in a file. The below simple sed
command replaces the word "unix" with "linux" in the file.
$sed 's/unix/linux/' file.txt
nl command:
The nl utility in Linux is used to number the lines of a file on the console.
Example:
$ nl sort.txt
1 UK
2 Australia
3 Newzealand
4 Brazil
5 America
Execution:
Viva Questions
EXPERIMENT NO: 2 Date:
AIM: Write a shell Script that deletes all lines containing the specified word in one or more
files supplied as arguments to it.
ALGORITHM:
#!/bin/bash
if [ $# -lt 2 ]; then
echo "Enter at least two files as input in the command line"
else
printf "enter a word to find:"
read word
for f in $*
do
printf "\n In File $f:\n"
sed /$word/d $f
done
fi
Execution:
run1:
check data in input files
[root@localhost sh]# cat abc1.txt
abc
def
ghi
abc
abc
cccc
[root@localhost sh]# cat abc2.txt
abc
def
ghi
abc
abc
cccc
Executing shell script
[root@localhost sh]# sh 2.sh abc1.txt abc2.txt
enter a word to find:abc
In File abc1.txt:
def
ghi
cccc
In File abc2.txt:
def
ghi
cccc
Expected output:
Displays lines from files s1 s2 after deleting the word hi
Viva Questions
EXPERIMENT NO: 3 Date:
Aim: Write a shell script that displays a list of all files in the current directory to which the user has
read, write and execute permissions.
ALGORITHM:
#!/bin/bash
echo "List of Files which have Read, Write and Execute Permissions in Current Directory
are..."
for file in *
do
if [ -r $file -a -w $file -a -x $file ]
then
echo $file
fi
done
Execution:
$sh 3.sh
Expected output:
by executing above shell script you will get all files which has read ,write and execute
Permissions in current working directory
sample output
[root@localhost sh]# sh 3.sh
List of Files which have Read, Write and Execute Permissions in Current Directory are...
5.sh
a.out
Viva Questions:
1.Display all files in a directory
2.how to use chmod
3.How to change file permissions
EXPERIMENT NO: 4 Date:
Aim: Write a shell script that receives any number of file names as arguments, checks if
every argument supplied is a file or a directory and reports accordingly. Whenever the argument
is a file, it reports the number of lines present in it.
ALGORITHM:
Step 1: If the number of arguments is less than 1, print "Enter at least one input file name" and go to step 9.
Step 2: Select a file from the list of arguments provided on the command line.
Step 3: Check whether it is a directory; if yes, print "is a directory" and go to step 9.
Step 4: Check whether it is a regular file; if yes, go to step 5, else go to step 8.
Step 5: Print "given name is a regular file".
Step 6: Print the number of lines in the file.
Step 7: Go to step 2 for the next argument.
Step 8: Print "not a file or a directory".
Step 9: Stop.
Execution:
provide two file names as input one a regular file and other directory
for example abc1.txt a text file as first argument and japs a directory as second argument
Run1:
[root@localhost sh]# sh 4.sh abc1.txt japs
given name is file: abc1.txt
No of lines in file are : 7 abc1.txt
japs is directory
run 2:[root@localhost sh]# sh 4.sh abc1.txt abc2.txt
given name is file: abc1.txt
No of lines in file are : 7 abc1.txt
given name is file: abc2.txt
No of lines in file are : 7 abc2.txt
Viva Questions:
2. x and y are two variables containing numbers? How to add these 2 numbers?
$ expr $x + $y
4. How to find the list of files modified in the last 30 mins in Linux?
$ find . -mmin -30
EXPERIMENT NO: 5 Date:
Aim: Write a shell script that accepts a list of file names as its arguments, and counts and reports
the occurrence of each word that is present in the first argument file in the other argument files.
ALGORITHM:
Script name:5.sh
#!/bin/bash
echo "no of arguments $#"
if [ $# -lt 2 ]
then
echo "Error : Invalid number of arguments."
exit
fi
str=`cat $1 | tr '\n' ' '`
for a in $str
do
echo "in file $a"
echo "Word = $a, Count = `grep -c "$a" $2`"
done
abc
def
ghi
abc
abc
cccc
executing script
[root@localhost sh]# sh 5.sh abc1.txt abc2.txt
Word = abc, Count = 3
Word = def, Count = 1
Word = ghi, Count = 1
Word = abc, Count = 3
Word = abc, Count = 3
Word = cccc, Count = 1
Viva Questions
2. The command "cat file" gives error message "--bash: cat: Command not found". Why?
It is because the PATH variable is corrupt or not set appropriately. And hence the error because the
cat command is not available in the directories present PATH variable.
EXPERIMENT NO: 6 Date:
Viva Questions
2. A string contains a absolute path of a file. How to extract the filename alone from the
absolute path in Linux?
$ x="/home/guru/temp/f1.txt"
$ echo $x | sed 's^.*/^^'
3. How to find all the files created after a pre-defined date time, say after 10th April 10AM?
1. Create a dummy file whose timestamp is the pre-defined date and time (for example with the touch -t command).
2. Find all the files created (modified) after this dummy file (for example with find . -newer).
5. The word "Unix" is present in many .txt files which is present across many files and
also files present in sub directories. How to get the total count of the word "Unix" from all
the .txt files?
$ find . -name "*.txt" -exec grep -c Unix '{}' \; | awk '{x+=$0;}END{print x}'
EXPERIMENT NO: 7 Date:
Aim: Write a shell script to find the factorial of a given number.
Script Name:7.sh
#!/bin/bash
echo "Factorial Calculation Script...."
echo "Enter a number: "
read f
fact=1
factorial=1
while [ $fact -le $f ]
do
factorial=`expr $factorial \* $fact`
fact=`expr $fact + 1`
done
echo "Factorial of $f = $factorial"
EXPERIMENT NO: 8 Date:
Aim:-write an awk script to count number of lines in a file that does not contain vowels
ALGORITHM
Step 1: Create a file with 5-10 lines of data.
Step 2: Write an awk pattern that matches only the lines that do not contain vowels, for example
awk '$0 !~ /[aeiouAEIOU]/ {print $0}' file1
Step 3: For every such line, increment a counter (count = count + 1).
Step 4: Print the count.
Step 5: Stop.
BEGIN { count = 0 }
$0 !~ /[aeiouAEIOU]/ {
count++
}
END {
print "Number of Lines are", count
}
EXPERIMENT NO: 9 Date:
Aim:-write an awk script to find the no of characters ,words and lines in a file
ALGORITHM
Step 1: Create a file with 5 to 10 lines of data.
Step 2: Write an awk script that, for each line, finds the length of the line and adds it to chrcnt.
Step 3: Count the number of fields (NF) on each line and add it to wordcount.
Step 4: The number of records (lines) read so far is available in NR.
Step 5: Print chrcnt, NR and wordcount.
Step 6: Stop.
Awk script name:nc.awk
BEGIN { }
{
len = length($0)
print len, "\t", $0
wordcount += NF
chrcnt += len
}
END {
print "total characters", chrcnt
print "Number of Lines are", NR
print "No of Words count:", wordcount
}
VIVA QUESTIONS:
1.How to find the last modified file or the newest file in a directory?
$ ls -lrt | grep ^- | awk 'END{print $NF}'
2.How to access the 10th command line argument in a shell script in Linux?
$1 for 1st argument, $2 for 2nd, etc... For 10th argument, ${10}, for 11th, ${11} and so on.
4. How to delete a file which has some hidden characters in the file name?
Since the rm command may not be able to delete it, the easiest way to delete a file with some hidden
characters in its name is to delete it with the find command using the inode number of the file.
$ ls –li
total 32
9962571 -rw-r--r-- 1 guru users 0 Apr 23 11:35
$ find . -inum 9962571 -exec rm '{}' \;
5.Using the grep command, how can you display or print the entire file contents?
$ grep '.*' file
6.What is the difference between a local variable and environment variable in Linux?
A local variable is the one in which the scope of the variable is only in the shell in which it is defined.
An environment variable has scope in all the shells invoked by the shell in which it is defined.
EXPERIMENT NO: 10 Date:
Algorithm:
step 1:read the file from keyboard if the file exists
step 2:write data into the file by means of cat command
step 3:if file not exists, create new file
step 4: copy data of file to another by means of cp command
if target file exists cp command overwrites or replaces with new file
step 5:display contents of copied file by cat command in terminal
EXPERIMENT NO: 11 Date:
Aim: implement in c language the following Unix commands using system calls
a)cat b)ls c)mv
SYNTAX:
cat [OPTIONS] [FILE]...
OPTIONS:
-A Show all.
-b Omits line numbers for blank space in the output.
-e A $ character will be printed at the end of each line prior to a new line.
-E Displays a $ (dollar sign) at the end of each line.
-n Line numbers for all the output lines.
-s If the output has multiple empty lines it replaces it with one empty line.
-T Displays the tab characters in the output.
-v Non-printing characters (with the exception of tabs, new-lines & form-feeds) are printed
visibly.
3. To display a file:
$cat file1.txt
This command displays the data in the file.
Algorithm:
Step 1:Start
Step 2: Read the arguments from the command line.
Step 3: If the number of arguments is not two, print ENTER CORRECT ARGUMENTS and stop;
else go to step 4.
Step 4: Read the data from the specified file and write it to the standard output (the screen).
Step 5: Stop.
#include<stdio.h>
#include<sys/types.h>
#include<stdlib.h>
#include<fcntl.h>
#include<unistd.h>
#include<sys/stat.h>
int main(int argc,char *argv[])
{
int fd,n;
char buff[512];
if(argc!=2)
{
printf("ENTER CORRECT ARGUMENTS :");
return 0;
}
if((fd=open(argv[1],O_RDONLY))<0)
{
printf("ERROR");
return 0;
}
while((n=read(fd,buff,sizeof(buff)))>0)
write(1,buff,n);
close(fd);
return 0;
}
Algorithm:
Step 1. Start.
Step 2. open directory using opendir( ) system call.
Step 3. read the directory using readdir( ) system call.
Step 4. print dp.name and dp.inode .
Step 5. repeat above step until end of directory.
Step 6: Stop.
Algorithm:
Step 1: Start
Step 2: open an existed file and one new open file using open() system call
Step 3: read the contents from existed file using read( ) system call
Step 4:write these contents into new file using write system call using write( ) system
call
Step 5: repeat above 2 steps until eof
Step 6: Close the two files using the close( ) system call.
Step 7: Delete the existing (source) file using the unlink( ) system call.
Step 8: Stop.
#include<stdio.h>
#include<string.h>
#include<unistd.h>
int main(int argc ,char *argv[])
{
int r,i;
char p[20],q[20];
if(argc<3)
printf("improper arguments\n file names required\n");
else
if( argc==3)
{
printf("\n%s\n",argv[1],argv[2]);
r=link(argv[1],argv[2]);
printf("%d\n",r);
unlink(argv[1]);
}
else
{
for(i=1;i<argc-1;i++)
{
strcpy(p,argv[argc-1]);
strcat(p,"/");
strcat(p,argv[i]);
printf("%s%s\n",argv[i],p);
link(argv[i],p);
unlink(argv[i]);
}
}
}
EXPERIMENT NO: 12 Date:
Aim:Write a C program that takes one or more file/directory names as
command line input and reports following information
A)File Type B)Number Of Links
c)Time of last Acces D) Read,write and execute permissions
Algorithm:
Step 1:start
Step 2:Declare struct stat a
Step 3:read arguments at command line
Step 4: set the status of the argument using stat(argv[i],&a);
Step 5:Check whether the given file is Directory file by using S_ISDIR(a.st_mode)
if it is a directory file print Directory file
Else
print is Regular file
Step6: print number of links
Step 7:print last time access
Step 8:Print Read,write and execute permissions
Step 9:stop
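A minimal sketch of the algorithm above, using the stat() system call (the exact output format is illustrative):
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
int main(int argc, char *argv[])
{
    struct stat a;
    int i;
    for (i = 1; i < argc; i++) {
        if (stat(argv[i], &a) < 0) {          /* fill the stat structure for this argument */
            perror(argv[i]);
            continue;
        }
        if (S_ISDIR(a.st_mode))               /* step 5: test the file type */
            printf("%s: Directory file\n", argv[i]);
        else
            printf("%s: Regular file\n", argv[i]);
        printf("Number of links    : %ld\n", (long)a.st_nlink);
        printf("Time of last access: %s", ctime(&a.st_atime));
        printf("Owner permissions  : %c%c%c\n",
               (a.st_mode & S_IRUSR) ? 'r' : '-',
               (a.st_mode & S_IWUSR) ? 'w' : '-',
               (a.st_mode & S_IXUSR) ? 'x' : '-');
    }
    return 0;
}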
Program (lists the entries of a directory entered by the user):
#include<fcntl.h>
#include<stdio.h>
#include<stdlib.h>
#include<dirent.h>
#include<unistd.h>
#include<sys/stat.h>
int main()
{
char dirname[10];
DIR *p;
struct dirent *d;
printf("Enter directory name ");
scanf("%s",dirname);
p=opendir(dirname);
if(p==NULL)
{
perror("Cannot find dir.");
exit(-1);
}
while(d=readdir(p))
printf("%s\n",d->d_name);
}
EXPERIMENT NO: 14
Aim:Write a C program to list every file in directory,its inode number and file name
Algorithm:
Step 1:Start
Step 2:Read Directory name
Step 3:open the directory
Step 4: print file name and Inode number of each file in the directory
Step 5:Stop
#include<fcntl.h>
#include<stdio.h>
#include<dirent.h>
#include<sys/stat.h>
int main(int argc,char*argv[])
{
DIR *dirop;
struct dirent *dired;
if(argc!=2)
{
printf("Invalid number of arguments\n");
}
else if((dirop=opendir(argv[1]))==NULL)
printf("Cannot open Directory\n");
else
{
printf("%10s %s \n","Inode","File Name");
while((dired=readdir(dirop))!=NULL)
printf("%10d %s\n ",dired->d_ino,dired->d_name);
closedir(dirop);
}
return 0;
}
EXPERIMENT NO: 15 Date:
Aim: Write a C program that reads the output of the ls -l command through a pipe and stores it in a file.
Algorithm:
Step 1: create a file pointer
FILE *fd;
Step 2:open the pipe for reading the data of ls –l command
fd=popen("ls -l","r");
step 3: read the data from pipe and store it in line buffer
while((fgets(line,200,fd))!=NULL) print ("%s\n",line);
step 4: create a file
if((f1=creat("xx.txt",0644))<0)
print ERROR IN CREATING
step 5:read the data from line and store it to file
while((n=read(line,buff,sizeof(buff)))>0)
write(f1,line,n);
step 6: stop
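A minimal sketch of the algorithm above, assuming the destination file name xx.txt used in step 4:
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
int main(void)
{
    FILE *fd;
    int f1;
    char line[200];
    fd = popen("ls -l", "r");                 /* step 2: pipe reading the output of ls -l */
    if (fd == NULL) {
        perror("popen");
        return 1;
    }
    if ((f1 = creat("xx.txt", 0644)) < 0) {   /* step 4: create the destination file */
        printf("ERROR IN CREATING\n");
        return 1;
    }
    while (fgets(line, sizeof(line), fd) != NULL) {   /* step 3: read line by line */
        printf("%s", line);                   /* echo on the terminal */
        write(f1, line, strlen(line));        /* step 5: copy the line into xx.txt */
    }
    close(f1);
    pclose(fd);
    return 0;
}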
EXPERIMENT NO: 16 Date:
Aim:Write a C program to create child process and allow parent process to display “parent”
and the child to display “child” on the screen
Algorithm:
Step 1: start
Step2: call the fork() function to create a child process
fork function returns 2 values
step 3: which returns 0 to child process
step 4:which returns process id to the parent process
step 5:stop
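A minimal sketch of the algorithm above (the printed messages are illustrative):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main(void)
{
    pid_t pid = fork();              /* step 2: create a child process */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0)
        printf("child\n");           /* step 3: fork() returned 0, so this is the child */
    else {
        printf("parent\n");          /* step 4: fork() returned the child's PID, so this is the parent */
        wait(NULL);                  /* reap the child before exiting */
    }
    return 0;
}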
Execution:
EXPERIMENT NO: 17 Date:
Aim: Write a C program to create a zombie process.
Algorithm:
Step 1:call fork function to create a child process
Step 2:if fork()>0
Then creation of Zombie
By applying sleep function for 10 seconds
Step 3: now terminate the child process
Step 4: exit status child process not reported to parent
Step 5: status any process which is zombie can known by
Applying ps(1) command
Step 6: stop
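A minimal sketch of the algorithm above; the parent sleeps without calling wait(), so the terminated child remains a zombie during the sleep (visible with ps -el):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
int main(void)
{
    pid_t pid = fork();                          /* step 1: create a child process */
    if (pid > 0) {                               /* parent */
        printf("I am parent, my pid is %d\n", getpid());
        sleep(10);                               /* step 2: child is a zombie during this sleep */
    } else if (pid == 0) {                       /* child */
        printf("Iam child my pid is %d\n", getpid());
        printf("My parent pid is:%d\n", getppid());
        exit(0);                                 /* step 3: child terminates; not yet reaped */
    }
    return 0;
}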
Execution:
To see zombie process, after running the program, open a new terminal Give this
command $ps -el|grep a.out
First terminal
Compilation:
[root@dba ~]# cc 17.c
Executing binary
[root@dba ~]# ./a.out
Iam child my pid is 4732
My parent pid is:4731
I am parent, my pid is 4731
EXPERIMENT NO: 18 Date:
Aim: Write a C program to illustrate an orphan process (the parent terminates before the child).
Algorithm:
Step 1: call the fork function to create the child process
Step 2:if (pid==0)
Then print child id and parent id
else goto step 4
Step 3:Then sleep(10)
Print child id and parent id
Step 4: Print child id and parent id
Step 5:which gives the information of orphan process
Step 6:stop
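A minimal sketch of the algorithm above, matching the sample run shown under Execution:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main(void)
{
    pid_t pid;
    printf("I am the original process with PID %d and PPID %d\n", getpid(), getppid());
    pid = fork();
    if (pid == 0) {                               /* child */
        printf("I am child, my pid is %d My Parent pid is:%d\n", getpid(), getppid());
        sleep(10);                                /* parent terminates while the child sleeps */
        printf("Now my pid is %d My parent pid is:%d\n", getpid(), getppid());
        /* the parent PID printed here is now 1: the orphan was adopted by init */
    } else {                                      /* parent */
        printf("I am parent, my pid is %d\n", getpid());
        printf("PID:%d terminates...\n", getpid());
    }
    return 0;
}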
Execution:
Compilation :
[root@dba ~]# cc -o 18 18-1.c
Executing Binary:
[root@dba ~]# ./18
I am the original process with PID 5960 and PPID 5778
I am child, my pid is 5961 My Parent pid is:5960
I am parent, my pid is 5960
PID:5960 terminates...
[root@dba ~]# Now my pid is 5961 My parent pid is:1
EXPERIMENT NO: 19 Date:
Aim: Write a C program that illustrates how to execute two commands concurrently with a
command pipe. Ex: ls -l | sort
Algorithm:
step 1:Start
step 2:call the fork function to create child process
Step 3: If the process id is 0 (child), go to step 4; else (parent), go to step 7.
Step 4: In the child, close the read end of the pipe and copy the write end onto the
standard-output file descriptor (dup2).
Step 5: exec the command ls -l (long listing of files); this part is executed by the child
and its output goes into the pipe.
Step 6: Go to step 11.
step 7: In the parent, close the write end of the pipe and copy the read end onto the
standard-input file descriptor (dup2).
step 8: exec the second command (wc to count the number of lines, words and characters,
or sort as in the aim); it reads its input from the pipe.
step 9: In this way both commands are executed concurrently, connected by the pipe.
step 10: A sketch of the program is given below.
step 11: stop
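A minimal sketch of the algorithm above, using pipe(), dup2() and execlp(); wc is used as the second command here, and substituting sort gives the ls -l | sort of the aim:
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    int fd[2];
    pid_t pid;
    if (pipe(fd) < 0) {
        perror("pipe");
        return 1;
    }
    pid = fork();
    if (pid == 0) {                         /* child: runs ls -l and writes into the pipe */
        close(fd[0]);                       /* close the unused read end */
        dup2(fd[1], 1);                     /* standard output -> pipe write end */
        close(fd[1]);
        execlp("ls", "ls", "-l", (char *)0);
        perror("execlp ls");
        return 1;
    }
    /* parent: runs wc and reads from the pipe */
    close(fd[1]);                           /* close the unused write end */
    dup2(fd[0], 0);                         /* standard input <- pipe read end */
    close(fd[0]);
    execlp("wc", "wc", (char *)0);
    perror("execlp wc");
    return 1;
}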
EXPERIMENT NO: 20 Date:
Aim: Write a C program that illustrates communication between two unrelated processes using
named pipes.
Step 1:start
Step 2:check whether the no of arguments specified were correct or not
Step 3:if no of arguments are less then print error message
Step 4:Open the first named pipe for writing by open system call by setting
O_WRONLY Fd=open(NP1,O_WRONLY)
Step 5: .Open the second named pipe for reading by open system call by setting
O_RDONLY Fd=open(NP2,O_RDONLY)
Step 6: write the data to the pipe by using write system call
write(fd,argv[1],strlen(argv[1]))
Step 7: Read the data from the second pipe by using the read system call:
numread = read(fd, buf, MAX_BUF_SIZE); buf[numread] = '\0';
Step 8: print the data that we have read from pipe
Step 9:stop
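A minimal sketch of the writing side of this algorithm is given below, before the pipe-based client/server listing. The FIFO path names /tmp/np1 and /tmp/np2 and MAX_BUF_SIZE are illustrative assumptions, and the two FIFOs are assumed to have been created beforehand (for example with mkfifo(1)).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#define NP1 "/tmp/np1"          /* assumed FIFO names; create them first with mkfifo */
#define NP2 "/tmp/np2"
#define MAX_BUF_SIZE 255

int main(int argc, char *argv[])
{
    int wfd, rfd, numread;
    char buf[MAX_BUF_SIZE + 1];

    if (argc < 2)                           /* Steps 2 and 3: argument check */
    {
        fprintf(stderr, "Usage: %s <message>\n", argv[0]);
        exit(1);
    }
    wfd = open(NP1, O_WRONLY);              /* Step 4: first FIFO, for writing */
    rfd = open(NP2, O_RDONLY);              /* Step 5: second FIFO, for reading */
    if (wfd < 0 || rfd < 0)
    {
        perror("open");
        exit(1);
    }
    write(wfd, argv[1], strlen(argv[1]));   /* Step 6: send the message */
    numread = read(rfd, buf, MAX_BUF_SIZE); /* Step 7: read the reply */
    if (numread < 0) numread = 0;
    buf[numread] = '\0';
    printf("Received: %s\n", buf);          /* Step 8 */
    close(wfd);
    close(rfd);
    return 0;
}
The corresponding unrelated process would open NP1 for reading and NP2 for writing, in the opposite order.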
#include<stdio.h>
#include<stdlib.h>
#include<sys/types.h>
#include<sys/stat.h>
#include<string.h>
#include<fcntl.h>
#include<unistd.h>   /* read, write, close, fork, sleep, pipe */
#include<sys/wait.h> /* wait */
void server(int,int);
void client(int,int);
int main()
{
int p1[2],p2[2],pid;
pipe(p1);
pipe(p2);
pid=fork();
if(pid==0)
{ close(p1[1]);
close(p2[0]);
server(p1[0],p2[1]);
return 0;
}
close(p1[0]);
close(p2[1]);
client(p1[1],p2[0]);
wait(NULL);
return 0;
}
void client(int wfd,int rfd)
{
int n; char fname[2000]; char buff[2000];
/* reconstructed opening: the client asks for a file name and sends it to the server */
printf("Enter the file name: ");
scanf("%s",fname);
sleep(10);
write(wfd,fname,2000);
n=read(rfd,buff,2000);
buff[n]='\0';
printf("THE RESULTS OF CLIENTS ARE......\n");
write(1,buff,n);
}
void server(int rfd,int wfd)
{
int i,j,n; char fname[2000]; char buff[2000];
n=read(rfd,fname,2000);
fname[n]='\0';
int fd=open(fname,O_RDONLY);
sleep(10);
if(fd<0)
write(wfd,"can't open",10);
else
{
n=read(fd,buff,2000);
write(wfd,buff,n);   /* send the file contents back to the client */
close(fd);
}
}
EXPERIMENT NO: 22 Date:
Aim: Write a C program to create a message queue with read and write permissions and write 3
messages to it with different priority numbers
Algorithm:
Step 1.Start
Step 2.Declare a message queue structure
typedef struct msgbuf {
long mtype;
char mtext[MSGSZ];
} message_buf;
msgtyp = 0: retrieve the next message on the queue, regardless of its mtype.
msgtyp > 0: retrieve the next message with an mtype equal to the specified msgtyp.
msgtyp < 0: retrieve the first message on the queue whose mtype field is less than or
equal to the absolute value of the msgtyp argument.
Usually mtype is set to 1, and mtext is the data that will be added to the queue.
Step 3. Set the message flags used to create the queue
msgflg = IPC_CREAT | 0666
the msgflg argument is an octal integer combining the queue's permissions and
control flags.
Step 4. Get the message queue id for the key 1234, creating the queue if it does not exist
key = 1234
msqid = msgget(key, msgflg)
Step 5. Send the three messages to the queue with msgsnd, giving each a different mtype
(priority) value
Step 6. Stop
Writer:writer21.c
#include <stdio.h> /* printf, etc. */
#include <stdlib.h> /* exit, etc. */
#include <string.h> /* strcpy, strlen, etc. */
#include <time.h> /* ctime, etc. */
#include <sys/msg.h> /* msgget, msgsnd, msgrcv, MSGMAX, etc. */
#include <sys/types.h> /* Data type 'key_t' for 1st arg of 'msgget' */
#include <sys/ipc.h> /* 'struct ipc_perm' in 'struct msgid_ds' in 'msgctl' */
#include "21msgq1.h" /* User defined header file for message queues */
/* Declare the prototype for function 'print_error' */
void print_error(int msg_num, int exit_code);
if ( (msgctl(qd, IPC_STAT, &qstat)) == -1 )
print_error(3,5);
return 0;
}
void print_error(int error_index, int exit_code)
{
fprintf(stderr, "%s", error_msg[error_index]);
exit(exit_code);
}
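The writer21.c listing above is reproduced only in part (it relies on the user-defined header 21msgq1.h, which is not shown). A minimal self-contained sketch of the same idea, creating a queue for key 1234 and sending three messages with different mtype values, might look like this; MSGSZ and the message texts are illustrative assumptions:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define MSGSZ 128                       /* assumed maximum message text size */

typedef struct msgbuf {
    long mtype;
    char mtext[MSGSZ];
} message_buf;

int main()
{
    key_t key = 1234;
    int msqid, msgflg = IPC_CREAT | 0666;
    message_buf sbuf;
    long type;

    /* create (or open) the message queue with read/write permissions */
    if ((msqid = msgget(key, msgflg)) < 0) {
        perror("msgget");
        exit(1);
    }
    /* send three messages, each with a different mtype ("priority") */
    for (type = 1; type <= 3; type++) {
        sbuf.mtype = type;
        snprintf(sbuf.mtext, MSGSZ, "message with priority %ld", type);
        if (msgsnd(msqid, &sbuf, strlen(sbuf.mtext) + 1, IPC_NOWAIT) < 0) {
            perror("msgsnd");
            exit(1);
        }
    }
    printf("Three messages sent to queue %d\n", msqid);
    return 0;
}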
Reader:reader21.c
EXPERIMENT NO: 23 Date:
Aim:-Write a C program that receives messages from a message queue and displays them
Algorithm:
Step 1:Start
Step 2:Declare a message queue structure
typedef struct msgbuf {
long mtype;
char mtext[MSGSZ];
} message_buf;
msgtyp = 0: retrieve the next message on the queue, regardless of its mtype.
msgtyp > 0: retrieve the next message with an mtype equal to the specified msgtyp.
msgtyp < 0: retrieve the first message on the queue whose mtype field is less than or
equal to the absolute value of the msgtyp argument.
Usually mtype is set to 1, and mtext is the data that will be added to the queue.
Step 3:Get the message queue id for the "name" 1234, which was created by the server
key = 1234
Step 4: if ((msqid = msgget(key, 0666)) < 0) then print an error
The msgget() function shall return the message queue identifier associated with the argument
key.
Step 5: Receive message from message queue by using msgrcv function
int msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);
#include <sys/msg.h>
msgrcv(msqid, &rbuf, MSGSZ, 1, 0)
msqid: message queue id
&rbuf: pointer to the user-defined message structure; MSGSZ: message size
Message type: 1
Message flag:The msgflg argument is a bit mask constructed by ORing together zero
or more of the following flags: IPC_NOWAIT or MSG_EXCEPT or
MSG_NOERROR
Step 6: if msgrcv returns a value < 0, report an error
Step 7: otherwise print the received message, rbuf.mtext
Step 8:stop
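Program (a minimal self-contained sketch of the algorithm above; MSGSZ and the structure name are assumptions consistent with the writer in the previous experiment, and key 1234 is taken from Step 3):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define MSGSZ 128                       /* assumed maximum message text size */

typedef struct msgbuf {
    long mtype;
    char mtext[MSGSZ];
} message_buf;

int main()
{
    key_t key = 1234;
    int msqid;
    message_buf rbuf;

    /* get the id of the queue created by the writer */
    if ((msqid = msgget(key, 0666)) < 0) {
        perror("msgget");
        exit(1);
    }
    /* receive the next message of type 1 (blocking, no special flags) */
    if (msgrcv(msqid, &rbuf, MSGSZ, 1, 0) < 0) {
        perror("msgrcv");
        exit(1);
    }
    printf("Message received: %s\n", rbuf.mtext);
    return 0;
}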
EXPERIMENT NO: 24 Date:
Aim:-Write a C program that illustrates suspending and resuming a process using signals
Algorithm:
Step 1: register a signal handler for SIGALRM using the signal function
Step 2: start executing the process (a simple loop)
Step 3: call the alarm function so that SIGALRM is delivered after a few seconds
Step 4: when the signal arrives, the handler function is executed
Step 5: after the handler returns, the interrupted process resumes
Step 6: stop
Program
#include<stdio.h>
#include<signal.h>
#include<unistd.h>
void sig_alarm(int signo)
{
printf("alarm signal received, resuming\n");  /* runs when SIGALRM arrives */
}
int main()
{
int n;
if(signal(SIGALRM,sig_alarm)==SIG_ERR)
printf("Signal error\n");
alarm(5);                  /* deliver SIGALRM after 5 seconds */
for(n=0;n<=15;n++)
{
printf("from for loop n=%d\n",n);
sleep(1);                  /* slow the loop so the alarm interrupts it */
}
printf("main program terminated\n");
return 0;
}
EXPERIMENT NO: 25 Date:
Aim:-Write client-server programs in C for interaction between server and client processes
using Unix domain sockets
Algorithm:-
Sample UNIX server
Step 1:define NAME "socket"
Step 2: sock = socket(AF_UNIX, SOCK_STREAM, 0);
Step 3:if (sock < 0) perror("opening stream socket"); exit(1);
step4: server.sun_family = AF_UNIX;
strcpy(server.sun_path, NAME);
if (bind(sock, (struct sockaddr *) &server, sizeof(struct sockaddr_un)))
{
perror("binding stream socket"); exit(1);
}
step 5: printf("Socket has name %s\n", server.sun_path);
listen(sock, 5);
step 6: for (;;)
{
msgsock = accept(sock, 0, 0);
if (msgsock == -1)
perror("accept");
else
do { bzero(buf, sizeof(buf));
if ((rval = read(msgsock, buf, 1024)) < 0) perror("reading stream message");
else write(1, buf, rval);
} while (rval > 0);
Step 7:stop
Programs:
Server.c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>
int connection_handler(int connection_fd)
{
int nbytes;
char buffer[256];
nbytes = read(connection_fd, buffer, 256);
buffer[nbytes] = 0;
printf("MESSAGE FROM CLIENT: %s\n", buffer);
nbytes = snprintf(buffer, 256, "hello from the server");
write(connection_fd, buffer, nbytes);
close(connection_fd);
return 0;
}
int main(void)
{
struct sockaddr_un address;
int socket_fd, connection_fd;
socklen_t address_length;
pid_t child;
socket_fd = socket(PF_UNIX, SOCK_STREAM, 0);
if(socket_fd < 0)
{
printf("socket() failed\n");
return 1;
}
unlink("./demo_socket");
address.sun_family = AF_UNIX;
snprintf(address.sun_path, sizeof(address.sun_path), "./demo_socket");
if(bind(socket_fd,
(struct sockaddr *) &address,
sizeof(struct sockaddr_un)) != 0)
{
printf("bind() failed\n");
return 1;
}
if(listen(socket_fd, 5) != 0)
{
printf("listen() failed\n");
return 1;
}
address_length = sizeof(address);   /* must be initialised before accept() */
while((connection_fd = accept(socket_fd,
(struct sockaddr *) &address,
&address_length)) > -1)
{
child = fork();
if(child == 0)
{
/* now inside newly created connection handling process */
return connection_handler(connection_fd);
}
/* still inside server process */
close(connection_fd);
} close(socket_fd);
unlink("./demo_socket");
return 0;
}
Client.c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <string.h>
int main(void)
{
struct sockaddr_un address;
int socket_fd, nbytes;
char buffer[256];
/* the socket must be created before connect() can use it */
socket_fd = socket(PF_UNIX, SOCK_STREAM, 0);
if(socket_fd < 0)
{
printf("socket() failed\n");
return 1;
}
address.sun_family = AF_UNIX;
snprintf(address.sun_path, sizeof(address.sun_path), "./demo_socket");
if(connect(socket_fd,
(struct sockaddr *) &address,
sizeof(struct sockaddr_un)) != 0)
{
printf("connect() failed\n");
return 1;
}
nbytes = snprintf(buffer, 256, "hello from a client");
write(socket_fd, buffer, nbytes);
/* read and display the server's reply */
nbytes = read(socket_fd, buffer, 255);
buffer[nbytes] = 0;
printf("MESSAGE FROM SERVER: %s\n", buffer);
close(socket_fd);
return 0;
}
EXPERIMENT NO: 26 Date:
Aim:-Write client-server programs in C for interaction between server and client processes using
Internet domain sockets
Programs
Server.c
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <time.h>
int main(int argc, char *argv[])
{
int listenfd = 0, connfd = 0;
struct sockaddr_in serv_addr;
char sendBuff[1025];
time_t ticks;
/* create the listening socket and clear the address structure and buffer */
listenfd = socket(AF_INET, SOCK_STREAM, 0);
memset(&serv_addr, 0, sizeof(serv_addr));
memset(sendBuff, 0, sizeof(sendBuff));
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
serv_addr.sin_port = htons(5000);
/* bind the socket to port 5000 and start listening */
bind(listenfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr));
listen(listenfd, 10);
while(1)
{
/* accept a connection, send the current time, then close it */
connfd = accept(listenfd, (struct sockaddr*)NULL, NULL);
ticks = time(NULL);
snprintf(sendBuff, sizeof(sendBuff), "%.24s\r\n", ctime(&ticks));
write(connfd, sendBuff, strlen(sendBuff));
close(connfd);
sleep(1);
}
}
Client.c
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <arpa/inet.h>
int main(int argc, char *argv[])
{
int sockfd = 0, n = 0; char recvBuff[1024]; struct sockaddr_in serv_addr;
if(argc != 2)
{
printf("\n Usage: %s <ip of server> \n",argv[0]);
return 1;
}
memset(recvBuff, '0', sizeof(recvBuff));
if((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
{
printf("\n Error : Could not create socket \n");
return 1;
}
memset(&serv_addr, 0, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(5000);
/* convert the server address given on the command line and connect */
if(inet_pton(AF_INET, argv[1], &serv_addr.sin_addr) <= 0)
{ printf("\n inet_pton error occured\n"); return 1; }
if(connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0)
{ printf("\n Error : Connect Failed \n"); return 1; }
/* read the time string sent by the server and print it */
while((n = read(sockfd, recvBuff, sizeof(recvBuff)-1)) > 0)
{
recvBuff[n] = 0;
fputs(recvBuff, stdout);
}
return 0;
}
EXPERIMENT NO: 27 Date:
Aim:-Write a C program that illustrates two processes communicating using Shared
memory
Algorithm:-
step1.Start
step 2.Include header files required for the program are
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
step 3.Declare the variables that are required:
pid_t pid
int *shared /* pointer to the shm */
int shmid
step 4.Use shmget function to create shared memory
#include <sys/shm.h>
int shmget(key_t key, size_t size, int shmflg)
The shmget() function returns the shared memory identifier associated with key.
The argument key is set to IPC_PRIVATE, so that the operating system selects a new key
for the newly created block of shared memory. size is the size of the shared memory
block, and shmflg gives the permissions and control flags for the segment, expressed
as an octal integer.
shmid = shmget (IPC_PRIVATE, sizeof(int), IPC_CREAT | 0666);
print the shared memory id
step 5.if fork()==0 then (child)
begin
shared = shmat(shmid, (void *) 0, 0)
print the shared pointer (shared)
*shared = 2
print *shared
sleep(2)
print *shared
end
step 6.else (parent)
begin
shared = shmat(shmid, (void *) 0, 0)
print the shared pointer (shared)
print *shared
sleep(1)
*shared = 30
printf("Parent value=%d\n", *shared)
sleep(5)
shmctl(shmid, IPC_RMID, 0)
end
step 7.stop.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

int main(void) {
pid_t pid;
int *shared;        /* pointer to the shared memory segment */
int shmid;

/* create a new private shared memory segment large enough for one int */
shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0666);
printf("Shared Memory ID=%d\n", shmid);

if (fork() == 0) {  /* Child */
    /* attach to the shared memory and print the pointer */
    shared = shmat(shmid, (void *) 0, 0);
    printf("Child pointer %p\n", (void *)shared);
    *shared = 1;
    printf("Child value=%d\n", *shared);
    sleep(2);
    printf("Child value=%d\n", *shared);   /* now sees the value written by the parent */
} else {            /* Parent */
    /* attach to the shared memory and print the pointer */
    shared = shmat(shmid, (void *) 0, 0);
    printf("Parent pointer %p\n", (void *)shared);
    printf("Parent value=%d\n", *shared);
    sleep(1);
    *shared = 42;
    printf("Parent value=%d\n", *shared);
    sleep(5);
    shmctl(shmid, IPC_RMID, 0);            /* remove the segment */
}
return 0;
}