
Chapter 1

Introduction to UNIX/Linux
1.1 What is Linux?
Linux (pronounced Lih-nucks) is a UNIX-like operating system that runs on many
different computers. Linux was first released in 1991 by its author Linus Torvalds at the
University of Helsinki. Since then it has grown tremendously in popularity as
programmers around the world embraced his project of building a free operating system,
adding features, and fixing problems.

Linux is popular with today's generation of computer users for the same reasons early
versions of the UNIX operating system enticed fans more than 20 years ago. Linux is
portable, which means you'll find versions running on name-brand or clone PCs, Apple
Macintoshes, Sun workstations, or Digital Equipment Corporation Alpha-based
computers. Linux also comes with source code, so you can change or customize the
software to adapt to your needs. Finally, Linux is a great operating system, rich in
features adopted from other versions of UNIX. We think you'll become a fan too!

1.2 Architecture of the Linux Operating System


Linux operating system components
Kernel
The Linux kernel includes device driver support for a large number of PC hardware
devices (graphics cards, network cards, hard disks etc.), advanced processor and memory
management features, and support for many different types of file systems (including
DOS floppies and the ISO9660 standard for CDROMs). In terms of the services that it
provides to application programs and system utilities, the kernel implements most BSD
and SYSV system calls, as well as the system calls described in the POSIX.1
specification.
The kernel (in raw binary form that is loaded directly into memory at system startup
time) is typically found in the file /boot/vmlinuz, while the source files can usually be
found in /usr/src/linux. The latest version of the Linux kernel sources can be downloaded
from http://www.kernel.org.
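As a quick check of which kernel you are running, the uname command prints the kernel release; the version strings shown below are only illustrative:

[root@localhost ~]# uname -r
2.6.9-5.EL
[root@localhost ~]# ls /boot/vmlinuz*
/boot/vmlinuz-2.6.9-5.EL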

Shells and GUIs


Linux supports two forms of command input: textual command-line shells similar to those
found on most UNIX systems (e.g. sh, the Bourne shell; bash, the Bourne Again shell;
and csh, the C shell) and graphical user interfaces (GUIs) such as the KDE and GNOME
desktop environments. If you are connecting remotely to a server, your access will
typically be through a command-line shell.
Shell types
Just like people know different languages, the computer knows different shell types:
• sh or Bourne Shell: the original shell, still used on UNIX systems and in UNIX-
related environments. This is the basic shell, a small program with few features.
When in POSIX-compatible mode, bash will emulate this shell.

• bash or Bourne Again SHell: the standard GNU shell, intuitive and flexible.
Probably the most advisable for beginning users while at the same time being a
powerful tool for the advanced and professional user. On Linux, bash is the
standard shell for common users. This shell is a so-called superset of the Bourne
shell, adding extra features and extensions. This means that the Bourne Again SHell is
compatible with the Bourne shell: commands that work in sh also work in bash.
However, the reverse is not always the case.

• csh or C Shell: the syntax of this shell resembles that of the C programming
language. Sometimes asked for by programmers.

• tcsh or Turbo C Shell: a superset of the common C Shell, enhancing user-
friendliness and speed.

• ksh or the Korn shell: sometimes appreciated by people with a UNIX
background. A superset of the Bourne shell; with the standard configuration a
nightmare for beginning users.

The file /etc/shells gives an overview of the known shells on a Linux system:


[root@localhost ~]# cat /etc/shells
/bin/bash
/bin/sh
/bin/tcsh
/bin/csh

To switch from one shell to another, just enter the name of the new shell in the active
terminal. The system finds the directory where the name occurs using the PATH settings,
and since a shell is an executable file (program), the current shell activates it and it gets
executed. A new prompt is usually shown, because each shell has its typical appearance.

System Utilities
Virtually every system utility that you would expect to find on standard implementations
of UNIX (including every system utility described in the POSIX.2 specification) has been
ported to Linux. This includes commands such as ls, cp, grep, awk, sed, bc, wc, more,
and so on. These system utilities are designed to be powerful tools that do a single task
extremely well (e.g. grep finds text inside files while wc counts the number of words,
lines and bytes inside a file). Users can often solve problems by interconnecting these
tools instead of writing a large monolithic application program.
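As a small illustration of this tool-combining philosophy (the output shown is only an example), the output of one utility can be piped straight into another; here grep selects the lines of /etc/passwd that mention bash and wc -l counts them:

[root@localhost ~]# grep bash /etc/passwd | wc -l
3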
Like other UNIX flavours, Linux's system utilities also include server programs called
daemons which provide remote network and administration services (e.g. telnetd and
sshd provide remote login facilities, lpd provides printing services, httpd serves web
pages, crond runs regular system administration tasks automatically). A daemon
(probably derived from the Greek word for a beneficent spirit who watches over
someone, or perhaps short for "Disk And Execution MONitor") is usually spawned
automatically at system startup and spends most of its time lying dormant (lurking?)
waiting for some event to occur.

Application programs
Linux distributions typically come with several useful application programs as standard.
Examples include the emacs editor, xv (an image viewer), gcc (a C compiler), g++ (a C++
compiler), xfig (a drawing package), latex (a powerful typesetting language) and soffice
(StarOffice, which is an MS-Office style clone that can read and write Word, Excel and
PowerPoint files).
Red Hat Linux also comes with rpm, the Red Hat Package Manager, which makes it easy to
install and uninstall application programs.
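A few typical rpm invocations are sketched below; the package name is only a placeholder:

[root@localhost ~]# rpm -ivh somepackage-1.0-1.i386.rpm (install, verbosely, with hash-mark progress)
[root@localhost ~]# rpm -q somepackage (query whether the package is installed)
[root@localhost ~]# rpm -e somepackage (erase/uninstall the package)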

1.3 Linux Features


Linux is an operating system that was initially created as a hobby by a young student,
Linus Torvalds, at the University of Helsinki in Finland. Linus had an interest in Minix, a small
UNIX system, and decided to develop a system that exceeded the Minix standards. He
began his work in 1991, when he released version 0.02, and worked steadily until 1994, when
version 1.0 of the Linux kernel was released. Nowadays many variations and versions of
Linux are available in the market.
Linux was developed under the GNU General Public License and its source code is freely
available to everyone. Linux may be used for a wide variety of purposes, including as an
excellent, low-cost alternative to other, more expensive operating systems.
Linux's functionality and availability have made it popular worldwide, and a vast number
of software programmers have taken Linux's source code and adapted it to meet
various hardware configurations and purposes. The result is a modern, very stable, multi-user,
multitasking environment.
Here are some of the reasons why Linux could be the best operating system for you:
Freeware: A Linux distribution includes thousands of dollars worth of software at no
cost. Linux is distributed at no cost by many projects and companies such as Red Hat,
Debian and Fedora.
Stable: The crash of any application is much less likely to bring down the operating
system under Linux.
Reliable: Linux servers are often up for hundreds of days, compared with the regular
reboots required with a Windows system.
Complete OS: Linux comes with a complete development environment, including C,
C++ and FORTRAN compilers, toolkits such as Qt, and scripting languages such as Perl,
Awk, and Sed. A C compiler for Windows alone would set you back hundreds of dollars.
Linux is a rich and powerful platform with unsurpassed computing power, portability, and
flexibility, and it can be customized to perform almost any computing task. There are dozens of
excellent, free, general-interest desktop applications, including a range of web
browsers, email programs, word processors, spreadsheets, bitmap and vector graphics
programs, file managers, audio players, CD writers, some games, etc.; thousands of free
applets, tools, and smaller programs; hundreds of specialized applications built by
researchers around the world (astronomy, information technology, chemistry, physics,
engineering, linguistics, biology, ...); and scores of top-of-the-line commercial programs,
including all the big databases (e.g., Oracle, Sybase, but not Microsoft's). Many of these
are offered free for developers and for personal use.
Excellent networking facilities: Linux provides excellent networking facilities: the
environment to run servers such as a web server (e.g. Apache) or an FTP server;
connectivity to Microsoft, Novell, and Apple proprietary networking; and reading/writing of
your DOS/MS Windows and other disk formats. This includes "transparent" use of data
stored on the MS Windows partition of your hard drive(s).
Upgradeable: Linux is an operating system which is easily upgradeable. After any length
of time, a typical installation of Windows and its software gets into a complete mess, and often
the only way to clear out all the debris is to reformat the hard disk and start again. Linux,
however, is much better at maintaining the system.
Multiprocessor support: Linux supports multiple processors working simultaneously.
Multitasking: The ability to run more than one program at the same time.
Open source code: Linux source code is openly available to all end users. They can
change this source code to suit their equipment and use it accordingly. This
facility is not available with Windows.
Scalable: The Linux operating system can run on machines of almost any size, from a
wristwatch to a supercomputer.
Standard platform: Linux is very standard; it is essentially a POSIX-compliant
UNIX and includes all the standard UNIX tools and utilities.
Advanced graphical user interface: Linux uses a standard, network-transparent X
Window System with a "window manager" (typically KDE or GNOME).
Special features: Freedom from viruses, "backdoors" to your computer, forced upgrades,
proprietary file formats, licensing and marketing schemes, product registration, high
software prices, and pirating. Linux sees very few viruses, largely because its security
model makes it hard for viruses to spread with any degree of efficiency.

1.4 Linux Vs Windows


Linux is an open-source operating system. People can change code and add programs to
the Linux OS, which helps you use your computer better. Linux evolved partly as a reaction to the
monopoly position of Windows: you can't change any code of the Windows OS, and you can't
even see which processes do what or build your own extensions. Linux encourages
programmers to extend and redesign its OS; Linux users can edit the OS and design new
versions of it.
All flavors of Windows come from Microsoft. Linux distributions come from different companies
and communities such as Lindows, Lycoris, Red Hat, SuSE, Mandrake, Knoppix, and Slackware.
Linux is customizable but Windows is not. For example, NASlite is a version of Linux
that runs off a single floppy disk and converts an old computer into a file server. This
ultra small edition of Linux is capable of networking, file sharing and being a web server.
Linux is freely available for desktop or home use but Windows is expensive. For server
use, Linux is cheap compared to Windows. Microsoft allows a single copy of Windows
to be used on one computer. You can run Linux on any number of computers.
Linux has high security. You have to log on to Linux with a userid and password. You
can log in as root or as a normal user; the root user has full privileges.
Linux has a reputation for fewer bugs than Windows.
Windows must boot from a primary partition. Linux can boot from either a primary
partition or a logical partition inside an extended partition. Windows must boot from the
first hard disk. Linux can boot from any hard disk in the computer.
Windows uses a hidden file for its swap file. Typically this file resides in the same
partition as the OS (advanced users can opt to put the file in another partition). Linux
uses a dedicated partition for its swap file.
Windows separates directories with a back slash while Linux uses a normal forward
slash.
Windows file names are not case sensitive. Linux file names are. For example "abc" and
"aBC" are different files in Linux, whereas in Windows it would refer to the same file.
Windows and Linux have different concepts for their file hierarchy. Windows uses a
volume-based file hierarchy while Linux uses a unified scheme. Windows uses letters of
the alphabet to represent different devices and different hard disk partitions, e.g. c:, d:, e:,
etc., while in Linux "/" is the root of the whole directory tree.
Linux and Windows both support the concept of hidden files. In Linux, hidden file names begin
with ".", e.g. .filename.
In Linux each user has a home directory and all of that user's files are saved under it, while
in Windows users save their files anywhere on the drive. This makes it difficult to take a
backup of their contents; in Linux it is easy to take backups.
Topic: Price
  Linux: The majority of Linux variants are available for free or at a much lower price than Microsoft Windows.
  Windows: Microsoft Windows can cost between $50.00 and $150.00 US dollars per license copy.

Topic: Ease
  Linux: The majority of Linux variants have improved dramatically in ease of use, but Windows is still much easier to use for new computer users.
  Windows: Microsoft has made several advancements and changes that have made Windows a much easier operating system to use, and although it is arguably not the easiest operating system, it is still easier than Linux.

Topic: Reliability
  Linux: The majority of Linux variants and versions are notoriously reliable and can often run for months or years without needing to be rebooted.
  Windows: Microsoft Windows has made great improvements in reliability over the last few versions, but it still cannot match the reliability of Linux.

Topic: Software
  Linux: Linux has a large variety of available software programs, utilities, and games.
  Windows: Because of the large number of Microsoft Windows users, there is a much larger selection of available software programs, utilities, and games for Windows.

Topic: Software Cost
  Linux: Many of the software programs, utilities, and games available on Linux are freeware and/or open source. Even complex programs such as Gimp, OpenOffice, StarOffice, and Wine are available for free or at a low cost.
  Windows: Windows does have software programs, utilities, and games available for free, but the majority of programs cost anywhere between $20.00 and $200.00+ US dollars per copy.

Topic: Hardware
  Linux: Linux companies and hardware manufacturers have made great advancements in hardware support for Linux, and today Linux will support most hardware devices. However, many companies still do not offer drivers or support for their hardware in Linux.
  Windows: Because of the number of Microsoft Windows users and the broader driver support, Windows supports a much larger range of hardware devices, and a good majority of hardware manufacturers support their products on Microsoft Windows.

Topic: Security
  Linux: Linux has always been a very secure operating system. Although it can still be attacked, compared to Windows it is much more secure.
  Windows: Microsoft has made great improvements over the years with security on their operating system; however, it continues to be the most vulnerable to viruses and other attacks.

Topic: Open Source
  Linux: Many of the Linux variants and many Linux programs are open source and enable users to customize or modify the code however they wish.
  Windows: Microsoft Windows is not open source and the majority of Windows programs are not open source.

Topic: Support
  Linux: Although it may be more difficult to find users familiar with all Linux variants, there are vast amounts of online documentation and help, available books, and support available for Linux.
  Windows: Microsoft Windows includes its own help section, and has vast amounts of available online documentation and help, as well as books on each version of Windows.

1.5 UNIX Vs LINUX


Linux is free. Like UNIX, it is very powerful and is a "real" operating system. Also, it is
fairly small compared to other UNIX operating systems.
• Full multitasking—multiple tasks can be accomplished and multiple devices can
be accessed at the same time.
• Virtual memory—Linux can use a portion of your hard drive as virtual memory,
which increases the efficiency of your system by keeping active processes in
RAM and placing less frequently used or inactive portions of memory on disk.
Virtual memory also utilizes all your system's memory and doesn't allow memory
segmentation to occur.
• The X Window System—The X Window System is a graphics system for UNIX
machines. This powerful interface supports many applications and is the standard
interface for the industry.
• Built-in networking support—Linux uses standard TCP/IP protocols, including
Network File System (NFS) and Network Information Service (NIS, formerly
known as YP). By connecting your system with an Ethernet card or over a modem
to another system, you can access the Internet.
• Shared libraries—Each application, instead of keeping its own copy of software,
shares a common library of subroutines that it can call at runtime. This saves a lot
of hard drive space on your system.
• Compatibility with the IEEE POSIX.1 standard—Because of this compatibility,
Linux supports many of the standards set forth for all UNIX systems.
• Nonproprietary source code—The Linux kernel uses no code from AT&T, nor
any other proprietary source. Other organizations, such as commercial companies,
the GNU project, hackers, and programmers from all over the world have
developed software for Linux.
• Lower cost than most other UNIX systems and UNIX clones—If you have the
patience and the time, you can freely download Linux off the Internet. Many
books also come with a free copy. (This book includes it on CD-ROM.)
• GNU software support—Linux can run a wide range of free software available
through the GNU project. This software includes everything from application
development (GNU C and GNU C++) to system administration (gawk, groff, and
so on), to games (for example, GNU Chess, GnuGo, NetHack).

1.6 LINUX DISTRIBUTIONS


Linux is available for download over the Internet. However, this is only useful if your
Internet connection is fast. Another way is to order the CD-ROMs, which saves time, and
the installation is fast and automatic. A typical Linux distribution includes:
▪ Linux kernel.
▪ GNU application utilities such as text editors, browsers etc.
▪ Collection of various GUI (X windows) applications and utilities.
▪ Office application software.
▪ Software development tools and compilers.
▪ Thousands of ready to use application software packages.
▪ Linux Installation programs/scripts.
▪ Linux post installation management tools daily work such as adding users,
installing applications, etc.
▪ And, a Shell to glue everything together.
Corporate and small-business users need support while running Linux, so companies
such as Red Hat or Novell provide Linux tech-support and sell it as a product.
Nevertheless, community-driven Linux distributions do exist, such as Debian and Gentoo,
and they are entirely free. There are over 200 Linux distributions.
Linux distribution = Linux kernel + GNU system utilities and libraries + Installation
scripts + Management utilities etc.
Fedora Linux - Fedora is a distribution of Linux based on Red Hat linux, developed by
the Fedora Project. Fedora is good for both desktop and laptop usage including sys
admins.
CentOS Linux - CentOS is a community-supported, mainly free software operating
system based on Red Hat Enterprise Linux. CentOS is good for server usage.
Debian Linux - Debian focuses on stability and security and is used as a base for many
other distributions such as Ubuntu. Debian stable is good for server usage.
Ubuntu Linux - Ubuntu is originally based on the Debian Linux distribution. Ubuntu is
designed primarily for desktop usage, though netbook and server editions exist as well.
OpenSuse Linux - openSUSE is a general-purpose Linux distribution sponsored by
Novell. It is quite popular for laptop and desktop usage.
Slackware Linux - It was one of the earliest operating systems to be built on top of the
Linux kernel and is the oldest currently being maintained. Slackware is pretty popular
among the hardcore Linux users and sys admins.
Linux Mint Linux - Linux Mint provides an up-to-date, stable operating system for the
average user, with a strong focus on usability and ease of installation.
PCLinuxOS Linux - PCLinuxOS comes with KDE Plasma Desktop as its default user
interface. It is a primarily free software operating system for personal computers aimed at
ease of use.
Mandriva Linux - Mandriva Linux is a French Linux distribution distributed by
Mandriva. It uses the RPM Package Manager.
Sabayon Linux - Sabayon is based upon Gentoo Linux and it follows the OOTB (Out of
the Box) philosophy, having the goal to give the user a wide number of applications
ready to use and a self-configured operating system.
Arch Linux - Arch Linux is a Linux distribution intended to be lightweight and simple.
The design approach of the development team focuses on simplicity, elegance, code
correctness and minimalism.
Gentoo Linux - Gentoo Linux is a computer operating system built on top of the Linux
kernel and based on the Portage package management system.
1.7 LOGGING IN
The system administrator will have given you a user name and initial password.
Different systems have different ways of presenting the login prompt: it could simply be
text on the screen or a very posh graphical interface. Nevertheless, you usually enter your
user name, followed by your password. Usually, the characters you type in for your
password will not show up on the screen, for added security. If, for some reason, you
can't log in, consult the system administrator. Also keep in mind that passwords are case
sensitive.

At this point, depending on how the system administrator has set things up, you may be
presented with a windows-like (usually X-windows) environment or a simple terminal.
Either way, you should now have a screen where you can type in commands. This is your
shell. Your shell is your interface to the operating system (in DOS, your shell was called
COMMAND.COM). Since UNIX is a multi-tasking as well as multi-user environment, you
can have many shells running at the same time. To further complicate things, there are
several different shells you can use. The default shell on Linux is bash (stands for Bourne
Again SHell). Others include csh, tcsh, ksh and sh. They all serve the same purpose, but each
has different features and syntaxes. As far as this tutorial is concerned, they more or less
work the same, however I will assume you are using bash.

Important: When you are all done, remember to log out of the system. This is usually
accomplished by typing logout at the prompt. If you are using X-Windows, there will be a
large button at the bottom of the screen labeled 'Logout'. You will know you have logged
out completely when you see the login prompt. If you forget to log out, someone else
could come along and mess with your files, or send email from your account!

The Linux Login Process


After the system boots, the user will see a login prompt similar to:
login:
This prompt is generated by a program, usually getty or mingetty, which is
respawned by the init process every time a user ends a session on the console. The getty
program will call login, and login, if successful, will call the user's shell. The steps of the
process are:
• The init process spawns the getty process.
• The getty process invokes the login process when the user enters their username
and passes the user name to login.
• The login process prompts the user for a password, checks it, then if there is
success, the user's shell is started. On failure the program displays an error
message, ends and then init will respawn getty.
• The user will run their session and eventually logout. On logout, the shell
program exits and we return to step 1.
Note: This process is what happens for runlevel 3, but runlevel 5 uses some different
programs to perform similar functions. These X programs are called X clients.
Files used by the login program
1) /etc/nologin - This file is used to prevent users who are not root from logging into
the system.
2) /etc/usertty - This file is used to impose special access restrictions on users.
3) /etc/securetty - Controls the terminals that the root user can login on.
4) .hushlogin - When this file exists in the user's home directory, it will suppress
the check for mail, the printing of the last login time, and the message of the day when
the user logs in.
5) /var/log/lastlog - Contains information about the last time a login was done on the
system.
6) /etc/passwd - Contains information about the user including the ID, name, home
directory, and the path to the preferred shell program. If not using shadow
passwords, this file may also contain user passwords.
The init process
[root@localhost ~]# vi /etc/inittab
1:2345:respawn:/sbin/mingetty tty1
These lines cause init to spawn the mingetty process on runlevels 2 through 5 for tty1 and
other terminals. To do this init will use the "fork" function to make a new copy of itself
and use an "exec" function to run the mingetty program. Getty will wait for the user, then
read the username. Then mingetty will invoke login with the user's name as an argument.
If the password entered does not match for the user, init will load and run mingetty again.
If the login is successful, init will use the "exec" function to run the user's shell program.
When the shell exits through the "logout" command, init will load and run the mingetty
program again. The file "/etc/passwd" determines the shell to be used for the user who is
logging in.
getty
getty performs the following functions:
• Open tty lines and set their modes
• Print the login prompt and get the user's name
• Begin a login process for the user
Use of the /etc/passwd file
Once the user has successfully logged in, the login program will invoke the user's shell.
The login program will look in the /etc/passwd file to determine which shell program to
run. Each /etc/passwd entry contains, among other fields, the complete path of the user's shell
program.
The login program will use the account field to find the username and therefore get the
UID of the user. Login will also use the password field (or the /etc/shadow file) to be sure the
entered password is a match. Login will look up the user's home directory and use that to
set the $HOME environment variable. Login will use the shell field to determine what
shell program to run for that user. Then login will pass program control to the shell program.
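For reference, a typical /etc/passwd entry has seven colon-separated fields: account name, password placeholder, UID, GID, comment, home directory and login shell. The entry below is a made-up example; the 'x' indicates that the real password hash is kept in /etc/shadow, and the last field is the shell that login will run:

jdoe:x:500:500:John Doe:/home/jdoe:/bin/bash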
Linux Run level scripts
The runlevel scripts are used to bring up many system and networking functions. Since
some functions depend on other functions, there is a required order in
which these scripts must be run to bring the system up and to bring it down gracefully.
Each runlevel has its own set of start (S) and kill (K) scripts, but all of these scripts actually
live in the directory /etc/rc.d/init.d: the start and kill scripts are soft
links to the files in the /etc/rc.d/init.d directory.
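You can see these links for yourself; a listing similar to the following (trimmed, with illustrative entries) shows that the S and K files in a runlevel directory point back to /etc/rc.d/init.d:

[root@localhost ~]# ls -l /etc/rc.d/rc3.d
lrwxrwxrwx 1 root root 13 Dec 19 08:55 K20nfs -> ../init.d/nfs
lrwxrwxrwx 1 root root 17 Dec 19 08:55 S10network -> ../init.d/network
lrwxrwxrwx 1 root root 14 Dec 19 08:55 S55sshd -> ../init.d/sshd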
The rc script Program
The script file /etc/rc.d/rc is run for the appropriate runlevel (typically 3 or 5). This file
does the following:
• It gets the previous and current system runlevels.
• If the word confirm is in the file /proc/cmdline, it sets up to run the scripts below
in user-confirmation mode.
• All kill files (files whose first letter is 'K') in the subdirectory /etc/rc.d/rc3.d
(assuming the previous runlevel was 3) are run. The parameter stop is usually
passed on the command line to each kill script.
• All startup files (files whose first letter is 'S') in the subdirectory /etc/rc.d/rc5.d are run.
The parameter start is usually passed on the command line to each start script.
These runlevel scripts are used to bring up (or down) various system services such as
cron and gpm along with networking services from the network cards through Samba,
and servers like DNS, DHCP, and NFS.
These services can be functionally categorized as either system services or network
services. They are described in more detail in the section on Daemons and Services. For
more information on how some of the script files for these services run, read the Linux
startup manual. Normally any of these services may be stopped, started, restarted, or have their
status checked by typing the name of one of these services (with the correct path)
followed by the word stop, start, restart, or status respectively. For example the line:
/etc/rc.d/init.d/nfs restart
will restart network file sharing assuming it was running.
To see the status type:
/etc/rc.d/init.d/nfs status
The rc.local Script Program
The file "/etc/rc.d/rc3.d/S99.local" is a link file to the file "/etc/rc.d/rc.local". This file
doesn't do much except for setting up the "/etc/issue" and "/etc/issue.net" files to reflect
the system version when a user begins a terminal or telnet session. This is where most
administrators will put any system customizations they want to make.

1.8 BOOTING PROCESS OF LINUX


We must know the booting process of Linux so that any problem that occurs at boot
time can be handled. Once we know the booting process, we can pinpoint exactly the
stage where the problem occurs. The following steps occur in sequence when Linux boots:
1) The BIOS loads and launches the first-stage boot loader, which resides in the MBR.
2) The first-stage boot loader loads into memory and launches the second-stage boot loader
from the /boot partition (GRUB loads).
3) The second-stage boot loader loads the kernel into memory.
4) The kernel initializes the 'init' process - the first process and the parent of all processes.
5) init switches to the run level specified in the /etc/inittab file, loads all the services of the
current run-level, and mounts all partitions listed in /etc/fstab.
6) The user is presented with a log-in screen.
A common problem at boot time occurs at GRUB loading, because the file grub.conf
may be mis-configured or damaged. Sometimes there may be a problem in mounting a
partition; this is because of damage to /etc/fstab.

1.9 LOGOUT
Typing logout at the command prompt exits your current user account and returns you to the log-in
prompt. The exit command does the same thing as logout. To log out from multiple
consoles, use alt-Fn to switch between consoles and then log out from each one. But note
that even if you log out from all of your active consoles, Linux is still running.
If you really want to exit Linux, enter the command
[root@localhost ~]# shutdown -h now
which effectively goes to run level 0, or
[root@localhost ~]# shutdown -r now
which effectively goes to run level 6 (reboot).

1.10 INIT RUN LEVELS


The init process is the parent of all other processes—it is the super process. You can
think of init as the master process governing the system. init controls the lifecycle of all
other processes and therefore of the system itself. After doing its work, init generally sits
idle in the background, only waking up when the user requests certain actions or when a
crucial process needs attention. At any given time, init is in one of seven general
states; these states are the system run levels. The run levels are numbered 0 through 6.
Linux run-levels
Linux has seven run-levels:
0 - halt
1 - single-user mode (recovery or safe mode)
2 - multiuser without NFS
3 - full multiuser mode (boots to command-line mode by default)
4 - unused
5 - multi-user X Window mode (boots to GUI (X Window) mode by default)
6 - reboot
[root@localhost ~]# vi /etc/inittab
The default run level is:
id:5:initdefault:
Before entering the default runlevel, the rc.sysinit script is invoked by init:
si::sysinit:/etc/rc.d/rc.sysinit
The next lines map runlevels to their rc scripts:
l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
...
l6:6:wait:/etc/rc.d/rc 6
Of the next lines, the first starts the kernel's update daemon to periodically synchronize the
memory cache with the disk, the second tells init to execute the shutdown command whenever the
user presses Ctrl-Alt-Delete, and the last two specify commands to run if the system
is notified of a power failure or of power being restored.
ud::once:/sbin/update
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
pf::powerfail:/sbin/shutdown -f -h +2 'Power Failure; System Shutting Down'
pr:12345:powerokwait:/sbin/shutdown -c 'Power Restored; Shutdown Cancelled'
The next lines instruct init to create several virtual text consoles. The tty consoles are the six
text-based consoles; users can switch between them via the Alt-F1 through Alt-F6 key
sequences.
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
...
6:2345:respawn:/sbin/mingetty tty6
The next line instructs init to start the prefdm graphical login program.
x:5:respawn:/etc/X11/prefdm –nodaemon

Changing the run-level:


By default the system will boot into run-level 5. To change the run-level, edit the file
/etc/inittab. In this file, go to the line that looks like id:5:initdefault: (here the number 5
indicates the default run-level into which the system boots). You can change the number 5
to any run-level (0-6) you want (caution: don't use 0 or 6).
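You can also change the run-level for the current session only, without editing /etc/inittab, by calling init (or telinit) directly; the session below is illustrative:

[root@localhost ~]# runlevel
N 5
[root@localhost ~]# init 3
The runlevel command prints the previous and current run-levels (N means there was no previous one), and init 3 switches the running system to full multiuser text mode.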

Run-level Problem:
Sometimes a user may change the run-level specified in the /etc/inittab file (in the line
id:5:initdefault:) to 0 or 6. Since run-level 0 is halt and run-level 6 is reboot, the
system will then either go to a halt or restart every time you boot.

Recovering from run-level problem:


1) Press the space key at the boot screen. You will see a line such as
"Red Hat Enterprise Linux (kernel 2.6.9)". Go to that line and press: e
2) Now you will get another boot screen with three lines. Go to the 2nd line and press: e.
Now enter 1 and then press the Enter key. Go back to the second line and press: b. The system
will now boot into run-level 1.
3) When the system enters run-level 1 you will get a shell prompt:
sh-3.00# vi /etc/inittab
Change the line id:6:initdefault: back to id:5:initdefault:
After editing, save the file and give the command init 6. The system will now reboot and boot into
run-level 5.

1.11 INCREASING THE TERMINALS IN LINUX


In Linux, by default, there are 6 text terminals and 1 X Window session. The 1st terminal can be
opened by pressing ctrl+alt+F1, the 2nd by pressing ctrl+alt+F2, and so on
up to ctrl+alt+F6; the X Window session can be reached by pressing ctrl+alt+F7.

You can increase the number of terminals up to 11. This can be done with the following steps:

Step 1: Open the file /etc/inittab by giving the command:


[root@localhost ~]# vi /etc/inittab
Step 2: In that file, go to the place where the following lines are specified:
# Run gettys in standard runlevels
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

Step 3: Add the line 7:2345:respawn:/sbin/mingetty tty7 at the end of the above lines.
Then save and exit from the file and restart the system.
Step 4: After rebooting, press ctrl+alt+F7; you will now have a 7th terminal. The X Window
session moves one step forward to ctrl+alt+F8.
Likewise you can add up to 11 terminals (that is, up to ctrl+alt+F11); if you have 11 terminals
then the X Window session will be at ctrl+alt+F12.

1.12 SHUTTING DOWN THE SYSTEM


To shut down the system properly, use shutdown. It notifies all users and processes of the
impending shutdown, blocks new logins, and brings the system down cleanly. (Just
cutting power to the system without cleanly stopping processes and unmounting file
systems could result in the loss or corruption of data.)

To halt the system once the shutdown is complete, use the `-h' option; to reboot the
system after shutdown, use `-r' instead.

The following recipes describe ways of using shutdown to do useful things.

Immediate shutdown: Shutting down the system right away.


Timed Shutdown: Telling the system to shut down after a specified period of time.
Cancelling shutdown: Cancelling user shutdown request.

Shutting Down Immediately


To shut down the system at a certain time, you normally give that time as an argument;
use the special `now' argument to begin the shutdown process immediately.
• To immediately shut down and halt the system, type:
# shutdown -h now
• To immediately shutdown the system, and then reboot, type:
# shutdown -r now
You can follow the `now' argument with a quoted message that will be displayed on all
terminals of all users currently logged in.
• To immediately shut down and halt the system, and send a warning message to all
users, type:
• # shutdown -h now The system is being shut down now!
Shutting Down at a certain Time
To shut down the system at a certain time, give that time (in 24-hour format) as an
argument.
• To shut down and then reboot the system at 4:23 a.m., type:
• # shutdown -r 4:23
• To shut down and halt the system at 8:00 p.m., type:
• # shutdown -h 20:00
To shut down the system in a certain number of minutes, give that number of minutes
prefaced by a plus sign (`+').
• To shut down and halt the system in five minutes, type:
• # shutdown -h +5
Follow the time with a quoted message to display it on the terminals of all logged in
users.
• To shut down and halt the system at midnight, and give a warning message to all
logged-in users, type:
• # shutdown -h 00:00 "The system is going down for maintenance at
• midnight"
Cancelling a shutdown
If you have given a shutdown and decide that you don't actually want to shut the system
down, run shutdown again with the `-c' option. This command stops any shutdowns in
progress.
• To cancel any pending shutdown, type:
• # shutdown -c
As with a normal system shutdown, you can send out an explanatory message with the
cancel that will be shown to all users.
• To cancel any pending shutdown and send an explanatory message to all logged in
users, type:
• # shutdown -c "Shutdown cancelled!"
This command cancels any pending system shutdown and displays the message
'Shutdown cancelled!' on the terminals of everyone logged in.
Chapter 2
UNIX/Linux Commands

2.0 Command box of Linux


Internal and External Commands: The shell recognizes three types of
commands:
1) External commands: The most commonly used ones are the UNIX utilities and
programs like cat, ls and so forth. The shell creates a process for each of these commands
that it executes and remains their parent.
2) Shell scripts: The shell executes a script by spawning another shell (a sub-shell), which
then executes the commands listed in the script. The sub-shell becomes the parent of the
commands that feature in the script.
3) Internal commands: Being a programming language in its own right, the shell has a number of
built-in commands as well. Some of them, like cd and echo, don't generate a process
and are executed directly by the shell. Similarly, variable assignment with a statement like
x=5 doesn't generate a process either.

Running commands
Syntax of command: command [options] [args]
Where args are filenames or other data needed by the command.
'-' is used with single-letter options, e.g. -ivh or -i -v -h.
'--' is used with whole-word options, e.g. --force, --all.
Each item is separated by a space.

Example: Running single command.


[root@localhost ~]# rmdir x y z
This command will remove the directories x, y and z.

Example: Running multiple commands.


You can run multiple commands on a single line by separating them with semicolons.
[root@localhost ~]# pwd;date;id;
/root
Tue Dec 19 09:12:07 IST 2006
uid=0(root) gid=0(root)
groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
context=root:system_r:unconfined_t

2.1 HELP COMMANDS IN LINUX

Help commands in Linux:


whatis
This command accepts another command as input and then searches for the given command
name in a database of short descriptions. If found, the description is displayed on the screen.
Syntax: whatis <command>
Example: To display help about the cal (calendar) command.
[root@localhost ~]# whatis cal
cal (1) - displays a calendar

help
This command is used to display a usage summary and argument list. It will not display
help for all commands, but for most commands help is available.
Syntax: <command> --help

Example: To display the usage summary and argument list of the tee command.


[root@localhost ~]# tee --help
Usage: tee [OPTION]... [FILE]...
Copy standard input to each FILE, and also to standard output.

-a, --append append to the given FILEs, do not overwrite


-i, --ignore-interrupts ignore interrupt signals
--help display this help and exit
--version output version information and exit

Report bugs to <[email protected]>.


Description:
If different options are separated by the pipe symbol (|), it means use any one of them,
e.g. a|b|c means a or b or c.
Square brackets [] indicate an optional item.
Text followed by "..." indicates a list (the item may be repeated).
<> represents a variable or input data.
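Applying these conventions to the tee usage line shown above: [OPTION]... means that zero or more options may be given, [FILE]... means that zero or more file names may follow, and a|b|c-style alternatives (had there been any) would mean "pick one of a, b or c".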

man
This command provides documentation for commands. It will display the name of the command,
a short description of it, a synopsis of its usage, a longer description of the command's
functionality, a switch-by-switch listing of options, files associated with the command,
examples, and a 'see also' section for further reference.
The following 8 sections are available in the man pages:
1. Commands This section provides information about user-level commands, such as ps
and ls.

2. UNIX System Calls This section gives information about the library calls that
interface with the UNIX operating system, such as open for opening a file, and exec for
executing a program file. These are often accessed by C programmers.

3. Libraries This section contains the library routines that come with the system. An
example library that comes with each system is the math library, containing such
functions as fabs for absolute value. Like the system call section, this is relevant to
programmers.
4. File Formats This section contains information on the file formats of system files,
such as init, group, and passwd. This is useful for system administrators.

5. Miscellaneous This section contains information on various system characteristics. For
example, a manual page exists here to display the complete ASCII character set (ascii).

6. Games This section usually contains directions for games that came with the system.

7. Device Drivers This section contains information on UNIX device drivers, such as scsi
and floppy. These are usually pertinent to someone implementing a device driver, as well
as to the system administrator.

8. System Maintenance This section contains information on commands that are useful
for the system administrator, such as how to format a disk.

Syntax: man <command>


Example: Display the manual page for the cal command.
[root@localhost ~]# man cal
CAL(1) BSD General Commands Manual CAL(1)

NAME
cal - displays a calendar

SYNOPSIS
cal [-smjy13] [[month] year]

DESCRIPTION
Cal displays a simple calendar. If arguments are not specified, the
current month is displayed. The options are as follows:

-1 Display single month output. (This is the default.)

-3 Display prev/current/next month output.

-s Display Sunday as the first day of the week. (This is the


default.)

-m Display Monday as the first day of the week.

-j Display Julian dates (days one-based, numbered from January 1).


-y Display a calendar for the current year.

A single parameter specifies the year (1 - 9999) to be displayed; note the year must
be fully specified: “cal 89” will not display a calendar for 1989. Two parameters denote
the month (1 - 12) and year. If no parameters are specified, the current month’s calendar
is displayed.
HISTORY
A cal command appeared in Version 6 AT&T UNIX.
OTHER VERSIONS
Several much more elaborate versions of this program exist, with support
for colors, holidays, birthdays, reminders and appointments, etc. For
example, try the cal from http://home.sprynet.com/~cbagwell/projects.html or GNU gcal.

The manual sections provide help for the following:


a) Games b) User commands c) Administrative commands d) Special files
e) System calls f) File formats g) Library calls h) Miscellaneous
Navigating man pages:
Navigate with the arrow keys, PgUp and PgDn.
/<text> searches for text.
q quits viewing the page.

info
This command displays help as pages of hypertext. Each page is divided into nodes; links
to nodes are preceded by "*".
Syntax: info [<command>]

Example: Display info related to date command


[root@localhost ~]# info date
This command will display help pages related to date command.

whereis
Tells location of command in the system.
Syntax: whereis <command>

Example: To display location of cat command


[root@localhost ~]# whereis cat
cat: /bin/cat /usr/share/man/man1/cat.1.gz /usr/share/man/man1p/cat.1p.gz

Extended Documentation (/usr/share/doc directory)


It contains examples related to configuration files, HTML/PDF/PS documentation, and
license details.

2.2 SIMPLE LINUX COMMANDS


date
date command prints system date and time
Syntax: date [OPTION]... [+FORMAT]
or:
date [OPTION] [MMDDhhmm[[CC]YY][.ss]]
Where the format can be:
%D Date - MM/DD/YY
%H Hour - 00 to 23
%I Hour - 01 to 12
%M Minute - 00 to 59
%S Second - 00 to 59
%T Time - HH:MM:SS
%y Year's last two digits
%w Day of the week (0 for Sunday, 1 for Monday, and so on)
%r Time in AM/PM

Example: To display system date and time.


[root@localhost ~]# date
Tue Dec 19 09:23:31 IST 2006

Example: To display system time in AM/PM format.


[root@localhost ~]# date +%r
09:25:10 AM

Example: To display day of week.


[root@localhost ~]# date +%w
2

Example: To display date in MM/DD/YY format.


[root@localhost ~]# date +%D
12/19/06

Example: To display Hours.


[root@localhost ~]# date +%H
09

Example: To display Minutes.


[root@localhost ~]# date +%M
30
Example: To display Seconds.
[root@localhost ~]# date +%S
44
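Format specifiers can also be combined in a single quoted format string; the output shown is only illustrative:

[root@localhost ~]# date "+%D %T"
12/19/06 09:35:12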

who
This command is used to display who is currently logged on the system.
Syntax: who

Example: Checking who is currently logged on.


[root@localhost ~]# who
root :0 Dec 19 08:55
root pts/1 Dec 19 09:08 (:0.0)
The first column of the output shows the user names, the second column shows the
corresponding terminal names, and the remaining columns show when the users logged on.
Linux is a multi-user operating system, so multiple users can be logged on at the same time.

who am i
Tells who you are.
Syntax: who am i

Example: Checking who you are.


[root@localhost ~]# who am i
root pts/2 Dec 19 09:38 (:0.0)

cal
This command will display calendar for specified month and year.
Syntax: cal [month] <year>

Example: Display calendar for year 2006


[root@localhost ~]# cal 2006
It will display the calendar for the year 2006.

Example: Display calendar for first month of year 2006.


[root@localhost ~]# cal 1 2006
January 2006
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30 31

lpr
This command is used to print one or more files on the printer.
Syntax: lpr [options] <file1> <file2> ...
Where options may be:
-m informs you when printing is over.
-r removes files from the directory after printing.
Example: Printing a file on the printer.
[root@localhost ~]# lpr text.txt
The contents of the file text.txt will be printed on the printer.

tee
It sends output to standard output as well as to the specified file; in effect this command
performs the operations of a pipe and a redirection at once.
Syntax: <command> | tee <filename>
Example:
[root@localhost ~]# cat text.txt
hello
c
language is a middle level language
C++ is an object oriented language
java is purely object oriented language
[root@localhost ~]# cat text.txt>text1.txt
we can combine these two commands into single command using tee command.
[root@localhost ~]# cat text.txt | tee text1.txt
hello
c
language is a middle level language
C++ is an object oriented language
java is purely object oriented language

Example: To display current user information in a file and display number of


current users.
[root@localhost ~]# who |tee text2.txt |wc
3 17 119
[root@localhost ~]# cat text2.txt
root :0 Dec 19 08:55
root pts/1 Dec 19 09:08 (:0.0)
root pts/2 Dec 19 10:09 (:0.0)

history
Displays the list of remembered (previously executed) commands.
Syntax: history [option]

Example: To see history of previously executed commands


[root@localhost ~]# history
1 history
2 man clear
3 ln shi1.txt shi2.txt
4 ls -l shi1.txt
5 chmod 777 shi1.txt

Example : How to clear the history.


The -c option is used with the history command to clear the history.
[root@localhost ~]# history -c
[root@localhost ~]# history
1 history

Shortcuts related to the history command:


!! repeats the last command.
Example:
[root@localhost ~]# !!
date
Tue Dec 19 11:23:13 IST 2006
!n to repeat a command by its number in history output.

Example:
[root@localhost ~]# history
1 date
2 date
3 pwd
4 ls
5 ls -l
6 ls -l text.txt
7 history
[root@localhost ~]# !3
pwd
/root
Use ^old^new to repeat the last command, replacing old with new.

Example:
[root@localhost ~]# cat >text2.txt
hi all
g
[root@localhost ~]# ^text2.txt^text3.txt
cat > text3.txt
hi all
who are u

clear
To clear the screen
Syntax: clear

Example: To clear the screen


[root@localhost ~]# clear

script
This command stores your login session in a specified file. By default the output is stored in
a file named typescript; you can also specify any other name of your choice. The exit command
is used to finish the script session.
Syntax: script

Example: To start the script.


[root@localhost ~]# script
Script started, file is typescript
Example: To end the script.
[root@localhost ~]# exit
Script done, file is typescript

Example: To see which commands are run during script session.


[root@localhost ~]# cat typescript
All the operations done during the script session will be displayed.

Example: To store result in a specified file during script session.


[root@localhost ~]# script test.txt
Script started, file is test.txt
Now the result is stored in test.txt. All the operations done during the script session can be
seen using:
[root@localhost ~]# cat test.txt

expr
This command is used to perform arithmetic operations on integers (the operators are + for
addition, - for subtraction, \* for multiplication, % for remainder, and / for division).

Example: To multiply 6 with 7


[root@localhost ~]# x=6
[root@localhost ~]# y=7
[root@localhost ~]# expr $x \* $y
42

Example: To add 6 and 7


[root@localhost ~]# expr $x + $y
13
Note: Multiplication operator (*) has to be escaped to prevent the Shell from interpreting
it as the filename meta-character. This command only works with integers.
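In shell scripts, expr is usually combined with command substitution so the result can be stored in a variable; a minimal sketch, continuing the x=6 and y=7 example above:

[root@localhost ~]# z=`expr $x + $y`
[root@localhost ~]# echo $z
13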

bc
This command activates a calculator for performing arithmetic operations on integers as
well as floating-point numbers.
Syntax: bc
Example: Add 10 to 20
[root@localhost ~]# bc
10 + 20
30

Example: Divide 9 by 2
[root@localhost ~]# bc
a=9
b=2
c=a/b
c
4

Example: To display the result with a fractional part


[root@localhost ~]# bc
scale=2
11/3
3.66
The bc command can also be used with ibase (input base) and obase (output base) to convert
numbers from one base into another (the sample values below are illustrative):
# bc
ibase=2    <- set ibase to binary (2)
111        <- type the input binary number
7          <- result in decimal

# bc
obase=2    <- set obase to binary
7          <- type the input decimal number
111        <- result in binary

# bc
ibase=16   <- set ibase to hexadecimal
A          <- type the input hexadecimal number
10         <- result in decimal

# bc
obase=16   <- set obase to hexadecimal
11         <- type the input decimal number
B          <- result in hexadecimal

Note: Press ^d to exit from calculator.

split
This command is used to split a large file into smaller files, because very large files cannot
easily be displayed in an editor.
Syntax: split -<number> <filename>
Where number is the size (in lines) of the split files; the default size is 1000 lines. Suppose a file
contains 3550 lines: if we give a size of 500, the large file is split into 8 files. The first
seven files will be 500 lines each and the last file will contain 50 lines. The files will
be named xaa, xab, xac and so on.

Example: To split a file into files of 500 lines each
[root@localhost ~]# split -500 text1.txt
This command will split text1.txt into 500-line files.
To combine the files again, use the following command:
[root@localhost ~]# cat x* > newfile
2.3 DIRECTORY ORIENTED COMMANDS IN LINUX

mkdir
This command is used to create a new directory.
Syntax: mkdir <directory-name>

Example: Creating a new directory


[root@localhost ~]# mkdir Linux
This command will create a new subdirectory named Linux under the current directory.

Example: Creating nested subdirectories with a single mkdir


The -p option is used to achieve this.
[root@localhost ~]# mkdir -p Linux/teji/teji1
Under the current directory a subdirectory Linux is created; then, inside Linux, the subdirectory
teji is created; after that, teji1 is created as a subdirectory of teji.

Example: Creating more than one directory in a single command.


[root@localhost ~]# mkdir x y z
This will create three directories having names x, y, z respectively.

rmdir
This command is used to remove a directory. Directory should be empty before deletion.
Syntax: rmdir <directory-name>

Example: Removing a directory


[root@localhost ~]# rmdir pan
This will remove directory having name pan.

Example: Removing nested subdirectories with a single rmdir


The -p option is used for this purpose.
[root@localhost ~]# rmdir -p linux/teji/teji1/

pwd
This command displays the full path name of the present working directory.
Syntax: pwd

Example: To see current working directory


[root@localhost ~]# pwd
/root
Present working directory is /root

cd
This command is used for changing current directory to specified directory.
Syntax: cd <directory-name>
Example: To move to a specified directory
[root@localhost ~]# cd dir/dir2
[root@localhost dir2]#
If we use pwd command it will display present working directory as
[root@localhost dir2]# pwd
/root/dir/dir2

Example: To move to a parent directory


[root@localhost dir2]# cd ..
[root@localhost dir]# pwd
/root/dir
Now we are on parent directory of dir2.

Example: To move to the home directory


[root@localhost dir]# cd ~
[root@localhost ~]# pwd
/root
(For the root user, the home directory is /root.)

ls
This command is used to list the contents of a specified directory.
Syntax: ls [options] <directory-name>
Options:
Example: Lists all files and directories including hidden files.
-a(All) option is used for this purpose.
[root@localhost ~]# ls -a
. .gnome2 .ssh
.. .gnome2_private .sversionrc
.aaa.sh.swp .gstreamer-0.8 .swp
ani.sh~ .gtkrc t1.doc
.ani.sh.swp .gtkrc-1.2-gnome2 t1.txt
.bash_history .ICEauthority t3.doc
.bash_logout ishan t4.doc
.bash_profile linux book .tcshrc
.bashrc linux slides t.doc
.cshrc naveen .teji.txt.swp
Desktop

Example: Lists all files and directories in long format.


-l (long) option is used for this purpose.
List files in long format (- represents a file, d represents a directory; the fields show the file
permissions, number of links, owner, group, file size, file creation/modification time, and name of the file).
[root@localhost ~]# ls -l t1.doc
-rw-r--r-- 1 root root 107 Dec 14 10:51 t1.doc
[root@localhost ~]# ls -l dir
total 24
drwxr-xr-x 2 root root 4096 Dec 14 11:37 dir1
-rw-r--r-- 1 root root 77 Dec 14 11:05 t4.txt
-rw-r--r-- 1 root root 43 Dec 14 11:31 teji.txt
This command displays the contents of the directory dir (the subdirectory dir1 as well as all contained files) in long format.

Example: Lists all files and directories in reverse order


-r (reverse) option is used for this purpose.
[root@localhost ~]# ls -r
win t4.doc pan cd
web slides t3.doc naveen C___C___PROJECTS
tejinder t1.txt linux slides c2.txt
teji.doc t1.doc linux book c1.txt
teji1.doc solar system ishan ani.sh~
teji rpms Directory commands.doc
te.doc ram.txt dir
t.doc ram Desktop

Example: Recursively lists all files and directories as well as files in subdirectories
[root@localhost ~]# ls -R dir
dir:
dir1 t4.txt teji.txt
dir/dir1:
t.txt

Example: Puts a slash after each directory


[root@localhost ~]# ls -p
ani.sh~ ishan/ solar system/ teji1.doc
c1.txt linux book/ t1.doc teji.doc
c2.txt linux slides/ t1.txt tejinder
naveen/ t3.doc web slides/
cd/ pan/ t4.doc win/
Desktop/ ram/ t.doc
dir/ ram.txt te.doc
rpms/ teji/

Example: Displays the number of storage blocks used by a file


The -s (storage) option is used for this purpose.
[root@localhost ~]# ls -s t1.doc
8 t1.doc
This shows that the file t1.doc occupies 8 blocks of storage.

Example: Displays a * after files for which the user has execute permission
The -F option is used for this purpose.
[root@localhost ~]# ls -F teji.doc
teji.doc*
du
Disk usage command reports the disk space consumed by files in directories, including
subdirectories. Without arguments, du reports the disk space for the current directory.
Syntax: du [options] [<directory-name>]
The options are:

Example: displays counts for all files as well as directories


-a option is used for this purpose.
[root@localhost ~]# du -a tejinder
136 tejinder/teji/nis server.doc
176 tejinder/teji/linux clusters
916 tejinder/teji/linux-RHCE.doc
100 tejinder/teji/nfs server.doc
292 tejinder/teji/ldap.doc
3764 tejinder/teji/teji.doc
5392 tejinder/teji
5400 tejinder

Example: displays size in bytes


-b option is used for this purpose.
[root@localhost ~]# du -b tejinder
5453312 tejinder/teji
5457408 tejinder

Example: Displays output along with a grand total of all the arguments
-c option is used for this purpose.
[root@localhost ~]# du -c tejinder
5392 tejinder/teji
5400 tejinder
5400 total

Example: Displays size in kilobytes


-k option is used for this purpose.
[root@localhost ~]# du -k tejinder
5392 tejinder/teji
5400 tejinder

Example: Displays size in megabytes


-m option is used for this purpose.
[root@localhost ~]# du -m tejinder
6 tejinder/teji
6 tejinder

df
Disk free command displays the free space available on mounted file systems.
Syntax: df [options]
Options:
Example: shows size in kilobytes
[root@localhost ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
15772060 5262736 9708148 36% /
/dev/sda3 101105 12309 83575 13% /boot
none 123680 0 123680 0% /dev/shm

Example: shows local file system only


-l option is used for this purpose.
[root@localhost ~]# df -l
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
15772060 5262736 9708148 36% /
/dev/sda3 101105 12309 83575 13% /boot
none 123680 0 123680 0% /dev/shm

Example: shows size in megabytes


-m option is used for this purpose.
[root@localhost ~]# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
15403 5140 9481 36% /
/dev/sda3 99 13 82 13% /boot
none 121 0 121 0% /dev/shm

Example: shows free, used and percentage of used inodes


-i option is used for this purpose.
[root@localhost ~]# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/VolGroup00-LogVol00
2003424 124579 1878845 7% /
/dev/sda3 26208 38 26170 1% /boot
none 30920 1 30919 1% /dev/shm

find
Search a folder hierarchy for filename(s) that meet desired criteria: Name, Size, File Type
etc.
Syntax: find [path...] [options] [tests] [actions]
find searches the directory tree starting at the given pathname (or pathnames, if several are
given) by evaluating the given expression from left to right, according to the rules of
precedence, until the outcome is known (the left-hand side is false for AND operations,
true for OR), at which point find moves on to the next file name. Conditions may be
grouped by enclosing them in \( \) (escaped parentheses), negated with !, given as
alternatives by separating them with -o, or repeated (adding restrictions to the match;
usually only for -name, -type, or -perm). Note that "modification" refers to editing of a
file's contents, whereas "change" covers modification of the contents as well as permission
or ownership changes; in other words, -ctime is more inclusive than -mtime.
The first argument that begins with -, (, ), , or ! is taken to be the beginning of the
expression; any arguments before it are paths to search, and any arguments after it are
the rest of the expression. If no paths are given, the current directory is used. If no action
is given, -print is used. find exits with status 0 if all files are processed successfully,
greater than 0 if errors occur.
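As a quick sketch of the grouping and negation described above (the name patterns are only
illustrative), conditions can be combined like this:
[root@localhost ~]# find . \( -name "*.txt" -o -name "*.doc" \) -print
[root@localhost ~]# find . -type f ! -name "*.txt" -print
The first command lists files whose names end in .txt or .doc; the second lists regular files
whose names do not end in .txt.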

Example: finding all text files in current directory


List all filenames ending in .txt, searching in the current folder and all sub folders:
[root@localhost ~]# find . -name "*.txt"
./ram/dir/dir1/t.txt
./ram/dir/t2.txt
./ram/dir/teji.txt
./C___C___PROJECTS/egg.txt
./c1.txt

Example: List all files that belong to a given user (here teji)


[root@localhost ~]# find . -user teji
./Directory oriented commands.doc
./dir
./dir/dir2
./dir/t4.txt
./dir/dir1
./.fonts.cache-1
./.viminfo
./.ssh
./.ssh/known_hosts

Example: List all the directory and sub-directory names


[root@localhost ~]# find . -type d
Output will be like
./.gstreamer-0.8
./tejinder
./tejinder/teji
./.metacity

Example: List all files in those sub-directories (but not the directory names)
[root@localhost ~]# find . -type f

Example: List all the file links:


[root@localhost ~]# find . -type l

Example: List all files (and subdirectories) in your home directory:


[root@localhost ~]# find $HOME

Example: Find files that are over a gigabyte in size:


[root@localhost ~]# find ~/Movies -size +1024M
Example: Find files that have been modified within the last day
[root@localhost ~]# find ~/"Directory oriented commands" -mtime -1

Example: Find files that have been modified within the last 30 minutes
[root@localhost ~]# find ~/"Directory oriented commands" -mmin -30

Example: Find all files that have a .c extension


[root@localhost ~]# find / -name "*.c" -print

Example: How do I find all core files and delete them?


[root@localhost ~]# find / -name "core*" -ok rm {} \;

Example: How do I find all *.c files having a 2 KB file size?
Syntax: find -size number[ckwb]
number - the size, in the unit given by the suffix
c - bytes
k - kilobytes
w - words (2-byte)
b - 512-byte blocks
[root@localhost ~]# find / -name "*.c" -size 2k -print

Example: How do I find all files accessed within the last 2 days?


Syntax: find -type f -atime -days
[root@localhost ~]# find /home/teji -type f -atime -2

Example: Find all files NOT accessed in a given period (2 days)?


Syntax: find -type f -atime +days
[root@localhost ~]# find /home/teji -type f -atime +2

Example: How do I find all special block files in my system?


Syntax: find -type [bcdpfls]
b block device
c character device
d directory
p named pipe (FIFO)
f regular file
l symbolic link
s socket
[root@localhost ~]# find / -type b -print

ln
The link command is used to establish an additional filename (a hard link) for a specified file. It
is not a duplicate copy; it is just like calling one person by two names.
Syntax: ln <filename> <additional-filename>
Example: creating a new link for a file.
The following command shows the current number of links for ram.txt:
[root@localhost ~]# ls -l ram.txt
-rw-r--r-- 1 root root 33 Dec 14 14:22 ram.txt
The following command displays the contents of ram.txt:
[root@localhost ~]# cat ram.txt
hello
india
is
a great country
After execution of the following command, ram.txt has two links:
[root@localhost ~]# ln ram.txt ram1.txt
Now the contents of ram.txt can also be seen through ram1.txt.
[root@localhost ~]# cat ram1.txt
hello
india
is
a great country
[root@localhost ~]# ls -l ram.txt
-rw-r--r-- 2 root root 33 Dec 14 14:22 ram.txt
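ln can also create symbolic (soft) links with the -s option. A minimal sketch, reusing ram.txt
from above (the link name and the date shown are only illustrative):
[root@localhost ~]# ln -s ram.txt ramlink.txt
[root@localhost ~]# ls -l ramlink.txt
lrwxrwxrwx 1 root root 7 Dec 14 14:25 ramlink.txt -> ram.txt
Unlike a hard link, a symbolic link is a separate small file that merely points to the original
file name, so the link count of ram.txt does not change.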

2.4 FILE ORIENTED COMMANDS IN LINUX

cat
Syntax: cat [-Options] [File]...
Options:
-n To number all the lines in a file.
[root@localhost ~]# cat >te.txt
hi all
what is ur name

what abt u?
[root@localhost ~]# cat te.txt -n
1 hi all
2 what is ur name
3
4 what abt u?

-b Numbers non-blank output lines


Example:
[root@localhost ~]# cat te.txt -b
1 hi all
2 what is ur name

3 what abt u?
-T display TAB characters as ^I
Example: To check this option creates a file with tabs.
[root@localhost ~]# cat >t2.txt
c programs are easier to debug as compare to assembly language
What abt c++?

In the output, all tabs are displayed as ^I:


[root@localhost ~]# cat t2.txt -T
c^Iprograms are easier to debug^Ias compare to ^Iassembly language
What^Iabt^Ic++?

-E display $ at end of each line


Example:
[root@localhost ~]# cat t2.txt -E
c programs are easier to debug as compare to assembly language$
What abt c++?$

Purpose of cat command:


1) This command is used with redirection operator (>) to create new files.
Example:
[root@localhost ~]# cat >teji.txt
hi i am an indian
hello world
what abt you?
Press Ctrl+D to save the file.

2) Display the contents of a file.


Example:
[root@localhost ~]# cat teji.txt
hi i am an indian
hello world
what abt you?

3) This command is also used to copy a file into another file.


[root@localhost ~]# cat teji.txt>teji1.txt
After execution of this command contents of teji.txt will be copied into teji1.txt.
using cat command we can check the contents of teji1.txt.
[root@localhost ~]# cat teji1.txt
hi i am an indian
hello world
what abt you?

4) Cat command is also used to display contents of multiple files one by one.
[root@localhost ~]# cat teji.txt t.txt
After execution of this command contents of teji.txt and t.txt will be displayed one by
one.
hi i am an indian
hello world
what abt you?
c is a middle level language.
c++ is an object oriented language.

5) cat command is also used to concatenate files into another file.


[root@localhost ~]# cat teji.txt t.txt >> t1.txt
After execution of this command the contents of teji.txt and t.txt are concatenated and
appended to t1.txt.
We can see the contents of the concatenated file using the following command.
[root@localhost ~]# cat t1.txt
hi i am an indian
hello world
what abt you ?c is a middle level language.
C++ is an object oriented language

cp
Copy one or more files to another
Syntax: cp [options]... Source Dest
cp [options]... Source... Directory

Example: copying single file with cp


[root@localhost ~]# cp t2.txt t3.txt
After execution of this command contents of t2.txt will be copied into t3.txt.
[root@localhost ~]# cat t3.txt
c programs are easier to debug as compare to assembly language
What abt c++?
Example: copying multiple files with cp
Syntax: cp [-i] file1 file2 ... target_dir
[root@localhost ~]# mkdir dir
This will create a new directory with name dir.
[root@localhost ~]# cp -i t2.txt teji.txt dir
After execution of this command t2.txt and teji.txt will be copied into the dir directory (-i prompts before overwriting existing files).

Example: copying one directory to another directory with cp


Create a new directory where you want to copy contents of existing directory. After
execution of this command new directory is created with name ram.
[root@localhost ~]#mkdir ram
After execution of this command contents of directory dir will be copied into directory
ram.
[root@localhost ~]# cp -R dir ram
Example: Copy a file from/to a DOS file system (no mounting necessary)
Syntax: mcopy source destination
[root@localhost ~]# mcopy a:\autoexec.bat ~/junk

rm
This command is used to remove a file from a specified directory.
Note: To remove a file, you must have write permission on the directory that contains the file.
Syntax: rm [options ] <filename>

Example: removing a single file from a specified directory


[root@localhost ~]# rm dir/t2.txt
rm: remove regular file `dir/t2.txt'? y

Example: removing files without confirmation


Remove (delete) files. You must own the file in order to be able to remove it. On many
systems you will be asked for confirmation of deletion; if you don't want this, use the "-f"
(=force) option.
[root@localhost ~]# rm -f *
This will remove all files in my current working directory, no questions asked.

Example: removing files recursively


(recursive remove) Remove files, directories, and their subdirectories. Careful with this
command as root--you can easily remove all files on the system with such a command
executed on the top of your directory tree, and there is no undelete in Linux (yet). But if
you really wanted to do it (reconsider), here is how (as root):
[root@localhost ~]#rm -rf /*
This command will forcefully delete all the files.

mv
Move or rename files or directories. mv can move only regular files across file systems.
Syntax: mv [options]... Source Destination
mv [options]... Source... Directory
If the last argument names an existing directory, `mv' moves each other given file into a
file with the same name in that directory. Otherwise, if only two files are given, it
renames the first as the second. It is an error if the last argument is not a directory and
more than two files are given.

Example: Move file to another file:


Moves contents of t2.txt into t5.txt
[root@localhost ~]# mv t2.txt t5.txt
cat command will display the contents of t5.txt
[root@localhost ~]# cat t5.txt
c programs are easier to debug as compare to assembly language
What abt c++?
The message "No such file or directory" will be displayed because the contents have been moved to t5.txt.
[root@localhost ~]# cat t2.txt
cat: t2.txt: No such file or directory

Example: Move file to the specified folder:


Move teji.txt to the dir folder with name t4.txt.
Now dir contains file t4.txt
[root@localhost ~]# mv teji.txt ~/dir/t4.txt
cat will display contents of file.
[root@localhost ~]# cat ~/dir/t4.txt
c programs are easier to debug as compare to assembly language
What abt c++?

Example: Remove existing destination files and never prompt the user.
[root@localhost ~]# mv -f t5.txt teji.txt
[root@localhost ~]# cat teji.txt
c programs are easier to debug as compare to assembly language
What abt c++?

Example: Rename a bunch of file extensions


Change *.txt into *.doc
[root@localhost ~]# for i in *.txt; do mv ./"$i" "${i%txt}doc"; done
After execution of this command all files with a .txt extension in the home directory will
be renamed with a .doc extension. Files inside subfolders keep their original extension.

Example: Print the name of each file before moving it.


-v (--verbose) option is used for this purpose.
Example:
[root@localhost ~]# mv -v t2.txt t1.txt `t2.txt' -> `t1.txt'

wc
This command is used to display the number of lines, words and characters of
information stored on the specified file.
Syntax: wc [options] <filename>
Options:
-l Displays the number of lines in a specified file.
-w Displays the number of words in a specified file.
-c Displays the number of characters in a specified file.

Example: Displaying number of words, lines and characters in a file


[root@localhost ~]# cat >ram.txt
hello
india
is
a great country
[root@localhost ~]# wc ram.txt
4 6 33 ram.txt
ram.txt contains 4 lines,6 words and 33 characters
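The options can also be used individually; a small sketch with the same ram.txt:
[root@localhost ~]# wc -l ram.txt
4 ram.txt
[root@localhost ~]# wc -w ram.txt
6 ram.txt
[root@localhost ~]# wc -c ram.txt
33 ram.txt
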
file
This command reports the general classification of a specified file. It tells us whether a file is
ASCII text, C program text, data, executable, empty, etc.
Syntax: file <filename>

Example: To check file type of any specified file


[root@localhost ~]# file t4.doc
t4.doc: ASCII fortran program text

Example: To check file type all files in home directory


[root@localhost ~]# file *
ani.sh~: ASCII English text
linux.doc: Microsoft Office Document
linux.sxw: Zip archive data, at least v2.0 to extract
ram: directory
ram.txt: ASCII text
rpms: directory
t1.doc: ASCII text
t1.txt: ASCII fortran program text
t3.doc: ASCII fortran program text
t4.doc: ASCII fortran program text
t.doc: ASCII fortran program text
teji.doc: Microsoft Office Document
tejinder: directory

comm
This command takes two files as input (the files are expected to be in sorted order) and tells
what is common between them. It compares each line of the first file with its corresponding
line in the second file. Its output uses three columns:
1) Column 1 contains lines unique to file1.
2) Column 2 contains lines unique to file2.
3) Column 3 contains lines common to file1 and file2.
Syntax: comm [-options] <filename1> <filename2>
Options:
-1 suppresses listing of column 1
-2 suppresses listing of column 2
-3 suppresses listing of column 3

Example: Without suppressing finding common between two files.


[root@localhost ~]# cat >c1.txt
who are you?
what is your name?
what abt others?
[root@localhost ~]# cat >c2.txt
who are you?
this is jmit radaur college
what abt other colleges?
[root@localhost ~]# comm c1.txt c2.txt
who are you?
this is jmit radaur college
what abt other colleges?
what is your name?
what abt others?
This command displays the lines that are unique to c1.txt in column1, the lines that are
unique to c2.txt in column2 and common lines in column3.

Example: Finding common between two files


This can be done with the help of suppression.
[root@localhost ~]# comm -12 c1.txt c2.txt
who are you?
This command suppresses column 1 and column 2, displaying only the lines common to file1
and file2.

cmp
This command compares two files byte by byte and displays the first mismatch on the
screen.
Syntax: cmp <filename1><filename2>

Example: Comparing two files


[root@localhost ~]# cmp c1.txt c2.txt
c1.txt c2.txt differ: byte 14, line 2

File access permissions:


File: A file is a collection of data items stored on disk. It is the object in which you store
information, data, music, pictures, movies, sound or books; in fact, whatever you store in the
computer is stored in the form of a file. Files are always associated with devices like hard
disk, floppy disk etc. A file is the last object in your file system tree.
Types of files in Linux:
In Linux everything is a file. Users and groups are used to control access to files and
resources. Users log into the system by supplying their user names and passwords. Every file
in the system is owned by a user and associated with a group. There are 3 types of files in
Linux.
Ordinary file: These files consist of a stream of data that is stored on some magnetic
media.
Directory file: Directories keep track of all files and subdirectories. Directories do not
contain any data themselves; a directory is a group of files. Directories are divided into two
types:
(1) Root directory: Strictly speaking, there is only one root directory in your system,
shown by / (forward slash). It is the root of your entire file system and cannot be
renamed or deleted.
(2) Subdirectory: A directory under the root (/) directory is a subdirectory, which can be
created and renamed by the user. For example, bin is a subdirectory under the / (root)
directory. Directories are used to organize your data files and programs more efficiently.
Device files: In Linux every device is also a file. These are also called special files.

The process which manages files, i.e. the creation, deletion and copying of files and the
granting of read and write access to users, is called file management.
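The type of a file can be seen in the first character of an ls -l listing; a rough sketch using
files from earlier examples (the exact sizes, dates and device numbers are only illustrative):
[root@localhost ~]# ls -ld ram.txt dir /dev/sda3
-rw-r--r-- 1 root root 33 Dec 14 14:22 ram.txt
drwxr-xr-x 4 root root 4096 Dec 18 14:30 dir
brw-rw---- 1 root disk 8, 3 Dec 18 09:10 /dev/sda3
Here - marks an ordinary file, d a directory and b a block device (special) file.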

How to hide folders or files


In Linux you can see the hidden files by using the command
[root@localhost ~]# ls -a
which lists all the files inside a folder.
We can hide a file by putting a dot (.) in front of the file name.

Example:
[root@localhost ~]# cat> .ramesh.txt
Now if you list files using the ls or ls -l command you will not be able to see this file.
The hidden folders can also be created in the same manner

Example :
[root@localhost ~]# mkdir .dirname
mkdir is the make directory command and .dirname is your folder name.

Example: Finding the biggest files


you can find the biggest files in the current directory with:
[root@localhost ~]# ls -lSrh
The "r" causes the large files to be listed at the end and the "h" gives human readable
output (MB and such). You could also search for the biggest MP3.
[root@localhost ~]# ls -lSrh *.mp*
You can also look for the largest directories with:
[root@localhost ~]# du -kx | egrep -v "./.+/" | sort -n
Example: How to list today's files only
Suppose that earlier in the day we created a text file which is now urgently required, but we
can't remember the file name we gave it, and the home folder contains a large number of files.
[root@localhost ~]# ls -al --time-style=+%D | grep `date +%D`

File extensions
Compressed and Archived Files
.bz2 - a file compressed with bzip2
.gz - a file compressed with gzip
.tar - a file archived with tar (short for tape archive), also known as a tar file
.tbz - a tarred and bzipped file
.tgz - a tarred and gzipped file
.zip - a file compressed with ZIP compression, commonly found in MS-DOS applications
File Formats
.au - an audio file
.gif - a GIF image file
.html/.htm - an HTML file
.jpg - a JPEG image file
.pdf - an electronic image of a document; PDF stands for Portable Document Format
.png - a PNG image file (short for Portable Network Graphic)
.ps - a PostScript file, formatted for printing
.txt - a plain ASCII text file
.wav - an audio file
.xpm - an image file
System Files
.conf - a configuration file (configuration files sometimes use the .cfg extension as well)
.lock - a lock file; determines whether a program or device is in use
.rpm - a Red Hat Package Manager file used to install software
Programming and Scripting Files
.c - a C program language source code file
.cpp - a C++ program language source code file
.h - a C or C++ program language header file
.o - a program object file
.pl - a Perl script
.py - a Python script
.so - a library file
.sh - a shell script
.tcl - a TCL script

Operation on files:
Copy file
Delete file
Rename file
Move file
Changing file date & time stamp
Creating symbolic link
Changing file permission or ownership
Searching files
Compressing / Decompressing files
Comparison between files
Printing files on printer
Sorting files etc

Linux users:
user (u): the user who owns the file
group (g): other users in the file's group
others (o): other users not in the file's group
all (a): all users

Accessing Modes:
Linux supports three access modes:
read mode (r): permission to read a file or list a directory's contents.
write mode (w): permission to write to a file, or to create and remove files from a directory.
execute mode (x): permission to execute a program or to change into a directory.

Note: If no access permission is given then “-” will be displayed.

chgrp
Change group ownership. 'chgrp' changes the group ownership of each given file to the given
group (which can be either a group name or a numeric group ID) or to the group of an
existing reference file. Before assigning a file to a particular group, the group must exist.
Example: Changing group ownership for a file.
[root@localhost ~]# ls -l ram.txt
---x-w---x 2 kiran root 22 Dec 18 11:27 ram.txt
[root@localhost ~]# chgrp teji ram.txt
After execution group owner of file ram.txt is changed to teji in place of root.
[root@localhost ~]# ls -l ram.txt
---x-w---x 2 kiran teji 22 Dec 18 11:27 ram.txt

chown
This command is used to change the ownership of a specified file. The superuser and the
owner of a file can change the ownership of the file. The new owner must exist in the user list.
Syntax: chown <new-owner-name> <filename>

Example: Changing file ownership.


[root@localhost ~]# chown teji ram.txt
After execution the owner of the file is changed to teji in place of kiran.
[root@localhost ~]# ls -l ram.txt
---x-w---x 2 teji teji 22 Dec 18 11:27 ram.txt

chmod
Change access permissions, change mode. By default, read and write permission is given
to user; only read permission is given to user group and others.

Numeric mode:
Syntax: chmod <three-digit-number> <file1> ...
Use the following table to allocate permissions:
Numeric value Octal value Permissions
0 000 No permission
1 001 Execute permission
2 010 Write permission
3 011 Write and execute permission
4 100 Read permission
5 101 Read and execute permission
6 110 Read and write permission
7 111 All permissions (read, write and execute)

From one to four octal digits may be given; any omitted digits are assumed to be leading zeros.
The octal (0-7) value is calculated by adding up the values for each digit
User (rwx) = 4+2+1 = 7
Group(rx) = 4+1 = 5
World (rx) = 4+1 = 5

Examples
chmod 400 file - Read by owner
chmod 040 file - Read by group
chmod 004 file - Read by world
chmod 200 file - Write by owner
chmod 020 file - Write by group
chmod 002 file - Write by world
chmod 100 file - execute by owner
chmod 010 file - execute by group
chmod 001 file - execute by world
To combine these just add the numbers together:
chmod 444 file - Allow read permission to owner and group and world
chmod 777 file - Allow everyone to read, write, and execute file
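As a small worked example of the octal arithmetic above, reusing t1.doc from the ls examples
(the listing details are carried over from there):
[root@localhost ~]# chmod 754 t1.doc
[root@localhost ~]# ls -l t1.doc
-rwxr-xr-- 1 root root 107 Dec 14 10:51 t1.doc
Here 7 = 4+2+1 gives rwx to the owner, 5 = 4+1 gives r-x to the group and 4 gives r-- to others.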

Symbolic Mode
Syntax: chmod [ugoa][+-=][rwx] <filename(s)>
multiple symbolic operations can be given, separated by commas. A combination of the
letters `ugoa' controls which users' access to the file will be changed. If none of these are
given, the effect is as if `a' were given, but bits that are set in the umask are not affected.
all users (a) is effectively
user + group + others

'+' adds the permissions to the existing permissions of each file.


'-' removes permissions from the existing permissions of each file.
'=' assigns absolute permissions (all previously given permissions are detached; only the
currently given permissions are allocated).
Read (r),
Write (w),
Execute (x),
The permissions that the user who owns the file currently has for it (u).
The permissions that other users in the file's group have for it (g).
Permissions that other users not in the file's group have for it (o)

Example: Deny read permission to everyone:


create a file ram.txt.
[root@localhost ~]# cat >ram.txt
hi hello world
indian [ctrl+d]
Check default permissions
[root@localhost ~]# ls -l ram.txt
-rw-r--r-- 2 root root 33 Dec 14 14:22 ram.txt
[root@localhost ~]# chmod a-r ram.txt
[root@localhost ~]# ls -l ram.txt
--w------- 2 root root 33 Dec 14 14:22 ram.txt

Example : Allow Execute permission to everyone


[root@localhost ~]# ls -l ram.txt
--w------- 2 root root 33 Dec 14 14:22 ram.txt
[root@localhost ~]# chmod a+x ram.txt
[[root@localhost ~]# ls -l ram.txt
--wx--x--x 2 root root 22 Dec 18 11:27 ram.txt

Example: Make a file readable and writable by the group and others:
[root@localhost ~]# chmod go+rw ram.txt
[root@localhost ~]# ls -l ram.txt
---xrwxrwx 2 root root 22 Dec 18 11:27 ram.txt

Example: Give the owner full permissions, give read and execute permission to the group and
others, and turn on the set-group-ID bit:
[root@localhost ~]# ls -l ram.txt
---xrwxrwx 2 root root 22 Dec 18 11:27 ram.txt
[root@localhost ~]# chmod u=rwx,go=rx,g+s ram.txt
[root@localhost ~]# ls -l ram.txt
-rwxr-sr-x 2 root root 22 Dec 18 11:27 ram.txt

Example:Assigning absolute permissions to a file.


[root@localhost ~]# ls -l ram.txt
-rwxr-sr-x 2 root root 22 Dec 18 11:27 ram.txt
[root@localhost ~]# chmod a=x ram.txt
After execution of this command all previously existing permissions are detached. Only
execute permission to all is allocated.
[root@localhost ~]# ls -l ram.txt
---x--x--x 2 root root 22 Dec 18 11:27 ram.txt

Example: Giving all permissions to others only and detaching all permissions from the user
and the group.
[root@localhost ~]# chmod a-rwx,o+rwx ram.txt
After execution, all permissions are first detached from all (user, group, others), then all
permissions are allocated to others.

[root@localhost ~]# ls -l ram.txt


-------rwx 2 root root 22 Dec 18 11:27 ram.txt

Example: Giving different types of permissions to different classes of users.


[root@localhost ~]# ls -l ram.txt
-------rwx 2 root root 22 Dec 18 11:27 ram.txt
[root@localhost ~]# chmod a-rwx,u+x,g+w,o+x ram.txt
After execution, all permissions are first detached from all (user, group, others). Then
execute permission is allocated to the user, write permission to the group and execute
permission to others.
[root@localhost ~]# ls -l ram.txt
---x-w---x 2 root root 22 Dec 18 11:27 ram.txt

Important note: Here we discuss the 3 special permission bits that exist in addition to the
common read/write/execute permissions.
Example:
drwxrwxrwt - Sticky Bits - chmod 1777
drwsrwxrwx - SUID set - chmod 4777
drwxrwsrwx - SGID set - chmod 2777

Sticky bits: Sticky bits are mainly set on directories. If the sticky bit is set for a directory,
only the owner of that directory or the owner of a file can delete or rename a file within
that directory.

SUID: SUID is set on files (mainly executables and scripts). The SUID permission makes a
program run as the user who owns the file, rather than the user who started it.

SGID: If a file is SGID, it will run with the privileges of the file's group owner, instead of
the privileges of the person running the program; the effect is similar to SUID, but here the
program runs under the group's ownership. You can also set SGID for directories.
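These bits can also be set symbolically with chmod; a minimal sketch using the directory dir
and the file teji.doc from earlier examples:
[root@localhost ~]# chmod +t dir
Sets the sticky bit on the directory dir.
[root@localhost ~]# chmod u+s teji.doc
Sets SUID on teji.doc.
[root@localhost ~]# chmod g+s dir
Sets SGID on the directory dir.
In an ls -l listing the sticky bit appears as t in the others field, SUID as s in the user field and
SGID as s in the group field.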

tac
tac command reverses a file, last line becomes first line.
Syntax: tac <filename>
Example: To display contents of a file in reverse order.
[root@localhost ~]# cat >text.txt
hello
c
language is a middle level language
C++ is an object oriented language
java is purely object oriented language

[root@localhost ~]# tac text.txt


java is purely object oriented language
C++ is an object oriented language
language is a middle level language
c
hello

nl
Numbers lines and writes the result to standard output. By default it numbers only non-blank lines.
Syntax: nl <filename>

Example: To assign numbers to all non blank lines in a file.


[root@localhost ~]# nl text.txt
1 hello
2 c
3 language is a middle level language
4 C++ is an object oriented language

5 java is purely object oriented language.

expand
Convert tabs to spaces; write the contents of each given file to standard output, with tab
characters converted to the appropriate number of spaces. If no file is given, or for a file
of `-', standard input is read.
Syntax: expand [options]... [file]...

Example: To convert tabs into blank.


[root@localhost ~]# cat >text1.txt
hi india will win the test match
cricket is a passion
[root@localhost ~]# expand -t 2 text1.txt
hi india will win the test match
cricket is a passion
Note: The unexpand command is used to convert spaces into tabs; it performs the opposite
operation to expand (a short sketch follows).
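A minimal sketch, reusing text1.txt from the example above (-a tells unexpand to convert all
suitable runs of spaces, not just leading ones):
[root@localhost ~]# expand -t 2 text1.txt | unexpand -a -t 2 | cat -T
This pipeline expands the tabs to spaces, converts the spaces back into tabs, and then shows
the reconstructed tabs as ^I with cat -T.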

touch
touch command is used to change the last modification and access time of a specified file
to a specified time.
Syntax: touch [-t MMDDhhmm] <filename1> ...
Where MM Month (01-12)
DD Day (01-31)
hh Hour (00-23)
mm Minute (00-59)
If the -t MMDDhhmm option is not used, the current date and time are taken by default.

Example: To convert last modification and access date and time to current date and
time.
[root@localhost ~]# ls -l dir/t4.txt
-rw-r--r-- 1 root root 77 Dec 14 11:05 dir/t4.txt
[root@localhost ~]# touch dir/t4.txt
[root@localhost ~]# ls -l dir/t4.txt
-rw-r--r-- 1 root root 77 Dec 18 14:20 dir/t4.txt

Example: To convert last modification and access date and time to specified date
and time.
[root@localhost ~]# ls -l dir/t4.txt
-rw-r--r-- 1 root root 77 Dec 18 14:20 dir/t4.txt
[root@localhost ~]# touch -t 08232245 dir/t4.txt
[root@localhost ~]# ls -l dir/t4.txt
-rw-r--r-- 1 root root 77 Aug 23 22:45 dir/t4.txt

Example: To set the last modification and access date and time of all files in the current
directory to the current date and time.
[root@localhost ~]#touch *
[root@localhost ~]# ls -l
-rw-r--r-- 1 root root 43 Dec 18 14:30 teji1.doc
-rwxr-xr-x 1 root root 3843072 Dec 18 14:30 teji.doc
drwxr-xr-x 3 root root 4096 Dec 18 14:30 tejinder
-rw-r--r-- 1 root root 54 Dec 18 14:30 text1.txt
-rw-r--r-- 1 root root 118 Dec 18 14:30 text.txt
drwxr-xr-x 2 root root 4096 Dec 18 14:30 x
drwxr-xr-x 2 root root 4096 Dec 18 14:30 y
drwxr-xr-x 2 root root 4096 Dec 18 14:30 z

tail
Output the last part of files.
Syntax: tail +/-n <filename>

Example: To display the last entries of a long listing of the home directory.


[root@localhost ~]# ls -l | tail
-rw-r--r-- 1 root root 54 Dec 18 14:30 text1.txt
-rw-r--r-- 1 root root 118 Dec 18 14:30 text.txt
drwxr-xr-x 2 root root 4096 Dec 18 14:30 x
drwxr-xr-x 2 root root 4096 Dec 18 14:30 y
drwxr-xr-x 2 root root 4096 Dec 18 14:30 z

Example: To display the last two lines of a file.


First, display all the contents of file text.txt:
[root@localhost ~]# cat text.txt
hello
c
language is a middle level language
C++ is an object oriented language
java is purely object oriented language
Now display only the last two lines:
[root@localhost ~]# tail -2 text.txt
C++ is an object oriented language
java is purely object oriented language

Example: To display lines starting from second line onward.


[root@localhost ~]# tail +2 text.txt
c
language is a middle level language
C++ is an object oriented language
java is purely object oriented language

head
Output the first part of file(s)
Syntax: head -n <filename>
If -n option is used, then first n lines of a file are displayed.

Example: To display first n lines of a file.


[root@localhost ~]# head -3 text.txt
hello
c
language is a middle level language

Example: To display first n files of directory listing of current directory.


[root@localhost ~]# ls -l | head
total 3964
-rw-r--r-- 1 root root 141 Dec 18 14:30 ani.sh~
-rw-r--r-- 1 root root 49 Dec 18 14:30 c1.txt
-rw-r--r-- 1 root root 65 Dec 18 14:30 c2.txt
drwxr-xr-x 3 root root 4096 Dec 18 14:30 C___C___PROJECTS
drwxr-xr-x 2 root root 4096 Dec 18 14:30 Desktop
drwxr-xr-x 4 root root 4096 Dec 18 14:30 dir
dr-xr-xr-x 9 root root 4096 Dec 18 14:30 ishan
drwxr-xr-x 4 root root 4096 Dec 18 14:47 linux book
drwxr-xr-x 2 root root 4096 Dec 18 14:30 ram

dd
This command copies files and converts them from one format to another.
Syntax: dd [option=value ...]
Where the options can be:
if= input file name
of= output file name
conv= file conversion specification. More than one conversion may be specified
by separating them with commas. The values for this option may be as
follows:
lcase converts uppercase letters into lowercase
ucase converts lowercase letters into uppercase
ascii converts the file by translating the character set from
EBCDIC to ASCII
ebcdic converts the file by translating the character set from
ASCII to EBCDIC

Example: To convert uppercase to lowercase


[root@localhost ~]# cat test1.txt
HI I AM AN INDIAN
RAJNI BELONGS TO RADAUR TOWN.
[root@localhost ~]# dd if=test1.txt of=test2.txt conv=lcase
0+1 records in
0+1 records out
[root@localhost ~]# cat test2.txt
hi i am an indian
rajni belongs to radaur town.

Example: To convert lowercase to uppercase.


[root@localhost ~]# cat test2.txt
hi i am an indian
rajni belongs to radaur town.
[root@localhost ~]# dd if=test2.txt of=test3.txt conv=ucase
0+1 records in
0+1 records out
[root@localhost ~]# cat test3.txt
HI I AM AN INDIAN
RAJNI BELONGS TO RADAUR TOWN.

Example: Converting with conv=ascii (EBCDIC to ASCII). Note that test2.txt is already an ASCII file, so translating it as if it were EBCDIC produces unreadable output, as shown below.


[root@localhost ~]# dd if=test2.txt of=test3.txt conv=ascii
0+1 records in
0+1 records out
[root@localhost ~]# cat test3.txt
/_ /> > />シ/|> %?> ? / / ? ># /_
/> > />シ/|> %?> ? / / ? >

umask
umask is used for changing the default permissions. When we create directories they have
default permissions 755 (rwxr-xr-x) and files have default permission 644 (rw-r--r--).
[root@jmit ~]# umask
0022
Current umask is displayed. Subtracting 777-022=755 default permissions.
[root@jmit ~]# umask 077
For setting permission 077 i.e 700(rwx for owner and no permission group and others)
This umask is set only for the current session, and umask cannot change the permissions of
existing files and directories. A normal user has a different umask from root.
The 777 base applies to directories; for files the default permission is 644 and the maximum
permission that can be set is 666 for normal text files.
Thus 666-022=644 is the default setting. When we set the umask to 077, directories get 700
permissions and files get 600 permissions.
Example: How to store umask permanently
Every user has a .bashrc file in which startup or logon commands are written; it is read each
time a new shell starts. To make the umask permanent, append umask <permission> to this
file, for example umask 077, as sketched below.
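A minimal sketch of making it permanent for the current user (assuming a bash login and the
usual ~/.bashrc):
[root@localhost ~]# echo "umask 077" >> ~/.bashrc
[root@localhost ~]# source ~/.bashrc
[root@localhost ~]# umask
0077
The echo appends the umask line to ~/.bashrc, source re-reads the file in the current shell, and
umask confirms the new value.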

alias and unalias


Create an alias; aliases allow a string to be substituted for a word when it is used as the
first word of a simple command.
Syntax: alias [-p] [name=value]
If no value is given, `alias' will print the current value of the alias.
[root@jmit ~]# alias
alias cp='cp -i'
alias l.='ls -d .* --color=tty'
alias ll='ls -l --color=tty'
alias ls='ls --color=tty'
alias mv='mv -i'
alias rm='rm -i'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
With the -p option, alias prints the list of aliases on the standard output in a form that allows
them to be reused as input. unalias removes each given name from the list of aliases. If `-a'
is supplied, all aliases are removed.
Syntax: unalias [-a] [name ... ]

Example: How to set an alias


[root@jmit ~]# alias list='ls -l'
[root@jmit ~]# list test.txt
-rw-r--r-- 1 root root 88 Dec 29 14:02 test.txt

Example: How to make an alias permanent


We can make alias permanent by writing it in the .bashrc file. For example append alias
list='ls -l' in this file. OR Create a .bash_aliases file, and type the alias commands into
the file. .bash_aliases will run at login.

Add user defined aliases to ~/.bashrc file.


[root@localhost~]#alias name='command'
[root@localhost~]#alias name='command arg1 arg2'
Examples:
[root@localhost~]#alias c='clear'
To clear the terminal, enter:
c
Create an alias called d to display the system date and time, enter:
[root@localhost~]#alias d='date'
[root@localhost~]#d
(the current date and time are printed)
How do I remove the alias?
Aliases are created and listed with the alias command, and removed with the unalias
command.
Syntax: unalias alias-name
[root@localhost~]#unalias c
Several aliases can be removed at once:
[root@localhost~]#unalias c d
To list currently defined aliases, enter:
[root@localhost~]#alias
alias c='clear'
alias d='date'

If you need to unalias the alias called d, enter:


[root@localhost~]#unalias d
alias
If the -a option is given, then remove all alias definitions, enter:
[root@localhost~]#unalias -a
alias

Permanently adding aliases to my session


If you want to add aliases for every user, place them either in /etc/bashrc or
/etc/profile.d/useralias.sh file. Please note that you need to create
/etc/profile.d/useralias.sh file. User specific alias must be placed in
~/.bashrc ($HOME/.bashrc) file.
Sample ~/.bashrc file
Example:
~/.bashrc script:
# protect cp, mv, rm command with confirmation
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'

# vi is vim
alias vi='vim'
Ignoring an alias
To ignore an alias called ls and run the real ls command, enter:
\ls
OR
"ls"
Or just use the full path:
/bin/ls

mc (Midnight Commander Program for Handling Files)


The mc command, called Midnight Commander, is a graphical interface for handling
files.
[root@jmit ~]# mc
This section does not cover all of the details of the mc command. But here are the
highlights of its features:

• Provides visual interface to two directories at a time, and directory browsing with
mouse clicks
• Allows menu-driven file operations with dialogs and mouse or keyboard support
• Has an open command line to your shell
• Runs commands through mouse clicks
• Extensive, built-in hypertext help screens
• Emulates and supports the ls, cp, ln, mv, mkdir, rmdir, rm, cd, pwd, find, chown,
chgrp, and tree commands

• Compares directory contents
• Uses customized menus so you can build your own commands
• Can use network links for telnet or FTP operations
• Offers mouse-click decompression of files (see gzip)
• Can undelete files (if your Linux filesystem is configured to support this)

Midnight Commander is a handy and convenient tool to use for file handling and
directory navigation. You will have to invest some time in learning how to use this
program, but if you've used similar file-management interfaces, you'll feel right at home.

2.5 Communication Oriented Commands


Linux provides communication facilities through which a user can communicate with other
users. The communication can be online or offline. In online communication, the user to
whom the message is to be sent (the recipient) must be logged on to the system. In offline
communication the recipient need not be logged on to the system.

wall(write to all)
This command is used by the administrator to send a message to all currently logged-in users.
Syntax:
[root@localhost ~]#wall
<message to be send>
^d
Example:
[root@localhost ~]#wall
Save your data
No power backup
^d

write
This online communication command lets you write messages on another user's
terminal.
Syntax:
[root@localhost ~]# write <recipient-login-name>
<messages>
^d
Example:
[root@localhost ~]# write Pooja
Hello Pooja sharma
what are you doing?
^d
On the execution of this command, the specified message is displayed on the terminal of
the user Pooja. If the recipient does not want to allow such messages (sent by other
users) on his terminal, he can use the mesg command. If he later wants to revoke this
setting and allow anyone to communicate with him, he can use the mesg command again.
The general format is: mesg [y/n]
y allows write access to your terminal.
n disallows write access to your terminal.
mesg without an argument reports the current state of the message setting.
The finger command can be used to see which users are currently logged on to the
system and to know which users' terminals are set to mesg y and which are set to mesg n.
A * symbol is placed against those terminals where mesg is set to n.

mail
This command offers offline communication.
Syntax: To send mail,
[root@localhost ~]# mail <user-name>
<Message>
Ctrl+D
The mail program mails the message to the specified user. If the user (recipient) is
logged on to the system, the message "you have new mail" is displayed on the recipient's
terminal. Whether or not the user is logged on to the system, the mail is kept in the
mailbox until the user issues the necessary command to read it. To check mail, give the
mail command without arguments. This command lists all the incoming mails received
since the latest usage of the mail command. A & symbol is displayed at the bottom; this is
called the mail prompt. Here we can issue several mail prompt commands; some common
ones are given below.
Mail Prompt Command Function
+ Display the next mail message, if it exists.
- Display the previous mail message, if it exists.
<number> Display the specified mail number, if it exists.
d Delete the currently viewed mail.
d <number> Delete the specified mail number, if it exists.
s <filename> Store the current mail in the specified file.
s <number> <filename> Store the specified mail number in the specified file.
r Reply to the current mail.
r <number> Reply to the specified mail number.
q Quit.

2.6 PROCESS ORIENTED COMMANDS IN LINUX


ps
This command is used to know which processes are running at our terminal.
Syntax:
ps -a Lists the processes of all users who are logged on to the system.
ps -t <terminal-name> Lists the processes which are running on the specified terminal.
ps -u <user-name> Lists the processes which are running for the specified user.
ps -x Lists the system processes. Apart from the processes that are generated by users,
there are some processes that keep on running all the time; these processes are called
system processes.
Examples:
[root@localhost ~]# ps
[root@localhost ~]# ps -a
[root@localhost ~]# ps -t tty2
This displays the processes which are running on the terminal tty2.
[root@localhost ~]# ps -u teji
This displays the processes which are running for the user teji.

bg
Put a process in the background. Linux provides the facility for background processes:
while one process is running in the foreground, another process can be executed in the
background. An ampersand symbol placed at the end of a command requests background
processing.

Example: [root@localhost ~]# sort emp.doc &


On execution of the command, a number is displayed. This is called the PID (Process
IDentification) number; in Linux every process has a unique PID to identify it. PIDs range
from 0 to 32767.
The command sort emp.doc then runs in the background, and we can execute another
command in the foreground (as normal).

fg
Put a background job in the foreground.
Example:
[root@localhost ~]# sort emp.doc &
[3] 7610
[root@localhost ~]# fg %3

jobs
An alternate way of listing your own processes.
[root@localhost]# sort emp.doc &
[1] 6465

In this case, the prompt returned because the process was put in the background.
[root@localhost]# bg
[1]+ sort emp.doc &

Now that we have a process in the background, it would be helpful to display a list of the
processes we have launched. To do this, we can use either the jobs command or the more
powerful ps command.

[root@localhost]# jobs
[1]+ Running sort emp.doc &

[root@localhost ~]# ps
PID TTY TIME CMD
6464 pts/2 00:00:00 bash
6465 pts/2 00:00:00 sort
6466 pts/2 00:00:00 ps

kill
If you want a foreground command to terminate prematurely, you can press Ctrl+C. This type
of interrupt does not affect background processes, because background processes are
protected by the shell from these interrupt signals. The kill command is used to terminate a
background process.
Syntax:
[root@localhost ~]# kill [-signal number] <PID>
The PID is the process identification number of the process that we want to terminate.
Use the ps command to find the PID of the current processes. By default the kill command
uses signal number 15 (SIGTERM) to terminate a process. But some programs, like the login
shell, simply ignore this signal and continue execution normally. In this case, you can use
signal number 9 (often referred to as a sure kill).
[root@localhost ~]# kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL
5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE
9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2
13) SIGPIPE 14) SIGALRM 15) SIGTERM 16) SIGSTKFLT 17) SIGCHLD
18) SIGCONT 19) SIGSTOP 20) SIGTSTP 21) SIGTTIN
22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO
30) SIGPWR 31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1
36) SIGRTMIN+2 37) SIGRTMIN+3 38) SIGRTMIN+4 39) SIGRTMIN+5
40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8 43) SIGRTMIN+9
44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13
52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9
56) SIGRTMAX-8 57) SIGRTMAX-7 58) SIGRTMAX-6 59) SIGRTMAX-5
60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2 63) SIGRTMAX-1
64) SIGRTMAX
The kill -l command above gives a list of the signals that kill supports. Most are rather
obscure, but several are useful to know:

Signal # Name Description

1 SIGHUP Hang-up signal. Programs can listen for this signal and act (or not act) upon it.

2 SIGINT Interrupt signal. This signal is given to processes to interrupt them. Programs can
process this signal and act upon it. You can also issue this signal directly by typing Ctrl+C
in the terminal window where the program is running.

15 SIGTERM Termination signal. This signal is given to processes to terminate them. Again,
programs can process this signal and act upon it. This is the default signal sent by the kill
command if no signal is specified.

9 SIGKILL Kill signal. This signal causes the immediate termination of the process by the
Linux kernel. Programs cannot listen for this signal.

Now let's suppose that you have a program that is hopelessly hung (Netscape, maybe)
and you want to get rid of it. Here's what you do:

1. Use the ps command to get the process id (PID) of the process you want to
terminate.
2. Issue a kill command for that PID.
3. If the process refuses to terminate (i.e., it is ignoring the signal), send increasingly
harsh signals until it does terminate.
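A minimal sketch of this three-step procedure (the process name netscape and the PID shown
are only illustrative):
[root@localhost ~]# ps -u root | grep netscape
7421 pts/1 00:00:03 netscape
[root@localhost ~]# kill 7421
[root@localhost ~]# kill -9 7421
The first command finds the PID, the second sends the default SIGTERM, and the third sends
SIGKILL if the process still refuses to terminate.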
Example:
[root@localhost ~]# kill 1209
This command terminates the process which has the PID 1209.
[root@localhost ~]# kill -9 1300
This forcefully terminates (sure-kills) the process with PID 1300.
[root@localhost ~]# kill -9 0
This command sends the kill signal to every process in the current process group, including
the login shell, so all of your processes are killed.
[root@localhost ~]# kill %1
This command kills background job number 1.

nohup
If a user wants a process he has started to keep running even after he logs out of the system,
he can use the nohup (no hang-up) command. Note that normally all the processes of a user
are terminated when he logs out.
Syntax:
[root@localhost ~]# nohup <command> &

Example:
[root@localhost ~]# nohup sort dataemp.doc &
434
Here, 434 is the PID of the process. The command sort dataemp.doc will keep executing even
if you log out of the system. The output of this command will be stored in a file named
nohup.out.

at
This command is used to execute the specified Linux commands at a future time.
Syntax:
[root@localhost ~]# at time
<Commands>.
^d
(Press (ctrl+ d) at the end)
Here <time> specifies the time at which the specified commands are to be executed.

Example:
[root@localhost ~]# at 12:00
echo “Lunch break”
^d
at offers the keywords now, noon, midnight, today and tomorrow, which convey special
meanings.
[root@localhost ~]# at noon
echo "lunch break"
^d
at also offers the keywords hours, days, weeks, months and years, which can be used with the
+ operator as shown in the following examples.
[root@localhost ~]# at 12:00 +1 day
This is the same as: at 12:00 tomorrow
[root@localhost ~]# at 13:00 Jan 20, 2003 +2 days
This schedules the commands for 1 pm on January 22, 2003.

atq
This command is used to list the jobs submitted by you to the at queue. For each job it lists
the job number and the scheduled date and time of execution.
Syntax: [root@localhost ~]# atq

atrm
This command is used to remove a job from at queue.
Syntax: atrm job number
Example
[root@localhost ~]# atq
7 2002-12-16 12:00 a teji
8 2002-12-17 12:00 a teji
9 2003-01-22 13:00 a teji
[root@localhost ~]# atrm 8
[root@localhost ~]# atq
7 2002-12-16 12:00 a teji
9 2003-01-22 13:00 a teji

batch
This command is used to execute the specified commands when the system load permits
(i.e. when the system becomes nearly free).
Syntax:
[root@localhost ~]# batch
<commands>
^d
Any job scheduled with batch also goes to the at queue; you can list or delete such jobs with
atq and atrm respectively.
Example:
[root@localhost ~]# batch
sort a.c
sort b.c
^d
pstree
This command is also used to check the processes on the server. The pstree command lists
all the running processes in the form of a tree structure.

Example:
[root@localhost ~]# pstree
init─┬─agetty
├─antirelayd
├─bdflush
├─chkservd
├─4*[courierlogger───couriertcpd]
├─courierlogger───authdaemond───5*[authdaemond───authProg]
├─cpanellogd
├─cpdavd
├─cphulkd.pl
├─cpsrvd-ssl───cpsrvd-ssl
├─crond
├─entropychat
├─exim───exim─┬─3*[exim]
│ └─spamc
├─2*[exim]
├─exim───20*[exim]
├─eximstats
├─hpt_wt
├─httpd───56*[httpd]
├─interchange
├─keventd
├─7*[kjournald]
├─klogd
├─ksoftirqd_CPU0
├─ksoftirqd_CPU1
├─ksoftirqd_CPU2
├─ksoftirqd_CPU3
├─kswapd
├─kupdated
├─mailmanctl───8*[python2.4]
├─mdrecoveryd
├─6*[mingetty]
├─mysqld_safe───mysqld───mysqld───26*[mysqld]
├─named───named───6*[named]
├─portsentry
├─pure-authd
├─pure-ftpd
├─10*[python2.4]
├─scsi_eh_0
├─spamd───2*[spamd]
├─ssh
├─sshd─┬─sshd───sshd───bash───su───bash
│ └─sshd───sshd───bash───su───bash───pstree
├─syslogd
└─xinetd

Also try the -p option of pstree, which shows the PID of each process:


[root@localhost ~]# pstree -p
init(1)─┬─agetty(7480)
├─antirelayd(8658)
├─bdflush(8)
├─chkservd(6224)
├─courierlogger(6833)───couriertcpd(6834)
├─courierlogger(6840)───couriertcpd(6841)
├─courierlogger(6846)───couriertcpd(6847)
├─courierlogger(6852)───couriertcpd(6853)
├─courierlogger(6858)───authdaemond(6859)─┬─authdaemond(6873)
│ ├─authdaemond(6874)───authProg(26164)
│ ├─authdaemond(6875)───authProg(17488)
│ ├─authdaemond(6876)───authProg(8194)
│ └─authdaemond(6877)───authProg(29956)

w
This command also shows the load and users on the server. The w command provides a
brief description of the load, the time, the number of users and the uptime of the server.
Example:
[root@localhost ~]# w
15:23:31 up 6:25, 3 users, load average: 0.01, 0.01, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root :0 09:12 ?xdm? 4:25 1.03s /usr/bin/gnome-
root pts/1 :0.0 12:28 2:54m 0.01s 0.01s bash
root pts/2 :0.0 15:23 0.00s 0.01s 0.00s w

uptime
This command gives the basic information about the uptime and load of the server.

Example:
[root@localhost ~]# uptime
15:23:39 up 6:25, 3 users, load average: 0.01, 0.01, 0.00
top
This command is used to find the load on the server. The top command can also be used to
find the processes and users that cause load on the server. It gives information about the
total number of processes, sleeping processes, zombie processes etc.

Example:
[root@localhost ~]# top
top - 15:24:48 up 6:26, 3 users, load average: 0.16, 0.03, 0.01
Tasks: 113 total, 1 running, 112 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.7% us, 0.3% sy, 0.0% ni, 99.0% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 497840k total, 418308k used, 79532k free, 14036k buffers
Swap: 1052216k total, 128k used, 1052088k free, 145996k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4488 root 15 0 179m 45m 6240 S 1 9.3 2:16.72 X
6896 root 15 0 37336 13m 8608 S 0 2.8 0:01.65 gnome-terminal
1 root 16 0 2996 552 472 S 0 0.1 0:00.52 init
2 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0
3 root 34 19 0 0 0 S 0 0.0 0:00.81 ksoftirqd/0
4 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1
5 root 34 19 0 0 0 S 0 0.0 0:00.84 ksoftirqd/1
6 root 5 -10 0 0 0 S 0 0.0 0:00.03 events/0
7 root 5 -10 0 0 0 S 0 0.0 0:00.03 events/1
8 root 14 -10 0 0 0 S 0 0.0 0:00.01 khelper
9 root 15 -10 0 0 0 S 0 0.0 0:00.00 kacpid
28 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/0
From the above example you can see the load average, total processes, sleeping processes
and the CPU usage. You can find the load average (here the most recent load average is 0.16),
the memory usage, the swap usage and the list of processes and their users.

su(Becoming the super user for a short while)


It is often useful to become the super user to perform important system administration
tasks, but as you have been warned (and not just by me!), you should not stay logged on
as the super user. Fortunately, there is a program that can give you temporary access to
the super user's privileges. This program is called su (short for super user) and can be
used in those cases when you need to be the super user for a small number of tasks. To
become the super user, simply type the su command. You will be prompted for the super
user's password.
[root@localhost]# su
Password:
After executing the su command, you have a new shell session as the super user. To exit
the super user session, type exit and you will return to your previous session.

2.7 NETWORK RELATED COMMAND AND TOOLS IN LINUX


route
route is used to configure routing information.
Example:
[root@localhost ~]# route add -net 10.20.30.40 netmask 255.255.255.248 eth0
[root@localhost ~]# route add -net 10.20.30.48 netmask 255.255.255.248 gw
10.20.30.41
The first line states that the route to network 10.20.30.40/255.255.255.248 is through our
local interface eth0. The second line states that the route to network
10.20.30.48/255.255.255.248 is through gateway 10.20.30.41 .

netconfig
netconfig is a CUI (text-based) tool used to configure a network interface. It is also used by
text-based installation methods.

redhat-config-network
This is a GUI administration tool that allows you to configure several aspects of your
networking: interfaces, boot protocols, host resolution, routing, and more.

ifup / ifdown
These shell script wrappers allow you to bring an interface up and take it down. They use
the configuration information in the /etc/sysconfig directory to configure the interface
specified.
For example, to bring up interface eth0, simply type:
[root@localhost ~]# ifup eth0
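Similarly, to take the same interface back down:
[root@localhost ~]# ifdown eth0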

ping
ping is used to check network connectivity. ping sends ICMP echo-request packets to a host
to check whether the host is responding. If the host is working it sends echo-reply packets
back, and ping keeps reporting them continuously; press Ctrl+C to stop it. A host may not be
responding for the following reasons:
The remote host is down.
Some point in the network in between the two hosts is down.
A device in between the two hosts is filtering ICMP packets.
Examples:
[root@localhost ~]# ping 192.168.100.1
[root@localhost ~]# ping -b 192.168.100.255
Common ping options:

-c count Number of echo requests to send (by default ping runs until interrupted)


-b Allow pinging a broadcast address
-i interval Seconds to wait between sending packets
-s size Number of data bytes to send
-t ttl Set the IP Time To Live of the packets
The first line above pings a single host, 192.168.100.1. The second line performs a broadcast
ping to all hosts on the 192.168.100.0/24 network.

traceroute
traceroute is also used to test network connectivity. However, it displays each hop along
the way from the source to the destination. It can help you determine if the problem is
with the remote host itself, or some point in-between the hosts.
Example:
[root@localhost ~]# traceroute 192.168.100.1
This will print a line for each hop in between the local and remote host (192.168.100.1) as
well as a line for the final destination, up to a maximum of 30 hops.

ifconfig
The ifconfig command configures a network interface, or displays its status if no options
are provided. If no arguments are provided, the current state of all interfaces is displayed.
Syntax: [root@localhost ~]# ifconfig interface options address
Where interface specifies the name of the network interface (e.g. eth0 or eth1).
To get the IP address of the host machine we enter at the command prompt the
command
[root@localhost ~]# ifconfig
This provides us with the network configuration for the host machine. For example
Ethernet adapter
IP address 152.106.50.60
Subnet Mask 255.255.255.0
Default Gateway 152.106.50.240
Example:
[root@localhost ~]# ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up
This configures interface eth0 with an IP of 192.168.1.10/255.255.255.0. Note that "up"
is assumed if left off. A default network mask will also be determined by the IP if it is not
specified.

arp
arp is used to administer the arp cache. It can view, add, and delete entries in the
cache.
View arp cache:
[root@localhost ~]# arp
This will display something like:
Address HWtype HWaddress Flags Mask Iface
192.168.100.146 ether 00:60:08:27:CE:A2 C eth0
192.168.100.169 ether 00:60:08:27:CE:B2 CM eth0

The "C" flag means it's a complete entry. The "M" flag indicates it's an entry added
manually and it is permanent.
Add an entry:
[root@localhost ~]# arp -s 192.168.100.145 00:60:08:27:CE:B2
Delete an entry:
[root@localhost ~]# arp -d 192.168.100.25

netstat
netstat provides a lot of useful information, including: routing tables, interface statistics
(dropped packets, buffer overruns, etc.), network connections, and multicast memberships.
Examples:
[root@localhost ~]# netstat -i
Display interface statistics.
[root@localhost ~]# netstat -lpe
Display all listening sockets and the programs that own them
[root@localhost ~]# netstat -r
Display routing information
[root@localhost ~]# netstat -ape
Show all listening and non-listening sockets

nslookup
The nslookup command queries a DNS nameserver. It can be run in interactive
mode. If no host name is provided, then the program enters interactive mode. By
default, the DNS server specified in
/etc/resolv.conf
is used unless another is specified. If we want to specify a server but not look up a
specified host, we must provide a - in place of the host. The syntax is
[root@localhost ~]# nslookup [host|-[server]]
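For example, reusing the jmit.ac.in host name from the rcp examples later in this chapter
purely for illustration, a lookup against the default DNS server and against an explicitly
named server (192.168.100.1 is a placeholder address) would look like this:
[root@localhost ~]# nslookup jmit.ac.in
[root@localhost ~]# nslookup jmit.ac.in 192.168.100.1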

hostname
The hostname command displays or sets the system’s host name. If no flags or arguments
are given, then the host name of the system is displayed.
Syntax:-
[root@localhost ~]# hostname [-a] [--alias] [-d]
[--domain] [-f] [--fqdn] [-i]
[--ip-address] [--long] [-s]
[--short] [-y] [--yp] [--nis]
Important options:
-a/--alias: Displays the alias name of the host, if available.
-d/--domain: Displays the DNS domain name of the host.
-f/--fqdn/--long: Displays the fully qualified domain name of the host.
-i/--ip-address: Displays the IP address of the host.
-s/--short: Displays the host name without the domain name.
-y/--yp/--nis: Displays the NIS domain name of the system.
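For example, the following commands display the full host name and the short host name
respectively (the output shown is simply what a default installation might print):
[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostname -s
localhost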

rcp
rcp copies files between machines. Each file or directory argument is either a remote file
name of the form rname@rhost:path, or a local file name.

Syntax:
rcp [-px ] file1 file2
rcp [-px ] [-r ] file ... directory

-r If any of the source files are directories, rcp copies each subtree rooted at that
name; in this case the destination must be a directory.

-p The -p option causes rcp to attempt to preserve (duplicate) in its copies the
modification times and modes of the source files.
If path is not a full path name, it is interpreted relative to the login directory of the
specified user ruser on rhost or your current user name if no other remote user name is
specified. A path on a remote host may be quoted (using \, ", or ´) so that the
metacharacters are interpreted remotely.
rcp does not prompt for passwords; it performs remote execution via rsh, and requires the
same authorization.
rcp handles third party copies, where neither source nor target files are on the current
machine.
EXAMPLES
The command rcp copies files between computer systems. To be able to use the rcp
command, both computers need a ".rhosts" file in the user's home directory, which would
contain the names of all the computers that are allowed to access this computer along
with the user name. Here is an example of an .rhosts file:
jmit.ac.in jmit
cse.ac.in jmitcse
[root@localhost ~]#rcp teji.txt jmit.ac.in:teji.doc
Copies teji.txt from the local machine to the user's home directory on the computer with URL jmit.ac.in,
assuming that the user names are the same on both systems.
[root@localhost ~]#rcp teji.txt pooja@jmit.ac.in:pooja.doc
Copies teji.txt from the local machine to the home directory of user pooja on the
computer with URL jmit.ac.in.

[root@localhost ~]#rcp jmit.ac.in:teji.txt teji.txt


Copies teji.txt from the remote computer jmit.ac.in to the local machine with the same
name.

[root@localhost ~]#rcp -r teji jmit.ac.in:seema


Copies the directory teji, including all subdirectories, from the local machine to the
directory seema in the user's home directory on the computer with URL jmit.ac.in,
assuming that the user names are the same on both systems.
[root@localhost ~]#rcp -r jmit.ac.in:seema/docs thesis
Copies the directory docs, including all subdirectories, from the remote machine to the
directory thesis on the local machine.
scp
secure copy (remote file copy program)
scp copies files between hosts on a network. It uses ssh for data transfer, and uses the
same authentication and provides the same security as ssh.
Any file name may contain a host and user specification to indicate that the file is to be
copied to/from that host. Copies between two remote hosts are permitted.
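As a sketch, reusing the host and file names from the rcp examples above, copying a single
file and a whole directory over ssh would look like this (scp -r copies directories
recursively):
[root@localhost ~]# scp teji.txt jmit.ac.in:teji.doc
[root@localhost ~]# scp -r teji jmit.ac.in:seema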

rsh (remote shell )


rsh copies its standard input to the remote command, the standard output of the remote
command to its standard output, and the standard error of the remote command to its
standard error. Interrupt, quit and terminate signals are propagated to the remote
command; rsh normally terminates when the remote command does. The options are as
follows:
-d The -d option turns on socket debugging on the TCP sockets used for
communication with the remote host.
-l By default, the remote username is the same as the local username. The -l option
allows the remote name to be specified.
-n The -n option redirects input from the special device /dev/null (see the BUGS
section of this manual page).
If no command is specified, you will be logged in on the remote host using rlogin.

Shell metacharacters which are not quoted are interpreted on local machine, while quoted
metacharacters are interpreted on the remote machine. For example, the command
[root@localhost ~]#rsh otherhost cat remotefile >> localfile

Appends the remote file remotefile to the local file localfile while
[root@localhost ~]#rsh otherhost cat remotefile ">>" other_remotefile
Appends remotefile to other_remotefile

rsync
rsync is a program that behaves in much the same way that rcp does, but has many more
options and uses the rsync remote-update protocol to greatly speed up file transfers when
the destination file already exists.
The rsync remote-update protocol allows rsync to transfer just the differences between
two sets of files across the network link, using an efficient checksum-search algorithm
described in the technical report that accompanies this package.
Some of the additional features of rsync are:
1) Support for copying links, devices, owners, groups and permissions
2) Exclude and exclude-from options similar to GNU tar
3) A CVS exclude mode for ignoring the same files that CVS would ignore
4) Can use any transparent remote shell, including rsh or ssh
5) Does not require root privileges
6) Pipelining of file transfers to minimize latency costs
7) Support for anonymous or authenticated rsync servers (ideal for mirroring)
There are six different ways of using rsync. They are:
1) For copying local files. This is invoked when neither source nor destination path
contains a : separator
2) For copying from the local machine to a remote machine using a remote shell
program as the transport (such as rsh or ssh). This is invoked when the destination
path contains a single : separator.
3) For copying from a remote machine to the local machine using a remote shell
program. This is invoked when the source contains a : separator.
4) For copying from a remote rsync server to the local machine. This is invoked
when the source path contains a :: separator or a rsync:// URL.
5) For copying from the local machine to a remote rsync server. This is invoked
when the destination path contains a :: separator.
6) For listing files on a remote machine. This is done the same way as rsync transfers
except that you leave off the local destination.
Note that in all cases (other than listing) at least one of the source and destination paths
must be local.

Examples:
You use rsync in the same way you use rcp. You must specify a source and a destination,
one of which may be remote.
Perhaps the best way to explain the syntax is some examples:
[root@localhost ~]#rsync *.c foo:src/
This would transfer all files matching the pattern *.c from the current directory to the
directory src on the machine foo. If any of the files already exist on the remote system
then the rsync remote-update protocol is used to update the file by sending only the
differences. See the tech report for details.
[root@localhost ~]#rsync -avz foo:src/bar /data/tmp
this would recursively transfer all files from the directory src/bar on the machine foo into
the /data/tmp/bar directory on the local machine. The files are transferred in "archive"
mode, which ensures that symbolic links, devices, attributes, permissions, ownerships etc
are preserved in the transfer. Additionally, compression will be used to reduce the size of
data portions of the transfer.
[root@localhost ~]#rsync -avz foo:src/bar/ /data/tmp
a trailing slash on the source changes this behavior to transfer all files from the directory
src/bar on the machine foo into the /data/tmp/. A trailing / on a source name means "copy
the contents of this directory". Without a trailing slash it means "copy the directory". This
difference becomes particularly important when using the --delete option.
You can also use rsync in local-only mode, where both the source and destination don't
have a ':' in the name. In this case it behaves like an improved copy command.
[root@localhost ~]#rsync somehost.mydomain.com::
this would list all the anonymous rsync modules available on the host
somehost.mydomain.com.

TFTP
Trivial File Transfer Protocol. The tftp utility allows a user to transfer files to and from a
remote network site.
Check that TFTP server package is installed by running following command

[root@localhost ~]#rpm -qa tftp*


If it is not installed, you can find packages at https://rpmfind.net; in the search bar
enter tftp-server and select the package according to your OS (Red Hat, Fedora, or some other).

To configure tftp, go to the directory:


[root@localhost ~]#cd /etc/xinetd.d

Edit the tftp file there and enable the service:


[root@localhost ~]# vi tftp
disable=no
Esc +:wq

Run service
[root@localhost ~]#/etc/init.d/xinetd restart
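Once xinetd has been restarted, you can test the service from another machine with the
tftp client (the address and the file name below are placeholders for your own server and
file):
[root@localhost ~]# tftp 192.168.100.34
tftp> get testfile.txt
tftp> quit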

Command list for ftp:


! [command ]
Invoke an interactive shell on the local machine.
? [command ]
Perform same operation as help.
ascii
Set the file transfer type to network ASCII; this is the default type.
bell
Arrange that a bell be sounded after each file transfer command is completed.
binary
Set the file transfer type to support binary image transfer.
bye
Terminate the FTP session.
cd remote-directory
Change the working directory on the remote machine to remote-directory
cdup
Change the remote machine working directory to the parent of the current remote
machine working directory.
chmod mode file-name
Change the permission modes of the file file-name on the remote system to mode
close
Terminate the FTP session with the remote server, and return to the command
interpreter.
delete remote-file
Delete the file remote-file on the remote machine.
dir [remote-directory ] [local-file ]
Print a listing of the directory contents in the directory, remote-directory and,
optionally, placing the output in local-file
disconnect
A synonym for close
get remote-file [local-file ]
Retrieve the remote-file and store it on the local machine.
help [command ]
Print an informative message about the meaning of command If no argument is
given, ftp prints a list of the known commands.
idle [seconds ]
Set the inactivity timer on the remote server to seconds seconds. If seconds is
omitted, the current inactivity timer is printed.
lcd [directory ]
Change the working directory on the local machine. If no directory is specified,
the user's home directory is used.
mdelete [remote-files ]
Delete the remote-files on the remote machine.
mget remote-files
Expand the remote-files on the remote machine and do a get for each file name
thus produced. See glob for details on the filename expansion. Resulting file
names will then be processed according to case ntrans and nmap settings. Files
are transferred into the local working directory, which can be changed with `lcd'
directory ; new local directories can be created with `!' mkdir directory .
mkdir directory-name
Make a directory on the remote machine.
mput local-files
Expand wild cards in the list of local files given as arguments and do a put for
each file in the resulting list. See glob for details of filename expansion. Resulting
file names will then be processed according to ntrans and nmap settings.
put local-file [remote-file ]
Store a local file on the remote machine.
pwd
Print the name of the current working directory on the remote machine.
quit
Perform same operation as bye
recv remote-file [local-file ]
Perform same operation as get.
remotehelp [command-name ]
Request help from the remote FTP server. If a command-name is specified it is
supplied to the server as well.
remotestatus [file-name ]
With no arguments, show status of remote machine. If file-name is specified,
show status of file-name on remote machine.
rmdir directory-name
Delete a directory on the remote machine.
send local-file [remote-file ]
Perform same operation as put.
size file-name
Return size of file-name on remote machine.
status
Show the current status of ftp
system
Show the type of operating system running on the remote machine.

EXAMPLES:

[root@localhost ~]#ftp 192.168.100.34


This command will attempt to connect to the FTP server at 192.168.100.34. If it succeeds,
it will ask you to log in using a username and password.

ftp>help
This lists the commands that you can use to show the directory contents, transfer files,
and delete files.

ftp> ls
This command prints the names of the files and sub-directories in the current directory on
the remote computer.

ftp>cd teji
This command changes the current directory to the subdirectory "teji", if it exists.

ftp>cd ..
Changes the current directory to the parent directory.

ftp>ascii
Changes to "ascii" mode for transferring text files.

ftp>binary
Changes to "binary" mode for transferring all files that are not text files.

ftp>get teji.txt
Downloads the file teji.txt from the remote computer to the local computer. Warning: if a
file with the same name already exists, it will be overwritten.

ftp>put seema.txt

Uploads the file seema.txt from the local computer to the remote computer. Warning: if a
file with the same name already exists, it will be overwritten.

ftp> !ls

A '!' in front will execute the specified command on the local computer. So '!ls' lists the
file names and directory names of the current directory on the local computer.
ftp> mget *.txt

With mget you can download multiple text files. This command downloads all files that
end with ".txt".

ftp>mput *.sh

Uploads all files that end with ".sh".


ftp >mdelete *.mpeg

Deletes all files that end with ".mpeg".

ftp >bye
Exits the ftp program.

telnet
The telnet command is used to communicate with another host using the TELNET
protocol. If telnet is invoked without the host argument, it enters command mode,
indicated by its prompt (telnet>). In this mode, it accepts and executes commands. If
it is invoked with arguments, it performs an open command with those arguments.
Installation and configuration:
Check that telnet server package is installed by running following command
[root@localhost ~]# rpm -qa telnet-server*
If it is not installed, you can find packages at https://rpmfind.net; in the search bar
enter telnet-server and select the package according to your OS (Red Hat, Fedora, or some other).

To configure telnet, go to the directory:


[root@localhost ~]# cd /etc/xinetd.d

Edit the telnet file there and enable the service:


[root@localhost ~]# vi telnet
disable = no
Esc +:wq

Run service
[root@localhost ~]# /etc/init.d/xinetd restart

Check telnet 192.168.100.34 from any machine on network

By default you can log in over telnet as a normal user but not as root, because root has
super-user privileges. To enable root logins over telnet, simply edit the file /etc/securetty
and add the following lines to the end of the file.

[root@localhost ~]#vi /etc/securetty


pts/0
pts/1
pts/2
pts/3
pts/4
pts/5
pts/6
pts/7
pts/8
pts/9
Esc +:wq
Now you are able to log in as root. However, never allow this in practice, as remote root login over telnet is a security concern.
Command list for telnet
! [command ]
Execute a single command in a subshell on the local system.
? [command ]
Get help. With no arguments, telnet prints a help summary.
status
Show the current status of telnet .
quit
Close TELNET session.
close
Close a TELNET session and return to command mode
send arguments
Sends one or more special character sequences to the remote host. The following
are the arguments which may be specified (more than one argument may be
specified at a time):
abort
Sends the TELNET ABORT (Abort processes) sequence.
ao
Sends the TELNET AO (Abort Output) sequence, which should cause the remote
system to flush all output from the remote system to the user's terminal.
ayt
Sends the TELNET AYT (Are You There) sequence, to which the remote system
may or may not choose to respond.
brk
Sends the TELNET BRK (Break) sequence, which may have significance to the
remote system.
ec
Sends the TELNET EC (Erase Character) sequence, which should cause the
remote system to erase the last character entered.
el
Sends the TELNET EL (Erase Line) sequence, which should cause the remote
system to erase the line currently being entered.
eof
Sends the TELNET EOF (End Of File) sequence.
eor
Sends the TELNET EOR (End of Record) sequence.
escape
Sends the current telnet escape character (initially ``^]'').
ga
Sends the TELNET GA (Go Ahead) sequence, which likely has no significance
to the remote system.
getstatus
If the remote side supports the TELNET STATUS command, getstatus will send
the subnegotiation to request that the server send its current option status.
ip
Sends the TELNET IP (Interrupt Process) sequence, which should cause the
remote system to abort the currently running process.
nop
Sends the TELNET NOP (No OPeration) sequence.
susp
Sends the TELNET SUSP (SUSPend process) sequence.
synch
Sends the TELNET SYNCH sequence. This sequence causes the remote system
to discard all previously typed (but not yet read) input. This sequence is sent as
TCP urgent data (and may not work if the remote system is a BSD 4.2 system --
if it doesn't work, a lower case ``r'' may be echoed on the terminal).
do cmd
Sends the TELNET DO cmd sequence. cmd can be either a decimal number
between 0 and 255, or a symbolic name for a specific TELNET command. cmd
can also be either help or ? to print out help information, including a list of
known symbolic names.
dont cmd
Sends the TELNET DONT cmd sequence. cmd can be either a decimal number
between 0 and 255, or a symbolic name for a specific TELNET command. cmd
can also be either help or ? to print out help information, including a list of
known symbolic names.
will cmd
Sends the TELNET WILL cmd sequence. cmd can be either a decimal number
between 0 and 255, or a symbolic name for a specific TELNET command. cmd
can also be either help or ? to print out help information, including a list of
known symbolic names.
wont cmd
Sends the TELNET WONT cmd sequence. cmd can be either a decimal number
between 0 and 255, or a symbolic name for a specific TELNET command. cmd
can also be either help or ? to print out help information, including a list of
known symbolic names.
?
Prints out help information for the send command.
ssh
OpenSSH SSH client (remote login program)
ssh (SSH client) is a program for logging into a remote machine and for executing
commands on a remote machine. It is intended to replace rlogin and rsh, and provide
secure encrypted communications between two untrusted hosts over an insecure network.
X11 connections and arbitrary TCP/IP ports can also be forwarded over the secure
channel.
ssh connects and logs into the specified hostname. The user must prove his/her identity to
the remote machine using one of several methods, depending on the protocol version
used.
For configuring OpenSSH server we need to change the /etc/ssh/sshd_config
configuration file. Most of the entries in this file are commented by default. Steps are
On server side
1) Login as root.
2) Open terminal and type the following command
[root@localhost~]#vi /etc/ssh/sshd_config
Press i to enter insert mode and edit the contents of the file.
To change the listen address and port,
go to the line #ListenAddress 0.0.0.0
and change it to ListenAddress 192.168.100.1:32
To change the login grace time,
go to the line #LoginGraceTime 2m
and change it to LoginGraceTime 20s
To send an acknowledgement (banner) to the client computer,
go to the line #Banner /some/path
and change it to Banner /etc/issue.net
Press esc +:wq for save and quit
3) Type following command on terminal
[root@localhost]#vi /etc/issue.net
Press i to enter insert mode and add the banner text:
Welcome to jmit
Press esc +:wq for save and quit
4) Restart the sshd to save the changes. [root@localhost]#service sshd restart
5) [root@localhost]#chkconfig sshd on
On client side
1) Login as root and open terminal.
2) Try to login on server machine using ssh
[root@localhost]#ssh 192.168.100.1
The following error message will be displayed:
ssh:connect to host 192.168.100.1 port 22: connection refused
3) Type the following command with port number
[root@localhost]#ssh -p 32 192.168.100.1
It will ask for a question
Are you sure you want to continue connecting (yes/no)?
Type yes and enter the password of the remote machine. A remote shell will be
displayed. Type the following command to check whether we are working on the remote
machine or not:
[root@localhost]#init 0
This will shut down the remote machine (use init 6 if you want to reboot it instead).
Copying data from local machine to remote machine
1) login as root on client machine and open terminal.
2) Type following command
[root@localhost]#scp -P 32 /home/staff.doc 192.168.100.1:/stafflist.doc
It will ask for the remote machine password. Type the password of the remote machine and
the file will be copied to the remote machine.
3) [root@localhost]#exit
Chapter 3
Linux Editors

3.0 The Linux Editors


Linux has a surprisingly large number of available editors, many of them inherited from
UNIX.
Editor Description
ed Original UNIX line-based editor, useful in scripts
vi Classic screen-based editor for UNIX
emacs GNU editor and fully integrated user environment
vim Vi IMproved: an advanced text editor that seeks to provide the power of the editor 'vi', with a more complete feature set and enhanced support for programmers
kate Kate is a multi-document editor which is part of the kdebase package of KDE, the K Desktop Environment
nano nano is a curses-based text editor. It is a clone of Pico, the editor of the Pine email client
gedit gedit is a small and lightweight text editor for the GNOME environment

3.1 vi editor
vi stands for "Visual Editor". vi is popular because it is small and is found on virtually all
Unix systems. To start the vi editor, we type its name at the shell prompt (command line).
[root@localhost]# vi
If we know the name of the file we want to create or edit, we can issue the vi
command with the file name as an argument. For example, to create the file
myshell.doc with vi,
we enter
[root@localhost]# vi myshell.doc
To edit a given file for example myshell.doc we enter
[root@localhost]# vi myshell.doc
When vi becomes active, the terminal screen clears and a tilde character ~ appears on the
left side of every screen line, except for the first. The ~ is the empty-buffer line flag. The
cursor is at the leftmost position of the first line. We probably see 20 to 22 of the tilde
characters at the left of the screen. If that’s not the case, check the value of TERM. When
we see this display we have successfully started vi. Then vi is in command mode, waiting
for our first command.
vi’s Operation Modes
Command mode: In command mode, vi interprets our keystrokes as commands. We
press any character from the keyboard to perform an action related to it.
Insert mode: We use insert (input) mode only for entering text. Most word processors start in
input mode, but vi does not. We must go into input mode by pressing a or i before we
start entering text, and explicitly press <Esc> to return to command mode when we are done.
There is also a command line (last-line) mode. This mode
is used for complex commands such as searching, and to save our file or to exit vi. We
enter this mode by typing ":", "/", "?" or "!". A few of these commands are:
:w – write the buffer to the file
:wq – write to the file and exit vi
:q – exit if no changes have been made to the buffer, or after the buffer has been saved to a file
:q! – forcefully quit, abandoning all changes to the buffer since it was last saved
:wq! – write the buffer to the file (even a read-only working file, if permitted) and then exit
:x – perform the same operation as :wq
ZZ – perform the same operation as :wq
Working with vi editor
1. Start vi
Type vi and press Enter.
2. Go to input mode.
Press i or the Insert key.
3. Enter the text.
Type the text into the buffer.
4. Press Esc.
5. Type :wq <filename> to write and quit.

Moving within a File


To move around within a file without affecting your text, you must be in command mode
(press Esc twice). Here are some of the commands you can use to move around one
character at a time:
Command Description
k Moves the cursor up one line.
j Moves the cursor down one line.
h Moves the cursor to the left one character position.
l Moves the cursor to the right one character position.
The following two important points should be noted:
• The vi is case-sensitive, so you need to pay special attention to capitalization
when using commands.
• Most commands in vi can be prefaced by the number of times you want the action
to occur. For example, 2j moves the cursor two lines down from the current location.
There are many other ways to move within a file in vi. Remember that you must be in
command mode (press Esc twice). Here are some more commands you can use to move
around the file:
Command Description
0 or | Positions cursor at beginning of line.
$ Positions cursor at end of line.
w Positions cursor to the next word.
b Positions cursor to the previous word.
( Positions cursor to beginning of current sentence.
) Positions cursor to beginning of next sentence.
e Moves to the end of a blank-delimited word.
{ Moves a paragraph back.
} Moves a paragraph forward.
[[ Moves a section back.
]] Moves a section forward.
n| Moves to column n in the current line.
1G Moves to the first line of the file.
G Moves to the last line of the file.
nG Moves to the nth line of the file.
:n Moves to the nth line of the file.
fc Moves forward to the character c on the current line.
Fc Moves back to the character c on the current line.
H Moves to the top of the screen.
nH Moves to the nth line from the top of the screen.
M Moves to the middle of the screen.
L Moves to the bottom of the screen.
nL Moves to the nth line from the bottom of the screen.
:x A colon followed by a number positions the cursor on the line number represented by x.

Control Commands
The following are useful commands which you can use along with the Control key.
Command Description
CTRL+d Moves forward (scrolls down) 1/2 screen
CTRL+f Moves forward one full screen
CTRL+u Moves backward (scrolls up) 1/2 screen
CTRL+b Moves backward one full screen
CTRL+e Moves the screen up one line
CTRL+y Moves the screen down one line
CTRL+l Redraws the screen

Editing Files
To edit the file, you need to be in the insert mode. There are many ways to enter insert
mode from the command mode:
Command Description
i Inserts text before the current cursor location.
I Inserts text at the beginning of the current line.
a Inserts text after the current cursor location.
A Inserts text at the end of the current line.
o Creates a new line for text entry below the cursor location.
O Creates a new line for text entry above the cursor location.

Deleting Characters
Here is the list of important commands which can be used to delete characters and lines
in an opened file.
Command Description
x Deletes the character under the cursor location.
X Deletes the character before the cursor location.
dw Deletes from the current cursor location to the start of the next word.
d^ Deletes from the current cursor position to the beginning of the line.
d$ Deletes from the current cursor position to the end of the line.
D Deletes from the cursor position to the end of the current line.
dd Deletes the line the cursor is on.
As mentioned above, most commands in vi can be prefaced by the number of times you
want the action to occur. For example, 2x deletes two characters under the cursor location
and 2dd deletes two lines starting at the cursor.

Change Commands
You also have the capability to change characters, words, or lines in vi without deleting
them. Here are the relevant commands.
Command Description
cc Removes the contents of the line, leaving you in insert mode.
cw Changes the word the cursor is on, from the cursor to the end of the word.
r Replaces the character under the cursor. vi returns to command mode after the replacement is entered.
R Overwrites multiple characters beginning with the character currently under the cursor. You must use Esc to stop the overwriting.
s Replaces the current character with the character you type. Afterward, you are left in insert mode.
S Deletes the line the cursor is on and replaces it with new text. After the new text is entered, vi remains in insert mode.

Copy and Paste Commands


You can copy (yank) lines or words from one place and then paste them at another place
using the following commands:
Command Description
yy Copies the current line.
yw Copies the current word, from the character the cursor is on until the end of the word.
p Puts the copied (yanked) text after the cursor.
P Puts the yanked text before the cursor.
Advanced Commands
There are some advanced commands that simplify day-to-day editing and allow for more
efficient use of vi:
Command Description
J Joins the current line with the next one. A count joins that many lines.
<< Shifts the current line to the left by one shift width.
>> Shifts the current line to the right by one shift width.
~ Switches the case of the character under the cursor.
^G Press CTRL and G keys at the same time to show the current filename and the status.
U Restores the current line to the state it was in before the cursor entered the line.
u Undoes the last change to the file. Typing u again will re-do the change.
:f Displays the current position in the file in %, the file name, and the total number of lines.
:f filename Renames the current file to filename.
:w filename Writes to the file filename.
:e filename Opens another file with filename.
:cd dirname Changes the current working directory to dirname.
:e # Use to toggle between two opened files.
:n In case you open multiple files using vi, use :n to go to the next file in the series.
:p In case you open multiple files using vi, use :p to go to the previous file in the series.
:N In case you open multiple files using vi, use :N to go to the previous file in the series.
:r file Reads file and inserts it after the current line.
:nr file Reads file and inserts it after line n.
Word and Character Searching
The vi editor has two kinds of searches: string and character. For a string search, the /
and ? commands are used. When you start these commands, the command just typed will
be shown on the bottom line, where you type the particular string to look for.
These two commands differ only in the direction in which the search takes place:
• The / command searches forwards (downwards) in the file.
• The ? command searches backwards (upwards) in the file.
The n and N commands repeat the previous search command in the same or opposite
direction, respectively. Some characters have special meanings when used in a search
command and must be preceded by a backslash (\) to be included as part of the search expression.
Character Description
^ Searches at the beginning of the line. (Use at the beginning of a search expression.)
. Matches a single character.
* Matches zero or more of the previous character.
$ End of the line. (Use at the end of the search expression.)
[ Starts a set of matching, or non-matching, expressions.
< Put in an expression escaped with the backslash to find the ending or beginning of a word.
> See the '<' character description above.
The character search searches within one line to find a character entered after the
command. The f and F commands search for a character on the current line only. f
searches forwards and F searches backwards and the cursor moves to the position of the
found character.
The t and T commands search for a character on the current line only, but for t, the cursor
moves to the position before the character, and T searches the line backwards to the
position after the character.

Set Commands
You can change the look and feel of your vi screen using the following :set commands.
To use these commands you have to be in command mode, then type :set followed by
any of the following options:
Command Description
:set ic Ignores case when searching.
:set ai Sets auto indent.
:set noai Unsets auto indent.
:set nu Displays lines with line numbers on the left side.
:set sw Sets the width of a software tabstop. For example, you would set a shift width of 4 with this command: :set sw=4
:set ws If wrapscan is set and the word is not found at the bottom of the file, it will try to search for it at the beginning.
:set wm If this option has a value greater than zero, the editor will automatically "word wrap". For example, to set the wrap margin to two characters, you would type this: :set wm=2
:set ro Changes the file type to "read only".
:set term Prints the terminal type.
:set bf Discards control characters from input.

Running Commands
The vi editor has the capability to run commands from within the editor. To run a command,
you only need to go into command mode and type :! command.
For example, if you want to check whether a file exists before you try to save your file to
that filename, you can type :! ls and you will see the output of ls on the screen.
When you press any key (or the command's escape sequence), you are returned to your vi
session.

Replacing Text
The substitution command (:s/) enables you to quickly replace words or groups of words
within your files. Here is the simple syntax:
:s/search/replace/g

The g stands for globally. The result of this command is that all occurrences on the
cursor's line are changed.
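To apply the substitution to every line in the file rather than only the current line, prefix
the command with the % range:
:%s/search/replace/g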

3.2 emacs editor


Emacs is a powerful, integrated computing environment for Linux that provides a wide
range of editing, programming and file management tasks. Emacs is an acronym derived
from "Editor MACroS". Emacs is more than "just an editor" -- it provides a fully
integrated user environment offering the sort of facilities outlined below.
• Issue shell commands
• Open a window for a shell
• Read and send mail
• Read news
• Access the internet
• Write and test programs
• Maintain a calendar
• Play a game!
Emacs has a vast number of editing modes, which create an environment designed for the
type of editing you are doing. There are two types of modes, major and minor.

Major Modes
These include modes for various programming languages (e.g. Bash, C, Lisp, and Perl),
for text processing (e.g. Latex, SGML, troff, and plain text), and even Dired (Directory
Editor) for managing directories.
Minor Modes
These allow you to set or unset features that are independent of the major mode, e.g.
auto-fill (word wrapping), insert vs overwrite, and auto-save.
If you spend a lot of time editing files with a particular structure, then a customised
version of emacs will pay real dividends by reducing the number of keystrokes needed to
complete a specific task.
Working with emacs editor
1. Start emacs.
Type emacs and press <Enter>.
2. Enter the text.
Type the text into the buffer.
3. Save buffer to file.
Press Ctrl-c Ctrl-s and answer y to the prompt asking to save the file; then press
Enter.
4. Name the file.
Type the file name and press <Enter>.
5. Quit emacs.
Press Ctrl-x Ctrl-c.
To start editing a new or existing file using emacs, simply type the following on
command prompt.
[root@localhost ~]# emacs filename
Where filename is the file to be edited.
Two important key sequences which are used to start many command sequences are
Cntl+x (holding down the “ctrl” key while typing “x”)
[esc]-x (simply pressing the “esc” key followed by typing “x”)
To save the file being edited the sequence is ^x^s.
To exit (and be prompted to save) emacs, the sequence is ^x^c.
To open another file within emacs, the sequence is ^x^f. This sequence can be used to
open an existing file as well as a new file. If you have multiple files open, emacs stores
them in different “buffers”.
To switch from one buffer to another use the key sequence ^x-b. The arrow keys usually
work as the cursor movement keys, but there are other navigation key combinations
listed below.
Running emacs
[root@localhost ~]# emacs <filename>
Run emacs on the given file. (Adding a '&' after the above command will run emacs in
the background, freeing up your shell.)
^z Suspend emacs
^x^c Quit emacs
^x^f Load a new file into emacs
^x^v Load a new file into emacs and unload previous file
^x^s Save the file
^x-k Kill a buffer
Moving About
^f Move forward one character
^b Move backward one character
^n Move to next line
^p Move to previous line
^a Move to beginning of line
^e Move to end of line
^v Scroll down a page
[ESC]-v Scroll up a page
[ESC]-< Move to beginning of document
^x-[ Move to beginning of page
[ESC]-> Move to end of document
^x-] Move to end of page
^l Redraw screen centered at line under the cursor
^x-o Move to other screen
^x-b Switch to another buffer
Searching
^s Search for a string
^r Search for a string backwards from the cursor (quit both of these with ^f)
[ESC]-% Search-and-replace
Deleting
^d Deletes letter under the cursor
^k Kill from the cursor all the way to the end of the line
^y Yanks back all the last kills.
With the ^k ^y combination you can get a primitive cut-and-paste effect to move text around
Regions
emacs defines a region as the space between the mark and the point. A mark is set with
^-space (control-spacebar). The point is at the cursor position.
[ESC]-w Copy the region
^w Delete the region. Using ^y will also yank back the last region
killed or copied — this is the way to get a cut/copy/paste effect with
regions.
Screen Splitting
^x-2 Split screen horizontally
^x-3 Split screen vertically
^x-1 Make active window the only screen
^x-0 Make other window the only screen
Miscellaneous
[ESC]-$ Check spelling of word at the cursor
^g In most contexts, cancel, stop, go back to normal command
[ESC]-x goto-line num Goes to the given line number
^x-u Undo
[ESC]-x shell Start a shell within emacs
[ESC]-q Re-flow the current line-breaks to make a single paragraph of text
Compiling
[ESC]-x compile Compile code in active window. Easiest if you have a makefile set
up.
^c ^c Do this with the cursor in the compile window, scrolls to the next
compiler error. Cool!
Getting Help
^h emacs help
^h t Run the emacs tutorial
emacs does command completion for you. Typing [ESC]-x space will give you a list of
emacs commands. There is also a man page on emacs. Type man emacs in a shell.
Printing Your Source Files
There's a really neat way to print out hardcopies of your source files. Use a command
called “enscript”. Commonly, it's used at the Unix command line as follows:
enscript -2GrPsweet5 binky.c lassie.c *.h
This example shows printing the two source files binky.c and lassie.c, as well as
all of the header files, to the printer sweet5. You can change these parameters to fit your
needs.

3.3 vim editor


Vim is an advanced text editor that seeks to provide the power of the editor 'Vi', with a
more complete feature set. This editor is very useful for editing programs and other plain
ASCII files. All commands are given with normal keyboard characters, so those who can
type with ten fingers can work very fast. Additionally, function keys can be defined by
the user, and the mouse can be used. Vim is often called a "programmer's editor," and is
so useful for programming that many consider it to be an entire Integrated Development
Environment. However, this application is not only intended for programmers. Vim is
highly regarded for all kinds of text editing, from composing email to editing
configuration files. Vim's interface is based on commands given in a text user interface.
Although its graphical user interface, gVim, adds menus and toolbars for commonly used
commands, the software's entire functionality is still reliant on its command line mode.

Features include:
• 3 modes:
o Command mode
o Insert mode
o Command line mode
• Unlimited undo
• Multiple windows and buffers
• Flexible insert mode
• Syntax highlighting - highlight portions of the buffer in different colors or styles,
based on the type of file being edited
• Interactive commands
o Marking a line
o vi line buffers
o Shift a block of code
• Block operators
• Command line history
• Extended regular expressions
• Edit compressed/archive files (gzip, bzip2, zip, tar)
• Filename completion
• Block operations
• Jump tags
• Folding text
• Indenting
• ctags and cscope integration
• 100% vi compatibility mode
• Plugins to add/extend functionality
• Macros
• vimscript, Vim's internal scripting language
• Unicode support
• Multi-language support
• Integrated On-line help

3.4 kate editor


Kate is a multi document editor which is part of the kdebase package of KDE, the K
Desktop Environment. With a multi-view editor like Kate you get a lot of advantages.
You can view several instances of the same document and all instances are synced. Or
you can view more files at the same time for easy reference or simultaneous editing. The
terminal emulation and sidebar are docked windows that can be plugged out of the main
window, or replaced therein according to your preference. KDevelop (an integrated
development environment) and Quanta Plus (a web development environment) are two of
the major KDE applications that use Kate as an editing component.

Features include:
• Powerful syntax highlighting and bracket matching
• MDI, window splitting, window tabbing
• Spell checking
• CR, CRLF, LF newline support
• Encoding support (utf-8, utf-16, ascii etc.)
• Encoding conversion
• Search and replace text using regular expressions
• Drag and drop text editing
• Code and text folding
• Infinite undo/redo support
• Block selection mode
• Auto indentation
• Auto completion support
• Integrated shell
• Wide protocol support (http, ftp, ssh, webdav etc.)
• Plugin architecture for the application and editor component
• Customizable shortcuts
• Integrated command line
• Full DCOP scripting
• Scriptable using JavaScript .

3.5 nano editor


nano is a curses-based text editor. It is a clone of Pico, the editor of the Pine email client.
The nano project was started in 1999 due to licensing issues with the Pine suite and also
because Pico lacked some essential features. nano aims to emulate the functionality and
easy-to-use interface of Pico, while offering additional functionality, but without the tight
mailer integration of the Pine/Pico package. nano, like Pico, is keyboard-oriented,
controlled with control keys.
Features include:
• Interactive search and replace
• Color syntax highlighting
• Go to line and column number
• Auto-indentation
• Feature toggles
• UTF-8 support
• Mixed file format auto-conversion
• Verbatim input mode
• Multiple file buffers
• Smooth scrolling
• Bracket matching
• Customizable quoting string
• Backup files
• Internationalization support
• Filename tab completion

3.6 gedit Editor


gedit is a small and lightweight text editor for the GNOME environment. Complete
GNOME integration is provided, with support for Drag and Drop (DnD) from Nautilus
(the GNOME file manager), the use of the GNOME help system, the GNOME Virtual
File System and the GNOME print framework. gedit uses a Multiple Document Interface
(MDI), which lets you edit more than one document at the same time. gedit supports most
standard editing features, plus several not found in your average text editor (plug-ins being
the most notable of these). gedit plug-ins may also be written in the Python scripting
language: to enable python support you need the pygtk and gnome-python-desktop
bindings. Included plugins: Word count, Spell checker, Change case of selected text, file
browser, sort, tag list, insert date/time, and shell out. Other external plugins are also
available.
Features include:
• Complete support for internationalized text (UTF-8)
• Configurable syntax highlighting for various languages (C, C++, Java, HTML,
XML, Python, Perl and many others)
• GNOME VFS support for remote files
• Complete integration with the GNOME Environment
• Search and Replace
• Clipboard support
• Text wrapping
• Undo/Redo
• File Revert
• Editing files from remote locations
• A complete preferences interface
• Configurable Plug-ins system, with optional python support
• Bracket matching
• Backup files
• Printing and Print Previewing Support
• Complete online user manual
Chapter 4
Regular Expression and Filters in Unix/Linux

4.1 Regular Expression

A regular expression is a set of characters that specify a pattern. Regular expressions are
used when you want to search for specific lines of text containing a particular pattern.
Most of the UNIX utilities operate on ASCII files a line at a time. Regular expressions
search for patterns on a single line, and not for patterns that start on one line and end on
another. It is simple to search for a specific word or string of characters. Almost every
editor on every computer system can do this. Regular expressions are more powerful and
flexible. You can search for words of a certain size. You can search for a word with four
or more vowels that end with an "s". Numbers, punctuation characters, you name it, a
regular expression can find it. What happens once the program you are using finds it is
another matter. Some just search for the pattern. Others print out the line containing the
pattern. Editors can replace the string with a new pattern. It all depends on the utility.

^ And $
Most UNIX text facilities are line oriented. Searching for patterns that span several lines
is not easy to do. You see, the end of line character is not included in the block of text
that is searched. It is a separator. Regular expressions examine the text between the
separators. If you want to search for a pattern that is at one end or the other, you use
anchors. The character "^" is the starting anchor, and the character "$" is the end anchor.
The regular expression "^A" will match all lines that start with a capital A. The
expression "A$" will match all lines that end with the capital A. If the anchor characters
are not used at the proper end of the pattern, then they no longer act as anchors. That is,
the "^" is only an anchor if it is the first character in a regular expression. The "$" is only
an anchor if it is the last character. The expression "$1" does not have an anchor. Neither
is "1^". If you need to match a "^" at the beginning of the line, or a "$" at the end of a
line, you must escape the special characters with a backslash. Here is a summary:
Pattern Matches
^B "B" at the beginning of a line
B$ "B" at the end of a line
B^ "B^" anywhere on a line
$B "$B" anywhere on a line
^^ "^" at the beginning of a line
$$ "$" at the end of a line

Character set matching with character


The simplest character set is a character. The regular expression "the" contains three
character sets: "t," "h" and "e". It will match any line with the string "the" inside it. This
would also match the word "other". To prevent this, put spaces before and after the
pattern: " the ". Some characters have a special meaning in regular expressions. If you
want to search for such a character, escape it with a backslash.

“. “ To match any character


The character "." is one of those special meta-characters. By itself it will match any
character, except the end-of-line character. The pattern that will match a line containing
exactly one character is
^.$

Specifying a range of characters with […]


If you want to match specific characters, you can use the square brackets to identify the
exact characters you are searching for. The pattern that will match any line of text that
contains exactly one number is
^[0123456789]$
This is verbose. You can use the hyphen between two characters to specify a range:
^[0-9]$
You can intermix explicit characters with character ranges. This pattern will match a
single character that is a letter, number, or underscore:
[A-Za-z0-9_]
Character sets can be combined by placing them next to each other. Suppose you wanted
to search for a word that starts with the capital letter "S", is the first word on a line, has a
lower-case letter as its second letter, is exactly three letters long, and has a vowel as its
third letter.
The regular expression would be "^S[a-z][aeiou] ".
Regular Expression Matches
[] The characters "[]"
[0] The character "0"
[0-9] Any number
[^0-9] Any character other than a number
[-0-9] Any number or a "-"
[0-9-] Any number or a "-"
[^-0-9] Any character except a number or a "-"
[]0-9] Any number or a "]"
[0-9]] Any number followed by a "]"
[0-9-z] Any number, or any character between "9" and "z"
[0-9\-a\]] Any number, or a "-", a "a", or a "]"

Character set repetition with *


The third part of a regular expression is the modifier. It is used to specify how many times
you expect to see the previous character set. The special character "*" matches zero or
more copies. That is, the regular expression "0*" matches zero or more zeros, while the
expression "[0-9]*" matches zero or more numbers.
This explains why the pattern "^#*" is useless, as it matches any number of "#'s" at the
beginning of the line, including zero. Therefore this will match every line, because every
line starts with zero or more "#'s".
At first glance, it might seem that starting the count at zero is stupid. Not so. Looking for
an unknown number of characters is very important. Suppose you wanted to look for a
number at the beginning of a line, and there may or may not be spaces before the number.
Just use "^ *" to match zero or more spaces at the beginning of the line. If you need to
match one or more, just repeat the character set. That is, "[0-9]*" matches zero or more
numbers, and "[0-9][0-9]*" matches one or more numbers.

Matching specific number of sets with \ {and\}


You can continue the above technique if you want to specify a minimum number of
character sets. You cannot specify a maximum number of sets with the "*" modifier.
There is a special pattern you can use to specify the minimum and maximum number of
repeats. This is done by putting those two numbers between "\{" and "\}". The
backslashes deserve a special discussion. Normally a backslash turns off the special
meaning for a character. A period is matched by a "\." and an asterisk is matched by a
"\*". If a backslash is placed before a "<," ">," "{," "}," "(," ")," or before a digit, the
backslash turns on a special meaning. This was done because these special functions
were added late in the life of regular expressions. Here is a list of examples, and the
exceptions:
Regular Expression Matches
* Any line with an asterisk
\* Any line with an asterisk
\\ Any line with a backslash
^* Any line starting with an asterisk
^A* Any line
^A\* Any line starting with an "A*"
^AA* Any line if it starts with one "A"
^AA*B Any line with one or more "A"'s followed by a "B"
^A\{4,8\}B Any line starting with 4, 5, 6, 7 or 8 "A"'s followed by a "B"
^A\{4,\}B Any line starting with 4 or more "A"'s followed by a "B"
^A\{4\}B Any line starting with "AAAAB"
\{4,8\} Any line with "{4,8}"
A{4,8} Any line with "A{4,8}"
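A quick way to see the interval expression in action is to pipe a test string through grep:
[root@localhost ~]# echo "AAAB" | grep "A\{2,3\}B"
AAAB
The line is printed because "AAAB" contains two to three "A"'s followed by a "B"; a
string such as "AB" would not match.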

Words matching with \< and \>


Searching for a word isn't quite as simple as it at first appears. The string "the" will match
the word "other". You can put spaces before and after the letters and use this regular
expression: " the ". However, this does not match words at the beginning or end of the
line. And it does not match the case where there is a punctuation mark after the word.
The characters "\<" and "\>" are similar to the "^" and "$" anchors, as they don't occupy a
position of a character. They do "anchor" the expression between to only match if it is on
a word boundary. The pattern to search for the word "the" would be "\<[tT]he\>". The
character before the "t" must be either a new line character or anything except a letter,
number, or underscore. The character after the "e" must also be a character other than a
number, letter, or underscore or it could be the end of line character.
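With grep this looks like the following (textfile is a placeholder name):
[root@localhost ~]# grep "\<[tT]he\>" textfile
This prints the lines containing the word "the" or "The" on its own, but not lines where the
letters appear only inside a longer word such as "other".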

4.2 INPUT - OUTPUT REDIRECTION


Most commands give output on the screen or take input from the keyboard, but in Linux
it is possible to send output to a file or to read input from a file.
Example:
[root@localhost ~] # ls
This command gives its output on the screen. To send the output of the ls command to a
file, give the command:
[root@localhost ~]#ls > filename
It means: put the output of the ls command into the file filename.
There are three main redirection symbols >, >>, <

> Redirector Symbol


Syntax: Linux-command > filename
It will redirect the output of the Linux command to the specified file. Note that if the file
already exists, it will be overwritten; otherwise a new file is created.

Example:
[root@localhost~]#cat>name.txt
ram
mohan
sohan
seema
akshara
Cntl+d
[root@localhost ~]# sort name.txt > sortfile
[root@localhost~]#cat sortfile
Sorted output will be stored in sortfile

>> Redirector Symbol


Syntax: Linux-command >> filename
To append the result of the Linux command at the end of the file. Note that if the file exists,
it will be opened and the new information/data will be written to the END of the file, without
losing the previous information/data; if the file does not exist, a new file is created.

Example:
[root@localhost ~]# cat >jmitstaff.doc
reena
Pooja sharma
Vipul gupta
Krishan sharma
Shilpa mehta
Cntl +d
[root@localhost ~]# cat name.txt >> jmitstaff.doc
[root@localhost ~]#cat jmitstaff.doc

< Redirector Symbol


Syntax: Linux-command < filename
To take input for the Linux command from a file instead of the keyboard.
Example:
[root@localhost~]#cat>name.txt
ram
mohan
sohan
seema
akshara
Cntl+d
[root@localhost~]#sort < name.txt > sortfile
[root@localhost~]#cat sortfile
In the above example, the sort command takes input from the name.txt file and the output
of sort is redirected to sortfile.

Example:
[root@localhost~]#tr "[a-z]" "[A-Z]" < name.txt > capsnames
[root@localhost~]# cat capsnames
tr command is used to translate all lower case characters to upper-case letters. It takes
input from name.txt file, and tr's output is redirected to capsnames file.

Create staff.doc file


[root@localhost ~]# cat > staff.doc
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
1003 pooja IT Information Technology 23-9-1981

4.3 PIPES
A pipe is a way to connect the output of one program to the input of another program
without any temporary file.
Pipe:
"A pipe is nothing but a temporary storage place where the output of one command is
stored and then passed as the input for second command. Pipes are used to run more than
two commands (Multiple commands) from same command line."
Syntax:-command1 | command2

Examples:
[root@localhost~]# ls | more
Output of the ls command is given as input to the more command, so that the output is
printed one screen-full at a time.
[root@localhost~]# who | sort
Output of the who command is given as input to the sort command; a sorted list of users
will be printed on the screen.
[root@localhost~]#ls -l | wc
Output of the ls command is given as input to the wc command. It will print the number of
lines, words and characters in the listing of the current directory.
[root@localhost~]# who | sort > loggedusers
Output of who command is given as input to sort command and result is stored in
loggedusers file

4.4 FILTERS
Filters take standard input and perform an operation upon it and send the results to
standard output. These filters are used to display the contents of a file in sorted order,
extract the lines of a specified file that contains a specific pattern. A filter performs some
kind of process on the input and gives output.

Sort
This command sorts the contents of a given file based on ASCII value of characters.
Syntax: sort [options] <Filename>
Various options available with Sort command are
• -m <filelist>
[root@localhost ~]# sort staff.doc -m staff.doc
1001 tajinder IT Information Technology 19-02-1979
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983

In the above example, two already sorted files are merged with each other using -m
option.

• -o <Filename>
[root@localhost ~]# sort staff.doc -o staff.txt
[root@localhost ~]# cat staff.txt
1001 tajinder IT Information Technology 19-02-1979
1002 reena IT Information Technology 6-5-1983
1003 pooja IT Information Technology 23-9-1981
1004 vikas CSE computer Engg. 19-9-1982
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987

• -r <Filename>
[root@localhost ~]# sort -r staff.doc
1006 krishan COM commerce 6-10-1987
1005 vipul CSE computer Engg. 4-10-1984
1004 vikas CSE computer Engg. 19-9-1982
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
1001 tajinder IT Information Technology 19-02-1979
In the above example, the file is sorted in the reverse order by using the -r option.

• -n <filename>
[root@localhost ~]# sort -n staff.doc
1001 tajinder IT Information Technology 19-02-1979
1002 reena IT Information Technology 6-5-1983
1003 pooja IT Information Technology 23-9-1981
1004 vikas CSE computer Engg. 19-9-1982
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
In the above example, sort with the -n option arranges its input according to numerical value.
By default, digits, letters and special symbols are compared by their ASCII values, which can
give unexpected results when sorting numbers; the -n option is used to overcome this
limitation.
• -c <Filename>
[root@localhost ~]# sort -c staff.doc
sort: staff.doc:4: disorder: 1004 vikas CSE computer Engg. 19-9-1982
[root@localhost ~]# cat staff.doc
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
In the above example, the -c option checks whether the given file is sorted or not.
If not, then it specifies the point of disorder.

• +pos <Filename>
[root@localhost ~]# sort +1 staff.doc
1006 krishan COM commerce 6-10-1987
1003 pooja IT Information Technology 23-9-1981
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
1001 tajinder IT Information Technology 19-02-1979
1004 vikas CSE computer Engg. 19-9-1982
1005 vipul CSE computer Engg. 4-10-1984
In the above example, +1 specifies that the sort operation is being performed on the
second field i.e. sorting starts after the specified field number in the command parameter.

[root@localhost ~]# sort +1 -2 staff.doc


1006 krishan COM commerce 6-10-1987
1003 pooja IT Information Technology 23-9-1981
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
1001 tajinder IT Information Technology 19-02-1979
1004 vikas CSE computer Engg. 19-9-1982
1005 vipul CSE computer Engg. 4-10-1984
Here +1 means the same thing, but the difference lies in the use of the -pos option: -2
specifies that the sort key ends at the second field, so the sorting is performed on the
second field only.
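Note: the +pos/-pos form is an older syntax and has been removed from recent versions of GNU sort; the equivalent modern option is -k, which takes the starting (and optionally ending) field number counted from 1. For example, the command above can be written as:
[root@localhost ~]# sort -k 2,2 staff.doc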

grep (global search for regular expression)


The grep command examines each line of data it receives from standard input and outputs
every line that contains a specified pattern of characters.
Syntax: grep [-options] pattern <Filename>
Various options available with grep command are
• -i option
[root@localhost ~]# grep -i "KRISHAN" staff.doc
1006 krishan COM commerce 6-10-1987
In the above example, when the -i option is used, case distinctions are ignored while matching
the pattern.

• -v option
[root@localhost ~]# grep -v "krishan" staff.doc
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1004 vikas CSE computer Engg. 19-9-1982
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
In the above example, only the lines that do not match the specified pattern are displayed.

• -n option
[root@localhost ~]# grep -n "krishan" staff.doc
3:1006 krishan COM commerce 6-10-1987
In the above example, the resultant lines are displayed along with their line numbers.

• -c option
[root@localhost ~]# grep -c "IT" staff.doc
3
In the above example, grep searches for the lines containing the word IT and the -c option
prints the number of matching lines.
[root@localhost ~]# grep "IT" staff.doc
1001 tajinder IT Information Technology 19-02-1979
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983

• -<number> option
[root@localhost ~]# grep -1 "COM" staff.doc
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
This example displays the lines matching the specified pattern along with <number> lines of
context above and below them.

REGULAR EXPRESSION CHARACTER SET


The following symbols are used in the regular expression character set.
• * symbol
[root@localhost ~]# grep "com*" staff.doc
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
Here, * matches zero or more occurrences of the preceding character, so "com*" matches "co", "com", "comm" and so on.
[root@localhost ~]# grep "CO*" staff.doc
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
The pattern "CO*" therefore also matches lines that merely contain a "C" (with zero "O"s),
which is why the CSE lines appear; the * symbol alone is not suitable when exactly one
occurrence has to be matched.

• . symbol
[root@localhost ~]# grep "CO." staff.doc
1006 krishan COM commerce 6-10-1987
In this example, (.) symbol is used to match a single character.

• ^<character>
[root@localhost ~]# grep "^1" staff.doc
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
1003 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
In this example, the lines beginning with “1” are matched and displayed.

• <character>$
[root@localhost ~]# grep "7$" staff.doc
1006 krishan COM commerce 6-10-1987
This example displays only those matched lines that are ending with the character
specified in the <character>.
It will display the lines ending with “7”.
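grep is also commonly used as a filter in a pipeline. For example, to list only the sub-directories of the current directory:
[root@localhost ~]# ls -l | grep "^d"
Only those lines of the ls -l output that begin with the character d (i.e. directories) are printed.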

egrep command
This command provides more features than simple grep command. It offers multiple
pattern matching. This is possible using pipe(|) symbol.

Example:
[root@localhost ~]# egrep "krishan|reena" staff.doc
1006 krishan COM commerce 6-10-1987
1002 reena IT Information Technology 6-5-1983
In this example, multiple patterns like krishan and reena are matched using egrep
command.

fgrep command
This command works just like grep; the only difference is that fgrep does not interpret
regular expressions, it searches for fixed strings only.
[root@localhost ~]# fgrep "krishan" staff.doc
1006 krishan COM commerce 6-10-1987
[root@localhost ~]# fgrep "kri*" staff.doc
This example shows that fgrep works fine for simple fixed-string searches, but when a regular
expression such as "kri*" is used, fgrep treats it as a literal string and the command is no
longer useful.
more
The more command is used to display output page by page, instead of letting it scroll off the
screen.
Keys: Space bar/[f] key to scroll one screen forward.
[b] key to scroll one screen backward.
[q] key to quit.
Syntax: more <Filename>
Example: [root@localhost ~]# more staff.doc
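more is most useful at the end of a pipeline, for paging through output that is longer than one screen; for example:
[root@localhost ~]# ls -l /etc | more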

pr
This command is used to display the file contents along with the suitable headers and
footers. The header part contains last modification date and time along with file name and
page number.
Syntax: pr [-options] <Filename>
Various available options are
• -n option: This numbers the lines; the header is displayed at the top and the default page
length of 66 lines is used.
[root@localhost ~]# pr -n staff.doc
2009-11-04 14:11 staff.doc Page 1
1 1001 tajinder IT Information Technology 19-02-1979
2 1005 vipul CSE computer Engg. 4-10-1984
3 1006 krishan COM commerce 6-10-1987
4 1004 vikas CSE computer Engg. 19-9-1982
5 1004 pooja IT Information Technology 23-9-1981
6 1002 reena IT Information Technology 6-5-1983
7 1001 tajinder IT Information Technology 19-02-1979

• -t option: This option turns off the heading at the top of the page.
[root@localhost ~]# pr -t staff.doc
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
1001 tajinder IT Information Technology 19-02-1979

• -l <number>: It changes the page length to the specified <number> of lines. The default
value is 66 lines.
[root@localhost ~]# pr -l 24 staff.doc
2009-11-04 14:11 staff.doc Page 1
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
1001 tajinder IT Information Technology 19-02-1979

Note: - The page length is changed to 24 lines.
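The pr options can be combined, and pr itself is often used in a pipeline; for example, to view a numbered listing without the header, one page at a time:
[root@localhost ~]# pr -t -n staff.doc | more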

cut
This command is used to extract columns (character positions) or fields from a specified file.
Syntax: cut [-options] <Filename>
Options available are
• -c <columns>
Extracts the character position(s) given in <columns> from each line of the specified file.
[root@localhost ~]# cut -c 4 staff.doc
1
5
6
4
4
2
1
In this example, the 4th character of each line of the file staff.doc is displayed on the
screen.
We can also provide a range of character positions to be printed, e.g. 1-4.
[root@localhost ~]# cut -c 1-4 staff.doc
1001
1005
1006
1004
1004
1002
1001
• -f <fields>
This option is used to extract the fields whose numbers are given in <fields> from the
specified file.
[root@localhost ~]# cut -f 1-2 staff.doc
1001 tajinder
1005 vipul
1006 krishan
1004 vikas
1004 pooja
1002 reena
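By default the -f option assumes that fields are separated by tabs. If another character separates the fields, the -d option specifies the delimiter. A minimal sketch, assuming the fields of staff.doc are separated by single spaces:
[root@localhost ~]# cut -d " " -f 2 staff.doc
This prints the second space-separated field (the name) of every line.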

paste
This command is used to merge the corresponding lines of the specified files side by side
(column-wise), separated by tabs.
Syntax: paste < filename1> <filename2>....
Examples:
[root@localhost ~]# cat staff.doc
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1004 vikas CSE computer Engg. 19-9-1982
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983

[root@localhost ~]# cat >subject.doc


Linux
java
CD
MVR
AI
WD
[root@localhost ~]# cat >serial.doc
1
2
3
4
5
6
[root@localhost ~]# paste staff.doc subject.doc
1001 tajinder IT Information Technology 19-02-1979 Linux
1005 vipul CSE computer Engg. 4-10-1984 java
1006 krishan COM commerce 6-10-1987 CD
1004 vikas CSE computer Engg. 19-9-1982 MVR
1004 pooja IT Information Technology 23-9-1981 AI
1002 reena IT Information Technology 6-5-1983 WD

[root@localhost ~]# paste serial.doc staff.doc subject.doc


1 1001 tajinder IT Information Technology 19-02-1979 Linux
2 1005 vipul CSE computer Engg. 4-10-1984 java
3 1006 krishan COM commerce 6-10-1987 CD
4 1004 vikas CSE computer Engg. 19-9-1982 MVR
5 1004 pooja IT Information Technology 23-9-1981 AI
6 1002 reena IT Information Technology 6-5-1983 WD
tr
The so-called translate command is used to translate or delete characters; it is most often
used to change the case of letters.
Syntax:- tr <CharacterSet1> <CharacterSet2> (tr reads its text from standard input, so the
file is usually supplied through a pipe or with the < redirector)
[root@localhost ~]# cat staff.doc | tr "(a-z)" "(A-Z)"
1001 TAJINDER IT INFORMATION TECHNOLOGY 19-02-1979
1005 VIPUL CSE COMPUTER ENGG. 4-10-1984
1006 KRISHAN COM COMMERCE 6-10-1987
1004 VIKAS CSE COMPUTER ENGG. 19-9-1982
1004 POOJA IT INFORMATION TECHNOLOGY 23-9-1981
1002 REENA IT INFORMATION TECHNOLOGY 6-5-1983

In this example, the tr command translates the first character in the characterset1 into the
first character in the characterset2 and so on. The pipe symbol is used to take the input
from the specified file.
Option available with tr is

• -d option: This will delete the character specified in the character set from the input
but won't translate.

[root@localhost ~]# cat staff.doc | tr -d "(a-c)"


1001 tjinder IT Informtion Tehnology 19-02-1979
1005 vipul CSE omputer Engg. 4-10-1984
1006 krishn COM ommere 6-10-1987
1004 viks CSE omputer Engg. 19-9-1982
1004 pooj IT Informtion Tehnology 23-9-1981
1002 reen IT Informtion Tehnology 6-5-1983
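• -s option: This squeezes repeated occurrences of a character from the character set into a single occurrence. For example, to collapse runs of spaces into one space:
[root@localhost ~]# cat staff.doc | tr -s " "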

4.5 PROGRAMMABLE FILTERS


sed
sed is a stream editor. The sed command helps to edit text, for example changing or deleting
all occurrences of one string within a file. It takes a file as input and prints the result on
the screen or redirects the output to a specified file.
Working
The sed utility works by sequentially reading a file, line by line, into memory. It then
performs all actions specified for the line and places the line back in memory to dump to
the terminal with the requested changes made. After all actions have taken place to this
one line, it reads the next line of the file and repeats the process until it is finished with
the file.
Note:-The output can be redirected to another file to save the changes and the original file
is left unchanged.

Syntax : sed [options] '{command}' [filename]


OPTIONS:
Option and its function
-n suppress the automatic printing of the pattern space (used together with the p command to
print only selected lines)
-e add the expression that follows to the commands to be executed (allows several editing
commands in one invocation)
-f read the editing commands from the given script file

Commonly used sed commands:
s substitute one pattern with another
i insert before line
a append after line
c change lines
d delete lines
p print lines
w write lines
q quit early
G append a blank line after each line (double spacing)
= print the current input line number
Note: For explaining the sed command, we'll take the following file as input
[root@localhost ~]# cat >fruits.txt
mango 1kg
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg
Copying using sed
[root@localhost ~]# sed G fruits.txt >order.txt
[root@localhost ~]# cat order.txt
mango 1kg
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg
In the above example, using the sed command with G would double space the file
fruits.txt and output the results to the order.txt.
Numbering using sed
[root@localhost ~]# sed = fruits.txt | sed 'N;s/\n/\./'
1.mango 1kg
2.orange 12pcs
3.apple 5kg
4.kiwis 1kg
5.bananas 24pcs
6.grapes 5kg
In the above example, sed command is used to output each of the lines in fruits.txt with
the line number followed by a period before each line.

Substitution using sed


sed is used to substitute one value for another.
Single Substitution
Syntax: sed 's/{previous data}/{current data}/' <filename>
Example:
[root@localhost ~]# sed 's/mango/litchi/' fruits.txt
litchi 1kg
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg
In above example, in place of mango, word litchi is being substituted.

Multiple substitutions
If multiple changes need to be made to the same file, use the following options.
1) Using -e
[root@localhost ~]# sed -e 's/mango/litchi/' -e 's/apple/guava/' fruits.txt
litchi 1kg
orange 12pcs
guava 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg

2) Using ; between different substitutions


[root@localhost ~]# sed 's/mango/litchi/ ; s/apple/guava/' fruits.txt
litchi 1kg
orange 12pcs
guava 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg

3) Using Enter Key after each substitution


[root@localhost ~]# sed 's/mango/litchi/
> s/apple/guava/' fruits.txt
litchi 1kg
orange 12pcs
guava 5kg
kiwis 1kg
bananas 24pcs

Global substitutions
Suppose the contents of file contain more than one occurrence of the word to be changed
and you want to replace all those with single command, then use the following method.
Syntax: sed 's/{previous data}/{current data}/g' <filename>
Example:
[root@localhost ~]# sed ' s/1kg/5kg/g' fruits.txt
mango 5kg
orange 12pcs
apple 5kg
kiwis 5kg
bananas 24pcs
grapes 5kg
In above example, it will search for all 1kg words and then replace all those words with
5kg.
[root@localhost ~]# sed ' s/1kg/5kg/g
> s/12pcs/24pcs/g' fruits.txt
mango 5kg
orange 24pcs
apple 5kg
kiwis 5kg
bananas 24pcs
grapes 5kg
In above example, it will search for all 1kg and 12pcs words and then replace all those
words with 5kg and 24 pcs respectively.

Restricted substitution
For applying substitution on restricted number of lines, we specify the line numbers.
[root@localhost ~]# sed '1,4 s/1kg/2kg/' fruits.txt
mango 2kg
orange 12pcs
apple 5kg
kiwis 2kg
bananas 24pcs
grapes 5kg
In above example, "1kg" is substituted with "2kg" only in the first and fourth line of the
fruits.txt output.

Printing using sed


[root@localhost ~]# sed -n '3,5p' fruits.txt
apple 5kg
kiwis 1kg
bananas 24pcs
In above example, using -n option , it will start printing from 3rd line and print up to 5th
line using p option.

Deleting using sed


Delete works in the same manner as substitute, only it removes the specified lines.
Syntax: sed '{lines to be deleted} d' <filename>
Examples:
[root@localhost ~]# sed '3,4 d' fruits.txt
mango 1kg
orange 12pcs
bananas 24pcs
grapes 5kg
In the above example, using the d option, it deletes the 3rd and 4th lines and displays the
remaining lines.
[root@localhost ~]# sed '/mango/ d' fruits.txt
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg
In above example, using d option , it will search for the word mango and will delete all
the lines containing mango word.
[root@localhost ~]# sed '3,5 !d' fruits.txt
apple 5kg
kiwis 1kg
bananas 24pcs
In above example , using !d option , it will delete all the lines other than the range
specified in the parameter.
[root@localhost ~]# sed '/^mango/ d' fruits.txt
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg
In the above example, the caret (^) signifies the beginning of a line, so the line is deleted
only if "mango" are its first five characters.

Insertion and appending using sed


For inserting text with sed the i command is used, and for appending text the a command is used.
[root@localhost ~]# sed '1i\
This is the fruit order for today\' fruits.txt
This is the fruit order for today
mango 1kg
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg
In the above example, the backslash (\) after the i command marks the start of the text to be
inserted on the next line. This will insert the specified message before the specified line.
[root@localhost ~]# sed '$a\
This is the end of fruit order\' fruits.txt
mango 1kg
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg
This is the end of fruit order
In the above example, the dollar sign ($) signifies that the text is to be appended at the end
of the file, and the backslash (\) marks the start of the text to be appended. This will
append the specified message at the end of the file.

[root@localhost ~]# sed '3a\


Mr. ram order start from here\' fruits.txt
mango 1kg
orange 12pcs
apple 5kg
Mr. ram order start from here
kiwis 1kg
bananas 24pcs
grapes 5kg
In the above example, the specified message is appended after the 3rd line.

Reading and writing to files using sed


While the editing commands run, you can simultaneously write selected lines to another file.
[root@localhost ~]# sed '
>s/grapes/mango/
>2,4 w fruitsorder.txt ' fruits.txt
mango 1kg
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
mango 5kg
[root@localhost ~]# cat fruitsorder.txt
orange 12pcs
apple 5kg
kiwis 1kg
In the above example, sed replaces the word grapes with mango and simultaneously writes the
2nd to 4th lines to the file fruitsorder.txt.

Changing using sed


It is possible to change (replace) entire matching lines with new text using the c command.
[root@localhost ~]# sed '/mango/ c\
pineapple 5pcs' fruits.txt
pineapple 5pcs
orange 12pcs
apple 5kg
kiwis 1kg
bananas 24pcs
grapes 5kg

Quitting early using sed


The default is for sed to read through an entire file and stop only when the end is reached.
You can stop processing early, however, by using the quit command.
[root@localhost ~]# sed '
> s/1kg/5kg/
> 3q ' fruits.txt
mango 5kg
orange 12pcs
apple 5kg
In the above example, sed replaces 1kg with 5kg and quits after the 3rd line.
[root@localhost ~]# sed 4q fruits.txt
mango 1kg
orange 12pcs
apple 5kg
kiwis 1kg

Labels and Comments


Labels can be placed inside sed script files; they are used as targets for the branching commands described below.
• : The colon signifies a label name. For example:
:HERE
Labels beginning with the colon can be addressed by "b" and "t" commands.
• b {label} Works as a "goto" statement, sending processing to the label preceded by
a colon. For example,
b HERE
sends processing to the line
:HERE
If no label is specified following the b, processing goes to the end of the script
file.
• t {label} Branches to the label only if substitutions have been made since the last
input line was read or since the last execution of a "t" command. As with "b," if a label
name is not given, processing moves to the end of the script file.
• # The pound sign as the first character of a line causes the entire line to be treated
as a comment. Comment lines are different from labels and cannot be branched to
with b or t commands.
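As a small illustration of labels and branching, the following script keeps replacing a double space with a single space; the t command branches back to the label :again only while substitutions are still being made (on fruits.txt, which contains single spaces, the output is unchanged, but the loop matters for input that contains runs of spaces):
[root@localhost ~]# sed ':again
> s/  / /
> t again' fruits.txt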

gawk
This is a programmable filter available in Linux. gawk is useful for manipulating files that
contain columns of data on a line by line basis.
Syntax: gawk '{pattern action}' {file-name}
gawk is used for the following purposes.

Field Selection
There are some predefined variables which can be used for the purpose of field selection.
$0 is a special variable of gawk which prints the entire record.
$1 is used to print the first field,
$2 is used to print the second field, and so on.
Examples:
[root@localhost ~]# gawk '{print $2}' staff.doc
tajinder
vipul
krishan
vikas
pooja
reena

This example is used to print the second field in the staff.doc file.
[root@localhost ~]# gawk '{print $0}' staff.doc
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1003 vikas CSE computer Engg. 19-9-1982
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
The special variable $0 prints the complete record in the above example.
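By default gawk treats runs of spaces and tabs as field separators. The -F option selects a different field separator; for example, to print the user names from /etc/passwd, whose fields are separated by colons:
[root@localhost ~]# gawk -F":" '{print $1}' /etc/passwd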

Predefined variables in gawk


NR and NF are predefined variables of gawk, meaning the Number of the current input Record
and the Number of Fields in the current record, respectively. In the example below, NR
changes as each input record is read, while NF stays the same (6 fields per record) except
for the 3rd record, which has only 5 fields.
[root@localhost ~]# cat > def_var
{
print "Printing Rec. #" NR "(" $0 "),End # of field for this record is " NF
}
[root@localhost ~]# gawk -f def_var staff.doc
Printing Rec. #1(1001 tajinder IT Information Technology 19-02-1979),
End # of field for this record is 6
Printing Rec. #2(1005 vipul CSE computer Engg. 4-10-1984),
End # of field for this record is 6
Printing Rec. #3(1006 krishan COM commerce 6-10-1987),
End # of field for this record is 5
Printing Rec. #4(1003 vikas CSE computer Engg. 19-9-1982),
End # of field for this record is 6
Printing Rec. #5(1004 pooja IT Information Technology 23-9-1981),
End # of field for this record is 6
Printing Rec. #6(1002 reena IT Information Technology 6-5-1983),
End # of field for this record is 6
In the above example, first, we create a gawk file named def_var and it is run using the
command # gawk -f def_var staff.doc. Here the -f option instructs gawk to read its commands
from the given file, i.e. from def_var, and staff.doc is the name of the file which is taken
as input by gawk.
The NR variable can also be used in the default form as in the given examples: -
[root@localhost ~] # gawk 'NR==3 {print}' staff.doc
1006 krishan COM commerce 6-10-1987
[root@localhost ~] # gawk 'NR==3, NR==5 {print}' staff.doc
1006 krishan COM Commerce 6-10-1987
1003 vikas CSE Computer Engg. 19-9-1982
1004 pooja IT Information Technology 23-9-1981

User Defined Variables


gawk provides facility to the user to declare their own variables.
Example:
[root@localhost ~] # cat> arith
{
num1 = $1;
num2 = $2;
sum = $1 + $2;
print num1 " + " num2 "=" sum;
diff = $1 - $2;
print num1 " - " num2 "=" diff;
rem = $1 % $2;
print num1 " % " num2 "=" rem;
mul = $1 * $2;
print num1 " * " num2 "=" mul;
}
[root@localhost ~]# gawk -f arith
15 4
15 + 4=19
15 - 4=11
15 % 4=3
15 * 4=60
In the above program, num1, num2, sum, diff, mul and rem are all user-defined variables. The
values of the first and second fields are assigned to the variables num1 and num2
respectively. The value of a variable can be printed with the print statement as
print operand1 "operator" operand2 "=" result. If a string is not enclosed in double quotes,
it is treated as a variable.

Pattern Matching
general syntax:- gawk '/pattern/{action}' <Filename>
[root@localhost ~]# gawk '/IT/ {print}' staff.doc
1001 tajinder IT Information Technology 19-02-1979
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
In this example, gawk searches for the pattern “IT” in the file staff.doc and displays the
resulting lines.

Usage of printf statement


printf statement is used to print formatted output of the variables or text.
Syntax: printf "format" ,var1, var2, var N
gawk also allows the use of format specifiers like %s, %d, %f etc. in the format part.
[root@localhost ~]# gawk '{printf"%s\n",$0}' staff.doc
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1003 vikas CSE computer Engg. 19-9-1982
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
[root@localhost ~]# gawk '{printf"%d\n",$0}' staff.doc
1001
1005
1006
1003
1004
1002
[root@localhost ~]# gawk '{printf"%f\n",$0}' staff.doc
1001.000000
1005.000000
1006.000000
1003.000000
1004.000000
1002.000000
The above examples show the use of different format specifiers: %d prints the leading integer
(decimal) value of each record and %f prints it as a floating-point number.
[root@localhost ~]# gawk '/IT/ {printf "%s\n",$0}' staff.doc
1001 tajinder IT Information Technology 19-02-1979
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
In this example, IT pattern is searched in the entire records and the resulting lines are
displayed.
[root@localhost ~]# gawk '$3=="IT" {printf "%s\n",$0}' staff.doc
1001 tajinder IT Information Technology 19-02-1979
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
The above example shows the use of relational (comparison) operators such as ==, <=, etc.
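The format specifiers can also carry a field width to align the output in columns; for example, to print the name and branch left-justified in fixed-width columns:
[root@localhost ~]# gawk '{printf "%-10s %-5s\n", $2, $3}' staff.doc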

Begin and end patterns


BEGIN instructs gawk to perform the BEGIN actions before the first line (record) has been
read from the input file. Use the BEGIN pattern to set values of variables, to print a
heading for a report, etc.
Syntax:
BEGIN {
action 1
action 2
action N
}
END instructs gawk to perform the END actions after all lines (records) have been read from
the input file.
Syntax:
END {
action 1
action 2
action N
}
Example:
[root@localhost ~]# gawk 'BEGIN {printf"\n\t\t\tSTAFF DETAILS\t\t\n"}\
>{printf"%s \n",$0}\
>END {printf"\t\t\tThis is the end\t\t\n"}' staff.doc
STAFF DETAILS
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg. 4-10-1984
1006 krishan COM commerce 6-10-1987
1003 vikas CSE computer Engg. 19-9-1982
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
This is the end
The above example demonstrates the use of BEGIN and END. In the header part we print the
heading "STAFF DETAILS", and the footer is used to mark the end of the file.
[root@localhost ~]# gawk 'BEGIN {printf"\n\n\t\t\tSTAFF DETAILS\t\t\n"}\
/COM/{printf"%s \n",$0}\
END {printf"\t\t\tThis is the end\t\t\n"}' staff.doc
STAFF DETAILS
1006 krishan COM commerce 6-10-1987
This is the end
In this example, the pattern to be matched is being specified in the command. This
example prints only those lines which are containing “COM” word in them.
[root@localhost ~]# gawk 'BEGIN {printf"\n\n\t\t\tSTAFF DETAILS\t\t\n"}\
/COM/{printf"%s\t %s \n",$2,$5}\
END {printf"\t\t\tThis is the end\t\t\n"}' staff.doc
STAFF DETAILS
krishan 6-10-1987
This is the end
This example displays the 2nd and 5th column of the row which satisfies the search pattern
“COM”.
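BEGIN and END are also convenient for accumulating values over all records; a small sketch that counts how many staff members of the sample file belong to the IT branch:
[root@localhost ~]# gawk 'BEGIN {count=0} $3=="IT" {count++} END {print "IT staff:", count}' staff.doc
IT staff: 3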

if condition in gawk
General syntax of if condition is as follows:
Syntax:
if ( condition )
{
Stmt 1
Stmt 2
Stmt N
}
else
{
Stmt 1
Stmt 2
Stmt N
}
Example:
[root@localhost ~]# cat>check
{
if($1>1003)
{
print "Editable records\n" $0;
}
else
{
print "non-editable Records\n" $0 ;
}
}
[root@localhost ~]# gawk -f check staff.doc
non-editable Records
1001 tajinder IT Information Technology 19-02-1979
Editable records
1005 vipul CSE computer Engg. 4-10-1984
Editable records
1006 krishan COM commerce 6-10-1987
Editable records
1003 vikas CSE computer Engg. 19-9-1982
Editable records
1004 pooja IT Information Technology 23-9-1981
non-editable Records
1002 reena IT Information Technology 6-5-1983
In this example, the if condition checks for the records which satisfy the condition
($1>1003) and prints the resulting lines with the message "Editable records"; according to
the else part, the other records are printed with the message "non-editable Records".

Loops in gawk
For loop and while loop are used for looping purpose in gawk.
Syntax of for loop
Syntax:
for (initialization; condition; incr/decr)
{
Stmt 1
Stmt 2
Stmt N
}
Statement(s) are executed repeatedly while the condition is true. Before the first iteration,
we initialize variables for the loop. After each iteration of the loop, we
increment/decrement a loop counter.
Example:-
[root@localhost ~]# cat >check1
{
for(i=1;i<3;i++)
{
printf "%s\n ",$0;
}
for (j=1;j<=79;j++)
{
printf "*";
}
}
[root@localhost ~]# gawk -f check1 staff.doc
1001 tajinder IT Information Technology 19-02-1979
1001 tajinder IT Information Technology 19-02-1979
**********************************************************************
1005 vipul CSE computer Engg. 4-10-1984
1005 vipul CSE computer Engg. 4-10-1984
**********************************************************************
1006 krishan COM commerce 6-10-1987
1006 krishan COM commerce 6-10-1987
**********************************************************************
1003 vikas CSE computer Engg. 19-9-1982
1003 vikas CSE computer Engg. 19-9-1982
**********************************************************************
1004 pooja IT Information Technology 23-9-1981
1004 pooja IT Information Technology 23-9-1981
**********************************************************************
1002 reena IT Information Technology 6-5-1983
1002 reena IT Information Technology 6-5-1983
**********************************************************************
In the above example, the first for loop prints each record twice and the second for loop
prints a line of * symbols between consecutive records.
Syntax of while loop:
initialize ;
while (condition)
{
Stmt 1
Stmt 2
Stmt N
Incr/decr
}
Example
[root@localhost ~]# cat >check1
BEGIN {print "Staff ID NAME"}
{
i=1;
while(i<3)
{
printf "%s ",$i;
i++;
}
printf "\n";
}
[root@localhost ~]# gawk -f check1 staff.doc
Staff ID NAME
1001 tajinder
1005 vipul
1006 krishan
1003 vikas
1004 pooja
1002 reena

Functions
gawk provides 2 types of functions.
• Built-in function
• User-defined Functions
Some of the built-in functions are as follows.
1) Numeric Functions
gawk has the following built-in arithmetic functions:
atan2(y, x) Returns the arctangent of y/x in radians.
cos(expr) Returns the cosine of expr, which is in radians.
exp(expr) The exponential function.
int(expr) Truncates to integer.
log(expr) The natural logarithm function.
rand() Returns a random number N, between 0 and 1, such that 0 ≤ N < 1.
sin(expr) Returns the sine of expr, which is in radians.
sqrt(expr) The square root function.

2) String Functions
Gawk has the following built-in string functions:
length([s]) :Returns the length of the string s, or the length of $0 if s is not supplied.
gsub(r, s [, t]) : For each substring matching the regular expression r in the string t (or in
$0 if t is not supplied), substitutes the string s, and returns the number of substitutions.
sub(r, s [, t]) : Just like gsub(), but only the first matching substring is replaced.
substr(s, i [, n]): Returns the at most n-character substring of s starting at i. If n is
omitted, the rest of s is used.
tolower(str): Returns a copy of the string str, with all the upper-case characters in str
translated to their corresponding lower-case counterparts. Non-alphabetic characters are
left unchanged.
toupper(str) :Returns a copy of the string str, with all the lower-case characters in str
translated to their corresponding upper-case counterparts. Non-alphabetic characters are
left unchanged.

3) Time Functions
systime() Returns the current time of day as the number of seconds since the Epoch
(1970-01-01 00:00:00 UTC on POSIX systems).
User-defined functions
Functions in AWK are defined as follows:
Syntax: function name (parameter list) {statements}
Functions are executed when they are called from within expressions in either
patterns or actions. Actual parameters supplied in the function call are used to instantiate
the formal parameters declared in the function. Arrays are passed by reference; other
variables are passed by value. Since functions were not originally part of the AWK
language, the provision for local variables is rather clumsy: They are declared as extra
parameters in the parameter list. The convention is to separate local variables from real
parameters by extra spaces in the parameter list.
Example:
[root@localhost ~]# gawk 'BEGIN {header()}\
function header()
{
printf "Employee Details\n"
}
function footer()
{
printf "...this is the end of file...\n"
}
{printf "%s \n", $0}
END { footer() }' staff.doc
Employee Details
1001 tajinder IT Information Technology 19-02-1979
1005 vipul CSE computer Engg 4-10-1984
1006 krishan COM commerce 6-10-1987
1003 vikas CSE computer Engg. 19-9-1982
1004 pooja IT Information Technology 23-9-1981
1002 reena IT Information Technology 6-5-1983
...this is the end of file...
The statements written in the BEGIN section are now executed within a user defined
function header and same is applied to the END section. A call to the function is made in
the BEGIN and the END part of the command.
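A user-defined function can also take parameters and return a value; a minimal sketch (run without an input file, so everything happens in the BEGIN block):
[root@localhost ~]# gawk 'function area(l, b) { return l * b } BEGIN { print "area =", area(3, 4) }'
area = 12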
Chapter 5
Linux Basics
5.1 LINUX FILE SYSTEM
Directory    Use to store this kind of files
/bin         All essential programs (executable files) which can be used by most of the
             users are stored here, e.g. the vi and ls programs.
/sbin        All super user executables (binaries) are stored here; they are mostly used by
             the root user. As mentioned in the Red Hat Linux documentation: "The
             executables in /sbin are only used to boot and mount /usr and perform
             system recovery operations."
/home        Your sweet home. Mostly you work here. Every user has their own
             subdirectory under this directory.
/etc         Configuration files of your Linux system are here, e.g. smb.conf - the
             Samba configuration file. It may contain subdirectories like xinetd.d
             or X11 which hold more configuration files related to a particular
             application or program.
/lost+found  When your system crashes, this is the place where you may get your files back.
             For example, if you shut down your computer without unmounting the file
             system, the next time your computer starts the 'fsck' program tries to
             recover the file system; at that time some files may be stored here (and
             removed from their original location) so that you can still find them.
/var         Mostly the files stored in this directory change in size, i.e. variable
             data files: spool directories and files, administrative and logging data,
             and transient and temporary files, etc.
/usr         Central location for all your application programs, X Windows, man pages,
             documents etc. Programs meant to be used and shared by all of the users on
             the system. Mostly you need to export this directory using NFS in read-only
             mode. Some of the important sub-directories are as follows:
             bin - contains executables
             sbin - contains files (binaries) for system administration
             include - contains C header files
             share - contains files that aren't architecture-specific, like documents,
             wallpaper etc.
             X11R6 - contains the X Window System
/lib         All shared library files are stored here. These libraries are needed to
             execute the executable (binary) files in /bin or /sbin.
/tmp         Temporary file location.
/dev         Special device files are stored here, e.g. the device files for the keyboard,
             mouse, console, hard disk, CD-ROM etc.
/mnt         Floppy disks, CD-ROM disks and other MS-DOS/Windows partitions are mounted in
             this location (mount points), i.e. temporarily mounted file systems are here.
/boot        Linux kernel directory.
/opt         Used to store large software applications like Star Office.
/proc        This directory contains special files which interact with the kernel. You can
             gather most of the system and kernel related information from this directory.
             To get an idea, try the following commands:
             # cat /proc/cpuinfo
             # cat /proc/meminfo
             # cat /proc/mounts

5.2 LINUX SWAP SPACE AND SWAP FILE

Linux divides its physical RAM (random access memory) into chunks of memory called
pages. Swapping is the process whereby a page of memory is copied to the preconfigured
space on the hard disk, called swap space, to free up that page of memory. The combined
sizes of the physical memory and the swap space are the amount of virtual memory
available.

Swapping is necessary for two important reasons. First, when the system requires more
memory than is physically available, the kernel swaps out less used pages and gives
memory to the current application (process) that needs the memory immediately. Second,
a significant number of the pages used by an application during its startup phase may
only be used for initialization and then never used again. The system can swap out those
pages and free the memory for other applications or even for the disk cache.

However, swapping does have a downside. Compared to memory, disks are very slow.
Memory speeds can be measured in nanoseconds, while disks are measured in
milliseconds, so accessing the disk can be tens of thousands of times slower than accessing
physical memory. The more swapping that occurs, the slower your system will be.
Sometimes excessive swapping or thrashing occurs where a page is swapped out and then
very soon swapped in and then swapped out again and so on. In such situations the
system is struggling to find free memory and keep applications running at the same time.
In this case only adding more RAM will help.
Linux has two forms of swap space: the swap partition and the swap file. The swap
partition is an independent section of the hard disk used solely for swapping; no other
files can reside there. The swap file is a special file in the file system that resides amongst
your system and data files.

To see what swap space you have, use the command swapon -s. The output will look
something like this:

Filename Type Size Used Priority


/dev/sda5 partition 859436 0 -1

Each line lists a separate swap space being used by the system. Here, the ‘Type’ field
indicates that this swap space is a partition rather than a file, and from ‘Filename’ we see
that it is on the disk sda5. The ‘Size’ is listed in kilobytes, and the ‘Used’ field tells us
how many kilobytes of swap space has been used (in this case none). ‘Priority’ tells
Linux which swap space to use first. One great thing about the Linux swapping
subsystem is that if you mount two (or more) swap spaces (preferably on two different
devices) with the same priority, Linux will interleave its swapping activity between them,
which can greatly increase swapping performance.
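You can also check the total, used and free swap (together with physical RAM) using the free command; for example:
[root@localhost ~]# free -m
The Swap line of the output shows the figures in megabytes.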

To add an extra swap partition to your system, you first need to prepare it. Step one is to
ensure that the partition is marked as a swap partition and step two is to make the swap
file system. To check that the partition is marked for swap, run as root:

[root@localhost ~]#fdisk -l /dev/hdb

Replace /dev/hdb with the device of the hard disk on your system with the swap partition
on it. You should see output that looks like this:

Device Boot Start End Blocks Id System


/dev/hdb1 2328 2434 859446 82 Linux swap / Solaris
If the partition isn’t marked as swap you will need to alter it by running fdisk and using
the ‘t’ menu option. Be careful when working with partitions—you don’t want to delete
important partitions by mistake or change the id of your system partition to swap by
mistake. All data on a swap partition will be lost, so double-check every change you
make. Also note that Solaris uses the same ID as Linux swap space for its partitions, so
be careful not to kill your Solaris partitions by mistake.
Once a partition is marked as swap, you need to prepare it using the mkswap (make
swap) command as root:
[root@localhost ~]#mkswap /dev/hdb1
If you see no errors, your swap space is ready to use. To activate it immediately, type:
[root@localhost ~]#swapon /dev/hdb1
You can verify that it is being used by running swapon -s. To mount the swap space
automatically at boot time, you must add an entry to the /etc/fstab file, which contains a
list of file systems and swap spaces that need to be mounted at boot up. The format of each
line is:
<device> <mount point> <file system type> <options> <dump> <fsck order>
Since swap space is a special type of file system, many of these parameters aren't
applicable. For swap space, add:
/dev/hdb1 none swap sw 0 0
where /dev/hdb1 is the swap partition. It doesn’t have a specific mount point, hence none.
It is of type swap with options of sw, and the last two parameters aren’t used so they are
entered as 0.
To check that your swap space is being automatically mounted without having to reboot,
you can run the swapoff -a command (which turns off all swap spaces) and then swapon -
a (which mounts all swap spaces listed in the /etc/fstab file) and then check it with
swapon -s.
How to select swap size?
It is possible to run a Linux system without a swap space, and the system will run well if
you have a large amount of memory—but if you run out of physical memory then the
system will crash, as it has nothing else it can do, so it is advisable to have a swap space,
especially since disk space is relatively cheap.
The key question is how much? Older versions of Unix-type operating systems (such as
Sun OS and Ultrix) demanded a swap space of two to three times that of physical
memory. Modern implementations (such as Linux) don’t require that much, but they can
use it if you configure it. A rule of thumb is as follows: 1) for a desktop system, use a
swap space of double system memory, as it will allow you to run a large number of
applications (many of which will be idle and easily swapped), making more RAM
available for the active applications; 2) for a server, have a smaller amount of swap
available (say half of physical memory) so that you have some flexibility for swapping
when needed, but monitor the amount of swap space used and upgrade your RAM if
necessary; 3) for older desktop machines (with say only 128MB), use as much swap
space as you can spare, even up to 1GB.
The Linux 2.6 kernel added a new kernel parameter called swappiness to let
administrators tweak the way Linux swaps. It is a number from 0 to 100. In essence,
higher values lead to more pages being swapped, and lower values lead to more
applications being kept in memory, even if they are idle. Kernel maintainer Andrew
Morton has said that he runs his desktop machines with a swappiness of 100, stating that
“My point is that decreasing the tendency of the kernel to swap stuff out is wrong. You
really don’t want hundreds of megabytes of BloatyApp’s untouched memory floating
about in the machine. Get it out on the disk, use the memory for something useful.”
One downside to Morton’s idea is that if memory is swapped out too quickly then
application response time drops, because when the application’s window is clicked the
system has to swap the application back into memory, which will make it feel slow.
The default value for swappiness is 60. You can alter it temporarily (until you next
reboot) by typing as root:
[root@localhost ~]# echo 50 > /proc/sys/vm/swappiness
If you want to alter it permanently then you need to change the vm.swappiness parameter
in the /etc/sysctl.conf file.
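A minimal sketch of making the change permanent (the value 50 is only an example):
[root@localhost ~]# vi /etc/sysctl.conf
vm.swappiness = 50
[root@localhost ~]# sysctl -p
The sysctl -p command reloads the settings from /etc/sysctl.conf without a reboot.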
Creating a swap file
As well as the swap partition, Linux also supports a swap file that you can create,
prepare, and mount in a fashion similar to that of a swap partition. The advantage of swap
files is that you don’t need to find an empty partition or repartition a disk to add
additional swap space.
With Linux installed, you can use the following commands to create a swap file. The
command below creates a swap file of size 16416 blocks (about 16 MB).
[root@localhost ~]# dd if=/dev/zero of=/swap bs=1024 count=16416
This command creates the swap file, /swap. The ``count='' parameter is the size of the
swap file in blocks.
[root@localhost ~]# mkswap /swap 16416
This command initializes the swap file. Again, replace the name and size of the swapfile
with the appropriate values.
[root@localhost ~]# sync
[root@localhost ~]# swapon /swap
Now the system is swapping on the file /swap. The sync command ensures that the file
has been written to disk.
To remove a swap file, first use swapoff, as in
[root@localhost ~]# swapoff /swap
Then the file can be deleted.
[root@localhost ~]# rm /swap
The /etc/fstab entry for this swap file would look like this:
[root@localhost ~]# vi /etc/fstab
/swap none swap sw 0 0

5.3 MANAGING LINUX FILE SYSTEM


File system is broadly divided into two categories:
• User data - stores actual data contained in files
• Metadata - stores file system structural information such as super block , inodes,
directories .
File
A file is a collection of data items stored on disk; it can hold information, data, music
(mp3), pictures, movies, sound, books etc. In fact, whatever you store in the computer must be
in the form of a file. Files are always associated with devices like hard disks, floppy disks
etc. A file is the last object in your file system tree.
Directory
A directory is a group of files. Directories are divided into two types:
• Root directory - Strictly speaking, there is only one root directory in your system,
which is denoted by / (forward slash). It is the root of your entire file system and can
not be renamed or deleted.
• Sub directory - A directory under the root (/) directory is a subdirectory, which can be
created and renamed by the user.
Linux file system Superblock
Let us take an example of a 20 GB hard disk. The entire disk space is subdivided into
multiple file system blocks. What are the blocks used for?
Linux file system blocks
The blocks are used for two different purposes:
1. Most blocks store user data, aka files (user data).
2. Some blocks in every file system store the file system's metadata. So what exactly is
metadata?
In simple words, metadata describes the structure of the file system. The most common
metadata structures are the superblock, inodes and directories.

Superblock
Each file system is different: it has a type such as ext2 or ext3, a size such as 5 GB or
10 GB, and a status such as its mount status. In short, each file system has a superblock,
which contains information about the file system such as:
File system type
Size
Status
Information about other metadata structures
If this information is lost, you are in trouble (data loss), so Linux maintains multiple
redundant copies of the superblock in every file system. This is very important in many
emergency situations; for example, you can use the backup copies to restore a damaged primary
superblock. The following command displays the primary and backup superblock locations on
/dev/sda3:
[root@localhost ~]# dumpe2fs /dev/sda3 | grep -i superblock
Linux file system inodes
Each object in the file system is represented by an inode. A Linux file has following
attributes:
=> File type (executable, block special etc)
=> Permissions (read, write etc)
=> Owner
=> Group
=> File Size
=> File access, change and modification time
=> File deletion time
=> Number of links (soft/hard)
=> Extended attribute
=> Access Control List (ACLs)
All of the above information is stored in an inode. The inode identifies the file and its
attributes. Each inode is identified by a unique inode number within the file system. The
inode number is also known as the index number.
An inode is a data structure of a Linux file system such as ext2 or ext3. An inode stores
basic information about a regular file, directory, or other file system object.

Checking inode number


You can use ls -i command to see inode number of file
[root@localhost ~]# ls -i /etc/shadow
3085392 /etc/shadow
You can also use stat command to find out inode number and its attribute:
[root@localhost ~]# stat /etc/passwd
Output:
File: `/etc/passwd'
Size: 2724 Blocks: 16 IO Block: 4096 regular file
Device: 802h/2050d Inode: 3085392 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2009-10-24 14:15:01.000000000 +0530
Modify: 2009-10-23 15:08:48.000000000 +0530
Change: 2009-10-23 15:08:48.000000000 +0530
First find out file inode number with any one of the following command:
Syntax:-stat {file-name}
[root@localhost ~]# cat >teji.doc
india is great
[root@localhost ~]# stat teji.doc
File: `teji.doc'
Size: 14 Blocks: 16 IO Block: 4096 regular file
Device: 802h/2050d Inode: 1770627 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2009-10-24 14:09:58.000000000 +0530
Modify: 2009-10-24 14:10:11.000000000 +0530
Change: 2009-10-24 14:10:11.000000000 +0530
OR
Syntax: ls -il {file-name}

Linux soft and hard links


inodes are associated with precisely one directory entry at a time. However, with hard
links it is possible to associate multiple directory entries with a single inode. To create a
hard link use ln command as follows:
[root@localhost ~]# ls -l teji.doc
-rw-r--r-- 1 root root 14 Oct 24 14:10 teji.doc
[root@localhost ~]# ln teji.doc teji1.doc
[root@localhost ~]# ls -l teji.doc
-rw-r--r-- 2 root root 14 Oct 24 14:10 teji.doc
Above commands create a link to file teji.doc.
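To confirm that both names now refer to the same inode, list them with the -i option (the inode number shown here matches the stat output for teji.doc above; it will be different on your system):
[root@localhost ~]# ls -li teji.doc teji1.doc
1770627 -rw-r--r-- 2 root root 14 Oct 24 14:10 teji.doc
1770627 -rw-r--r-- 2 root root 14 Oct 24 14:10 teji1.doc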
Symbolic links refer to:
A symbolic path indicating the abstract location of another file.
Hard links refer to:
The specific location of physical data.
Difference between Hard link and Soft link
Hard links cannot link directories and cannot cross file system boundaries.
Soft links can point to directories and can cross file system boundaries; however, a symbolic
link becomes a broken (dangling) link if the file it points to is deleted.
Symbolic link creation and deletion
[root@localhost ~]# ls -l teji.doc
-rw-r--r-- 2 root root 14 Oct 24 14:10 teji.doc
[root@localhost ~]# ln -s teji.doc teji1.doc
[root@localhost ~]# ls -l teji1.doc
lrwxrwxrwx 1 root root 8 Oct 24 14:10 teji1.doc -> teji.doc
If we delete the text file teji.doc, teji1.doc becomes a broken link and our data is lost. But
if we delete the link teji1.doc , data will be retained as shown below
[root@localhost ~]# rm teji1.doc
rm: remove regular file `teji1.doc'? y
[root@localhost ~]# cat teji.doc
india is great

Programs used to manage file systems


badblocks: Search a device for bad blocks. The command "badblocks /dev/hda1" will
search the first partition of the first IDE hard drive for bad blocks.
cfdisk: A partition table manipulator used to create or delete disk partitions.
dosfsck: Used to check a msdos filesystem.
dumpe2fs: Lists the superblock and blocks group information on the device listed. Use
with a command like "dumpe2fs /dev/hda2". The filesystem on the device must be a
Linux file system for this to work.
fdformat: Performs a low-level format on a floppy disk. Ex: "fdformat /dev/fd0H1440".
fdisk: Used to add or remove partitions on a disk device. It modifies the partition table
entries.
fsck: Used to check and/or repair a Linux filesystem. This should only be used on file
systems that are not mounted.
hdparm: Used to get or set the hard disk parameters.
mkdosfs: Used to create a msdos filesystem.
mke2fs: Create a Linux native filesystem which is called a second extended filesystem.
This creates the current version of the Linux filesystem.
mkfs: Used to make a Linux filesystem on a device. The command "mkfs /dev/hdb1"
will create a Linux filesystem on the first partition of the second IDE drive.
mkswap: Creates a Linux swap area on a device.
mount: Used to mount a filesystem. It supports many types of filesystems.
stat(1u): Used to print out inode information on a file. Usage: stat filename
swapoff: Used to de-activate a swap partition.
swapon: Used to activate a swap partition.
tune2fs: Used to adjust filesystem parameters that are tunable on a Linux second
extended filesystem. The filesystem must not be mounted read-write when this operation is
performed. It can adjust the maximum mount count between filesystem checks, the time
between filesystem checks, the amount of reserved blocks, and other parameters.
umount: Unmount a filesystem.

Creating a File system


Creating a swap partition
Type "mkswap -c /dev/hda3 10336".
The -c option makes mkswap check for bad blocks. The 10336 is the size of the partition in
blocks, about 10 MB. The system enables swap partitions at boot time, but if you are
installing a new system you can type "swapon /dev/hda3" to enable it immediately.
Creating an ext2 file system on a floppy
1. fdformat /dev/fd0H1440
2. mkfs -t ext2 -c /dev/fd0H1440
Other file systems:
A normal hard drive can have many types of filesystems on it. To create an ext2 file
system, type "mke2fs -c /dev/hda2 82080" to create an 82 MB filesystem.
Checking a File system
fsck - To check the consistency of the file system and repair it if it is damaged, you can
use file system checking tools. fsck checks and repairs a Linux file system. e2fsck is
designed to support ext2 and ext3 file systems, whereas the more generic fsck also works
on any other file systems. The ext2 and ext3 file systems are the file systems normally
used for Linux hard disk partitions and floppy disks.
Syntax: fsck -t type device
[root@localhost ~]# fsck -t ext2 /dev/hda3

List of file system types supported by Linux


File System Types
Types Description
auto Attempts to detect the file system type automatically.
minix Minix file system (filenames are limited to 30 characters).
ext Earlier version of the Linux file system, no longer in use.
ext3 Standard Linux file system supporting long filenames and large
file sizes. Includes journaling.
ext2 Older standard Linux file system supporting long filenames and
large file sizes. Does not have journaling.
xiafs Xia file system.
msdos File system for MS-DOS partitions (16-bit).
vfat File system for Windows 95, 98, and Millennium partitions (32-
bit).
reiserfs A ReiserFS journaling file system.
xfs A Silicon Graphics (SGI) file system.
ntfs Windows NT, Windows XP, and Windows 2000 file systems
(read-only access; write access is experimental and dangerous).
smbfs SMB (Samba/Windows) remote file systems, i.e. network shares.
hpfs File system for OS/2 high-performance partitions.
nfs NFS file system for mounting partitions from remote systems.
umsdos UMSDOS file system.
swap Linux swap partition or swap file.
sysv Unix System V file systems.
iso9660 File system for mounting CD-ROMs.
proc Used by the operating system for processes (kernel support file
system).
devpts Unix 98 pseudo terminals (ttys, kernel interface file system).
shmfs and tmpfs Linux virtual memory, POSIX shared memory maintenance
access (kernel interface file system).

Disk Quota - Setting disk space for particular user


Enabling Disk Quotas
In order to use disk quotas follow the steps given below.
1. Modify following file
[root@localhost ~]# vi /etc/fstab

# This file is edited by fstab-sync - see 'man fstab-sync' for details


LABEL=/ / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
LABEL=/home /home ext3 defaults,usrquota,grpquota 1 2
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
/dev/sda1 /mnt/dump auto defaults 0 0
In the above example you can see /home file system has both user and group quotas
enabled.
2. Remount the file system
To remount the file system you can reboot the system, or remount it in place with a command
such as mount -o remount /home.
3. Running quotacheck
After remounting the filesystem, check whether the filesystem is ready to support quotas.
You can check it by using the command
[root@localhost ~]# quotacheck -auvg
It will report whether the filesystem is ready, i.e. whether quota is enabled or not.
If quota is not enabled, you can turn it on using the command
[root@localhost ~]# quotaon -a
4. Assigning quotas
[root@localhost ~]# edquota -u <username>
The above command opens the disk quota settings for that user in the default editor:
Filesystem blocks soft hard inodes soft hard
/home 68 1400 1500 0 0 0
Hard Limit
The hard limit sets the maximum amount of disk space that a user or group can use; it cannot
be exceeded.
Soft Limit
The soft limit defines the amount of disk space at which the user starts receiving warnings.
Unlike the hard limit, the soft limit can be exceeded, but only for a limited grace period and
only up to the hard limit.
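To see a summary report of quota usage for all users of a quota-enabled file system, the repquota command can be used; for example:
[root@localhost ~]# repquota /home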

mount command
Now to mount device (CDROM,FLOPPY DISK etc) use following mount command
Syntax : mount option {device} {mount-point}
Option:
-t type To specify file system being mounted.
For MS-DOS use msdos
For Windows 9x use vfat (Long file name support under windows 95/NT etc)
iso9660 type is default and used by cd-roms.
-v it’s called verbose mode which gives more information when you mount
the file system.

How to Use CD-ROM disk?


[root@localhost ~]# mount /mnt/cdrom
[root@localhost ~]# cd /mnt/cdrom
[root@localhost ~]# ls
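Note that the short form mount /mnt/cdrom works only because /etc/fstab already contains an
entry for the CD-ROM, which most distributions create by default. A typical entry (the device
name /dev/cdrom is an assumption and may differ on your system) looks like this:
/dev/cdrom /mnt/cdrom iso9660 ro,noauto,user 0 0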
How to use the Windows C: partition under Linux?
Use following commands for MS-DOS FAT 16 Partition:
[root@localhost ~]#mkdir dos
[root@localhost ~]# fdisk -l
[root@localhost ~]# mount -t msdos /dev/hda1 dos
[root@localhost ~]#cd dos
[root@localhost dos]#ls
Use following commands for MS-Windows FAT 32 Partition.
[root@localhost ~]#mkdir win
[root@localhost ~]# mount -t vfat /dev/hda1 win
[root@localhost ~]#cd win
[root@localhost win]#ls
Use following commands for MS-Windows NTFS Partition.
[root@localhost ~]#mkdir winnt
[root@localhost ~]# mount -t ntfs /dev/hda1 winnt
[root@localhost ~]#cd winnt
[root@localhost winnt]#ls
umount
The process of attaching a device to a directory is called mounting, and the process which
detaches the device from the directory is called unmounting.
How to Unmount device?
Syntax: umount { device }
Syntax: umount -t type { device }
How to unmount CD-ROM disk?
[root@localhost ~]#umount /mnt/cdrom

How to unmount DOS partition (C:)?


[root@localhost ~]#cd ~
[root@localhost ~]#umount -t msdos /dev/hda1
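If umount complains that the device is busy, some process is still using the mount point (for
example, a shell whose current directory is inside it). The fuser utility can identify such
processes; a quick check might look like this:
[root@localhost ~]# fuser -m /mnt/cdrom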

5.4 LINUX BOOT LOADERS


What is a boot loader?
A boot loader loads the operating system. When your machine loads its operating system,
the BIOS reads the first 512 bytes of your bootable media (which is known as the master
boot record, or MBR). You can store the boot record of only one operating system in a
single MBR, so a problem becomes apparent when you require multiple operating
systems. Hence the need for more flexible boot loaders.
The master boot record itself holds two things -- either some of or all of the boot loader
program and the partition table (which holds information regarding how the rest of the
media is split up into partitions). When the BIOS loads, it looks for data stored in the first
sector of the hard drive, the MBR; using the data stored in the MBR, the BIOS activates
the boot loader.
Due to the very small amount of data the BIOS can access, most boot loaders load in two
stages. In the first stage of the boot, the BIOS loads a part of the boot loader known as the
initial program loader, or IPL. The IPL interrogates the partition table and subsequently
is able to load data wherever it may exist on the various media. This action is used
initially to locate the second stage boot loader, which holds the remainder of the loader.
The second stage boot loader is the real meat of the boot loader; many consider it the
only real part of the boot loader. This contains the more disk-intensive parts of the loader,
such as user interfaces and kernel loaders.
Boot loaders are usually configured in one of two ways: either as a primary boot loader or
as a secondary boot loader. Primary boot loaders are where the first stage of the boot
loader is installed on the MBR. Secondary boot loaders are where the first stage of the
boot loader is installed onto a bootable partition. A separate boot loader must then be
installed into the MBR and configured to pass control to the secondary boot loader.

LILO - The Linux Loader


LILO is a versatile boot loader for Linux. It does not depend on a
specific file system, can boot Linux kernel images from floppy disks
and hard disks, and can even boot other operating systems. One of up
to sixteen different images can be selected at boot time. Various
parameters, such as the root device, can be set independently for each
kernel. LILO can even be used as the master boot record.
The master boot record is the first sector of your hard disk. This is where your computer's
BIOS will look for instructions on how to load an operating system. If you just had
Win95 or DOS installed it would just be an instruction to load that one OS. As stated
above, LILO gives us the versatility to load more than one OS. You can configure LILO
by manually editing the lilo.conf file. LILO configuration is all done through a
configuration file located in /etc/lilo.conf.
[root@localhost ~]# vi /etc/lilo.conf
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=100
compact
default=Linux
image=/boot/vmlinuz-2.4.18-14
label=Linux
root=/dev/hdb3
read-only
password=linux
other=/dev/hda
label=WindowsXP
The options are:
• The boot= line tells LILO where to install the boot loader. In the previous example,
this will install it to the MBR of first hard disk. You could alternatively install
LILO in /dev/hdb3 (the Linux partition in the example), which would then require
you to install another boot loader into /dev/hda that points it to the LILO boot
loader; then you just let LILO act as a secondary boot loader. In general, /dev/hda
is the most common place for your boot loader to reside. You can also make a
LILO floppy boot disk by pointing this parameter to the floppy drive, most
commonly /dev/fd0.
• map= points to the map file used by LILO internally during bootup. When you
install LILO using the /sbin/lilo command, it automatically generates this file,
which holds the descriptor table (among other things). My advice is to leave this
as it is!
• install= is one of the files used internally by LILO during the boot process. This
holds both the primary and secondary parts of the boot loader. A segment of this
boot.b file is written to the MBR (the primary part of the boot loader), which then
points to the map and subsequently points to the secondary boot loader. Again,
leave this as it is!
• prompt tells LILO to use the user interface (giving you, in this example, two
selections: Linux and WindowsXP). In addition, using the prompt/user interface,
you get the option to specify specific parameters for the Linux kernel or others if
appropriate. If you do not specify this option in the configuration file, LILO will
boot into the default OS with no user interaction and no waiting. (It's worth
noting, though, that if you hold the SHIFT key down during boot, you can get the
prompt up anyway, which is quite useful if you don't want the average Joe to be
exposed to the boot loader).
• timeout= is the number of tenths of a second that the boot prompt will wait before
automatically loading the default OS, in this case Linux. If prompt is not specified
in the lilo.conf, this parameter is ignored.
• The compact option magically makes the boot process quicker by merging adjacent
disk read requests into a single request. It can be a mixed blessing, though, as I've
seen a number of posts on forums regarding issues with this option. This option is
especially useful if you wish to boot from a floppy.
• The default= option tells LILO which image to boot from by default, such as after
the timeout period. This relates to a label of one of the images in the lilo.conf file.
If you don't specify this option in the configuration file, it will boot the first image
specified in the file.
• For each version of Linux you want to make available for users to boot into, you
should specify image= and the following three options. The image option specifies
the kernel version you wish to boot to.
• label= identifies the different OS you want to boot from at the user interface at
runtime. In addition, this label is used for specifying the default OS to boot from.
(Note: Avoid spaces in the label name; otherwise, you will get an unexpected
error when loading the file.)
• The root= option tells LILO where the OS file system actually lives. In our
example, it is /dev/hdb3, which is the third partition of the second disk.
• read-only tells LILO to perform the initial boot to the file system read only. Once
the OS is fully booted, it is mounted read-write.
• The password= option allows you to set a password for the specific OS you are
booting into. In the example this password is held in the lilo.conf file as readable
text, so is easily accessible for all to read. Alternatively if you set password=""
you can set the password when the bootloader is installed. These can be set on
each of the operating systems you wish to boot from if required (in our example
we only set a password on the Linux boot).
• other= acts like a combination of the image and root options, but for operating
systems other than Linux. In our example, it tells LILO where to find the
Windows OS, which resides on the first disk in the first partition. This will
usually be the case if you have installed Windows first, then Linux.
The initial boot process
When LILO initially loads, it brings up in order each of the letters -- L-I-L-O. If all the
letters come up, the first stage boot was successful. Anything less indicates a problem:
• L: The first stage boot loader has been loaded. If LILO stops here, there were
problems loading the second stage boot loader. This is usually accompanied by an
error code. The common problems at this stage are media problems or incorrect
disk parameters specified in your lilo.conf file.
• LI: The second stage boot loader has been loaded. LILO halting at this point
indicates the second stage boot loader could not be executed. Again, this can be
due to problems similar to just L: loading or if the boot.b file has been corrupted,
moved, or deleted.
• LIL: The second stage boot loader has now been executed. At this point, media
problem could again be responsible or the map file (as specified in the lilo.conf
file) could have had problems finding the descriptor tables.
• LIL?: Loaded to the same point as above. This usually means the second stage
boot loader loaded at an incorrect address, caused most likely by boot.b being in a
different place than specified in the lilo.conf file.
• LIL-: Loaded to the same point as above. Problem loading the descriptor table,
most likely due to a corrupt descriptor table.
Run LILO
The next step is to run lilo so it can read your lilo.conf, write out the boot map, and make
your system bootable. All it takes is to run lilo from the command prompt as root. When I do
it using the above lilo.conf I get this nice output:
[root@localhost ~]# lilo
Added Linux *
Added WindowsXP
The * next to the first item (Linux) signifies the image that will be booted at boot time.
If there is an error in your lilo.conf it will show up here, and you can go back, re-edit
lilo.conf, and run lilo again.
Removing LILO
The easiest way to do this is to restore a standard master boot record from DOS/Windows.
Boot from a DOS or Windows 9x startup disk and run fdisk with the flag that replaces the
master boot record (/mbr); on Windows XP, the equivalent is the fixmbr command in the
Recovery Console.
C:\STUFF>fdisk /mbr

Unfortunately, fdisk doesn't give you any output to tell you it did it correctly. Just reboot
and you'll go straight into Windows, with LILO removed.

GRUB- Grand Unified Boot loader


GRUB (GRand Unified Bootloader) is an advanced boot loader that is capable of booting
multiple operating systems on a single machine. It can load *nix as well as other
proprietary operating systems. Users coming from the MS Windows platform are often
unfamiliar with the concept of boot loaders, because proprietary operating systems
like Windows tend to hide background features of the system, such as the boot loader, from
the user. With the help of a boot loader you can theoretically load hundreds of operating
systems. Most familiar Linux distros currently ship with GRUB (Figure 1) by default. In
short, GRUB is what is displayed immediately after the BIOS. It enables a user to select
which OS the machine should boot from a list, by using the arrow keys. One of the
biggest benefits of GRUB is that it is dynamically configurable. LILO is another boot
loader, which was once the default and has now been deprecated by most distros. More
recently, the GRand Unified Bootloader has been developed by the Free Software Foundation,
based on the original GRUB program originally created by Erich Stefan Boleyn.

HOW GRUB WORKS


When a computer boots, the BIOS passes control to the first boot device, which may be
the hard disk, CD-ROM, floppy disk, or a flash drive. The MBR is the first sector of the hard
disk and is only 512 bytes in size (Figure 2). This sector consists of the code required to boot
a PC. MBR consists of 446 bytes of primary boot loader code and 64 bytes of the
partition table. The partition table records the information regarding the primary and
extended partitions. Boot loading is implemented in GRUB as Stage 1, Stage 2, Stage 1.5
(optional), etc. The primary boot loader area (446 bytes) contains Stage 1, which in turn
directs you to Stage 2 (i.e., the menu.lst configuration file, which has the list of operating
systems on the machine).

Installing and configuring GRUB


You can get the latest release of GRUB as follows:
[root@localhost ~]# wget ftp://alpha.gnu.org/gnu/grub/grub-1.96.tar.gz
[root@localhost ~]# tar -xzvf grub-1.96.tar.gz
[root@localhost ~]# cd grub-1.96
[root@localhost ~]# ./configure; make
[root@localhost ~]# sudo make install

The next step is to configure GRUB by properly editing the menu.lst file. You can find
the GRUB Stage2 configuration file at /boot/grub/menu.lst.
Configuring GRUB
GRUB configuration is all done through a configuration file:
[root@localhost ~]# vi /boot/grub/grub.conf
default=0
timeout=10
splashimage=(hd1,2)/grub/splash.xpm.gz
password --md5 $1$opeVt0$Y.br.18LyAasRsGdSKLYlp1
title Red Hat Linux
password --md5 $1$0peVt0$Y.br.18LyAasRsGdSKLYlp1
root (hd1,2)
kernel /vmlinuz-2.4.18-14 ro root=LABEL=/
initrd /initrd-2.4.18-14.img
title Windows XP
password --md5 $1$0peVt0$Y.br.18LyAasRsGdSKLYlp1
rootnoverify (hd0,0)
chainloader +1

The options are:


• The default= option signals to GRUB which image to boot from by default after the
timeout period. This relates to one of the images in the grub.conf file. 0 is the first
specified, 1 is the second specified, etc. If you don't specify this option in the
configuration file, it will boot the first image specified in the file.
• timeout= is the number of seconds the boot prompt will wait before automatically
loading the default OS, in this case, Red Hat Linux.
• splashimage= is the location of the image to be used as the background for the
GRUB GUI.
• The password option specifies the MD5-encrypted password used to gain access to
GRUB's interactive boot options. Note this does not stop users loading your
defined OS choices; this needs to be set on a per-title basis. To generate an md5
password, run the tool grub-md5-crypt (as root), which comes with GRUB. It will
prompt you for the password you want to encrypt. It then will output the MD5-
encrypted password. Copy this into your grub.conf after password --md5 but on the
same line. Usually this password can be set to the root password, since it is only
root who can read the grub.conf file anyway.
• title identifies the specific OS that will be booted from at the user interface at
runtime. Unlike with LILO, you can include spaces in this name.
• password is set in the same way as the password above. Do not set this password to
the root password if you are planning on sharing this machine with other users.
• The root option tells GRUB where the OS file system actually lives. As you can
see, GRUB references the media in a different way than LILO. In our LILO
example, /dev/hdb3 is the third partition of the second disk. Grub references this
disk as (hd1,2), again the third partition of the second disk (disk 0 being the first
disk, partition 0 being the first partition).
• kernel: vmlinuz-X.X.XX-XX is the name of the default boot kernel image within
your root directory.
• initrd: initrd-X.X.XX-XX.img is the name of the default initrd file within your root
directory.
• title is the same as all other title options.
• password: See other password options.
• The rootnoverify option tells GRUB to set the root partition of that OS without trying to
mount it. This avoids load errors if the file system is not supported by GRUB.
• chainloader +1 tells GRUB to use a chain loader to load this OS, which is required
for loading Windows.
Going back to the snippet again, you will notice that the root entry is given as (hd0,4).
This is the standard GRUB naming convention, where:

hd0 stands for the primary master hard disk


hd1 stands for the primary slave hard disk
hd2 stands for the secondary master hard disk
hd3 stands for the secondary slave hard disk; normally the boot disk is hd0, where:
(hd0,0) represents /dev/sda1, the first partition of the primary master hard disk
(hd1,0) represents /dev/sdb1, the first partition of the primary slave hard disk

Other proprietary operating systems like Windows can be loaded by a process called chain
loading, as shown in the Windows XP entry above (without specifying the kernel or other
such parameters; we only specify the partition on which the OS is installed).
The initial boot process
When GRUB initially loads, like LILO it loads its first stage from the MBR. Once this
has loaded, it then enters an intermediate stage between the common boot loader stages
one and two (or for argument's sake, Stage 1.5). Stage 1.5 is present to enable regular file
system access to the GRUB configuration files in /boot/grub rather than accessing using
disk blocks. We then enter stage two of the boot loader where GRUB loads the grub.conf
file.
Handling boot failures and MBR overwriting

A usual scenario all dual-boot (Linux and Windows) users face is installing
Windows after Linux; this causes the MBR (and with it the GRUB boot loader) to be
overwritten by Windows. Following this, the computer boots straight into Windows without
displaying the entries for the other installed operating systems. This is why it is always
advisable to install Linux after Windows. However, even if you have encountered a
situation where you have lost GRUB, you can fix it easily.
Get a GNU/Linux live CD like Knoppix and boot from it. If the live CD displays
a GRUB menu, it is even easier. Press C to enter the GRUB command line:

grub> find /boot/grub/stage1
 (hd0,4)
 (hd0,8)
grub>

The output of the find command in the above snippet says that it has found two Linux
installations on the system. Now, in order to install GRUB from either of these Linux
installations, run the following set of commands:

grub> root (hd0,4)
 Filesystem type is reiserfs, partition type 0x83

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/reiserfs_stage1_5" exists... yes
 Running "embed /boot/grub/reiserfs_stage1_5 (hd0)"... 19 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+19 p (hd0,4)/boot/grub/stage2
 /boot/grub/grub.conf"... succeeded
Done.

grub>
That's it; your GRUB is now restored to the MBR. Alternately, you can boot into the
live CD and get a root prompt:
# mkdir /mnt/fixroot
# mount /dev/sda6 /mnt/fixroot
# mount --bind /dev /mnt/fixroot/dev
# chroot /mnt/fixroot grub-install /dev/sda
What was done above is as follows. Mount the root device (/dev/sda6) at /mnt/fixroot. The
devices currently available to the live system are then bound to /dev of the root partition
(/dev/sda6) at /mnt/fixroot/dev. Finally, we temporarily change root into the mounted
partition and execute grub-install to fix the MBR. (Of course, do not forget to change
/dev/sda6 to the correct Linux partition on your system.)

Forgot your root password?


If you have forgotten the root password of your Linux system, there is no need to panic; the
fix is quite simple. Reboot your system. At the GRUB graphical menu, press E to edit the
entry and append a parameter to the kernel line that tells Linux to enter a bash prompt
immediately after booting the kernel (init=/bin/bash is commonly used). You can then reset
the root password using the passwd command, as follows.

bash-3.0# passwd


Now, you may wonder that if it is so simple to reset the root password, then ordinary
users could use this feature to their own advantage. The next section deals with how to
password protect GRUB, so that unauthorized users can't reset root passwords.
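For example, using the grub.conf shown earlier in this chapter, the edited kernel line would
look roughly as follows; the init=/bin/bash parameter at the end is the important part, and the
rest is simply the kernel line taken from that example:
kernel /vmlinuz-2.4.18-14 ro root=LABEL=/ init=/bin/bash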

Password protecting GRUB


[root@localhost ~]# /sbin/grub-md5-crypt
Password:
Retype password:
$1$tiwKkSK22wLi3Kmzjassimf7k.sh/

Now, append the MD5 hash to your /boot/grub/menu.lst file as follows, at the top of the
file after the comment lines:
# menu.lst - see: grub(8), info grub, update-grub(8)
#             grub-install(8), grub-floppy(8),
#             grub-md5-crypt, /usr/share/doc/grub
#             and /usr/share/doc/grub-doc/
password --md5 $1$tiwKkSK22wLi3Kmzjassimf7k.sh/
If GRUB is password protected, you won't be able to enter the edit mode by pressing E.
Rather, you have to enter the password by pressing P first, and only afterwards press E to
edit the menu.
Advantages of GRUB over LILO
LILO is older and less powerful. Originally LILO did not include a GUI menu choice
(but did provide a text user interface). To work with LILO an administrator has many
tasks to perform in addition to editing the configuration files.
GRUB is a bit easier to administer because the GRUB loader is smart enough to locate
the /boot/grub/grub.conf file when booting. An administrator only needs to install GRUB
once, using the "grub-install" utility. Any changes made to grub.conf will be
automatically used when the system is next booted. In contrast, any changes made to
lilo.conf are not read at boot time. The MBR needs to be "refreshed."
If the LILO MBR is configured incorrectly, the system becomes
unbootable. If the GRUB configuration file is configured incorrectly, GRUB will default to
its command-line interface without the risk of making the system unbootable.
Both LILO and GRUB allow the root user to boot into single-user mode. Both have
a password protection feature, with a difference: while GRUB allows for MD5-encrypted
passwords, LILO manages only plain-text passwords, which anyone can read from the
lilo.conf file with the command vi /etc/lilo.conf.
LILO has no interactive command interface, whereas GRUB does.
LILO does not support booting from a network, whereas GRUB does.
LILO stores the information regarding the location of the operating systems it can load
physically in the MBR. If you change your LILO config file, you have to rewrite the
LILO stage one boot loader to the MBR. Compared with GRUB, this is a much
riskier option, since a misconfigured MBR could leave the system unbootable. With
GRUB, if the configuration file is configured incorrectly, it will simply default to the
GRUB command-line interface.
Unlike LILO's configuration file, grub.conf is read at boot time, and the MBR does not
need to be refreshed when this is changed.

5.5 LINUX SCHEDULING


Scheduling is an important concept that refers to the way processes are assigned priorities
in a priority queue; it is the art of using the processor effectively and efficiently.
Most operating systems are platform dependent, but the open source nature of Linux did
away with the very concept of platform dependency. Today Linux is available on many
platforms that serve a range of applications, from supercomputing to simple gaming
systems. The open nature of Linux has helped a lot in its evolution, whether for newer
applications or for improving existing algorithms inside the kernel. The 2.6 kernel series
(2.6.16.20 at the time of writing) offers vast improvements; one of them is the 2.6
scheduler. In this section we will discuss the basics of Linux scheduling, covering the
scheduling policy, the types of processes, and the scheduling classes.
Process scheduling
In Linux systems, there can be numerous processes running at any point of time. When
the number of processes is more than the available CPUs (which is most likely the case),
it is essential for the operating system to provide services to all processes. Basically, the
operating system should share memory space between various processes and allot a fixed
time for each of them. The scheduler, a kernel sub system, does this scheduling.
Scheduling is all about choosing a process and restoring / saving their context. For
effective scheduling the scheduler should enforce a policy for scheduling that describes
how the scheduler is going to handle the switching of processes. The scheduling policy is
nothing but an instruction on how the process will be prioritized, how the time slices will
be allocated and what type of processes will be favored. The policy should take the
following points into consideration for the benefit of all the processes:
• The scheduler should give all processes a chance to run. It should not favor any one
process or group of processes while other processes are waiting to run.
• The CPU should be used effectively and should not remain idle when there are
processes to run.
• Response time should be low, as this increases the interactivity of applications that
are running on the system. When the response time is low, the user feels that the
application is more interactive and responsive.
• The waiting time for a process to get slotted should be as low as possible. Since many
processes use the processor co-operatively, it is essential that the waiting time for
each process be minimal so that all processes progress over a period of time.
• The throughput of the system should be high, as that increases the number of
completed processes.
The scheduling policy and algorithm should be chosen so that all of the above five points
are addressed.
The scheduling algorithm tells how the scheduling policy will be enforced by the kernel.
Once the policy is adopted, the kernel should enforce the policy with the proper
scheduling algorithm. Failing to do so may result in poor response times, throughput and
turnaround times, and will also lower the efficiency of the operating system.

Context switches

[Figure 1: Several processes contend for processor time; the scheduler chooses one based on
the scheduling policy, performs a context switch, and dispatches it to one of the CPUs.]

Apart from choosing a process, it is also the responsibility of the scheduler to save the
context of the current process and restore the context of the next process scheduled to run.
This is called a context switch, and it is essential for the proper functioning of the processes.
The kernel should make sure that context switching is invisible to the user. Figure 1 depicts a
scenario where there are many processes contending for processor time and the scheduler
chooses one based on the scheduling policy.
Segregating Processes

Linux is a multi-purpose operating system. Though an operating system is usually specific
to a platform, Linux is not. Don't misinterpret this statement: with other operating systems,
for example, code built for an Intel processor will not work on the SPARC architecture. The
good thing about Linux is that it is continuously ported to many hardware platforms. The
open source nature of Linux is the main reason for this; another important reason is the
enthusiasm of kernel developers and, most importantly, Linux users.
Linux powers a wide range of systems, right from small systems like mobile phones and
digital organizers to huge ones like production server clusters and supercomputer
clusters. Since there is difference in the various types of systems it supports, the
application that runs on these different kinds of systems should also be different. The
nature of applications that run on Linux varies from highly demanding real time
processes to the ones that dump output of the command ls, which are not that demanding.
We can classify the processes into three types:
• CPU bound or CPU intensive processes
• Input / output (I/O) bound processes
• Real time processes

CPU-intensive processes: CPU-intensive (CPU bound) processes spend most of their time
executing instructions. Favoring CPU bound processes leads to hogging, and the system
becomes less interactive; on the other hand, if enough time is not given to CPU bound
processes, the throughput of the system degrades. Complex scientific calculations, searching
and sorting are examples of CPU bound operations. The scheduling decision should address
the need for the system not to be hogged by CPU bound processes.
I/O bound processes: I/O processes are mostly occupied with waiting for signals or
interrupts from I/O devices. They spend less time in the CPU. All the processes that
interact with the user, like the shell, are good examples of I/O bound processes. These
processes run momentarily; once the user data is processed, the process goes to sleep again
and waits for the next user input. The pattern repeats itself until the user voluntarily stops
inputting. Retrieving mails from your E-mail server is another good example of I/O
bound processes, wherein the application spawns a thread and waits till the server on the
network responds. In the first case, the process (shell) waits for an event that is internal to
the system, whereas in the case of the email client, the process waits for an event that
depends on the server in the network. But as far as the kernel is concerned, both the
processes are identical: they are waiting for an event.

Real-time processes: These are highly demanding, and all critical operations come
under this category. A typical example would be an embedded system that monitors the
heartbeat of a patient. These processes demand processor time: the moment they come to
the running state they need the CPU, and deferring the scheduling of such processes cannot
be entertained. So we have now segregated the processes and, by doing so, understood that
though these processes are different, they still have to co-exist and run successfully.
Keeping this in mind, let us now try and understand the Linux scheduler.

The Linux scheduling policy

In this section, we will discuss Linux scheduling policy and how scheduling decisions
improve the Linux system performance.
1) Linux has priority based scheduling. Higher priority processes are processed
before lower priority processes.
2) Interactive processes are given more priority than CPU bound processes. This
improves the interactivity of the system.
3) Linux has support for real-time processes. Linux is a soft real-time system. All real-
time processes are serviced before normal, conventional processes.

Based on this, Linux scheduling has three scheduling classes. These are:
SCHED_FIFO
This is the scheduling class for real-time processes. When the kernel puts a process of
the SCHED_FIFO class on the processor, it never takes it back until a real-time process
of higher priority arrives or the process voluntarily relinquishes the processor. The
process is not taken back even if a real-time process of the same priority arrives.
SCHED_RR
This is the same as the first one, except that processes of the same priority are chosen
on a round-robin basis. This class is also used by real-time processes.
SCHED_OTHER
This is the scheduling class for normal, non-real-time processes.
The scheduling of a process is based on its scheduling class.
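From a programmer's point of view, a process can request one of these classes through the
sched_setscheduler() system call. The following is a minimal sketch (not taken from the text
above) that asks for the SCHED_FIFO class at the highest real-time priority; it must be run as
root:

#include <stdio.h>
#include <sched.h>

int main(void)
{
    struct sched_param param;

    /* Ask for the highest priority allowed for the SCHED_FIFO class */
    param.sched_priority = sched_get_priority_max(SCHED_FIFO);

    /* 0 means "the calling process"; this call requires root privileges */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("Now running in the SCHED_FIFO class\n");
    return 0;
}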

5.6 INSTALLING RED HAT LINUX


Preparation before the Installation:
There are a few things that we need to have before we can start. Obviously, we need a
copy of both Windows XP and Linux (either Mandrake 8.2 or Redhat 7.2).
The easiest way to obtain a copy of Linux is to download the bootable ISO’s from a site
and burn them to a CD-R. The official lists of mirror sites to download from for Mandrake
8.2 and for Redhat 7.2 can be found on the respective projects' websites. I
recommend the use of the mirror sites over the actual companies' sites, because the
companies' FTP servers tend to be really slow.

In order to burn the ISO’s into a CD, you must have a third-party CD burning program.
The native burner which is built into Windows XP DOES NOT have the ability to do
this; it will only copy the ISO file onto the CD, which does you no good.

The final preparation step is to back up all your files(Don't forget your e-mails!! - I lost
over a thousand e-mails from my first semester because I forgot to back them up!).

Partitioning, Formatting, and Installing XP


Once you've backed up all of the files that you wish to save (don't forget your e-
mails!), you're ready to begin. It is good practice to install Windows XP first since it does
not have the level of customization that Linux does (XP has the nasty habit of taking the
first partition it finds).

Pop in the XP installation CD and restart. When your Computer’s start up screen comes
up, enter the BIOS setup by pressing the necessary key(s). Set it up to have your
computer boot from your CD drive. When the CD boots up, you’ll be greeted with the
standard Windows XP set-up screen.

The next step is to use the fdisk partitioning utility and carve your hard drive into
partitions. Fdisk comes with XP. After you OK the installation for the first time it will let
you delete your current hard disk and partition it. How you partition your hard disk is up
to you and what your system can handle. I did the following on my 40 GB hard drive:

15GB for XP and applications (NTFS)


1GB for XP Swap/Paging File (NTFS – not necessary for XP)
16.5GB for Data and Files (FAT32)
7.5 GB for Linux (Will be formatted during the Linux install)

I decided to make the Data and Files partition FAT32, because that way both Linux and
XP can read and write to it (Linux can read from NTFS, but writing to it is still
"experimental" and not suggested). After you partition the hard drive, you format the
partition XP will live on using NTFS (make sure that your XP partition is the FIRST partition).

Continue with the XP setup and finish installing. Once you’re done, you can format the
other partitions (not necessary for the Linux partition though) by going to Start ->
Control Panel -> Administrative Tools -> Computer Management. There are tools in
there that you can use to format your partitions and make them recognizable and usable
in Windows.
Partitioning, Formatting, and Installing Linux:
The Linux install is only slightly more complicated, but is faster (~20 min). Pop in the 1st
Linux CD and restart again. Your system should still be set to boot from CD, so the
Linux installation screen should come up. You want to do just an install. In Redhat 7.2
this brings up the installation GUI for Linux. When it asks you what kind of installation
you would like to perform you want to select “custom” or “expert” (custom is the easier
of the two, only use expert if you are sure that you know what you are doing!), since
you’ll need to tell Linux what partition to install on.

Setup will eventually ask you what program you want to use to partition your hard drive.
I chose to use Disk Druid. It’s a pretty easy utility to use and it has a GUI (unlike fdisk
which is command line). Linux will recognize your Windows partitions and show them.
If Windows decided to format your Linux partition to FAT16 (mine did for some weird
reason…) just delete it. Out of the free space left on your hard drive you need to make the
following partitions:

50MB for the boot partition, mounted at "/boot"


2x your current amount of RAM for the Linux swap/paging space (ex: 256MB RAM ->
512MB swap, and it is REQUIRED for Linux), listed as "swap"
3GB or more for the Linux OS partition, mounted at "/", which is called "root"

Once this is done, you can continue through the rest of the setup, just make sure that
when you reach the point of what Boot loader you want to use, pick GRUB and make
sure it installs to the MBR(Master Boot Record). This will cause Linux to boot instead of
XP the next time your computer boots from the hard disk.

Once you're done picking and choosing what you want installed along with the Linux OS,
setup will do the rest for you. Make sure you make a user account for yourself other than
“root” when prompted by the Linux setup, because when you are logged into the “root”
account you can do anything and everything to the partition Linux resides on, including
DELETING THE WHOLE PARTITION by accident.

Post Linux Installation:


Now we have both Operating systems installed. Once Linux is finished setting up you
can remove the CD, and restart. This time go back into the BIOS setup and set the
computer to boot from the hard drive again. GRUB will appear and give you a choice
between either Linux and DOS or just Linux to boot to. Select Linux with your keyboard.
Linux will boot and bring you to a login screen. Login under the “root” account, using the
login name “root” and your “root” password. - This is the only way to modify the
necessary file to get Windows XP to boot from GRUB.

Once Linux is finished loading, bring up the Command Line Shell. Once the Command
Shell appears, you need to navigate to the following directory and file:

/etc/grub.conf
If you've never used the command line interface in Linux before, go to the "Sessions"
button and click on the “Root Midnight Commander” (the non-“Root” version won’t
allow you to edit the file). This will bring up a very basic GUI (similar to the BIOS setup
in use) that will allow you to easily navigate your Linux files.

Once you open up the file, either through the command line or through Midnight
Commander, go all the way down to the bottom of the file and add the following lines
after the rest of the lines in the file:

title Windows XP
[TAB]root (hd0,0)
[TAB]makeactive
[TAB]chainloader +1

PLEASE NOTE that the word TAB inside the brackets indicates where you should indent
with a TAB. Make sure that you type the bolded lines in EXACTLY as written above
with the spaces and capitalizations all correct. Save it and exit. The next time you restart
and the GRUB boot loader comes up, Windows XP should be listed under Linux. Just
select it with the keyboard and hit enter. Windows XP should boot up and your system is
now dual-booting Windows XP and Linux!! If the Windows XP title does not appear on
your GRUB boot loader screen then you know that you did it wrong. Just go into Linux
under your root account, and make sure the added lines look EXACTLY like the bolded
lines above. There are NO semi-colons or anything ending the lines.

5.7 REMOVING LINUX


Windows based solution

There is a simple procedure to uninstall or delete Linux by following the step-by-step
procedure illustrated below; it has been tested and is safe.

Requirements:
You need a Windows 98 startup disk (bootable floppy) or the Windows XP installation CD
(for its Recovery Console).

How to do it:
1. Boot up in Windows xp.
2. Start>>control panel>>administrative tools>>computer management
3. Go to disk management under “storage”
4. Select your hard disk and then the Linux partition.
5. Delete the Linux partition this will delete Linux and grub.
6. Now reboot your laptop with the Windows 98 startup disc or floppy and run the command
"fdisk /mbr" (or boot the Windows XP CD, enter the Recovery Console, and run "fixmbr").
7. The above command will repair your boot loader by rewriting the master boot record,
replacing the corrupted GRUB with the standard Windows boot code.
8. That's it. Now boot your laptop or desktop normally; it will boot by default into
Windows XP.
Linux-based solution: boot Linux using the first CD and type "linux rescue" at the boot prompt.

Then simply invoke "dd"


dd if=/dev/zero of=/dev/hda bs=512 count=1
This fills up the MBR with zeros. Obviously, you have to be root to do this.
DOS based solution. Boot with a DOS floppy that has "debug" on it; run
debug
At the '-' prompt, "block-fill" a 512-byte chunk of memory with zeroes:
f 9000:0 200 0
Start assembly mode with the 'a' command, and enter the following code:
mov dx,9000
mov es,dx
xor bx,bx
mov cx,0001
mov dx,0080
mov ax,0301
int 13
int 20

Press <Enter> to exit assembly mode, take a deep breath, and press "g" to execute, then
"q" to quit "debug". Your HD is now in a virgin state, and ready for partitioning and
installation.

5.8 USER AND GROUP QUOTA


Quotas are used to limit a user's or a group of users' ability to consume disk space. This
prevents a small group of users from monopolizing disk capacity and potentially
interfering with other users or the entire system. You have two ways to set quotas for
users. You can limit users by inodes or by kilobyte sized disk blocks. Every Linux file
requires an inode. Therefore, you can limit users by the number of files or by absolute
space. You can set up different quotas for different file systems. For example, you can set
different quotas for users on the file system mounted on /home.
Quota Activation in /etc/fstab
The file /etc/fstab tells Linux which file systems to mount during the boot process. The
options column of this file configures how Linux mounts a directory. You can include
quota settings in /etc/fstab for users and/or groups.
Device        Mount point   Filesys   Options             dump   fsck
LABEL=/       /             ext3      defaults            1      1
LABEL=/boot   /boot         ext3      defaults            1      2
/dev/sdb1     /home         ext3      defaults,usrquota   1      2
devpts        /dev/pts      devpts    gid=5,mode=620      0      0
tmpfs         /dev/shm      tmpfs     defaults            0      0
proc          /proc         proc      defaults            0      0
sysfs         /sys          sysfs     defaults            0      0
/dev/sda3     swap          swap      defaults            0      0
2. Either Reboot the System or remount the partition
[root@localhost /]# mount -o remount /home
3. Check the Quota
[root@localhost /]# quotacheck -Muv /home
Where: M = perform the check on the mounted file system
u = check user quotas
v = verbose output
4. Turn on the quotas we created
[root@localhost /]# quotaon -a
5. Give write permission on the /home directory
[root@localhost /]# chmod 777 /home
6. Set the quota for the user
[root@localhost /]# edquota -u username
In the editor that opens, specify the soft limit and the hard limit. The soft limit specifies the
point at which warnings are generated for the user; the hard limit cannot be crossed by the user.
7. Command to monitor and report quota information
[root@localhost /]# repquota -uv

HOW TO REMOVE THE QUOTA


1) Remove the entry from file /etc/fstab:
[root@localhost ~]#vi /etc/fstab
Device        Mount point   Filesys   Options          dump   fsck
LABEL=/       /             ext3      defaults         1      1
LABEL=/boot   /boot         ext3      defaults         1      2
/dev/sdb1     /home         ext3      defaults         1      2
devpts        /dev/pts      devpts    gid=5,mode=620   0      0
tmpfs         /dev/shm      tmpfs     defaults         0      0
proc          /proc         proc      defaults         0      0
sysfs         /sys          sysfs     defaults         0      0
/dev/sda3     swap          swap      defaults         0      0

2. Either Reboot the System or remount the partition.


[root@localhost /]# mount -o remount /home
3. Turn off the quotas we created
[root@localhost /]# quotaoff -a
4. Reboot the system
[root@localhost /]# init 6

5.9 LVM (LOGICAL VOLUME MANAGER)


LVM is a logical volume manager for the Linux kernel; it manages disk drives and
similar mass-storage devices, in particular large ones. The term "volume" refers to a
disk drive or partition.
LVM is suitable for:
• Managing large hard disk farms by letting you add disks, replace disks, copy and
share contents from one disk to another without disrupting service.
• On small systems (like a desktop at home), instead of having to guess how big a
partition needs to be, LVM allows you to resize your disk partitions easily as
needed.
• Making backups by taking "snapshots."
• Creating single logical volumes of multiple physical volumes or entire hard disks
(somewhat similar to RAID 0, but more similar to JBOD ), allowing for dynamic
volume resizing.
One can think of LVM as a thin software layer on top of the hard disks and partitions,
which creates an illusion of continuity and ease-of-use for managing hard-drive
replacement, repartitioning, and backups.

1) To create an LVM volume, first create a partition, and then create a physical
volume from this partition:
[root@localhost]# pvcreate /dev/sda6 (6 is the partition number)
2) Now create a volume group that uses this physical volume:
[root@localhost]# vgcreate vg0 /dev/sda6 (vg0 is the volume group name)
3) Create a logical volume from this volume group:
[root@localhost]# lvcreate -L 100m -n lv0 vg0
(100m is the size of the logical volume and lv0 is its name)
4) Now format this logical volume using mkfs.ext3:
[root@localhost]# mkfs.ext3 /dev/vg0/lv0
5) Create a mount point for this volume and mount it:
[root@localhost]# mount /dev/vg0/lv0 /mnt
This mount will be temporary.
To mount it permanently, add an entry to the /etc/fstab file:
[root@localhost]# vi /etc/fstab
/dev/vg0/lv0 /mnt ext3 defaults 0 0
Finally, the LVM volume is created.
6) To view the logical volumes, volume groups and physical volumes:
[root@localhost]# lvdisplay
[root@localhost]# vgdisplay
[root@localhost]# pvdisplay
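One of the advantages mentioned above is easy resizing. As a sketch, assuming the vg0/lv0
volume created in the steps above and free space remaining in the volume group, the logical
volume and the ext3 file system on it can be grown as follows (depending on the kernel and
e2fsprogs versions, the file system may need to be unmounted before resizing):
[root@localhost]# lvextend -L +50M /dev/vg0/lv0
[root@localhost]# resize2fs /dev/vg0/lv0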

5.10 RAID
RAID, an acronym for Redundant Array of Independent Disks (formerly Redundant
Array of Inexpensive Disks), is a technology that provides increased storage reliability
through redundancy, combining multiple relatively low-cost, less-reliable disk drive
components into a logical unit where all drives in the array are interdependent.
Creating a raid device
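The steps below assume that a RAID device /dev/md0 has already been created and mounted on
/data. For reference, a minimal sketch of creating such a mirrored (RAID 1) array with mdadm
might look like the following; the partitions /dev/sda6 and /dev/sda7 are assumptions and should
be adapted to your system:
[root@localhost]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda6 /dev/sda7
[root@localhost]# mkfs.ext3 /dev/md0
[root@localhost]# mkdir /data
[root@localhost]# mount /dev/md0 /data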
1) First verify the previously created RAID device:
[root@localhost]# mdadm --detail /dev/md0
Or
[root@localhost]# cat /proc/mdstat

2) RAID devices are basically used for data redundancy: data remains safe even if a hard
disk in the RAID device fails. To test this, first copy some data into the /data directory,
which is the mount point for the RAID device:
[root@localhost]# cp * /data
[root@localhost]# ls /data

3) Now simulate the failure of one hard disk in the RAID device. This can be done using
the mdadm command with the --fail option.
[root@localhost]# mdadm /dev/md0 --fail /dev/sda7

4) We can verify the failed device status via the mdadm command with the --detail switch also
[root@localhost]# mdadm --detail /dev/md0

5) The rebuild process can also be checked at run time via /proc/mdstat


[root@localhost]# cat /proc/mdstat

6) At this point we have understood what exactly a RAID device does: it keeps your data
safe even if a hard disk crashes, and it gives you enough time to replace the faulty
hard disk without losing data. Now remove our faulty device.
[root@localhost]# mdadm /dev/md0 --remove /dev/sda7

7) Now add the new hard disk to the RAID device.


[root@localhost]# mdadm /dev/md0 --add /dev/sda7

8) Finally, we have exercised the RAID device and replaced the faulty hard disk with a new one.
Deleting a RAID device
Unmount the RAID partition from the /data directory:
[root@localhost]# umount /data
Now stop the /dev/md0 device and remove it with mdadm:
[root@localhost]# mdadm --stop /dev/md0
[root@localhost]# mdadm --remove /dev/md0
Remove the entry from the /etc/fstab file:
[root@localhost]# vi /etc/fstab
And remove the entry.
Finally, we have deleted the RAID device.
Chapter 6
Process Management
6.0 PROCESS MANAGEMENT
A process is a program in execution. Every time you invoke a system utility or an
application program from a shell, one or more "child" processes are created by the shell
in response to your command. All LINUX processes are identified by a unique process
identifier or PID. An important process that is always present is the init process. This is
the first process to be created when a LINUX system starts up and usually has a PID of 1.
All other processes are said to be "descendants" of init.

As with any multitasking operating system, Linux executes multiple, simultaneous


processes. Well, they appear simultaneous, anyway. Actually, a single-processor
computer can only execute one process at a time, but the Linux kernel manages to give each
process its turn at the processor so that each appears to be running at the same time.

6.1 Introduction to process


Since UNIX/Linux is a multitasking system, more than one process can run at a time;
typically hundreds or even thousands of processes run on a large system. All
programming capabilities build on processes. Figure 1 shows what happens with a single
program:

A Simple Process
The figure illustrates that the program is first loaded into memory, then run in a typical
shell environment, and finally exits. The part of the system that manages processes is
called the kernel.

Definition:
“A process is defined as an instance of a running program” Every program running on
your system is running in its own process. In fact every copy of every program has its
own process. For example if you startup an editor twice, without closing the first
invocation before starting the second, you will have two processes running that editor.
Basic Idea:
Most programs give rise to a corresponding process, which usually has the same name as the
program itself. For example, when you execute the grep command, a process named grep
is created. A program can also give rise to multiple processes: when you run a shell script
or a group of commands in a pipeline, you are running one job, but multiple
processes.
Attributes of a process:
A process has the following attributes associated with it:
1. PID(Process ID )
2. environment
3. Memory area
4. signal Handling
5. File descriptors
6. Resource Scheduling
7. Security information
8. Synchronization
9. State
1. PID: Each process has a unique ID. A PID is assigned when the process is
created, and a given value can be reused only much later, after the process has
exited and the system has remained online for a long time. The PID is the primary
way of identifying a process.
2. Memory area: Each process also has a Memory area associated with it. This
area holds:
i) The code for the program that is running in that process.
ii) The data (variables) for that particular program.

3. File descriptors: There are 3 standard file descriptors:


i) standard input
ii) standard output
iii) standard error
These file descriptors are opened by default for your program in most
situations.
4. Security information: Some security information is associated with a process as
well. At a minimum, a process records the user and the group of the person who
started it.
5. Environment: it holds things such as environment variables and the command
line used to invoke the program that is running in the process
6. Signal Handling: A process can send and receive signals and act based on them.
These enable standard execution to be interrupted to carry out a special task.
Signal reception is based on security of the process.
7. Resource scheduling: A process is also the unit used when scheduling access to
system resources. For instance, if 20 programs are running on a system with a single
CPU, the Linux kernel alternates between them, giving each a small amount of time
to run and then rapidly switching to the next; thus each process gets a time slice.
Resource scheduling is also done on the basis of process priority.
8. Synchronization: Synchronization with other programs is also done at the per-
process level. Processes may request and check for locks on certain files to ensure
that only one process is modifying the file at any given time. Processes may also use
shared memory or semaphores to communicate with and synchronize between
each other.
9. State: Each process also carries state information, including:
i) The real user ID of the user who created the process, i.e. the owner.
ii) The real group ID of the owner of the process. Both the user ID and group
ID are defined in /etc/passwd.
iii) The current directory from which the process was run.

6.2 STARTING AND STOPPING PROCESS


Like a file, a process is ultimately a sequence of bytes interpreted as instructions to be run
by the CPU. It usually comes from an executable program, which contains the binary code
to be executed, along with the data that is required for the program to run. This data
comprises the variables and arrays that you find in program code.
The process image viewed by the kernel runs in its own user address space, a protected
space which cannot be disturbed by other processes. The address space has a number of
segments:
TEXT SEGMENT: This contains the executable code of a program. Since several users
may be using the same program, this area generally remains fixed.
DATA SEGMENT: all variables and arrays the program uses are held here. They
include those variables and arrays that have values assigned to them during program
execution.
USER SEGMENT: This contains all the process attributes, such as the UID and GID
and the current directory. The kernel uses the information stored in this segment to
manage all processes.
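As a rough illustration (the user segment is internal to the kernel and is not visible here), the
text and data segments of a running process can be inspected through the /proc file system:
$ cat /proc/$$/maps
Here $$ expands to the PID of the current shell; the first lines typically show the mapping of
the shell's executable text (marked r-xp) followed by its writable data (marked rw-p).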

6.3 CREATING A PROCESS


There are three distinct phases in the creation of a process using three important system
calls:
1) fork()
2) exec()
3) wait()
FORK: A process in Linux/UNIX is created with the fork system call, which creates a copy
of the process that invokes it. When you enter a command at the prompt, the shell first
creates a copy of itself. The image is practically identical to the calling process except
that the forked process gets a new PID. The forking mechanism is responsible for the
multiplication of processes in the system.
Working
When you fork a process, the system creates another process running the same program
as the current process. In fact, the newly created process, called the child process, has all
the data, connections and so on of the parent process, and execution continues at the same
place. The single difference between the two is the return value from the fork() system
call, which returns the PID of the child to the parent and a value of 0 in the child. Therefore,
common practice is to examine the return value of the call in both processes and do
different things based on it.
EXAMPLE: Basic forking:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
int main(void)
{
pid_t pid;
pid = fork();
if (pid == 0)
{
printf("HELLO FROM THE CHILD PROCESS!\n");
}
else if (pid != -1)
{
printf("HELLO FROM THE PARENT. I FORKED PROCESS %d\n", pid);
}
else
{
printf("ERROR\n");
}
return 0;
}

This code will fork. If the return value is 0, it means that the current process is the child
created by the fork. If the value is not -1, it means that the fork was successful and the
return value indicates the PID of the new process. On the other hand, if the value is -1,
then the fork failed.
Output:
When you run this program, you will get two messages – one from the parent and one
from the child, because these are separate processes:
$ ./ch12-1
HELLO FROM THE PARENT. I FORKED PROCESS 267
HELLO FROM THE CHILD PROCESS!
$ ./ch12-1
HELLO FROM THE CHILD PROCESS!
HELLO FROM THE PARENT. I FORKED PROCESS 269
EXEC: The parent then overwrites the image it has just created with the copy of the
program that has to be executed. This is done with the exec system call and the parent is
said to exec this process. No additional process is created here; the existing program's
text and data areas are simply replaced (or overlap) with the ones of the new program.
This process has the same PID as the child that was just forked.

Working
Besides forking, you will often have a need to invoke other programs. This is done with
the exec family of functions. When you run exec, your process's current image is replaced
with that of the new program.
Example: In this example, the program running in the process is replaced by ls:

#include <stdio.h>
#include <unistd.h>
int main(void)
{
printf("Hello, this is a sample program\n");
execlp("ls", "ls", "/proc", NULL);
printf("This code runs after the exec call only if exec failed\n");
printf("You should never see this message unless exec failed\n");
return 0;
}

This is a fairly simple program. It starts out by displaying a message on the screen, and
then it calls one of the exec family of functions.
execlp: the 'l' in the name means that it takes an argument list passed to it, and the 'p'
means that it searches the PATH for the program.
Output: when you run this:
$ ./ch12-1
Hello, this is a sample program
(followed by the directory listing of /proc: process IDs such as 1, 114, 198, 250, 267, ...
and entries such as filesystems, meminfo, slabinfo and fs)

Details of exec(): The system provides you with many options for executing new
programs. The man pages list the following:

int execl(const char *path, const char *arg, ...);


int execlp(const char *file, const char *arg, ...);
int execle(const char *path, const char *arg, ..., char *const envp[]);
int execv(const char *path, char *const argv[]);
int execvp(const char *file, char *const argv[]);
int execve(const char *path, char *const argv[], char *const envp[]);

The above calls are prototyped in unistd.h.

p functions: execlp() and execvp() search the PATH for the file if it cannot be located
immediately.
l functions: the three l functions, execl(), execlp() and execle(), take a list of the arguments
for the program on the command line. After the last argument, you must specify the
special value NULL. For instance, you might use the following to invoke ls:
execl("/bin/ls", "/bin/ls", "-l", "/etc", NULL);
v functions: execv(), execvp() and execve() use a pointer to an array of strings for the
argument list. This is the same format as is passed into your program in argv in main().
The last item must be NULL.
e functions: execle() and execve() enable you to customize the specific environment
variables received by your child process.
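As a small illustrative sketch of the v-style calls (this example is not part of the original
text), the following program builds an argument vector and invokes ls with execv():

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Argument vector: argv[0] is conventionally the program name; the list ends with NULL */
    char *const argv[] = { "ls", "-l", "/etc", NULL };

    execv("/bin/ls", argv);

    /* Reached only if execv failed */
    perror("execv");
    return 1;
}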

WAIT: The parent then executes the wait system call to wait for the child process to
complete. When the child has completed execution, it sends a termination signal to the
parent. The parent is then free to continue with its other functions.
Working: The problem is that, when a process exits, its entry in the process table does not
completely go away. This is because the operating system is waiting for the parent process
to fetch some information about why the child process expired. This could include a
return value, a signal, or something else along those lines. A process whose program has
terminated but which still remains because its information was not yet collected is dubbed
a zombie process.
Example:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
int main(void)
{
pid_t pid;
pid = fork();
if (pid == 0)
{
printf("Hello from the child process!\n");
printf("The child is exiting now\n");
}
else if (pid != -1)
{
printf("Hello from the parent, pid %d\n", getpid());
printf("The parent has forked process %d\n", pid);
sleep(60);
printf("The parent is exiting now\n");
}
else
{
printf("There was an error with forking\n");
}
return 0;
}

There is now a sleep call in the parent to delay its exit for a minute so you can
examine the state of the system's process table in another window. When you run the
program:

$ ./ch12-3
Hello from the parent, pid 267
Hello from the child process!
The child is exiting now
The parent has forked process 269

Now, in a separate window or terminal, take the process ID of the child, in this case 269,
and find its entry with a command like this:
$ ps aux | grep 269 | grep -v grep
puneet     269  0.0  0.0      0     0 pts/0    Z    08:18   0:00
The state of the process is indicated as Z, that is, a zombie process.

Details of wait:
There are a number of variants of the wait functions in Linux, just as there are a number of
variants of the exec calls. Each call has its own special features and syntax. The various
wait functions are declared as follows:
1) pid_t wait(int *status);
2) pid_t waitpid(pid_t pid, int *status, int options);
3) pid_t wait3(int *status, int options, struct rusage *rusage);
4) pid_t wait4(pid_t pid, int *status, int options, struct rusage *rusage);

The first two calls require sys/types.h and sys/wait.h, and the last two also require
sys/resource.h. Each of these functions returns the PID of the process that exited, 0
if they were told to be non-blocking and no matching process was found, and -1 if there
was an error.
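As a minimal sketch (added here, not part of the text above) of how a parent reaps its child and so avoids leaving a zombie, the following program forks, execs ls in the child, and calls waitpid() in the parent to collect the child's exit status:

#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/types.h>
#include<sys/wait.h>
int main(void)
{
int status;
pid_t pid = fork();
if (pid == -1)
{
perror("fork");
return 1;
}
if (pid == 0)
{
/* child: replace this image with ls */
execlp("ls", "ls", "-l", NULL);
perror("execlp");   /* reached only if exec failed */
_exit(127);
}
/* parent: block until the child terminates, then report its status */
if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
printf("Child %d exited with status %d\n", pid, WEXITSTATUS(status));
return 0;
}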
6.4 PROCESS STATES
The state field of the process descriptor describes the current condition of the process.
Each process on the system is in exactly one of five different states. This value is
represented by one of five flags:
TASK_RUNNING The process is runnable; it is either currently running or on a run queue
waiting to run. This is the only possible state for a process executing in user-space; it can
also apply to a process in kernel-space that is actively running.
TASK_INTERRUPTIBLE The process is sleeping (that is, it is blocked), waiting for some
condition to exist. When this condition exists, the kernel sets the process's state to
TASK_RUNNING. The process also awakes prematurely and becomes runnable if it
receives a signal.
TASK_UNINTERRUPTIBLE This state is identical to TASK_INTERRUPTIBLE except
that it does not wake up and become runnable if it receives a signal. This is used in
situations where the process must wait without interruption or when the event is expected
to occur quite quickly. Because the task does not respond to signals in this state,
TASK_UNINTERRUPTIBLE is less often used than TASK_INTERRUPTIBLE.
TASK_ZOMBIE—The task has terminated, but its parent has not yet issued a wait4()
system call. The task's process descriptor must remain in case the parent wants to access
it. If the parent calls wait4(), the process descriptor is deallocated.
TASK_STOPPED—Process execution has stopped; the task is not running nor is it
eligible to run. This occurs if the task receives the SIGSTOP, SIGTSTP, SIGTTIN, or
SIGTTOU signal or if it receives any signal while it is being debugged.
6.5 PROCESS CREATION
A new process is created because an existing process makes an exact copy of itself. This
child process has the same environment as its parent, only the process ID number is
different. This procedure is called forking.

After the forking process, the address space of the child process is overwritten with the
new process data. This is done through an exec call to the system.

The fork-and-exec mechanism thus switches an old command with a new, while the
environment in which the new program is executed remains the same, including
configuration of input and output devices, environment variables and priority. This
mechanism is used to create all UNIX processes, so it also applies to the Linux operating
system. Even the first process, init, with process ID 1, is forked during the boot
procedure in the so-called bootstrapping procedure.

There are a couple of cases in which init becomes the parent of a process, while the
process was not started by init, as we already saw in the pstree example. Many
programs, for instance, daemonize their child processes, so they can keep on running when
the parent stops or is being stopped. A window manager is a typical example; it starts an
xterm process that generates a shell that accepts commands. The window manager then
denies any further responsibility and passes the child process to init. Using this
mechanism, it is possible to change window managers without interrupting running
applications.

Every now and then things go wrong, even in good families. In an exceptional case, a
process might finish while the parent does not wait for the completion of this process.
Such an unburied process is called a zombie process.

6.6 ENDING PROCESSES


When a process ends normally (it is not killed or otherwise unexpectedly interrupted), the
program returns its exit status to the parent. This exit status is a number returned by the
program providing the results of the program's execution. The system of returning
information upon executing a job has its origin in the C programming language in which
UNIX has been written.

The return codes can then be interpreted by the parent, or in scripts. The values of the
return codes are program-specific. This information can usually be found in the man
pages of the specified program; for example, the grep command returns 1 if no matches
are found, upon which a message along the lines of "No files found" can be printed. Another
example is the Bash built-in command true, which does nothing except return an exit
status of 0, meaning success.
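As a small added illustration, the special shell parameter $? holds the exit status of the last command, so this behaviour can be observed directly (assuming /etc/passwd contains a root entry):

$ grep root /etc/passwd > /dev/null; echo $?
0
$ grep no-such-user /etc/passwd > /dev/null; echo $?
1
$ true; echo $?
0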
6.7 EXPLORING THE PROCESS FILE SYSTEM
Linux provides us with a mechanism called the proc file system through which we can
get all sorts of information from the kernel, at runtime.

For every running process, /proc contains a subdirectory named after its process ID (PID). It also provides an interface to pass control
options to the kernel at runtime by writing into specific files that reside inside the /proc
directory. The files inside /proc are created on the fly (which is exactly why it is called a virtual
file system) and can be viewed using tools such as cat, less, tail, head, etc. Let us now
explore the proc file system.

Hardware status
Some files in the /proc directory give information about the hardware. These files can be
used to check your hardware configuration.
[root@localhost ~]# cat /proc/meminfo

The /proc/meminfo file contains information regarding the memory of the system. The
MemTotal field gives the total amount of physical memory available on the system.
MemTotal minus MemFree gives the amount of physical memory currently in use.
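As a quick sketch (not from the text above), the same calculation can be done from the shell with awk; the field names are exactly those that appear in /proc/meminfo:

[root@localhost ~]# awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} END {print "In use:", t-f, "kB"}' /proc/meminfo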
[root@localhost ~]# cat /proc/cpuinfo

The above file starts with the processor field, which gives the processor number, followed
by specific details about that processor. So for a dual-core processor the whole block is
displayed twice: first for processor 0 and then for processor 1. We also get other handy
information such as the vendor ID, model name, clock speed and cache size fields. The
fpu field indicates whether a floating-point unit is present in the processor.
[root@localhost ~]# cat /proc/devices
The /proc/devices file lists the registered character and block device drivers in the
system. The first column contains the major number of the device driver and the second
column contains the name of the device.

The /proc/ide/ subdirectory contains information about all the kernel-registered IDE
devices. There is one directory for each IDE controller and a soft link for each IDE
device pointing to the device directory.
[root@localhost ~]# cd /proc/ide
[root@localhost ~]# ls -al
******************************

The drivers file gives the details of the IDE drivers along with their versions. Each IDE controller
subdirectory (ide*/) contains information about the IDE channel in the channel file,
the chipset of the IDE controller in the model file and the name of the partner controller in the
mate file, plus a subdirectory (/proc/ide/ide*/hd*) for each IDE device. This subdirectory
can be looked up for device-specific information such as the type of media, model
number, device settings and the unique identifier of the device.

THE /PROC FILES DESCRIBING THE SYSTEM STATE


/proc/stat         Gives the statistics of the system.
/proc/loadavg      Average load on the system over the last 1, 5 and 15 minutes.
/proc/uptime       Provides the system uptime.
/proc/version      The version of Linux and the gcc version used to compile it.
/proc/filesystems  Supported file systems.
/proc/cmdline      Kernel boot parameters.
/proc/diskstats    Shows the disk I/O statistics, where column 1 is the major number, column 2 the minor number and column 3 the device.
/proc/partitions   Shows the disk partitions.
/proc/mounts       Shows the mounted partitions.
/proc/kmsg         Messages output by the kernel.

The files listed above will provide you with a great deal of information regarding the
hardware.
System status
Apart from the hardware information, the /proc can be probed to know about the system
status.
The /proc/kcore file is the snapshot of the kernel in physical memory. Exploring kcore
provides sound knowledge about the processes running on the system. Listing the kcore
file under /proc shows that its size is not 0.
[root@localhost ~]# cd /proc
[root@localhost ~]# ls -lh
***************************
The size of the kcore file reflects the amount of memory addressed by the kernel. The file can only be accessed
by the root user. To view the kcore file, we pipe it through the strings command:
[root@localhost ~]# head -n 100 /proc/kcore | strings
Core
Core
vmlinux
root=/dev/hda5 vga= 0x317 resume =/dev /hda6 splash=silent showts
core
head
-terminal
20061115(prerelease) (SUSE Linux )
GCC: (GNU) 4.1.2 20061115 (prerelease)(SUSE Linux)
GCC: (GNU) 4.1.2 20061115 ( prerelease) (SUSE Linux)
<<Output truncated>>

The output contains details about the kernel boot parameters (/proc/cmdline) and the
release version of gcc. /proc/kcore can also be used to view details such as the files used
when a user logged in to the bash shell.
[root@localhost ~]# head -n 100000 /proc/kcore | strings | grep bash
Lubumbashi
Rbash
/ etc/ bash.bashrc
~/.bash_profile
~/.bash_login
~/.bashrc
Use the ‘bashbug’ command to report bugs.

/usr/local/lib/bashdb/bashdb-main.inc
~/.bash_history
Cannot allocate new file descriptor for bash input from fd %d
Save_bash_input : buffer already exists for new fd %d
GNU bash, version %( %s)
<< Output truncated>>
Note: if, while exploring kcore, your terminal becomes unreadable, you need to
stop the process (press Ctrl+C) and use the 'reset' command.

Process status:
The /proc/<PID>/ directories are numbered subdirectories corresponding to actual process IDs.
Process IDs can be viewed by executing the ps -ae command from a shell. Process details can
be obtained by looking at the associated files in the /proc/<PID>/ directory. To view the
details of your bash shell, execute the following commands:

[root@localhost ~]# cd /proc

[root@localhost ~]# ls -l | grep $$

Files under the /proc/<PID>/ Directory


/proc/<PID>/cmdline Command line arguments
/proc/<PID>/cwd Current working directory
/proc/<PID>/environ Associated environment variables
/proc/<PID>/exe Path of executable file of the process
/proc/<PID>/maps Memory maps to the executables and libraries
/proc/<PID>/mounts Mounted file systems
/proc/<PID>/root Root directory for the process
/proc/<PID>/stat Process status
/proc/<PID>/status Process status in the readable form

The /proc/<PID>/fd/ subdirectory contains the file descriptors used by the process.
Execute the cat command to start a process in one of the terminals, then list the
/proc/<PID>/fd directory from another shell prompt:
[root@localhost ~]# ps -ae
***********************************
[root@localhost ~]# ls -l /proc/<PID>/fd
*************************
The entries 0, 1 and 2 are the links for the standard input, output and error devices.
Tips: The /proc/sys/kernel directory accounts for the general behaviour of the kernel and
contains information related to it. We can use the files in this directory for
administrative purposes, though only the root user can modify them by using the
echo command. The contents of the hostname, ostype and osrelease files are self-explanatory.
[root@localhost ~]# hostname
ram
[root@localhost ~]# echo "mohan" > /proc/sys/kernel/hostname
[root@localhost ~]# hostname
mohan

The /proc/sys/kernel/panic file describes the action to be taken by the kernel on a kernel
panic. A 0 in this file indicates that there will be no auto reboot after a kernel panic,
whereas a non-zero number represents the time, in seconds, before an auto reboot. The
/proc/sys/kernel/pid_max file contains an integer defining the maximum number of PIDs
that can be issued by the kernel; administrators can thus restrict the number of processes
running on the system by echoing the required number into this file. The type of shutdown
is defined by the /proc/sys/kernel/ctrl-alt-del file: if it is 1, the system will shut down
without unmounting the file systems (in other words, an improper shutdown); a 0 indicates
a clean shutdown. The entry in the /proc/sys/kernel/pty/nr file gives the number of pts
terminals in use, while the /proc/sys/kernel/pty/max file defines the maximum number of
pts terminals.
The /proc/sys/net directory contains network-related information. The network configuration
can be changed at runtime by echoing into the corresponding file. For example, to
make your machine invisible on the network, type the following commands:

[root@localhost ~]# echo “1” > /proc/sys/net/ipv4/icmp_echo_ignore_all


[root@localhost ~]# echo “1” > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
[root@localhost ~]# echo “0” > /proc/sys/net/ipv4/conf/all/accept_redirects

Or, in order to make your Linux box forward IP packets and act as a router, you can
execute the following command
[root@localhost ~]# echo “1” > /proc/sys/net/ipv4/ip_forward
The packets received by your computer can be routed back to the originating system
(which may be outside your network) and thus leak your network information to
attackers. In order to safeguard your computer from such attacks, execute the following
command:

[root@localhost ~]# echo “0” > /proc/sys/net/ipv4/conf/all/accept_source_route

To prevent the kernel log files from becoming bulky with bogus ICMP error responses,
execute the following command:

[root@localhost ~]# echo “1” > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses

To get information on the default Time-To-Live (TTL) value of packets, execute the following
command:
[root@localhost ~]# cat /proc/sys/net/ipv4/ip_default_ttl

The performance of a system with lots of TCP connections can be increased by lowering
the number of retries before closing a connection; execute the following command:
[root@localhost ~]# echo “0” > /proc/sys/net/ipv4/tcp_orphan_retries

To enable source address verification for all the IP addresses on your Linux machine,
execute the following command:
[root@localhost ~]# echo "1" > /proc/sys/net/ipv4/conf/all/rp_filter
This is also known as reverse path filtering; the above example echoes 1 into the rp_filter
file. It is considered a safeguard against IP spoofing attacks. Note, however, that the kernel
may now reject valid packets if you have multiple network interfaces or multiple IP addresses
on a single interface.

The /proc/acpi/ directory provides ACPI (Advanced Configuration and Power Interface)
related information. To check whether ACPI is activated, execute:

[root@localhost ~]# dmesg | grep ACPI


*************
The /proc/acpi/thermal_zone/*/temperature file displays the temperature in degrees Celsius. The
value in /proc/acpi/thermal_zone/*/cooling_mode can be either passive or active: CPU
performance decreases in passive cooling mode, while power consumption increases in
active cooling mode. The /proc/acpi/thermal_zone/*/trip_points file defines the temperature
limits and the policies applied on reaching the defined temperatures. The policies are
active (start cooling devices such as fans), passive (lower the power consumption) and
critical (system shutdown). These policies are defined in the /proc/acpi/dsdt file.
Files and directories in the /proc/acpi/ directory
info      Version number.
alarm     Wake-up time for the system from the sleep state.
dsdt      Differentiated System Description Table; can be viewed with the acpidump command.
sleep     Supported sleep states.
wakeup    Sleep state and status of motherboard devices.
button/   Information about the power, sleep and lid buttons.
battery/  Information about battery states.

Note: Changes made in the /proc/sys/ directory are not permanent; the echoed values are lost
on a reboot. To make such changes permanent, we have to add a
'variable = value' pair to the /etc/sysctl.conf file. For example, to enable packet
forwarding permanently on your system, make an entry like 'net.ipv4.ip_forward = 1' in
your /etc/sysctl.conf file, then execute 'sysctl -p' as root to load the values from the
updated file. But how can we predict the variable names?
Take the file's path, strip the leading /proc/sys/ and replace every remaining "/" with ".".
So the /proc/sys/kernel/hostname file has the variable name kernel.hostname, and the
/proc/sys/net/ipv4/ip_default_ttl file has the variable name net.ipv4.ip_default_ttl for the
/etc/sysctl.conf file.
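For illustration, a fragment of /etc/sysctl.conf applying some of the settings discussed above might look like the following sketch (the values shown are examples, not recommendations):

# /etc/sysctl.conf (excerpt)
net.ipv4.ip_forward = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.rp_filter = 1

[root@localhost ~]# sysctl -p        # reload the values from /etc/sysctl.conf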

6.8 CRON
cron is the memory-resident scheduler daemon that can execute commands and scripts at
regular intervals. The jobs it runs are listed in a crontab file, which is edited using the
crontab utility; the original cron utility was written by Brian Kernighan. Cron stores its
entries in the crontab (cron table) file, generally located in your /etc directory. In addition,
each user on the system can have their own crontab, which is stored under
/var/spool/cron/.

Below is a table of what each field does.

Field  Meaning
1      Minute (0-59)
2      Hour (0-23)
3      Day of month (1-31)
4      Month (1-12, or Jan, Feb, etc.)
5      Day of week (0-6; 0 = Sunday, 1 = Monday, etc., or Sun, Mon, etc.)
6      User that the command will run as (system crontab /etc/crontab only)
7      Command to execute

When cron is used:


• If you run a website that deals with a large number of images and you want to
create thumbnails automatically during a given time period every day or week.
• If you want to keep your backups synchronized easily, without much pain and effort.
• Most importantly, if you want to run file downloads and torrents at a specified time.
• If scheduling of automatic system updates is required.

How to create a scheduled job


Open the terminal and enter the following command:
#vi cronjobs
This will invoke the vi text editor and create a file called cronjobs. Enter the
following line in the file:

00 * * * * date >>/tmp/hourlytime.log
Save and exit. Now, back in the terminal, run the following commands:
#crontab cronjobs
#crontab -l

This will now list the following:


00 * * * * date >>/tmp/hourlytime.log

Environment settings
DISPLAY=:0
00 10 * * * /usr/bin/gedit
A DISPLAY environment variable line is recommended in your crontab entry if you are
scheduling applications that need X (a GUI); otherwise, they will not work.

Example:
Take a look at the following sample cron job entry:
00 02 * * * /usr/bin/ktorrent
What does the entry mean?
• 00 – 0th minute
• 02 – 2nd hour
• * - any day of month
• * - any month
• * - any weekday

Execute /usr/bin/ktorrent at the 0th minute of 2 a.m., on any day of the month, any month,
any day of the week.

# Open gedit at 12 noon
00 12 * * * gedit /home/staff.doc

To start downloads at 7 a.m. and stop them at 10 a.m., use entries like the following:
00 07 * * * cd /home/slynux/distros/; wget -c http://example.com/ubuntu.iso
00 10 * * * killall -9 wget

crontab commands
To remove crontab for a specific user:
[root@localhost ~]# crontab -u username -r

To know what tasks cron shall be executing you can use the following command
[root@localhost ~]# crontab -l
This command shall provide a detailed list of all the jobs that cron shall execute. It would
basically be showing you your crontab file. So you need to know the meaning of the 6
fields to make sense of the output

To remove the current crontab you can use the following command
[root@localhost ~]# crontab -r
This shall remove whatever entries you added to cron using your own crontab file.

To edit the crontab to enter or remove tasks type the following command
[root@localhost ~]# crontab -e

cron tips
To run a task every 5 hours
If you want to do the above then in the crontab file entry, in the hour field you can enter a
Step Value as follows '*/5' (without the inverted commas). This would cause the task to
be executed every 5 hours.

To run a task during the first 10 days and the last 10 days of the month, you can use
ranges: enter '1-10,21-30' (without the quotes) in the day-of-month field.

To run a task every alternate day during the first 15 days of the month, you can enter '1-15/2'
in the day-of-month field. This would run the task on days 1, 3, 5, 7, 9, 11, 13 and 15
of the month.
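Put together as crontab lines, these tips might look like the following sketch (the script path /home/teji/backup.sh is only an example):

# every 5 hours, on the hour
0 */5 * * * /home/teji/backup.sh
# at 02:30 during the first and last ten days of the month
30 02 1-10,21-30 * * /home/teji/backup.sh
# at 01:00 on every alternate day during the first 15 days of the month
00 01 1-15/2 * * /home/teji/backup.sh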

If you want to enter comments, all you have to do is add a # at the beginning of the
line. Remember that comments are not allowed on the same line as a cron command, so
they must be entered on separate lines starting with a # (the first character should be #).
Chapter 7
Shell programming

7.0 Shell programming


Shell
The shell provides you with an interface to the UNIX system. It gathers input from you
and executes programs based on that input. When a program finishes executing, it
displays that program's output.
A shell is an environment in which we can run our commands, programs, and shell
scripts. There are different flavors of shells, just as there are different flavors of operating
systems. Each flavor of shell has its own set of recognized commands and functions.

Shell Prompt
The prompt, $, which is called command prompt, is issued by the shell. While the prompt
is displayed, you can type a command.
The shell reads your input after you press Enter. It determines the command you want
executed by looking at the first word of your input. A word is an unbroken set of
characters. Spaces and tabs separate words.
Following is a simple example of the date command, which displays the current date and time:
$date
You can customize your command prompt using environment variable PS1 explained in
Environment tutorial.

Shell Types
In UNIX there are two major types of shells:
1. The Bourne shell. If you are using a Bourne-type shell, the default prompt is the $
character.
2. The C shell. If you are using a C-type shell, the default prompt is the % character.
There are again various subcategories for Bourne Shell which are listed as follows:
• Bourne shell ( sh)
• Korn shell ( ksh)
• Bourne Again shell ( bash)
• POSIX shell ( sh)
The different C-type shells follow:
• C shell ( csh)
• TENEX/TOPS C shell ( tcsh)
The original UNIX shell was written in the mid-1970s by Stephen R. Bourne while he
was at AT&T Bell Labs in New Jersey.
The Bourne shell was the first shell to appear on UNIX systems, thus it is referred to as
"the shell".
The Bourne shell is usually installed as /bin/sh on most versions of UNIX. For this
reason, it is the shell of choice for writing scripts to use on several different versions of
UNIX.
Shell Scripts
The basic concept of a shell script is a list of commands, which are listed in the order of
execution. A good shell script will have comments, preceded by a pound sign, #,
describing the steps.
There are conditional tests, such as value A is greater than value B, loops allowing us to
go through massive amounts of data, files to read and store data, and variables to read and
store data, and the script may include functions.
Shell scripts and functions are both interpreted. This means they are not compiled.
We are going to write many scripts in the next several tutorials. Each will be a simple
text file in which we put all the commands and the other required
constructs that tell the shell environment what to do and when to do it.

Example Script

Assume we create a test.sh script. Note all the scripts would have .sh extension. Before
you add anything else to your script, you need to alert the system that a shell script is
being started. This is done using the shebang construct. For example:
#!/bin/sh
This tells the system that the commands that follow are to be executed by the Bourne
shell. It's called a shebang because the # symbol is called a hash, and the ! symbol is
called a bang.
To create a script containing these commands, you put the shebang line first and then add
the commands:
#!/bin/bash
ls -l
mkdir teji

Uses of Shell Script


1) A shell script can take input from the user or from a file and output it on the screen.
2) Useful to create our own commands.
3) Save lots of time.
4) To automate day-to-day tasks.
5) System Administration part can be also automated.
6) Whenever you find yourself doing the same task over and over again you should
use shell scripting, i.e., repetitive task automation.
7) Since scripts are well tested, the chances of errors are reduced while configuring
services or system administration tasks such as adding new users.
Advantages
1. Easy to use.
2. Quick start and interactive debugging.
3. Time Saving.
4. System Admin task automation.
5. Shell scripts can execute without any additional effort on nearly any
modern UNIX / Linux / BSD / Mac OS X operating system, as they are written in an
interpreted language.
Disadvantages
1. Compatibility problems between different platforms.
2. Slow execution speed.
3. A new process launched for almost every shell command executed.

How to create and execute a shell script


1. Use a text editor such as vi or vim. Insert required Linux commands and logic in
the file.
2. Save and close the file (press Esc, then type :wq).
3. Make the script executable (use chmod +x scriptname.sh or chmod 0700
scriptname.sh) and run the shell script using ./scriptname.sh.
Alternatively, use sh scriptname.sh to execute the script, or run it without
setting the execute permission as follows:
bash script.sh
. script.sh

UNIX/Linux Variables
Variables are a way of passing information from the shell to programs when you run
them. Programs look "in the environment" for particular variables and if they are found
will use the values stored. Some are set by the system, others by you, yet others by the
shell, or any program that loads another program.
Standard UNIX variables are split into two categories, environment variables and shell
variables. In broad terms, shell variables apply only to the current instance of the shell
and are used to set short-term working conditions; environment variables have a farther
reaching significance, and those set at login are valid for the duration of the session. By
convention, environment variables have UPPER CASE names and shell variables have lower
case names. Variables are used to store data and configuration options. There are two types of variables.
System variables: Created and maintained by the Linux bash shell. System variables are
defined in CAPITAL LETTERS. You can configure aspects of the shell by modifying
system variables such as PS1, PATH, LANG, HISTSIZE and DISPLAY.
Example: To view all System Variables
Type the following command at terminal:
[root@localhost]#set
OR
[root@localhost]#env
OR
[root@localhost]#printenv
Variable Types:
When a shell is running, three main types of variables are present:
• Local Variables: A local variable is a variable that is present within the current
instance of the shell. It is not available to programs that are started by the shell.
They are set at command prompt.
• Environment Variables: An environment variable is a variable that is available
to any child process of the shell. Some programs need environment variables in
order to function correctly. Usually a shell script defines only those environment
variables that are needed by the programs that it runs.
• Shell Variables: A shell variable is a special variable that is set by the shell and is
required by the shell in order to function correctly. Some of these variables are
environment variables whereas others are local variables.

User defined variable


Created and maintained by user. Creating and setting variables within a script is fairly
simple. Use the following syntax:
varName=assignValue
assignValue is assigned to the given varName and must appear on the right side of the =
(equals) sign. If assignValue is not given, the variable is assigned the null string.
Example
Define your home directory:
homeDir="/home/teji"
echo "$homeDir "
Set file path:
path="/home/teji/emp.doc"
echo "Input file $path"
Store current date:
NOW=$(date)
echo $NOW
Variable defining rules
1) A variable name must begin with a letter or an underscore (_),
followed by zero or more letters, digits or underscores.
Examples:
HomeDir
Path_name
num
2) Do not put spaces on either side of the equal sign when assigning value to variable.
Valid variable declaration:
num=80
Invalid variable declaration:
num =10
num= 10
num = 10
3) Variable names are case-sensitive, so the following variables are all different.
num=60
Num=55
NUM=27
nUM=28
NuM=56
echo "$num" # print 60
echo "$Num" # print 55
echo "$NUM" # print 27
echo "$nUM" # print 28
echo "$NuM" # print 56
4) User can define a NULL variable as follows
var=
var=""
Try to print its value
echo $var
5) Special characters are not allowed in variable names.
*num=80 #invalid
Out?put=/home/teji/emp.txt #invalid
_GREP=/usr/bin/grep #valid
echo "$_GREP"

variable examples
The following examples are valid variable names:
_teji
var_a
VAR_1
Var_2
Following are the examples of invalid variable names:
3_val
-VAL
VAL1-VAL2
VAL_A!
The reason you cannot use other characters such as !,*, or - is that these characters have a
special meaning for the shell.

Defining Variables
Variables are defined as follows:
variable_name=variable_value
For example:
EMPNAME="POOJA SHARMA"
The above example defines the variable EMPNAME and assigns it the value "POOJA SHARMA".
Variables of this type are called scalar variables. A scalar variable can hold only one
value at a time.
The shell enables you to store any value you want in a variable. For example:
VAL1="VIPUL GUPTA"
VAL2=500

Accessing Values
To access the value stored in a variable, prefix its name with the dollar sign ( $):
For example, the following script accesses the value of the defined variable EMPNAME and prints it on STDOUT:
#!/bin/sh
EMPNAME="POOJA SHARMA"
echo $EMPNAME
This would produce the following output:
POOJA SHARMA

Read-only Variables
The shell provides a way to mark variables as read-only by using the readonly command.
After a variable is marked read-only, its value cannot be changed.
For example, the following script would give an error when trying to change the value of EMPNAME:
#!/bin/sh
EMPNAME="VIPUL GUPTA"
readonly EMPNAME
EMPNAME="POOJA SHARMA"
This would produce the following result:
/bin/sh: EMPNAME: This variable is read only.

Unsetting Variables
Unsetting or deleting a variable tells the shell to remove the variable from the list of
variables that it tracks. Once you unset a variable, you would not be able to access stored
value in the variable.
Following is the syntax to unset a defined variable using the unset command:
unset variable_name
Above command would unset the value of a defined variable. Here is a simple example:
#!/bin/sh
EMPNAME=" VIPUL GUPTA "
unset EMPNAME
EMPNAME="POOJA SHARMA"
echo $EMPNAME
The above example would not print anything. You cannot use the unset command to unset
variables that are marked readonly.

Environment Variables
An example of an environment variable is the OSTYPE variable. The value of this is the
current operating system you are using. Type
% echo $OSTYPE
More examples of environment variables are
USER (your login name)
HOME (the path name of your home directory)
HOST (the name of the computer you are using)
ARCH (the architecture of the computers processor)
DISPLAY (the name of the computer screen to display X windows)
PRINTER (the default printer to send print jobs)
PATH (the directories the shell should search to find a command)
Finding out the current values of these variables.
ENVIRONMENT variables are set using the setenv command, displayed using the
printenv or env commands, and unset using the unsetenv command.
To show all values of these variables, type
% printenv | less

Shell Variables
An example of a shell variable is the history variable. Its value is how many shell
commands to save, allowing the user to scroll back through all the commands they have
previously entered. Type
% echo $history
More examples of shell variables are
cwd (your current working directory)
home (the path name of your home directory)
path (the directories the shell should search to find a command)
prompt (the text string used to prompt for interactive commands)
shell (your login shell)
Finding out the current values of these variables.
SHELL variables are both set and displayed using the set command. They can be unset
by using the unset command.
To show all values of these variables, type
% set | less
So what is the difference between PATH and path?
In general, environment and shell variables that have the same name (apart from the case)
are distinct and independent, except for possibly having the same initial values. There
are, however, exceptions.
Each time the shell variables home, user and term are changed, the corresponding
environment variables HOME, USER and TERM receive the same values. However,
altering the environment variables has no effect on the corresponding shell variables.
PATH and path specify directories to search for commands and programs. Both variables
always represent the same directory list.
Commonly Used Shell Variables
System Variable   Meaning
BASH_VERSION      Holds the version of this instance of bash.
HOSTNAME          The name of your computer.
CDPATH            The search path for the cd command.
HISTFILE          The name of the file in which command history is saved.
HISTFILESIZE      The maximum number of lines contained in the history file.
HISTSIZE          The number of commands to remember in the command history. The default value is 500.
HOME              The home directory of the current user.
IFS               The Internal Field Separator used for word splitting after expansion and to split lines into words with the read builtin command. The default value is <space><tab><newline>.
LANG              Used to determine the locale category for any category not specifically selected with a variable starting with LC_.
PATH              The search path for commands: a colon-separated list of directories in which the shell looks for commands.
PS1               Your prompt settings.
TMOUT             The default timeout for the read builtin command. In an interactive shell, the value is also interpreted as the number of seconds to wait for input after issuing the prompt; if no input is provided, the user is logged out.
TERM              Your login terminal type.
SHELL             The path to the login shell.
DISPLAY           The X display name.
EDITOR            The name of the default text editor.

Getting full list of environment variables


In case you feel like exploring, you can use the env command to get a full list of
currently set environment variables (the output in this example is abridged):
[root@localhost ~]# env
TERM=xterm
SHELL=/bin/bash
USER=teji
MAIL=/var/mail/teji
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/game
PWD=/home/greys
EDITOR=vim
..

Using and setting variables


Each time you login to a UNIX host, the system looks in your home directory for
initialization files. Information in these files is used to set up your working environment.
The C and TC shells use two files called .login and .cshrc (note that both file names
begin with a dot).
At login the C shell first reads .cshrc followed by .login
.login is to set conditions which will apply to the whole session and to perform actions
that are relevant only at login.
.cshrc is used to set conditions and perform actions specific to the shell and to each
invocation of it.
The guidelines are to set ENVIRONMENT variables in the .login file and SHELL
variables in the .cshrc file.
WARNING: NEVER put commands that run graphical displays (e.g. a web browser) in
your .cshrc or .login file.
Setting shell variables in the .cshrc file
For example, to change the number of shell commands saved in the history list, you need
to set the shell variable history. It is set to 100 by default, but you can increase this if you
wish.
% set history = 200
Check this has worked by typing
% echo $history
However, this has only set the variable for the lifetime of the current shell. If you open a
new xterm window, it will only have the default history value set. To PERMANENTLY
set the value of history, you will need to add the set command to the .cshrc file.
First open the .cshrc file in a text editor. An easy, user-friendly editor to use is nedit.
% nedit ~/.cshrc
Add the following line AFTER the list of other commands.
set history = 200
Save the file and force the shell to reread its .cshrc file by using the shell source
command.
% source .cshrc
Check this has worked by typing
% echo $history

Setting the path


When you type a command, your path (or PATH) variable defines in which directories
the shell will look to find the command you typed. If the system returns a message saying
"command: Command not found", this indicates that either the command doesn't exist at
all on the system or it is simply not in your path.
For example, to run units, you either need to specify its full path
(~/tejidata/bin/ulp), or you need to have the directory ~/tejidata/bin/ in your path.
You can add it to the end of your existing path (the $path represents this) by issuing the
command:
% set path = ($path ~/tejidata/bin/ulp)
Test that this worked by trying to run units in any directory other than where units is
actually located.
% cd
% units
To add this path PERMANENTLY, add the following line to your .cshrc. AFTER the list
of other commands.
set path = ($path ~/tejidata/bin/ulp)

Shell comments
Single line comments: A word or line beginning with # causes that word and all
remaining characters on that line to be ignored; such lines are known as comments. These
lines are not executable and the shell simply ignores them. Comments make source code
easier to understand and serve as help for users and other system administrators, helping
them understand the code and its logic and supporting later modification of the script.
Example:
#!/bin/bash
#This is my first shell script
During execution, all lines beginning with # are treated as comments and are not executed.

Multiple line comments: when detailed help or a comment spanning more than one line is
required, it is better to use a multi-line comment.
Example:
<<COMMENT1
This shell script is used to add and delete users.
Also provide support to set various other properties.
For more information login to www.nutansolution.com
COMMENT1

Debugging Shell Scripts


To debug a script run shell script with -x option from the command line.
bash -x script-name
OR
bash -xv script-name
Example:
[root@localhost]#vi sample.sh
#!/bin/bash -x
echo "Hello Dear ${LOGNAME}"
echo "Today is $(date)"
echo "calendar " cal 10 2010
(Save and quit)
[root@localhost]#bash -x sample.sh
Use of set builtin command
Bash shell offers debugging options which can be turn on or off using set command.
▪ set -x : Display commands and their arguments as they are executed.
▪ set -v : Display shell input lines as they are read.
▪ set -n : Read commands but do not execute them. This may be used to check a
shell script for syntax errors.
#!/bin/bash
### Turn on debug mode ###
set -x

# Run shell commands


echo "Hello Dear ${LOGNAME}"
echo "Today is $(date)"
echo "calendar " cal 10 2010
### Turn OFF debug mode ###
set +x
echo and read
The backslash-escaped characters recognized by echo:
\c suppress the trailing newline
\n newline
\r carriage return
\t tab
read
Syntax: read variable1 [variable2 ...]
Reads one line of standard input and assigns each word to the corresponding variable,
with any leftover words assigned to the last variable.
If only one variable is specified, the entire line is assigned to that variable.
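A short sketch of this behaviour (the variable names are arbitrary):

$ read first second rest
one two three four
$ echo "$first | $second | $rest"
one | two | three four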

test
The test command is a shell built-in. It evaluates an expression and returns a condition
code indicating whether the expression is true (0) or false (non-zero).
Syntax: test expression

Expression criteria:
Logical AND operator to separate two criteria: -a
Logical OR operator to separate two criteria: -o
Negate any criterion: !
Group criteria with parentheses
Separate each element with a SPACE
Test Criteria
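A small sketch combining these criteria (the value of x and the file names are illustrative):

$ x=5
$ test $x -gt 0 -a $x -lt 10 && echo "x is a single-digit positive number"
x is a single-digit positive number
$ test ! -e /tmp/nosuchfile -o -d /tmp && echo "at least one criterion holds"
at least one criterion holds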
Shell Special Variables
Variable  Description
$0        The filename of the current script.
$n        These variables correspond to the arguments with which a script was invoked. Here n is a positive decimal number corresponding to the position of an argument (the first argument is $1, the second argument is $2, and so on).
$#        The number of arguments supplied to a script.
$*        All the arguments are double quoted. If a script receives two arguments, $* is equivalent to $1 $2.
$@        All the arguments are individually double quoted. If a script receives two arguments, $@ is equivalent to $1 $2.
$?        The exit status of the last command executed.
$$        The process number of the current shell. For shell scripts, this is the process ID under which they are executing.
$!        The process number of the last background command.
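A tiny added sketch showing several of these parameters in action (save it, for example, as args.sh):

#!/bin/sh
# args.sh - illustrate the special shell parameters
echo "Script name : $0"
echo "First argument : $1"
echo "Number of arguments : $#"
echo "All arguments : $@"
false
echo "Exit status of false : $?"
echo "PID of this shell : $$"

Running sh args.sh one two prints the script name, the first argument, the count 2, both arguments, the exit status 1 from false, and the process ID of the shell executing the script.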

Shell Operators
Arithmetic Operators
Suppose, for the examples in the table, variable x holds 5 and variable y holds 10.
Operator  Description                                                                                    Example
+         Addition - adds values on either side of the operator                                          `expr $x + $y` will give 15
-         Subtraction - subtracts the right hand operand from the left hand operand                      `expr $x - $y` will give -5
*         Multiplication - multiplies values on either side of the operator                              `expr $x \* $y` will give 50
/         Division - divides the left hand operand by the right hand operand                             `expr $y / $x` will give 2
%         Modulus - divides the left hand operand by the right hand operand and returns the remainder    `expr $y % $x` will give 0
=         Assignment - assigns the right operand to the left operand                                     z=$y would assign the value of y to z
==        Equality - compares two numbers; returns true if both are the same                             [ $x == $y ] would return false.
!=        Inequality - compares two numbers; returns true if they are different                          [ $x != $y ] would return true.
Relational Operators
Suppose, for the examples in the table, variable x holds 5 and variable y holds 10.
Operator  Description                                                                                              Example
-eq       Checks if the values of the two operands are equal; if yes, the condition becomes true.                  [ $x -eq $y ] is false.
-ne       Checks if the values of the two operands are not equal; if they differ, the condition becomes true.      [ $x -ne $y ] is true.
-gt       Checks if the value of the left operand is greater than the value of the right operand.                  [ $x -gt $y ] is false.
-lt       Checks if the value of the left operand is less than the value of the right operand.                     [ $x -lt $y ] is true.
-ge       Checks if the value of the left operand is greater than or equal to the value of the right operand.      [ $x -ge $y ] is false.
-le       Checks if the value of the left operand is less than or equal to the value of the right operand.         [ $x -le $y ] is true.

String Operators:
Suppose, for the examples in the table, variable x holds "teji" and variable y holds "sema".
Operator  Description                                                                                          Example
=         Checks if the values of the two operands are equal; if yes, the condition becomes true.              [ $x = $y ] is false.
!=        Checks if the values of the two operands are not equal; if they differ, the condition becomes true.  [ $x != $y ] is true.
-z        Checks if the given string operand has zero length; if so, it returns true.                          [ -z $x ] is false.
-n        Checks if the given string operand has non-zero length; if so, it returns true.                      [ -n $x ] is true.

Boolean Operators

Suppose, for the examples in the table, variable x holds 5 and variable y holds 10.
Operator  Description                                                                   Example
!         Logical negation. This inverts a true condition into false and vice versa.    [ ! false ] is true.
-o        Logical OR. If one of the operands is true, the condition becomes true.       [ $x -lt 10 -o $y -gt 500 ] is true.
-a        Logical AND. If both operands are true, the condition becomes true.           [ $x -lt 10 -a $y -gt 500 ] is false.
File Test Operators
Assume a variable file holds the name of an existing file "check" whose size is 50 bytes and which has read, write and execute permissions.
Operator  Description                                                                           Example
-b file   Checks if the file is a block special file; if yes, the condition becomes true.       [ -b $file ] is false.
-c file   Checks if the file is a character special file; if yes, the condition becomes true.   [ -c $file ] is false.
-d file   Checks if the file is a directory; if yes, the condition becomes true.                [ -d $file ] is false.
-f file   Checks if the file is an ordinary file, as opposed to a directory or special file.    [ -f $file ] is true.
-g file   Checks if the file has its set-group-ID (SGID) bit set.                               [ -g $file ] is false.
-k file   Checks if the file has its sticky bit set.                                            [ -k $file ] is false.
-p file   Checks if the file is a named pipe.                                                   [ -p $file ] is false.
-t file   Checks if the file descriptor is open and associated with a terminal.                 [ -t $file ] is false.
-u file   Checks if the file has its set-user-ID (SUID) bit set.                                [ -u $file ] is false.
-r file   Checks if the file is readable.                                                       [ -r $file ] is true.
-w file   Checks if the file is writable.                                                       [ -w $file ] is true.
-x file   Checks if the file is executable.                                                     [ -x $file ] is true.
-s file   Checks if the file has a size greater than 0.                                         [ -s $file ] is true.
-e file   Checks if the file exists; true even if the file is a directory.                      [ -e $file ] is true.
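A short sketch using a few of these operators; /etc/passwd is chosen simply because it exists on virtually every system:

#!/bin/sh
file=/etc/passwd
if [ -f "$file" -a -r "$file" ]
then
echo "$file is an ordinary, readable file"
fi
if [ ! -x "$file" ]
then
echo "$file is not executable"
fi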
Shell Decision Making

The if...fi statement

Syntax
if [ expression ]
then
Statement(s) to be executed if expression is true
fi

Example1
[root@localhost Shell]# vi wordmatch.sh
if test "$word1" = "$word2"
then
echo "Match"
fi

Example2
[root@localhost Shell]# vi test.sh
if test $# -eq 0
then
echo "You must supply at least one argument"
exit 1
fi

The if...else...fi statement


Syntax
if [ expression ]
then
Statement(s) to be executed if expression is true
else
Statement(s) to be executed if expression is not true
fi

Example1:
#!/bin/bash
echo -n "Enter a password"
read password
if test "$password" = "teji"
then
echo "Password verified."
else
echo "Access denied."
fi
Example2:
#!/bin/bash
echo -ne "Enter number : "
read num
if test $num -ge 0
then
echo "$num is positive number."
else
echo "$num number is negative number."
fi

The if...elif...fi statement


Syntax
if [ expression 1 ]
then
Statement(s) to be executed if expression 1 is true
elif [ expression 2 ]
then
Statement(s) to be executed if expression 2 is true
elif [ expression 3 ]
then
Statement(s) to be executed if expression 3 is true
else
Statement(s) to be executed if no expression is true
fi

Example:
#!/bin/bash
echo -ne "Enter a number := "
read num
if [ $num -gt 0 ]
then
echo "$num is positive."
elif [ $num -lt 0 ]
then
echo "$num is negative."
elif [ $num -eq 0 ]
then
echo "$num is zero."
else
echo " $num is not a number."
fi

The case...esac Statement


Syntax:
case word in
pattern1)
Statement(s) to be executed if pattern1 matches
;;
pattern2)
Statement(s) to be executed if pattern2 matches
;;
pattern3)
Statement(s) to be executed if pattern3 matches
;;
esac

Example:
#!/bin/sh
echo "\n Command MENU\n"
echo " a. Current date and time"
echo " b. Users currently logged in"
echo " c. Name of the working directory\n"
echo "Enter a, b, or c: \c"
read choice
echo
case "$choice" in
a)
date
;;
b)
who
;;
c)
pwd
;;
*)
echo "There is no selection: $choice"
;;
esac

Shell Loop Types


The while Loop
Syntax
while command
do
Statement(s) to be executed if command is true
done

Example:

#!/bin/sh
a=1
while [ $a -lt 10 ]
do
echo $a
a=`expr $a + 1`
done
Output:
1
2
3
4
5
6
7
8
9
The for… in loop
Syntax:
for loop-index in argument_list
do
commands
done

Example:
for file in *
do
if [ -d "$file" ]; then
echo $file
fi
done

The for Loop


Syntax
for var in word1 word2 ... wordN
do
Statement(s) to be executed for every word.
done

The until Loop


Syntax
until command
do
Statement(s) to be executed until command is true
done

Example:
secretname=jenny
name=noname
until [ "$name" = "$secretname" ]
do
echo " Your guess: \c"
read name
done

Example:
#!/bin/sh
a=0
until [ ! $a -lt 10 ]
do
echo $a
a=`expr $a + 1`
done

The select Loop


Syntax
select var in word1 word2 ... wordN
do
Statement(s) to be executed for every word.
done

Example:
#!/bin/sh
select COLOR in red blue pink white green all none
do
case $COLOR in
red|pink|white|all)
echo "Good choice"
;;
blue|green)
echo "bad choice"
;;
none)
break
;;
*) echo "wrong choice ,Try again!!!!"
;;
esac
done
Shell Loop Control
break and continue
Interrupt a for, while or until loop.
The break statement: transfers control to the statement AFTER the done statement,
terminating execution of the loop.
The continue statement: transfers control to the done statement, skipping the remaining
statements for the current iteration and continuing execution of the loop.

Example:
for index in 1 2 3 4 5 6 7 8 9 10
do
if [ $index -le 3 ]; then
echo "continue"
continue
fi
echo $index
if [ $index -ge 8 ]; then
echo "break"
break
fi
done

Example:
#!/bin/sh
NUMS="1 2 3 4 5 6 7"
for NUM in $NUMS
do
Q=`expr $NUM % 2`
if [ $Q -eq 0 ]
then
echo "Number is an even number!!"
continue
fi
echo "Found odd number"
done

Special characters used in patterns


Pattern Matches

* Matches any string of characters.

? Matches any single character.

[...]   Defines a character class. A hyphen specifies a range of characters.

|       Separates alternative choices that satisfy a particular branch of the case structure.

Built-in: exec: Execute a command:


Syntax: exec command arguments
exec runs a command without creating a new process: the command runs in the
environment of the original process, and exec does not return control to the original
program. For this reason, exec can be used only as the last command that you want to run in a script.

Example:
[root@localhost ~]# exec who
Redirect standard output, input or error of a shell script from within the script
exec < infile
exec > outfile 2> errfile

Example:
[root@localhost ~]# more redirect.sh
exec > /dev/tty
echo "this is a test of redirection"
[root@localhost ~]# ./redirect.sh > /dev/null 2>&1
this is a test of redirection

Catch a signal: built in trap:


Syntax: trap ‘commands’ signal-numbers
The shell executes the commands when it catches one of the signals, then resumes executing
the script where it left off.
To just capture a signal without doing anything with it:
trap ' ' signal_number
This is often used to clean up temporary files.
Signals
• SIGHUP 1 disconnect line
• SIGINT 2 control-c
• SIGKILL 9 kill with -9
• SIGTERM 15 default kill
• SIGTSTP 20 control-z
Example:
#!/bin/sh
trap 'echo PROGRAM INTERRUPTED' 2
while true
do
echo "program running."
sleep 1
done
Example on list of built-in functions:
bg, fg, jobs job control
break, continue change the loop
cd, pwd working directory
echo, read display/read
eval scan and evaluate the command
exec execute a program
exit exit from current shell
export, unset export/ remove a val or fun
test compare arguments
kill sends a signal to a process or job
set sets flag or argument
shift promotes each command line argument
times displays total times for the current shell and its child processes
trap traps a signal
type show whether unix command, build-in, function
umask file creation mask
wait waits for a process to terminate.
ulimit print the value of one or more resource limits
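As a quick sketch of one of these built-ins, shift discards $1 and moves the remaining positional parameters down by one, which is handy when looping over arguments (save it, say, as shiftdemo.sh):

#!/bin/sh
# print each argument on its own line using shift
while [ $# -gt 0 ]
do
echo "argument: $1"
shift
done

Running this script as sh shiftdemo.sh a b c prints the three arguments one per line.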

Functions
A shell function is similar to a shell script. It stores a series of commands for execution at
a later time. The shell stores functions in the memory. Shell executes a shell function in
the same shell that called it.
Where to define
In .profile
In your script
Or in command line
Remove a function
Use unset built-in

Syntax
function_name()
{
commands
}
Example:
[root@localhost ~]# whoson()
{
date
echo "users currently logged on"
who
}
[root@localhost ~]# whoson
Tue Feb 1 23:28:44 EST 2005
users currently logged on
teji :0 Jan 31 12:08
puneet pts/1 Jan 31 01:54 (:0.0)
pooja pts/2 Jan 31 05:02 (:0.0)

Example:-
[root@localhost~]#more .profile
setenv()
{
if [ $# -eq 2 ]
then
eval $1=$2
export $1
else
echo "usage: setenv NAME VALUE" 1>&2
fi
}

7.1 Example on shell programming


1)#!/bin/sh
#shell script to perform arithmetic operations on two numbers
echo "Enter First Number="
read num1
echo "Enter Second Number="
read num2
echo "Addition is="
expr $num1 + $num2
echo "Subtraction is="
expr $num1 - $num2
echo "Multiplication is="
expr $num1 \* $num2
echo "Remainder is="
expr $num1 % $num2
echo "Quotient is="
expr $num1 / $num2

OUTPUT IS:
[root@localhost ~]# sh shell1.sh
Enter First Number=
12
Enter Second Number=
13
Addition is=
25
Subtraction is=
-1
Multiplication is=
156
Remainder is=
12
Quotient is=
0

2)#!/bin/sh
#shell script to perform floating arithmetic operations on two numbers
echo "Enter First Number="
read num1
echo "Enter Second Number="
read num2
echo "Addition is="
echo $num1 + $num2|bc
echo "Subtraction is="
echo $num1 - $num2|bc
echo "Multiplication is="
echo $num1 \* $num2|bc
echo "Remainder is="
echo $num1 % $num2|bc
echo "Quotient is="
echo $num1 / $num2|bc

OUTPUT IS:-
[root@localhost ~]# sh shell2.sh
Enter First Number=
12.5
Enter Second Number=
2.3
Addition is=
14.8
Subtraction is=
10.2
Multiplication is=
28.7
Remainder is=
1.0
Quotient is=
5

3) #!/bin/sh
#shell script to find the sum of digits of a given number
echo "enter number"
read a
i=10000
sum=0
while [ $i -ge 1 ]
do
sum=`expr $sum+$a/$i|bc`
a=`expr $a%$i|bc`
#new dividend=a previous remainder is new dividend
i=`expr $i/10|bc`
done
echo "sum=$sum"

4)#!/bin/sh
#shell script that displays all the links to a file specified as the first argument to the
#script. The second argument which is optional can be used to specify in which the
#search is to begin .If this second argument is not present, the search is to begin in
#current working directory. In either case the starting directory as well as all the
#subdirectories at all levels must be searched and the script need not check error
#massage.

touch rtemp
if [ $# -lt 1 ]
then
echo "no arguments"
else
s=`ls -l "$1" | tr -s " " | cut -d " " -f2`
if [ $s -gt 1 ]
then
echo "hard links are"
x=`ls -ilR $1 | cut -d " " -f1`
echo "inode=$x"
ls -ilR | grep "$x"
else
echo "no hard links"
fi
ls -ilR | grep "$1" > rtemp
z=`wc -l "rtemp"`
p=`echo "$z" | cut -d " " -f1`
if [ $p -gt 1 ]
then
echo "soft link are"
ls -ilR | grep "$1$"
else
echo "no soft link"
fi
fi
rm rtemp
Output1:
[root@localhost~]#sh links.sh
No arguments
Output2:
[root@localhost~]#sh links.sh rev.sh
Hard links are
233333 -rw-rw-r-- 2 teji teji 321 Apr 9 10:30 rev.sh
233333 -rw-rw-r-- 2 teji teji 321 Apr 9 10:30 test2
No soft links

5) #!/bin/sh
#shell script to merge one file to another and display the contents of #file
echo "enter filename 1="
read fname1
echo "enter second filename="
read fname2
echo "enter data= "
cat> $fname1
echo -ne "\n enter data="
cat >$fname2
echo -ne "\n enter filename3="
read fname3
echo -ne "\n after merging file3 is="
cat >>$fname1 $fname2
mv $fname1 $fname3
cat $fname3

OUTPUT :
[root@localhost ~]# sh merge.sh
enter filename 1=
text1.txt
enter second filename=
text2.txt
enter data=
hi all
how r u

enter data=
hi i am an Indian
what abt u
enter filename3=text3.txt
after merging file3 is=
hi all
how r u

hi i am an indian
what abt u

6)#!/bin/sh
# Shell script that accepts one or more file names as arguments and converts all of
them #to uppercase ,provided they exist in current directory.
if [ $# -lt 1 ]
then
echo "no arguments"
else
for i in $*
do
if [ ! -e $i ]
then
echo " File $i Does not exists"
else
x=`echo $i | tr '[a-z]' '[A-Z]'`
echo $i ::: $x
fi
done
fi
Output:
[root@localhost~]#sh links.sh rev.sh ntfile.sh
links.sh ::: LINKS.SH
rev.sh ::: REV.SH
File ntfile.sh does not exist

7) #!/bin/sh
#shell script to find largest of three numbers
echo "Enter three numbers"
read a
read b
read c
if [ $a -gt $b ] && [ $a -gt $c ]
then
echo "first number is largest"
elif [ $b -gt $a ] && [ $b -gt $c ]
then
echo "second number is largest"
else
echo "third number is largest"
fi

Output:
[root@localhost ~]# sh large.sh
enter three numbers
20
10
2
First number is largest

8)#!/bin/sh
#Non recursive shell script which accept any number of argument and print them in
the #reverse order (ex:if the script is named rags then executing rags 1 2 3 4 should
produce # 4 3 2 1 on the standard output)

if [ $# -eq 0 ]
then
echo "no arguments"
else
for i in $*
do
echo $i >> temp
done
i=$#
while [ $i -ne 0 ]
do
head -$i temp | tail -1
i=`expr $i - 1`
done
rm -f temp
fi

Output:
[root@localhost~]#sh rev.sh 1 2 3 4
4
3
2
1

9) #!/bin/sh
#shell script to find average of n numbers
echo "Enter the range of numbers"
read n
i=1
sum=0.0
avg=0.0
echo "enter the numbers"
while [ $i -le $n ]
do
read a
sum=`expr $sum+$a|bc`
i=`expr $i+1|bc`
done
avg=`expr $sum/$n|bc`
echo "the average of numbers is "
echo $avg
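OUTPUT (a sample run; the script name avg.sh and the input values are only illustrative, and the average is truncated to an integer because bc is used with its default scale):
[root@localhost ~]# sh avg.sh
Enter the range of numbers
4
enter the numbers
10
20
30
35
the average of numbers is
23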

10) #!/bin/sh
#shell script which accepts valid login name as arguments and prints their
#corresponding home directories if no arguments are specified print a suitable error
#massage.
if [ $# -lt 1 ]
then
echo "no arguments"
else
for i in $*
do
x=`grep "^$i:" /etc/passwd | cut -d ":" -f6`
if [ -z "$x" ]
then
echo "There is no user of the name $i"
else
echo "The home directory of $i is $x"
fi
done
fi
Output1:
[root@localhost~]#sh homedir.sh teji
There is no user of the name teji

Output2:
[root@localhost~]#sh homedir.sh ram
The home directory of ram is /home/ram

11) #!/bin/sh
#shell script to find the factorial of a given number
echo "Enter the number"
read num
i=2
fact=1
while [ $i -le $num ]
do
fact=`expr $fact*$i|bc`
i=`expr $i+1|bc`
done
echo "The factorial of the number is:"
echo $fact

Output:
[root@localhost ~]# sh fact.sh
Enter the number
5
The factorial of the number is:
120
12) #!/bin/sh
#shell script to perform following file operations copy, edit, rename, remove a file
while true
do
echo -ne "\n\n\t********MENU**********
1.COPY
2.EDIT
3.RENAME
4.REMOVE
5.EXIT"
echo -ne "\n Enter Your Choice="
read choice
case $choice in
1)echo "Enter source filename="
read sname
echo "Enter data and press ^d"
cat >$sname
echo "Enter target filename="
read tname
cp $sname $tname
echo "After copying target file is= "
cat $tname
;;
2)echo "Enter source filename="
read sname
echo "Enter data and press ^d"
cat >$sname
vi $sname
echo "After editing source file is= "
cat $sname
;;

3)echo "Enter source filename="


read sname
echo "Enter data and press ^d"
cat >$sname
echo "Enter target filename="
read tname
mv $sname $tname
echo "After rename target file is= "
cat $tname
echo "After rename source file is= "
cat $sname
;;
4)echo "Enter source filename="
read sname
echo "Enter data and press ^d"
cat >$sname
rm $sname
echo "After deletion message is= "
cat $sname
;;
5)exit
;;
*)echo "INVALID CHOICE"
;;
esac
done

OUTPUT is:-
[root@localhost ~]# sh fileo.sh
********MENU**********
1.COPY
2.EDIT
3.RENAME
4.REMOVE
5.EXIT
Enter Your Choice=1
Enter source filename=
text1.txt
Enter data and press ^d
hi how r u
i m an indian
east or west india is best
Enter target filename=
text2.txt
After copying target file is=
hi how r u
i m an indian
east or west india is best

********MENU**********
1.COPY
2.EDIT
3.RENAME
4.REMOVE
5.EXIT
Enter Your Choice=2
Enter source filename=
text1.txt
Enter data and press ^d
hi i m an indian
wht abt u
After editing source file is=
hi i m an indian
wht abt u
east or west india is the best
i m proud to be an indian

********MENU**********
1.COPY
2.EDIT
3.RENAME
4.REMOVE
5.EXIT
Enter Your Choice=3
Enter source filename=
text1.txt
Enter data and press ^d
hi i m an indian
what abt u
Enter target filename=
text2.txt
After rename target file is=
hi i m an indian
what abt u
After rename source file is=
cat: text1.txt: No such file or directory
********MENU**********
1.COPY
2.EDIT
3.RENAME
4.REMOVE
5.EXIT
Enter Your Choice=4
Enter source filename=
text1.txt
Enter data and press ^d
hi all
linux operating system
After deletion message is=
cat: text1.txt: No such file or directory
********MENU**********
1.COPY
2.EDIT
3.RENAME
4.REMOVE
5.EXIT
Enter Your Choice=k
INVALID CHOICE
********MENU**********
1.COPY
2.EDIT
3.RENAME
4.REMOVE
5.EXIT
Enter Your Choice=5
[root@localhost ~]#

13)#!/bin/sh
#shell script that accepts two filenames as arguments and checks if their permissions are
#identical; if the permissions are identical, output the common permissions, otherwise
#output each filename followed by its permissions.

if [ $# -ne 2 ]
then
echo " arguments are not valid "
else
if [ -e $1 -a -e $2 ]
then
x=`ls -l $1 | cut -d " " -f 1`
y=`ls -l $2 | cut -d " " -f 1`
if [ "$x" = "$y" ]
then
echo "Permissions of $1 and $2 are equal"
echo "Common permissions: $x"
else
echo "Permissions are not the same"
echo "$1 has : $x "
echo "$2 has : $y "
fi
else
echo " file does not exists"
fi
fi
output1:
[root@localhost~]#sh ram.sh
arguments are not valid
output2:
[root@localhost~]#sh links.sh rev.sh
Permissions of links.sh and rev.sh are equal.
Common permissions: -rw-r--r--
Output3:
[root@localhost~]#sh fileo.sh links.sh
Permissions are not the same
fileo.sh has: -rw-r-xrwx
links.sh has: -rw-r--r--

14) #!/bin/sh
#shell script to generate the Fibonacci series
echo "Enter the number of terms in the Fibonacci series"
read n
s=0
a=1
b=1
echo "The Series is :"
echo $s
echo $a
echo $b
s=`expr $a+$b|bc`
echo $s
n=`expr $n-3|bc`
while [ $n -gt 0 ]
do
a=$b
b=$s
s=`expr $a+$b|bc`
echo $s
n=`expr $n-1|bc`
done
Output:
[root@localhost Shell]# sh fibno.sh
Enter the number of terms in the fibonacci series
6
The Series is :
0
1
1
2
3
5

15)#!/bin/sh
#shell script that accepts a path name and creates all the components in the path name
#as directories (ex: teji/seema/nonu/ram should create the directories teji, teji/seema,
#teji/seema/nonu, teji/seema/nonu/ram)
if [ $# -lt 1 ]
then
echo " no arguments"
else
for i in `echo $1 | tr "/" " "`
do
mkdir $i
cd $i
done
echo "Directories are created"
fi
Output
[root@localhost~]#sh dircreat.sh teji/seema/nonu/ram
Directories are created.

16) #!/bin/sh
#shell script to design a menu driven calculator
while true
do
echo -ne "\n\n\t********MENU**********
1.ADDITION
2.SUBTRACTION
3.MULTIPLICATION
4.REMAINDER
5.QUOTIENT
6.EXIT"
echo -ne "\n Enter Your Choice="
read choice
case $choice in
1)echo "Enter first Number="
read num1
echo "Enter Second Number="
read num2
echo "Addition is= "
echo $num1 + $num2|bc
;;
2)echo "Enter first Number="
read num1
echo "Enter Second Number="
read num2
echo "Subtraction is= "
echo $num1 - $num2|bc
;;
3)echo "Enter first Number="
read num1
echo "Enter Second Number="
read num2
echo "Multiplication is= "
echo $num1 \* $num2|bc
;;
4)echo "Enter first Number="
read num1
echo "Enter Second Number="
read num2
echo "Remainder is= "
echo $num1 % $num2|bc
;;
5)echo "Enter first Number="
read num1
echo "Enter Second Number="
read num2
echo "Quotient is= "
echo $num1 / $num2|bc
;;
6)exit
;;
*)echo "INVALID CHOICE"
;;
esac
done

OUTPUT IS:-
[root@localhost ~]# sh calculator.sh
********MENU**********
1.ADDITION
2.SUBTRACTION
3.MULTIPLICATION
4.REMAINDER
5.QUOTIENT
6.EXIT
Enter Your Choice=1
Enter first Number=
12.4
Enter Second Number=
3.6
Addition is=
16.0
********MENU**********
1.ADDITION
2.SUBTRACTION
3.MULTIPLICATION
4.REMAINDER
5.QUOTIENT
6.EXIT
Enter Your Choice=2
Enter first Number=
12.8
Enter Second Number=
6.7
Subtraction is=
6.1
********MENU**********
1.ADDITION
2.SUBTRACTION
3.MULTIPLICATION
4.REMAINDER
5.QUOTIENT
6.EXIT
Enter Your Choice=3
Enter first Number=
12.9
Enter Second Number=
5.8
Multiplication is=
74.8
********MENU**********
1.ADDITION
2.SUBTRACTION
3.MULTIPLICATION
4.REMAINDER
5.QUOTIENT
6.EXIT
Enter Your Choice=4
Enter first Number=
45.7
Enter Second Number=
3.6
Remainder is=
2.5
********MENU**********
1.ADDITION
2.SUBTRACTION
3.MULTIPLICATION
4.REMAINDER
5.QUOTIENT
6.EXIT
Enter Your Choice=5
Enter first Number=
37.9
Enter Second Number=
23.5
Quotient is=
1
********MENU**********
1.ADDITION
2.SUBTRACTION
3.MULTIPLICATION
4.REMAINDER
5.QUOTIENT
6.EXIT
Enter Your Choice=9
INVALID CHOICE
********MENU**********
1.ADDITION
2.SUBTRACTION
3.MULTIPLICATION
4.REMAINDER
5.QUOTIENT
6.EXIT
17)#!/bin/sh
#shell script that takes a valid directory name as an argument, recursively descends
#all the subdirectories, finds the maximum length (size) of any file in that hierarchy and writes
#this maximum value to the standard output.
if [ $# -lt 1 ]
then
echo "arguments are not valid"
else
if [ -d $1 ]
then
ls -lR $1 | tr -s " " | sort -t " " -n -r -k 5 | grep "^[^d]" | head -1 |
cut -d " " -f 5,9
else
echo " Directory does not exist"
fi
fi
Output1:
[root@localhost~]#sh dirchk.sh ram
Directory does not exist
Output2:
[root@localhost~]#sh dirchk.sh teji
983 teji

18) #!/bin/sh
#shell script to search a word from a file
until false
do
echo "Enter filename="
read fname
echo "enter data and press ^d"
cat >$fname
echo "Enter word to be searched="
read word
grep "$word" "$fname"
if [ $? -ne 0 ]
then
echo "word not found, try another"
continue
fi
echo "given word is =$word"
break
done

19) #!/bin/sh
#Menu driven shell script to listing files, present working directory ,process running,
#display current system date
while true
do
echo -e "\t\t****MENU*****
l.listing files
p. Present working directory
ps.process running
d.display current system date
e.exit"
echo -e "\n enter ur choice="
read option
case $option in
l)
echo "enter path="
read path
echo "contents is="
ls -al $path
;;
p)echo "present working directory is="
pwd
;;
ps)ps
;;
d)date
;;
e)exit
;;
*) echo "WRONG CHOICE!!!!!"
esac
done

OUTPUT IS:-
[root@localhost ~]# sh comand.sh
****MENU*****
l.listing files
p.present working directory
ps.process running
d.display current system date
e.exit

enter ur choice=
l
enter path=
/bin
contents is=
total 6796
drwxr-xr-x 2 root root 4096 Oct 23 00:26 .
drwxr-xr-x 23 root root 4096 Oct 23 05:38 ..
-rwxr-xr-x 1 root root 4760 May 4 2005 arch
lrwxrwxrwx 1 root root 4 Aug 27 16:58 awk -> gawk
-rwxr-xr-x 1 root root 17248 May 25 2005 basename
-rwxr-xr-x 1 root root 686520 May 10 2005 bash
-rwxr-xr-x 1 root root 21104 May 25 2005 cat
-rwxr-xr-x 1 root root 38828 May 25 2005 chgrp
-rwxr-xr-x 1 root root 38464 May 25 2005 chmod
-rwxr-xr-x 1 root root 41852 May 25 2005 chown
-rwxr-xr-x 1 root root 63256 May 25 2005 cp
-rwxr-xr-x 1 root root 103888 May 17 2005 cpio
lrwxrwxrwx 1 root root 4 Aug 27 17:00 csh -> tcsh
-rwxr-xr-x 1 root root 32312 May 25 2005 cut
-rwxr-xr-x 1 root root 47684 May 25 2005 date
-rwxr-xr-x 1 root root 34540 May 25 2005 dd
-rwxr-xr-x 1 root root 38696 May 25 2005 df
-rwxr-xr-x 3 root root 61424 May 3 2005 zcat
-rwxr-xr-x 1 root root 459884 Jan 17 2005 zsh
****MENU*****
l.listing files
p.present working directory
ps.process running
d.display current system date
e.exit

enter ur choice=
p
present working directory is=
/root
****MENU*****
l.listing files
p.present working directory
ps.process running
d.display current system date
e.exit

enter ur choice=
ps
PID TTY TIME CMD
4703 pts/1 00:00:00 bash
5372 pts/1 00:00:00 sh
5376 pts/1 00:00:00 ps
****MENU*****
l.listing files
p.present working directory
ps.process running
d.display current system date
e.exit

enter ur choice=
d
Mon Oct 23 09:03:34 IST 2006
****MENU*****
l.listing files
p.present working directory
ps.process running
d.display current system date
e.exit

enter ur choice=
m
WRONG CHOICE!!!!!
****MENU*****
l.listing files
p.present working directory
ps.process running
d.display current system date
e.exit

enter ur choice=
e
[root@localhost ~]#

20)#!/bin/sh
#shell script that reports the logging in of a specified user within one minute after
#he/she logs in. The script automatically terminates if the specified user does not log in
#during a specified period of time.
echo "enter the login name of the user"
read name
period=0
until who | grep -w "$name" > /dev/null 2>&1  # search for the user; output and errors are discarded
do
sleep 60
period=`expr $period + 1`
if [ $period -gt 1 ]
then
echo "$name has not logged in within the specified time"
exit
fi
done
echo "$name is now logged in"
Output1:
[root@localhost~]#sh login.sh
Enter the login name of the user
teji
teji is now logged in
21) #!/bin/sh
#shell script to copy one file to another and display the contents
echo "Enter source filename ="
read source
echo "Enter data and press ^d"
cat >$source
echo -e "\n Enter target file name="
read target
if cp $source $target
then
echo "file copied successfully"
echo "contents of target file is="
cat $target
else
echo "Failed to copied the file"
fi

output is:-
[root@localhost ~]# sh cp.sh
Enter source filename =
test.txt
Enter data and press ^d
linux programming
shell programming
c programing
Enter target file name=
tested.txt
file copied successfully
contents of target file is=
linux programming
shell programming
c programing

22)#!/bin/sh
#shell script that accepts two integers as its argument and compute the value of first
#number raised to the power of second number.
if [ $# -lt 2 ]
then
echo "not sufficient arguments"
else
x=$1
y=$2
if [ $y -eq 0 ]
then
echo "$x raised to the power of $y is 1"
else
mul=1
i=1
if [ $y -lt 0 ]
# if the power is negative, compute the positive power and report the reciprocal
then
y=`expr $y \* -1`
while [ $i -le $y ]
do
mul=`expr $mul \* $x`
i=`expr $i + 1`
done
echo "$x raised to the power of -$y is 1/$mul"
else
while [ $i -le $y ]
do
mul=`expr $mul \* $x`
i=`expr $i + 1`
done
echo "$x raised to the power of $y is $mul"
fi
fi
fi
fi
Output:
[root@localhost~]#sh pow.sh 2 4
2 raised to the power of 4 is 16

23) #!/bin/sh
#shell script to generate a student's DMC (detailed marks card)
echo "Enter student name="
read sname
echo "Enter Fname="
read fname
echo "Enter student roll no="
read rollno
echo "Enter university registration no="
read uregno
echo "Enter address="
read address
echo "Enter marks in Linux="
read lin
echo "Enter marks in BBC="
read bbc
echo "Enter marks in Ecomerce="
read ec
echo "Enter marks in AI="
read ai
echo "Enter marks in CD="
read cd
sum=`expr $lin + $bbc + $ec + $ai + $cd`
avg=`expr $sum / 5`

echo -ne "\n\n\n\t\t *******DETAILED MARK CARD**********"


echo -ne "\nStudent Name :=$sname"
echo -ne "\nstudent Father Name :=$fname"
echo -ne "\nStudent Roll NO :=$rollno"
echo -ne "\nUniversity Registration NO :=$sname"
echo -ne "\nStudent Address :=$address"
echo -ne "\nStudent $sname result is :="
if test $avg -ge 60
then
echo "FIRST DIVISION"
elif test $avg -ge 50
then
echo "SECOND DIVISION"
elif test $avg -ge 40
then
echo "THIRD DIVISION"
else
echo "FAIL"
fi

OUTPUT1:
[root@localhost ~]# sh dmc.sh
Enter student name=
ram kumar
Enter Fname=
sh. jai kumar
Enter student roll no=
1203501
Enter university registration no=
96 gny 219
Enter address=
#213 ,KKR
Enter marks in Linux=
56
Enter marks in BBC=
67
Enter marks in Ecomerce=
77
Enter marks in AI=
77
Enter marks in CD=
66

*******DETAILED MARK CARD**********


Student Name :=ram kumar
student Father Name :=sh. jai kumar
Student Roll NO :=1203501
University Registration NO :=96 gny 219
Student Address :=#213 ,KKR
Student ram kumar result is :=FIRST DIVISION

OUTPUT2:
[root@localhost ~]# sh dmc.sh
Enter student name=
ram kumar
Enter Fname=
sh. jai kumar
Enter student roll no=
123
Enter university registration no=
96 gny 219
Enter address=
#213 KKR
Enter marks in Linux=
56
Enter marks in BBC=
45
Enter marks in Ecomerce=
66
Enter marks in AI=
58
Enter marks in CD=
72

*******DETAILED MARK CARD**********


Student Name :=ram kumar
student Father Name :=sh. jai kumar
Student Roll NO :=123
University Registration NO :=96 gny 219
Student Address :=#213 KKR
Student ram kumar result is :=SECOND DIVISION

OUTPUT3:
[root@localhost ~]# sh dmc.sh
Enter student name=
ram kumar
Enter Fname=
sh. jai kumar
Enter student roll no=
1203501
Enter university registration no=
96 gny 219
Enter address=
#213 KKR
Enter marks in Linux=
47
Enter marks in BBC=
45
Enter marks in Ecomerce=
56
Enter marks in AI=
36
Enter marks in CD=
55
*******DETAILED MARK CARD**********
Student Name :=ram kumar
student Father Name :=sh. jai kumar
Student Roll NO :=1203501
University Registration NO :=96 gny 219
Student Address :=#213 KKR
Student ram kumar result is :=THIRD DIVISION

OUTPUT4:
[root@localhost ~]# sh dmc.sh
Enter student name=
ram kumar
Enter Fname=
sh. jai kumar
Enter student roll no=
1203501
Enter university registration no=
96 gny 219
Enter address=
#213 KKR
Enter marks in Linux=
34
Enter marks in BBC=
20
Enter marks in Ecomerce=
56
Enter marks in AI=
36
Enter marks in CD=
23

*******DETAILED MARK CARD**********


Student Name :=ram kumar
student Father Name :=sh. jai kumar
Student Roll NO :=1203501
University Registration NO :=96 gny 219
Student Address :=#213 KKR
Student ram kumar result is :=FAIL

24) #!/bin/sh
#shell script to generate electricity bill
echo "Enter Customer Bill Account Number="
read custno
echo "Enter Customer Name="
read custname
echo "Enter Customer Address="
read custadd
echo "Enter Previous Reading="
read pread
echo "Enter Current Reading="
read cread
consunits=`expr $cread - $pread`
if test $consunits -le 100
then
bill1=$consunits
elif test $consunits -gt 100 -a $consunits -le 200
then
unitsrem=`expr $consunits - 100`
bill=`expr $unitsrem \* 2`
bill1=`expr $bill + 100`
elif test $consunits -gt 200
then
unitsrem=`expr $consunits - 200`
bill=`expr $unitsrem \* 3`
bill1=`expr $bill + 300`
fi
dbill=`expr $bill1 + 20`
echo -ne "\n\n******************ELECTRICITY BILL******************"
echo -ne "\nCUSTOMER NAME :=$custname"
echo -ne "\nCUSTOMER ACCOUNT NUMBER :=$custno"
echo -ne "\nCUSTOMER ADDRESS :=$custadd"
echo -ne "\nPREAVIOUS READING :=$pread"
echo -ne "\nCURRENT READING :=$cread"
echo -ne "\nUNITS CONSUMED :=$consunits"
echo -ne "\nBILL PAYED(ON DATE) :=$bill1"
echo -ne "\nBILL PAYED(AFTER DATE) :=$dbill"

Output1:
[root@localhost ~]# sh bill.sh
Enter Customer Bill Account Number=
1234
Enter Customer Name=
Ram Kumar
Enter Customer Address=
#1232 ,KKR
Enter Previous Reading=
100
Enter Current Reading=
196
******************ELECTRICITY BILL******************
CUSTOMER NAME :=Ram Kumar
CUSTOMER ACCOUNT NUMBER :=1234
CUSTOMER ADDRESS :=#1232 ,KKR
PREVIOUS READING :=100
CURRENT READING :=196
UNITS CONSUMED :=96
BILL PAYABLE (ON DATE) :=96
BILL PAYABLE (AFTER DUE DATE) :=116
Output2:
[root@localhost ~]# sh bill.sh
Enter Customer Bill Account Number=
1234
Enter Customer Name=
Ram Kumar
Enter Customer Address=
#1232 ,KKr
Enter Previous Reading=
100
Enter Current Reading=
296

******************ELECTRICITY BILL******************
CUSTOMER NAME :=Ram Kumar
CUSTOMER ACCOUNT NUMBER :=1234
CUSTOMER ADDRESS :=#1232 ,KKr
PREVIOUS READING :=100
CURRENT READING :=296
UNITS CONSUMED :=196
BILL PAYABLE (ON DATE) :=292
BILL PAYABLE (AFTER DUE DATE) :=312

Output3:
[root@localhost ~]# sh bill.sh
Enter Customer Bill Account Number=
1234
Enter Customer Name=
Ram Kumar
Enter Customer Address=
#1232,KKR
Enter Previous Reading=
100
Enter Current Reading=
478

******************ELECTRICITY BILL******************
CUSTOMER NAME :=Ram Kumar
CUSTOMER ACCOUNT NUMBER :=1234
CUSTOMER ADDRESS :=#1232,KKR
PREVIOUS READING :=100
CURRENT READING :=478
UNITS CONSUMED :=378
BILL PAYABLE (ON DATE) :=834
BILL PAYABLE (AFTER DUE DATE) :=854

25)#!/bin/sh
#shell script to implement terminal locking (similar to the lock command). It should
#prompt the user for a password. After accepting the password entered by the user it
#must prompt again for the matching password as confirmation and if a match occurs it
#must lock the keyboard until a matching password is entered again by the user. Note
#that the script must be written to disregard BREAK and control-D. No time limit need be
#implemented for the lock duration.
echo "terminal locking script"
echo "enter a password"
stty -echo
read password1
stty -echo
echo "re-enter the password"
stty -echo
read password2
stty echo
if [ "$password1" != "$password2" ]
then
echo "mismatch in password"
echo "terminal cannot be locked"
exit
fi
echo "terminal locked"
stty intr ^-
stty quit ^-
stty kill ^-
stty eof ^-
stty stop ^-
stty susp ^-
echo "Enter the password to unlock the terminal"
stty -echo
read password3
if [ "$password3" != "$password1" ]
then
stty echo
echo "incorrect password"
fi
while [ "$password3" != "$password1" ]
do
echo "Enter the password to unlock the terminal"
stty -echo
read password3
if [ "$password3" != "$password1" ]
then
stty echo
echo "incorrect password"
fi
done
stty echo
stty sane
Output1 :
[root@localhost~]#sh passchk.sh
enter a password
re-enter the password
mismatch in password
terminal cannot be locked
Output2:
[root@localhost~]#sh passchk.sh
enter a password
re-enter the password
Terminal locked

26) #!/bin/sh
#Menu driven shell script to search a word from a given sentence
while true
do
echo -ne "\n\n\t********MENU**********
1.Enter sentence
2.search a given word
3.exit"
echo -ne "\n Enter Your Choice="
read choice
case $choice in
1)echo "Enter file name="
read fname
echo "Enter data and press ^d = "
cat >$fname
;;
2)echo "Enter file Name="
read fname
echo "Enter word to be searched="
read word
echo "Result After searching="
grep $word $fname || echo "word not found"
;;
3)exit
;;
*)echo "INVALID CHOICE"
;;
esac
done

OUTPUT IS:-
[root@localhost ~]# sh search.sh
********MENU**********
1.Enter sentence
2.search a given word
3.exit
Enter Your Choice=1
Enter file name=
test.txt
Enter data and press ^d =
hi i m an indian
india is a great country
india is best
hello world
hi all
********MENU**********
1.Enter sentence
2.search a given word
3.exit
Enter Your Choice=2
Enter file Name=
test.txt
Enter word to be searched=
india
Result After searching=
hi i m an indian
india is a great country
india is best

********MENU**********
1.Enter sentence
2.search a given word
3.exit
Enter Your Choice=2
Enter file Name=
test.txt
Enter word to be searched=
delhi
Result After searching=
word not found
********MENU**********
1.Enter sentence
2.search a given word
3.exit
Enter Your Choice=y
INVALID CHOICE
********MENU**********
1.Enter sentence
2.search a given word
3.exit
Enter Your Choice=3

27)#!/bin/sh
#Create a script file called file properties that reads a file name entered and output its
#properties.
echo "enter filename"
read file
c=1
if [ -e $file ] #checks the existence of the file
then
for i in `ls -l $file | tr -s " "`
# 'tr -s " "' treats 2 or more spaces as a single space
do
case "$c" in #case condition starts
1) echo "file permission=" $i ;;
2) echo "No of links =" $i;;
3) echo "file belongs to owner =" $i;;
4) echo "file belongs to group =" $i ;;
5) echo "file size=" $i ;;
6) echo "file created in the month of=" $i ;;
7) echo "file creation date=" $i ;;
8) echo "last modification time=" $i ;;
9) echo "file name=" $i ;;
esac #end of case condition
c=`expr $c + 1`
done
else
echo "file does not exist"
fi
Output:
[root@localhost~]#sh filepermis.sh
enter filename
teji.sh
file permission= -rw-r--r--
No of links = 1
file belongs to owner=teji
file belongs to group=teji
file size =555
file created in the month of =march
file creation date=12
last modification time=12:16
file name=teji.sh

28)#!/bin/sh
#shell script that accepts a list of file names as its arguments, and counts and reports the
#occurrence of each word that is present in the first argument file in the other argument
#files.
if [ $# -eq 0 ]
then
echo "no arguments"
else
tr " " "\n" < $1 > temp
shift
for i in $*
do
tr " " "\n" < $i > temp1
y=`wc -l < temp`
j=1
while [ $j -le $y ]
do
x=`head -n $j temp | tail -1`
c=`grep -c "$x" temp1`
echo $x $c
j=`expr $j + 1`
done
done
fi

29) #!/bin/sh
#shell script to sort numbers
echo "Enter source filename="
read fname
echo -ne "\nEnter Numeric data and press ^d\n"
cat >$fname
sort -g $fname >sortedfile
echo -e "\nUnsorted file is=\n"
cat $fname
echo -ne "\nSorted file is=\n"
cat sortedfile
OUTPUT IS:-
[root@localhost ~]# sh numsort.sh
Enter source filename=
number.txt
Enter Numeric data and press ^d
123
233
12
78
345
674
234
884
11
88
33
Unsorted file is=
123
233
12
78
345
674
234
884
11
88
33
Sorted file is=
11
12
33
78
88
123
233
234
345
674
884

30)#!/bin/sh
#shell script that deletes all lines containing a specific word in one or more files supplied
#as arguments to it.
if [ $# -eq 0 ]
then
echo "no arguments"
else
echo "enter a deleting word or char"
read y
for i in $*
do
grep -v "$y" "$i" > temp
if [ $? -ne 0 ]
then
echo "pattern not found"
else
cp temp $i
rm temp
fi
done
fi

31) #!/bin/sh
#shell script to search a word from a file
echo "Enter file name="
read fname
echo "Enter data and press ^d = "
cat >$fname
echo -e "\nEnter word to be searched="
read word
echo "Result After searching="
grep $word $fname || echo "word not found"

OUTPUT:
[root@localhost ~]# sh search.sh
Enter file name=
test.txt
Enter data and press ^d =
hi all
hello world
hello how r u
say hello to all
Enter word to be searched=
hello
Result After searching=
hello world
hello how r u
say hello to all

32)#!/bin/sh
#shell script to compute the sum of numbers passed to it as argument on command line
#display the result .
if [ $# -lt 2 ] #check the number of argument
then
echo "enter minimum two arguments"
else
sum=0
for i in $* #for loop starts
do
sum=`expr $sum + $i` #to find the sum
done # for loop ends
echo "sum of $* is =" $sum
fi

Output:
[root@localhost~]#sh sum.sh 12 14
sum of 12 14 is = 26

33)#!/bin/sh
#shell script that displays the message "good morning", "good afternoon" or "good
#evening" depending upon the time at which the user logs in.
x=`who am i | tr -s " " | cut -d " " -f5 | cut -d ":" -f1`
#x=5
if [ $x -ge 05 -a $x -lt 12 ]
then
echo "good morning"
elif [ $x -ge 12 -a $x -lt 16 ]
then
echo "good afternoon"
elif [ $x -ge 16 -a $x -le 21 ]
then
echo "good evening"
fi
Output:
[root@localhost~]#sh wish.sh
good evening

34) #!/bin/sh
#shell script to sort the given list of names
echo "Enter source filename="
read fname
echo -ne "\nEnter data and press ^d\n"
cat >$fname
sort $fname >sortedfile
echo -e "\nUnsorted file is=\n"
cat $fname
echo -ne "\nSorted file is=\n"
cat sortedfile

OUTPUT :
[root@localhost ~]# sh sort.sh
Enter source filename=
name.txt
Enter data and press ^d
ramesh
ram kumar
parveen
amit
neha
richa
sivani
vikas
pooja
Unsorted file is=
ramesh
ram kumar
parveen
amit
neha
richa
sivani
vikas
pooja
Sorted file is=
amit
neha
parveen
pooja
ramesh
ram kumar
richa
sivani
vikas

35)#!/bin/sh
# shell script to display the calendar for current month with current date replaced by *
# or ** depending on whether the date has one digit or two.
n=`date +%d`
cal >temp
if [ $n -lt 10 ]
then
sed s/"$n"/*/g temp
else
sed s/"$n"/**/g temp
fi

36) #!/bin/sh
#Menu driven shell script to convert 1. LOWER TO UPPER
# 2. UPPER TO LOWER
# 3. CONVERT TAB INTO BLANK SPACE IN A FILE

until false
do
echo -e "\n\n\n\t *****MENU*******
1. LOWER TO UPPER
2. UPPER TO LOWER
3. CONVERT TAB INTO BLANK SPACE IN A FILE
4. EXIT"
echo -e "\nEnter ur choice="
read choice
case $choice in
1)
echo "Enter source filename="
read source
echo "Enter sentence in lowercase "
vi $source
echo "Enter target filename="
read target
dd if=$source of=$target conv=ucase
echo "contents of source file is="
cat $source
echo "Contents of target file after conversion="
cat $target
;;
2)
echo "Enter source filename="
read source
echo "Enter sentence in uppercase "
vi $source

echo "Enter target filename="


read target
dd if=$source of=$target conv=lcase
echo "contents of source file is="
cat $source
echo "Contents of target file after conversion="
cat $target
;;
3) echo "Enter source filename="
read source
echo "Enter tabed sentences "
cat > $source
echo -e "\n Contents of source file after conversion=\n"
expand --tabs=2 $source
;;

4) exit
;;
*) echo "wrong choice"
esac
done

OUTPUT IS:-
[root@localhost ~]# sh stou.sh

*****MENU*******
1. LOWER TO UPPER
2. UPPER TO LOWER
3. CONVERT TAB INTO BLANK SPACE IN A FILE
4. EXIT
Enter ur choice=
1
Enter source filename=
text1.txt
Enter sentence in lowercase
Enter target filename=
text2.txt
contents of source file is=
hi i m an indian
who r u
what is ur name?
what abt others
0+1 records in
0+1 records out
Contents of target file after conversion=
HI I M AN INDIAN
WHO R U
WHAT IS UR NAME?
WHAT ABT OTHERS

*****MENU*******
1. LOWER TO UPPER
2. UPPER TO LOWER
3. CONVERT TAB INTO BLANK SPACE IN A FILE
4. EXIT

Enter ur choice=
2
Enter source filename=
text3.txt
Enter sentence in uppercase
Enter target filename=
text4.txt
contents of source file is=
INDIA IS A GREAT COUNTRY
INDIA IS BEST
0+1 records in
0+1 records out
Contents of target file after conversion=
india is a great country
india is best

*****MENU*******
1. LOWER TO UPPER
2. UPPER TO LOWER
3. CONVERT TAB INTO BLANK SPACE IN A FILE
4. EXIT

Enter ur choice=
3
Enter source filename=
test.txt
Enter tabed sentences
q u e s t i o n
a n s w e r
Contents of source file after conversion=

question
answer
*****MENU*******
1. LOWER TO UPPER
2. UPPER TO LOWER
3. CONVERT TAB INTO BLANK SPACE IN A FILE
4. EXIT

Enter ur choice=
k
wrong choice

*****MENU*******
1. LOWER TO UPPER
2. UPPER TO LOWER
3. CONVERT TAB INTO BLANK SPACE IN A FILE
4. EXIT

Enter ur choice=
4

37)#!/bin/sh
#shell script that accepts a filename as argument and displays its creation time
#if the file exists; if it does not exist, an error message is output.
if [ $# -eq 0 ]
then
echo "no arguments"
else
for i in $*
do
if [ ! -e "$i" ]
then
echo "file $i does not exist"
else
ls -l $i | tr -s " " | cut -d " " -f6-8
fi
done
fi
Chapter 8
Basic Linux Administration
8.0 Basics of User and Group Administration
UID
This is an incremental number, unique for every user, which the operating system uses to
distinguish different users. The UID for root is always 0. UIDs for normal users start from
500. Maintenance/service accounts use small numbers (typically below 100).
GID
Identifies the default group associated with the user. The root group is always 0. Lower numbers
are used by system groups; the GID range for regular users is defined in the /etc/login.defs file.
Home directory: This is where the files and data of a particular user reside.
Open /etc/login.defs in a vi editor. It contains the following information, which can be edited
according to your requirements.
# Password aging controls:
# PASS_MAX_DAYS Maximum number of days a password may be used.
# PASS_MIN_DAYS Minimum number of days allowed between password changes.
# PASS_MIN_LEN Minimum acceptable password length.
# PASS_WARN_AGE Number of days warning given before a password expires.
PASS_MAX_DAYS 99999
PASS_MIN_DAYS 0
PASS_MIN_LEN 5
PASS_WARN_AGE 7
# Min/max values for automatic uid selection in useradd
UID_MIN 500
UID_MAX 60000
# Min/max values for automatic gid selection in groupadd
GID_MIN 500
GID_MAX 60000

8.1 ADDING USER


useradd or adduser
Creates a new account; you must be logged in as the superuser to run it.
Syntax: useradd [-c comment] [-d home_dir]
[-e expire_date] [-f inactive_time]
[-g initial_group] [-G group[,...]]
[-m [-k skeleton_dir] | -M] [-n] [-o] [-p passwd] [-r]
[-s shell] [-u uid] login
useradd -D [-g default_group] [-b default_home]
[-e default_expire_date] [-f default_inactive]
[-s default_shell]
Options: -u - userid
-g - primary group
-s - shell
-d - home directory
-c - comment (Commonly used to specify full name)
-m - make the home directory if it doesn't already exist
-M - don't create the user's home directory regardless of the defaults
-G - a list of supplementary groups that the user will belong to (separate
with commas)
-n - don't create a group with the same name as the user
-r - create a system account (uid < UID_MIN in /etc/login.defs)
-D - displays defaults if no other options are given
-b - change default home (when used with -D)
-g - change default group (when used with -D)
-s - change default shell (when used with -D)

Example: Adding a new user named tajinder with user ID 515, belonging to group teji, having
the bash shell, and an account that expires on 3/1/2007.
[root@jmit ~]# adduser -u 515 -g teji -s /bin/bash -e 3/1/2007 tajinder
passwd command:-This command is used to set password for specified user.
syntax :- passwd username
[root@jmit ~]# passwd tajinder
Changing password for user tajinder.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
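To verify the UID and GID that were assigned, the id command can be used. The output below is only an illustration of its format for the account just created:
[root@jmit ~]# id tajinder
uid=515(tajinder) gid=500(teji) groups=500(teji)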
NOTE: On 3rd January, when you try to log in with the user name tajinder (whose account
has expired), the following message will be displayed on your screen:
Your session lasted less than 10 seconds. If you have not logged out yourself, this could
mean that there is some installation problem or that you may be out of disk space. Try
logging in with the failsafe session to see if you can fix this problem. View details
(~/.Xsession-errors file).
When a new user is created its entries are made in following two files
The /etc/passwd file contains basic user attributes. This is an ASCII file that
contains an entry for each user. Each entry defines the basic attributes applied to a user.
Another method of storing account information is with the shadow password
format. As with the traditional method, this method stores account information in the
/etc/passwd file in a compatible format. However, the password is stored as a single “x”
character (i.e. the password is not actually stored in this file). A second file, called
“/etc/shadow”, contains the encrypted password as well as other information such as account
or password expiration values, etc. The /etc/shadow file is readable only by the root
account and is therefore less of a security risk.
[root@jmit ~]# vi /etc/passwd
tajinder:x:515:500::/home/tajinder:/bin/bash
Each field in a passwd entry is separated with “:” colon characters, and is as follows:
• Username up to 8 characters. Case-sensitive, usually all lowercase.
• An “X” in the password field. Passwords are stored in the “/etc/shadow” file.
• Numeric user id. This is assigned by the “adduser” script. UNIX uses this field,
plus the following group field, to identify which files belong to the user.
• Numeric group id. Red Hat uses group id’s in a fairly unique manner for
enhanced file security. Usually the group id will match the user id.
• Full name of the user.
• User’s home directory. Usually /home /username (e.g./home /shah). All user’s
personal files, web pages, mail forwarding, etc. will be stored here.
• User’s “shell account”. Often set to “/bin/bash” to provide access to the bash shell
(my personal favorite shell).

[root@jmit ~]# vi /etc/shadow


tajinder:$1$vQs3BuGW$NKFzioQnrNRsqiEsQeJyt.:13516:0:99999:7:7::
Here each field is also separated with a colon. The first field is the login user's name; the next
field is the password in encrypted form (if no password has been set, !! is displayed).
The next field is the day the password was last changed (counted in days from 1/1/1970),
the next field is the minimum number of days before the password can be changed again,
the next field is the maximum password age, the next field is the number of days before the
password expires on which a warning to change it starts to appear, the next field is the number
of days to wait after the password expires before disabling the account (often left blank), and
the last field is the expiry date (often blank).
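The same aging information can be displayed in a more readable form with the chage command (shown here for the user tajinder as an illustration):
[root@jmit ~]# chage -l tajinder
This lists the date of the last password change, the password and account expiry dates, and the minimum, maximum and warning values described above.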
[root@jmit ~]# pwunconv

After executing this command the password data is moved back into /etc/passwd and the
shadow file no longer contains it. It is not recommended to use this command. But if it has
been done, we can recover from it with the following command
[root@jmit ~]# pwconv

Criteria for selecting good passwords


Passwords in Red Hat Enterprise Linux are used to verify a user’s identity. This is why
password security is very important for protection of the user, the machine, and the
network. For security purposes, the system uses the Message-Digest Algorithm (MD5) and
shadow passwords. Shadow passwords reduce the risk of password-cracking attacks by storing
the password hashes in the file /etc/shadow, which is readable only by the root user.

Do not do the Following for Creating Strong Passwords


1) Do Not Use Only Words or Numbers — never use only numbers or words in a
password.
Examples
123456
Ram
JMIT
2) Do not use recognizable words — Words such as proper names, dictionary words, or
even terms from television shows or novels or movie names should be avoided
Examples
Tajinder
Diya aur baati
Indian
3) Do Not Use Words in Foreign Languages — Password cracking programs often check
against word lists that encompass dictionaries of many languages. Relying on foreign
languages for secure passwords is of little use.
Examples
cheguevara
4) Do not use hacker terminology — If you think you are elite because you use hacker
terminology — also called l337 (LEET) speak — in your password, think again. Many
word lists include LEET speak.
Examples
H4X0R
1337
5) Do not use personal information
Examples
Your name
The names of pets
The names of family members
Any birth dates
Your phone number or zip code
6) Do not Invert Recognizable Words — Good password checkers always reverse
common words, so inverting a bad password does not make it any more secure.
7) Do Not Write down Your Password — never store a password on paper. It is much
safer to memorize it.
8) Do Not Use the Same Password for All Machines — it is important to make separate
passwords for each machine. This way if one system is compromised, all of your
machines are not immediately at risk.

Do the following for selecting good passwords:


1) Make the Password At Least Eight Characters Long — The longer the password, the
better. If using MD5 passwords, it should be 15 characters or longer. With DES
passwords, use the maximum length (eight characters).
2) Mix Upper and Lower Case Letters — Red Hat Enterprise Linux is case sensitive, so
mix cases to enhance the strength of the password.
3) Mix Letters and Numbers — Adding numbers to passwords, especially when added to
the middle ,can enhance password strength.
4) Include Non-Alphanumeric Characters — Special characters such as &, $, and >
provide strength to a password.
5) Pick a Password You Can Remember — The best password in the world does little
good if you cannot remember it; use acronyms or other mnemonic devices to aid in
memorizing passwords.

Example: Adding a new user named sapna with user ID 517, belonging to group ram, having
the sh shell, home directory /root, and an account that expires on 13/1/2007.
[root@jmit ~]# adduser -u 517 -g ram -d /root -s /bin/sh -e 13/1/2007 sapna
[root@jmit ~]# passwd sapna
Changing password for user sapna.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@jmit ~]# vi /etc/passwd
sapna:x:517:500::/root:/bin/sh
[root@jmit ~]# vi /etc/shadow
sapna:$1$rms3NFrh$L/ak5w8viFx3WeSh/DfGX.:13516:0:99999:7::13573:

8.2 MODIFYING USER


usermod
This command is used for changing the properties of existing users. For example, if
a user is promoted to administrator, it may be necessary to change the properties of
that user's account.
Syntax: usermod [-u uid [-o]] [-g group] [-G group,...]
[-d home [-m]] [-s shell] [-c comment] [-l new_name]
[-f inactive] [-e expire ] [-p passwd] [-L|-U] username

Example: To change teji's shell to /bin/ksh


[root@jmit ~]# vi /etc/passwd
teji:x:500:500::/home/teji:/bin/bash
After execution of this command the shell for user teji will be ksh.
[root@jmit ~]#usermod -s /bin/ksh teji
[root@jmit ~]# vi /etc/passwd
teji:x:500:500::/home/teji:/bin/ksh

Example: To change teji's home directory to /root


[root@jmit ~]# vi /etc/passwd
teji:x:500:500::/home/teji:/bin/ksh
After execution of this command the home directory for user teji will be /root.
[root@jmit ~]# usermod -d /root teji
[root@jmit ~]# vi /etc/passwd
teji:x:500:500::/root:/bin/ksh

Example: To change teji's UID.


[root@jmit ~]# vi /etc/passwd
teji:x:500:500::/home/teji:/bin/ksh
After execution of this command the user ID for user teji will be 561.
[root@jmit ~]# usermod -u 561 teji
[root@jmit ~]# vi /etc/passwd
teji:x:561:500::/root:/bin/ksh
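The -L and -U options shown in the syntax above can be used to temporarily lock and unlock an account (shown here for the user teji as an illustration):
[root@jmit ~]# usermod -L teji
[root@jmit ~]# usermod -U teji
Locking places a ! in front of the encrypted password in /etc/shadow, so the user cannot log in until the account is unlocked.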

8.3 REMOVING USERS


Removing users manually
1) User temporarily transferred to some other location
In this case open /etc/passwd in a vi editor and place a # in front of the user's line. This will
make it a comment and disable the account. When the user comes back, simply remove the
# symbol and the account will become active again.
2) User has left the organization
In this case simply open the /etc/passwd and /etc/shadow files and remove the entries of
that particular user. Delete the home directory also.

Removing users using command


userdel
This command is used to delete a user account.
Syntax: userdel username

Example: Deleting user teji's account.


[root@jmit ~]# userdel teji
Example: To remove user kiran's home directory and his mail spool
[root@jmit ~]# userdel -r kiran

8.4 ADDING GROUP


Creating a group account manually
1) Open /etc/group in a vi editor. Scroll down and add a new entry at the end of the file.
2) Syntax: groupname:x:groupID:group members
Type a group name of your own choice. In the password field type x. Check the highest group
identification number in use; suppose it is 590, then assign 591. Add the group members to this
entry. The group entry will look like
infotech:x:591:ram,sapna,ravinder
3) Create a directory in which the group can store files.
[root@jmit ~]# mkdir /usr/local/infotech
Creating group using command
groupadd
This command is used to add new group.
Syntax: groupadd [-g gid [-o]] [-r] [-f] group
-g - groupid
-r - create a system group
-f - exit with an error if group already exists
Creating Password for group
Syntax: gpasswd groupname
[root@jmit ~]# gpasswd ram
Changing the password for group ram
New Password:
Re-enter new password:

8.5 MODIFYING GROUP


groupmod
This command is used to modify existing group.
Syntax: groupmod [options] groupname
Options: -g to specify new groupid
-n to add new group name
-o to allow a duplicate group id.
-p to specify password
-A to add user to group
-R to delete user from group
Example: To change the name of group ram to RAM
[root@jmit ~]# groupmod -n RAM ram

8.6 DELETING GROUP


Removing group manually
To remove a group, open the /etc/group file in a vi editor and remove the entry of the specified
group.

Removing group using command


groupdel: This command is used to delete the group from /etc/group.
[root@jmit ~]# groupdel RAM

Note: All next commands are already discussed in command section.


Granting access using chmod command
Changing ownership using chown command
Changing group using chgrp command.
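As a quick reminder, a typical sequence using these three commands might look like the following (the file name report is only an example; ram and infotech are names used earlier in this chapter):
[root@jmit ~]# chmod 750 report
[root@jmit ~]# chown ram report
[root@jmit ~]# chgrp infotech report
This gives the owner full access and the group read and execute access, removes all access for others, and then hands the file over to user ram and group infotech.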

8.7 IMPORTANT FILES IN LINUX


init files
Booting your System
When your Linux machine boots, the first running process is init. If you want to see init,
type ps -ef | grep init to see the PID of init. init performs the tasks that are mentioned
in the /etc/inittab file.
The format of inittab allows for comment lines beginning with a “#” symbol while
normal entries are “:” delimited. They follow the pattern id:runlevel:action:process
where id represents a user-defined and unique identifier, runlevel can be a combination of
the numbers 0-6 or just left blank, action comes from a keyword that describes how init
should treat the process, and process is the command to execute.
Common keywords on UNIX platforms include:
• initdefault—defines the runlevel to enter once the system has booted.
• wait—a process that will be executed once (when the runlevel is entered). The init
process will wait for this process to terminate.
• boot—defines a process that is executed at boot time.
• bootwait—similar to boot but init waits for the process to terminate before
moving on.
• sysinit—defines a process that is executed at boot time before any boot or
bootwait inittab entries.
The runlevel field designates system state. For example, a runlevel of 0 corresponds to a
halted system while a runlevel of 6 corresponds to a system reboot. Red Hat Linux
supports the following:
0. System halt
1. Single-user mode
2. Multiuser, without NFS
3. Complete multiuser mode
4. User defined
5. X11 (XDM login)
6. Reboot
For each runlevel, there is a corresponding directory in /etc/rc.d. For a runlevel of 5, the
directory /etc/rc.d/rc5.d exists and contains files related to tasks that need to be performed
when booting into that runlevel. Under Red Hat, these files are typically symbolic links to
shell scripts found in /etc/rc.d/init.d.
id:3:initdefault:
l3:3:wait:/etc/rc.d/rc 3
Once init is started, it reads /etc/inittab. From the first line, we know that init is going to
end up at a runlevel of 3 after the system boots. Once we reach that runlevel, the second
line tells init to run the script /etc/rc.d/rc with the argument 3 and wait for it to terminate before
proceeding.
The script rc in /etc/rc.d receives 3 as an argument. This 3 corresponds to a runlevel of 3.
As a result, the rc script executes all the scripts in the /etc/rc.d/rc3.d directory. It first
executes all the scripts that begin with the letter K (meaning “kill” the process or service)
with an argument of “stop”. Next, it runs all the scripts that begin with the letter S with an
argument of “start” to start the process or service. As one final note, the order of K and S
script execution is based on sort order; the script named S90mysql would execute before
the script named S95httpd.
It turns out the scripts in /etc/rc.d/rc3.d are actually symbolic links to scripts residing in
/etc/rc.d/init.d. While the UNIX administrator can place scripts in rc3.d, the common
practice under Red Hat is to first place all scripts in init.d, then create logical links to the
rc*.d directories. It doesn't take long to figure out the creation and maintenance of these
scripts and symbolic links could be quite the chore. That's precisely where chkconfig
steps in! The Red Hat chkconfig utility is specifically designed to manage the symbolic
links in /etc/rc.d/rc[0-6].d.
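For example (assuming the httpd service is installed), chkconfig can list a service's runlevels and switch it on for runlevels 3 and 5:
[root@jmit ~]# chkconfig --list httpd
[root@jmit ~]# chkconfig --level 35 httpd on
The second command creates the corresponding S links in /etc/rc.d/rc3.d and /etc/rc.d/rc5.d, so httpd is started when the system enters those runlevels.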

The /etc/init.d Directory


The init.d directory contains a number of start/stop scripts for various services on your
system. Everything from acpid to x11-common is controlled from this directory.
All scripts are located in /etc/init.d. Scripts for changing the runlevel are also found there,
but are called through symbolic links from one of the subdirectories (/etc/init.d/rc0.d to
/etc/init.d/rc6.d). This is just for clarity reasons and avoids duplicate scripts (e.g., if they
are used in several runlevels). Because every script can be executed as both a start and a
stop script, these scripts must understand the parameters start and stop. The scripts
understand, in addition, the restart, reload, force-reload, and status options.
Within each of these directories is a number of other scripts that control processes. These
scripts will begin with either a “K” or an “S”. All “K” scripts are run before “S” scripts,
and the directory in which a link is placed determines the runlevel at which the script runs.
Together these directories bring the system services up and down. But there are times
when you need to start or stop a process cleanly without using the kill or killall
commands. For this, the /etc/init.d directory is used.

In Fedora you might find this directory in /etc/rc.d/init.d. To control any of the scripts in
init.d manually you have to have root access.

Structure of command:
# /etc/init.d/command OPTION
Where command is the actual command to run and OPTION can be one of the following:
Option Description
start Start service.
stop Stop service.
restart If the service is running, stop it and then restart it. If it is not running, start it.
reload Reload the configuration without stopping and restarting the service.
force-reload Reload the configuration if the service supports this. Otherwise, do the same as if restart had been given.
status Show current status of service.
Most often you will use start, stop, or restart. So if you want to stop the Samba service
you can issue the command:
/etc/init.d/samba stop
Or if you make a change to its configuration and need to restart it, you could do so with the
following command:
/etc/init.d/samba restart
Some of the more common init scripts in this directory are:
• networking
• samba
• ftpd
• sshd
• mysql

rc Files
Linux reads the rc (run command) files when the system boots. The init program initiates
the reading of these files, and they usually serve to start processes such as mail, printers,
cron, and so on. They are also used to initiate TCP/IP connections. Most Linux systems
have the rc command files in the directory /etc/rc.d.

/etc/rc.local
This is the /etc/rc.local script. This file runs after all other init level scripts have
run, so it is safe to put in it various commands that you want to have issued upon startup.
It is also a good place for “troubleshooting” commands: if, for example, the Samba daemon
refuses to come up even after checking that it was set up to initialize at boot, simply place
the line /etc/init.d/samba start in the /etc/rc.local script.
Network Configuration Files
1) /etc/hosts
The main purpose of this file is to resolve hostnames that cannot be resolved any other
way. It can also be used to resolve hostnames on small networks with no DNS server.
Regardless of the type of network the computer is on, this file should contain a line
specifying the IP address of the loopback device (127.0.0.1) as localhost.localdomain.

Aliases
The biggest benefit of using the /etc/hosts file is its ability to assign various aliases to an
IP address. An alias can be almost any alphanumeric combination that is easy to remember
and simple to type. For example, almost any name is much easier to remember than
192.168.0.23.

Open /etc/hosts.
#vi /etc/hosts
You will see the following line
127.0.0.1 localhost.localdomain localhost

Normally, this line is the first entry in the hosts file, and it should remain so. There’s an
IP address (in this case, the loopback address), a hostname.domain name, and an alias.
The hostname is the name of the computer that you’re using, the domain is the domain of
your network, and the alias is a user-defined name for the computer. Let's enter a new
alias for the machine that you’re using. Let's say that the IP address of your machine is
192.168.0.23, the name of the machine is itdeptt, the domain is jmit, and the alias that
you’ve chosen is enggclg. This entry would appear like:
192.168.0.23 itdeptt.jmit enggclg

With this file in place, you can perform all of the networking functions by replacing
192.168.0.23 with enggclg.
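For example, assuming the entry above, the alias can be used wherever the IP address would be:
[root@jmit ~]# ping enggclg
[root@jmit ~]# ssh enggclg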

2)/etc/resolv.conf
This file specifies the IP addresses of DNS servers and the search domain. The
resolv.conf file is used by the name resolver program. It gives the address of your name
server (if you have one) and your domain name (if you have one). You will have a
domain name if you are on the Internet.

A sample resolv.conf file for the system teji.jmit.com has an entry for the domain name,
which is jmit.com (teji is the name of an individual machine):

domain jmit.com
If a name server is used on your network, you should add a line that gives its IP address:

domain jmit.com
nameserver 192.168.0.1
If there are multiple name servers, which is not unusual on a larger network, each name
server should be specified on its own line.
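For example, a resolv.conf with two name servers would simply repeat the nameserver line (the second address here is only illustrative):

domain jmit.com
nameserver 192.168.0.1
nameserver 192.168.0.2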
3) /etc/sysconfig/network-scripts/ifcfg-<interface-name>
For each network interface, there is a corresponding interface configuration script. Each
of these files provides information specific to a particular network interface.
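As an illustration, a minimal static configuration for the first Ethernet interface would be kept in /etc/sysconfig/network-scripts/ifcfg-eth0 and might look like this (the values are only examples, not from a real installation):
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.23
NETMASK=255.255.255.0
ONBOOT=yes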
/etc/sysconfig/network
The /etc/sysconfig/network file is used to specify information about the desired
network configuration. By default, it contains the following options:
NETWORKING=boolean
A Boolean to enable (yes) or disable (no) the networking. For example:
NETWORKING=yes
HOSTNAME=value
The hostname of the machine. For example:
HOSTNAME=jmit.radaur.com
GATEWAY=value
The IP address of the network's gateway. For example:
GATEWAY=192.168.1.1

4) /etc/host.conf
The system uses the host.conf file to resolve hostnames. It usually contains two lines that
look like this:
order hosts, bind
multi on
These tell the system to first check the /etc/hosts file, then check the nameserver (if one
exists) when trying to resolve a name. The multi entry lets you have multiple IP addresses
for a machine in the /etc/hosts file (which happens with gateways and machines on more
than one network).
If your /etc/host.conf file looks like these two lines, you don't need to make any changes
at all.

5) /etc/networks
The /etc/networks file lists the names and IP addresses of your own network and other
networks you connect to frequently. This file is used by the route command. One
advantage of this file is that it lets you call remote networks by name, so instead of typing
149.23.24, you can type jmit_net.

The /etc/networks file should have an entry for every network that will be used with the
route command if you plan on using the network name as an identifier. If there is no
entry, errors will be generated, and the network won't work properly. On the other hand,
if you don't need to use a network name instead of its IP address, then you can skip the
/etc/networks file.

#vi /etc/networks
loopback 127.0.0.0

localnet 147.13.2.0

jmit_net 197.32.1.0
it_net 12.0.0.0
At a minimum, you should have a loopback and localnet address in the file.
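As an illustration (the gateway address is assumed), a route to one of these networks can then be added by name instead of by number, and the route command looks the name up in /etc/networks:
[root@jmit ~]# route add -net jmit_net netmask 255.255.255.0 gw 192.168.0.1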

6) /etc/protocols
UNIX systems use the /etc/protocols file to identify all the transport protocols available
on the system and their respective protocol numbers. Usually, this file is not modified but
is maintained by the system and updated automatically as part of the installation
procedure when new software is added.

The /etc/protocols file contains the protocol name, its number, and any alias that may be
used for that protocol. A sample /etc/protocols file looks like this:

# Internet protocols (IP)

ip 0 IP

icmp 1 ICMP

tcp 6 TCP

udp 17 UDP

7) /etc/services
The /etc/services file identifies the existing network services. This file is maintained by
software as it is installed or configured.

This file consists of the service name, a port number, and the protocol type. The port
number and protocol type are separated by a slash, following the conventions mentioned
in previous chapters. Any optional service alias names follow. Here's a short extract from
a sample /etc/services file:

# network services
echo 7/tcp
echo 7/udp
ftp 21/tcp
telnet 23/tcp
smtp 25/tcp mail mailx
tftp 69/udp

You shouldn't change this file at all, but you do need to know what it is and why it is
there to help you understand TCP/IP a little better.

User Accounts and Groups administration files


On Red Hat Enterprise Linux, information about user accounts and groups is stored in
several text files within the /etc/ directory. When a system administrator creates new
user accounts, these files must either be edited manually or applications must be used to
make the necessary changes. This section documents the files in the /etc/ directory that
store user and group information under Red Hat Linux.

User accounts administration files


1) /etc/passwd
The /etc/passwd file contains basic user attributes. This is an ASCII file that contains an
entry for each user. Each entry defines the basic attributes applied to a user.
Another method of storing account information is with the shadow password format. As
with the traditional method, this method stores account information in the /etc/passwd file
in a compatible format. However, the password is stored as a single “x” character (i.e. the
password is not actually stored in this file). A second file, called “/etc/shadow”, contains the
encrypted password as well as other information such as account or password expiration
values, etc. The /etc/shadow file is readable only by the root account and is therefore less
of a security risk.
[root@jmit ~]# vi /etc/passwd
tajinder:x:515:500::/home/tajinder:/bin/bash
Each field in a passwd entry is separated with “:” colon characters, and is as follows:
• Username up to 8 characters. Case-sensitive, usually all lowercase.
• An “X” in the password field. Passwords are stored in the “/etc/shadow” file.
• Numeric user id. This is assigned by the “adduser” script. UNIX uses this field,
plus the following group field, to identify which files belong to the user.
• Numeric group id. Red Hat uses group id’s in a fairly unique manner for
enhanced file security. Usually the group id will match the user id.
• Full name of the user.
• User’s home directory. Usually /home /username (e.g./home /shah). All user’s
personal files, web pages, mail forwarding, etc. will be stored here.
• User’s “shell account”. Often set to “/bin/bash” to provide access to the bash shell
(my personal favorite shell).

2) /etc/shadow
[root@jmit ~]# vi /etc/shadow
tajinder:$1$vQs3BuGW$NKFzioQnrNRsqiEsQeJyt.:13516:0:99999:7:7::
Here each field is also separated with a colon. The first field is the login user's name; the next
field is the password in encrypted form (if no password has been set, !! is displayed).
The next field is the day the password was last changed (counted in days from 1/1/1970),
the next field is the minimum number of days before the password can be changed again,
the next field is the maximum password age, the next field is the number of days before the
password expires on which a warning to change it starts to appear, the next field is the number
of days to wait after the password expires before disabling the account (often left blank), and
the last field is the expiry date (often blank).

Groups administration files


1) /etc/group
Open /etc/group in a vi editor. Scroll down and add a new entry at the end of the file.
Syntax: groupname:x:groupID:group members
Type a group name of your own choice. In the password field type x. Check the highest group
identification number in use; suppose it is 590, then assign 591. Add the group members to this
entry.
Here is an example line from /etc/group:
general:x:542:teji,seema,akshra
This line shows that the general group is using shadow passwords, has a GID of 542, and
that teji, seema, and akshra are members.

2) /etc/gshadow
The /etc/gshadow file is readable only by the root user and contains an encrypted
password for each group, as well as group membership and administrator information.
Just as in the /etc/group file, each group's information is on a separate line. Each of these
lines is a colon delimited list including the following information:
• Group name — The name of the group. Used by various utility programs as a
human-readable identifier for the group.
• Encrypted password — The encrypted password for the group. If set, non-
members of the group can join the group by typing the password for that group
using the newgrp command. If the value of this field is !, then no user is allowed
to access the group using the newgrp command. A value of !! is treated the same
as a value of ! — however, it also indicates that a password has never been set
before. If the value is null, only group members can log into the group.
• Group administrators — Group members listed here (in a comma delimited list)
can add or remove group members using the gpasswd command.
• Group members — Group members listed here (in a comma delimited list) are
regular, non-administrative members of the group.
Here is an example line from /etc/gshadow:
general:!!:seema:teji,akshra
This line shows that the general group has no password and does not allow non-members
to join using the newgrp command. In addition, seema is a group administrator, and teji
and akshra are regular, non-administrative members.
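The gpasswd command is the usual way to maintain these /etc/gshadow fields rather than
editing the file directly. A minimal sketch, assuming the general group and the users from
the example above exist:
[root@jmit ~]# gpasswd -A seema general
makes seema a group administrator, while
[root@jmit ~]# gpasswd general
prompts for a group password, after which non-members can join the group with the
newgrp command.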
Chapter 9
Compression, decompression, Backing up and restoring files in Linux
9.1 Compression and Decompression
Compression programs
Compression programs are sometimes used in conjunction with transport programs to
reduce the size of backups. This is not always a good idea. Adding compression to a
backup adds extra complexity to the backup and as such increases the chances of
something going wrong.
compress
compress is the standard UNIX compression program and is found on every UNIX
machine (well, I don't know of one that doesn't have it). The basic format of the compress
command is
[root@localhost]# compress filename
The file with the name filename will be replaced with a file with the same name but with
an extension of .Z added, and that is smaller than the original (it has been compressed).
A compressed file is uncompressed using the uncompress command or the -d switch of
compress.
[root@localhost]# uncompress filename or compress -d filename
Example:
[root@localhost]# ls -l ext349*
-rw-r----- 1 teji 17340 Jul 16 14:28 ext349
[root@localhost]# compress ext349
[root@localhost]# ls -l ext349*
-rw-r----- 1 teji 5572 Jul 16 14:28 ext349.Z
[root@localhost]# uncompress ext349.Z
[root@localhost]# ls -l ext349*
-rw-r----- 1 teji 17340 Jul 16 14:28 ext349
By convention, files compressed with gzip are given the extension .gz, files compressed
with bzip2 are given the extension .bz2, and files compressed with zip are given the
extension .zip. Files compressed with gzip are uncompressed with gunzip, files
compressed with bzip2 are uncompressed with bunzip2, and files compressed with zip
are uncompressed with unzip.

Bzip2 and Bunzip2


To use bzip2 to compress a file, type the following command at a shell prompt:
[root@localhost]# bzip2 filename
The file will be compressed and saved as filename.bz2.
To expand the compressed file, type the following command:
[root@localhost]# bunzip2 filename.bz2
The filename.bz2 is deleted and replaced with filename.
You can give bzip2 several files at the same time by listing them with a space between
each one:
[root@localhost]# bzip2 file1 file2 file3
Each file is compressed individually, producing file1.bz2, file2.bz2 and file3.bz2. Note
that bzip2 by itself does not bundle several files or a directory such as /usr/work/school
into a single archive; for that it is combined with tar, as in the sketch below.
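To get a single bzip2-compressed archive of several files and a directory, the usual approach
is to let tar (covered later in this chapter) do the bundling and bzip2 the compression. A
minimal sketch, assuming the files and the /usr/work/school directory exist; GNU tar's j
option calls bzip2 for you:
[root@localhost]# tar -cjvf school.tar.bz2 file1 file2 file3 /usr/work/school
[root@localhost]# tar -tjvf school.tar.bz2
The first command creates school.tar.bz2; the second lists its contents without extracting
anything.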

gzip and gunzip


gzip is a newer addition to the UNIX compression family. It works in basically the same
way as compress but uses a different (and better) compression algorithm. It uses an
extension of .gz, and the program to uncompress a gzip archive is gunzip.
To use gzip to compress a file, type the following command at a shell prompt:
[root@localhost]# gzip filename
The file will be compressed and saved as filename.gz.
To expand the compressed file, type the following command:
[root@localhost]# gunzip filename.gz
The filename.gz is deleted and replaced with filename.
You can give gzip several files at the same time by listing them with a space between
each one:
[root@localhost]# gzip file1 file2 file3
Each file is compressed individually, producing file1.gz, file2.gz and file3.gz. With the -r
option, gzip also descends into any directories listed (e.g. /usr/work/school, assuming it
exists), again compressing each file it finds separately. Like bzip2, gzip does not bundle
several files into a single archive; combine it with tar for that, as shown below.
Example:
[root@localhost]# gzip ext349
[root@localhost]# ls -l ext349*
-rw-r----- 1 jonesd 4029 Jul 16 14:28 ext349.gz
[root@localhost]# gunzip ext349
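In the same way, a single gzip-compressed archive of several files or directories is normally
produced by combining tar with gzip; GNU tar's z option calls gzip. A sketch, assuming the
files and directory exist:
[root@localhost]# tar -czvf school.tar.gz file1 file2 file3 /usr/work/school
If you want to keep the original file alongside the compressed copy, gzip -c writes the
compressed data to standard output instead of replacing the file:
[root@localhost]# gzip -c ext349 > ext349.gz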

Zip and Unzip


To compress a file with zip, type the following command:
[root@localhost]# zip -r filename.zip filesdir
In this example, filename.zip represents the file you are creating and filesdir represents
the directory you want to put in the new zip file. The -r option specifies that you want to
include all files contained in the filesdir directory recursively.
To extract the contents of a zip file, type the following command:
[root@localhost]# unzip filename.zip
You can use zip to compress multiple files and directories at the same time by listing
them with a space between each one: zip -r filename.zip file1 file2 file3 /usr/work/school
The above command compresses file1, file2, file3, and the contents of the
/usr/work/school directory (assuming this directory exists) and places them in a file
named filename.zip.
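To check what is inside a zip archive before extracting it, unzip can list the contents. A small
illustration using the archive name from the example above:
[root@localhost]# unzip -l filename.zip
This prints each stored file with its uncompressed size and date, without writing anything to
disk.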

9.2 BACKING UP AND RESTORING FILES


Characteristics of good backup strategy
Ease of use
If backups are easy to use, you will use them. AUTOMATE!! It should be as easy as
placing a tape in a drive, typing a command and waiting for it to complete. In fact you
probably shouldn't have to enter the command, it should be automatically run.
Time efficiency
Obtain a balance to minimize the amount of operator, real and CPU time taken to carry
out the backup and to restore files. The typical tradeoff is that a quick backup implies a
longer time to restore files. Keep in mind that you will in general perform more backups
than restores.
On some large sites, particular backup strategies fail because there aren’t enough hours in
a day. Backups scheduled to occur every 24 hours fail because the previous backup still
hasn't finished. This obviously occurs at sites which have large disks.
Ease of restoring files
The reason for doing backups is so you can get information back. You will have to be
able to restore information ranging from a single file to an entire file system. You need to
know on which media the required file is and you need to be able to get to it quickly. This
means that you will need to maintain a table of contents and label media carefully.
Tolerance of faulty media
A backup strategy should be able to handle faults in the media as well as physical dangers.
Portability to a range of platforms
There may be situations where the data stored on backups must be retrieved onto a
different type of machine. The ability for backups to be portable to different types of
machine is often an important characteristic.

Consideration for a backup strategy


The available commands
The characteristics of the available commands limit what can be done.
Available hardware
The capacity of the backup media to be used also limits how backups are performed. In
particular how much information can the media hold?
Maximum expected size of file systems
The amount of information required to be backed up and whether or not the combination
of the available software and hardware can handle it. A suggestion is that individual file
systems should never contain more information than can fit easily onto the backup media.
Importance of the data
The more important the data is, the more important that it be backed up regularly and
safely.
Level of data modification
The more data being created and modified, the more often it should be backed up. For
example the directories /bin and /usr/bin will hardly ever change so they rarely need
backing up. On the other hand directories under /home are likely to change drastically
every day.

The components of backup


There are basically three components to a backup strategy:
Scheduler
Decides when the backup is performed.
Transport
The command that moves the backup from the disks to the backup media.
Media
The actual physical device on which the backup is stored.
Scheduler
The scheduler is the component that decides when backups should be performed and how
much should be backed up. The scheduler could be the root user or a program, usually
cron. The amount of information that the scheduler backs up falls into the following
categories.
Full backups
all the information on the entire system is backed up. This is the safest type but also the
most expensive in machine and operator time and the amount of media required.
Partial backups
only the busier and more important file systems are backed up. One example of a partial
backup might include configuration files (like /etc/passwd), user home directories and the
mail and news spool directories. The reasoning is that these files change the most and are
the most important to keep a track of. In most instances this can still take substantial
resources to perform.
Incremental backups
only those files that have been modified since the last backup are backed up. This method
requires fewer resources, but a large number of incremental backups makes it more difficult
to locate the version of a particular file you may desire.
Transport
The transport is a program that is responsible for placing the backed-up data onto the
media. There are quite a number of different programs that can be used as transports.
Some of the standard UNIX transport programs are examined later in this chapter.
There are two basic mechanisms used by transport programs to obtain the information
from the disk: as a raw image, or through the file system (file by file).
Image transports
An image transport program bypasses the file system and reads the information straight
off the disk using the raw device file. To do this, the transport program needs to
understand how the information is structured on the disk. This means that image
transport programs are tied very closely to specific file systems, since different file
systems structure information differently.
Once read off the disk, the data is written byte by byte from disk onto tape. This method
generally means that backups are quicker than the "file by file" method. However,
restoration of individual files generally takes much more time.
Transport programs that use this method include dd, volcopy and dump.
File by file
Commands performing backups using this method use the system calls provided by the
operating system to read the information. Since almost any UNIX system uses the same
system calls, a transport program that uses the file by file method (and the data it saves) is
more portable. File by file backups generally take more time but it is generally easier to
restore individual files. Commands that use this method include tar and cpio.

Backing up FAT and EXT2 file systems


If you are like most people using this text then chances are that your Linux computer
contains both FAT and EXT2 file systems. The FAT file systems will be used by the
version of Windows you were originally running while the EXT2 file systems will be
those used by Linux. Of course, being the trainee computing professional you are,
backups of your personal computer are performed regularly. It would probably be useful
to be able to back up both the FAT and EXT2 file systems at the same time, without
having to switch operating systems. Doing this from Windows isn't going to work:
Windows still doesn't read the EXT2 file system. So you will have to do it from Linux.
Which type of transport do you use for this, image or file by file? Here's a little excerpt
from the man page for the dump command, one of the image transports available on
Linux: "It might be considered a bug that this version of dump can only handle ext2 file
systems. Specifically, it does not work with FAT file systems." If you think about it, this
shortcoming is kind of obvious. The dump command does not use the kernel file system
code. It is an image transport. This means it must know everything about the file system
it is going to back up: how directories are structured, how the data blocks for files are
stored on the system, how file metadata (e.g. permissions, file owners etc.) is stored, and
many more questions. The people who wrote dump built this information into the
command. They didn't include any information about the FAT file system, so dump
can't backup FAT file systems. File by file transports, on the other hand,
can quite happily backup any file system which you can mount on a Linux machine. In
this situation the virtual file system takes care of all the differences and all the file-by-file
transport knows about are what appear to be normal Linux files.
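As a concrete illustration of the file-by-file approach, the FAT partition can simply be
mounted and handed to tar. This is only a sketch; /dev/hda1 and the mount point /mnt/dos
are assumptions and will differ on your machine:
[root@localhost]# mount -t vfat /dev/hda1 /mnt/dos
[root@localhost]# tar -czvf /tmp/dos-backup.tar.gz /mnt/dos
Because tar works through the kernel's virtual file system, it neither knows nor cares that
the files live on a FAT partition.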

Media
Backups are usually made to tape-based media. There are different types of tape. Tape
media can differ in
• physical size and shape, and
• the amount of information that can be stored (from 100Mb up to 8Gb).
Different types of media can also be more reliable and efficient. The most common type
of backup media used today is the 4 millimeter DAT tape.

Selecting a backup strategy


Your Linux system's hard disk contains everything needed to keep the system running,
as well as other files that you need, such as documents and databases. If the hard disk
crashes, you need backups of these files so that you can recover quickly and bring the
system back to a normal state. For this purpose you have to decide which files to back up
and what storage media to use. Your choice of backup media depends on the amount of
data you have to back up.
For a small amount of data, such as system configuration files, you can use floppy disks as
the backup media. For backing up a single user's directory, you can use Zip disks as the
backup media. For backing up servers, you can use a tape drive; a tape drive can store
several gigabytes of data, often an entire file system on a single tape.
For example, if you are using your Linux system as a learning tool, then you only have to
back up the system files required for Linux configuration, saving the configuration files to
floppy every time you change one of them. On the other hand, if you are using the Linux
system as an office server that provides shared file storage for many users, a failure has a
much bigger impact on your business. In this case you have to back up all the files every
week and any changed or new files every day.
Common BACKUP devices
Backup device Linux Device Name
IDE zip drive /dev/hdc2
Floppy drive /dev/fd0
Ftape drive /dev/qft0
SCSI tape drive /dev/st0
SCSI zip drive /dev/sda

Back up utilities for Linux


1) CTAR: - Backup and recovery software from UniTrends Software Corporation.
2) Arkeia: - Backup and recovery software from Knox software.
3) BRU: - Backup and restore utility from the TOLIS group.
4) LONE-TAR: - Tape backup software package from Lone tar software.
5) Brightstor ARC serve: - Backup for Linux from Data protection technology.

Command        Availability         Characteristics
dump/restore   BSD systems          image backup, allows multiple volumes; not included on most AT&T systems
tar            almost all systems   file by file; most versions do not support multiple volumes; intolerant of errors
cpio           AT&T systems         file by file; can support multiple volumes (some versions don't)
dump and restore
A favorite amongst many Systems Administrators, dump is used to perform backups and
restore is used to retrieve information from the backups.
These programs are of BSD UNIX origin and have not made the jump across to SysV
systems. Most SysV systems do not come with dump and restore. The main reason is that
since dump and restore bypass the file system, they must know how the particular file
system is structured. So you simply can't recompile a version of dump from one machine
onto another (unless they use the same file system structure).
Many recent versions of systems based on SVR4 (the latest version of System V UNIX)
come with versions of dump and restore.

dump on Linux
There is a version of dump for Linux; however, you may not have it installed on your
system. RedHat Linux does include an RPM package which contains
dump. If your system doesn't have dump and restore installed, you should install it now.
RedHat provides a couple of tools to install these packages: rpm and glint. glint is the
GUI tool for managing packages. Refer to the RedHat documentation for more details on
using these tools.
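A quick way to check whether the package is already present is to query the RPM database.
The package name dump is how RedHat ships it; adjust the name if your distribution
packages it differently:
[root@localhost]# rpm -q dump
If the package is installed, rpm prints its name and version; otherwise it reports that the
package is not installed.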
dump
The command line format for dump is
[root@localhost]# dump [ options [ arguments ] ] file system
[root@localhost]# dump [ options [ arguments ] ] filename
Arguments must appear after all options and must appear in a set order.
dump is generally used to backup an entire partition (file system). If given a list of
filenames, dump will backup the individual files.
dump works on the concept of levels (it uses levels 0 through 9). A dump level of 0 means
that all files will be backed up. A dump level of 1...9 means that all files that have changed
since the last dump of a lower level will be backed up. The table below shows the options
for dump.

Options          Purpose
0-9              dump level
a archive-file   archive-file will be a table of contents of the archive
f dump-file      specify the file (usually a device file) to write the dump to; a - specifies standard output
u                update the dump record (/etc/dumpdates)
v                after writing each volume, rewind the tape and verify; the file system must not be used during the dump or the verification

Arguments for dump


There are other options; refer to the man page on your system for more information.
Example: [root@localhost ~]# dump 0dsbfu 54000 6000 126 /dev/rst2 /usr
This performs a full backup of the /usr file system on a 2.3 Gig 8mm tape connected to
device rst2. The numbers here are special information about the tape drive the backup is
being written on.
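A common way to use the levels is a weekly cycle: a full level 0 dump on Sunday and level
1 dumps on the other days, so each daily dump contains only the files changed since the full
dump. A minimal sketch, assuming a tape drive on /dev/st0 and a /home file system; the u
option keeps /etc/dumpdates up to date so that the levels work:
[root@localhost]# dump 0uf /dev/st0 /home
on Sunday, and
[root@localhost]# dump 1uf /dev/st0 /home
on the other days of the week.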

The restore command


The purpose of the restore command is to extract files archived using the dump
command. restore provides the ability to extract single individual files, directories and
their contents and even an entire file system.
[root@localhost ~]#restore -irRtx [ modifiers ] [ filenames ]

Arguments   Purpose
i           interactive; directory information is read from the tape, after which you can browse through the directory hierarchy and select files to be extracted
r           restore the entire tape; should only be used to restore an entire file system or to restore an incremental tape after a full level 0 restore
t           table of contents; if no filename is provided, the root directory is listed, including all subdirectories (unless the h modifier is in effect)
x           extract named files; if a directory is specified, it and all its sub-directories are extracted
Arguments for the restore Command.
Modifiers Purpose
a archive-file use an archive file to search for a file's
location. Convert contents of the dump
tape to the new file system format
D turn on debugging
H prevent hierarchical restoration of sub-
directories
V verbose mode
f dump-file specify dump-file to use, - refers to
standard input
sn skip to the nth dump file on the tape
Argument modifiers for the restore Command.
Using dump and restore without a tape:
Not many of you will have tape drives or similar backup media connected to your Linux
machine. However, it is important that you experiment with the dump and restore
commands to gain an understanding of how they work. This section offers a little kludge
which will allow you to use these commands without a tape drive. The method relies on
the fact that UNIX accesses devices through files.
Our practice file system
For all our experimentation with the commands in this chapter we are going to work with
a practice file system. Practicing backups with hard-drive partitions is not going to be all
that efficient as they will almost certainly be very large. Instead we are going to work
with a floppy drive.
The first step then is to format a floppy with the ext2 file system. By now you should
know how to do this. Here's what I did to format a floppy and put some material on it.
[root@localhost]# /sbin/mke2fs /dev/fd0
mke2fs 1.10, 24-Apr-97 for EXT2 FS 0.5b, 95/08/09
Linux ext2 filesystem format
Filesystem label=
360 inodes, 1440 blocks
72 blocks (5.00%) reserved for the super user
First data block=1
Block size=1024 (log=0)
Fragment size=1024 (log=0)
1 block group
8192 blocks per group, 8192 fragments per group
360 inodes per group
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
[root@localhost]# mount -t ext2 /dev/fd0 /mnt/floppy
[root@localhost]# cp /etc/passwd /etc/issue /etc/group /var/log/messages /mnt/floppy
Doing a level 0 dump
So I've copied some important stuff to this disk. Let's assume I want to do a level 0 dump
of the /mnt/floppy file system. How do I do it?
[root@localhost]# /sbin/dump 0f /tmp/backup /mnt/floppy
DUMP: Date of this level 0 dump: Sun Jan 25 15:05:11 1998
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/fd0 (/mnt/floppy) to /tmp/backup
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 42 tape blocks on 0.00 tape(s).
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: DUMP: 29 tape blocks on 1 volumes(s)
DUMP: Closing /tmp/backup
DUMP: DUMP IS DONE
The arguments to the dump command are
0
This tells dump I wish to perform a level 0 dump of the file system.
f
This is telling dump that I will tell it the name of the file that it should write the backup
to.
/tmp/backup
This is the name of the file I want the backup to go to. Normally, this would be the device
file for a tape drive or other backup device. However, since I don't have one I'm telling it
a normal file.
/mnt/floppy
This is the file system I want to backup.
What this means is that I have now created a file, /tmp/backup, which contains a level 0
dump of the floppy.
[root@localhost]# ls -l /tmp/backup
-rw-rw-r-- 1 root tty 20480 Jan 25 15:05 /tmp/backup
Restoring the backup
Now that we have a dump archive to work with, we can try using the restore command to
retrieve files.
[root@localhost]# /sbin/restore -if /tmp/backup
restore > ?
Available commands are:
ls [arg] - list directory
cd arg - change directory
pwd - print current directory
add [arg] - add `arg' to list of files to be extracted
delete [arg] - delete `arg' from list of files to be extracted
extract - extract requested files
setmodes - set modes of requested directories
quit - immediately exit program
what - list dump header information
verbose - toggle verbose flag (useful with ``ls'')
help or `?' - print this list
If no `arg' is supplied, the current directory is used
restore > ls
.:
group issue lost+found/ messages passwd

restore > add passwd


restore > extract
You have not read any tapes yet.
Unless you know which volume your file(s) are on you should start
with the last volume and work towards the first.
Specify next volume #: 1
Mount tape volume 1
Enter ``none'' if there are no more tapes
otherwise enter tape name (default: /tmp/backup)
set owner/mode for '.'? [yn] y
restore > quit
[root@localhost]# ls -l passwd
-rw-r--r-- 1 root root 787 Jan 25 15:00 passwd

The tar command


tar is a general purpose command used for archiving files. It takes multiple files and
directories and combines them into one large file. By default the resulting file is written
to a default device (usually a tape drive). However the resulting file can be placed onto a
disk drive.
[root@localhost]# tar -function[modifier] device [files]
When using tar, each individual file stored in the final archive is preceded by a header
that contains approximately 512 bytes of information. Also the end of the file is always
padded so that it occurs on an even block boundary. For this reason, every file added into
the tape archive has on average an extra .75Kb of padding per file.
Arguments Purpose
function A single letter specifying what should be done, values listed in
below table.
modifier Letters that modify the action of the specified function, values
listed in below table.
files The names of the files and directories to be restored or archived.
If it is a directory then EVERYTHING in that directory is
restored or archived
Arguments to tar.
Function Purpose
c create a new tape, do not write after last file
r Replace, the named files are written onto the end of the tape
t table, information about specified files is listed, similar in output
to the command ls -l, if no files specified all files listed
u* update, named files are added to the tape if they are not already
there or they have been modified since being previously written
x extract, named files restored from the tape, if the named file
matches a directory all the contents are extracted recursively

Values of the function argument for tar.


Modifier Purpose
v verbose, tar reports what it is doing and to what
w tar prints the action to be taken, the name of the file and
waits for user confirmation
f file, causes the device parameter to be treated as a file
m modify, tells tar not to restore the modification times as
they were archived but instead to use the time of
extraction
o ownership, use the UID and GID of the user running tar
not those stored on the tape

Values of the modifier argument for tar.


If the f modifier is used it must be the last modifier used. Also tar is an example of a UNIX
command where the - character is not required to specify modifiers.
Example:
[root@localhost]# tar -xvf temp.tar (or, without the dash: tar xvf temp.tar)
Extracts all the contents of the tar file temp.tar

[root@localhost]# tar -xf temp.tar hello.dat


Extracts the file hello.dat from the tar file temp.tar

[root@localhost]# tar -cvf /dev/rmt0 /home


Archives all the contents of the /home directory onto tape, overwriting whatever is there

Syntax: tar options destination source


Example: Backing up single volume archive
Log in as the super user, insert a floppy in the floppy drive and select the files you want to
back up. Open a terminal and type the following command:
[root@localhost]# tar zcvf /dev/fd0 /root/linux

To view the contents of the archive created on the floppy drive


Type the following command on the terminal:
[root@localhost]# tar ztf /dev/fd0

How to extract the files from a tar backup


Create a temp folder:
[root@localhost]# mkdir /temp
Change the directory to /temp by using the following command:
[root@localhost]# cd /temp
Now issue the command:
[root@localhost]# tar zxvf /dev/fd0
The contents of the tar backup are now extracted into the temp directory; a folder named
root/linux (the path that was archived) is created inside the /temp directory.

How to restore the backup files to a specified location


[root@localhost]# tar zxvf /dev/fd0 -C /root
where /root is the location in which you want to restore the files (the -C option makes tar
change to that directory before extracting).

Example: Backing up and restoring Multi volume archive


Log in as the super user. Sometimes the storage space needed is more than the capacity of
the storage medium. In that case, to span the archive across multiple floppies or tapes, we
can use the M option for a multi-volume archive. Multi-volume archives cannot be
compressed, so the z option is not used here. Open a terminal and type the following
command:
[root@localhost]# tar cvfM /dev/fd0 /root/Linuxbook/doc *
After execution of this command, when the first floppy is filled, tar asks for a second floppy
by displaying the following message:
Prepare volume #2 for /dev/fd0 and hit return
After pressing Enter, tar starts copying the archive onto the second floppy, and prompts you
again if more floppies are required.

How to restore multi volume archive


Type the following command to change the directory to /temp:
[root@localhost]# cd /temp
Now issue the following command:
[root@localhost]# tar xvfM /dev/fd0

How to check how many floppies are required for a backup


du -s command is used to determine the amount of storage required to archive a directory
[root@localhost] # du -s /root/linuxbook
544 /root/linuxbook
du -s command will display size in kilobytes.

Example:-How to backup on tapes


Tape drives can hold gigabytes of data. You can write several tar archives one after another
on the same tape without rewinding it. For this you have to use the non-rewinding SCSI
tape device, which is named /dev/nst0 in Linux. This tape device writes an end-of-file
marker after each archive; the marker separates one archive from the next.
The following command is used for rewinding the tape:
[root@localhost]# mt -f /dev/nst0 rewind
The following command extracts files from the first archive into the current working
directory:
[root@localhost]# tar xvf /dev/nst0
The following command moves to the next archive and positions the tape at its beginning:
[root@localhost]# mt -f /dev/nst0 fsf 1

Example:-Explaining crontab file


For performing automated backups we first have to learn about the crontab file. The
crontab program allows users to create jobs that will run at specified times. Individual
users have their own crontab, and the system as a whole has a crontab (/etc/crontab) that
can be modified with super user access.
There are different options with crontab:
crontab -r removes the crontab file.
crontab -l lists the contents of the crontab file.
crontab -e edits the current crontab.

There are six fields in each entry, separated by spaces or tabs. The first five fields specify
when the command is to be run; the sixth field is the command itself. (In the system-wide
/etc/crontab a user name appears between the time fields and the command, as in the
example below.) Issue the following command on a terminal to see the contents of the
crontab file:
[root@localhost]# vi /etc/crontab
01 * * * * root run-parts /etc/cron.hourly
Description of the above entry:
First field is the minute: 0-59
Second is the hour: 0-23
Third is the day of the month: 1-31
Fourth is the month: 1-12
Fifth is the day of the week: 0-6, where 0 denotes Sunday.
* represents "all".
By default cron mails the output of a job to the user account that runs it. If we do not
want this, we add the following at the end of the command in the crontab entry:
>/dev/null 2>&1

Example: How to Perform Automated backups


Now we can use crontab to perform automated backups. First we have to design the
strategy.
1) Suppose every Monday at 8:30 am we want to back up the whole disk to tape. Add the
following entry to the crontab file:
30 8 * * 1 tar zcvf /dev/st0 /
Here / represents the root directory.
2) From Monday to Friday we want to back up only those files that have been changed
during the day. Add the following entry:
45 7 * * 1-5 tar zcvf /dev/st0 `find / -mtime -1 -type f -print`
Here the find command searches for the files that have been modified during the last day,
and tar backs up all of those files to the tape.
3) Save the entries in a file (for example one named backups), exit, and load it with the
following command on the terminal: crontab backups
The dd command
The man page for dd lists its purpose as being "copy and convert data". Basically dd takes
input from one source and sends it to a different destination. The source and destination can
be device files for disk and tape drives, or normal files.
The basic format of dd is
[root@localhost Shell]# dd [option = value ....]
Table lists some of the different options available.
Options for dd.
Option Purpose
if=name input file name (default is standard input)
of=name output file name (default is standard output)
ibs=num the input block size in num bytes (default is 512)
obs=num the output block size in num bytes (default is 512)
bs=num set both input and output block size
skip=num Skip num input records before starting to copy
files=num copy num files before stopping (used when input is
from magnetic tape)
conv=ascii convert EBCDIC to ASCII
conv=ebcdic convert ASCII to EBCDIC
conv=lcase make all letters lowercase
conv=ucase make all letters uppercase
conv=swab swap every pair of bytes

Example:
[root@localhost]# dd if=/dev/hda1 of=/dev/rmt4
With all the default settings, this copies the contents of /dev/hda1 (the first partition on the
first disk) to the tape drive of the system.
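dd is also handy for taking an exact image of a small device into an ordinary file and writing
it back later. A small sketch, reusing the practice floppy from earlier in this chapter:
[root@localhost]# dd if=/dev/fd0 of=/tmp/floppy.img bs=1024
[root@localhost]# dd if=/tmp/floppy.img of=/dev/fd0 bs=1024
The first command reads the whole floppy, block by block, into /tmp/floppy.img; the second
writes that image back onto a floppy.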

The mt command
The usual media used in backups is magnetic tape. Magnetic tape is a sequential media.
That means that to access a particular file you must pass over all the tape containing files
that come before the file you want. The mt command is used to send commands to a
magnetic tape drive that control the location of the read/write head of the drive.
[root@localhost]# mt [-f tapename] command [count]
Arguments   Purpose
tapename    raw device name of the tape device
command     one of the commands specified in the table below; not all commands are recognised by all tape drives
count       number of times to carry out the command
Parameters for the mt Command.
Commands Action
fsf move forward the number of files specified by
the count argument
asf move forward to file number count
rewind rewind the tape
retension wind the tape out to the end and then rewind
erase erase the entire tape
offline eject the tape
Commands Possible using the mt Command.
Example:
[root@localhost]# mt -f /dev/nrst0 asf 3
Moves to the third file on the tape
[root@localhost]# mt -f /dev/nrst0 rewind
[root@localhost]# mt -f /dev/nrst0 fsf 3
Same as the first command
The mt command can be used to put multiple dump/tar archive files onto the one tape.
Each time dump/tar is used, one file is written to the tape. The mt command can be used
to move the read/write head of the tape drive to the end of that file, at which time
dump/tar can be used to add another file.
Example:
[root@localhost]# mt -f /dev/rmt/4 rewind
Rewinds the tape drive to the start of the tape.
[root@localhost]# tar -cvf /dev/rmt/4 /home/jonesd
Backs up my home directory, after this command the tape will be automatically rewound.
[root@localhost]# mt -f /dev/rmt/4 asf 1
Moves the read/write head forward to the end of the first file.
[root@localhost]# tar -cvf /dev/rmt/4a /home/thorleym
Backup the home directory of thorleym onto the end of the tape drive.
There are now two tar files on the tape, the first containing all the files and directories
from the directory /home/jonesd and the second containing all the files and directories
from the directory /home/thorleym.

The cpio command


cpio is another facility for creating and reading archives. Unlike tar, cpio doesn’t
automatically archive the contents of directories, so it’s common to combine cpio with
find when creating an archive:
[root@localhost]# find . -print -depth | cpio -ov -Htar > archivename
This will take all the files in the current directory and the
directories below and place them in an archive called archivename. The -depth option
controls the order in which the filenames are produced and is recommended to prevent
problems with directory permissions when doing a restore. The -o option creates the
archive, the -v option prints the names of the files archived as they are added, and the -H
option specifies an archive format type (in this case it creates a tar archive). Another
common archive type is crc, a portable format with a checksum for error control.
To list the contents of a cpio archive, use
[root@localhost]# cpio -tv < archivename
To restore files, use:
[root@localhost]# cpio -idv < archivename
Here the -d option will create directories as necessary. To force cpio to extract files on
top of files of the same name that already exist (and have the same or later modification
time), use the -u option.
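cpio can also copy a directory tree straight to another location with its pass-through mode,
which is sometimes used instead of writing an archive file. A minimal sketch, assuming the
destination directory /backup/home exists and reusing the sample home directory from
earlier in the chapter:
[root@localhost]# find /home/jonesd -depth -print | cpio -pdv /backup/home
The -p option puts cpio into pass-through mode, -d creates directories as needed and -v
lists the files as they are copied.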
