RHCSA - RHCE Preparation Study Guide
RHCSA (Red Hat Certified System Administrator) is a certification exam from Red Hat, a
company that provides an open source operating system and software to the enterprise
community, along with support, training, and consulting services for organizations.
The RHCSA certification is obtained from Red Hat Inc. after passing the exam
(codename EX200). RHCSA is an upgrade to the RHCT (Red Hat Certified
Technician) exam, and this upgrade became necessary as Red Hat Enterprise Linux itself was
upgraded. The main difference between RHCT and RHCSA is that the RHCT exam is based on
RHEL 5, whereas the RHCSA certification is based on RHEL 6 and 7; the courseware of these
two certifications also differs to a certain extent.
A Red Hat Certified System Administrator (RHCSA) is expected to perform the following
core system administration tasks in Red Hat Enterprise Linux environments:
1. Understand and use essential tools for handling files, directories, command-line
environments, and system-wide / package documentation.
2. Operate running systems, even in different run levels, identify and control processes,
start and stop virtual machines.
3. Set up local storage using partitions and logical volumes.
4. Create and configure local and network file systems and their attributes (permissions,
encryption, and ACLs).
5. Set up, configure, and control systems, including installing, updating, and removing
software.
6. Manage system users and groups, along with use of a centralized LDAP directory for
authentication.
7. Ensure system security, including basic firewall and SELinux configuration.
To view fees and register for an exam in your country, check the RHCSA Certification page.
In this 15-article RHCSA series, titled Preparation for the RHCSA (Red Hat Certified System
Administrator) exam, we will cover the following topics on the latest releases of
Red Hat Enterprise Linux 7.
In this Part 1 of the RHCSA series, we will explain how to enter and execute commands with
the correct syntax in a shell prompt or terminal, and how to find, inspect, and use
system documentation.
Prerequisites:
At least a slight degree of familiarity with basic Linux commands such as:
1. cp command (copy files)
2. mv command (move or rename files)
3. touch command (create empty files or update the timestamp of existing ones)
4. rm command (delete files)
5. mkdir command (make directory)
The correct usage of some of these commands is exemplified in this article, and you can find
further information about each of them using the methods suggested here.
Though not strictly required to start, as we will be discussing general commands and methods
for information search in a Linux system, you should try to install RHEL 7 as explained in
the following article. It will make things easier down the road.
If we log into a Linux box using a text-mode login screen, chances are we will be dropped
directly into our default shell. On the other hand, if we login using a graphical user interface
(GUI), we will have to open a shell manually by starting a terminal. Either way, we will be
presented with the user prompt and we can start typing and executing commands (a command
is executed by pressing the Enter key after we have typed it).
Certain arguments, called options (usually preceded by a hyphen), alter the behavior of the
command in a particular way while other arguments specify the objects upon which the
command operates.
The type command can help us identify whether a given command is built into the
shell or provided by a separate package. This distinction matters because it determines
where we will find more information about the command: for shell built-ins we need to
look in the shell's man page, whereas other binaries have their own man pages.
In the examples above, cd and type are shell built-ins, while top and less are binaries external
to the shell itself (in this case, the location of the command executable is returned by type).
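The distinction can be checked with a quick sketch like the following (run in bash; the exact path printed for external binaries may vary by system):

```shell
# Shell built-ins are resolved by bash itself...
type cd
type type

# ...while external binaries live on disk, and type prints their location
type cut

# -t prints just the kind of command: builtin, file, alias, function, etc.
type -t cd
type -t cut
```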
exec command
Runs an external program that we specify. Note that in most cases this is better accomplished
by just typing the name of the program we want to run, but the exec command has one
special feature: rather than creating a new process that runs alongside the shell, the new process
replaces the shell, as can be verified afterwards.
When the new process terminates, the shell terminates with it. Run exec top and then hit the
q key to quit top. You will notice that the shell session ends as soon as top does.
export command
Exports variables to the environment of subsequently executed commands.
history Command
Displays the command history list with line numbers. A command in the history list can be
repeated by typing its number preceded by an exclamation sign. If we need to edit
a command in the history list before executing it, we can press Ctrl + r and start typing the first
letters associated with the command. When we see the command completed automatically,
we can edit it as per our current need:
This list of commands is kept in our home directory in a file called .bash_history. The
history facility is a useful resource for reducing the amount of typing, especially when
combined with command line editing. By default, bash stores the last 500 commands you
have entered, but this limit can be extended by using the HISTSIZE environment variable:
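As a sketch (run in bash; non-interactive shells need the history facility enabled explicitly, and the recorded commands here are just samples):

```shell
# Enable the history facility (interactive shells have it on by default)
set -o history

# Record a couple of sample commands in the history list
history -s "ls -l /etc"
history -s "df -h"

# Display the list with line numbers; !N would re-run entry number N
history

# Raise the number of commands bash keeps in memory
export HISTSIZE=1000
```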
Linux history Command
However, this change will not persist across sessions. In order to
preserve the change in the HISTSIZE variable, we need to edit the .bashrc file by hand:
Important: Keep in mind that these changes will not take effect until we restart our shell
session.
alias command
With no arguments, or with the -p option, alias prints the list of aliases in the form alias
name=value on standard output. When arguments are provided, an alias is defined for each
name whose value is given.
With alias, we can make up our own commands or modify existing ones by including desired
options. For example, suppose we want to alias ls to ls --color=auto so that the output will
display regular files, directories, symlinks, and so on, in different colors:
Note that you can assign any name to your new command and enclose as many
commands as desired between single quotes, but in that case you must separate them with
semicolons, as follows:
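A minimal sketch (the showinfo alias name is made up for illustration; in scripts, alias expansion must be switched on, whereas interactive shells expand aliases by default):

```shell
# In scripts, alias expansion must be enabled explicitly
shopt -s expand_aliases

# Modify an existing command by adding a default option
alias ls='ls --color=auto'

# A made-up command: several commands between single quotes, separated by semicolons
alias showinfo='date; echo "Current dir: $PWD"'

# With no arguments (or -p), alias prints all defined aliases
alias -p
```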
The exit and logout commands both terminate the shell. The exit command terminates any
shell, but the logout command terminates only login shells, that is, those that are launched
automatically when you initiate a text-mode login.
If we are ever in doubt as to what a program does, we can refer to its man page, which can be
invoked using the man command. In addition, there are also man pages for important files
(inittab, fstab, hosts, to name a few), library functions, shells, devices, and other features.
Examples:
1. man uname (print system information, such as kernel name, processor, operating system
type, architecture, and so on).
2. man inittab (init daemon configuration).
Another important source of information is provided by the info command, which is used to
read info documents. These documents often provide more information than the man page. It
is invoked by using the info keyword followed by a command name, such as:
# info ls
# info cut
Make it a habit to use these three methods to look up information about
commands. Pay special attention to the syntax of each of them, which is
explained in detail in the documentation.
Sometimes text files contain tabs, but programs that need to process the files don't cope well
with them. Or maybe we just want to convert tabs into spaces. That's where the expand tool
(provided by the GNU coreutils package) comes in handy.
For example, given the file NumbersList.txt, let's run expand against it, changing tabs to
one space, and display the result on standard output.
The unexpand command performs the reverse operation (converts spaces into tabs).
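A quick sketch (the contents of NumbersList.txt are invented here for demonstration):

```shell
# Create NumbersList.txt with tab-separated fields (sample content)
printf '1\tone\n2\ttwo\n3\tthree\n' > NumbersList.txt

# With tab stops set to 1 (-t1), each tab becomes a single space
expand -t1 NumbersList.txt

# unexpand performs the reverse, converting the blanks back into tabs
expand -t1 NumbersList.txt | unexpand -t1 --all
```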
Display the first lines of a file with head and the last lines with tail
By default, the head command followed by a filename, will display the first 10 lines of the
said file. This behavior can be changed using the -n option and specifying a certain number
of lines.
One of the most interesting features of tail is the possibility of displaying data (last lines) as
the input file grows (tail -f my.log, where my.log is the file under observation). This is
particularly useful when monitoring a log to which data is being continually added.
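Both commands can be sketched on a small generated file:

```shell
# Build a 20-line sample file
seq 20 > sample.txt

# First 10 lines by default; -n changes the count
head sample.txt
head -n 3 sample.txt

# Last 2 lines; with -f, tail would keep printing new lines as they are appended
tail -n 2 sample.txt
```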
The paste command merges files line by line, separating the lines from each file with tabs
(by default), or another delimiter that can be specified (in the following example the fields in
the output are separated by an equal sign).
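A minimal sketch with two invented input files:

```shell
# Two sample files to merge line by line (hypothetical content)
printf 'rootfs\nproc\nsysfs\n' > fs_types.txt
printf '/\n/proc\n/sys\n' > mount_points.txt

# Default delimiter is a tab; -d sets another one (here, an equal sign)
paste -d= fs_types.txt mount_points.txt
```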
Merge Files in Linux
The split command is used to split a file into two (or more) separate files, which are named
according to a prefix of our choosing. The splitting can be defined by size, chunks, or number
of lines, and the resulting files can have numeric or alphabetic suffixes. In the following
example, we will split bash.pdf into files of size 50 KB (-b 50KB), using numeric suffixes (-
d):
You can merge the files to recreate the original file with the following command:
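As a self-contained sketch (a generated file stands in for the real bash.pdf):

```shell
# A stand-in for bash.pdf, roughly 120 KB in size
yes 'sample line of text' | head -c 120000 > bash.pdf

# Split into 50 KB pieces with numeric suffixes: bash.pdf.00, bash.pdf.01, ...
split -b 50KB -d bash.pdf bash.pdf.

# Concatenating the pieces in order recreates the original file
cat bash.pdf.0* > bash_restored.pdf
cmp bash.pdf bash_restored.pdf && echo "files are identical"
```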
Translating characters with tr command
The tr command can be used to translate (change) characters on a one-by-one basis or using
character ranges. In the following example we will use the same file2 as previously, and we
will change:
1. each lowercase o to uppercase O,
2. and then all lowercase characters to uppercase.
# cat file2 | tr o O
# cat file2 | tr [a-z] [A-Z]
The uniq command allows us to report or remove duplicate lines in a file, writing to stdout
by default. We must note that uniq does not detect repeated lines unless they are adjacent.
Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text
files).
By default, sort takes the first field (separated by spaces) as the key field. To specify a different
key field, we need to use the -k option. Please note how the output returned by sort and uniq
changes as we change the key field in the following example:
# cat file3
# sort file3 | uniq
# sort -k2 file3 | uniq
# sort -k3 file3 | uniq
Remove Duplicate Lines in Linux
The cut command extracts portions of input lines (from stdin or files) and displays the result
on standard output, based on number of bytes (-b), characters (-c), or fields (-f).
When using cut based on fields, the default field separator is a tab, but a different separator
can be specified by using the -d option.
# cut -d: -f1,3 /etc/passwd # Extract specific fields: 1 and 3 in this case
# cut -d: -f2-4 /etc/passwd # Extract a range of fields: 2 through 4 in this example
Note that the output of the two examples above was truncated for brevity.
fmt is used to clean up files with a great amount of content or lines, or with varying
degrees of indentation. The new paragraph formatting defaults to no more than 75 characters
per line. You can change this with the -w (width) option, which sets the line length to the
specified number of characters.
For example, let's see what happens when we use fmt to display the /etc/passwd file, setting
the width of each line to 100 characters. Once again, the output has been truncated for brevity.
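The reflow behavior can be sketched on a generated one-line paragraph (the input text is invented):

```shell
# A single long line of sample text
printf '%s ' word{1..40} > paragraph.txt

# Reflow it: -w sets the maximum line width (default is 75)
fmt -w 50 paragraph.txt
```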
pr paginates and displays in columns one or more files for printing. In other words, pr
formats a file to make it look better when printed. For example, the following command:
Shows a listing of all the files found in /etc in a printer-friendly format (3 columns) with a
custom header (indicated by the -h option), and numbered lines (-n).
Summary
In this article we have discussed how to enter and execute commands with the correct syntax
in a shell prompt or terminal, and explained how to find, inspect, and use system
documentation. As simple as it seems, it's a big first step on your way to becoming an
RHCSA.
If you would like to add other commands that you use on a periodic basis and that have
proven useful to fulfill your daily responsibilities, feel free to share them with the world by
using the comment form below. Questions are also welcome. We look forward to hearing
from you!
File and directory management is a critical competence that every system administrator
should possess. This includes the ability to create and delete text files from scratch (the core of
each program's configuration) and directories (where you will organize files and other
directories), and to find out the type of existing files.
The touch command can be used not only to create empty files, but also to update the access
and modification times of existing files.
touch command example
You can use file [filename] to determine a file's type (this will come in handy before
launching your preferred text editor to edit it).
rm command example
As for directories, you can create directories inside existing paths with mkdir [directory]
or create a full path with mkdir -p [/full/path/to/directory].
mkdir command example
When it comes to removing directories, you need to make sure that they're empty before
issuing the rmdir [directory] command, or use the more powerful (handle with care!) rm
-rf [directory]. This last option will recursively force-remove the [directory] and all
its contents, so use it at your own risk.
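The whole sequence can be sketched with a throwaway directory name:

```shell
# Create one directory, then a full path in one go with -p
mkdir demo_dir
mkdir -p demo_dir/level1/level2

# rmdir only succeeds on empty directories
rmdir demo_dir/level1/level2

# rm -rf removes a directory and everything in it -- use with care
rm -rf demo_dir
```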
The command line environment provides two very useful features that allow you to redirect the
input and output of commands from and to files, and to send the output of one command to
another, called redirection and pipelining, respectively.
To understand those two important concepts, we must first understand the three most
important types of I/O (Input and Output) streams (or sequences) of characters, which are in
fact special files, in the *nix sense of the word.
1. Standard input (aka stdin) is by default attached to the keyboard. In other words, the
keyboard is the standard input device to enter commands to the command line.
2. Standard output (aka stdout) is by default attached to the screen, the device that receives
the output of commands and displays it.
3. Standard error (aka stderr) is where the status messages of a command are sent by
default, which is also the screen.
In the following example, the output of ls /var is sent to stdout (the screen), as well as the
result of ls /tecmint. But in the latter case, it is stderr that is shown.
Input and Output Example
To more easily identify these special files, they are each assigned a file descriptor, an abstract
representation that is used to access them. The essential thing to understand is that these files,
just like others, can be redirected. What this means is that you can capture the output from a
file or script and send it as input to another file, command, or script. This will allow you to
store on disk, for example, the output of commands for later processing or analysis.
To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operators are available.
Operator   Effect
>          Redirects standard output to a file. If the destination file exists, it will be overwritten.
&>         Redirects both standard output and standard error to a file; if the specified file exists, it will be overwritten.
<>         The specified file is used for both standard input and standard output.
Remember:
1. Redirection is used to send the output of a command to a file, or to send a file as input to a
command.
2. Pipelining is used to send the output of a command to another command as input.
Example 1: Redirecting the output of a command to a file
There will be times when you will need to iterate over a list of files. To do that, you can first
save that list to a file and then read that file line by line. While it is true that you can iterate
over the output of ls directly, this example serves to illustrate redirection.
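The idea can be sketched as follows (the file name var_list.txt is arbitrary):

```shell
# Save the listing of /var to a file...
ls /var > var_list.txt

# ...then read that file line by line
while read -r item; do
    echo "Processing: $item"
done < var_list.txt
```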
In case we want to prevent both stdout and stderr from being displayed on the screen, we can
redirect both file descriptors to /dev/null. Note how the output changes when the
redirection is implemented for the same command.
# ls /var /tecmint
# ls /var/ /tecmint &> /dev/null
# cat [file(s)]
You can also send a file as input, using the correct redirection operator.
cat command example
If you have a large directory or process listing and want to be able to locate a certain file or
process at a glance, you will want to pipeline the listing to grep.
Note that we use two pipelines in the following example. The first one looks for the required
keyword, while the second one eliminates the actual grep command from the results. This
example lists all the processes associated with the apache user.
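A sketch of both uses (the apache user is hypothetical and may not exist or be running anything on your system, hence the trailing guard):

```shell
# Locate a file at a glance in a long listing
ls -l /etc | grep host

# List processes associated with the apache user; the second grep removes
# the grep command itself, which would otherwise match its own keyword
ps -ef | grep apache | grep -v grep || true   # '|| true': apache may not be running
```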
If you need to transport, backup, or send via email a group of files, you will use an archiving
(or grouping) tool such as tar, typically used with a compression utility like gzip, bzip2, or
xz.
Your choice of a compression tool will be likely defined by the compression speed and rate
of each one. Of these three compression tools, gzip is the oldest and provides the least
compression, bzip2 provides improved compression, and xz is the newest and provides the
best compression. Typically, files compressed with these utilities have .gz, .bz2, or .xz
extensions, respectively.
Operation modifier   Abbreviation   Description
--append             r              Appends files to the end of an archive
--directory dir      C              Changes to directory dir before performing operations
Example 5: Creating a tarball and then compressing it using the three compression utilities
You may want to compare the effectiveness of each tool before deciding to use one or
another. Note that when compressing small files, or just a few files, the results may not show
much difference, but they may give you a glimpse of what each tool has to offer.
tar command examples
Example 6: Preserving original permissions and ownership while archiving and restoring
If you are creating backups of users' home directories, you will want to store the
individual files with their original permissions and ownership instead of changing them to those
of the user account or daemon performing the backup. The following example preserves these
attributes while taking a backup of the contents of the /var/log/httpd directory:
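A sketch of the same idea using an unprivileged stand-in tree (reading the real /var/log/httpd requires root, so the directory and file names below are invented):

```shell
# Build a small stand-in for /var/log/httpd with a non-default mode
mkdir -p httpd_logs
echo "GET / 200" > httpd_logs/access_log
chmod 640 httpd_logs/access_log

# -p (--preserve-permissions) stores and restores the original modes;
# ownership is restored too when extraction is done as root
tar czpf httpd_logs.tar.gz httpd_logs

# Extract into another location and confirm the mode survived
mkdir -p restore
tar xzpf httpd_logs.tar.gz -C restore
stat -c %a restore/httpd_logs/access_log
```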
In Linux, there are two types of links to files: hard links and soft (aka symbolic) links. Since
a hard link represents another name for an existing file and is identified by the same inode, it
then points to the actual data, as opposed to symbolic links, which point to filenames instead.
In addition, hard links do not occupy space on disk, while symbolic links do take a small
amount of space to store the text of the link itself. The downside of hard links is that they can
only be used to reference files within the filesystem where they are located because inodes
are unique inside a filesystem. Symbolic links save the day, in that they point to another file
or directory by name rather than by inode, and therefore can cross filesystem boundaries.
There is no better way to visualize the relation between a file and a hard or symbolic link that
points to it than to create those links. In the following screenshot you will see that the file and
the hard link that points to it share the same inode and both are identified by the same disk
usage of 466 bytes.
On the other hand, creating a symbolic link results in an extra disk usage of 5 bytes. Not that
you're going to run out of storage capacity, but this example is enough to illustrate the
difference between a hard link and a soft link.
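The relationship can be sketched with a throwaway file:

```shell
# A file, a hard link, and a symbolic link to it
echo "some data" > original.txt
ln original.txt hard.txt       # another name for the same inode
ln -s original.txt soft.txt    # stores the target's name instead

# The file and its hard link share one inode; the symlink has its own
stat -c '%n %i' original.txt hard.txt soft.txt

# ls marks the symlink with an 'l' and shows its target
ls -l soft.txt
```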
A typical usage of symbolic links is to reference a versioned file in a Linux system. Suppose
there are several programs that need access to file fooX.Y, which is subject to frequent
version updates (think of a library, for example). Instead of updating every single reference to
fooX.Y every time there's a version update, it is wiser, safer, and faster to have programs
look to a symbolic link named just foo, which in turn points to the actual fooX.Y.
Thus, when X and Y change, you only need to edit the symbolic link foo with a new
destination name instead of tracking every usage of the destination file and updating it.
Summary
In this article we have reviewed some essential file and directory management skills that must
be a part of every system administrator's toolset. Make sure to review the other parts of this
series as well in order to integrate these topics with the content covered in this tutorial.
Feel free to let us know if you have any questions or comments. We are always more than
glad to hear from our readers.
RHCSA Series: How to Manage Users and Groups in RHEL 7 Part 3
Managing a RHEL 7 server, as it is the case with any other Linux server, will require that
you know how to add, edit, suspend, or delete user accounts, and grant users the necessary
permissions to files, directories, and other system resources to perform their assigned tasks.
To add a new user account to a RHEL 7 server, you can run either of the following two
commands as root:
# adduser [new_account]
# useradd [new_account]
When a new user account is added, by default the following operations are performed.
The full account summary is stored in the /etc/passwd file. This file holds a record per
system user account and has the following format (fields are separated by a colon):
1. The fields [username] and [Comment] are self-explanatory.
2. The x in the second field indicates that the account is protected by a shadowed password (stored in
/etc/shadow), which is needed to log on as [username].
3. The [UID] and [GID] fields are integers that represent the User IDentification and the primary
Group IDentification to which [username] belongs, respectively.
Finally,
1. The [Home directory] field shows the absolute path of [username]'s home directory,
and
2. [Default shell] is the shell that is assigned to this user when he or she logs into the
system.
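Since the fields are colon-delimited, cut makes it easy to inspect them; a quick sketch using the root account, which exists on every system:

```shell
# Fields: username:x:UID:GID:Comment:Home directory:Default shell
# Print the username, UID, home directory, and shell of the root account
grep '^root:' /etc/passwd | cut -d: -f1,3,6,7
```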
Another important file that you must become familiar with is /etc/group, where group
information is stored. As it is the case with /etc/passwd, there is one record per line and its
fields are also delimited by a colon:
After adding an account, you can, at any time, edit the user's account information using
usermod, whose basic syntax is:
Read Also:
15 useradd Command Examples
15 usermod Command Examples
If you work for a company that has some kind of policy to enable accounts only for a certain
interval of time, or if you want to grant access for a limited period, you can use the
--expiredate flag followed by a date in YYYY-MM-DD format. To verify that the change
has been applied, you can compare the output of
# chage -l [username]
before and after updating the account expiry date, as shown in the following image.
Change User Account Information
Besides the primary group that is created when a new user account is added to the system, a
user can be added to supplementary groups using the combined -aG (append to groups)
options, followed by a comma-separated list of groups.
EXAMPLE 3: Changing the default location of the user's home directory and/or changing the shell
If for some reason you need to change the default location of the user's home directory (other
than /home/username), you will need to use the -d or --home option, followed by the
absolute path to the new home directory.
If a user wants to use a shell other than bash (for example, sh), which gets assigned by
default, use usermod with the --shell flag, followed by the path to the new shell.
After adding the user to a supplementary group, you can verify that it now actually belongs to
such group(s):
# groups [username]
# id [username]
Adding User to Supplementary Group
To remove a user from a group, omit the --append switch in the command above and list the
groups you want the user to belong to following the --groups flag.
To disable an account, you will need to use either the -L (lowercase L) or the --lock option to
lock a user's password. This will prevent the user from being able to log on.
When you need to re-enable the user so that he can log on to the server again, use the -U or
the --unlock option to unlock a user's password that was previously blocked, as explained in
Example 5 above.
EXAMPLE 7: Deleting a group or a user account
To delete a group, you'll want to use groupdel, whereas to delete a user account you will use
userdel (add the -r switch if you also want to delete the contents of the user's home directory
and mail spool):
If there are files owned by group_name, they will not be deleted, but the group owner will
be set to the GID of the group that was deleted.
The well-known ls command is one of the best friends of any system administrator. When
used with the -l flag, this tool allows you to view a directory's contents in long (or
detailed) format.
However, this command can also be applied to a single file. Either way, the first 10
characters in the output of ls -l represent each file's attributes.
The first char of this 10-character sequence is used to indicate the file type:
The next nine characters of the file attributes, divided into groups of three from left to right, are
called the file mode and indicate the read (r), write (w), and execute (x) permissions granted
to the file's owner, the file's group owner, and the rest of the users (commonly referred to as
the world), respectively.
While the read permission on a file allows it to be opened and read, the same
permission on a directory allows its contents to be listed if the execute permission is also set.
In addition, the execute permission on a file allows it to be handled as a program and run.
File permissions are changed with the chmod command, whose basic syntax is as follows:
where new_mode is either an octal number or an expression that specifies the new
permissions. Feel free to use whichever method works best for you in each case.
The octal number can be calculated from the binary equivalent, which in turn is
obtained from the desired file permissions for the owner of the file, the owner group, and the
world. The presence of a certain permission equals a power of 2 (r=2^2=4, w=2^1=2, x=2^0=1), while its
absence counts as 0. For example:
File Permissions
Please take a minute to compare our previous calculation to the actual output of ls -l after
changing the file's permissions:
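The calculation can be sketched on a throwaway file (backup.sh is just an illustrative name):

```shell
# rw-r--r--  ->  (4+2) (4) (4)  ->  644
touch backup.sh
chmod 644 backup.sh
stat -c '%a %A' backup.sh

# rwxr-xr-x  ->  (4+2+1) (4+1) (4+1)  ->  755
chmod 755 backup.sh
stat -c '%a %A' backup.sh
```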
As a security measure, you should make sure that files with 777 permissions (read, write, and
execute for everyone) are avoided like the plague under normal circumstances. Although we
will explain in a later tutorial how to more effectively locate all the files in your system with
a certain permission set, for now you can combine ls with grep to obtain such information.
In the following example, we will look for files with 777 permissions in the /etc directory
only. Note that we will use pipelining, as explained in Part 2: File and Directory Management
of this RHCSA series:
Find All Files with 777 Permission
Shell scripts, along with some binaries that all users should have access to (not just their
corresponding owner and group), should have the execute bit set accordingly (please note that
we will discuss a special case later):
Note that we can also set a file's mode using an expression that indicates the owner's rights
with the letter u, the group owner's rights with the letter g, and the rest with o. All of these
rights can be represented at the same time with the letter a. Permissions are granted (or
revoked) with the + or - signs, respectively.
A long directory listing also shows the file's owner and its group owner in the first and
second columns, respectively. This feature serves as a first-level access control method to
files in a system:
To change file ownership, you will use the chown command. Note that you can change the
file and group ownership at the same time or separately:
Note that you can change the user or group, or both attributes at the same time, as long
as you don't forget the colon, leaving user or group blank if you want to update the other
attribute, for example:
EXAMPLE 10: Cloning permissions from one file to another
If you would like to clone ownership from one file to another, you can do so using the
--reference flag, as follows:
where the owner and group of ref_file will be assigned to file as well:
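A minimal sketch (ref_file and file are illustrative names; both start out owned by the current user, so the clone is effectively a no-op that is allowed without root):

```shell
# Both files start out owned by the current user (names are illustrative)
touch ref_file file

# Copy owner and group from ref_file onto file
chown --reference=ref_file file

# Both now report the same owner:group
stat -c '%n %U:%G' ref_file file
```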
Should you need to grant access to all the files owned by a certain group inside a specific
directory, you will most likely use the approach of setting the setgid bit for such directory.
When the setgid bit is set, the effective GID of the real user becomes that of the group
owner.
Thus, any user can access a file under the privileges granted to the group owner of such file.
In addition, when the setgid bit is set on a directory, newly created files inherit the same
group as the directory, and newly created subdirectories will also inherit the setgid bit of the
parent directory.
To set the setgid bit in octal form, prepend the number 2 to the current (or desired) basic
permissions.
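For example, giving a shared directory base permissions of 775 plus the setgid bit can be sketched as (the directory name is arbitrary):

```shell
# Base permissions 775, with 2 prepended to turn on the setgid bit
mkdir shared_dir
chmod 2775 shared_dir

# The lowercase 's' in the group triad confirms setgid is set
stat -c '%a %A' shared_dir
```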
Conclusion
A solid knowledge of user and group management, along with standard and special Linux
permissions, when coupled with practice, will allow you to quickly identify and troubleshoot
issues with file permissions in your RHEL 7 server.
I assure you that as you follow the steps outlined in this article and use the system
documentation (as explained in Part 1: Reviewing Essential Commands & System
Documentation of this series) you will master this essential competence of system
administration.
Feel free to let us know if you have any questions or comments using the form below.
RHCSA Series: Editing Text Files with Nano and Vim / Analyzing text
with grep and regexps Part 4
Every system administrator has to deal with text files as part of his daily responsibilities. That
includes editing existing files (most likely configuration files) or creating new ones. It has
been said that if you want to start a holy war in the Linux world, you can ask sysadmins what
their favorite text editor is and why. We are not going to do that in this article, but will
present a few tips that will be helpful when using two of the most widely used text editors in
RHEL 7: nano (due to its simplicity and ease of use, especially for new users), and vim
(due to its many features, which make it more than a simple editor). I am sure that you
can find many more reasons to use one or the other, or perhaps some other editor such as
emacs or pico. It's entirely up to you.
To launch nano, you can just type nano at the command prompt, optionally followed
by a filename (in this case, if the file exists, it will be opened in editing mode). If the file
does not exist, or if we omit the filename, nano will also open in editing mode but will
present a blank screen for us to start typing:
Nano Editor
As you can see in the previous image, nano displays at the bottom of the screen several
functions that are available via the indicated shortcuts (^, aka caret, indicates the Ctrl key).
To name a few of them:
1. Ctrl + G: brings up the help menu with a complete list of functions and descriptions.
2. Ctrl + X: exits the current file. If changes have not been saved, they are discarded.
3. Ctrl + R: lets you choose a file to insert its contents into the present file by specifying
a full path.
To easily navigate the opened file, nano provides the following features:
1. Ctrl + F and Ctrl + B move the cursor forward or backward, whereas Ctrl + P and
Ctrl + N move it up or down one line at a time, respectively, just like the arrow keys.
2. Ctrl + space and Alt + space move the cursor forward and backward one word at a
time.
Finally,
1. Ctrl + _ (underscore), followed by X,Y and Enter, will take you precisely to line X,
column Y, if you want to place the cursor at a specific place in the document. For
example, entering 15,14 will take you to line 15, column 14 in the current document.
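As a side note, nano can also place the cursor directly when opening a file from the command line; to open at the same position you could run the following (the file name notes.txt is illustrative):

```
nano +15,14 notes.txt
```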
If you can recall your early Linux days, especially if you came from Windows, you will
probably agree that starting off with nano is the best way to go for a new user.
Vim is an improved version of vi, a famous text editor in Linux that is available on all
POSIX-compliant *nix systems, such as RHEL 7. If you have the chance and can install vim,
go ahead; if not, most (if not all) of the tips given in this article should also work.
1. Command mode will allow you to browse through the file and enter commands,
which are brief and case-sensitive combinations of one or more letters. If you need to
repeat one of them a certain number of times, you can prefix it with a number (there
are only a few exceptions to this rule). For example, yy (or Y, short for yank) copies
the entire current line, whereas 4yy (or 4Y) copies the entire current line along with
the next three lines (4 lines in total).
2. In ex mode, you can manipulate files (including saving the current file and running
outside programs or commands). To enter ex mode, type a colon (:) while in
command mode (in other words, Esc, then :), directly followed by the name of the
ex-mode command that you want to use.
3. In insert mode, which is accessed by typing the letter i, we simply enter text. Most
keystrokes result in text appearing on the screen.
4. We can always enter command mode (regardless of the mode we're working in) by
pressing the Esc key.
Let's see how we can perform the same operations that we outlined for nano in the previous
section, but now with vim. Don't forget to hit the Enter key to confirm the vim command!
To access vim's full manual from the command line, type :help while in command mode and
then press Enter:
vim Editor Help Menu
The upper section presents an index list of contents, with defined sections dedicated to
specific topics about vim. To navigate to a section, place the cursor over it and press Ctrl + ]
(closing square bracket). Note that the bottom section displays the current file.
1. To save changes made to a file, run any of the following commands from command mode
and it will do the trick:
:wq!
:x!
ZZ (yes, double Z without the colon at the beginning)
2. To exit discarding changes, use :q!. This command will also allow you to exit the help
menu described above, and return to the current file in command mode.
5. Paste lines that were previously cut or copied: press the P key while in command mode.
:r filename
Insert Content of File in vi Editor
:r! command
For example, to insert the date and time in the line below the current position of the cursor:
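The command itself was lost with an image; the classic example for this purpose is:

```
:r! date
```

which runs the external date command and inserts its output below the cursor.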
In another article (Part 2 of the LFCS series), I explained in greater detail the keyboard
shortcuts and functions available in vim. You may want to refer to that tutorial for further
examples on how to use this powerful text editor.
By now you have learned how to create and edit files using nano or vim. Say you become a
text editor ninja, so to speak; now what? Among other things, you will also need to know
how to search for regular expressions inside text.
1. The simplest regular expression is an alphanumeric string (e.g., the word svm) or two
(when two are present, you can use the | (OR) operator):
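The command was shown only as an image in the original; the classic way to run this check is the following (a reconstruction, with a fallback message added so the command succeeds on hosts without these flags):

```shell
# -E enables extended regex; the | operator matches either string
grep -E 'svm|vmx' /proc/cpuinfo || echo "no virtualization flags found"
```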
The presence of either of those two strings indicates that your processor supports
virtualization:
2. A second kind of regular expression is a range list, enclosed between square brackets.
For example, c[aeiou]t matches the strings cat, cet, cit, cot, and cut, whereas [a-z] and
[0-9] match any lowercase letter or decimal digit, respectively. If you want to repeat the
regular expression a certain number of times (say X), type {X} immediately following the
regexp. For example, let's extract the UUIDs of storage devices from /etc/fstab:
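The listing was an image in the original; assembling the pieces described in the next paragraph, the command would look like this (a reconstruction, with a fallback message so it succeeds even when fstab has no UUID entries):

```shell
# -E: extended regex, -o: print only the matched part of each line
grep -Eo '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' /etc/fstab || echo "no UUIDs found"
```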
The parentheses, the {4} quantifier, and the hyphen indicate that the next sequence is a 4-
character long hexadecimal string, and the quantifier that follows ({3}) denotes that the
expression should be repeated 3 times.
Finally, the last sequence of 12-character long hexadecimal string in the UUID is retrieved
with [0-9a-f]{12}, and the -o option prints only the matched (non-empty) parts of the
matching line in /etc/fstab.
[[:space:]] Any whitespace
For example, we may be interested in finding out which UIDs and GIDs (refer to Part 2 of
this series to refresh your memory) are in use for real users that have been added to our
system. Thus, we will search for sequences of 4 digits in /etc/passwd:
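The command was an image in the original; a reconstruction using a POSIX character class (with a fallback message so it succeeds even on systems with no 4-digit IDs):

```shell
# [[:digit:]] is the POSIX class for 0-9; {4} requires four in a row
grep -E '[[:digit:]]{4}' /etc/passwd || echo "no matches"
```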
The above example may not be the best real-world use case for regular expressions, but it
clearly illustrates how to use POSIX character classes to analyze text with grep.
Conclusion
In this article we have provided some tips to make the most of nano and vim, two text editors
for command-line users. Both tools are supported by extensive documentation, which you
can consult on their respective official web sites (links given below) and using the
suggestions given in Part 1 of this series.
Reference Links
http://www.nano-editor.org/
http://www.vim.org/
Linux Boot Process
1. the same basic principles apply, with perhaps minor modifications, to other Linux
distributions as well, and
2. the following description is not intended to represent an exhaustive explanation of the boot
process, but only the fundamentals.
1. The POST (Power On Self Test) initializes and performs hardware checks.
2. When the POST finishes, the system control is passed to the first stage boot loader, which
is stored on either the boot sector of one of the hard disks (for older systems using BIOS and
MBR), or a dedicated (U)EFI partition.
3. The first stage boot loader then loads the second stage boot loader, usually GRUB
(GRand Unified Boot Loader), which resides inside /boot and which in turn loads the kernel
and the initial RAM-based file system (also known as initramfs, which contains programs
and binary files that perform the actions needed to ultimately mount the actual root
filesystem).
4. We are presented with a splash screen that allows us to choose an operating system and
kernel to boot:
Boot Menu Screen
5. The kernel sets up the hardware attached to the system and, once the root filesystem has
been mounted, launches the process with PID 1, which in turn will initialize other processes
and present us with a login prompt.
Note that if we wish to do so at a later time, we can examine the specifics of this process
using the dmesg command and filtering its output using the tools that we have explained in
previous articles of this series.
In the example above, we used the well-known ps command to display a list of current
processes whose parent process (or in other words, the process that started them) is systemd
(the system and service manager that most modern Linux distributions have switched to)
during system startup:
# ps -o ppid,pid,uname,comm --ppid=1
Remember that the -o flag (short for format) allows you to present the output of ps in a
customized format to suit your needs using the keywords specified in the STANDARD
FORMAT SPECIFIERS section in man ps.
Another case in which you will want to define the output of ps instead of going with the
default is when you need to find processes that are causing a significant CPU and / or
memory load, and sort them accordingly:
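The listing was an image in the original; a typical invocation (a sketch, assuming you want the top memory consumers first) might be:

```shell
# Sort all processes by memory usage, highest first; show the header plus top five
ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%mem | head -n 6
```

Swap `--sort=-%cpu` in to rank by CPU usage instead.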
An Introduction to SystemD
Few decisions in the Linux world have caused more controversies than the adoption of
systemd by major Linux distributions. Systemd's advocates cite the following as its main
advantages:
1. Systemd allows more processing to be done in parallel during system startup (as opposed
to older SysVinit, which always tends to be slower because it starts processes one by one,
checks if one depends on another, and then waits for daemons to launch so more services can
start), and
2. It provides dynamic resource management in a running system. Thus, services are started
when needed (to avoid consuming system resources if they are not being used) instead of
being launched without a valid reason during boot.
Systemd is controlled by the systemctl utility. If you come from a SysVinit background,
chances are you will be familiar with:
1. the service tool, which, in those older systems, was used to manage SysVinit scripts, and
2. the chkconfig utility, which served the purpose of updating and querying runlevel
information for system services.
3. shutdown, which you must have used several times to either restart or halt a running
system.
The following table shows the similarities between the use of these legacy tools and
systemctl:
Legacy tool            Systemctl equivalent                        Description
service --status-all   systemctl                                   Displays the status of all current services
chkconfig --list       systemctl list-unit-files --type=service    Displays all services and tells whether they are enabled or disabled
shutdown -h now        systemctl poweroff                          Power off the machine (halt)
shutdown -r now        systemctl reboot                            Reboot the system
Systemd also introduced the concepts of units (which can be a service, a mount point, a
device, or a network socket) and targets (which is how systemd manages to start several
related processes at the same time, and can be considered, though not equal, as the
equivalent of runlevels in SysVinit-based systems).
Summing Up
Other tasks related with process management include, but may not be limited to, the ability
to:
1. Adjust the execution priority (as far as the use of system resources is concerned) of a
process. This is accomplished through the renice utility, which alters the scheduling priority
of one or more running processes. In simple terms, the scheduling priority is a feature that
allows the kernel (in versions >= 2.6) to allocate system resources according to the assigned
execution priority (aka niceness, in a range from -20 through 19) of a given process.
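The generic command that the next paragraph refers to was lost with a screenshot; it presumably followed renice's usual form:

```
# renice [-n] priority identifier(s)
```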
In the generic command above, the first argument is the priority value to be used, whereas the
other argument can be interpreted as process IDs (which is the default setting), process group
IDs, user IDs, or user names. A normal user (other than root) can only modify the scheduling
priority of a process he or she owns, and only increase the niceness level (which means
taking up less system resources).
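A minimal runnable sketch of the above (the background sleep job is just a throwaway process to demonstrate on):

```shell
sleep 30 &                 # a throwaway background job to play with
pid=$!
renice -n 10 -p "$pid"     # raise its niceness to 10 (lower priority)
ps -o ni= -p "$pid"        # verify the new niceness value
kill "$pid"                # clean up
```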
In more precise terms, killing a process entails sending it a signal to either finish its
execution gracefully (SIGTERM=15) or immediately (SIGKILL=9) through the kill or pkill
commands.
The difference between these two tools is that the former is used to terminate a specific
process or a process group altogether, while the latter allows you to do the same based on
name and other attributes.
In addition, pkill comes bundled with pgrep, which shows you the PIDs that will be affected
should pkill be used. For example, before running:
# pkill -u gacanepa
It may be useful to view at a glance which are the PIDs owned by gacanepa:
# pgrep -l -u gacanepa
By default, both kill and pkill send the SIGTERM signal to the process. As we mentioned
above, this signal can be caught or ignored (while the process finishes its execution, or for
good), so when you seriously need to stop a running process, you will need to specify the
SIGKILL signal on the command line:
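A small runnable sketch (using a throwaway sleep job; the same applies to pkill -9 with a user or process name):

```shell
sleep 60 &
pid=$!
kill -9 "$pid"             # SIGKILL cannot be caught or ignored
wait "$pid"
echo "exit status: $?"     # 128 + 9 = 137, confirming death by SIGKILL
```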
Conclusion
In this article we have explained the basics of the boot process in a RHEL 7 system, and
analyzed some of the tools that are available to help you with managing processes using
common utilities and systemd-specific commands.
Note that this list is not intended to cover all the bells and whistles of this topic, so feel free to
add your own preferred tools and commands to this article using the comment form below.
Questions and other comments are also welcome.
RHCSA: Configure and Encrypt System Storage Part 6
Please note that we will present this topic in this article but will continue its description and
usage in the next one (Part 7) due to the vastness of the subject.
In RHEL 7, parted is the default utility to work with partitions, and will allow you to:
It is recommended that before attempting the creation of a new partition or the modification
of an existing one, you ensure that none of the partitions on the device are in use
(umount /dev/partition), and if you're using part of the device as swap, that you
disable it (swapoff -v /dev/partition) during the process.
The easiest way to do this is to boot RHEL in rescue mode using installation media such
as a RHEL 7 installation DVD or USB (Troubleshooting: Rescue a Red Hat Enterprise
Linux system) and select Skip when you're prompted to choose an option to mount the
existing Linux installation. You will then be presented with a command prompt where you
can start typing the same commands shown below during the creation of an ordinary
partition on a physical device that is not being used.
RHEL 7 Rescue Mode
# parted /dev/sdb
Where /dev/sdb is the device on which you will create the new partition; next, type print to
display the current drive's partition table:
As you can see, in this example we are using a virtual drive of 5 GB. We will now proceed to
create a 4 GB primary partition and then format it with the xfs filesystem, which is the
default in RHEL 7.
You can choose from a variety of file systems. You will need to manually create the partition
with mkpart and then format it with mkfs.fstype as usual because mkpart does not support
many modern filesystems out-of-the-box.
In the following example we will set a label for the device and then create a primary partition
(p) on /dev/sdb, which starts at 0% of the device and ends at 4000 MB (4 GB):
Next, we will format the partition as xfs and print the partition table again to verify that
changes were applied:
# mkfs.xfs /dev/sdb1
# parted /dev/sdb print
Format Partition as XFS Filesystem
For older filesystems, you could use the resize command in parted to resize a partition.
Unfortunately, this only applies to ext2, fat16, fat32, hfs, linux-swap, and reiserfs (if
libreiserfs is installed).
Thus, the only way to resize a partition is by deleting it and creating it again (so make sure
you have a good backup of your data!). No wonder the default partitioning scheme in RHEL
7 is based on LVM.
Remove or Delete Partition
Once a disk has been partitioned, it can be difficult or risky to change the partition sizes. For
that reason, if we plan on resizing the partitions on our system, we should consider the
possibility of using LVM instead of the classic partitioning system, where several physical
devices can form a volume group that will host a defined number of logical volumes, which
can be expanded or reduced without any hassle.
In simple terms, you may find the following diagram useful to remember the basic
architecture of LVM.
Basic Architecture of LVM
Follow these steps in order to set up LVM using classic volume management tools. Since you
can expand this topic reading the LVM series on this site, I will only outline the basic steps to
set up LVM, and then compare them to implementing the same functionality with SSM.
Note that we will use the whole disks /dev/sdb and /dev/sdc as PVs (Physical Volumes),
but it's entirely up to you if you want to do the same.
1. Create partitions /dev/sdb1 and /dev/sdc1 using 100% of the available disk space in
/dev/sdb and /dev/sdc:
Create New Partitions
# pvcreate /dev/sdb1
# pvcreate /dev/sdc1
Remember that you can use pvdisplay /dev/sd{b,c}1 to show information about the newly
created PVs.
Then create a volume group named tecmint_vg on top of both PVs with
vgcreate tecmint_vg /dev/sd{b,c}1. Remember that you can use vgdisplay tecmint_vg
to show information about the newly created VG.
# lvcreate -L 3G -n vol01_docs tecmint_vg        [vol01_docs: 3 GB]
# lvcreate -L 1G -n vol02_logs tecmint_vg        [vol02_logs: 1 GB]
# lvcreate -l 100%FREE -n vol03_homes tecmint_vg [vol03_homes: ~6 GB]
Remember that you can use lvdisplay tecmint_vg to show information about the newly
created LVs on top of VG tecmint_vg.
5. Format each of the logical volumes with xfs (do NOT use xfs if you're planning on
shrinking volumes later!):
# mkfs.xfs /dev/tecmint_vg/vol01_docs
# mkfs.xfs /dev/tecmint_vg/vol02_logs
# mkfs.xfs /dev/tecmint_vg/vol03_homes
7. Now we will reverse the LVM implementation and remove the LVs, the VG, and the PVs:
# lvremove /dev/tecmint_vg/vol01_docs
# lvremove /dev/tecmint_vg/vol02_logs
# lvremove /dev/tecmint_vg/vol03_homes
# vgremove /dev/tecmint_vg
# pvremove /dev/sd{b,c}1
8. Now let's install SSM and see how to perform the above in ONLY ONE STEP!
1. initialize block devices as physical volumes
2. create a volume group
3. create logical volumes
4. format LVs, and
5. mount them using only one command
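The steps above collapse into one command. A sketch of one possible invocation (an assumption based on ssm's documented -s size, -n name, --fstype, and -p pool options; the LV name and mount point are illustrative):

```
# ssm create -s 1G --fstype xfs -n vol02_logs -p tecmint_vg /dev/sd{b,c}1 /mnt/logs
```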
9. We can now display the information about PVs, VGs, or LVs, respectively, as follows:
10. As we already know, one of the distinguishing features of LVM is the possibility to resize
(expand or decrease) logical volumes without downtime.
Say we are running out of space in vol02_logs but have plenty of space in vol03_homes. We
will resize vol03_homes to 4 GB and expand vol02_logs to use the remaining space:
Run ssm list pool again and take note of the free space in tecmint_vg:
Check Volume Size
Then do:
Note that the plus sign after the -s flag indicates that the specified value should be added to
the present value.
11. Removing logical volumes and volume groups is much easier with ssm as well. A simple
ssm remove tecmint_vg will return a prompt asking you to confirm the deletion of the VG
and the LVs it contains:
SSM also provides system administrators with the capability of managing encryption for new
or existing volumes. You will need the cryptsetup package installed first:
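The installation command was likely lost with a screenshot; on RHEL 7 it would be:

```
# yum update && yum install cryptsetup
```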
Then issue the following command to create an encrypted volume. You will be prompted to
enter a passphrase to maximize security:
Our next task consists of adding the corresponding entries in /etc/fstab in order for those
logical volumes to be available on boot. Rather than using the device identifier
(/dev/something), we will use each LV's UUID (so that our devices will still be uniquely
identified even if we add other logical volumes or devices), which we can find out with the
blkid utility:
# blkid -o value -s UUID /dev/tecmint_vg/vol03_homes
In our case:
Next, create the /etc/crypttab file with the following contents (change the UUIDs for the
ones that apply to your setup):
Now reboot (systemctl reboot) and you will be prompted to enter the passphrase for each
LV. Afterwards you can confirm that the mount operation was successful by checking the
corresponding mount points:
Verify Logical Volume Mount Points
Conclusion
In this tutorial we have started to explore how to set up and configure system storage using
classic volume management tools and SSM, which also integrates filesystem and encryption
capabilities in one package. This makes SSM an invaluable tool for any sysadmin.
If you have any questions or comments, feel free to use the form below to get in touch with
us!
RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba
/ NFS Shares Part 7
In the last article (RHCSA series Part 6) we started explaining how to set up and configure
local system storage using parted and ssm.
We also discussed how to create and mount encrypted volumes with a password during
system boot. In addition, we warned you to avoid performing critical storage management
operations on mounted filesystems. With that in mind, we will now review the most used file
system formats in Red Hat Enterprise Linux 7, and then proceed to cover the topics of
mounting, using, and unmounting network filesystems (CIFS and NFS), both manually and
automatically, along with the implementation of access control lists for your system.
Prerequisites
Before proceeding further, please make sure you have a Samba server and a NFS server
available (note that NFSv2 is no longer supported in RHEL 7).
Throughout this guide we will use a machine with IP 192.168.0.10 with both services running
on it as the server, and a RHEL 7 box as the client with IP address 192.168.0.18. Later in the
article we will tell you which packages you need to install on the client.
Beginning with RHEL 7, XFS has been introduced as the default file system for all
architectures due to its high performance and scalability. It currently supports a maximum
filesystem size of 500 TB as per the latest tests performed by Red Hat and its partners for
mainstream hardware.
Also, XFS enables user_xattr (extended user attributes) and acl (POSIX access control lists)
as default mount options, unlike ext3 or ext4 (ext2 is considered deprecated as of RHEL 7),
which means that you don't need to specify those options explicitly either on the command
line or in /etc/fstab when mounting an XFS filesystem (if you want to disable such options in
this last case, you have to explicitly use no_acl and no_user_xattr).
Keep in mind that the extended user attributes can be assigned to files and directories for
storing arbitrary additional information such as the mime type, character set or encoding of a
file, whereas the access permissions for user attributes are defined by the regular file
permission bits.
Every system administrator, whether beginner or expert, is well acquainted with regular
access permissions on files and directories, which specify certain privileges (read, write, and
execute) for the owner, the group, and the world (all others). However, feel free to refer to
Part 3 of the RHCSA series if you need to refresh your memory a little bit.
However, since the standard ugo/rwx set does not allow you to configure different
permissions for different users, ACLs were introduced in order to define more detailed
access rights for files and directories than those specified by regular permissions.
In fact, ACL-defined permissions are a superset of the permissions specified by the file
permission bits. Let's see how all of this is applied in the real world.
1. There are two types of ACLs: access ACLs, which can be applied to either a specific file
or a directory, and default ACLs, which can only be applied to a directory. If files contained
therein do not have an ACL set, they inherit the default ACL of their parent directory.
2. To begin, ACLs can be configured per user, per group, or for a user not in the owning
group of a file.
3. ACLs are set (and removed) using setfacl, with either the -m or -x options, respectively.
For example, let us create a group named tecmint and add users johndoe and davenull to it:
# groupadd tecmint
# useradd johndoe
# useradd davenull
# usermod -a -G tecmint johndoe
# usermod -a -G tecmint davenull
And let's verify that both users belong to supplementary group tecmint:
# id johndoe
# id davenull
Verify Users
Let's now create a directory called playground within /mnt, and a file named testfile.txt
inside. We will set the group owner to tecmint and change its default ugo/rwx
permissions to 770 (read, write, and execute permissions granted to both the owner and
the group owner of the file):
# mkdir /mnt/playground
# touch /mnt/playground/testfile.txt
# chmod 770 /mnt/playground/testfile.txt
Then switch user to johndoe and davenull, in that order, and write to the file:
So far so good. Now let's have user gacanepa write to the file; the write operation will fail,
which was to be expected.
But what if we actually need user gacanepa (who is not a member of group tecmint) to have
write permissions on /mnt/playground/testfile.txt? The first thing that may come to your
mind is adding that user account to group tecmint. But that will give him write permissions
on ALL files where the write bit is set for the group, and we don't want that. We only want
him to be able to write to /mnt/playground/testfile.txt.
# touch /mnt/playground/testfile.txt
# chown :tecmint /mnt/playground/testfile.txt
# chmod 770 /mnt/playground/testfile.txt
# su johndoe
$ echo "My name is John Doe" > /mnt/playground/testfile.txt
$ su davenull
$ echo "My name is Dave Null" >> /mnt/playground/testfile.txt
$ su gacanepa
$ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt
Run as root,
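a setfacl command along these lines (the original showed it only as an image, so this is a reconstruction based on the surrounding description):

```
# setfacl -m u:gacanepa:rw /mnt/playground/testfile.txt
```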
and you'll have successfully added an ACL that allows gacanepa to write to the test file.
Then switch to user gacanepa and try to write to the file again:
# getfacl /mnt/playground/testfile.txt
To set a default ACL on a directory (whose contents will inherit it unless overridden), add
d: before the rule and specify a directory instead of a file name:
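A sketch of such a default rule (the original listing was an image; here o:r grants read access to "others"):

```
# setfacl -m d:o:r /mnt/playground
# getfacl /mnt/playground
```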
The ACL above will allow users not in the owner group to have read access to the future
contents of the /mnt/playground directory. Note the difference in the output of getfacl
/mnt/playground before and after the change:
Chapter 20 in the official RHEL 7 Storage Administration Guide provides more ACL
examples, and I highly recommend you take a look at it and have it handy as reference.
To show the list of NFS shares available on your server, you can use the showmount
command with the -e option, followed by the machine name or its IP address. This tool is
included in the nfs-utils package:
Then do:
# showmount -e 192.168.0.10
and you will get a list of the available NFS shares on 192.168.0.10:
To mount NFS network shares on the local client using the command line on demand, use the
following syntax:
If you get the following error message: "Job for rpc-statd.service failed. See systemctl
status rpc-statd.service and journalctl -xn for details.", make sure the rpcbind service
is enabled and started on your system first:
and then reboot. That should do the trick and you will be able to mount your NFS share as
explained earlier. If you need to mount the NFS share automatically on system boot, add a
valid entry to the /etc/fstab file:
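A representative /etc/fstab line (the entry itself was an image in the original; the share name and mount point are illustrative):

```
192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0
```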
Samba represents the tool of choice to make a network share available in a network with
*nix and Windows machines. To show the Samba shares that are available, use the smbclient
command with the -L flag, followed by the machine name or its IP address. This tool is
included in the samba-client package:
# smbclient -L 192.168.0.10
To mount Samba network shares on the local client you will first need to install the cifs-utils
package:
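The commands were lost with images; a reconstruction (the share name and mount point are illustrative, and the credentials file path assumes the hidden file described next):

```
# yum update && yum install cifs-utils
# mount -t cifs -o credentials=/root/.smbcredentials //192.168.0.10/samba_share /mnt/samba
```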
where smbcredentials:
username=gacanepa
password=XXXXXX
is a hidden file inside root's home (/root/) with permissions set to 600, so that no one other
than the owner of the file can read or write to it.
Please note that samba_share is the name of the Samba share as returned by
smbclient -L remote_host, as shown above.
Now, if you need the Samba share to be available automatically on system boot, add a valid
entry to the /etc/fstab file as follows:
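A representative /etc/fstab line (the entry was an image in the original; paths are illustrative):

```
//192.168.0.10/samba_share /mnt/samba cifs credentials=/root/.smbcredentials,defaults 0 0
```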
Conclusion
In this article we have explained how to set up ACLs in Linux, and discussed how to mount
CIFS and NFS network shares in a RHEL 7 client.
I recommend you to practice these concepts and even mix them (go ahead and try to set
ACLs in mounted network shares) until you feel comfortable. If you have questions or
comments feel free to use the form below to contact us anytime. Also, feel free to share this
article through your social networks.
RHCSA Series: Securing SSH, Setting Hostname and Enabling Network
Services Part 8
As a system administrator you will often have to log on to remote systems to perform a
variety of administration tasks using a terminal emulator. You will rarely sit in front of a real
(physical) terminal, so you need to set up a way to log on remotely to the machines that you
will be asked to manage.
In fact, that may be the last thing you do in front of a physical terminal. For security reasons,
using Telnet for this purpose is not a good idea, as all traffic travels over the wire
unencrypted, in plain text.
In addition, in this article we will also review how to configure network services to start
automatically at boot and learn how to set up network and hostname resolution statically or
dynamically.
For you to be able to log on remotely to a RHEL 7 box using SSH, you will have to install
the openssh, openssh-clients and openssh-servers packages. The following command will
install not only the remote login program, but also the secure file transfer tool and the
remote file copy utility:
Note that it's a good idea to install the server counterparts, as you may want to use the same
machine as both client and server at some point or another.
After installation, there are a couple of basic things that you need to take into account if you
want to secure remote access to your SSH server. The following settings should be present in
the /etc/ssh/sshd_config file.
1. Change the port where the sshd daemon will listen on from 22 (the default value) to a high
port (2000 or greater), but first make sure the chosen port is not being used.
For example, let's suppose you choose port 2500. Use netstat to check whether the chosen
port is being used or not:
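The check itself was shown as an image; a typical way to run it (a reconstruction) is:

```
# netstat -npltu | grep 2500
```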
If netstat does not return anything, you can safely use port 2500 for sshd, and you should
change the Port setting in the configuration file as follows:
Port 2500
2. Only allow SSH protocol 2 (protocol 1 is considered insecure):
Protocol 2
3. Configure the authentication timeout to 2 minutes, do not allow root logins, and restrict to
a minimum the list of users who are allowed to log in via ssh:
LoginGraceTime 2m
PermitRootLogin no
AllowUsers gacanepa
PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes
This assumes that you have already created a key pair with your user name on your client
machine and copied it to your server as explained here.
1. Every system administrator should be well acquainted with the following system-wide
configuration files:
For example,
192.168.0.10 laptop laptop.gabrielcanepa.com.ar
2. /etc/resolv.conf specifies the IP addresses of DNS servers and the search domain,
which is used for completing a given query name to a fully qualified domain name when no
domain suffix is supplied.
Under normal circumstances, you don't need to edit this file, as it is managed by the system.
However, should you want to change DNS servers, be advised that you need to stick to the
following structure in each line:
nameserver IP_address
For example,
nameserver 8.8.8.8
3. /etc/host.conf specifies the methods and the order by which hostnames are resolved
within a network. In other words, it tells the name resolver which services to use, and in
what order.
Although this file has several options, the most common and basic setup includes a line as
follows:
order bind,hosts
Which indicates that the resolver should first look in the nameservers specified in
resolv.conf and then to the /etc/hosts file for name resolution.
4. /etc/sysconfig/network contains routing and global host information for all network
interfaces. The following values may be used:
NETWORKING=yes|no
HOSTNAME=value
GATEWAY=XXX.XXX.XXX.XXX
GATEWAYDEV=value
In a machine with multiple NICs, value is the gateway device, such as enp0s3.
Inside the directory mentioned previously, you will find several plain text files named
ifcfg-name, where name identifies the corresponding network interface.
Check Network Link Status
For example:
Network Files
Other than for the loopback interface, you can expect a similar configuration for your NICs.
Note that some variables, if set, will override those present in /etc/sysconfig/network for
this particular interface. Each line is commented for clarification in this article but in the
actual file you should avoid comments:
Setting Hostnames
In Red Hat Enterprise Linux 7, the hostnamectl command is used to both query and set the
system's hostname.
# hostnamectl status
Check System Hostname
For example,
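The example command was lost with an image; setting a hostname looks like this (the name cinderella is illustrative):

```
# hostnamectl set-hostname cinderella
```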
For the changes to take effect you will need to restart the hostnamed daemon (that way you
will not have to log off and on again in order to apply the change):
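That restart command (also lost with an image) would be:

```
# systemctl restart systemd-hostnamed
```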
Set System Hostname
In addition, RHEL 7 also includes the nmcli utility that can be used for the same purpose. To
display the hostname, run:
For example,
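The nmcli invocations were images in the original; querying (and, with an argument, setting) the hostname looks like this (the domain name is illustrative):

```
# nmcli general hostname
# nmcli general hostname rhel7.mydomain.com
```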
To wrap up, let us see how we can ensure that network services are started automatically on
boot. In simple terms, this is done by creating symlinks to certain files specified in the
[Install] section of the service configuration files.
[Install]
WantedBy=basic.target
Alias=dbus-org.fedoraproject.FirewallD1.service
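For instance, given the [Install] section above, enabling the service creates a symlink under the basic.target.wants directory; a sketch:

```shell
# systemctl enable firewalld
# ls -l /etc/systemd/system/basic.target.wants/firewalld.service
```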
Conclusion
In this article we have summarized how to install and secure connections via SSH to a RHEL
server, how to change its name, and finally how to ensure that network services are started on
boot. If you notice that a certain service has failed to start properly, you can use systemctl
status -l [service] and journalctl -xn to troubleshoot it.
Feel free to let us know what you think about this article using the comment form below.
Questions are also welcome. We look forward to hearing from you!
An FTP server is one of the oldest and most commonly used resources (even to this day) for
making files available to clients on a network. Use it only in cases where strong security is not
necessary, since FTP transmits usernames and passwords without encryption.
The web server available in RHEL 7 is version 2.4 of the Apache HTTP Server. As for the
FTP server, we will use the Very Secure FTP Daemon (vsftpd) to establish connections
secured by TLS.
In this article we will explain how to install, configure, and secure a web server and an FTP
server in RHEL 7.
In this guide we will use a RHEL 7 server with a static IP address of 192.168.0.18/24. To
install Apache and VSFTPD, run the following command:
# yum update && yum install httpd vsftpd
When the installation completes, both services will be disabled initially, so we need to start
them manually for the time being and enable them to start automatically beginning with the
next boot:
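A sketch of those two steps with systemctl:

```shell
# systemctl start httpd vsftpd
# systemctl enable httpd vsftpd
```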
In addition, we have to open ports 80 and 21, where the web and ftp daemons are listening,
respectively, in order to allow access to those services from the outside:
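A sketch using firewall-cmd, applied both to the running firewall and permanently (the public zone is an assumption; adjust it to the zone your interface belongs to):

```shell
# firewall-cmd --zone=public --add-port=80/tcp --add-port=21/tcp
# firewall-cmd --zone=public --add-port=80/tcp --add-port=21/tcp --permanent
```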
To confirm that the web server is working properly, fire up your browser and enter the IP of
the server. You should see the test page:
As for the ftp server, we will have to configure it further, which we will do in a minute,
before confirming that it's working as expected.
Although the default configuration should be sufficient for most cases, it's a good idea to
become familiar with all the available options as described in the official documentation.
As always, make a backup copy of the main configuration file before editing it:
Then open it with your preferred text editor and look for the following variables:
1. ServerRoot: the directory where the server's configuration, error, and log files are
kept.
2. Listen: instructs Apache to listen on specific IP addresses and/or ports.
3. Include: allows the inclusion of other configuration files, which must exist.
Otherwise, the server will fail, as opposed to the IncludeOptional directive, which is
silently ignored if the specified configuration files do not exist.
4. User and Group: the name of the user/group to run the httpd service as.
5. DocumentRoot: The directory out of which Apache will serve your documents. By
default, all requests are taken from this directory, but symbolic links and aliases may
be used to point to other locations.
6. ServerName: this directive sets the hostname (or IP address) and port that the server
uses to identify itself.
The first security measure will consist of creating a dedicated user and group (e.g.
tecmint/tecmint) to run the web server as, and changing the default port to a higher one (9000
in this case):
ServerRoot "/etc/httpd"
Listen 192.168.0.18:9000
User tecmint
Group tecmint
DocumentRoot "/var/www/html"
ServerName 192.168.0.18:9000
# apachectl configtest
and don't forget to enable the new port (and disable the old one) in the firewall:
Note that, due to SELinux policies, you can only use the ports returned by semanage port -l | grep -w http_port_t.
If you want to use another port (e.g. TCP port 8100), you will have to add it to the SELinux
port context for the httpd service:
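A sketch with semanage (provided by the policycoreutils-python package on RHEL 7), adding the port and then verifying it was recorded:

```shell
# semanage port -a -t http_port_t -p tcp 8100
# semanage port -l | grep -w http_port_t
```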
Add Apache Port to SELinux Policies
2. Disable directory listing in order to prevent the browser from displaying the contents of a
directory if there is no index.html present in that directory.
Edit /etc/httpd/conf/httpd.conf (and the configuration files for virtual hosts, if any) and
make sure that the Options directive, both at the top and at Directory block levels, is set to
None:
Options None
3. Hide information about the web server and the operating system in HTTP responses. Edit
/etc/httpd/conf/httpd.conf as follows:
ServerTokens Prod
ServerSignature Off
Now you are ready to start serving content from your /var/www/html directory.
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
chroot_local_user=YES
allow_writeable_chroot=YES
listen=NO
listen_ipv6=YES
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
By using chroot_local_user=YES, local users will be (by default) placed in a chrooted jail
in their home directory right after login. This means that local users will not be able to access
any files outside their corresponding home directories.
Finally, to allow ftp to read files in the users' home directories, set the following SELinux
boolean:
# setsebool -P ftp_home_dir on
You can now connect to the ftp server using a client such as Filezilla:
Note that the /var/log/xferlog log records downloads and uploads, which correspond with the
above directory listing:
Read Also: Limit FTP Network Bandwidth Used by Applications in a Linux System with
Trickle
Summary
In this tutorial we have explained how to set up a web and an FTP server. Due to the vastness of
the subject, it is not possible to cover all aspects of these topics (e.g. virtual web hosts).
Thus, I recommend you also check other excellent articles on this website about Apache.
RHCSA: Yum Package Management, Cron Job Scheduling and Log Monitoring Part 10
To install a package along with all its dependencies that are not already installed, you will
use:
For example, to install httpd and mlocate (in that order), type:
Note: the letter y in the example above bypasses the confirmation prompts that yum
presents before performing the actual download and installation of the requested programs.
You can leave it out if you want.
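A sketch of the command described above:

```shell
# yum -y install httpd mlocate
```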
By default, yum will install the package with the architecture that matches the OS
architecture, unless overridden by appending the package architecture to its name.
For example, on a 64-bit system, yum install package will install the x86_64 version of
package, whereas yum install package.i686 (if available) will install the 32-bit one.
There will be times when you want to install a package but don't know its exact name. The
search and search all options can look for a certain keyword in the currently enabled
repositories: search matches package names and summaries, whereas search all extends the
search to descriptions and URLs as well.
For example,
will search the installed repositories for packages with the word log in their names and
summaries, whereas
will look for the same keyword in the package description and url fields as well.
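A sketch of both variants, using log as the keyword:

```shell
# yum search log
# yum search all log
```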
Once the search returns a package listing, you may want to display further information about
some of them before installing. That is when the info option will come in handy:
You can regularly check for updates with the following command:
# yum check-update
The above command will return all the installed packages for which an update is available. In
the example shown in the image below, only rhel-7-server-rpms has an update available:
Check For Package Updates
If there are several packages that can be updated, yum update will update all of them at
once.
Now what happens when you know the name of an executable, such as ps2pdf, but don't
know which package provides it? You can find out with yum whatprovides
*/[executable]:
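A sketch, quoting the glob so the shell does not expand it:

```shell
# yum whatprovides "*/ps2pdf"
```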
Now, when it comes to removing a package, you can do so with yum remove package. Easy,
huh? This goes to show that yum is a complete and powerful package manager.
RPM (aka RPM Package Manager, originally Red Hat Package Manager) can also be
used to install or update packages when they come in the form of standalone .rpm packages.
It is often used with the -Uvh flags to indicate that it should install the package if it's not
already present, or attempt to update it if it's installed (-U), producing verbose output (-v)
and a progress bar with hash marks (-h) while the operation is being performed. For
example,
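A sketch, where the .rpm file name is a hypothetical placeholder:

```shell
# rpm -Uvh package-1.0-1.el7.x86_64.rpm
```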
Another typical use of rpm is to produce a list of currently installed packages with rpm
-qa (short for query all):
# rpm -qa
Linux and other Unix-like operating systems include a tool called cron that allows you to
schedule tasks (i.e. commands or shell scripts) to run on a periodic basis. Every minute, cron
checks the /var/spool/cron directory for files named after accounts in /etc/passwd.
When executing commands, any output is mailed to the owner of the crontab (or to the user
specified in the MAILTO environment variable in the /etc/crontab, if it exists).
Crontab files (which are created by typing crontab -e and pressing Enter) have the following
format:
Crontab Entries
Thus, if we want to update the local file database (which is used by locate to find files by
name or pattern) every second day of the month at 2:15 am, we need to add the following
crontab entry:
15 02 2 * * /bin/updatedb
The above crontab entry reads: "Run /bin/updatedb on the second day of the month, every
month of the year, regardless of the day of the week, at 2:15 am". As I'm sure you have
already guessed, the star symbol is used as a wildcard character.
After adding a cron job, you can see that a file named root was added inside /var/spool/cron,
as we mentioned earlier. That file lists all the tasks that the crond daemon should run:
# ls -l /var/spool/cron
In the above image, the current user's crontab can be displayed either using cat
/var/spool/cron/root or,
# crontab -l
If you need to run a task on a more fine-grained basis (for example, twice a day or three times
each month), cron can also help you to do that.
For example, to run /my/script on the 1st and 15th of each month and send any output to
/dev/null, you can add two crontab entries as follows:
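A sketch of the two entries, assuming (hypothetically) a midnight run time:

```shell
# Two separate entries: 1st and 15th of each month at 00:00
0 0 1 * * /my/script > /dev/null 2>&1
0 0 15 * * /my/script > /dev/null 2>&1
```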
But in order for the task to be easier to maintain, you can combine both entries into one:
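Assuming the same hypothetical midnight run time, the combined entry could look like:

```shell
# 1st and 15th of each month at 00:00, in a single line
0 0 1,15 * * /my/script > /dev/null 2>&1
```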
Following the previous example, we can run /my/other/script at 1:30 am on the first day of
the month every three months:
But when you have to repeat a certain task every x minutes, hours, days, or months, you
can divide the corresponding time field by the desired frequency using the */x step syntax.
The following crontab entry has the exact same meaning as the previous one:
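A sketch of both spellings side by side:

```shell
# Explicit month list: 1:30 am on day 1 of January, April, July, and October
30 1 1 1,4,7,10 * /my/other/script
# Step syntax: same schedule, dividing the month field by 3
30 1 1 */3 * /my/other/script
```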
Or perhaps you need to run a certain job at a fixed frequency or right after the system boots.
You can use one of the following strings instead of the five time-and-date fields to indicate the
exact time when you want your job to run:
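The standard cron nicknames are sketched below; each replaces all five time-and-date fields:

```shell
@reboot    /my/script   # once, at startup
@yearly    /my/script   # same as 0 0 1 1 *  (@annually is a synonym)
@monthly   /my/script   # same as 0 0 1 * *
@weekly    /my/script   # same as 0 0 * * 0
@daily     /my/script   # same as 0 0 * * *  (@midnight is a synonym)
@hourly    /my/script   # same as 0 * * * *
```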
System logs are located (and rotated) inside the /var/log directory. According to the Linux
Filesystem Hierarchy Standard, this directory contains miscellaneous log files, which are
written to it or an appropriate subdirectory (such as audit, httpd, or samba in the image
below) by the corresponding daemons during system operation:
# ls /var/log
Other interesting logs are dmesg (contains all messages from the kernel ring buffer), secure
(logs connection attempts that require user authentication), messages (system-wide messages),
and wtmp (records of all user logins and logouts).
Logs are very important in that they allow you to have a glimpse of what is going on at all
times in your system, and what has happened in the past. They represent a priceless tool to
troubleshoot and monitor a Linux server, and thus are often used with the tail -f command
to display events, in real time, as they happen and are recorded in a log.
For example, if you want to display kernel-related events, type the following command:
# tail -f /var/log/dmesg
# tail -f /var/log/httpd/access.log
Summary
If you know how to efficiently manage packages, schedule tasks, and where to look for
information about the current and past operation of your system, you can rest assured that you
will not run into surprises very often. I hope this article has helped you learn or refresh your
knowledge about these basic skills.
Don't hesitate to drop us a line using the contact form below if you have any questions or
comments.
In this article we will review the basics of firewalld, the default dynamic firewall daemon in
Red Hat Enterprise Linux 7, and the iptables service, the legacy firewall service for Linux,
with which most system and network administrators are well acquainted, and which is also
available in RHEL 7.
Under the hood, both firewalld and the iptables service talk to the netfilter framework in the
kernel through the same interface, not surprisingly, the iptables command. However, as
opposed to the iptables service, firewalld can change the settings during normal system
operation without existing connections being lost.
Firewalld should be installed by default in your RHEL system, though it may not be running.
You can verify with the following commands (firewall-config is the user interface
configuration tool):
Check FirewallD Information
and,
On the other hand, the iptables service is not included by default, but can be installed
through the iptables-services package:
Both daemons can be started and enabled to start on boot with the usual systemd commands:
Read Also: Useful Commands to Manage Systemd Services
As for the configuration files, the iptables service uses /etc/sysconfig/iptables (which
will not exist if the package is not installed in your system). On a RHEL 7 box used as a
cluster node, this file looks as follows:
Whereas firewalld stores its configuration across two directories, /usr/lib/firewalld and
/etc/firewalld:
# ls /usr/lib/firewalld /etc/firewalld
FirewallD Configuration
We will examine these configuration files further later in this article, after we add a few rules
here and there. For now it will suffice to remind you that you can always find more
information about both tools with:
# man firewalld.conf
# man firewall-cmd
# man iptables
Other than that, remember to take a look at Reviewing Essential Commands & System
Documentation Part 1 of the current series, where I described several sources where you
can get information about the packages installed on your RHEL 7 system.
You may want to refer to Configure Iptables Firewall Part 8 of the Linux Foundation
Certified Engineer (LFCE) series to refresh your memory about iptables internals before
proceeding further. That way we will be able to jump right into the examples.
TCP ports 80 and 443 are the default ports used by the Apache web server to handle normal
(HTTP) and secure (HTTPS) web traffic. You can allow incoming and outgoing web traffic
through both ports on the enp0s3 interface as follows:
There may be times when you need to block all (or some) type of traffic originating from a
specific network, say 192.168.1.0/24 for example:
will drop all packets coming from the 192.168.1.0/24 network, whereas,
If you use your RHEL 7 box not only as a software firewall, but also as the actual hardware-
based one, so that it sits between two distinct networks, IP forwarding must already be
enabled in your system. If not, you need to edit /etc/sysctl.conf and set the value
of net.ipv4.ip_forward to 1, as follows:
net.ipv4.ip_forward = 1
then save the change, close your text editor and finally run the following command to apply
the change:
# sysctl -p /etc/sysctl.conf
For example, you may have a printer installed at an internal box with IP 192.168.0.10, with
the CUPS service listening on port 631 (both on the print server and on your firewall). In
order to forward print requests from clients on the other side of the firewall, you should add
the following iptables rule:
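A sketch of such a rule, assuming enp0s3 is the interface facing the clients (both the interface name and the exact rule layout are assumptions, not taken from this guide):

```shell
# iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 631 -j DNAT --to-destination 192.168.0.10:631
# iptables -A FORWARD -p tcp -d 192.168.0.10 --dport 631 -j ACCEPT
```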
Please keep in mind that iptables reads its rules sequentially, so make sure the default
policies or later rules do not override those outlined in the examples above.
One of the changes introduced with firewalld is the concept of zones. This concept allows
networks to be separated into different zones according to the level of trust the user has
decided to place on the devices and traffic within each network.
# firewall-cmd --get-active-zones
In the example below, the public zone is active, and the enp0s3 interface has been assigned
to it automatically. To view all the information about a particular zone:
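For instance, for the public zone mentioned above:

```shell
# firewall-cmd --zone=public --list-all
```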
Since you can read more about zones in the RHEL 7 Security guide, we will only list some
specific examples here.
# firewall-cmd --get-services
List All Supported Services
To allow http and https web traffic through the firewall, effective immediately and on
subsequent boots:
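A sketch: the first command changes the running configuration, while the second records the change permanently so it survives reboots (the public zone is an assumption):

```shell
# firewall-cmd --zone=public --add-service=http --add-service=https
# firewall-cmd --zone=public --add-service=http --add-service=https --permanent
```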
If --zone is omitted, the default zone (which you can check with firewall-cmd --get-default-
zone) is used.
To remove the rule, replace the word add with remove in the above commands.
First off, you need to find out if masquerading is enabled for the desired zone:
In the image below, we can see that masquerading is enabled for the external zone, but not
for public:
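You can query both zones with a sketch like this:

```shell
# firewall-cmd --zone=external --query-masquerade
# firewall-cmd --zone=public --query-masquerade
```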
You can find further examples in Part 9 of the RHCSA series, where we explained how to
allow or disable the ports that are usually used by a web server and an FTP server, and how to
change the corresponding rule when the default port for those services is changed. In
addition, you may want to refer to the firewalld wiki for further examples.
Conclusion
In this article we have explained what a firewall is, what are the available services to
implement one in RHEL 7, and provided a few examples that can help you get started with
this task. If you have any comments, suggestions, or questions, feel free to let us know using
the form below. Thank you in advance!
In this article we will show what you need in order to use the kickstart utility so that you can
forget about babysitting servers during the installation process.
Introducing Kickstart and Automated Installations
Kickstart is an automated installation method used primarily by Red Hat Enterprise Linux
(and other Fedora spin-offs, such as CentOS, Oracle Linux, etc.) to execute unattended
operating system installation and configuration. Thus, kickstart installations allow system
administrators to have identical systems, as far as installed package groups and system
configuration are concerned, while sparing them the hassle of having to manually install each
of them.
1. Create a Kickstart file, a plain text file with several predefined configuration options.
2. Make the Kickstart file available on removable media, a hard drive, or a network
location. The client will use the rhel-server-7.0-x86_64-boot.iso file, whereas you will need
to make the full ISO image (rhel-server-7.0-x86_64-dvd.iso) available from a network
resource, such as an HTTP or FTP server (in our present case, we will use another RHEL 7
box with IP 192.168.0.18).
To create a kickstart file, login to your Red Hat Customer Portal account, and use the
Kickstart configuration tool to choose the desired installation options. Read each one of them
carefully before scrolling down, and choose what best fits your needs:
If you specify that the installation should be performed either through HTTP, FTP, or NFS,
make sure the firewall on the server allows those services.
Although you can use the Red Hat online tool to create a kickstart file, you can also create it
manually using the following lines as reference. You will notice, for example, that the
installation process will be in English, using the Latin American keyboard layout and the
America/Argentina/San_Luis time zone:
lang en_US
keyboard la-latin1
timezone America/Argentina/San_Luis --isUtc
rootpw $1$5sOtDvRo$In4KTmX7OmcOW9HUvWtfn0 --iscrypted
#platform x86, AMD64, or Intel EM64T
text
url --url=http://192.168.0.18//kickstart/media
bootloader --location=mbr --append="rhgb quiet crashkernel=auto"
zerombr
clearpart --all --initlabel
autopart
auth --passalgo=sha512 --useshadow
selinux --enforcing
firewall --enabled
firstboot --disable
%packages
@base
@backup-server
@print-server
%end
In the online configuration tool, use 192.168.0.18 for HTTP Server and
/kickstart/tecmint.bin for HTTP Directory in the Installation section after selecting
HTTP as installation source. Finally, click the Download button at the top right corner to
download the kickstart file.
In the kickstart sample file above, you need to pay careful attention to:
url --url=http://192.168.0.18//kickstart/media
That directory is where you need to extract the contents of the DVD or ISO installation
media. Before doing that, we will mount the ISO installation file in /media/rhel as a loop
device:
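A sketch of the mount step, assuming the ISO sits in the current working directory:

```shell
# mkdir -p /media/rhel /var/www/html/kickstart
# mount -o loop rhel-server-7.0-x86_64-dvd.iso /media/rhel
```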
# cp -R /media/rhel /var/www/html/kickstart/media
When you're done, the directory listing and disk usage of /var/www/html/kickstart/media
should look as follows:
Kickstart Media Files
Regardless of how you choose to create the kickstart file, it's always a good idea to check its
syntax before proceeding with the installation. To do that, install the pykickstart package and
run ksvalidator against the file:
# ksvalidator /var/www/html/kickstart/tecmint.bin
If the syntax is correct, you will not get any output, whereas if there's an error in the file, you
will get a warning notice indicating the line where the syntax is not correct or unknown.
To start, boot your client using the rhel-server-7.0-x86_64-boot.iso file. When the initial
screen appears, select Install Red Hat Enterprise Linux 7.0 and press the Tab key to
append the following stanza and press Enter:
inst.ks=http://192.168.0.18/kickstart/tecmint.bin
RHEL Kickstart Installation
When you press Enter, the automated installation will begin, and you will see the list of
packages that are being installed (the number and the names will differ depending on your
choice of programs and package groups):
When the automated process ends, you will be prompted to remove the installation media and
then you will be able to boot into your newly installed system:
Although you can create your kickstart files manually as we mentioned earlier, you should
consider using the recommended approach whenever possible. You can either use the online
configuration tool, or the anaconda-ks.cfg file that is created by the installation process in
root's home directory.
This file actually is a kickstart file, so you may want to install the first box manually with all
the desired options (maybe modify the logical volumes layout or the file system on top of
each one) and then use the resulting anaconda-ks.cfg file to automate the installation of the
rest.
In addition, using the online configuration tool or the anaconda-ks.cfg file to guide future
installations will allow you to perform them using an encrypted root password out of the
box.
Conclusion
Now that you know how to create kickstart files and how to use them to automate the
installation of Red Hat Enterprise Linux 7 servers, you can forget about babysitting the
installation process. This will give you time to do other things, or perhaps some leisure time
if you're lucky.
Either way, let us know what you think about this article using the form below. Questions are
also welcome!
Although necessary as first-level permissions and access control mechanisms, standard file
permissions have some limitations that are addressed by Security Enhanced Linux (SELinux
for short).
One such limitation is that a user can expose a file or directory to a security breach
through a poorly elaborated chmod command, thus causing an unexpected propagation of
access rights. As a result, any process started by that user can do as it pleases with the files
owned by the user, and a malicious or otherwise compromised program could ultimately achieve
root-level access to the entire system.
With those limitations in mind, the United States National Security Agency (NSA) first
devised SELinux, a flexible mandatory access control method, to restrict the ability of
processes to access or perform other operations on system objects (such as files, directories,
network ports, etc.) to the least-permission model, which can be modified later as needed. In a
few words, each element of the system is given only the access required to function.
In RHEL 7, SELinux is incorporated into the kernel itself and is enabled in Enforcing mode
by default. In this article we will explain briefly the basic concepts associated with SELinux
and its operation.
SELinux Modes
1. Enforcing: SELinux denies access based on SELinux policy rules, a set of guidelines that
control the security engine.
2. Permissive: SELinux does not deny access, but denials are logged for actions that would have
been denied if running in enforcing mode.
3. Disabled (self-explanatory).
The getenforce command displays the current mode of SELinux, whereas setenforce
(followed by a 1 or a 0) is used to change the mode to Enforcing or Permissive, respectively,
during the current session only.
In order to achieve persistence across logouts and reboots, you will need to edit the
/etc/selinux/config file and set the SELINUX variable to either enforcing, permissive,
or disabled:
# getenforce
# setenforce 0
# getenforce
# setenforce 1
# getenforce
# cat /etc/selinux/config
Typically you will use setenforce to toggle between SELinux modes (enforcing to permissive
and back) as a first troubleshooting step. If SELinux is currently set to enforcing while
you're experiencing a certain problem, and the problem goes away when you set it to
permissive, you can be confident you're looking at a SELinux permissions issue.
SELinux Contexts
A SELinux context consists of an access control environment where decisions are made
based on SELinux user, role, and type (and optionally a level):
1. A SELinux user complements a regular Linux user account by mapping it to a SELinux user
account, which in turn is used in the SELinux context for processes in that session, in order to
explicitly define their allowed roles and levels.
2. The concept of role acts as an intermediary between domains and SELinux users in that it
defines which process domains and file types can be accessed. This will shield your system
against privilege escalation attacks.
3. A type defines an SELinux file type or an SELinux process domain. Under normal
circumstances, processes are prevented from accessing files that other processes use, and
from accessing other processes; thus access is only allowed if a specific SELinux policy
rule exists that allows it.
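For example, you can display the context of a file, a process, or your own session with the -Z flag understood by several standard tools; the output format is user:role:type:level. A sketch:

```shell
# ls -Z /etc/passwd
# ps -eZ | grep sshd
# id -Z
```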
Let's see how all of that works through the following examples.
In Securing SSH Part 8 we explained that changing the default port that sshd listens on is
one of the first security measures to secure your server against external attacks. Let's edit the
/etc/ssh/sshd_config file and set the port to 9999:
Port 9999
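Restarting the service at this point will reveal the problem; a sketch:

```shell
# systemctl restart sshd
# systemctl status sshd -l
```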
As you can see, sshd has failed to start. But what happened?
A quick look at the system audit log (SELinux denial messages include the string "AVC", so that
they might be easily identified from other messages) shows that sshd was denied permission to
listen on port 9999 because that is a reserved port for the JBoss Management service:
At this point you could disable SELinux (but don't!) as explained earlier and try to start sshd
again, and it should work. However, the semanage utility can tell us what we need to change
in order to be able to start sshd on whatever port we choose without issues.
Run,
to get a list of the ports on which SELinux allows sshd to listen.
Semanage Tool
So let's change the port in /etc/ssh/sshd_config to Port 9998, add the port to the
ssh_port_t context, and then restart the service:
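With Port 9998 set in /etc/ssh/sshd_config, the remaining steps can be sketched as:

```shell
# semanage port -a -t ssh_port_t -p tcp 9998
# systemctl restart sshd
# systemctl is-active sshd
```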
As you can see, the service was started successfully this time. This example illustrates the
fact that SELinux restricts the TCP ports a service may use according to its own internal port
type definitions.
EXAMPLE 2: Allowing httpd to access sendmail
This is an example of SELinux managing one process accessing another process. If you were to
implement mod_security and mod_evasive along with Apache on your RHEL 7 server, you
would need to allow httpd to access sendmail in order to send a mail notification in the wake of
a (D)DoS attack. In the following command, omit the -P flag if you do not want the change to
be persistent across reboots.
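A sketch using the httpd_can_sendmail boolean, followed by a check of its new value:

```shell
# setsebool -P httpd_can_sendmail 1
# getsebool httpd_can_sendmail
```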
As you can tell from the above example, SELinux boolean settings (or just booleans) are
true / false rules embedded into SELinux policies. You can list all the booleans with
semanage boolean -l, and alternatively pipe it to grep in order to filter the output.
EXAMPLE 3: Serving a static site from a directory other than the default one
Suppose you are serving a static website using a different directory than the default one
(/var/www/html), say /websites (this could be the case if you're storing your web files on a
shared network drive, for example, and need to mount it at /websites).
a). Create an index.html file inside /websites with the following contents:
<html>
<h2>SELinux test</h2>
</html>
If you do,
# ls -lZ /websites/index.html
you will see that the index.html file has been labeled with the default_t SELinux type,
which Apache can't access:
c). Browse to http://<web server IP address>, and you should get a 403 Forbidden
HTTP response.
d). Next, change the label of /websites, recursively, to the httpd_sys_content_t type in order
to grant Apache read-only access to that directory and its contents:
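The label change itself can be sketched with semanage fcontext, which records the new default label so that the restorecon command below can apply it:

```shell
# semanage fcontext -a -t httpd_sys_content_t "/websites(/.*)?"
```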
# restorecon -R -v /websites
Now restart Apache and browse to http://<web server IP address> again and you will
see the html file displayed correctly:
Summary
In this article we have gone through the basics of SELinux. Note that due to the vastness of
the subject, a full detailed explanation is not possible in a single article, but we believe that
the principles outlined in this guide will help you to move on to more advanced topics should
you wish to do so.
If I may, let me recommend two essential resources to start with: the NSA SELinux page and
the RHEL 7 SELinux Users and Administrators guide.
RHCSA Series: Setup LDAP Server and Client Authentication Part 14
As we will see, there are several other possible application scenarios, but in this guide we will
focus entirely on LDAP-based authentication. In addition, please keep in mind that due to
the vastness of the subject, we will only cover its basics here, but you can refer to the
documentation outlined in the summary for more in-depth details.
For the same reason, you will note that I have decided to leave out several references to the man
pages of LDAP tools for the sake of brevity, but the corresponding explanations are at your
fingertips (man ldapadd, for example).
If you want, you can use the machine installed in Part 12: Automate RHEL 7 installations
using Kickstart as client.
What is LDAP?
LDAP stands for Lightweight Directory Access Protocol and consists of a set of protocols
that allows a client to access, over a network, centrally stored information (such as a directory
of login shells, absolute paths to home directories, and other typical system user information,
for example) that should be accessible from different places or available to a large number of
end users (another example would be a directory of home addresses and phone numbers of all
employees in a company).
Keeping such (and more) information centrally means it can be more easily maintained and
accessed by everyone who has been granted permissions to use it.
The following diagram offers a simplified view of LDAP, and is described below in
greater detail:
LDAP Diagram
That being said, let's proceed with the server and client installations.
In RHEL 7, LDAP is implemented by OpenLDAP. To install the server and client, use the
following commands, respectively:
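A sketch of those two installations (the package names are the standard RHEL 7 ones; your repository setup may vary):

```shell
# yum update && yum install openldap openldap-servers
# yum update && yum install openldap openldap-clients
```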
Once the installation is complete, there are a few things we need to look at. Unless explicitly
noted, the following steps should be performed on the server alone:
1. Make sure SELinux does not get in the way by enabling the following booleans
persistently, both on the server and the client:
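A sketch, assuming the allow_ypbind and authlogin_nsswitch_use_ldap booleans are the relevant ones for your setup:

```shell
# setsebool -P allow_ypbind on
# setsebool -P authlogin_nsswitch_use_ldap on
```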
2. Enable and start the service:
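A sketch (the OpenLDAP server daemon is called slapd):

```shell
# systemctl enable slapd
# systemctl start slapd
```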
Keep in mind that you can also disable, restart, or stop the service with systemctl as well:
3. Since the slapd service runs as the ldap user (which you can verify with ps -e -o
pid,uname,comm | grep slapd), that user should own the /var/lib/ldap directory in order
for the server to be able to modify entries created by administrative tools that can only be run
as root (more on this in a minute).
Before changing the ownership of this directory recursively, copy the sample database
configuration file for slapd into it:
# cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
# chown -R ldap:ldap /var/lib/ldap
# slappasswd
4. Use the output of slappasswd as the administrative password in an LDIF file, which we
will call ldaprootpasswd.ldif, with the following contents:
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}PASSWORD
where:
Referring to the theoretical background provided earlier, the ldaprootpasswd.ldif file will
add an entry to the LDAP directory. In that entry, each line represents an attribute: value pair
(where dn, changetype, add, and olcRootPW are the attributes, and the strings to the right of
each colon are their corresponding values).
You may want to keep this in mind as we proceed further, and please note that we are using
the same Common Names (cn=) throughout the rest of this article, where each step depends
on the previous one.
5. Now add the corresponding LDAP entry, specifying the URI of the LDAP server (only
the protocol/host/port fields are allowed):
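The load command was shown as a screenshot; it likely resembled the following (ldapi:/// being the local UNIX-socket URI):

```
ldapadd -H ldapi:/// -f ldaprootpasswd.ldif
```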
LDAP Configuration
and import some basic LDAP definitions from the /etc/openldap/schema directory:
LDAP Definitions
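The import was also shown as a screenshot; a sketch that loads three commonly used schema files (the exact list is an assumption):

```
for schema in cosine nis inetorgperson; do
    ldapadd -H ldapi:/// -f /etc/openldap/schema/${schema}.ldif
done
```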
6. Create another LDIF file, which we will call ldapdomain.ldif, with the following contents,
replacing your domain (in the Domain Component, dc=) and password as appropriate:
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=Manager,dc=mydomain,dc=com" read by * none
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=mydomain,dc=com
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=mydomain,dc=com
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}PASSWORD
dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by
dn="cn=Manager,dc=mydomain,dc=com" write by anonymous auth by self write
by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=mydomain,dc=com" write by * read
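The file above is applied with ldapmodify; a sketch:

```
ldapmodify -H ldapi:/// -f ldapdomain.ldif
```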
7. Now it's time to add some entries to our LDAP directory. Attributes and values are
separated by a colon (:) in the following file, which we'll name baseldapdomain.ldif:
dn: dc=mydomain,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: mydomain com
dc: mydomain
dn: cn=Manager,dc=mydomain,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager
dn: ou=People,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: People
dn: ou=Group,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: Group
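Loading this file requires binding as the Manager account created earlier; a sketch (you will be prompted for the password set with slappasswd):

```
ldapadd -x -W -D "cn=Manager,dc=mydomain,dc=com" -f baseldapdomain.ldif
```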
8. Create an LDAP user called ldapuser (adduser ldapuser), then create the definitions for an
LDAP group in ldapgroup.ldif.
# adduser ldapuser
# vi ldapgroup.ldif
dn: cn=Manager,ou=Group,dc=mydomain,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 1004
(where gidNumber is the GID of ldapuser in /etc/group) and load it:
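The load command, analogous to the previous ones, might look like:

```
ldapadd -x -W -D "cn=Manager,dc=mydomain,dc=com" -f ldapgroup.ldif
```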
9. Add an LDIF file with the definitions for the user ldapuser (ldapuser.ldif):
dn: uid=ldapuser,ou=People,dc=mydomain,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldapuser
uid: ldapuser
uidNumber: 1004
gidNumber: 1004
homeDirectory: /home/ldapuser
userPassword: {SSHA}fiN0YqzbDuDI0Fpqq9UudWmjZQY28S3M
loginShell: /bin/bash
gecos: ldapuser
shadowLastChange: 0
shadowMax: 0
shadowWarning: 0
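As before, a sketch of the command that loads this file:

```
ldapadd -x -W -D "cn=Manager,dc=mydomain,dc=com" -f ldapuser.ldif
```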
Likewise, you can delete the user entry you just created:
# ldapdelete -x -W -D cn=Manager,dc=mydomain,dc=com
"uid=ldapuser,ou=People,dc=mydomain,dc=com"
# firewall-cmd --add-service=ldap
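To make that rule survive a firewalld reload, a sketch:

```
firewall-cmd --add-service=ldap --permanent
firewall-cmd --reload
```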
11. Last, but not least, enable the client to authenticate using LDAP.
To help us in this final step, we will use the authconfig utility (an interface for configuring
system authentication resources).
Using the following command, the home directory for the requested user will be created if it
doesn't exist after authentication against the LDAP server succeeds:
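The authconfig invocation was shown as a screenshot; a sketch, assuming the LDAP server is at 192.168.0.18 (adjust the server address and base DN to your setup):

```
authconfig --enableldap --enableldapauth \
           --ldapserver=ldap://192.168.0.18:389 \
           --ldapbasedn="dc=mydomain,dc=com" \
           --enablemkhomedir --update
```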
LDAP Client Configuration
Summary
In this article we have explained how to set up basic authentication against an LDAP server.
To further configure the setup described in the present guide, please refer to Chapter 13,
LDAP Configuration, in the RHEL 7 System Administrator's Guide, paying special attention to
the security settings using TLS.
Feel free to leave any questions you may have using the comment form below.
RHCSA Series: Essentials of Virtualization and Guest Administration
with KVM Part 15
If you look up the word virtualize in a dictionary, you will find that it means to create a
virtual (rather than actual) version of something. In computing, the term virtualization
refers to the possibility of running multiple operating systems simultaneously, isolated
from one another, on top of the same physical (hardware) system, known in the virtualization
schema as the host.
Through the use of the virtual machine monitor (also known as hypervisor), virtual machines
(referred to as guests) are provided virtual resources (i.e. CPU, RAM, storage, network
interfaces, to name a few) from the underlying hardware.
With that in mind, it is plain to see that one of the main advantages of virtualization is cost
savings (in equipment, network infrastructure, and maintenance effort) and a
substantial reduction in the physical space required to accommodate all the necessary
hardware.
Since this brief how-to cannot cover all virtualization methods, I encourage you to refer to the
documentation listed in the summary for further details on the subject.
Please keep in mind that the present article is intended to be a starting point to learn the
basics of virtualization in RHEL 7 using KVM (Kernel-based Virtual Machine) with
command-line utilities, and not an in-depth discussion of the topic.
In order to set up virtualization, your CPU must support it. You can verify whether your
system meets the requirements with the following command:
# grep -E 'svm|vmx' /proc/cpuinfo
In the following screenshot we can see that the current system (with an AMD
microprocessor) supports virtualization, as indicated by svm. If we had an Intel-based
processor, we would see vmx instead in the results of the above command.
In addition, you will need to have virtualization capabilities enabled in the firmware of your
host (BIOS or UEFI).
1. qemu-kvm is an open source virtualizer that provides hardware emulation for the
KVM hypervisor, whereas qemu-img provides a command-line tool for manipulating
disk images.
2. libvirt includes the tools to interact with the virtualization capabilities of the
operating system.
3. libvirt-python contains a module that permits applications written in Python to use
the interface supplied by libvirt.
4. libguestfs-tools: miscellaneous system administrator command line tools for virtual
machines.
5. virt-install: other command-line utilities for virtual machine administration.
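A sketch of the install command for the five packages listed above:

```
yum update && yum install qemu-kvm qemu-img libvirt libvirt-python libguestfs-tools virt-install
```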
Once the installation completes, make sure you start and enable the libvirtd service:
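For example:

```
systemctl start libvirtd.service
systemctl enable libvirtd.service
```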
By default, each virtual machine will only be able to communicate with the other guests on the same
physical server and with the host itself. To allow the guests to reach other machines inside
our LAN and also the Internet, we need to set up a bridge interface on our host (say br0, for
example) by,
1. adding the following line to our main NIC configuration (most likely
/etc/sysconfig/network-scripts/ifcfg-enp0s3):
BRIDGE=br0
2. creating the configuration file for br0 (/etc/sysconfig/network-scripts/ifcfg-br0)
with these contents (note that you may have to change the IP address, gateway address, and
DNS information):
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.0.18
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
NM_CONTROLLED=no
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br0
ONBOOT=yes
DNS1=8.8.8.8
DNS2=8.8.4.4
3. enabling packet forwarding by adding the following line to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
and loading the change into the running kernel:
# sysctl -p
Note that you may also need to tell firewalld that this kind of traffic should be allowed.
Remember that you can refer to the article on that topic in this same series (Part 11: Network
Traffic Control Using FirewallD and Iptables) if you need help to do that.
Creating VM Images
By default, VM images are created in /var/lib/libvirt/images. This means that you need to
make sure that you have allocated the necessary space in that filesystem to accommodate your
virtual machines.
The following command will create a virtual machine named tecmint-virt01 with 1 virtual
CPU, 1 GB (=1024 MB) of RAM, and 20 GB of disk space (represented by
/var/lib/libvirt/images/tecmint-virt01.img) using the rhel-server-7.0-x86_64-
dvd.iso image located inside /home/gacanepa/ISOs as installation media and the br0 as
network bridge:
# virt-install \
--network bridge=br0 \
--name tecmint-virt01 \
--ram=1024 \
--vcpus=1 \
--disk path=/var/lib/libvirt/images/tecmint-virt01.img,size=20 \
--graphics none \
--cdrom /home/gacanepa/ISOs/rhel-server-7.0-x86_64-dvd.iso \
--extra-args="console=tty0 console=ttyS0,115200"
If the installation file is located on an HTTP server instead of an image stored on your disk,
you will have to replace the --cdrom flag with --location and indicate the address of the
online repository.
As for the --graphics none option, it tells the installer to perform the installation in text mode
exclusively. You can omit that flag if you are using a GUI and a VNC window to
access the main VM console. Finally, with --extra-args we are passing kernel boot
parameters to the installer that set up a serial VM console.
The installation should now proceed as on a regular (physical) server. If not, please review the
steps listed above.
These are some typical administration tasks that you, as a system administrator, will need to
perform on your virtual machines. Note that all of the following commands need to be run
from your host:
From the output of the above command, you will have to note the ID of the virtual machine
(although it will also return its name and current status) because you will need it for most
administration tasks related to a particular VM.
4. Access a VM's serial console if networking is not available and no X server is running on
the host:
Note that this will require that you add the serial console configuration information to the
/etc/grub.conf file (refer to the argument passed to the --extra-args option when the VM
was created).
5. Adjust the resources assigned to a VM. For memory, use virsh setmem, then modify the
VM's XML definition to make the change persistent. For CPU, use virsh setvcpus, then
modify the XML definition likewise.
For further commands and details, please refer to Table 26.1 in Chapter 26 of the RHEL 5
Virtualization Guide (that guide, though a bit old, includes an exhaustive list of virsh
commands used for guest administration).
Summary
In this article we have covered some basic aspects of virtualization with KVM in RHEL 7,
which is both a vast and a fascinating topic, and I hope it will be helpful as a starting guide
for you to later explore more advanced subjects found in the official RHEL virtualization
getting started and deployment / administration guides.
In addition, you can refer to the preceding articles in this KVM series in order to clarify or
expand some of the concepts explained here.
RHCE Series: How to Setup and Test Static
Network Routing Part 1
by Gabriel Cánepa | Published: July 29, 2015 | Last Updated: March 11, 2016
RHCE (Red Hat Certified Engineer) is a certification from Red Hat, a company that provides
an open source operating system and software to the enterprise community, along with
training, support, and consulting services.
Following are the exam objectives based on the Red Hat Enterprise Linux 7 version of the
exam, which we will cover in this RHCE series:
Part 6: Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on
Clients
Part 7: Setting Up NFS Server with Kerberos-based Authentication for Linux Clients
Part 8: Implementing HTTPS through TLS using Network Security Service (NSS) for
Apache
Part 9: How to Setup Postfix Mail Server (SMTP) using null-client Configuration
Part 10: Install and Configure Caching-Only DNS Server in RHEL/CentOS 7
Part 11: Setup and Configure Network Bonding or Teaming in RHEL/CentOS 7
Part 12: Create Centralized Secure Storage using iSCSI Target / Initiator on RHEL/CentOS
7
Part 13: Setting Up NTP (Network Time Protocol) Server in RHEL/CentOS 7
To view fees and register for an exam in your country, check the RHCE Certification page.
In this Part 1 of the RHCE series and the next, we will present basic, yet typical, cases
where the principles of static routing, packet filtering, and network address translation come
into play.
Please note that we will not cover them in depth, but rather organize these contents in a
way that will help you take the first steps and build from there.
One of the wonders of modern networking is the vast availability of devices that can connect
groups of computers, whether in relatively small numbers and confined to a single room or
several machines in the same building, city, country, or across continents.
However, in order to effectively accomplish this in any situation, network packets need to be
routed; in other words, the path they follow from source to destination must be determined
somehow.
Static routing is the process of specifying a route for network packets other than the default,
which is provided by a network device known as the default gateway. Unless specified
otherwise through static routing, network packets are directed to the default gateway; with
static routing, other paths are defined based on predefined criteria, such as the packet
destination.
Let us define the following scenario for this tutorial. We have a Red Hat Enterprise Linux 7
box connecting to router #1 [192.168.0.1] to access the Internet and machines in
192.168.0.0/24.
A second router (router #2) has two network interface cards: enp0s3 is also connected to
router #1 to access the Internet and to communicate with the RHEL 7 box and other
machines in the same network, whereas the other (enp0s8) is used to grant access to the
10.0.0.0/24 network where internal services reside, such as a web and / or database server.
In this article we will focus exclusively on setting up the routing table on our RHEL 7 box to
make sure that it can both access the Internet through router #1 and the internal network via
router #2.
In RHEL 7, you will use the ip command to configure and show devices and routing using
the command line. These changes can take effect immediately on a running system but since
they are not persistent across reboots, we will use ifcfg-enp0sX and route-enp0sX files
inside /etc/sysconfig/network-scripts to save our configuration permanently.
# ip route show
1. The default gateway's IP address is 192.168.0.1 and can be accessed via the enp0s3
NIC.
2. When the system booted up, it enabled the zeroconf route to 169.254.0.0/16 (just in
case). In a few words, if a machine is set to obtain an IP address through DHCP but
fails to do so for some reason, it is automatically assigned an address in this network.
Bottom line: this route will allow us to communicate, also via enp0s3, with other
machines that have failed to obtain an IP address from a DHCP server.
3. Last, but not least, we can communicate with other boxes inside the 192.168.0.0/24
network through enp0s3, whose IP address is 192.168.0.18.
These are the typical tasks that you would have to perform in such a setting. Unless specified
otherwise, the following tasks should be performed on router #2:
# ip link show
Oops! We made a mistake in the IP address. We will have to remove the one we assigned
earlier and then add the right one (10.0.0.18):
Now, please note that you can only add a route to a destination network through a gateway
that is itself already reachable. For that reason, we need to assign an IP address within the
192.168.0.0/24 range to enp0s3 so that our RHEL 7 box can communicate with it:
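The assignment could look like this (192.168.0.19 being the address used for router #2's enp0s3 throughout this article):

```
ip addr add 192.168.0.19/24 broadcast 192.168.0.255 dev enp0s3
```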
and stop / disable (just for the time being until we cover packet filtering in the next article)
the firewall:
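For example:

```
systemctl stop firewalld
systemctl disable firewalld
```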
Back in our RHEL 7 box (192.168.0.18), let's configure a route to 10.0.0.0/24 through
192.168.0.19 (enp0s3 in router #2):
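A sketch of that route command, which you can then verify with ip route show:

```
ip route add 10.0.0.0/24 via 192.168.0.19 dev enp0s3
```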
# ip route show
Likewise, add the corresponding route in the machine(s) you're trying to reach in 10.0.0.0/24:
# ping -c 4 10.0.0.20
# ping -c 4 192.168.0.18
where 192.168.0.18 is, as you will recall, the IP address of our RHEL 7 machine.
Alternatively, we can use tcpdump (you may need to install it with yum install tcpdump) to
check the 2-way communication over TCP between our RHEL 7 box and the web server at
10.0.0.20.
and from another terminal in the same system, let's telnet to port 80 on the web server
(assuming Apache is listening on that port; otherwise, indicate the right port in the following
command):
# telnet 10.0.0.20 80
The connection has been properly initialized, as we can tell by looking at the 2-way
communication between our RHEL 7 box (192.168.0.18) and the web server (10.0.0.20).
Please remember that these changes will go away when you restart the system. If you want to
make them persistent, you will need to edit (or create, if they don't already exist) the
following files on the same systems where we performed the above commands.
Though not strictly necessary for our test case, you should know that /etc/sysconfig/network
contains system-wide network parameters. A typical /etc/sysconfig/network looks as
follows:
When it comes to setting specific variables and values for each NIC (as we did for router #2),
you will have to edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 and
/etc/sysconfig/network-scripts/ifcfg-enp0s8.
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.0.19
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
NAME=enp0s3
ONBOOT=yes
and
TYPE=Ethernet
BOOTPROTO=static
IPADDR=10.0.0.18
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
NAME=enp0s8
ONBOOT=yes
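The static route itself goes in a route-enp0sX file; on the RHEL 7 box, a sketch of /etc/sysconfig/network-scripts/route-enp0s3 matching the route added earlier would be:

```
10.0.0.0/24 via 192.168.0.19 dev enp0s3
```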
Now reboot your system and you should see that route in your table.
Summary
In this article we have covered the essentials of static routing in Red Hat Enterprise Linux
7. Although scenarios may vary, the case presented here illustrates the required principles and
the procedures to perform this task. Before wrapping up, I would like to suggest that you take a
look at Chapter 4 of the Securing and Optimizing Linux section of The Linux
Documentation Project site for further details on the topics covered here.
Free ebook on Securing & Optimizing Linux: The Hacking Solution (v.3.0) This 800+
eBook contains comprehensive collection of Linux security tips and how to use them safely
and easily to configure Linux-based applications and services.
In the next article we will talk about packet filtering and network address translation, to round
out the basic networking skills needed for the RHCE certification.
As always, we look forward to hearing from you, so feel free to leave your questions,
comments, and suggestions using the form below.
When we talk about packet filtering, we refer to a process performed by a firewall in which it
reads the header of each data packet that attempts to pass through it. Then, it filters the packet
by taking the required action based on rules that have been previously defined by the system
administrator.
As you probably know, beginning with RHEL 7, the default service that manages firewall
rules is firewalld. Like iptables, it talks to the netfilter module in the Linux kernel in order to
examine and manipulate network packets. Unlike iptables, updates can take effect
immediately without interrupting active connections; you don't even have to restart the
service.
However, you will recall that we disabled the firewall on router #2 to simplify the example,
since we had not covered packet filtering yet. Let's see now how we can enable incoming
packets destined for a specific service or port on the destination.
First, let's add a permanent rule to allow inbound traffic from enp0s3 (192.168.0.19) to enp0s8
(10.0.0.18):
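A sketch using firewalld direct rules (runtime first, then the permanent copy that ends up in /etc/firewalld/direct.xml):

```
firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT
```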
# cat /etc/firewalld/direct.xml
Now you can telnet to the web server from the RHEL 7 box and run tcpdump again to
monitor the TCP traffic between the two machines, this time with the firewall in router #2
enabled.
# telnet 10.0.0.20 80
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20
What if you want to only allow incoming connections to the web server (port 80) from
192.168.0.18 and block connections from other sources in the 192.168.0.0/24 network?
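One way to express this is with rich rules; a sketch (the invert="true" attribute matches every source except 192.168.0.18, and drop rules are evaluated before accept rules):

```
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept'
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" invert="true" service name="http" drop'
```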
Now you can make HTTP requests to the web server, from 192.168.0.18 and from some
other machine in 192.168.0.0/24. In the first case the connection should complete
successfully, whereas in the second it will eventually time out.
# telnet 10.0.0.20 80
# wget 10.0.0.20
I strongly advise you to check out the Firewalld Rich Language documentation in the Fedora
Project Wiki for further details on rich rules.
Network Address Translation (NAT) is the process whereby a group of computers (it can also
be just one of them) in a private network is assigned a unique public IP address. As a result,
they are still uniquely identified by their own private IP addresses inside the network, but to the
outside they all appear the same.
In addition, NAT makes it possible for computers inside a network to send requests to outside
resources (like the Internet) and have the corresponding responses sent back to the source
system only.
Network Address Translation
In router #2, we will move the enp0s3 interface to the external zone, and enp0s8 to the
internal zone, where masquerading, or NAT, is enabled by default:
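A sketch of the zone assignments:

```
firewall-cmd --permanent --zone=external --change-interface=enp0s3
firewall-cmd --permanent --zone=internal --change-interface=enp0s8
```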
For our current setup, the internal zone, along with everything enabled in it, will be made
the default zone:
# firewall-cmd --set-default-zone=internal
# firewall-cmd --reload
You can now verify that you can ping router #1 and an external site (tecmint.com, for
example) from the web server:
# ping -c 2 192.168.0.1
# ping -c 2 tecmint.com
Setting Kernel Runtime Parameters in RHEL 7
In Linux, you are allowed to change, enable, and disable kernel runtime parameters, and
RHEL is no exception. The /proc/sys interface (sysctl) lets you set runtime parameters on
the fly to modify the system's behavior without much hassle when operating conditions
change.
To do so, the echo shell built-in is used to write to files inside /proc/sys/<category>, where
<category> is most likely one of the following directories:
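For example, to toggle IPv4 packet forwarding on the fly (run as root; the change does not survive a reboot):

```
echo 1 > /proc/sys/net/ipv4/ip_forward
# read the value back to confirm:
cat /proc/sys/net/ipv4/ip_forward
```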
# sysctl -a | less
Another runtime parameter that you may want to set is kernel.sysrq, which enables the
SysRq key on your keyboard to instruct the system to gracefully perform some low-level
functions, such as rebooting the system if it has frozen for some reason:
# sysctl <parameter.name>
For example,
# sysctl net.ipv4.ip_forward
# sysctl kernel.sysrq
Some parameters, such as the ones mentioned above, require only one value, whereas others
(for example, fs.inode-state) require multiple values:
Check Kernel Parameters
In either case, you need to read the kernel's documentation before making any changes.
Please note that these settings will go away when the system is rebooted. To make the
changes permanent, we need to add .conf files inside /etc/sysctl.d as follows:
(where the number 10 indicates the order of processing relative to other files in the same
directory).
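For example, /etc/sysctl.d/10-forward.conf could contain simply:

```
net.ipv4.ip_forward = 1
```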
# sysctl -p /etc/sysctl.d/10-forward.conf
Summary
In this tutorial we have explained the basics of packet filtering, network address translation,
and setting kernel runtime parameters on a running system and persistently across reboots. I
hope you have found this information useful, and as always, we look forward to hearing from
you!
Don't hesitate to share your questions, comments, or suggestions with us using the form
below.
RHCE: Monitor Linux Performance Activity Reports Part 3
Besides the well-known native Linux tools that are used to check disk, memory, and CPU
usage to name a few examples, Red Hat Enterprise Linux 7 provides two additional toolsets
to enhance the data you can collect for your reports: sysstat and dstat.
In this article we will describe both, but let's first start by reviewing the usage of the classic
tools.
With df, you will be able to report disk space and inode usage by filesystem. You need to
monitor both because a lack of space will prevent you from saving further files
(and may even cause the system to crash), and running out of inodes will mean you can't
link further files with their corresponding data structures, producing the same effect: you
won't be able to save those files to disk.
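For example (both commands are safe to run on any system):

```shell
# Disk space usage per filesystem, human-readable:
df -h
# Inode usage per filesystem:
df -i
```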
Check Linux Total Disk Usage
With du, you can estimate file space usage by either file, directory, or filesystem.
For example, let's see how much space is used by the /home directory, which includes all of
the users' personal files. The first command will return the overall space currently used by
the entire /home directory, whereas the second will also display a disaggregated list by
subdirectory:
# du -sch /home
# du -sch /home/*
Check Linux Directory Disk Size
Another utility that can't be missing from your toolset is vmstat. It will allow you to see at a
quick glance information about processes, CPU and memory usage, disk activity, and more.
If run without arguments, vmstat will return averages since the last reboot. While you may
use this form of the command once in a while, it will be more helpful to take a number
of system utilization samples, one after another, with a defined time interval between
samples.
For example,
# vmstat 5 10
As you can see in the above picture, the output of vmstat is divided by columns: procs
(processes), memory, swap, io, system, and cpu. The meaning of each field can be found in
the FIELD DESCRIPTION sections in the man page of vmstat.
Where can vmstat come in handy? Let's examine the behavior of the system before and
during a yum update:
# vmstat -a 1 5
Please note that as files are being modified on disk, the amount of active memory increases
and so does the number of blocks written to disk (bo) and the CPU time that is dedicated to
user processes (us).
Or while saving a large file directly to disk (caused by oflag=dsync):
# vmstat -a 1 5
# dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync
In this case, we can see an even larger number of blocks being written to disk (bo), which was to
be expected, but also an increase in the amount of CPU time the system has to wait for I/O
operations to complete before processing tasks (wa).
The sysstat toolset expands on the functionality of these classic tools, whereas dstat adds some
extra features along with more counters and flexibility. You can find an overall description of
each tool by running yum info sysstat or yum info dstat, respectively, or by checking the
individual man pages after installation.
The main configuration file for sysstat is /etc/sysconfig/sysstat. You will find the following
parameters in that file:
When sysstat is installed, two cron jobs are added and enabled in /etc/cron.d/sysstat. The
first job runs the system activity accounting tool every 10 minutes and stores the reports in
/var/log/sa/saXX where XX is the day of the month.
Thus, /var/log/sa/sa05 will contain all the system activity reports from the 5th of the month.
This assumes that we are using the default value in the HISTORY variable in the
configuration file above:
The second job generates a daily summary of process accounting at 11:53 pm every day and
stores it in /var/log/sa/sarXX files, where XX has the same meaning as in the previous
example:
53 23 * * * root /usr/lib64/sa/sa2 -A
For example, you may want to output system statistics from 9:30 am through 5:30 pm on the
sixth of the month to a .csv file that can easily be viewed using LibreOffice Calc or
Microsoft Excel (this approach will also allow you to create charts or graphs):
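A sketch of such an export with sadf (the -d flag produces semicolon-separated output, converted here to commas; exact flags vary slightly between sysstat versions):

```
sadf -d -s 09:30:00 -e 17:30:00 /var/log/sa/sa06 | sed 's/;/,/g' > system_stats.csv
```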
You could alternatively use the -j flag instead of -d in the sadf command above to output the
system stats in JSON format, which could be useful if you need to consume the data in a web
application, for example.
Finally, let's see what dstat has to offer. Please note that if run without arguments, dstat
assumes -cdngy by default (short for CPU, disk, network, memory pages, and system stats,
respectively), and adds one line every second (execution can be interrupted at any time with Ctrl
+ C):
# dstat
Linux Disk Statistics Monitoring
To output the stats to a .csv file, use the --output flag followed by a file name. Let's see how
this looks in LibreOffice Calc:
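For example (the file name and the 1-second / 5-sample window are illustrative):

```
dstat --output dstat-report.csv 1 5
```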
I strongly advise you to check out the man page of dstat along with the man page of sysstat in
PDF format for your reading convenience. You will find several other options that will help
you create custom and detailed system activity reports.
Summary
In this guide we have explained how to use both native Linux tools and specific utilities
provided with RHEL 7 in order to produce reports on system utilization. At one point or
another, you will come to rely on these reports as best friends.
You have probably used other tools that we have not covered in this tutorial. If so, feel
free to share them with the rest of the community, along with any other suggestions /
questions / comments that you may have, using the form below.
Using Shell Scripting to Automate Linux System Maintenance Tasks
Part 4
Some time ago I read that one of the distinguishing characteristics of an effective system
administrator / engineer is laziness. It seemed a little contradictory at first but the author then
proceeded to explain why:
if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can
suspect he or she is not doing things quite right. In other words, an effective system
administrator / engineer should develop a plan to perform repetitive tasks with as little action
on his / her part as possible, and should foresee problems by using,
for example, the tools reviewed in Part 3 Monitor System Activity Reports Using Linux
Toolsets of this series. Thus, although he or she may not seem to be doing much, it's because
most of his / her responsibilities have been taken care of with the help of shell scripting,
which is what we're going to talk about in this tutorial.
In a few words, a shell script is nothing more and nothing less than a program that is executed
step by step by a shell, which is another program that provides an interface layer between the
Linux kernel and the end user.
By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a
detailed description and some historical background, you can refer to this Wikipedia article.
To find out more about the enormous set of features provided by this shell, you may want to
check out its man page, which can be downloaded in PDF format (Bash Commands). Other
than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise
you to go through the A Guide from Newbies to SysAdmin article on Tecmint.com before
proceeding). Now let's get started.
For our convenience, let's create a directory to store our shell scripts:
# mkdir scripts
# cd scripts
And open a new text file named system_info.sh with your preferred text editor. We will
begin by inserting a few comments at the top and some commands afterwards:
#!/bin/bash
# chmod +x system_info.sh
# ./system_info.sh
Note that the headers of each section are shown in color for better visualization:
Server Monitoring Shell Script
Where COLOR1 and COLOR2 are the foreground and background colors, respectively
(more info and options are explained in this entry from the Arch Linux Wiki) and <YOUR
TEXT HERE> is the string that you want to show in color.
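That pattern can be wrapped in a small reusable function; a sketch (the 37/44 codes, white on blue, are an illustrative choice):

```shell
#!/bin/bash
# Print a string with the given ANSI foreground/background color codes,
# then reset the terminal attributes with \e[0m.
print_colored() {
    local fg=$1 bg=$2 text=$3
    echo -e "\e[${fg};${bg}m${text}\e[0m"
}

print_colored 37 44 "CHECKING DISK USAGE"
```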
Automating Tasks
The tasks that you may need to automate may vary from case to case. Thus, we cannot
possibly cover all of the possible scenarios in a single article, but we will present three classic
tasks that can be automated using shell scripting:
1) updating the local file database, 2) finding (and optionally deleting) files with 777
permissions, and 3) alerting when filesystem usage surpasses a defined limit.
Let's create a file named auto_tasks.sh in our scripts directory with the following content:
#!/bin/bash
echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
updatedb
if [ $? -eq 0 ]; then
echo "The local file database was updated correctly."
else
echo "The local file database was not updated correctly."
fi
echo ""
Please note that there is a space between the two < signs of the process substitution
(done < <(command)) used in the last line of the script.
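For reference, tasks 2 and 3 from the list above could be sketched as follows (SEARCH_DIR and THRESHOLD are hypothetical defaults; adapt them to your environment):

```shell
#!/bin/bash
# Sketch of the remaining two tasks; adjust SEARCH_DIR and THRESHOLD as needed.
SEARCH_DIR=${1:-/home}
THRESHOLD=${2:-90}

# Task 2: find files with 777 permissions (append -delete to remove them instead)
echo -e "\e[4;32mFILES WITH 777 PERMISSIONS\e[0m"
find "$SEARCH_DIR" -type f -perm 0777 2>/dev/null

# Task 3: alert when any filesystem's usage surpasses THRESHOLD percent
echo -e "\e[4;32mFILESYSTEMS ABOVE ${THRESHOLD}% USAGE\e[0m"
while read -r line; do
        usage=$(echo "$line" | awk '{print $5}' | tr -d '%')
        if [ "$usage" -gt "$THRESHOLD" ]; then
                echo "WARNING: $line is above ${THRESHOLD}%"
        fi
done < <(df -h | grep -vi filesystem)
```

Note the same process substitution (done < <(...)) mentioned above, which feeds the output of df -h into the loop without a subshell.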
Using Cron
To take efficiency one step further, you will not want to sit in front of your computer and run
those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic
basis and send the results to a predefined list of recipients via email, or save them to a file
that can be viewed using a web browser.
The following script (filesystem_usage.sh) will run the well-known df -h command, format
the output into an HTML table and save it in the report.html file:
#!/bin/bash
# Sample script to demonstrate the creation of an HTML report using shell
scripting
# Web directory
WEB_DIR=/var/www/html
# A little CSS and table layout to make the report look a little nicer
echo "<HTML>
<HEAD>
<style>
.titulo{font-size: 1em; color: white; background:#0863CE; padding: 0.1em
0.2em;}
table
{
border-collapse:collapse;
}
table, td, th
{
border:1px solid black;
}
</style>
<meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />
</HEAD>
<BODY>" > $WEB_DIR/report.html
# View hostname and insert it at the top of the html body
HOST=$(hostname)
echo "Filesystem usage for host <strong>$HOST</strong><br>
Last updated: <strong>$(date)</strong><br><br>
<table border='1'>
<tr><th class='titulo'>Filesystem</th>
<th class='titulo'>Size</th>
<th class='titulo'>Use %</th>
</tr>" >> $WEB_DIR/report.html
# Read the output of df -h line by line
while read line; do
echo "<tr><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $1}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $2}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $5}' >> $WEB_DIR/report.html
echo "</td></tr>" >> $WEB_DIR/report.html
done < <(df -h | grep -vi filesystem)
echo "</table></BODY></HTML>" >> $WEB_DIR/report.html
Server Monitoring Report
You can add to that report as much information as you want. To run the script every day at
1:30 pm, add the following crontab entry:
30 13 * * * /root/scripts/filesystem_usage.sh
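If you also want cron to email the script's output, a MAILTO line can precede the entry in the crontab (the address below is hypothetical):

```
MAILTO=sysadmin@mydomain.com
30 13 * * * /root/scripts/filesystem_usage.sh
```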
Summary
You will most likely think of several other tasks that you want or need to automate; as you
can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you
find this article helpful and don't hesitate to add your own ideas or comments via the form
below.
RHCE Exam: Manage System Logs Using Rsyslogd and Logrotate Part 5
In RHEL 7, the rsyslogd daemon is responsible for system logging and reads its
configuration from /etc/rsyslog.conf (this file specifies the default location for all system
logs) and from files inside /etc/rsyslog.d, if any.
Rsyslogd Configuration
A quick inspection of the rsyslog.conf will be helpful to start. This file is divided into 3 main
sections: Modules (since rsyslog follows a modular design), Global directives (used to set
global properties of the rsyslogd daemon), and Rules. As you will probably guess, this last
section indicates what gets logged or shown (also known as the selector) and where, and will
be our focus throughout this article.
Rsyslogd Configuration
In the image above, we can see that a selector consists of one or more pairs Facility:Priority
separated by semicolons, where Facility describes the type of message (refer to section 4.1.1
in RFC 3164 to see the complete list of facilities available for rsyslog) and Priority indicates
its severity, which can be one of the following self-explanatory words:
1. debug
2. info
3. notice
4. warning
5. err
6. crit
7. alert
8. emerg
Though not a priority itself, the keyword none means no priority at all for the given facility.
Note that a given priority indicates that all messages of that priority and above should be
logged. Thus, the line in the example above instructs the rsyslogd daemon to log all
messages of priority info or higher (regardless of the facility), except those belonging to the
mail, authpriv, and cron services (no messages coming from these facilities will be taken into
account), to /var/log/messages.
You can also group multiple facilities using the comma sign to apply the same priority to all of
them. Thus, the line:
*.info;mail.none;authpriv.none;cron.none /var/log/messages
Could be rewritten as
*.info;mail,authpriv,cron.none /var/log/messages
In other words, the facilities mail, authpriv, and cron are grouped and the keyword none is
applied to the three of them.
To log all daemon messages to /var/log/tecmint.log, we need to add the following line either
in rsyslog.conf or in a separate file (easier to manage) inside /etc/rsyslog.d:
daemon.* /var/log/tecmint.log
Let's restart the daemon (note that the service name does not end with a d):
# systemctl restart rsyslog
And check the contents of our custom log before and after restarting two random daemons:
Create Custom Log File
As a self-study exercise, I would recommend you play around with the facilities and priorities
and either log additional messages to existing log files or create new ones as in the previous
example.
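For instance, a minimal drop-in file for such an exercise (the filename and log path below are only suggestions) could send all kernel messages of priority warning or above to a dedicated file:

```
# /etc/rsyslog.d/kernel.conf
kern.warning /var/log/kernel-warnings.log
```

Remember to restart the rsyslog service after adding a drop-in file so that it takes effect.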
To prevent log files from growing endlessly, the logrotate utility is used to rotate, compress,
remove, and optionally mail logs, thus easing the administration of systems that generate
large numbers of log files.
Logrotate runs daily as a cron job (/etc/cron.daily/logrotate) and reads its configuration
from /etc/logrotate.conf and from files located in /etc/logrotate.d, if any.
As with the case of rsyslog, even when you can include settings for specific services in the
main file, creating separate configuration files for each one will help organize your settings
better.
Logrotate Configuration
In the example above, logrotate will perform the following actions for /var/log/wtmp:
attempt to rotate it only once a month, and only if the file is at least 1 MB in size; then create a
brand new log file with permissions set to 0664 and ownership given to user root and group
utmp. Finally, it will keep only one archived log, as specified by the rotate directive:
Rotate Apache Log Files
You can read more about the settings for logrotate in its man pages (man logrotate and man
logrotate.conf). Both files are provided along with this article in PDF format for your reading
convenience.
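Based on the directives just discussed, a configuration fragment with that behavior would look roughly like this (a sketch modeled on the stock RHEL defaults for wtmp):

```
/var/log/wtmp {
    monthly
    minsize 1M
    create 0664 root utmp
    rotate 1
}
```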
As a system engineer, it will be pretty much up to you to decide for how long logs will be
stored and in what format, depending on whether you have /var in a separate partition /
logical volume. Otherwise, you will really want to consider removing old logs to save storage
space. On the other hand, you may be forced to keep several logs for future security auditing
according to your company's or client's internal policies.
Of course, examining logs (even with the help of tools such as grep and regular expressions)
can become a rather tedious task. For that reason, rsyslog allows us to export them into a
database (out-of-the-box supported RDBMSs include MySQL, MariaDB, PostgreSQL, and Oracle).
This section of the tutorial assumes that you have already installed the MariaDB server and
client in the same RHEL 7 box where the logs are being managed:
Then use the mysql_secure_installation utility to set the password for the root user and
other security considerations:
Secure MySQL Database
Note: If you don't want to use the MariaDB root user to insert log messages into the database,
you can configure another user account to do so. Explaining how to do that is out of the scope
of this tutorial, but it is explained in detail in the MariaDB knowledge base. In this tutorial we will
use the root account for simplicity.
Next, download the createDB.sql script from GitHub and import it into your database server
(for example, with mysql -u root -p < createDB.sql). Then enable the MySQL output module
by adding the following lines to /etc/rsyslog.conf and restarting rsyslog:
$ModLoad ommysql
$ActionOmmysqlServerPort 3306
*.* :ommysql:localhost,Syslog,root,YourPasswordHere
Now perform some tasks that will modify the logs (like stopping and starting services, for
example), then log on to your DB server and use standard SQL commands to display and search
the logs:
USE Syslog;
SELECT ReceivedAt, Message FROM SystemEvents;
Summary
In this article we have explained how to set up system logging, how to rotate logs, and how to
redirect the messages to a database for easier search. We hope that these skills will be helpful
as you prepare for the RHCE exam and in your daily responsibilities as well.
As always, your feedback is more than welcome. Feel free to use the form below to reach us.
In this article and in the next of this series we will go through the essentials of setting up
Samba and NFS servers with Windows/Linux and Linux clients, respectively.
RHCE: Setup Samba File Sharing Part 6
This article will definitely come in handy if you're called upon to set up file servers in
corporate or enterprise environments where you are likely to find different operating systems
and types of devices.
Since you can read about the background and the technical aspects of both Samba and NFS
all over the Internet, in this article and the next we will cut right to the chase with the topic at
hand.
Our current testing environment consists of two RHEL 7 boxes and one Windows 8
machine, in that order:
On box2:
Step 2: Setting Up File Sharing Through Samba
One of the reasons why Samba is so relevant is that it provides file and print services to
SMB/CIFS clients, which causes those clients to see the server as if it were a Windows
system (I must admit I tend to get a little emotional while writing about this topic, as it was
my first setup as a new Linux system administrator some years ago).
To allow for group collaboration, we will create a group named finance with two members
(user1 and user2) using the useradd command, and a directory /finance in box1.
We will also change the group owner of this directory to finance and set its permissions to
0770 (read, write, and execute permissions for the owner and the group owner):
# groupadd finance
# useradd user1
# useradd user2
# usermod -a -G finance user1
# usermod -a -G finance user2
# mkdir /finance
# chmod 0770 /finance
# chgrp finance /finance
Now it's time to dive into the configuration file /etc/samba/smb.conf and add the section for
our share: we want the members of the finance group to be able to browse the contents of
/finance, and save / create files or subdirectories in it (which by default will have their
permission bits set to 0770 and finance will be their group owner):
smb.conf
[finance]
comment=Directory for collaboration of the company's finance team
browsable=yes
path=/finance
public=no
valid users=@finance
write list=@finance
writeable=yes
create mask=0770
force create mode=0770
force group=finance
Save the file and then test it with the testparm utility. If there are any errors, the output of the
following command will indicate what you need to fix. Otherwise, it will display a review of
your Samba server configuration:
Should you want to add another share that is open to the public (meaning without any
authentication whatsoever), create another section in /etc/samba/smb.conf and, under the new
share's name, copy the section above, only changing public=no to public=yes and not
including the valid users and write list directives.
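Such a section might look like the following (the share name and path here are hypothetical):

```
[public]
comment=Public directory, no authentication required
browsable=yes
path=/public
public=yes
writeable=yes
```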
Next, you will need to add user1 and user2 as Samba users. To do so, you will use the
smbpasswd command, which interacts with Samba's internal database. You will be
prompted to enter a password that you will later use to connect to the share:
# smbpasswd -a user1
# smbpasswd -a user2
Finally, restart Samba, enable the service to start on boot, and make sure the share is actually
available to network clients:
Verify Samba Share
At this point, the Samba file server has been properly installed and configured. Now its time
to test this setup on our RHEL 7 and Windows 8 clients.
First, make sure the Samba share is accessible from this client:
Mount Samba Share on Linux
As with any other storage media, you can mount (and later unmount) this network share when
needed:
fstab
//192.168.0.18/finance /media/samba cifs
credentials=/media/samba/.smbcredentials,defaults 0 0
.smbcredentials
username=user1
password=PasswordForUser1
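For a one-off manual mount of the same share (using the IP, credentials file, and mount point from the entries above), the commands might be:

```
# mkdir -p /media/samba
# mount -t cifs -o credentials=/media/samba/.smbcredentials //192.168.0.18/finance /media/samba
# umount /media/samba
```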
Finally, let's create a file inside /finance and check the permissions and ownership:
# touch /media/samba/FileCreatedInRHELClient.txt
As you can see, the file was created with 0770 permissions and ownership set to
user1:finance.
Step 7: Mounting the Samba Share in Windows
To mount the Samba share in Windows, go to My PC and choose Computer, then Map
network drive. Next, assign a letter for the drive to be mapped and check Connect using
different credentials (the screenshots below are in Spanish, my native language):
Finally, let's create a file and check the permissions and ownership:
# ls -l /finance
This time the file belongs to user2, since that's the account we used to connect from the
Windows client.
Summary
In this article we have explained not only how to set up a Samba server and two clients using
different operating systems, but also how to configure firewalld and SELinux on the
server to allow the desired group collaboration capabilities.
Last, but not least, let me recommend the reading of the online man page of smb.conf to
explore other configuration directives that may be more suitable for your case than the
scenario described in this article.
As always, feel free to drop a comment using the form below if you have any comments or
suggestions.
In this article we will walk you through the process of using Kerberos-based authentication
for NFS shares. It is assumed that you have already set up an NFS server and a client. If not,
please refer to install and configure NFS server, which lists the necessary packages that
need to be installed and explains how to perform initial configurations on the server before
proceeding further.
In addition, you will want to configure both SELinux and firewalld to allow for file sharing
through NFS.
The following example assumes that your NFS share is located in /nfs in box2:
1. Create a group called nfs and add the nfsnobody user to it, then change the permissions of
the /nfs directory to 0770 and its group owner to nfs. Thus, nfsnobody (which is mapped to
the client requests) will have write permissions on the share, and you won't need to use
no_root_squash in the /etc/exports file.
# groupadd nfs
# usermod -a -G nfs nfsnobody
# chmod 0770 /nfs
# chgrp nfs /nfs
2. Modify the exports file (/etc/exports) as follows to only allow access from box1 using
Kerberos security (sec=krb5).
Note that the value of anongid has been set to the GID of the nfs group that we created
previously:
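Putting steps 1 and 2 together, the resulting /etc/exports line might read as follows (the GID of the nfs group, shown here as 1004, will likely differ on your system):

```
/nfs box1(rw,sec=krb5,anongid=1004)
```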
3. Re-export (-r) all (-a) the NFS shares. Adding verbosity to the output (-v) is a good idea
since it will provide helpful information to troubleshoot the server if something goes wrong:
# exportfs -arv
4. Restart and enable the NFS server and related services. Note that you don't have to enable
nfs-lock and nfs-idmapd because they will be automatically started by the other services on
boot:
Note that the Kerberos service is crucial to the authentication scheme.
As you can see, the NFS server and the KDC are hosted in the same machine for simplicity,
although you can set them up in separate machines if you have more available. Both
machines are members of the mydomain.com domain.
Last but not least, Kerberos requires at least a basic schema of name resolution and the
Network Time Protocol service to be present in both client and server since the security of
Kerberos authentication is in part based upon the timestamps of tickets.
To set up name resolution, we will use the /etc/hosts file in both client and server:
In RHEL 7, chrony is the default software that is used for NTP synchronization:
To make sure chrony is actually synchronizing your system's time with time servers, you
may want to issue the following command two or three times and make sure the offset is
getting nearer to zero:
# chronyc tracking
Synchronize Server Time with Chrony
To set up the KDC, install the following packages on both server and client (omit the server
package in the client):
Now create the Kerberos database (please note that this may take a while as it requires
some level of entropy in your system; to speed things up, I opened another terminal and ran
ping -f localhost for 30-45 seconds):
# kdb5_util create -s
Next, enable Kerberos through the firewall and start / enable the related services.
Next, using the kadmin.local tool, create an admin principal for root:
# kadmin.local
# addprinc root/admin
Same with the NFS service for both client (box1) and server (box2). Please note that in the
screenshot below I forgot to do it for box1 before quitting:
And exit by typing quit and pressing Enter:
# kinit root/admin
# klist
Cache Kerberos
The last step before actually using Kerberos is storing into a keytab file (in the server) the
principals that are authorized to use Kerberos authentication:
# kadmin.local
# ktadd host/box2.mydomain.com
# ktadd nfs/box2.mydomain.com
# ktadd nfs/box1.mydomain.com
Finally, mount the share and perform a write test:
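Using the hostnames from this setup (and /mnt as an assumed mount point), the mount and write test might look like:

```
# mount -t nfs4 -o sec=krb5 box2.mydomain.com:/nfs /mnt
# echo "Write test from box1" > /mnt/krb5-test.txt
```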
Let's now unmount the share, rename the keytab file in the client (to simulate it's not
present) and try to mount the share again:
# umount /mnt
# mv /etc/krb5.keytab /etc/krb5.keytab.orig
Now you can use the NFS share with Kerberos-based authentication.
Summary
In this article we have explained how to set up NFS with Kerberos authentication. Since
there is much more to the topic than we can cover in a single guide, feel free to check the
online Kerberos documentation, and since Kerberos is a bit tricky, to say the least, don't
hesitate to drop us a note using the form below if you run into any issue or need help with
your testing or implementation.
RHCE Series: Implementing HTTPS through TLS using Network Security Services (NSS) for
Apache Part 8
In order to provide more secure communications between web clients and servers, the
HTTPS protocol was born as a combination of HTTP and SSL (Secure Sockets Layer) or
more recently, TLS (Transport Layer Security).
Due to some serious security breaches, SSL has been deprecated in favor of the more robust
TLS. For that reason, in this article we will explain how to secure connections between your
web server and clients using TLS.
This tutorial assumes that you have already installed and configured your Apache web server.
If not, please refer to following article in this site before proceeding further.
First off, make sure that Apache is running and that both http and https are allowed through
the firewall:
Important: Please note that you can replace mod_nss with mod_ssl in the command above
if you want to use the OpenSSL libraries instead of NSS (Network Security Services) to
implement TLS (which one to use is left entirely up to you, but we will use NSS in this
article as it is more robust; for example, it supports recent cryptography standards such as
PKCS #11).
Then restart Apache and check whether the mod_nss module has been loaded:
# apachectl restart
# httpd -M | grep nss
1. Indicate NSS database directory. You can use the default directory or create a new one. In
this tutorial we will use the default:
NSSCertificateDatabase /etc/httpd/alias
2. Avoid manual passphrase entry on each system start by saving the password to the
database directory in /etc/httpd/nss-db-password.conf:
NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf
internal:mypassword
In addition, its permissions and ownership should be set to 0640 and root:apache,
respectively:
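Those permissions and ownership can be applied with the following commands (the path is the one configured above):

```
# chmod 0640 /etc/httpd/nss-db-password.conf
# chgrp apache /etc/httpd/nss-db-password.conf
```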
3. Red Hat recommends disabling SSL and all versions of TLS previous to TLSv1.0 due to
the POODLE SSLv3 vulnerability (more information here).
Make sure that every instance of the NSSProtocol directive reads as follows (you are likely
to find only one if you are not hosting other virtual hosts):
NSSProtocol TLSv1.0,TLSv1.1
4. Apache will refuse to restart as this is a self-signed certificate and will not recognize the
issuer as valid. For this reason, in this particular case you will have to add:
NSSEnforceValidCerts off
5. Though not strictly required, it is important to set a password for the NSS database:
# certutil -W -d /etc/httpd/alias
Next, we will create a self-signed certificate that will identify the server to our clients (please
note that this method is not the best option for production environments; for such use you
may want to consider buying a certificate verified by a trusted third-party certificate authority,
such as DigiCert).
To create a new NSS-compliant certificate for box1 which will be valid for 365 days, we will
use the genkey command. When the first screen appears:
Choose Next:
Create Apache SSL Key
You can leave the default choice for the key size (2048), then choose Next again:
Generating Random Key Bits
To speed up the process, you will be prompted to enter random text in your console, as
shown in the following screencast. Please note how the progress bar stops when no input
from the keyboard is received. Then, you will be asked whether to send the Certificate Sign
Request (CSR) to a Certificate Authority (CA): choose No, as this is a self-signed certificate.
Finally, you will be prompted to enter the password to the NSS certificate that you set earlier.
You can then list the certificates stored in the database:
# certutil -L -d /etc/httpd/alias
And delete them by name (only if strictly required, replacing box1 with your own certificate
name) with:
# certutil -d /etc/httpd/alias -D -n "box1"
Finally, it's time to test the secure connection to our web server. When you point your
browser to https://<web server IP or hostname>, you will get the well-known message
This connection is untrusted:
In the above situation, you can click on Add Exception and then Confirm Security
Exception, but don't do it yet. Let's first examine the certificate to see if its details match
the information that we entered earlier (as shown in the screencast).
To do so, click on View > Details tab above and you should see this when you select
Issuer from the list:
Confirm Apache SSL Certificate Details
Now you can go ahead and confirm the exception (either for this time or permanently), and you
will be taken to your web server's DocumentRoot directory via https, where you can inspect
the connection details using your browser's built-in developer tools:
In Firefox, you can launch them by right-clicking on the screen and choosing Inspect Element
from the context menu, specifically through the Network tab:
Please note that this is the same information as displayed before, which was entered during
the certificate creation. There's also a way to test the connection using command line tools:
Summary
As I'm sure you already know, the presence of HTTPS inspires trust in visitors who may
have to enter personal information in your site (from user names and passwords all the way
to financial / bank account information).
In that case, you will want to get a certificate signed by a trusted Certificate Authority as we
explained earlier (the steps to set it up are identical with the exception that you will need to
send the CSR to a CA, and you will get the signed certificate back); otherwise, a self-signed
certificate as the one used in this tutorial will do.
For more details on the use of NSS, please refer to the online help about mod_nss. And don't
hesitate to let us know if you have any questions or comments.
The following image illustrates the process of email transport starting with the sender until
the message reaches the recipient's inbox:
To make this possible, several things happen behind the scenes. In order for an email
message to be delivered from a client application (such as Thunderbird, Outlook, or webmail
services such as Gmail or Yahoo! Mail) to a mail server, and from there to the destination
server and finally to its intended recipient, an SMTP (Simple Mail Transfer Protocol)
service must be in place on each server.
That is the reason why in this article we will explain how to set up an SMTP server in RHEL
7 where emails sent by local users (even to other local users) are forwarded to a central mail
server for easier access.
Our test environment will consist of an originating mail server and a central mail server or
relayhost.
For name resolution we will use the well-known /etc/hosts file on both boxes:
1. Install Postfix:
2. Start the service and enable it to run on future reboots:
Postfix's main configuration file is located in /etc/postfix/main.cf. This file itself is a great
documentation source, as the included comments explain the purpose of the program's
settings.
For brevity, let's display only the lines that need to be edited (yes, you need to leave
mydestination blank in the originating server; otherwise the emails will be stored locally as
opposed to in a central mail server which is what we actually want):
And set the related SELinux boolean to true permanently if not already done:
# setsebool -P allow_postfix_local_write_mail_spool on
The above SELinux boolean will allow Postfix to write to the mail spool in the central
server.
5. Restart the service on both servers for the changes to take effect:
If Postfix does not start correctly, you can use the following commands to troubleshoot.
To test the mail servers, you can use any Mail User Agent (MUA for short) such as mail
or mutt.
Since mutt is a personal favorite, I will use it in box1 to send an email to user tecmint using
an existing file (mailbody.txt) as message body:
Now go to the central mail server (mail.mydomain.com), log on as user tecmint, and check
whether the email was received:
# su tecmint
# mail
Check Postfix Mail Server Delivery
If the email was not received, check root's mail spool for a warning or error notification. You
may also want to make sure that the SMTP service is running on both servers and that port
25 is open in the central mail server using the nmap command:
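For example, a quick check of port 25 on the central mail server (hostname as used earlier in this setup) might be:

```
# nmap -Pn -p 25 mail.mydomain.com
```

The -Pn option skips host discovery, which is useful when ICMP is filtered.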
Summary
Setting up a mail server and a relay host as shown in this article is an essential skill that
every system administrator must have, and represents the foundation for understanding and
installing a more complex scenario, such as a mail server hosting a live domain for several
(even hundreds or thousands of) email accounts.
Please note that this kind of setup requires a DNS server, which is out of the scope of this
guide, but you can use the following article to set up a DNS server:
Finally, I highly recommend you become familiar with Postfix's configuration file (main.cf)
and the program's man page. If in doubt, don't hesitate to drop us a line using the form below
or using our forum, Linuxsay.com, where you will get almost immediate help from Linux
experts from all around the world.
A cache-only DNS server (also known as a resolver) queries DNS records and fetches all the
DNS details from other servers, and keeps each query request in its cache for later use,
so that when we perform the same request in the future, it will be served from the cache,
thus reducing the response time even more.
If you're looking to set up a DNS Caching-Only Server in CentOS/RHEL 6, follow this guide
here:
My Testing Environment
1. The Cache-Only DNS server can be installed via the bind package. If you don't remember
the package name, you can do a quick search for it using the command below:
# yum search bind
2. In the above result, you will see several packages. From those, we need to choose and
install only the bind and bind-utils packages using the following yum command:
# yum install bind bind-utils
3. Once DNS packages are installed we can go ahead and configure DNS. Open and edit
/etc/named.conf using your preferred text editor. Make the changes suggested below (or
you can use your settings as per your requirements).
Configure Cache-Only DNS in CentOS and RHEL 7
These directives instruct the DNS server to listen on UDP port 53, and to allow queries and
cache responses from localhost and any other machine that reaches the server.
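The relevant options in /etc/named.conf would look roughly like this sketch (the access lists here are deliberately permissive for a lab setup; tighten them for production):

```
options {
        listen-on port 53 { 127.0.0.1; any; };
        allow-query { localhost; any; };
        allow-query-cache { localhost; any; };
        recursion yes;
};
```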
4. It is important to note that the ownership of this file must be set to root:named and also if
SELinux is enabled, after editing the configuration file we need to make sure that its context
is set to named_conf_t as shown in Fig. 4 (same thing for the auxiliary file
/etc/named.rfc1912.zones):
# ls -lZ /etc/named.conf
# ls -lZ /etc/named.rfc1912.zones
5. Additionally, we need to test the DNS configuration for syntax errors before
starting the bind service:
# named-checkconf /etc/named.conf
6. After the syntax verification comes out clean, restart the named service for the new
changes to take effect, enable it to start automatically across system boots, and then
check its status:
Configure and Start DNS Named Service
7. Open the DNS port in the firewall:
# firewall-cmd --add-port=53/udp
# firewall-cmd --add-port=53/udp --permanent
8. If you wish to deploy the Cache-only DNS server within a chroot environment, you need to
have the bind-chroot package installed on the system; no further configuration is needed, as
named by default hard-links its configuration into the chroot.
Once the bind-chroot package has been installed, you can restart named for the new changes
to take effect:
# ln -s /etc/named.conf /var/named/chroot/etc/named.conf
10. Add the DNS Cache server's IP 192.168.0.18 as resolver to the client machine. Edit
/etc/sysconfig/network-scripts/ifcfg-enp0s3 as shown in the following figure:
DNS=192.168.0.18
nameserver 192.168.0.18
11. Finally, it's time to check our cache server. To do this, you can use the dig utility or the
nslookup command.
Choose any website and query it twice (we will use facebook.com as an example). Note that
with dig the second time the query is completed much faster because it is being served from
the cache.
# dig facebook.com
Check Cache only DNS Queries
You can also use nslookup to verify that the DNS server is working as expected.
# nslookup facebook.com
Summary
In this article we have explained how to set up a DNS Cache-only server in Red Hat
Enterprise Linux 7 and CentOS 7, and tested it in a client machine. Feel free to let us know
if you have any questions or suggestions using the form below.
How to Setup and Configure Network Bonding or Teaming in
RHEL/CentOS 7 Part 11
When a system administrator wants to increase the bandwidth available and provide
redundancy and load balancing for data transfers, a kernel feature known as network bonding
allows you to get the job done in a cost-effective way.
In simple words, bonding means aggregating two or more physical network interfaces (called
slaves) into a single, logical one (called master). If a specific NIC (Network Interface Card)
experiences a problem, communications are not affected significantly as long as the other(s)
remain active.
By default, the bonding kernel module is not enabled. Thus, we will need to load it and
ensure it is persistent across boots. When used with the --first-time option, modprobe
will alert us if loading the module fails:
# modprobe --first-time bonding
The above command will load the bonding module for the current session. In order to ensure
persistency, create a .conf file inside /etc/modules-load.d with a descriptive name, such
as /etc/modules-load.d/bonding.conf:
# echo "# Load the bonding kernel module at boot" > /etc/modules-
load.d/bonding.conf
# echo "bonding" >> /etc/modules-load.d/bonding.conf
Now reboot your server and once it restarts, make sure the bonding module is loaded
automatically, as seen in Fig. 1:
In this article we will use 3 interfaces (enp0s3, enp0s8, and enp0s9) to create a bond, named
conveniently bond0.
To create bond0, we can use nmtui, the text interface for controlling
NetworkManager. When invoked without arguments from the command line, nmtui brings
up a text interface that allows you to edit an existing connection, activate a connection, or set
the system hostname.
In the Edit Connection screen, add the slave interfaces (enp0s3, enp0s8, and enp0s9 in our
case) and give them a descriptive (Profile) name (for example, NIC #1, NIC #2, and NIC #3,
respectively).
In addition, you will need to set a name and device for the bond (TecmintBond and bond0 in
Fig. 3, respectively) and an IP address for bond0, enter a gateway address, and the IPs of
DNS servers.
Note that you do not need to enter the MAC address of each interface since nmtui will do
that for you. You can leave all other settings as default. See Fig. 3 for more details.
Network Bonding Teaming Configuration
When you're done, go to the bottom of the screen and choose OK (see Fig. 4):
Configuration of bond0
And you're done. Now you can exit the text interface and return to the command line, where
you will enable the newly created interface using the ip command:
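The exact command appears only as a screenshot in the original; assuming the device name bond0 from above, a minimal sketch would be:

```shell
# Bring the bonded interface up (run as root)
ip link set dev bond0 up
```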
After that, you can see that bond0 is UP and is assigned 192.168.0.200, as seen in Fig. 5:
To verify that bond0 actually works, you can either ping its IP address from another machine,
or, what's even better, watch the kernel interface table in real time (the refresh interval in
seconds is given by the -n option) to see how network traffic is distributed between the three
network interfaces, as shown in Fig. 6.
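The command behind Fig. 6 is not shown in the text; one common way to watch the kernel interface table (an assumption, since the exact invocation may differ) is:

```shell
# Refresh the kernel interface table every second, highlighting
# changes, to watch TX/RX counters move across the slaves
watch -d -n 1 netstat -i
```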
Check Kernel Interface Table
It is important to note that there are several bonding modes, each with its distinguishing
characteristics. They are documented in section 4.5 of the Red Hat Enterprise Linux 7
Network Administration guide. Depending on your needs, you will choose one or the other.
In our current setup, we chose the Round-robin mode (see Fig. 3), which ensures packets are
transmitted beginning with the first slave in sequential order, ending with the last slave, and
starting with the first again.
The Round-robin alternative is also called mode 0, and provides load balancing and fault
tolerance. To change the bonding mode, you can use nmtui as explained before (see also Fig.
7):
If we change it to Active Backup, we will be prompted to choose a slave that will be the only
active interface at a given time. If that card fails, one of the remaining slaves will take its
place and become active.
Let's choose enp0s3 to be the primary slave, bring bond0 down and up again, restart the
network, and display the kernel interface table (see Fig. 8).
Note how data transfers (TX-OK and RX-OK) are now being made over enp0s3 only:
Bond Acting in Active Backup Mode
Alternatively, you can view the bond as the kernel sees it (see Fig. 9):
# cat /proc/net/bonding/bond0
Check Network Bond as Kernel
Summary
In this chapter we have discussed how to set up and configure bonding in Red Hat
Enterprise Linux 7 (also works on CentOS 7 and Fedora 22+) in order to increase
bandwidth along with load balancing and redundancy for data transfers.
As you take the time to explore other bonding modes, you will come to master the concepts
and practice related to this topic of the certification.
If you have questions about this article, or suggestions to share with the rest of the
community, feel free to let us know using the comment form below.
Create Centralized Secure Storage using iSCSI Target / Initiator on
RHEL/CentOS 7 Part 12
iSCSI is a block-level protocol for managing storage devices over TCP/IP networks,
especially over long distances. An iSCSI target is a remote hard disk presented by a remote
iSCSI server (the target). On the other hand, the iSCSI client is called the initiator, and it
accesses the storage that is shared by the target machine.
Server (Target):
Client (Initiator):
To install the packages needed for the target (we will deal with the client later), do:
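The installation command appears only as a screenshot; on RHEL/CentOS 7 the target is provided by the targetcli package, so it would presumably be:

```shell
# Install the iSCSI target administration shell and its libraries
yum install targetcli
```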
When the installation completes, we will start and enable the service as follows:
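The service commands are shown only as a screenshot; a sketch, assuming the RHEL 7 service name target:

```shell
# Start the iSCSI target service now and enable it at boot
systemctl start target
systemctl enable target
```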
# firewall-cmd --add-service=iscsi-target
# firewall-cmd --add-service=iscsi-target --permanent
And last but not least, we must not forget to allow the iSCSI target discovery:
# firewall-cmd --add-port=860/tcp
# firewall-cmd --add-port=860/tcp --permanent
# firewall-cmd --reload
Before proceeding to define LUNs in the Target, we need to create two logical volumes as
explained in Part 6 of the RHCSA series (Configuring system storage).
This time we will name them vol_projects and vol_backups and place them inside a
volume group called vg00, as shown in Fig. 1. Feel free to choose the space allocated to each
LV:
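The LV creation itself is shown in a screenshot; assuming a disk /dev/sdb already initialized as a physical volume (device name and sizes here are illustrative), the steps would look like:

```shell
# Create the volume group and the two logical volumes used as backstores
vgcreate vg00 /dev/sdb
lvcreate -L 10G -n vol_projects vg00
lvcreate -L 10G -n vol_backups vg00
```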
After creating the LVs, we are ready to define the LUNs in the Target in order to make them
available for the client machine.
As shown in Fig. 2, we will open a targetcli shell and issue the following commands,
which will create two block backstores (local storage resources that represent the LUNs the
initiator will actually use) and an iSCSI Qualified Name (IQN), a method of addressing the
target server.
Please refer to Page 32 of RFC 3720 for more details on the structure of the IQN. In
particular, the text after the colon character (:tgt1) specifies the name of the target, while
the text before (server:) indicates the hostname of the target inside the domain.
# targetcli
# cd backstores
# cd block
# create server.backups /dev/vg00/vol_backups
# create server.projects /dev/vg00/vol_projects
# cd /iscsi
# create iqn.2016-02.com.tecmint.server:tgt1
With the above step, a new TPG (Target Portal Group) was created along with the default
portal (a pair consisting of an IP address and a port which is the way initiators can reach the
target) listening on port 3260 of all IP addresses.
If you want to bind your portal to a specific IP (the Target's main IP, for example), delete the
default portal and create a new one as follows (otherwise, skip the following targetcli
commands; note that for simplicity we have skipped them as well):
# cd /iscsi/iqn.2016-02.com.tecmint.server:tgt1/tpg1/portals
# delete 0.0.0.0 3260
# create 192.168.0.29 3260
Now we are ready to proceed with the creation of LUNs. Note that we are using the
backstores we previously created (server.backups and server.projects). This process is
illustrated in Fig. 3:
# cd iqn.2016-02.com.tecmint.server:tgt1/tpg1/luns
# create /backstores/block/server.backups
# create /backstores/block/server.projects
Fig 3: Create LUNs in iSCSI Target Server
The last part in the Target configuration consists of creating an Access Control List to restrict
access on a per-initiator basis. Since our client machine is named client, we will append
that text to the IQN. Refer to Fig. 4 for details:
# cd ../acls
# create iqn.2016-02.com.tecmint.server:client
At this point we can use the targetcli shell to show all configured resources, as we can see in Fig.
5:
# targetcli
# cd /
# ls
Fig 5: Use targetcli to Check Configured Resources
To quit the targetcli shell, simply type exit and press Enter. The configuration will be saved
automatically to /etc/target/saveconfig.json.
As you can see in Fig. 5 above, we have a portal listening on port 3260 of all IP addresses as
expected. We can verify that using netstat command (see Fig. 6):
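The verification command behind Fig. 6 is not reproduced in the text; one way to check the listening portal (an assumption) is:

```shell
# Show listening sockets and filter for the default iSCSI port
netstat -npltu | grep 3260
```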
This concludes the Target configuration. Feel free to restart the system and verify that all
settings survive a reboot. If not, make sure to open the necessary ports in the firewall
configuration and to start the target service on boot. We are now ready to set up the initiator
in the client machine and connect to the target.
In the client we will need to install the iscsi-initiator-utils package, which provides the
server daemon for the iSCSI protocol (iscsid) as well as iscsiadm, the administration utility:
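The installation command is shown only in a screenshot; it would be:

```shell
# Install the iSCSI initiator daemon (iscsid) and iscsiadm
yum install iscsi-initiator-utils
```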
Once the installation completes, open /etc/iscsi/initiatorname.iscsi and replace the default
initiator name (commented in Fig. 7) with the name that was previously set in the ACL on
the server (iqn.2016-02.com.tecmint.server:client).
Then save the file and run iscsiadm in discovery mode pointing to the target. If successful,
this command will return the target information as shown in Fig. 7:
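The discovery command itself appears only in Fig. 7; a sketch, assuming the target's IP is 192.168.0.29 as used earlier:

```shell
# Discover targets exported by the server (sendtargets mode)
iscsiadm -m discovery -t st -p 192.168.0.29
```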
The next step consists in restarting and enabling the iscsid service:
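The commands are shown only as a screenshot; they would be:

```shell
# Restart the initiator daemon and enable it at boot
systemctl restart iscsid
systemctl enable iscsid
```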
and contacting the target in node mode. This should result in kernel-level messages, which,
when captured through dmesg, show the device identification that the remote LUNs have
been given in the local system (sde and sdf in Fig. 8):
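The login command is captured only in Fig. 8; based on the IQN and portal defined on the target, it would look like:

```shell
# Log in to the target in node mode; the kernel then registers
# the remote LUNs as local SCSI disks (sde and sdf here)
iscsiadm -m node -T iqn.2016-02.com.tecmint.server:tgt1 -p 192.168.0.29 -l
```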
Fig 8: Connecting to iSCSI Target Server in Node Mode
From this point on, you can create partitions, or even LVs (and filesystems on top of them) as
you would do with any other storage device. For simplicity, we will create a primary partition
on each disk that will occupy its entire available space, and format it with ext4.
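These steps are shown as screenshots; a minimal sketch (the device names sde and sdf are taken from Fig. 8 and may differ on your system):

```shell
# Create one primary partition spanning each disk, then format it with ext4
parted --script /dev/sde mklabel msdos mkpart primary 1MiB 100%
parted --script /dev/sdf mklabel msdos mkpart primary 1MiB 100%
mkfs.ext4 /dev/sde1
mkfs.ext4 /dev/sdf1
```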
Finally, let's mount /dev/sde1 and /dev/sdf1 on /projects and /backups, respectively (note
that these directories must be created first):
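The mount commands appear only in a screenshot; they would be:

```shell
# Create the mount points, then mount each filesystem
mkdir /projects /backups
mount /dev/sde1 /projects
mount /dev/sdf1 /backups
```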
Additionally, you can add two entries in /etc/fstab in order for both filesystems to be
mounted automatically at boot, using each filesystem's UUID as returned by blkid.
Note that the _netdev mount option must be used in order to defer the mounting of these
filesystems until the network service has been started:
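The entries would follow this pattern (the UUIDs here are placeholders; substitute the values returned by blkid):

```shell
# /etc/fstab - placeholder UUIDs, replace with the output of blkid
UUID=<uuid-of-sde1>  /projects  ext4  defaults,_netdev  0 0
UUID=<uuid-of-sdf1>  /backups   ext4  defaults,_netdev  0 0
```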
You can now use these devices as you would with any other storage media.
Summary
In this article we have covered how to set up and configure an iSCSI Target and an Initiator
in RHEL/CentOS 7 distributions. Although the first task is not part of the required
competencies of the EX300 (RHCE) exam, it is needed in order to implement the second
topic.
Don't hesitate to let us know if you have any questions or comments about this article; feel
free to drop us a line using the comment form below.
Looking to set up an iSCSI Target and Client Initiator on RHEL/CentOS 6? Follow this guide:
Setting Up Centralized iSCSI Storage with Client Initiator.
Setting Up NTP (Network Time Protocol) Server in RHEL/CentOS 7
Network Time Protocol (NTP) is a protocol that runs over UDP port 123 at the transport
layer and allows computers to synchronize time over networks. As time passes, a
computer's internal clock tends to drift, which can lead to time inconsistency
issues, especially in server and client log files, or when you want to replicate server resources
or databases.
Requirements:
Additional Requirements:
This tutorial will demonstrate how you can install and configure an NTP server on
CentOS/RHEL 7 and automatically synchronize time with the geographically closest peers
available for your server's location by using the NTP Public Pool Time Servers list.
1. The NTP server package is provided by default from the official CentOS/RHEL 7 repositories
and can be installed by issuing the following command.
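The command is shown only as a screenshot; it would be:

```shell
# Install the NTP daemon and its utilities
yum install ntp
```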
2. After the server is installed, first go to the official NTP Public Pool Time Servers site, choose
the continent where the server is physically located, then search for your country,
and a list of NTP servers should appear.
NTP Pool Server
3. Then open the NTP daemon's main configuration file (/etc/ntp.conf) for editing, comment out
the default list of public servers from the pool.ntp.org project, and replace it with the list
provided for your country, as in the screenshot below.
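The replacement lines follow this pattern (the example below uses the US zone; substitute your own country's zone from the pool list):

```shell
# /etc/ntp.conf - country-specific pool servers (US shown as an example)
server 0.us.pool.ntp.org iburst
server 1.us.pool.ntp.org iburst
server 2.us.pool.ntp.org iburst
server 3.us.pool.ntp.org iburst
```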
4. Further, you need to allow clients from your networks to synchronize time with this server.
To accomplish this, add the following line to the NTP configuration file, where the restrict
statement controls which networks are allowed to query and sync time; replace the network IPs
accordingly.
The nomodify notrap statements mean that your clients are not allowed to configure the
server or be used as peers for time synchronization.
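The line described above would look like this (the network address is an example; replace it with your own):

```shell
# /etc/ntp.conf - allow the 192.168.1.0/24 network to query and sync time
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
```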
5. If you need additional information for troubleshooting, in case there are problems with your
NTP daemon, add a logfile statement, which will record all NTP server issues in one
dedicated log file.
logfile /var/log/ntp.log
Enable NTP Logs
6. After you have edited the file with all of the configuration explained above, save and close
ntp.conf. Your final configuration should look like the screenshot below.
NTP Server Configuration
7. The NTP service uses UDP port 123 at the OSI transport layer (layer 4). It is designed particularly
to resist the effects of variable latency (jitter). To open this port on RHEL/CentOS 7, run the
following commands against the firewalld service.
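The commands themselves appear only in a screenshot; with firewalld they would be:

```shell
# Permanently allow the predefined ntp service (UDP 123) and reload
firewall-cmd --add-service=ntp --permanent
firewall-cmd --reload
```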
8. After you have opened firewall port 123, start the NTP server and make sure you enable it
system-wide. Use the following commands to manage the service.
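The service commands are shown only in a screenshot; they would be:

```shell
# Start the NTP daemon, enable it at boot, and check its status
systemctl start ntpd
systemctl enable ntpd
systemctl status ntpd
```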
9. After the NTP daemon has been started, wait a few minutes for the server to synchronize time
with its pool of servers, then run the following commands to verify the NTP peer
synchronization status and your system time.
# ntpq -p
# date -R
10. If you want to query and synchronize against a pool of your choice, use the ntpdate
command followed by the server address(es), as suggested in the following
example.
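An example (the server address is illustrative; the command is not shown in the text):

```shell
# One-shot synchronization against a specific pool server
ntpdate 0.us.pool.ntp.org
```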
11. If your Windows machine is not part of a Domain Controller, you can configure
Windows to synchronize time with your NTP server by going to Time on the right side of the
Taskbar -> Change Date and Time Settings -> Internet Time tab -> Change Settings ->
check Synchronize with an Internet time server -> enter your server's IP or FQDN in the
Server field -> Update now -> OK.
Synchronize Windows Time with NTP
That's all! Setting up a local NTP server on your network ensures that all your servers and
clients have the same time set in case of an Internet connectivity failure, and that they are all
synchronized with each other.