Oracle Linux Administrator's Guide For Release 7
E54669-41
May 2016
Abstract
This manual provides an introduction to administering various features of Oracle Linux 7 systems.
Document generated on: 2016-05-20 (revision: 3746)
Table of Contents
Preface ............................................................................................................................................ xiii
I System Configuration ....................................................................................................................... 1
1 The Unbreakable Linux Network .............................................................................................. 7
1.1 About the Unbreakable Linux Network ........................................................................... 7
1.2 About ULN Channels .................................................................................................... 7
1.3 About Software Errata .................................................................................................. 9
1.4 Registering as a ULN User ........................................................................................... 9
1.5 Registering an Oracle Linux 6 or Oracle Linux 7 System .............................................. 10
1.6 Registering an Oracle Linux 4 or Oracle Linux 5 System .............................................. 10
1.7 Configuring an Oracle Linux 5 System to Use yum with ULN ........................................ 10
1.8 Disabling Package Updates ........................................................................................ 11
1.9 Subscribing Your System to ULN Channels ................................................................. 11
1.10 Browsing and Downloading Errata Packages ............................................................. 12
1.11 Downloading Available Errata for a System ................................................................ 12
1.12 Updating System Details ........................................................................................... 13
1.13 Deleting a System .................................................................................................... 13
1.14 About CSI Administration .......................................................................................... 13
1.14.1 Becoming a CSI Administrator ........................................................................ 14
1.14.2 Listing Active CSIs and Transferring Their Registered Servers .......................... 15
1.14.3 Listing Expired CSIs and Transferring Their Registered Servers ........................ 16
1.14.4 Removing a CSI Administrator ........................................................................ 17
1.15 Switching from RHN to ULN ...................................................................................... 17
1.16 For More Information About ULN ............................................................................... 18
2 Yum ...................................................................................................................................... 19
2.1 About Yum ................................................................................................................. 19
2.2 Yum Configuration ...................................................................................................... 19
2.2.1 Configuring Use of a Proxy Server ................................................................... 20
2.2.2 Yum Repository Configuration .......................................................................... 21
2.3 Downloading the Oracle Public Yum Repository Files ................................................... 21
2.4 Using Yum from the Command Line ............................................................................ 22
2.5 Yum Groups ............................................................................................................... 23
2.6 Using the Yum Security Plugin .................................................................................... 23
2.7 Switching CentOS or Scientific Linux Systems to Use the Oracle Public Yum Server ....... 26
2.8 Creating and Using a Local ULN Mirror ....................................................................... 26
2.9 Creating a Local Yum Repository Using an ISO Image ................................................. 26
2.10 Setting up a Local Yum Server Using an ISO Image .................................................. 27
2.11 For More Information About Yum .............................................................................. 28
3 Ksplice Uptrack ..................................................................................................................... 29
3.1 About Ksplice Uptrack ................................................................................................ 29
3.1.1 Supported Kernels ........................................................................................... 29
3.2 Registering to Use Ksplice Uptrack ............................................................................. 30
3.3 Installing Ksplice Uptrack ............................................................................................ 30
3.4 Configuring Ksplice Uptrack ........................................................................................ 31
3.5 Managing Ksplice Updates .......................................................................................... 32
3.6 Patching and Updating Your System ........................................................................... 33
3.7 Removing the Ksplice Uptrack software ....................................................................... 33
3.8 About Ksplice Offline Client ......................................................................................... 33
3.8.1 Modifying a Local Yum Server to Act as a Ksplice Mirror .................................... 34
3.8.2 Configuring Ksplice Offline Clients .................................................................... 35
3.9 For More Information About Ksplice Uptrack ................................................................ 37
4 Boot and Service Configuration .............................................................................................. 39
Preface
The Oracle Linux Administrator's Guide provides introductory information about administering various
features of Oracle Linux 7 systems, including system configuration, networking, network services, storage
devices, file systems, authentication, and security.
Audience
This document is intended for administrators who need to configure and administer Oracle Linux. It is
assumed that readers are familiar with web technologies and have a general understanding of using the
Linux operating system, including knowledge of how to use a text editor such as emacs or vim, essential
commands such as cd, chmod, chown, ls, mkdir, mv, ps, pwd, and rm, and using the man command to
view manual pages.
Document Organization
The document is organized as follows:
Part I, System Configuration describes how to configure software and kernel updates, booting, kernel
and module settings, and devices, how to schedule tasks, and how to monitor and tune your system.
Part II, Networking and Network Services describes how to configure network interfaces, network
addresses, name service, network time services, basic web and email services, load balancing, and high
availability.
Part III, Storage and File Systems describes how to configure storage devices and how to create and
manage local, shared, and cluster file systems.
Part IV, Authentication and Security describes how to configure user account databases and
authentication, how to add group and user accounts, how to administer essential aspects of system
security, and how to configure and use the OpenSSH tools.
Part V, Containers describes how to configure containers to isolate applications from the other
processes that are running on a host system.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website
at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documents
The documentation for this product is available at:
http://www.oracle.com/technetwork/server-storage/linux/documentation/index.html.
Conventions
The following text conventions are used in this document:
Convention    Meaning

boldface      Boldface type indicates graphical user interface elements associated with an action, or terms defined in the text.

italic        Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.

monospace     Monospace type indicates commands within a paragraph, code in examples, or text that appears on the screen.
You can choose for your system to remain at a specific OS revision, or you can allow the system to be
updated with packages from later revisions.
You should subscribe to the channel that corresponds to the architecture of your system and the update
level at which you want to maintain it. Patches and errata are available for specific revisions of Oracle
Linux, but you do not need to upgrade from a given revision level to install these fixes. ULN channels also
exist for MySQL, Oracle VM, OCFS2, RDS, and productivity applications.
The following table describes the main channels that are available.
Channel
Description
_latest
Provides all the packages in a distribution, including any errata that are also provided
in the patch channel. Unless you explicitly specify the version, any package that you
download on this channel will be the most recent that is available. If no vulnerabilities
have been found in a package, the package version might be the same as that
included in the original distribution. For other packages, the version will be the same
as that provided in the patch channel for the highest update level. For example, the
ol6_arch_latest channel for Oracle Linux 6 Update 3 contains a combination of
the ol6_u3_arch_base and ol6_u3_arch_patch channels.
_base
Provides the packages for each major version and minor update of Oracle Linux and
Oracle VM. This channel corresponds to the released ISO media image. For example,
there is a base channel for each of the updates to Oracle Linux 6 as well as for Oracle
Linux 6. Oracle does not publish security errata and bugfixes on these channels.
_patch
Provides only those packages that have changed since the initial release of a major or
minor version of Oracle Linux or Oracle VM. The patch channel always provides the most
recent version of a package, including all fixes that have been provided since the initial
version was released.
_addons
Provides packages that are not included in the base distribution, such as the package
that you can use to create a yum repository on Oracle Linux 6 or Oracle Linux 7.
_oracle
Provides freely downloadable RPMs from Oracle that you can install on Oracle Linux,
such as ASMLib and Oracle Instant Client.
_optional
Provides optional packages for Oracle Linux 7 that have been sourced from upstream.
This channel includes most development packages (*-devel).
Other channels may also be available, such as _beta channels for the beta versions of packages.
As each new major version or minor update of Oracle Linux becomes available, Oracle creates new base
and patch channels for each supported architecture to distribute the new packages. The existing base
and patch channels for the previous versions or updates remain available and do not include the new
packages. The _latest channel distributes the highest possible version of any package, and tracks the
top of the development tree independently of the update level.
Caution
You can choose to maintain your system at a specific update level of Oracle
Linux and selectively apply errata to that level by subscribing the system to the
_base and _patch channels and unsubscribing it from the _latest channel.
However, for Oracle Linux 7, patches are not added to the _patch channel for
previous updates after a new update has been released. For example, after the
release of Oracle Linux 7 Update 1, no further errata will be released on the
ol7_x86_64_u0_patch channel.
Oracle recommends that you keep your system subscribed to the _latest channel.
If you unsubscribe from the _latest channel, your system will become vulnerable
to security-related issues when a new update is released.
CSI from your existing CSIs, your user name is associated with the new CSI in
addition to your existing CSIs.
Alternatively, if you use the GNOME graphical user desktop, select System > Administration > ULN
Registration on Oracle Linux 6 or Applications > System Tools > ULN Registration on Oracle Linux
7. You can also register your system with ULN if you configure networking when installing Oracle Linux
6 or Oracle Linux 7.
2. When prompted, enter your ULN user name, password, and customer support identifier (CSI).
3. Enter a name for the system that will allow you to identify it on ULN, and choose whether to upload
hardware and software profile data that allows ULN to select the appropriate packages for the system.
4. If you have an Oracle Linux Premier Support account, you can choose to configure an Oracle Linux
6 or Oracle Linux 7 system that is running a supported kernel to receive kernel updates from Oracle
Ksplice. See Section 3.2, Registering to Use Ksplice Uptrack.
The yum-rhn-plugin is enabled and your system is subscribed to the appropriate software channels.
If you use a proxy server for Internet access, see Section 2.2.1, Configuring Use of a Proxy Server.
3. When prompted, enter your ULN user name, password, and CSI.
4. Enter the name of the system that will be displayed on ULN, and choose whether to upload hardware
and software profile data that will allow ULN to select the appropriate packages for your system.
2. If your organization uses a proxy server as an intermediary for Internet access, specify the
enableProxy and httpProxy settings in /etc/sysconfig/rhn/up2date as shown in this
example.
enableProxy=1
httpProxy=http://proxysvr.yourdom.com:3128
If the proxy server requires authentication, additionally specify the enableProxyAuth, proxyUser,
and proxyPassword settings:
enableProxy=1
enableProxyAuth=1
httpProxy=http://proxysvr.yourdom.com:3128
proxyUser=yumacc
proxyPassword=clydenw
Caution
All yum users require read access to /etc/sysconfig/rhn/up2date. If this
file must be world-readable, do not use a password that is the same as any
user's login password, and especially not root's password.
With the plugin installed, you can immediately start to use yum instead of up2date.
To disable updates for particular packages, add an exclude statement to the [main] section of the /etc/yum.conf file. For example, to exclude updates for VirtualBox and kernel:
exclude=VirtualBox* kernel*
Note
Excluding certain packages from being updated can cause dependency errors for
other packages. Your machine might also become vulnerable to security-related
issues if you do not install the latest updates.
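The exclude directive matches package names against shell-style glob patterns. The following sketch (plain bash for illustration, not yum itself; the sample package names are invented) shows which names the patterns from the example above would catch:

```shell
#!/bin/bash
# Illustrative sketch only: the exclude directive matches package names
# against shell-style globs. The sample names below are made up.
patterns=('VirtualBox*' 'kernel*')
samples=('VirtualBox-4.2.6' 'kernel-3.8.13' 'kernel-uek-3.8.13' 'bash-4.1.2')
excluded=()
for pkg in "${samples[@]}"; do
  for glob in "${patterns[@]}"; do
    # [[ string == pattern ]] performs glob matching in bash
    if [[ $pkg == $glob ]]; then
      excluded+=("$pkg")
      break
    fi
  done
done
printf 'excluded: %s\n' "${excluded[@]}"
```

Note that kernel-uek-3.8.13 is also caught by kernel*, so excluding the kernel excludes the Unbreakable Enterprise Kernel packages as well.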
List active CSIs, list the servers that are currently registered with an active CSI, and transfer those
servers to another user or to another CSI. See Section 1.14.2, Listing Active CSIs and Transferring
Their Registered Servers.
List expired CSIs, list the servers that are currently registered with an expired CSI, and transfer those
servers to another user or to another CSI. See Section 1.14.3, Listing Expired CSIs and Transferring
Their Registered Servers.
Remove yourself or someone else as administrator of a CSI. See Section 1.14.4, Removing a CSI
Administrator.
4. On the Assign Administrator page in the Select New Administrator list, click the + icon that is next to
the user name of the user that you want to add as an administrator. Their user name is added to the
Administrator box.
5. If you administer more than one CSI, select the CSI that the user will administer from the CSI drop
down list.
6. Click Assign Administrator.
Note
If you want to become the administrator of a CSI but the person to whom it
is registered is no longer with your organization, contact an Oracle support
representative to request that you be made the administrator for the CSI.
d. On the Confirm Transfer Profile - CSI page, click Apply Changes to confirm the transfer to the new
CSI.
If the rhn-setup-gnome package is installed on your system, extract the packages from
uln_register-gnome.tgz.
# tar -xzf uln_register-gnome.tgz
5. Follow the instructions on the screen to complete the registration. The uln_register utility collects
information about your system and uploads it to Oracle.
Chapter 2 Yum
This chapter describes how you can use the yum utility to install and upgrade software packages.
Directive
Description

cachedir
    Directory used to store downloaded packages.

debuglevel
    Logging level, from 0 (none) to 10 (all debugging output).

exactarch
    If set to 1, only update packages for the exact architecture that is installed.

exclude
    A space-separated list of packages to exclude from installs or updates.

gpgcheck
    If set to 1, verify the GPG signatures of packages before installing them.

gpgkey
    Pathname or URL of the GPG public key file.

installonly_limit
    Maximum number of versions that can be installed of any one package, such as the kernel.

keepcache
    If set to 0, remove downloaded packages from the cache after installation.

logfile
    Pathname of the yum log file.

obsoletes
    If set to 1, replace obsolete packages during updates.

plugins
    If set to 1, enable yum plugins.

proxy
    URL of a proxy server including the port number. See Section 2.2.1,
    Configuring Use of a Proxy Server.

proxy_password
    Password for authentication with a proxy server.

proxy_username
    User name for authentication with a proxy server.

reposdir
    Directories where yum should look for repository files with a .repo extension.
    The default directory is /etc/yum.repos.d.
It is possible to define repositories below the [main] section in /etc/yum.conf or in separate repository configuration files. By default, yum expects any repository configuration files to be located in the /etc/yum.repos.d directory unless you use the reposdir directive to define alternate directories.
If the proxy server requires authentication, additionally specify the proxy_username, and
proxy_password settings.
proxy=http://proxysvr.yourdom.com:3128
proxy_username=yumacc
proxy_password=clydenw
If you use the yum plugin (yum-rhn-plugin) to access the ULN, specify the enableProxy and
httpProxy settings in /etc/sysconfig/rhn/up2date as shown in this example.
20
enableProxy=1
httpProxy=http://proxysvr.yourdom.com:3128
If the proxy server requires authentication, additionally specify the enableProxyAuth, proxyUser, and
proxyPassword settings.
enableProxy=1
httpProxy=http://proxysvr.yourdom.com:3128
enableProxyAuth=1
proxyUser=yumacc
proxyPassword=clydenw
Caution
All yum users require read access to /etc/yum.conf or /etc/sysconfig/rhn/
up2date. If these files must be world-readable, do not use a proxy password that is
the same as any user's login password, and especially not root's password.
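The caution applies when the file carries the world-read permission bit. The following sketch shows how to test for that condition; it uses a temporary file as a stand-in for the real configuration file:

```shell
#!/bin/bash
# Sketch: test whether a file is world-readable (the condition under which
# the caution above applies). A temporary file stands in for /etc/yum.conf.
f=$(mktemp)
chmod 644 "$f"                 # typical mode for /etc/yum.conf
perms=$(stat -c %a "$f")       # octal permissions, e.g. 644
if (( (8#$perms & 4) != 0 )); then   # low octal digit: "other" permissions
  readable=yes
else
  readable=no
fi
echo "world-readable: $readable"
rm -f "$f"
```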
Directive
Description

baseurl
    Location of the repository channel, expressed as a file://, ftp://, http://,
    or https:// URL. This directive must be specified.

enabled
    If set to 1, permit yum to use the channel.

name
    Descriptive name for the repository channel. This directive must be specified.
Any other directive that appears in this section overrides the corresponding global definition in the [main] section of the yum configuration file. See the yum.conf(5) manual page for more information.
The following listing shows an example repository section from a configuration file.
[ol6_u2_base]
name=Oracle Linux 6 U2 - $basearch - base
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
gpgcheck=1
enabled=1
In this example, the values of gpgkey and gpgcheck override any global setting. yum substitutes the
name of the current system's architecture for the variable $basearch.
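The substitution can be sketched in plain shell. This is an approximation for illustration: on most 64-bit systems $basearch matches the output of uname -m (for example, x86_64), although yum normalizes 32-bit x86 variants such as i686 to i386, which this sketch does not reproduce.

```shell
#!/bin/bash
# Approximate sketch of yum's $basearch substitution. On most 64-bit systems
# $basearch matches `uname -m`; yum normalizes 32-bit x86 variants (i586,
# i686) to i386, which this sketch does not attempt.
basearch=$(uname -m)
url="http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch"
echo "$url"
```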
1. Change to the yum repository configuration directory:

# cd /etc/yum.repos.d

2. Use the wget utility to download the repository configuration file that is appropriate for your system.

# wget http://public-yum.oracle.com/public-yum-release.repo

The /etc/yum.repos.d directory is updated with the repository configuration file, in this example,
public-yum-ol7.repo.
3. You can enable or disable repositories in the file by setting the value of the enabled directive to 1 or 0
as required.
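Editing the enabled directive can also be scripted. The sketch below flips a repository on by rewriting its enabled line with sed; it operates on a temporary sample file here, but the same sed command can be pointed at a file in /etc/yum.repos.d (the repository section shown is illustrative):

```shell
#!/bin/bash
# Sketch: enable a repository section by flipping enabled=0 to enabled=1.
# A sample repository file is created in a temporary location for the demo.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[ol7_addons]
name=Oracle Linux 7 Addons
enabled=0
EOF
sed -i 's/^enabled=0/enabled=1/' "$repo"   # edit the file in place
state=$(grep '^enabled=' "$repo")
echo "$state"
rm -f "$repo"
```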
Command
Description

yum repolist
    Lists all enabled repositories.

yum list
    Lists all packages that are available in all enabled repositories and
    all packages that are installed on your system.

yum provides
    Finds the name of the package to which the specified file or feature
    belongs. For example:

    yum provides /etc/sysconfig/atd

yum check-update
    Reports packages that have updates available.

yum update
    Updates the specified package, or all installed packages if you do not
    specify a package.
yum help
    Displays help about yum usage.

yum shell
    Runs the yum interactive shell.

Note
yum makes no distinction between installing and upgrading a kernel package.
yum always installs a new kernel regardless of whether you specify update or
install.

Yum Groups

Command
Description

yum grouplist
    Lists installed groups and groups that are available for installation.
To list the errata that are available for your system, enter:
# yum updateinfo list
ELBA-2012-1518 bugfix          NetworkManager-1:0.8.1-34.el6_3.x86_64
ELBA-2012-1518 bugfix          NetworkManager-glib-1:0.8.1-34.el6_3.x86_64
ELBA-2012-1518 bugfix          NetworkManager-gnome-1:0.8.1-34.el6_3.x86_64
ELBA-2012-1457 bugfix          ORBit2-2.14.17-3.2.el6_3.x86_64
ELBA-2012-1457 bugfix          ORBit2-devel-2.14.17-3.2.el6_3.x86_64
ELSA-2013-0215 Important/Sec.  abrt-2.0.8-6.0.1.el6_3.2.x86_64
ELSA-2013-0215 Important/Sec.  abrt-addon-ccpp-2.0.8-6.0.1.el6_3.2.x86_64
ELSA-2013-0215 Important/Sec.  abrt-addon-kerneloops-2.0.8-6.0.1.el6_3.2.x86_64
ELSA-2013-0215 Important/Sec.  abrt-addon-python-2.0.8-6.0.1.el6_3.2.x86_64
ELSA-2013-0215 Important/Sec.  abrt-cli-2.0.8-6.0.1.el6_3.2.x86_64
ELSA-2013-0215 Important/Sec.  abrt-desktop-2.0.8-6.0.1.el6_3.2.x86_64
...
The output from the command sorts the available errata in order of their IDs, and it also specifies whether
each erratum is a security patch (severity/Sec.), a bug fix (bugfix), or a feature enhancement
(enhancement). Security patches are listed by their severity: Important, Moderate, or Low.
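Because the first two columns of each line are the advisory ID and its type, the listing is easy to post-process with standard tools. The following sketch tallies advisories by type using awk over a few sample lines taken from the listing above; on a live system you would pipe yum updateinfo list into the same awk program instead of using a here-document:

```shell
#!/bin/bash
# Sketch: count errata by type (column 2 of `yum updateinfo list` output).
# Sample lines are embedded here so the pipeline can run anywhere.
summary=$(awk '{count[$2]++} END {for (t in count) print t, count[t]}' <<'EOF'
ELBA-2012-1518 bugfix NetworkManager-1:0.8.1-34.el6_3.x86_64
ELBA-2012-1518 bugfix NetworkManager-glib-1:0.8.1-34.el6_3.x86_64
ELBA-2012-1457 bugfix ORBit2-2.14.17-3.2.el6_3.x86_64
ELSA-2013-0215 Important/Sec. abrt-2.0.8-6.0.1.el6_3.2.x86_64
ELSA-2013-0215 Important/Sec. abrt-cli-2.0.8-6.0.1.el6_3.2.x86_64
EOF
)
echo "$summary"
```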
You can use the --sec-severity option to filter the security errata by severity, for example:
# yum updateinfo list --sec-severity=Moderate
ELSA-2013-0269 Moderate/Sec. axis-1.2.1-7.3.el6_3.noarch
ELSA-2013-0668 Moderate/Sec. boost-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-date-time-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-devel-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-filesystem-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-graph-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-iostreams-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-program-options-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-python-1.41.0-15.el6_4.x86_64
...
To list the security errata by their Common Vulnerabilities and Exposures (CVE) IDs instead of their errata
IDs, specify the keyword cves as an argument:
# yum updateinfo list cves
CVE-2012-5659 Important/Sec. abrt-2.0.8-6.0.1.el6_3.2.x86_64
CVE-2012-5660 Important/Sec. abrt-2.0.8-6.0.1.el6_3.2.x86_64
CVE-2012-5659 Important/Sec. abrt-addon-ccpp-2.0.8-6.0.1.el6_3.2.x86_64
CVE-2012-5660 Important/Sec. abrt-addon-ccpp-2.0.8-6.0.1.el6_3.2.x86_64
CVE-2012-5659 Important/Sec. abrt-addon-kerneloops-2.0.8-6.0.1.el6_3.2.x86_64
CVE-2012-5660 Important/Sec. abrt-addon-kerneloops-2.0.8-6.0.1.el6_3.2.x86_64
CVE-2012-5659 Important/Sec. abrt-addon-python-2.0.8-6.0.1.el6_3.2.x86_64
CVE-2012-5660 Important/Sec. abrt-addon-python-2.0.8-6.0.1.el6_3.2.x86_64
...
Similarly, the keywords bugfix, enhancement, and security filter the list for all bug fixes,
enhancements, and security errata.
You can use the --cve option to display the errata that correspond to a specified CVE, for example:
# yum updateinfo list --cve CVE-2012-2677
ELSA-2013-0668 Moderate/Sec. boost-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-date-time-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-devel-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-filesystem-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-graph-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-iostreams-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-program-options-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-python-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-regex-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-serialization-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-signals-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-system-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-test-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-thread-1.41.0-15.el6_4.x86_64
ELSA-2013-0668 Moderate/Sec. boost-wave-1.41.0-15.el6_4.x86_64
updateinfo list done
To update all packages for which security-related errata are available to the latest versions of the
packages, even if those updates include bug fixes or new features in addition to the security errata, enter:
# yum --security update
To update all packages to the latest versions that contain security errata, ignoring any newer packages that
do not contain security errata, enter:
# yum --security update-minimal
To update all kernel packages to the latest versions that contain security errata, enter:
# yum --security update-minimal kernel*
You can also update only those packages that correspond to a CVE or erratum, for example:
# yum update --cve CVE-2012-3954
# yum update --advisory ELSA-2012-1141
Note
Some updates might require you to reboot the system. By default, the boot
manager will automatically enable the most recent kernel version.
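To check whether the running kernel is the newest one installed, you can compare version strings. The following is a minimal sketch; the kernel version strings are invented for illustration and are not from a real system:

```shell
# Determine the newest installed kernel from a sample 'rpm -q kernel' listing.
# The version strings below are illustrative only.
installed='kernel-3.10.0-123.el7.x86_64
kernel-3.10.0-229.el7.x86_64
kernel-3.10.0-327.el7.x86_64'
latest=$(printf '%s\n' "$installed" | sort -V | tail -1)
echo "newest installed: $latest"

# On a real system, compare against the running kernel:
running=kernel-3.10.0-123.el7.x86_64   # substitute "kernel-$(uname -r)"
if [ "$latest" != "$running" ]; then
    echo "reboot required to run the newest kernel"
fi
```

On a live system you would pipe the output of rpm -q kernel directly into sort -V instead of using a fixed list.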
For more information, see the yum-security(8) manual page.
2. Transfer the removable storage to the system on which you want to create a local yum repository, and
copy the DVD image to a directory in a local file system.
# cp /media/USB_stick/V33411-01.iso /ISOs
3. Create a suitable mount point, for example /var/OSimage/OL6.3_x86_64, and mount the DVD
image on it.
# mkdir -p /var/OSimage/OL6.3_x86_64
# mount -o loop,ro /ISOs/V33411-01.iso /var/OSimage/OL6.3_x86_64
Note
Include the read-only mount option (ro) to avoid changing the contents of the
ISO by mistake.
4. Create an entry in /etc/fstab so that the system always mounts the DVD image after a reboot.
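Using the image path and mount point from the previous steps, such an entry might look as follows. This is a sketch; adjust the paths to match your system:

```
/ISOs/V33411-01.iso  /var/OSimage/OL6.3_x86_64  iso9660  loop,ro  0 0
```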
5. In the /etc/yum.repos.d directory, edit the existing repository files, such as public-yum-ol6.repo or ULN-base.repo, and disable all entries by setting enabled=0.
6. Create the following entries in a new repository file (for example, /etc/yum.repos.d/OL63.repo).
[OL63]
name=Oracle Linux 6.3 x86_64
baseurl=file:///var/OSimage/OL6.3_x86_64
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
gpgcheck=1
enabled=1
b. Use the restorecon command to apply the file type to the entire repository.
# /sbin/restorecon -R -v /var/OSimage
Note
The semanage and restorecon commands are provided by the
policycoreutils-python and policycoreutils packages.
4. Create a symbolic link in /var/www/html that points to the repository:
# ln -s /var/OSimage /var/www/html/OSimage
a. Specify the resolvable domain name of the server in the argument to ServerName.
ServerName server_addr:80
If the server does not have a resolvable domain name, enter its IP address instead.
b. Verify that the setting of the Options directive in the <Directory "/var/www/html"> section
specifies Indexes and FollowSymLinks to allow you to browse the directory hierarchy, for
example:
Options Indexes FollowSymLinks
7. If you have enabled a firewall on your system, configure it to allow incoming HTTP connection requests
on TCP port 80, for example:
# firewall-cmd --zone=zone --add-port=80/tcp
# firewall-cmd --permanent --zone=zone --add-port=80/tcp
Replace server_addr with the IP address or resolvable host name of the local yum server.
9. On each client, copy the repository file from the server to the /etc/yum.repos.d directory.
10. In the /etc/yum.repos.d directory, edit any other repository files, such as public-yum-ol6.repo
or ULN-base.repo, and disable all entries by setting enabled=0.
11. On the server and each client, test that you can use yum to access the repository.
# yum repolist
Loaded plugins: refresh-packagekit, security
...
repo id            repo name                      status
OL63               Oracle Linux 6.3 x86_64        25,459
repolist: 25,459
This chapter describes how to configure Ksplice Uptrack to update the kernel on a running system.
Note
An enhanced version of the Ksplice client is available that can patch shared
libraries for user-space processes that are running on an Oracle Linux 7 system.
For more information, see About the Enhanced Ksplice Client in the Oracle Linux
Ksplice User's Guide.
To confirm whether a particular kernel is supported, install the Uptrack client on a system that is running
the kernel.
If you have a question about supported kernels, send e-mail to [email protected].
3. Using a browser, log in at http://linux.oracle.com with the ULN user name and password that you used
to register the system, and perform the following steps:
a. On the Systems tab, click the link named for your system in the list of registered machines.
b. On the System Details page, click Manage Subscriptions.
c. On the System Summary page, select the Ksplice for Oracle Linux channel for the correct release
and your system's architecture (i386 or x86_64) from the list of available channels and click the
right arrow (>) to move it to the list of subscribed channels.
d. Click Save Subscriptions and log out of the ULN.
4. On your system, use yum to install the uptrack package.
# yum install -y uptrack
The access key for Ksplice Uptrack is retrieved from ULN and added to /etc/uptrack/uptrack.conf, for example:
[Auth]
accesskey = 0e1859ad8aea14b0b4306349142ce9160353297daee30240dab4d61f4ea4e59b
5. To enable the automatic installation of updates, change the following entry in /etc/uptrack/uptrack.conf:
autoinstall = no
so that it reads:
autoinstall = yes
For information about configuring Ksplice Uptrack, see Section 3.4, Configuring Ksplice Uptrack.
For information about managing Ksplice updates, see Section 3.5, Managing Ksplice Updates.
Alternatively, you can configure Ksplice Uptrack to use a proxy server by setting the following entry in
/etc/uptrack/uptrack.conf:
https_proxy = https://proxy_URL:https_port
You receive e-mail notification when Ksplice updates are available for your system.
To make Ksplice Uptrack install all updates automatically as they become available, set the following entry:
autoinstall = yes
Note
Enabling automatic installation of updates does not automatically update Ksplice
Uptrack itself. Oracle notifies you by e-mail when you can upgrade the Ksplice
Uptrack software using yum.
To install updates automatically at boot time, the following entry must appear in /etc/uptrack/uptrack.conf:
install_on_reboot = yes
When you boot the system into the same kernel, the /etc/init.d/uptrack script reapplies the installed
Ksplice updates to the kernel.
To prevent Ksplice Uptrack from automatically reapplying updates to the kernel when you reboot the
system, set the entry to:
install_on_reboot = no
To install all available updates at boot time, even if you boot the system into a different kernel, uncomment
the following entry in /etc/uptrack/uptrack.conf:
#upgrade_on_reboot = yes
so that it reads:
upgrade_on_reboot = yes
To install an individual Ksplice update, specify the update's ID as the argument (in this example, the ID is
dfvn0zq8):
# uptrack-upgrade dfvn0zq8
After Ksplice has applied updates to a running kernel, the kernel has an effective version that is different
from the original boot version displayed by the uname -a command. Use the uptrack-uname command
to display the effective version of the kernel:
# uptrack-uname -a
uptrack-uname supports the commonly used uname flags, including -a and -r, and provides a way
for applications to detect that the kernel has been patched. The effective version is based on the version
number of the latest patch that Ksplice Uptrack has applied to the kernel.
To view the updates that Ksplice has made to the running kernel:
# uptrack-show
To prevent Ksplice Uptrack from reapplying the updates at the next system reboot, create the empty file
/etc/uptrack/disable:
# touch /etc/uptrack/disable
Alternatively, specify nouptrack as a parameter on the boot command line when you next restart the
system.
Note
You cannot use the web interface or the Ksplice Uptrack API to monitor systems
that are running Ksplice Offline Client as such systems are not registered with
https://uptrack.ksplice.com.
Channel Label          Description
ol5_i386_ksplice       Ksplice updates for Oracle Linux 5 (i386)
ol5_x86_64_ksplice     Ksplice updates for Oracle Linux 5 (x86_64)
ol6_i386_ksplice       Ksplice updates for Oracle Linux 6 (i386)
ol6_x86_64_ksplice     Ksplice updates for Oracle Linux 6 (x86_64)
ol7_x86_64_ksplice     Ksplice updates for Oracle Linux 7 (x86_64)
For more information about the release channels that are available, see http://www.oracle.com/technetwork/articles/servers-storage-admin/yum-repo-setup-1659167.html.
7. When you have finished selecting channels, click Save Subscriptions and log out of ULN.
[ol6_u3_base]
name=Oracle Linux $releasever U3 - $basearch - base
baseurl=http://local_yum_server/yum/OracleLinux/OL6/3/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
gpgcheck=1
enabled=0
[ol6_ga_patch]
name=Oracle Linux $releasever GA - $basearch - patch
baseurl=http://local_yum_server/yum/OracleLinux/OL6/0/patch/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
gpgcheck=1
enabled=0
[ol6_u1_patch]
name=Oracle Linux $releasever U1 - $basearch - patch
baseurl=http://local_yum_server/yum/OracleLinux/OL6/1/patch/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
gpgcheck=1
enabled=0
[ol6_u2_patch]
name=Oracle Linux $releasever U2 - $basearch - patch
baseurl=http://local_yum_server/yum/OracleLinux/OL6/2/patch/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
gpgcheck=1
enabled=0
[ol6_u3_patch]
name=Oracle Linux $releasever U3 - $basearch - patch
baseurl=http://local_yum_server/yum/OracleLinux/OL6/3/patch/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
gpgcheck=1
enabled=0
Replace local_yum_server with the IP address or resolvable host name of the local yum server.
In the sample configuration, only the ol6_latest and ol6_x86_64_ksplice channels are enabled.
Note
As an alternative to specifying a gpgkey entry for each repository definition, you
can use the following command to import the GPG key:
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY
If yum cannot connect to the local yum server, check that the firewall settings on that server allow
incoming TCP connections to port 80.
4. Install the Ksplice updates that are available for the kernel.
# yum install uptrack-updates-`uname -r`
For an Oracle Linux 5 client, use this form of the command instead:
# yum install uptrack-updates-`uname -r`.`uname -m`
As new Ksplice updates are made available, you can use this command to pick up these updates
and apply them. It is recommended that you set up a cron job to perform this task. For example, the
following crontab entry for root runs the command once per day at 7am:
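A sketch of such an entry for an Oracle Linux 6 or 7 client, added with crontab -e as root (for an Oracle Linux 5 client, use the uname -m form of the package name shown above):

```
0 7 * * * yum -y install uptrack-updates-`uname -r`
```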
To display information about Ksplice updates, use the rpm -qa | grep uptrack-updates and
uptrack-show commands.
This chapter describes the Oracle Linux boot process, how to use the GRUB 2 boot loader, how to change
the systemd target for a system, and how to configure the services that are available for a target.
See Section 4.7, About System-State Targets.
6. If you have made /etc/rc.local executable and you have copied /usr/lib/systemd/system/
rc-local.service to /etc/systemd/system, systemd runs any actions that you have defined
in /etc/rc.local. However, the preferred way of running such local actions is to define your own
systemd unit.
For information on systemd and on how to write systemd units, see the systemd(1), systemd-system.conf(5), and systemd.unit(5) manual pages.
Alternatively, you can specify the text of the entry as a string enclosed in quotes.
For more information about using, configuring, and customizing GRUB 2, see the GNU GRUB Manual,
which is also installed as /usr/share/doc/grub2-tools-2.00/grub.html.
Option                                          Description

0, 1, 2, 3, 4, 5, or 6, or
systemd.unit=runlevelN.target
    Specifies the nearest systemd-equivalent system-state target to an Oracle
    Linux 6 run level. N can take an integer value between 0 and 6. For a
    description of system-state targets, see Section 4.7, About System-State
    Targets.

1, s, S, single, or systemd.unit=rescue.target
    Boots the system into the rescue shell.

3 or systemd.unit=multi-user.target
    Boots the system into multi-user, non-graphical mode.

5 or systemd.unit=graphical.target
    Boots the system into multi-user, graphical mode.

-b, emergency, or systemd.unit=emergency.target
    Boots the system into the emergency shell.

KEYBOARDTYPE=kbtype
    Specifies the keyboard type.

KEYTABLE=kbtype
    Specifies the keyboard layout.

LANG=language_territory.codeset
    Specifies the system language and code set.

max_loop=N
    Specifies the number of loop devices (/dev/loop*) that are available.

nouptrack
    Disables the reapplication of Ksplice Uptrack updates to the kernel.

quiet
    Reduces debugging output during boot.

rd_LUKS_UUID=UUID
    Activates an encrypted LUKS partition with the specified UUID.

rd_LVM_VG=vg/lv_vol
    Specifies an LVM volume group and volume that should be activated.

rd_NO_LUKS
    Disables detection of encrypted file systems.

rhgb
    Specifies that the graphical boot display should show the boot progress.

rn_NO_DM
    Disables Device-Mapper (DM) RAID detection.

rn_NO_MD
    Disables Multiple Device (MD) RAID detection.

ro root=/dev/mapper/vg-lv_root
    Specifies that the root file system (in this example, an LVM volume) is
    initially mounted read-only.

rw root=UUID=UUID
    Specifies that the root file system, identified by its UUID, is mounted
    read-writable.

selinux=0
    Disables SELinux.

SYSFONT=font
    Specifies the console font.
The kernel boot parameters that were last used to boot a system are recorded in /proc/cmdline, for
example:
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-123.el7.x86_64 root=UUID=52c1cab6-969f-4872-958d-47f8518267de
ro rootflags=subvol=root vconsole.font=latarcyrheb-sun16 crashkernel=auto vconsole.keymap=uk
rhgb quiet LANG=en_GB.UTF-8
For example, press End to go to the end of the line, and enter an additional boot parameter.
Figure 4.2 shows the kernel boot line with the additional parameter
systemd.unit=runlevel1.target, which starts the rescue shell.
Figure 4.2 Kernel Boot Line with an Additional Parameter to Select the Rescue Shell
This example adds the parameter systemd.unit=runlevel3.target so that the system boots into
multi-user, non-graphical mode by default.
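The corresponding edit goes in the GRUB_CMDLINE_LINUX setting in /etc/default/grub. The following is a sketch, assuming otherwise default kernel options; your existing options may differ:

```
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet systemd.unit=runlevel3.target"
```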
2. Rebuild /boot/grub2/grub.cfg:
# grub2-mkconfig -o /boot/grub2/grub.cfg
The change takes effect for subsequent system reboots of all configured kernels.
Table 4.1, System-State Targets and Equivalent Run-Level Targets shows the commonly used system-state
targets and their equivalent run-level targets, where compatibility with Oracle Linux 6 run levels is
required.
Table 4.1 System-State Targets and Equivalent Run-Level Targets
System-State Target      Equivalent Run-Level Targets
graphical.target         runlevel5.target
multi-user.target        runlevel2.target, runlevel3.target, runlevel4.target
poweroff.target          runlevel0.target
reboot.target            runlevel6.target
rescue.target            runlevel1.target
To display the currently active targets on a system, use the systemctl list-units command, for
example:
# systemctl list-units --type target
UNIT                LOAD   ACTIVE SUB    DESCRIPTION
basic.target        loaded active active Basic System
cryptsetup.target   loaded active active Encrypted Volumes
getty.target        loaded active active Login Prompts
graphical.target    loaded active active Graphical Interface
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target     loaded active active Local File Systems
multi-user.target   loaded active active Multi-User System
network.target      loaded active active Network
nfs.target          loaded active active Network File System Server
paths.target        loaded active active Paths
remote-fs.target    loaded active active Remote File Systems
slices.target       loaded active active Slices
sockets.target      loaded active active Sockets
sound.target        loaded active active Sound Card
swap.target         loaded active active Swap
sysinit.target      loaded active active System Initialization
timers.target       loaded active active Timers

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

17 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
This sample output for a system with the graphical target active shows that this target depends on 16
other active targets, including network and sound to support networking and sound.
To display the status of all targets on the system, specify the --all option:
# systemctl list-units --type target --all
UNIT
basic.target
cryptsetup.target
emergency.target
final.target
getty.target
graphical.target
local-fs-pre.target
local-fs.target
multi-user.target
network-online.target
network.target
nfs.target
nss-lookup.target
nss-user-lookup.target
paths.target
remote-fs-pre.target
remote-fs.target
rescue.target
shutdown.target
slices.target
sockets.target
sound.target
swap.target
sysinit.target
syslog.target
time-sync.target
timers.target
umount.target
LOAD
= Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB
= The low-level unit activation state, values depend on unit type.
28 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
For more information, see the systemctl(1) and systemd.target(5) manual pages.
Note
This command changes the target to which the default target is linked, but does not
change the state of the system.
To change the currently active system target, use the systemctl isolate command, for example:
# systemctl isolate multi-user.target
Listing all targets shows that graphical and sound targets are not active:
# systemctl list-units
UNIT
basic.target
cryptsetup.target
emergency.target
final.target
getty.target
graphical.target
local-fs-pre.target
local-fs.target
multi-user.target
network-online.target
network.target
nfs.target
nss-lookup.target
nss-user-lookup.target
paths.target
remote-fs-pre.target
remote-fs.target
rescue.target
shutdown.target
slices.target
sockets.target
sound.target
swap.target
sysinit.target
syslog.target
time-sync.target
timers.target
umount.target
LOAD
= Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB
= The low-level unit activation state, values depend on unit type.
28 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
Command                  Description
systemctl halt           Halts the system.
systemctl hibernate      Puts the system into hibernation.
systemctl hybrid-sleep   Puts the system into hibernation and suspends its operation.
systemctl poweroff       Halts the system and powers it down.
systemctl reboot         Reboots the system.
systemctl suspend        Suspends the system.
For legacy scripts in /etc/init.d that have not been ported as systemd services, you can run the script
directly with the start argument:
# /etc/init.d/yum-cron start
Note
Changing the state of a service only lasts as long as the system remains at the
same state. If you stop a service and then change the system-state target to one
in which the service is configured to run (for example, by rebooting the system),
the service restarts. Similarly, starting a service does not enable the service to start
following a reboot. See Section 4.7.5, Enabling and Disabling Services.
systemctl supports the disable, enable, reload, restart, start, status, and stop actions
for services. For other actions, you must either run the script that the service provides to support these
actions, or for legacy scripts, the /etc/init.d script with the required action argument. For legacy
scripts, omitting the argument to the script displays a usage message, for example:
# /etc/init.d/yum-cron
Usage: /etc/init.d/yum-cron {start|stop|status|restart|reload|force-reload|condrestart}
You can enable a service to start when the system boots, for example:
# systemctl enable httpd
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
The command enables a service by creating a symbolic link for the lowest-level system-state target at
which the service should start. In the example, the command creates the symbolic link httpd.service
for the multi-user target.
Disabling a service removes the symbolic link:
# systemctl disable httpd
rm '/etc/systemd/system/multi-user.target.wants/httpd.service'
You can use the is-enabled subcommand to check whether a service is enabled:
# systemctl is-enabled httpd
disabled
# systemctl is-enabled nfs
enabled
You can use the status action to view a detailed summary of the status of a service, including a tree of all
the tasks in the control group (cgroup) that the service implements:
# systemctl status httpd
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
Active: active (running) since Mon 2014-04-28 15:02:40 BST; 1s ago
Main PID: 6452 (httpd)
Status: "Processing requests..."
CGroup: /system.slice/httpd.service
6452 /usr/sbin/httpd -DFOREGROUND
6453 /usr/sbin/httpd -DFOREGROUND
6454 /usr/sbin/httpd -DFOREGROUND
6455 /usr/sbin/httpd -DFOREGROUND
6456 /usr/sbin/httpd -DFOREGROUND
6457 /usr/sbin/httpd -DFOREGROUND
Apr 28 15:02:40 localhost.localdomain systemd[1]: Started The Apache HTTP Ser...
Hint: Some lines were ellipsized, use -l to show in full.
A cgroup is a collection of processes that are bound together so that you can control their access to
system resources. In the example, the cgroup for the httpd service is httpd.service, which is in the
system slice.
Slices divide the cgroups on a system into different categories. To display the slice and cgroup hierarchy,
use the systemd-cgls command:
# systemd-cgls
user.slice
user-1000.slice
session-12.scope
3152 gdm-session-worker [pam/gdm-password]
3169 /usr/bin/gnome-keyring-daemon --daemonize --login
3171 gnome-session --session gnome-classic
...
3763 /usr/libexec/evolution-calendar-factory
user-0.slice
session-13.scope
3836 -bash
4015 systemd-cgls
4016 systemd-cgls
session-6.scope
3030 /usr/sbin/anacron -s
system.slice
1 /usr/lib/systemd/systemd --switched-root --system --deserialize 23
bluetooth.service
system.slice contains services and other system processes. user.slice contains user processes,
which run within transient cgroups called scopes. In the example, the processes for the user with ID 1000
are running in the scope session-12.scope under the slice /user.slice/user-1000.slice.
You can use the systemctl command to limit the CPU, I/O, memory, and other resources that are
available to the processes in service and scope cgroups. See Section 4.7.7, Controlling Access to System
Resources.
For more information, see the systemctl(1) and systemd-cgls(1) manual pages.
For example, to change the resource limits for the httpd service:
# systemctl set-property httpd CPUShares=512 MemoryLimit=1G
CPUShares controls access to CPU resources. As the default value is 1024, a value of 512 halves the
access that the processes in the cgroup have to CPU time. Similarly, MemoryLimit controls the maximum
amount of memory that the cgroup can use.
Note
You do not need to specify the .service extension to the name of a service.
If you specify the --runtime option, the setting does not persist across system reboots.
# systemctl --runtime set-property httpd CPUShares=512 MemoryLimit=1G
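The effect of CPUShares is proportional rather than absolute: under CPU contention, each cgroup receives CPU time in proportion to its share value relative to the total. A quick sketch of the arithmetic for two competing groups (the share values mirror the example above):

```shell
# Two cgroups competing for CPU: one with the default 1024 shares,
# one reduced to 512. Relative CPU time is share / total-shares.
a=1024   # default CPUShares
b=512    # e.g. a service after 'CPUShares=512'
total=$((a + b))
echo "group A gets $((100 * a / total))% of CPU time under contention"
echo "group B gets $((100 * b / total))% of CPU time under contention"
```

When there is no contention, a cgroup with a low share value can still use idle CPU time; the ratio only matters when groups compete.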
Alternatively, you can change the resource settings for a service under the [Service] heading in the
service's configuration file in /usr/lib/systemd/system. After editing the file, make systemd reload
its configuration files and then restart the service:
# systemctl daemon-reload
# systemctl restart service
You can run general commands within scopes and use systemctl to control the access that these
transient cgroups have to system resources. To run a command within a scope, use the systemd-run
command:
# systemd-run --scope --unit=group_name [--slice=slice_name] command
If you do not want to create the group under the default system slice, you can specify another slice or the
name of a new slice.
Note
If you do not specify the --scope option, the control group is created as a service
rather than as a scope.
For example, run a command named mymonitor in mymon.scope under myslice.slice:
# systemd-run --scope --unit=mymon --slice=myslice mymonitor
Running as unit mymon.scope.
You can then use systemctl to control the access that a scope has to system resources in the same way
as for a service. However, unlike a service, you must specify the .scope extension, for example:
# systemctl --runtime set-property mymon.scope CPUShares=256
For more information, see the systemctl(1), systemd-cgls(1), and systemd.resource-control(5) manual pages.
This chapter describes the files and virtual file systems that you can use to change configuration settings
for your system.
authconfig
Specifies the authentication mechanisms that are enabled on the system.
autofs
Defines custom options for automatically mounting devices and controlling the
operation of the automounter.
crond
Passes arguments to the crond daemon at boot time.
firewalld
Passes arguments to the firewall daemon (firewalld) at boot time.
grub
Specifies default settings for the GRUB 2 boot loader. This file is a symbolic
link to /etc/default/grub. For more information, see Section 4.3, About
the GRUB 2 Boot Loader.
init
Controls how the system appears and functions during the boot process.
keyboard
Specifies the keyboard layout and type.
modules (directory)
Contains scripts that the kernel runs to load additional modules at boot time.
A script in the modules directory must have the extension .modules and
it must have 755 executable permissions. For an example, see the bluez-uinput.modules
script that loads the uinput module. For more information,
see Section 6.5, Specifying Modules to be Loaded at Boot Time.
named
Passes arguments to the name service daemon at boot time. The named
daemon is a Domain Name System (DNS) server that is part of the Berkeley
Internet Name Domain (BIND) distribution. This server maintains a table that
associates host names with IP addresses on the network.
nfs
Controls which ports remote procedure call (RPC) services use for NFS v2
and v3. This file allows you to set up firewall rules for NFS v2 and v3. Firewall
configuration for NFS v4 does not require you to edit this file.
ntpd
Passes arguments to the network time protocol (NTP) daemon at boot time.
samba
Passes arguments to the smbd, nmbd, and winbindd daemons at boot time
to support file-sharing connectivity for Windows clients, NetBIOS-over-IP
naming service, and connection management to domain controllers.
selinux
Controls the state of SELinux on the system. This file is a symbolic link to
/etc/selinux/config. For more information, see Section 26.2.3, Setting
SELinux Modes.
snapper
Defines a list of btrfs file systems and thinly-provisioned LVM volumes whose
contents can be recorded as snapshots by the snapper utility. For more
information, see Section 21.7.1, Using snapper with Btrfs Subvolumes and
Section 19.3.6, Using snapper with Thinly-Provisioned Logical Volumes.
sysstat
Configures logging parameters for system activity data collector utilities such
as sadc.
Files that contain information about related topics are grouped into virtual directories. For example, a
separate directory exists in /proc for each process that is currently running on the system, and the
directory's name corresponds to the numeric process ID. /proc/1 corresponds to the systemd process,
which has a PID of 1.
You can use commands such as cat, less, and view to examine virtual files within /proc. For example,
/proc/cpuinfo contains information about the system's CPUs:
# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
stepping        : 7
cpu MHz         : 2393.714
cache size      : 6144 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
...
Certain files under /proc require root privileges for access or contain information that is not
human-readable. You can use utilities such as lspci, free, and top to access the information in these
files. For example, lspci lists all PCI devices on a system:
# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02)
00:04.0 System peripheral: InnoTek Systemberatung GmbH VirtualBox Guest Service
00:05.0 Multimedia audio controller: Intel Corporation 82801AA AC'97 Audio Controller (rev 01)
00:06.0 USB controller: Apple Inc. KeyLargo/Intrepid USB
00:07.0 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:0b.0 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB2 EHCI Controller
00:0d.0 SATA controller: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 02)
...
PID (directory)
    Provides information about the process with the corresponding process ID:
    cmdline    Command path.
    cwd        Symbolic link to the process's current working directory.
    environ    Environment variables.
    exe        Symbolic link to the command executable.
    fd/N       File descriptors.
    maps       Memory maps to executable and library files.
    root       Symbolic link to the effective root directory for the process.
    stack      Kernel stack trace.
    status     Run state and memory usage information.
buddyinfo
    Provides information about the level of external fragmentation of memory zones.
bus (directory)
    Contains information about the buses, such as pci and usb, to which devices are attached.
cgroups
Provides information about the resource control groups that are in use
on the system.
cmdline
    Lists the parameters that were passed to the kernel at boot time.
cpuinfo
    Provides information about the system's CPUs.
crypto
    Lists the cryptographic algorithms that the kernel provides.
devices
Lists the names and major device numbers of all currently configured
characters and block devices.
dma
Lists the direct memory access (DMA) channels that are currently in
use.
driver (directory)
    Contains information about the drivers that the kernel uses.
execdomains
Lists the execution domains for binaries that the Oracle Linux kernel
supports.
filesystems
Lists the file system types that the kernel supports. Entries marked
with nodev are not in use.
fs (directory)
    Contains information about mounted file systems, organized by file system type.
interrupts
Records the number of interrupts per interrupt request queue (IRQ) for
each CPU since system startup.
iomem
    Lists the system memory map for each physical device.
ioports
Lists the range of I/O port addresses that the kernel uses with devices.
irq (directory)
Contains information about each IRQ. You can configure the affinity
between each IRQ and the system CPUs.
kcore
Presents the system's physical memory in core file format that you
can examine using a debugger such as crash or gdb. This file is not
human-readable.
kmsg
    Records kernel-generated messages, which are picked up by programs such as dmesg.
loadavg
    Displays the system load averages over the last 1, 5, and 15 minutes.
locks
Displays information about the file locks that the kernel is currently
holding on behalf of processes. The information provided includes:
lock class (FLOCK or POSIX)
lock type (ADVISORY or MANDATORY)
access type (READ or WRITE)
process ID
major device, minor device, and inode numbers
bounds of the locked region
mdstat
    Provides status information about software RAID (md) devices.
meminfo
    Provides various statistics about current memory and swap usage.
modules
    Lists the modules that are currently loaded into the kernel.
mounts
    Lists the file systems that are currently mounted (a symbolic link to /proc/self/mounts).
net (directory)
    Provides status information about network protocols and interfaces.
partitions
Lists the major and minor device numbers, number of blocks, and
name of partitions mounted by the system.
scsi/device_info
scsi/scsi and
scsi/sg/*
self
    A symbolic link to the /proc directory of the process that is examining /proc.
slabinfo
    Provides statistics about the kernel's slab caches.
softirqs
    Records the number of softirq events handled by each CPU.
stat
    Records various statistics about the system since it was started, including:
    cpu     Total CPU time (measured in jiffies) spent in user mode, low-priority
            user mode, system mode, idle, waiting for I/O, handling hardirq events,
            and handling softirq events.
    cpuN    The same times for each individual CPU N.
swaps
    Provides information about swap space utilization.
sys (directory)
Provides information about the system and also allows you to enable,
disable, or modify kernel features. You can write new settings to any
file that has write permission. See Section 5.2.2, Changing Kernel
Parameters.
The following subdirectory hierarchies of /proc/sys contain virtual
files, some of whose values you can usefully alter:
dev
Device parameters.
fs
    File system parameters.
kernel
    Kernel configuration parameters.
net
Networking parameters.
sysvipc (directory)
    Provides information about the usage of System V interprocess communication (IPC) resources.
tty (directory)
    Provides information about the available and currently used terminal devices.
vmstat
    Provides various virtual memory statistics.
Other files take binary or Boolean values. For example, the value of
/proc/sys/net/ipv4/ip_forward determines whether the kernel forwards IPv4 network packets.
# cat /proc/sys/net/ipv4/ip_forward
0
# echo 1 > /proc/sys/net/ipv4/ip_forward
# cat /proc/sys/net/ipv4/ip_forward
1
You can use the sysctl command to view or modify values under the /proc/sys directory.
Note
Even root cannot bypass the file access permissions of virtual file entries under
/proc. If you attempt to change the value of a read-only entry such as /proc/
partitions, the write fails because there is no kernel code to service the write()
system call.
To display all of the current kernel settings:
# sysctl -a
kernel.sched_child_runs_first = 0
kernel.sched_min_granularity_ns = 2000000
kernel.sched_latency_ns = 10000000
kernel.sched_wakeup_granularity_ns = 2000000
kernel.sched_shares_ratelimit = 500000
...
Note
The delimiter character in the name of a setting is a period (.) rather than a slash
(/) in a path relative to /proc/sys. For example, net.ipv4.ip_forward
represents net/ipv4/ip_forward and kernel.msgmax represents kernel/
msgmax.
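Because the mapping between the two forms is purely mechanical, you can translate one into the other with tr. The following sketch uses net.ipv4.ip_forward as its example; note that the simple dot-to-slash substitution assumes a setting name whose components do not themselves contain dots:

```shell
# Convert a sysctl setting name to its path under /proc/sys, and back again.
name="net.ipv4.ip_forward"
path="/proc/sys/$(echo "$name" | tr . /)"
echo "$path"                          # /proc/sys/net/ipv4/ip_forward
echo "${path#/proc/sys/}" | tr / .    # net.ipv4.ip_forward
```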
To display an individual setting, specify its name as the argument to sysctl:
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
To change the value of a setting, use the following form of the command:
# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
Changes that you make in this way remain in force only until the system is rebooted. To make
configuration changes persist after the system is rebooted, you must add them to the /etc/sysctl.conf
file. Any changes that you make to this file take effect when the system reboots or if you run the sysctl -p command, for example:
# sed -i '/net.ipv4.ip_forward/s/= 0/= 1/' /etc/sysctl.conf
# grep ip_forward /etc/sysctl.conf
net.ipv4.ip_forward = 1
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
...
kernel.shmall = 4294967296
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
For more information, see the sysctl(8) and sysctl.conf(5) manual pages.
fs.file-max
Specifies the maximum number of open files for all processes. Increase the value of this parameter if you
see messages about running out of file handles.
net.core.netdev_max_backlog
Specifies the size of the receiver backlog queue, which is used if an interface receives packets faster than
the kernel can process them. If this queue is too small, packets are lost at the receiver, rather than on the
network.
net.core.rmem_max
Specifies the maximum read socket buffer size. To minimize network packet loss, this buffer must be large
enough to handle incoming network packets.
net.core.wmem_max
Specifies the maximum write socket buffer size. To minimize network packet loss, this buffer must be large
enough to handle outgoing network packets.
net.ipv4.tcp_available_congestion_control
Displays the TCP congestion avoidance algorithms that are available for use. Use the modprobe
command if you need to load additional modules such as tcp_htcp to implement the htcp algorithm.
net.ipv4.tcp_congestion_control
Specifies which TCP congestion avoidance algorithm is used.
net.ipv4.tcp_max_syn_backlog
Specifies the number of outstanding SYN requests that are allowed. Increase the value of this parameter
if you see synflood warnings in your logs, and investigation shows that they are occurring because the
server is overloaded by legitimate connection attempts.
net.ipv4.tcp_rmem
Specifies minimum, default, and maximum receive buffer sizes that are used for a TCP socket. The
maximum value cannot be larger than net.core.rmem_max.
net.ipv4.tcp_wmem
Specifies minimum, default, and maximum send buffer sizes that are used for a TCP socket. The maximum
value cannot be larger than net.core.wmem_max.
vm.swappiness
Specifies how likely the kernel is to write loaded pages to swap rather than drop pages from the system
page cache. When set to 0, swapping only occurs to avoid an out of memory condition. When set to
100, the kernel swaps aggressively. For a desktop system, setting a lower value can improve system
responsiveness by decreasing latency. The default value is 60.
Caution
This parameter is intended for use with laptops to reduce power consumption by the
hard disk. Do not adjust this value on server systems.
kernel.hung_task_panic
(UEK R3 only) If set to 1, the kernel panics if any kernel or user thread sleeps in the
TASK_UNINTERRUPTIBLE state (D state) for more than kernel.hung_task_timeout_secs seconds.
A process remains in D state while waiting for I/O to complete. You cannot kill or interrupt a process in this
state.
The default value is 0, which disables the panic.
Tip
To diagnose a hung thread, you can examine /proc/PID/stack, which displays
the kernel stack for both kernel and user threads.
kernel.hung_task_timeout_secs
(UEK R3 only) Specifies how long a user or kernel thread can remain in D state before a warning message
is generated or the kernel panics (if the value of kernel.hung_task_panic is 1). The default value is
120 seconds. A value of 0 disables the timeout.
kernel.nmi_watchdog
If set to 1 (default), enables the non-maskable interrupt (NMI) watchdog thread in the kernel. If you want
to use the NMI switch or the OProfile system profiler to generate an undefined NMI, set the value of
kernel.nmi_watchdog to 0.
kernel.panic
Specifies the number of seconds after a panic before a system will automatically reset itself.
If the value is 0, the system hangs, which allows you to collect detailed information about the panic for
troubleshooting. This is the default value.
To enable automatic reset, set a non-zero value. If you require a memory image (vmcore), allow enough
time for Kdump to create this image. The suggested value is 30 seconds, although large systems will
require a longer time.
kernel.panic_on_io_nmi
If set to 0 (default), the system tries to continue operations if the kernel detects an I/O channel check
(IOCHK) NMI that usually indicates an uncorrectable hardware error. If set to 1, the system panics.
kernel.panic_on_oops
If set to 0, the system tries to continue operations if the kernel encounters an oops or BUG condition. If set
to 1 (default), the system delays a few seconds to give the kernel log daemon, klogd, time to record the
oops output before the panic occurs.
In an OCFS2 cluster, set the value to 1 to specify that a system must panic if a kernel oops occurs. If
a kernel thread required for cluster operation crashes, the system must reset itself. Otherwise, another
node might not be able to tell whether a node is slow to respond or unable to respond, causing cluster
operations to hang.
kernel.panic_on_stackoverflow
(RHCK only) If set to 0 (default), the system tries to continue operations if the kernel detects an overflow in
a kernel stack. If set to 1, the system panics.
kernel.panic_on_unrecovered_nmi
If set to 0 (default), the system tries to continue operations if the kernel detects an NMI that usually
indicates an uncorrectable parity or ECC memory error. If set to 1, the system panics.
kernel.softlockup_panic
If set to 0 (default), the system tries to continue operations if the kernel detects a soft-lockup error
that causes the NMI watchdog thread to fail to update its time stamp for more than twice the value of
kernel.watchdog_thresh seconds. If set to 1, the system panics.
kernel.unknown_nmi_panic
If set to 1, the system panics if the kernel detects an undefined NMI. You would usually generate an
undefined NMI by manually pressing an NMI switch. As the NMI watchdog thread also uses the undefined
NMI, set the value of kernel.unknown_nmi_panic to 0 if you set kernel.nmi_watchdog to 1.
kernel.watchdog_thresh
Specifies the interval between generating an NMI performance monitoring interrupt that the kernel uses
to check for hard-lockup and soft-lockup errors. A hard-lockup error is assumed if a CPU is unresponsive
to the interrupt for more than kernel.watchdog_thresh seconds. The default value is 10 seconds. A
value of 0 disables the detection of lockup errors.
vm.panic_on_oom
If set to 0 (default), the kernel's OOM-killer scans through the entire task list and attempts to kill a
memory-hogging process to avoid a panic. If set to 1, the kernel panics but can survive under certain conditions. If
a process limits allocations to certain nodes by using memory policies or cpusets, and those nodes reach
memory exhaustion status, the OOM-killer can kill one process. No panic occurs in this case because other
nodes' memory might be free and the system as a whole might not yet be out of memory. If set to 2, the
kernel always panics when an OOM condition occurs. Settings of 1 and 2 are intended for use with
clusters, depending on your preferred failover policy.
Virtual Directory
Description
block
Contains subdirectories for block devices.
bus
Contains subdirectories for each supported physical bus type.
class
Contains subdirectories grouped by functional type of device, such as
block, net, or tty.
devices
Contains the global device hierarchy of all devices on the system.
firmware
Contains subdirectories for firmware objects.
module
Contains subdirectories for each module that is loaded into the kernel.
power
Contains files that you can use to manage the power state of the
system and devices.
This chapter describes how to load, unload, and modify the behavior of kernel modules.
To list the modules that are currently loaded into the kernel, use the lsmod command, for example:
# lsmod
Module                  Size  Used by
...
ppdev                   7901  0
parport_pc             21262  0
parport                33812  2 ppdev,parport_pc
Note
This command produces its output by reading the /proc/modules file.
The output shows the module name, the amount of memory it uses, the number of processes using the
module and the names of other modules on which it depends. In the sample output, the module parport
depends on the modules ppdev and parport_pc, which are loaded in advance of parport. Two
processes are currently using all three modules.
To display detailed information about a module, use the modinfo command, for example:
# modinfo ahci
filename:       /lib/modules/2.6.32-300.27.1.el6uek.x86_64/kernel/drivers/ata/ahci.ko
version:        3.0
license:        GPL
description:    AHCI SATA low-level driver
author:         Jeff Garzik
srcversion:     AC5EC885397BF332DE16389
alias:          pci:v*d*sv*sd*bc01sc06i01*
...
depends:
vermagic:       ...
parm:           ...
parm:           ...
...
The displayed fields include the following:
version
Version number of the module.
description
Brief description of the module.
srcversion
Hash of the source code that was used to create the module.
alias
Alternate names by which the module is known.
depends
Comma-separated list of any modules on which this module depends.
vermagic
Kernel version that was used to compile the module, which is checked against the current
kernel when the module is loaded.
parm
Module parameters and their descriptions.
Modules are loaded into the kernel from kernel object (ko) files in the /lib/
modules/kernel_version/kernel directory. To display the absolute path of a kernel object file,
specify the -n option, for example:
# modinfo -n parport
/lib/modules/2.6.32-300.27.1.el6uek.x86_64/kernel/drivers/parport/parport.ko
For more information, see the lsmod(8) and modinfo(8) manual pages.
For example, lsmod shows the modules that were loaded to satisfy the dependencies of the nfs
module:
# lsmod
Module                  Size  Used by
nfs                   266415  0
lockd                  66530  1 nfs
fscache                41704  1 nfs
nfs_acl                 2477  1 nfs
auth_rpcgss            38976  1 nfs
sunrpc                204268  5 nfs,lockd,nfs_acl,auth_rpcgss
Use the -v verbose option to show if any additional modules are loaded to resolve dependencies.
# modprobe -v nfs
insmod /lib/modules/2.6.32-300.27.1.el6uek.x86_64/kernel/net/sunrpc/auth_gss/auth_rpcgss.ko
insmod /lib/modules/2.6.32-300.27.1.el6uek.x86_64/kernel/fs/nfs_common/nfs_acl.ko
insmod /lib/modules/2.6.32-300.27.1.el6uek.x86_64/kernel/fs/fscache/fscache.ko
...
66
Note
modprobe does not reload modules that are already loaded. You must first unload
a module before you can load it again.
Use the -r option to unload kernel modules, for example:
# modprobe -rv nfs
rmmod /lib/modules/2.6.32-300.27.1.el6uek.x86_64/kernel/fs/nfs/nfs.ko
rmmod /lib/modules/2.6.32-300.27.1.el6uek.x86_64/kernel/fs/lockd/lockd.ko
rmmod /lib/modules/2.6.32-300.27.1.el6uek.x86_64/kernel/fs/fscache/fscache.ko
...
Modules are unloaded in the reverse order that they were loaded. Modules are not unloaded if a process or
another loaded module requires them.
Note
modprobe uses the insmod and rmmod utilities to load and unload modules. As
insmod and rmmod do not resolve module dependencies, do not use these utilities.
For more information, see the modprobe(8) and modules.dep(5) manual pages.
Use spaces to separate multiple parameter/value pairs. Array values are represented by a comma-separated list, for example:
# modprobe foo arrayparm=1,2,3,4
You can also change the values of some parameters for loaded modules and built-in drivers by writing the
new value to a file under /sys/module/module_name/parameters, for example:
# echo 0 > /sys/module/ahci/parameters/skip_host_reset
The /etc/modprobe.d directory contains .conf configuration files that specify module options, create
module aliases, and override the usual behavior of modprobe for modules with special requirements.
The /etc/modprobe.conf file that was used with earlier versions of modprobe is also valid if it exists.
Entries in the /etc/modprobe.conf and /etc/modprobe.d/*.conf files use the same syntax.
The following are commonly used commands in modprobe configuration files:
alias
Creates an alternate name for a module. The alias can include shell wildcards. For example,
create an alias for the sd_mod module:
alias block-major-8-* sd_mod
blacklist
Ignore a module's internal alias that is displayed by the modinfo command. This command
is typically used if the associated hardware is not required, if two or more modules both
support the same devices, or if a module invalidly claims to support a device. For example,
blacklist the alias for the frame-buffer driver cirrusfb:
blacklist cirrusfb
install
Runs a shell command instead of loading a module into the kernel. For example, load the
module snd-emu10k1-synth instead of snd-emu10k1:
install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 && \
/sbin/modprobe snd-emu10k1-synth
options
Defines options for a module. For example, define the nohwcrypt and qos options for the
b43 module:
options b43 nohwcrypt=1 qos=0
remove
Runs a shell command instead of unloading a module. For example, unmount /proc/fs/
nfsd before unloading the nfsd module:
remove nfsd { /bin/umount /proc/fs/nfsd > /dev/null 2>&1 || :; } ; \
/sbin/modprobe -r --first-time --ignore-remove nfsd
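For illustration, the preceding commands could be combined in a single, hypothetical configuration file such as /etc/modprobe.d/local.conf (the file name is an assumption; any .conf file in the directory is read):

```
# Hypothetical /etc/modprobe.d/local.conf combining the commands shown above
alias block-major-8-* sd_mod
blacklist cirrusfb
options b43 nohwcrypt=1 qos=0
install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 && /sbin/modprobe snd-emu10k1-synth
```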
This chapter describes how the system uses device files and how the udev device manager dynamically
creates or removes device node files.
# ls -l /dev
total 0
crw-rw----. 1 root root     10,  56 Mar 17 08:17 autofs
drwxr-xr-x. 2 root root         640 Mar 17 08:17 block
drwxr-xr-x. 2 root root          80 Mar 17 08:16 bsg
drwxr-xr-x. 3 root root          60 Mar 17 08:16 bus
lrwxrwxrwx. 1 root root           3 Mar 17 08:17 cdrom -> sr0
drwxr-xr-x. 2 root root        2880 Mar 17 08:17 char
crw-------. 1 root root      5,   1 Mar 17 08:17 console
lrwxrwxrwx. 1 root root          11 Mar 17 08:17 core -> /proc/kcore
drwxr-xr-x. 2 root root         100 Mar 17 08:16 cpu
crw-rw----. 1 root root     10,  61 Mar 17 08:17 cpu_dma_latency
drwxr-xr-x. 6 root root         120 Mar 17 08:17 disk
brw-rw----. 1 root disk    253,   0 Mar 17 08:17 dm-0
brw-rw----. 1 root disk    253,   1 Mar 17 08:17 dm-1
...
brw-rw----. 1 root disk      8,   0 Mar 17 08:17 sda
brw-rw----. 1 root disk      8,   1 Mar 17 08:17 sda1
brw-rw----. 1 root disk      8,   2 Mar 17 08:17 sda2
...
crw--w----. 1 root tty       4,   0 Mar 17 08:17 tty0
crw--w----. 1 root tty       4,   1 Mar 17 08:17 tty1
...
crw-rw-rw-. 1 root root      1,   3 Mar 17 08:17 null
...
crw-rw-rw-. 1 root root      1,   5 Mar 17 08:17 zero
Block devices support random access to data, seeking media for data, and usually allow data to be
buffered while it is being written or read. Examples of block devices include hard disks, CD-ROM drives,
flash memory, and other addressable memory devices. The kernel writes data to or reads data from a
block device in blocks of a certain number of bytes. In the sample output, sda is the block device file that
corresponds to the hard disk, and it has a major number of 8 and a minor number of 0. sda1 and sda2 are
partitions of this disk, and they have the same major number as sda (8), but their minor numbers are 1 and
2.
Character devices support streaming of data to or from a device, and data is not usually buffered nor is
random access permitted to data on a device. The kernel writes data to or reads data from a character
device one byte at a time. Examples of character devices include keyboards, mice, terminals, pseudoterminals, and tape drives. tty0 and tty1 are character device files that correspond to terminal devices
that allow users to log in from serial terminals or terminal emulators. These files have major number 4 and
minor numbers 0 and 1.
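You can also display the type and major/minor numbers of a device node with the stat command. A minimal sketch, using /dev/null because it exists on every Linux system (a character device with major number 1 and minor number 3):

```shell
# %F is the file type; %t and %T are the major and minor numbers in hexadecimal
stat -c '%n: %F, major=%t minor=%T' /dev/null
```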
Pseudo-terminal slave devices emulate real terminal devices to interact with software. For example, a
user might log in on a terminal device such as /dev/tty1, which then uses the pseudo-terminal master
device /dev/pts/ptmx to interact with an underlying pseudo-terminal device. The character device files
for pseudo-terminal slaves and master are located in the /dev/pts directory:
# ls -l /dev/pts
total 0
crw--w----. 1 guest tty  136, 0 Mar 17 10:11 0
crw--w----. 1 guest tty  136, 1 Mar 17 10:53 1
crw--w----. 1 guest tty  136, 2 Mar 17 10:11 2
c---------. 1 root  root   5, 2 Mar 17 08:16 ptmx
Some device entries, such as stdin for the standard input, are symbolically linked via the self
subdirectory of the proc file system. The pseudo-terminal device file to which they actually point depends
on the context of the process.
# ls -l /proc/self/fd/[012]
total 0
lrwx------. 1 root root 64 Mar 17 10:02 0 -> /dev/pts/1
lrwx------. 1 root root 64 Mar 17 10:02 1 -> /dev/pts/1
lrwx------. 1 root root 64 Mar 17 10:02 2 -> /dev/pts/1
Character devices such as null, random, urandom, and zero are examples of pseudo-devices that
provide access to virtual functionality implemented in software rather than to physical hardware.
/dev/null is a data sink. Data that you write to /dev/null effectively disappears but the write operation
succeeds. Reading from /dev/null returns EOF (end-of-file).
/dev/zero is a data source of an unlimited number of zero-value bytes.
/dev/random and /dev/urandom are data sources of streams of pseudo-random bytes. To maintain
high-entropy output, /dev/random blocks if its entropy pool does not contain sufficient bits of noise.
/dev/urandom does not block and, as a result, the entropy of its output might not be as consistently high
as that of /dev/random. However, neither /dev/random nor /dev/urandom is considered to be truly
random enough for the purposes of secure cryptography such as military-grade encryption.
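A short sketch that demonstrates the behavior of these pseudo-devices from the shell:

```shell
echo "discarded" > /dev/null        # the write succeeds, but the data is dropped
wc -c < /dev/null                   # reading returns EOF immediately: prints 0
head -c 4 /dev/zero | od -An -tx1   # four zero-value bytes from /dev/zero
head -c 16 /dev/urandom | wc -c     # 16 pseudo-random bytes, without blocking
```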
You can find out the size of the entropy pool and the entropy value for /dev/random from virtual files
under /proc/sys/kernel/random:
# cat /proc/sys/kernel/random/poolsize
4096
# cat /proc/sys/kernel/random/entropy_avail
3467
For more information, see the null(4), pts(4), and random(4) manual pages.
udev_log
The logging priority, which can be set to err, info, or debug. The default value is err.
udev_root
Specifies the location of the device nodes. The default value is /dev.
Udev reads rules files from the following locations:
/lib/udev/rules.d/*.rules
/etc/udev/rules.d/*.rules
/dev/.udev/rules.d/*.rules
Udev processes the rules files in lexical order, regardless of the directory in which they are located. Rules files in
/etc/udev/rules.d override files of the same name in /lib/udev/rules.d.
The following rules are extracted from the file /lib/udev/rules.d/50-udev-default.rules and
illustrate the syntax of udev rules.
# do not edit this file, it will be overwritten on update
SUBSYSTEM=="block", SYMLINK{unique}+="block/%M:%m"
SUBSYSTEM!="block", SYMLINK{unique}+="char/%M:%m"
KERNEL=="pty[pqrstuvwxyzabcdef][0123456789abcdef]", GROUP="tty", MODE="0660"
Comment lines begin with a # character. All other non-blank lines define a rule, which is a list of one or
more comma-separated key-value pairs. A rule either assigns a value to a key or it tries to find a match for
a key by comparing its current value with the specified value. The following table shows the assignment
and comparison operators that you can use.
Operator
Description
=
Assign a value to a key, overwriting any previous value.
+=
Add a value to the list of values for a key.
:=
Assign a value to a key. This value cannot be changed by any further rules.
==
Match the key's current value against the specified value for equality.
!=
Match the key's current value against the specified value for inequality.
You can use the following shell-style pattern matching characters in values.
Character
Description
*
Matches any number of characters, including none.
?
Matches a single character.
[]
Matches any single character or range of characters that is specified
within the brackets.
Match Key
Description
ACTION
Matches the name of the action that led to an event. For example, ACTION=="add"
or ACTION=="remove".
ENV{key}
Matches the value of the device property key.
KERNEL
Matches the name of the device that is affected by an event. For example,
KERNEL=="dm-*" for disk media.
NAME
Matches the name of a device file or network interface. For example, NAME=="?*" for
any name that consists of one or more characters.
SUBSYSTEM
Matches the subsystem of the device that is affected by an event. For example,
SUBSYSTEM=="tty".
TEST
Tests for the existence of the specified file or path.
The following table shows some of the assignment keys that you can use.
Assignment Key
Description
ENV{key}
Specifies a value for the device property key.
GROUP
Specifies the group for a device file. For example, GROUP="disk".
IMPORT{type}
Specifies a set of variables for the device property, depending on type:
cmdline
Import a single property from the boot kernel command line. For
simple flags, udev sets the value of the property to 1. For example,
IMPORT{cmdline}="nodmraid".
db
Interpret the specified value as an index into the device database and
import a single property, which must have already been set by an earlier
event. For example, IMPORT{db}="DM_UDEV_LOW_PRIORITY_FLAG".
file
Interpret the specified value as the name of a text file and import its
contents, which must be in environmental key format. For example,
IMPORT{file}="keyfile".
parent
Interpret the specified value as a key-name filter and import the stored
keys from the database entry for the parent device. For example,
IMPORT{parent}="ID_*".
program
Run the specified value as an external program and import its result,
which must be in environmental key format.
MODE
Specifies the permissions for a device file.
NAME
Specifies the name of a device file or network interface.
OPTIONS
Specifies rule and device options.
OWNER
Specifies the owner for a device file.
RUN
Specifies a command to be run after the device file has been created. For example,
RUN+="/usr/bin/eject $kernel", where $kernel is the kernel name of the
device.
SYMLINK
Specifies the name of a symbolic link to a device file. For example, SYMLINK
+="disk/by-uuid/$env{ID_FS_UUID_ENC}", where $env{} is substituted
with the specified device property.
Other assignment keys include ATTR{key}, GOTO, LABEL, RUN, and WAIT_FOR.
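As an illustration, match keys and assignment keys can be combined in a single rule. The following hypothetical rule (the device name, group, mode, and link name are assumptions chosen for the example) creates a symbolic link when a block device named sdb is added:

```
# When the kernel adds block device sdb, set its group and mode and add a link
KERNEL=="sdb", SUBSYSTEM=="block", ACTION=="add", GROUP="disk", MODE="0660", SYMLINK+="backup_disk"
```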
The following table shows string substitutions that are commonly used with the GROUP, MODE, NAME,
OWNER, PROGRAM, RUN, and SYMLINK keys.
$attr{file} or
%s{file}
The value of a device attribute found in a file under /sys. For example,
ENV{MATCHADDR}="$attr{address}".
$devpath or
%p
The device path of the device in the sysfs file system under /sys. For example,
RUN+="keyboard-force-release.sh $devpath common-volume-keys".
$env{key} or
%E{key}
The value of the device property key.
$kernel or
%k
The kernel name for the device.
$major or
%M
The major number of the device.
$minor or
%m
The minor number of the device.
$name
The name of the device file of the current device.
Udev expands the strings specified for RUN immediately before its program is executed, which is after udev
has finished processing all other rules for the device. For the other keys, udev expands the strings while it
is processing the rules.
For more information, see the udev(7) manual page.
To display the device properties that udev has recorded for a device, use the udevadm info command, for example:
# udevadm info --query=property --name=/dev/sda
DEVNAME=/dev/sda
DEVTYPE=disk
SUBSYSTEM=block
ID_ATA=1
ID_TYPE=disk
ID_BUS=ata
ID_MODEL=VBOX_HARDDISK
ID_MODEL_ENC=VBOX\x20HARDDISK\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20...
ID_REVISION=1.0
ID_SERIAL=VBOX_HARDDISK_VB579a85b0-bf6debae
ID_SERIAL_SHORT=VB579a85b0-bf6debae
ID_ATA_WRITE_CACHE=1
ID_ATA_WRITE_CACHE_ENABLED=1
ID_ATA_FEATURE_SET_PM=1
ID_ATA_FEATURE_SET_PM_ENABLED=1
ID_ATA_SATA=1
ID_ATA_SATA_SIGNAL_RATE_GEN2=1
ID_SCSI_COMPAT=SATA_VBOX_HARDDISK_VB579a85b0-bf6debae
ID_PATH=pci-0000:00:0d.0-scsi-0:0:0:0
ID_PART_TABLE_TYPE=dos
LVM_SBIN_PATH=/sbin
UDISKS_PRESENTATION_NOPOLICY=0
UDISKS_PARTITION_TABLE=1
UDISKS_PARTITION_TABLE_SCHEME=mbr
UDISKS_PARTITION_TABLE_COUNT=2
UDISKS_ATA_SMART_IS_AVAILABLE=0
DEVLINKS=/dev/block/8:0 /dev/disk/by-id/ata-VBOX_HARDDISK_VB579a85b0-bf6debae ...
To display all properties of /dev/sda and its parent devices that udev has found in /sys:
# udevadm info --attribute-walk --name=/dev/sda
...
looking at device '/devices/pci0000:00/0000:00:0d.0/host0/target0:0:0/0:0:0:0/block/sda':
KERNEL=="sda"
SUBSYSTEM=="block"
DRIVER==""
ATTR{range}=="16"
ATTR{ext_range}=="256"
ATTR{removable}=="0"
ATTR{ro}=="0"
ATTR{size}=="83886080"
ATTR{alignment_offset}=="0"
ATTR{capability}=="52"
ATTR{stat}=="   20884    15437  1254282   338919     5743     8644   103994   109005 ..."
ATTR{inflight}=="       0        0"
looking at parent device '/devices/pci0000:00/0000:00:0d.0/host0/target0:0:0/0:0:0:0':
KERNELS=="0:0:0:0"
SUBSYSTEMS=="scsi"
DRIVERS=="sd"
ATTRS{device_blocked}=="0"
ATTRS{type}=="0"
ATTRS{scsi_level}=="6"
ATTRS{vendor}=="ATA     "
ATTRS{model}=="VBOX HARDDISK   "
ATTRS{rev}=="1.0 "
ATTRS{state}=="running"
ATTRS{timeout}=="30"
ATTRS{iocounterbits}=="32"
ATTRS{iorequest_cnt}=="0x6830"
ATTRS{iodone_cnt}=="0x6826"
ATTRS{ioerr_cnt}=="0x3"
ATTRS{modalias}=="scsi:t-0x00"
ATTRS{evt_media_change}=="0"
ATTRS{dh_state}=="detached"
ATTRS{queue_depth}=="31"
ATTRS{queue_ramp_up_period}=="120000"
ATTRS{queue_type}=="simple"
looking at parent device '/devices/pci0000:00/0000:00:0d.0/host0/target0:0:0':
KERNELS=="target0:0:0"
SUBSYSTEMS=="scsi"
DRIVERS==""
looking at parent device '/devices/pci0000:00/0000:00:0d.0/host0':
KERNELS=="host0"
SUBSYSTEMS=="scsi"
DRIVERS==""
looking at parent device '/devices/pci0000:00/0000:00:0d.0':
KERNELS=="0000:00:0d.0"
SUBSYSTEMS=="pci"
DRIVERS=="ahci"
ATTRS{vendor}=="0x8086"
ATTRS{device}=="0x2829"
ATTRS{subsystem_vendor}=="0x0000"
ATTRS{subsystem_device}=="0x0000"
ATTRS{class}=="0x010601"
ATTRS{irq}=="21"
ATTRS{local_cpus}=="00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003"
ATTRS{local_cpulist}=="0-1"
ATTRS{modalias}=="pci:v00008086d00002829sv00000000sd00000000bc01sc06i01"
ATTRS{numa_node}=="-1"
ATTRS{enable}=="1"
ATTRS{broken_parity_status}=="0"
ATTRS{msi_bus}==""
ATTRS{msi_irqs}==""
looking at parent device '/devices/pci0000:00':
KERNELS=="pci0000:00"
SUBSYSTEMS==""
DRIVERS==""
The command starts at the device specified by its device path and walks up the chain of parent devices.
For every device that it finds, it displays all possible attributes for the device and its parent devices in the
match key format for udev rules.
For more information, see the udevadm(8) manual page.
1. Create a rule file, /etc/udev/rules.d/10-local.rules, that creates a symbolic link for the disk
device sdb, for example:
KERNEL=="sdb", SYMLINK+="my_disk"
Listing the device files in /dev shows that udev has not yet applied the rule:
# ls /dev/sd* /dev/my_disk
ls: cannot access /dev/my_disk: No such file or directory
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb
2. To simulate how udev applies its rules to create a device, you can use the udevadm test command
with the device path of sdb listed under the /sys/class/block hierarchy, for example:
# udevadm test /sys/class/block/sdb
calling: test
version ...
This program is for debugging only, it does not run any program
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.
...
LINK 'my_disk' /etc/udev/rules.d/10-local.rules:1
...
creating link '/dev/my_disk' to '/dev/sdb'
creating symlink '/dev/my_disk' to 'sdb'
...
ACTION=add
DEVLINKS=/dev/disk/by-id/ata-VBOX_HARDDISK_VB186e4ce2-f80f170d
/dev/disk/by-uuid/a7dc508d-5bcc-4112-b96e-f40b19e369fe
/dev/my_disk
...
After udev processes the rules files, the symbolic link /dev/my_disk has been added:
# ls -F /dev/sd* /dev/my_disk
/dev/my_disk@ /dev/sda /dev/sda1
/dev/sda2
/dev/sdb
This chapter describes how to configure the system to run tasks automatically within a specific period of
time, at a specified time and date, or when the system is lightly loaded.
Each entry in /etc/crontab has the following format:
minute hour day month day-of-week user command
where the fields have the following values:
minute
0-59.
hour
0-23.
day
1-31.
month
1-12, or a three-letter abbreviation of the month's name, such as jan.
day-of-week
0-7 (where both 0 and 7 represent Sunday), or a three-letter abbreviation of the day's name, such as sun.
user
The user to run the command as, or * for the owner of the crontab file.
command
The shell command to run.
For the minute through day-of-week fields, you can use the following special characters:
*
(asterisk) All valid values for the field.
-
(dash) A range of values, for example, 1-5.
,
(comma) A list of values, for example, 4,6,9,11.
/
(forward slash) A step value, for example, /3 in the hour field means every three hours.
For example, the following entry would run a command every five minutes on weekdays:
0-59/5 * * * 1-5 root command
Run a command at one minute past midnight on the first day of the months April, June, September, and
November:
1 0 1 4,6,9,11 * root command
root can add job definition entries to /etc/crontab, or add crontab-format files to the /etc/cron.d
directory.
Note
If you add an executable job script to the /etc/cron.hourly directory, crond
runs the script once every hour. Your script should check that it is not already
running.
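One common way for a script to check that it is not already running is to take an exclusive, non-blocking lock with flock(1). A minimal sketch follows; the lock file path, the fallback default, and the job body are illustrative placeholders, not part of any standard layout:

```shell
#!/bin/sh
# Guard an hourly cron job against overlapping runs using flock(1).
LOCKFILE="${LOCKFILE:-/tmp/hourlyjob.lock}"
exec 9>"$LOCKFILE" || exit 1       # open fd 9 on the lock file
if ! flock -n 9; then              # fail immediately if already locked
    echo "previous run still active, exiting" >&2
    exit 0
fi
echo "running hourly job"
# ... the real work goes here, protected by the lock ...
```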
For more information, see the crontab(5) manual page.
You can use the crontab -e command to create or edit your own crontab file (which can also define environment
variables). The file has the same format as /etc/crontab except that the user field is omitted. When you
save changes to the file, these are written to the file /var/spool/cron/username. To list the contents
of your crontab file, use the crontab -l command. To delete your crontab file, use the crontab -r
command.
For more information, see the crontab(1) manual page.
Each job definition entry in /etc/anacrontab takes the following form:
period delay job-id command
where the fields have the following values:
period
Frequency of job execution specified in days or as @daily, @weekly, or @monthly for once
per day, week, or month.
delay
Number of minutes to wait before running the job.
job-id
Unique name that identifies the job in the anacron log files.
command
The shell command to run.
The following entries are taken from the default /etc/anacrontab file:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22
#period in days   delay in minutes   job-identifier   command
1         5    cron.daily     nice run-parts /etc/cron.daily
7         25   cron.weekly    nice run-parts /etc/cron.weekly
@monthly  45   cron.monthly   nice run-parts /etc/cron.monthly
By default, anacron runs jobs between 03:00 and 22:00 and randomly delays jobs by between 11 and 50
minutes. The job scripts in /etc/cron.daily run anywhere between 03:11 and 03:50 every day if the
system is running, or after the system is booted and the time is less than 22:00. The run-parts script
sequentially executes every program within the directory specified as its argument.
Scripts in /etc/cron.weekly run once per week with a delay offset of between 31 and 70 minutes.
Scripts in /etc/cron.monthly run once per month with a delay offset of between 51 and 90 minutes.
For more information, see the anacron(8) and anacrontab(5) manual pages.
at takes a time as its argument and reads the commands to be run from the standard input. For example,
run the commands in the file atjob in 20 minutes' time:
# at now + 20 minutes < ./atjob
job 1 at 2013-03-19 11:25
The atq command shows the at jobs that are queued to run:
# atq
1 2013-03-19 11:25 a root
The batch command also reads commands from the standard input, but it does not run them until the
system load average drops below 0.8. For example:
# batch < batchjob
job 2 at 2013-03-19 11:31
To cancel one or more queued jobs, specify their job numbers to the atrm command, for example:
# atrm 1 2
1. Edit /etc/sysconfig/atd and use the OPTS variable to specify the minimum interval and load-average
limit options for atd, for example:
OPTS="-b 100 -l 3"
This example sets the minimum interval to 100 seconds and the load-average limit to 3.
2. Restart the atd service:
# systemctl restart atd
3. Verify that the atd daemon is running with the new minimum interval and load-average limit:
# systemctl status atd
For more information, see the systemctl(1) and atd(8) manual pages.
This chapter describes how to collect diagnostic information about a system for Oracle Support, and how to
monitor and tune the performance of a system.
See the sosreport(1) manual page for information about how to enable or disable plugins, and how to
set values for plugin options.
To run sosreport:
1. Enter the command, specifying any options that you need to tailor the report to a particular problem
area.
For example, to record only information about Apache and Tomcat, and to gather all the Apache logs:
# sosreport -o apache,tomcat -k apache.log=on
sosreport (version 2.2)
.
.
.
Press ENTER to continue, or CTRL-C to quit.
To enable all Boolean options for all loaded plugins, except the rpm.rpmva plugin that verifies all
packages and takes a considerable time to run:
# sosreport -a -k rpm.rpmva=off
The following utilities allow you to collect information about system resource usage and errors, and can
help you to identify performance problems caused by overloaded disks, network, memory, or CPUs:
dmesg
Displays the contents of the kernel ring buffer, which can contain errors about system resource
usage. Provided by the util-linux-ng package.
dstat
Displays statistics about system resource usage. Provided by the dstat package.
free
Displays the amount of free and used memory in the system. Provided by the procps package.
iostat
Reports I/O statistics for devices and partitions. Provided by the sysstat package.
iotop
Monitors disk and swap I/O on a per-process basis. Provided by the iotop package.
ip
Reports network interface statistics and errors. Provided by the iproute package.
mpstat
Reports processor-related statistics. Provided by the sysstat package.
sar
Collects, reports, and saves system activity information. Provided by the sysstat package.
ss
Reports network socket statistics. Provided by the iproute package.
top
Provides a dynamic real-time view of the tasks that are running on a system. Provided by the
procps package.
uptime
Displays the system load averages for the past 1, 5, and 15 minutes. Provided by the procps
package.
vmstat
Reports virtual memory statistics. Provided by the procps package.
Many of these utilities provide overlapping functionality. For more information, see the individual manual
page for the utility.
See Section 5.2.3, Parameters that Control System Performance for a list of kernel parameters that affect
system performance.
Alternatively, many of the commands allow you to specify the sampling interval in seconds, for example:
# mpstat interval
If installed, the sar command records statistics every 10 minutes while the system is running and retains
this information for every day of the current month. The following command displays all the statistics that
sar recorded for day DD of the current month:
# sar -A -f /var/log/sa/saDD
To run the sar command as a background process and collect data in a file that you can display later by using
the -f option:
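A typical invocation might look like the following, where the file name, interval, and count are illustrative; sar writes binary records to the file, which you can replay later with the -f option:
# sar -o /var/tmp/sar_data 10 360 >/dev/null 2>&1 &
...
# sar -f /var/tmp/sar_data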
The top command provides a real-time display of CPU activity. By default, top lists the most CPU-intensive
processes on the system. In its upper section, top displays general information including the load
averages over the past 1, 5, and 15 minutes, the number of running and sleeping processes (tasks), and
total CPU and memory usage. In its lower section, top displays a list of processes, including the process
ID number (PID), the process owner, CPU usage, memory usage, running time, and the command name.
By default, the list is sorted by CPU usage, with the top consumer of CPU listed first. Type f to select
which fields top displays, o to change the order of the fields, or O to change the sort field. For example,
entering On sorts the list on the percentage memory usage field (%MEM).
sar -B reports memory paging statistics, including pgscank/s, which is the number of memory pages
scanned by the kswapd daemon per second, and pgscand/s, which is the number of memory pages
scanned directly per second.
sar -W reports swapping statistics, including pswpin/s and pswpout/s, which are the numbers of
pages swapped in and swapped out per second.
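For example, to take five samples of the paging and swapping statistics at 5-second intervals (interval and count values are illustrative):
# sar -B 5 5
# sar -W 5 5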
If %memused is near 100% and the scan rate is continuously over 200 pages per second, the system has a
memory shortage.
Once a system runs out of real or physical memory and starts using swap space, its performance
deteriorates dramatically. If you run out of swap space, your programs or the entire operating system are
likely to crash. If free or top indicate that little swap space remains available, this is also an indication
you are running low on memory.
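As a quick check when the sysstat tools are not available, you can compute the remaining swap percentage directly from /proc/meminfo; the following is a minimal sketch, not part of the standard tooling:

```shell
# Report the percentage of swap space that is still free, using the
# SwapTotal and SwapFree fields (in kB) from /proc/meminfo.
awk '/^SwapTotal:/ { total = $2 }
     /^SwapFree:/  { free  = $2 }
     END {
         if (total == 0)
             print "no swap configured"
         else
             printf "swap free: %.1f%%\n", 100 * free / total
     }' /proc/meminfo
```

A persistently low value here, together with high pswpin/s and pswpout/s rates from sar -W, indicates that the system is short of memory.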
The output from the dmesg command might include notification of any problems with physical memory that
were detected at boot time.
VERS represents the version number of OSWatcher, for example 730 for OSWatcher 7.30.
Extracting the tar file creates a directory named oswbb, which contains all the directories and files that
are associated with OSWbb, including the startOSWbb.sh script.
4. To enable the collection of iostat information for NFS volumes, edit the OSWatcher.sh script in the
oswbb directory, and set the value of nfs_collect to 1:
nfs_collect=1
The optional frequency and duration arguments specify how often (in seconds) OSWbb should collect
data and the number of hours for which OSWbb should run. The default values are 30 seconds and 48
hours. The following example starts OSWbb recording data at intervals of 60 seconds, and has it record
data for 12 hours:
# ./startOSWbb.sh 60 12
...
Testing for discovery of OS Utilities...
VMSTAT found on your system.
IOSTAT found on your system.
MPSTAT found on your system.
IFCONFIG found on your system.
NETSTAT found on your system.
TOP found on your system.
Testing for discovery of OS CPU COUNT
oswbb is looking for the CPU COUNT on your system
CPU COUNT will be used by oswbba to automatically look for cpu problems
CPU COUNT found on your system.
CPU COUNT = 4
Discovery completed.
Starting OSWatcher Black Box v7.3.0 on date and time
With SnapshotInterval = 60
With ArchiveInterval = 12
...
Data is stored in directory: OSWbba_archive
Starting Data Collection...
oswbb heartbeat: date and time
oswbb heartbeat: date and time + 60 seconds
...
OSWbba_archive is the path of the archive directory that contains the OSWbb log files.
To stop OSWbb prematurely, run the stopOSWbb.sh script from the oswbb directory.
# ./stopOSWbb.sh
OSWbb collects data in the following directories under the oswbb/archive directory:
oswifconfig
Contains output from ifconfig.
oswiostat
Contains output from iostat.
oswmeminfo
Contains a listing of the contents of /proc/meminfo.
oswmpstat
Contains output from mpstat.
oswnetstat
Contains output from netstat.
oswprvtnet
If you have enabled private network tracing for RAC, contains information about the
status of the private networks.
oswps
Contains output from ps.
oswslabinfo
Contains a listing of the contents of /proc/slabinfo.
oswtop
Contains output from top.
oswvmstat
Contains output from vmstat.
OSWbba_archive is the path of the archive directory that contains the OSWbb log files.
You can use OSWbba to display the following types of performance graph:
Process run, wait and block queues.
CPU time spent running in system, user, and idle mode.
Context switches and interrupts.
Free memory and available swap.
Reads per second, writes per second, service time for I/O requests, and percentage utilization of
bandwidth for a specified block device.
You can also use OSWbba to save the analysis to a report file, which reports instances of system
slowdown, spikes in run queue length, or memory shortage, describes probable causes, and offers
suggestions on how to improve performance.
# java -jar oswbba.jar -i OSWbba_archive -A
For more information about OSWbb and OSWbba, refer to the OSWatcher Black Box User Guide (Article
ID 301137.1) and the OSWatcher Black Box Analyzer User Guide (Article ID 461053.1), which are
available from My Oracle Support (MOS) at http://support.oracle.com.
The Kernel Dump Configuration GUI starts. If Kdump is currently disabled, the green Enable button is
selectable and the Disable button is greyed out.
2. Click Enable to enable Kdump.
3. You can select the following settings tabs to adjust the configuration of Kdump.
Basic Settings
Allows you to specify the amount of memory to reserve for Kdump. The
default setting is 128 MB.
Target Settings
Allows you to specify the target location for the vmcore dump file on
a locally accessible file system, on a raw disk device, or in a remote
directory using NFS or SSH over IPv4. The default location is /var/crash.
You cannot save a dump file on an eCryptfs file system, in remote
directories that are NFS mounted on the rootfs file system, or in
remote directories that require the use of IPv6, SMB, CIFS, FCoE,
wireless NICs, multipathed storage, or iSCSI over software initiators to
access them.
Filtering Settings
Allows you to select which type of data to include in or exclude from the dump
file. Selecting or deselecting the options alters the value of the argument
that Kdump specifies to the -d option of the core collector program,
makedumpfile.
Expert Settings
Allows you to choose which kernel to use, edit the command line options
that are passed to the kernel and the core collector program, choose
the default action if the dump fails, and modify the options to the core
collector program, makedumpfile.
The Unbreakable Enterprise Kernel supports the use of the
crashkernel=auto setting for UEK Release 3 Quarterly Update 1
and later. If you use the crashkernel=auto setting, the output of the
dmesg command shows crashkernel=XM@0M, which is normal. The
setting actually reserves 128 MB plus 64 MB for each terabyte of physical
memory.
Note
You cannot configure crashkernel=auto
for Xen or for the UEK prior to UEK Release
3 Quarterly Update 1. Only standard settings
such as crashkernel=128M@48M are
supported. For systems with more than 128
GB of memory, the recommended setting is
crashkernel=512M@64M.
You can select one of five default actions should the dump fail:
mount rootfs and run /sbin/init
reboot
shell
halt
poweroff
/boot/grub2/grub.cfg
Specifies the amount of memory to reserve for Kdump by means of the
crashkernel setting on the kernel boot command line.
/etc/kdump.conf
Sets the location where the dump file can be written, the filtering level
for the makedumpfile command, and the default behavior to take if
the dump fails. See the comments in the file for information about the
supported parameters.
If you edit these files, you must reboot the system for the changes to take effect.
For more information, see the kdump.conf(5) manual page.
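For illustration, a minimal /etc/kdump.conf might contain entries such as the following; the path and filtering level shown are examples, not required values:
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31
default reboot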
where cluster_name is the name of the cluster. To set the value after each reboot of the system, add
this line to /etc/rc.local. To restore the default behavior, set the value of fence_method to reset
instead of panic and remove the line from /etc/rc.local.
For more information, see Section 23.3.5, Configuring the Behavior of Fenced Nodes.
2. Download the appropriate debuginfo and debuginfo-common packages for the vmcore or kernel
that you want to examine from https://oss.oracle.com/ol6/debuginfo/:
If you want to examine the running Unbreakable Enterprise Kernel on the system, use commands
such as the following to download the packages:
# export DLP="https://oss.oracle.com/ol6/debuginfo"
# wget ${DLP}/kernel-uek-debuginfo-`uname -r`.rpm
# wget ${DLP}/kernel-uek-debuginfo-common-`uname -r`.rpm
If you want to examine the running Red Hat Compatible Kernel on the system, use commands such
as the following to download the packages:
# export DLP="https://oss.oracle.com/ol6/debuginfo"
# wget ${DLP}/kernel-debuginfo-`uname -r`.rpm
# wget ${DLP}/kernel-debuginfo-common-`uname -r`.rpm
If you want to examine a vmcore file that relates to a different kernel than the one that is currently running,
download the appropriate debuginfo and debuginfo-common packages for the kernel that
produced the vmcore, for example:
# export DLP="https://oss.oracle.com/ol6/debuginfo"
# wget ${DLP}/kernel-uek-debuginfo-2.6.32-300.27.1.el6uek.x86_64.rpm
# wget ${DLP}/kernel-uek-debuginfo-common-2.6.32-300.27.1.el6uek.x86_64.rpm
Note
If the vmcore file was produced by Kdump, you can use the following crash
command to determine the version:
# crash --osrelease /var/tmp/vmcore/2013-0211-2358.45-host03.28.core
2.6.39-200.24.1.el6uek.x86_64
The vmlinux kernel object file (also known as the namelist file) that crash requires is installed in
/usr/lib/debug/lib/modules/kernel_version/.
Running crash
To examine a vmcore file, specify the path to the file as an argument, for example:
# crash /var/tmp/vmcore/2013-0211-2358.45-host03.28.core
The following crash output is from a vmcore file that was dumped after a system panic:
      KERNEL: /usr/lib/debug/lib/modules/2.6.39-200.24.1.el6uek.x86_64/vmlinux
    DUMPFILE: /var/tmp/vmcore/2013-0211-2358.45-host03.28.core
        CPUS: 2
        DATE: Fri Feb 11 16:55:41 2013
      UPTIME: 04:24:54
LOAD AVERAGE: 0.00, 0.01, 0.05
       TASKS: 84
    NODENAME: host03.mydom.com
     RELEASE: 2.6.39-200.24.1.el6uek.x86_64
     VERSION: #1 SMP Sat Jun 23 02:39:07 EDT 2012
     MACHINE: x86_64 (2992 MHz)
      MEMORY: 2 GB
       PANIC: "Oops: 0002" (check log for details)
         PID: 1696
     COMMAND: "insmod"
        TASK: c74de000
         CPU: 0
       STATE: TASK_RUNNING (PANIC)
crash>
The output includes the number of CPUs, the load average over the last 1 minute, last 5 minutes, and
last 15 minutes, the number of tasks running, the amount of memory, the panic string, and the command
that was executing at the time the dump was created. In this example, an attempt by insmod to install a
module resulted in an oops violation.
At the crash> prompt, you can enter help or ? to display the available crash commands. Enter help
command to display more information for a specified command.
crash commands can be grouped according to purpose:
Kernel Data Structure Analysis Commands
Display kernel text and data structures. See Section 10.2.3, Kernel
Data Structure Analysis Commands.
Helper Commands
Perform calculation, translation, and search functions.
The pointer-to (*) command can be used instead of the struct or union commands. The gdb module calls the
appropriate function. For example:
crash> *buffer_head
struct buffer_head {
long unsigned int b_state;
struct buffer_head *b_this_page;
struct page *b_page;
sector_t b_blocknr;
size_t b_size;
char *b_data;
struct block_device *b_bdev;
bh_end_io_t *b_end_io;
void *b_private;
struct list_head b_assoc_buffers;
struct address_space *b_assoc_map;
atomic_t b_count;
}
SIZE: 104
dis
Disassembles source code instructions of a complete kernel function, from a specified address
for a specified number of instructions, or from the beginning of a function up to a specified
address. For example:
crash> dis fixup_irqs
0xffffffff81014486 <fixup_irqs>:        push   %rbp
0xffffffff81014487 <fixup_irqs+1>:      mov    %rsp,%rbp
0xffffffff8101448a <fixup_irqs+4>:      push   %r15
0xffffffff8101448c <fixup_irqs+6>:      push   %r14
0xffffffff8101448e <fixup_irqs+8>:      push   %r13
0xffffffff81014490 <fixup_irqs+10>:     push   %r12
0xffffffff81014492 <fixup_irqs+12>:     push   %rbx
0xffffffff81014493 <fixup_irqs+13>:     sub    $0x18,%rsp
0xffffffff81014497 <fixup_irqs+17>:     nopl   0x0(%rax,%rax,1)
...
struct
Displays a structure definition, or a formatted display of the contents of a structure at a specified
address. The following output shows the end of an example display:
...
int hotpluggable;
struct sys_device sysdev;
}
SIZE: 88
sym
Translates a kernel symbol name to a kernel virtual address and section, or a kernel virtual
address to a symbol name and section. You can also query (-q) the symbol list for all symbols
containing a specified string or list (-l) all kernel symbols. For example:
crash> sym jiffies
ffffffff81b45880 (A) jiffies
crash> sym -q runstate
c590 (d) per_cpu__runstate
c5c0 (d) per_cpu__runstate_snapshot
ffffffff8100e563 (T) xen_setup_runstate_info
crash> sym -l
0 (D) __per_cpu_start
0 (D) per_cpu__irq_stack_union
4000 (D) per_cpu__gdt_page
5000 (d) per_cpu__exception_stacks
b000 (d) per_cpu__idt_desc
b010 (d) per_cpu__xen_cr0_value
b018 (D) per_cpu__xen_vcpu
b020 (D) per_cpu__xen_vcpu_info
b060 (d) per_cpu__mc_buffer
c570 (D) per_cpu__xen_mc_irq_flags
c578 (D) per_cpu__xen_cr3
c580 (D) per_cpu__xen_current_cr3
c590 (d) per_cpu__runstate
c5c0 (d) per_cpu__runstate_snapshot
...
union
Similar to the struct command, displaying kernel data types that are defined as unions instead
of structures.
whatis
Displays the definition of structures, unions, typedefs, or text or data symbols. For example:
crash> whatis linux_binfmt
struct linux_binfmt {
struct list_head lh;
struct module *module;
int (*load_binary)(struct linux_binprm *, struct pt_regs *);
int (*load_shlib)(struct file *);
int (*core_dump)(long int, struct pt_regs *, struct file *, long unsigned int);
long unsigned int min_coredump;
int hasvdso;
}
SIZE: 64
bt
Displays a kernel stack trace of the current context or of a specified PID or task. In the case of a
dump that followed a kernel panic, the command traces the functions that were called leading up
to the panic. For example:
crash> bt
PID: 10651 TASK: d1347000 CPU: 1
COMMAND: "insmod"
#0 [d1547e44] die at c010785a
#1 [d1547e54] do_invalid_op at c0107b2c
#2 [d1547f0c] error_code (via invalid_op) at c01073dc
...
You can use the -l option to display the line number of the source file that corresponds to each
function call in a stack trace.
crash> bt -l 1
PID: 1      TASK: ffff88007d032040  CPU: 1   COMMAND: "init"
#0 [ffff88007d035878] schedule at ffffffff8144fdd4
/usr/src/debug/kernel-2.6.32/linux-2.6.32.x86_64/kernel/sched.c: 3091
#1 [ffff88007d035950] schedule_hrtimeout_range at ffffffff814508e4
/usr/src/debug/kernel-2.6.32/linux-2.6.32.x86_64/arch/x86/include/asm/current.h: 14
#2 [ffff88007d0359f0] poll_schedule_timeout at ffffffff811297d5
/usr/src/debug/kernel-2.6.32/linux-2.6.32.x86_64/arch/x86/include/asm/current.h: 14
#3 [ffff88007d035a10] do_select at ffffffff81129d72
/usr/src/debug/kernel-2.6.32/linux-2.6.32.x86_64/fs/select.c: 500
#4 [ffff88007d035d80] core_sys_select at ffffffff8112a04c
/usr/src/debug/kernel-2.6.32/linux-2.6.32.x86_64/fs/select.c: 575
#5 [ffff88007d035f10] sys_select at ffffffff8112a326
/usr/src/debug/kernel-2.6.32/linux-2.6.32.x86_64/fs/select.c: 615
#6 [ffff88007d035f80] system_call_fastpath at ffffffff81011cf2
    /usr/src/debug/kernel-2.6.32/linux-2.6.32.x86_64/arch/x86/kernel/entry_64.S: 488
RIP: 00007fce20a66243 RSP: 00007fff552c1038 RFLAGS: 00000246
RAX: 0000000000000017 RBX: ffffffff81011cf2 RCX: ffffffffffffffff
RDX: 00007fff552c10e0 RSI: 00007fff552c1160 RDI: 000000000000000a
RBP: 0000000000000000
R8: 0000000000000000
R9: 0000000000000200
R10: 00007fff552c1060 R11: 0000000000000246 R12: 00007fff552c1160
R13: 00007fff552c10e0 R14: 00007fff552c1060 R15: 00007fff552c121f
ORIG_RAX: 0000000000000017 CS: 0033 SS: 002b
bt is probably the most useful crash command. It has a large number of options that you can
use to examine a kernel stack trace. For more information, enter help bt.
dev
Displays character and block device data. The -d and -i options display disk I/O statistics and
I/O port usage. For example:
crash> dev
CHRDEV  NAME        CDEV              OPERATIONS
     1  mem         ffff88007d2a66c0  memory_fops
     4  /dev/vc/0   ffffffff821f6e30  console_fops
     4  tty         ffff88007a395008  tty_fops
     4  ttyS        ffff88007a3d3808  tty_fops
     5  /dev/tty    ffffffff821f48c0  tty_fops
...
BLKDEV  NAME        GENDISK           OPERATIONS
     1  ramdisk     ffff88007a3de800  brd_fops
   259  blkext      (none)
     7  loop        ffff880037809800  lo_fops
     8  sd          ffff8800378e9800  sd_fops
     9  md          (none)
...
crash> dev -d
MAJOR  GENDISK             NAME  REQUEST QUEUE       TOTAL  ASYNC  SYNC  DRV
    8  0xffff8800378e9800  sda   0xffff880037b513e0     10      0    10    0
   11  0xffff880037cde400  sr0   0xffff880037b50b10      0      0     0    0
  253  0xffff880037902c00  dm-0  0xffff88003705b420      0      0     0    0
  253  0xffff880037d5f000  dm-1  0xffff88003705ab50      0      0     0    0
crash> dev -i
RESOURCE          RANGE      NAME
ffffffff81a9e1e0  0000-ffff  PCI IO
ffffffff81a96e30  0000-001f  dma1
ffffffff81a96e68  0020-0021  pic1
ffffffff81a96ea0  0040-0043  timer0
ffffffff81a96ed8  0050-0053  timer1
ffffffff81a96f10  0060-0060  keyboard
...
files
Displays information about files that are open in the current context or in the context of a specific
PID or task. For example:
crash> files 12916
PID: 12916 TASK: ffff8800276a2480 CPU: 0
COMMAND: "firefox"
ROOT: /
CWD: /home/guest
FD      FILE             DENTRY           INODE            TYPE  PATH
0 ffff88001c57ab00 ffff88007ac399c0 ffff8800378b1b68 CHR /null
1 ffff88007b315cc0 ffff88006046f800 ffff8800604464f0 REG /home/guest/.xsession-errors
2 ffff88007b315cc0 ffff88006046f800 ffff8800604464f0 REG /home/guest/.xsession-errors
3 ffff88001c571a40 ffff88001d605980 ffff88001be45cd0 REG /home/guest/.mozilla/firefox
4 ffff88003faa7300 ffff880063d83440 ffff88001c315bc8 SOCK
5 ffff88003f8f6a40 ffff88007b41f080 ffff88007aef0a48 FIFO
...
fuser
Displays the tasks that reference a specified file name or inode address as the current root
directory, current working directory, open file descriptor, or that memory map the file. For
example:
crash> fuser /home/guest
 PID   TASK              COMM             USAGE
 2990  ffff88007a2a8440  "gnome-session"  cwd
 3116  ffff8800372e6380  "gnome-session"  cwd
 3142  ffff88007c54e540  "metacity"       cwd
 3147  ffff88007aa1e440  "gnome-panel"    cwd
 3162  ffff88007a2d04c0  "nautilus"       cwd
 3185  ffff88007c00a140  "bluetooth-appl  cwd
...
irq
Displays interrupt request queue data, including the handler chip (for example, <ioapic_chip>
for "IO-APIC") and its startup, shutdown, enable, disable, ack, and mask functions
(<startup_ioapic_irq>, <default_shutdown>, <default_enable>, <default_disable>,
<ack_apic_edge>, and <mask_IO_APIC_irq>).
kmem
Displays the state of the kernel memory. For example, a summary of memory and swap usage:
              PAGES     TOTAL   PERCENTAGE
 TOTAL MEM   512658      2 GB   ----
      FREE    20867   81.5 MB    4% of TOTAL MEM
      USED   491791    1.9 GB   95% of TOTAL MEM
    SHARED   176201  688.3 MB   34% of TOTAL MEM
   BUFFERS     8375   32.7 MB    1% of TOTAL MEM
    CACHED   229933  898.2 MB   44% of TOTAL MEM
      SLAB    39551  154.5 MB    7% of TOTAL MEM

TOTAL SWAP  1032190    3.9 GB   ----
 SWAP USED     2067    8.1 MB    0% of TOTAL SWAP
 SWAP FREE  1030123    3.9 GB   99% of TOTAL SWAP
kmem has a large number of options. For more information, enter help kmem.
log
Displays the kernel message buffer in chronological order. This is the same data that dmesg
displays but the output can include messages that never made it to syslog or disk.
mach
Displays machine-specific information such as the cpuinfo structure and the physical memory
map.
mod
Displays information about the currently installed kernel modules. The -s and -S options load
debug data (if available) from the specified module object files to enable symbolic debugging.
mount
Displays the currently mounted file systems.
net
Displays network-related data, such as device and protocol information.
ps
Displays the status of processes, including the state (ST), percentage of memory (%MEM),
virtual and resident set sizes (VSZ, RSS), and command name (COMM). For example:
ST  %MEM     VSZ     RSS  COMM
IN   4.0  215488   84880  Xorg
RU   6.9  277632  145612  crash
IN   0.1  108464    1984  bash
IN   0.1  108464    1896  bash
pte
Translates a page table entry (PTE) to the physical page address and page bit settings. If the PTE
refers to a swap location, the command displays the swap device and offset.
runq
Displays the list of tasks that are on the run queue of each CPU.
sig
Displays signal-handling information for the current context or for a specified PID or task.
swap
Displays information about the configured swap devices.
task
Displays the contents of the task_struct for the current context or for a specified PID or task.
timer
Displays the entries in the timer queue.
vm
Displays the virtual memory data, including the addresses of mm_struct and the page directory,
resident set size, and total virtual memory size for the current context or for a specified PID or
task.
vtop
Translates a user or kernel virtual address to a physical address. The command also displays the
PTE translation, vm_area_struct data for user virtual addresses, mem_map page data for a
physical page, and the swap location or file location if the page is not mapped.
waitq
Displays the tasks that are queued on a wait queue.
ascii
Translates a hexadecimal value to ASCII. With no argument, the command displays an ASCII
chart.
btop
Translates a hexadecimal address to a page number.
eval
Evaluates an expression and displays the result in hexadecimal, decimal, octal, and binary. For
example:
crash> eval 4g / 0x100
hexadecimal: 1000000 (16MB)
decimal: 16777216
octal: 100000000
binary: 0000000000000000000000000000000000000001000000000000000000000000
list
Displays the contents of a linked list of data objects, typically structures, starting at a specified
address.
ptob
Translates a page number to its physical (byte) address.
ptov
Translates a physical address to a kernel virtual address.
search
Searches for a specified value in a specified range of user virtual memory, kernel virtual
memory, or physical memory.
rd
Displays a selected range of user virtual memory, kernel virtual memory, or physical memory
using the specified format.
wr
Modifies the contents of memory. Use this command with care.
alias
Defines an alias for a command. With no argument, the command displays the
current list of aliases.
exit, q, or quit
Ends the crash session.
extend
Loads or unloads the specified crash extension shared object libraries.
foreach
Executes a bt, files, net, task, set, sig, vm, or vtop command on multiple
tasks.
gdb
Passes its arguments to the gdb debugger for processing.
repeat
Repeats a command indefinitely until you type Ctrl-C. This command is only
useful when you use crash to examine a live system.
set
Sets the context to a specified PID or task. With no argument, the command
displays the current context.
Use ps | grep UN to check for processes in the TASK_UNINTERRUPTIBLE state (D state), usually
because they are waiting on I/O. Such processes contribute to the load average and cannot be killed.
Use files to display the files that a process had open.
You can use shell indirection operators to save output from a command to a file for later analysis or to pipe the
output through commands such as grep, for example:
crash> foreach files > files.txt
crash> foreach bt | grep bash
PID: 3685   TASK: ffff880058714580  CPU: 1  COMMAND: "bash"
PID: 11853  TASK: ffff88001c6826c0  CPU: 0  COMMAND: "bash"
Table of Contents
11 Network Configuration ................................................................................................................
11.1 About Network Interfaces ................................................................................................
11.2 About Network Interface Names ......................................................................................
11.3 About Network Configuration Files ...................................................................................
11.3.1 /etc/hosts .............................................................................................................
11.3.2 /etc/nsswitch.conf .................................................................................................
11.3.3 /etc/resolv.conf .....................................................................................................
11.3.4 /etc/sysconfig/network ...........................................................................................
11.4 Command-line Network Configuration Interfaces ...............................................................
11.5 Configuring Network Interfaces Using Graphical Interfaces ................................................
11.6 About Network Interface Bonding ....................................................................................
11.6.1 Configuring Network Interface Bonding ..................................................................
11.7 About Network Interface Teaming ....................................................................................
11.7.1 Configuring Network Interface Teaming .................................................................
11.7.2 Adding Ports to and Removing Ports from a Team .................................................
11.7.3 Changing the Configuration of a Port in a Team ....................................................
11.7.4 Removing a Team ................................................................................................
11.7.5 Displaying Information About Teams .....................................................................
11.8 Configuring VLANs with Untagged Data Frames ...............................................................
11.8.1 Using the ip Command to Create VLAN Devices ...................................................
11.9 Configuring Network Routing ...........................................................................................
12 Network Address Configuration ..................................................................................................
12.1 About the Dynamic Host Configuration Protocol ...............................................................
12.2 Configuring a DHCP Server ............................................................................................
12.3 Configuring a DHCP Client ..............................................................................................
12.4 About Network Address Translation .................................................................................
13 Name Service Configuration .......................................................................................................
13.1 About DNS and BIND .....................................................................................................
13.2 About Types of Name Servers ........................................................................................
13.3 About DNS Configuration Files ........................................................................................
13.3.1 /etc/named.conf ....................................................................................................
13.3.2 About Resource Records in Zone Files .................................................................
13.3.3 About Resource Records for Reverse-name Resolution .........................................
13.4 Configuring a Name Server .............................................................................................
13.5 Administering the Name Service ......................................................................................
13.6 Performing DNS Lookups ................................................................................................
14 Network Time Configuration .......................................................................................................
14.1 About the chronyd Daemon .............................................................................................
14.1.1 Configuring the chronyd Service ...........................................................................
14.2 About the NTP Daemon ..................................................................................................
14.2.1 Configuring the ntpd Service .................................................................................
14.3 About PTP ......................................................................................................................
14.3.1 Configuring the PTP Service .................................................................................
14.3.2 Using PTP as a Time Source for NTP ...................................................................
15 Web Service Configuration .........................................................................................................
15.1 About the Apache HTTP Server ......................................................................................
15.2 Installing the Apache HTTP Server ..................................................................................
15.3 Configuring the Apache HTTP Server ..............................................................................
15.4 Testing the Apache HTTP Server ....................................................................................
15.5 Configuring Apache Containers .......................................................................................
15.5.1 About Nested Containers ......................................................................................
This chapter describes how to configure a system's network interfaces and network routing.
In this example, there are two configuration files for motherboard-based Ethernet interfaces, ifcfg-em1
and ifcfg-em2, and one for the loopback interface, ifcfg-lo. The system reads the configuration files
at boot time to configure the network interfaces.
On your system, you might see other names for network interfaces. See Section 11.2, About Network
Interface Names.
The following are sample entries from an ifcfg-em1 file for a network interface that obtains its IP address
using the Dynamic Host Configuration Protocol (DHCP):
DEVICE="em1"
NM_CONTROLLED="yes"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System em1"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=08:00:27:16:C3:33
PEERDNS=yes
PEERROUTES=yes
If the interface is configured with a static IP address, the file contains entries such as the following:
DEVICE="em1"
NM_CONTROLLED="yes"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System em1"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
HWADDR=08:00:27:16:C3:33
IPADDR=192.168.1.101
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
PEERDNS=yes
PEERROUTES=yes
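After editing an interface file such as ifcfg-em1, you would typically bring the interface down and up again for the changes to take effect, for example:
# ifdown em1
# ifup em1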
The following configuration parameters are typically used in interface configuration files:
BOOTPROTO
How the interface obtains its IP address settings:
dhcp
The interface obtains its address settings by using DHCP.
none
The address settings are statically configured.
BROADCAST
IPv4 broadcast address of the interface.
DEFROUTE
Whether this interface provides the default route.
DEVICE
Name of the physical network interface device (or a PPP logical device).
GATEWAYN
IPv4 gateway address for the interface. As an interface can be associated with
several combinations of IP address, network mask prefix length, and gateway
address, these are numbered starting from 0.
HWADDR
Media access control (MAC) address of an Ethernet device.
IPADDRN
IPv4 address of the interface, numbered starting from 0.
IPV4_FAILURE_FATAL
Whether the interface is disabled if IPv4 configuration fails.
IPV6_DEFAULTGW
IPv6 gateway address for the interface.
IPV6_FAILURE_FATAL
Whether the interface is disabled if IPv6 configuration fails.
IPV6ADDR
IPv6 address of the interface in CIDR notation, including the network mask
prefix length. For example: IPV6ADDR="2001:0db8:1e11:115b::1/32"
IPV6INIT
Whether to enable IPv6 for the interface.
MASTER
Specifies the name of the master bonded interface, of which this interface is
a slave.
NAME
Name of the interface as displayed in the Network Connections GUI.
NETWORK
IPv4 network address of the interface.
NM_CONTROLLED
Whether NetworkManager configures the interface.
ONBOOT
Whether the interface is activated when the system boots.
PEERDNS
Whether the name server entries in /etc/resolv.conf are obtained from the
DHCP server.
PEERROUTES
Whether the information for the routing table entry that defines the default
gateway for the interface is obtained from the DHCP server.
PREFIXN
Length of the IPv4 network mask prefix, numbered starting from 0.
SLAVE
Specifies whether the interface is a slave of a master bonded interface.
TYPE
Interface type.
USERCTL
Whether users other than root can control the state of this interface.
UUID
Universally unique identifier for the network interface device.
pBsS[fF][dD]
PCI device with bus number B, slot number S, function number F, and
device ID D.
pBsS[fF][uP]...[cC][iI]
USB device with bus number B, slot number S, function number F, port
number P, configuration number C, and interface number I.
sS[fF][dD]
Hot-plug slot with slot number S, function number F, and device ID D.
xM
Device with MAC address M.
For example, an Ethernet port on the motherboard might be named eno1 or em1, depending on whether
the value of biosdevname is 0 or 1.
The kernel assigns a legacy, unpredictable network interface name (ethN and wlanN) only if it cannot
discover any information about the device that would allow it to disambiguate the device from other such
devices. You can use the net.ifnames=0 boot parameter to reinstate the legacy naming scheme.
Caution
Using the net.ifnames or biosdevname boot parameters to change the naming
scheme can render existing firewall rules invalid. Changing the naming scheme
can also affect other software that refers to network interface names.
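As a sketch of one way to make the net.ifnames=0 parameter persistent on a GRUB 2 system, append it to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the boot configuration; the other parameter values shown here are illustrative:
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0"
# grub2-mkconfig -o /boot/grub2/grub.cfg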
11.3.1 /etc/hosts
The /etc/hosts file associates host names with IP addresses. It allows the system to look up (resolve)
the IP address of a host given its name, or the name given the IP address. Most networks use DNS
(Domain Name System) to perform address or name resolution. Even if your network uses DNS, it is usual
to include lines in this file that specify the IPv4 and IPv6 addresses of the loopback device, for example:
127.0.0.1   localhost localhost.localdomain
::1         localhost6 localhost6.localdomain6
The first and second columns contain the IP address and host name. Additional columns contain aliases
for the host name.
For more information, see the hosts(5) manual page.
11.3.2 /etc/nsswitch.conf
The /etc/nsswitch.conf file configures how the system uses various databases and name resolution
mechanisms. The first field of entries in this file identifies the name of the database. The second field
defines a list of resolution mechanisms in the order in which the system attempts to resolve queries on the
database.
The following example hosts definition from /etc/nsswitch.conf indicates that the system first
attempts to resolve host names and IP addresses by querying files (that is, /etc/hosts) and, if that
fails, next by querying a DNS server, and last of all, by querying NIS+ (NIS version 3):
hosts:      files dns nisplus
11.3.3 /etc/resolv.conf
The /etc/resolv.conf file defines how the system uses DNS to resolve host names and IP addresses.
This file usually contains a line specifying the search domains and up to three lines that specify the IP
addresses of DNS servers. The following entries from /etc/resolv.conf configure two search domains
and three DNS servers:
search us.mydomain.com mydomain.com
nameserver 192.168.154.3
nameserver 192.168.154.4
nameserver 10.216.106.3
If your system obtains its IP address from a DHCP server, it is usual for the system to configure the
contents of this file with information also obtained using DHCP.
For more information, see the resolv.conf(5) manual page.
11.3.4 /etc/sysconfig/network
The /etc/sysconfig/network file specifies additional information that applies to all network interfaces
on the system. The following entries from /etc/sysconfig/network specify that IPv4 networking is
enabled, that IPv6 networking is not enabled, the host name of the system, and the IP address of the default
network gateway:
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=host20.mydomain.com
GATEWAY=192.168.1.1
You can use the nmcli device status command to display the type and state of each network device, for example:
# nmcli device status
DEVICE  TYPE      STATE
em1     ethernet  connected
em2     ethernet  connected
lo      loopback  unmanaged
You can use the ip command to display the status of an interface, for debugging, or for system tuning. For
example, to display the status of all active interfaces:
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:16:c3:33 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global em1
inet6 fe80::a00:27ff:fe16:c333/64 scope link
valid_lft forever preferred_lft forever
For each network interface, the output shows the current IP address, and the status of the interface. To
display the status of a single interface such as em1, specify its name as shown here:
# ip addr show dev em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:16:c3:33 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global em1
inet6 fe80::a00:27ff:fe16:c333/64 scope link
valid_lft forever preferred_lft forever
You can also use ip to set properties and activate a network interface. The following example sets the IP
address of the em2 interface and activates it:
# ip addr add 10.1.1.1/24 dev em2
# ip link set em2 up
Note
You might be used to using the ifconfig command to perform these operations.
However, ifconfig is considered obsolete and will eventually be replaced
altogether by the ip command.
Any settings that you configure for network interfaces using ip do not persist across system reboots.
To make the changes permanent, set the properties in the /etc/sysconfig/network-scripts/
ifcfg-interface file.
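For example, the ip commands shown above might be made persistent with an ifcfg file sketch such as the following (the BOOTPROTO and ONBOOT values are assumptions):

```shell
# /etc/sysconfig/network-scripts/ifcfg-em2 -- illustrative example
DEVICE=em2
ONBOOT=yes
BOOTPROTO=none
IPADDR0=10.1.1.1
PREFIX0=24
```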
Any changes that you make to an interface file in /etc/sysconfig/network-scripts do not take
effect until you restart the network service or bring the interface down and back up again. For example, to
restart the network service:
# systemctl restart network
To restart an individual interface, you can use the ifup or ifdown commands, which invoke the script in /
etc/sysconfig/network-scripts that corresponds to the interface type, for example:
# ifdown em1
# ifup em1
Connection successfully activated
(D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
Alternatively, you can use the ip command to stop and start network activity on an interface without
completely tearing down and rebuilding its configuration:
# ip link set em1 down
# ip link set em1 up
The ethtool utility is useful for diagnosing potentially mismatched settings that affect performance, and
allows you to query and set the low-level properties of a network device. Any changes that you make using
ethtool do not persist across a reboot. To make the changes permanent, modify the settings in the
device's ifcfg-interface file in /etc/sysconfig/network-scripts.
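For example, you might query an interface's settings and then force its speed, duplex, and autonegotiation behavior as follows (a sketch; the values shown are illustrative and depend on the capabilities of the NIC):

```shell
# Display the current settings of em1:
# ethtool em1
# Force the speed, duplex, and autonegotiation settings:
# ethtool -s em1 speed 1000 duplex full autoneg off
```

To make such settings persistent, an ETHTOOL_OPTS entry can be added to the interface's ifcfg file, for example ETHTOOL_OPTS="speed 1000 duplex full autoneg off".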
For more information, see the ethtool(8), ifup(8), ip(8), and nmcli(1) manual pages.
To create a new network interface, click +, select the interface type (VPN, Bond, Bridge, or VLAN). To
edit an existing interface, select it from the list and click the gear icon.
Alternatively, you can use the Network Connections editor to configure wired, wireless, mobile broadband,
Virtual Private Network (VPN), Digital Subscriber Link (DSL), and virtual (bond, bridge, team, and VLAN)
interfaces. You can open this window by using the nm-connection-editor command. Figure 11.2
shows the Network Connections editor.
Figure 11.2 Network Connections Editor
To create a new network interface, click Add, select the type of interface (hardware, virtual, or VPN)
and click Create. To edit an existing interface, select it from the list and click Edit. To remove a selected
interface, click Delete.
You can also use the nmcli command to manage network connections through NetworkManager. For
more information, see the nmcli(1) manual page.
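As a sketch, a bonded interface might be created with nmcli as follows (the connection and interface names are illustrative):

```shell
# nmcli con add type bond con-name bond0 ifname bond0 mode balance-rr
```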
This example sets the name of the bond to bond0 and its mode to balance-rr. For more information
about the available options for load balancing or ARP link monitoring, see /usr/share/doc/
iputils-*/README.bonding and the nmcli(1) manual page.
2. Add each interface to the bond:
# nmcli con add type bond-slave ifname em1 master bond0
# nmcli con add type bond-slave ifname em2 master bond0
After restarting the service, the bonded interface is available for use.
Network interface teaming is similar to network interface bonding and provides a way of implementing link
aggregation that is relatively maintenance-free, as well as being simpler to modify, expand, and debug as
compared with bonding.
A lightweight kernel driver implements teaming and the teamd daemon implements load-balancing and
failover schemes termed runners. The following standard runners are defined:
activebackup
Monitors the link for changes and selects the active port that is used to send packets.
broadcast
Transmits each outgoing packet over all of the member ports.
lacp
Provides load balancing by implementing the Link Aggregation Control Protocol 802.3ad
on the member ports.
loadbalance
In passive mode, uses the BPF hash function to select the port that is used to send
packets. In active mode, uses a balancing algorithm to distribute outgoing packets over the
available ports.
random
Selects a port at random to send each outgoing packet.
Note
UEK R3 does not currently support this runner mode.
roundrobin
Transmits outgoing packets over the available ports in a round-robin fashion.
For specialized applications, you can create customized runners that teamd can interpret. The teamdctl
command allows you to control the operation of teamd.
For more information, see the teamd.conf(5) manual page.
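As a sketch, a team that uses the activebackup runner might be created with nmcli as follows (the connection and interface names are illustrative):

```shell
# nmcli con add type team con-name team0 ifname team0 \
    config '{"runner": {"name": "activebackup"}}'
# nmcli con add type team-slave ifname em3 master team0
# nmcli con add type team-slave ifname em4 master team0
```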
"prio": -10,
"sticky": true
},
"em4": {
"prio": 100
}
}
}
Enclose the JSON-format definition in single quotes and do not split it over multiple lines.
For more information, see the teamdctl(8) manual page.
You can use the teamnl command to display information about the component ports of the team:
# teamnl team0 ports
5: em4: up 1000Mbit FD
4: em3: up 1000Mbit FD
To display the current state of the team, use the teamdctl command, for example:
# teamdctl team0 state
setup:
runner: activebackup
ports:
em3
link watches:
link summary: down
instance[link_watch_0]:
name: ethtool
link: down
em4
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: em4
You can also use teamdctl to display the JSON configuration of the team and each of its constituent
ports:
# teamdctl team0 config dump
{
"device": "team0",
"link_watch": {
"name": "ethtool"
},
"mcast_rejoin": {
"count": 1
},
"notify_peers": {
"count": 1
},
"ports": {
"em3": {
"prio": -10,
"sticky": true
},
"em4": {
"prio": 100
}
},
"runner": {
"name": "activebackup"
}
}
For more information, see the teamdctl(8) and teamnl(8) manual pages.
This example sets up the VLAN device bond0-pvid10 with a PVID of 10 for the bonded interface bond0.
In addition to the regular interface, bond0, which uses the physical LAN, you now have a VLAN device,
bond0-pvid10, which can use untagged frames to access the virtual LAN.
Note
You do not need to create virtual interfaces for the component interfaces of a
bonded interface. However, you must set the PVID on each switch port to which
they connect.
You can also use the nmcli command to set up a VLAN device for a non-bonded interface, for example:
# nmcli con add type vlan con-name em1-pvid5 ifname em1-pvid5 dev em1 id 5
To obtain information about the configured VLAN interfaces, view the files in the /proc/net/vlan
directory.
To create a default route for IPv4 network packets, include an entry for GATEWAY in the /etc/
sysconfig/network file. For example, the following entry configures the IP address of the gateway
system:
GATEWAY=192.0.2.1
If your system has more than one network interface, you can specify which interface should be used:
GATEWAY=192.0.2.1
GATEWAYDEV=em1
A single statement is usually sufficient to define the gateway for IPv6 packets, for example:
IPV6_DEFAULTGW="2001:db8:1e10:115b::2%em1"
Any changes that you make to /etc/sysconfig/network do not take effect until you restart the
network service:
# systemctl restart network
To display the routing table, use the ip route show command, for example:
# ip route show
10.0.2.0/24 dev em1  proto kernel  scope link  src 10.0.2.15
default via 10.0.2.2 dev em1  proto static
This example shows that packets destined for the local network (10.0.2.0/24) do not use the gateway. The
default entry means that any packets destined for addresses outside the local network are routed via the
gateway 10.0.2.2.
Note
You might be used to using the route command to configure routing. However,
route is considered obsolete and will eventually be replaced altogether by the ip
command.
You can also use the netstat -rn command to display this information:
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.0.2.0        0.0.0.0         255.255.255.0   U         0 0          0 em1
0.0.0.0         10.0.2.2        0.0.0.0         UG        0 0          0 em1
To add or delete a route from the table, use the ip route add or ip route del commands. For
example, to replace the entry for the static default route:
# ip route del default
# ip route show
10.0.2.0/24 dev em1 proto kernel scope link src 10.0.2.15
# ip ro add default via 10.0.2.1 dev em1 proto static
# ip route show
10.0.2.0/24 dev em1 proto kernel scope link src 10.0.2.15
default via 10.0.2.1 dev em1 proto static
To add a route to the network 10.0.3.0/24 via 10.0.3.1 over interface em2, and then delete that route:
# ip route add 10.0.3.0/24 via 10.0.3.1 dev em2
# ip route show
10.0.2.0/24 dev em1 proto kernel scope link src 10.0.2.15
10.0.3.0/24 via 10.0.3.1 dev em2
default via 10.0.2.2 dev em1 proto static
# ip route del 10.0.3.0/24
# ip route show
10.0.2.0/24 dev em1  proto kernel  scope link  src 10.0.2.15
default via 10.0.2.2 dev em1  proto static
The ip route get command is a useful feature that allows you to query the route on which the system
will send packets to reach a specified IP address, for example:
# ip route get 23.6.118.140
23.6.118.140 via 10.0.2.2 dev em1 src 10.0.2.15
cache mtu 1500 advmss 1460 hoplimit 64
In this example, packets to 23.6.118.140 are sent out of the em1 interface via the gateway 10.0.2.2.
Any changes that you make to the routing table using ip route do not persist across system reboots.
To configure static routes permanently, create a route-interface file in /etc/sysconfig/network-scripts for the interface. For example, you would configure a static route
for the em1 interface in a file named route-em1. An entry in these files can take the same format as the
arguments to the ip route add command.
For example, to define a default gateway entry for em1, create an entry such as the following in route-em1:
default via 10.0.2.1 dev em1
The following entry in route-em2 would define a route to 10.0.3.0/24 via 10.0.3.1 over em2:
10.0.3.0/24 via 10.0.3.1 dev em2
Any changes that you make to a route-interface file do not take effect until you restart either the
network service or the interface.
For more information, see the ip(8) and netstat(8) manual pages.
This chapter describes how to configure a DHCP server, DHCP client, and Network Address Translation.
2. Edit the /etc/dhcp/dhcpd.conf file to store the settings that the DHCP server can provide to the
clients.
The following example configures the domain name, a range of client addresses on the 192.168.2.0/24
subnet from 192.168.2.101 through 192.168.2.254 together with the IP addresses of the default
gateway and the DNS server, the default and maximum lease times in seconds, and a static IP address
for the application server svr01 that is identified by its MAC address:
option domain-name "mydom.org";
option domain-name-servers 192.168.2.1, 10.0.1.4;
option broadcast-address 192.168.2.255;
option routers 192.168.2.1;
The DHCP server sends the information in the option lines to each client when it requests a lease
on an IP address. An option applies only to a subnet if you define it inside a subnet definition. In the
example, the options are global and apply to both the subnet and host definitions. The subnet and
host definitions have different settings for the maximum lease time.
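The subnet and host definitions described above might look like the following sketch (the lease times, the MAC address, and the fixed address are illustrative assumptions):

```
subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.101 192.168.2.254;
  default-lease-time 10800;
  max-lease-time 43200;
}

host svr01 {
  hardware ethernet 80:56:3e:00:10:01;
  fixed-address 192.168.2.2;
  max-lease-time 86400;
}
```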
Note
In Oracle Linux 7, the DHCP server no longer reads its configuration from /
etc/sysconfig/dhcpd. Instead, it reads /etc/dhcp/dhcpd.conf to
determine the interfaces on which it should listen.
For more information and examples, see /usr/share/doc/dhcp-version/dhcpd.conf.sample
and the dhcpd(8) and dhcp-options(5) manual pages.
3. Touch the /var/lib/dhcpd/dhcpd.leases file, which stores information about client leases:
# touch /var/lib/dhcpd/dhcpd.leases
4. Enter the following commands to start the DHCP service and ensure that it starts after a reboot:
# systemctl start dhcpd
# systemctl enable dhcpd
For information about configuring a DHCP relay, see the dhcrelay(8) manual page.
4. To specify options for the client, such as the requested lease time and the network interface on which
to request an address from the server, create the file /etc/dhclient.conf containing the required
options.
The following example specifies that the client should use the em2 interface, request a lease time of 24
hours, and identify itself using its MAC address:
interface "em2" {
send dhcp-lease-time 86400;
send dhcp-client-identifier 80:56:3e:00:10:00;
}
When the client has requested and obtained a lease, information about this lease is stored in /var/
lib/dhclient/dhclient-interface.leases.
For more information, see the dhclient(8) manual page.
You can use the Firewall Configuration GUI (firewall-config) to configure masquerading and port
forwarding.
This chapter describes how to use BIND to set up a DNS name server.
The root domain, represented by the final period in the FQDN, is usually omitted, except in DNS
configuration files:
wiki.us.mydom.com
In this example, the top-level domain is com, mydom is a subdomain of com, us is a subdomain of mydom,
and wiki is the host name. Each of these domains are grouped into zones for administrative purposes.
A DNS server, or name server, stores the information that is needed to resolve the component domains
inside a zone. In addition, a zone's DNS server stores pointers to the DNS servers that are responsible for
resolving each subdomain.
If a client outside the us.mydom.com domain requests that its local name server resolve a FQDN such as
wiki.us.mydom.com into an IP address for which the name server is not authoritative, the name server
queries a root name server for the address of a name server that is authoritative for the com domain.
Querying this name server returns the IP address of a name server for mydom.com. In turn, querying this
name server returns the IP address of the name server for us.mydom.com, and querying this final name
server returns the IP address for the FQDN. This process is known as a recursive query, where the local
name server handles each referral from an external name server to another name server on behalf of the
resolver.
Iterative queries rely on the resolver being able to handle the referral from each external name server to
trace the name server that is authoritative for the FQDN. Most resolvers use recursive queries and so
cannot use name servers that support only iterative queries. Fortunately, most name servers support
recursive queries.
Oracle Linux provides the Berkeley Internet Name Domain (BIND) implementation of DNS. The bind
package includes the DNS server daemon (named), tools for working with DNS such as rndc, and a
number of configuration files, including:
/etc/named.conf
Contains settings for named and lists the location and characteristics of
the zone files for your domain. Zone files are usually stored in /var/
named.
/etc/named.rfc1912.zones
Contains several zone sections for resolving local loopback names and
addresses.
/var/named/named.ca
Contains a list of the root authoritative name servers (the root hints).
A name server can act as the master or a slave for a zone, as a caching-only server, or as a forwarding
server that forwards all queries to another name server and caches the results, which reduces local
processing, external access, and network traffic. In practice, a name server can be a combination of
several of these types in complex configurations.
13.3.1 /etc/named.conf
The main configuration file for named is /etc/named.conf, which contains settings for named and the
top-level definitions for zones, for example:
include "/etc/rndc.key";
controls {
inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; }
};
zone "us.mydom.com" {
type master;
file "master-data";
allow-update { key "rndc-key"; };
notify yes;
};
zone "mydom.com" IN {
type slave;
file "sec/slave-data";
allow-update { key "rndc-key"; };
masters {10.1.32.1;};
};
zone "2.168.192.in-addr.arpa" IN {
type master;
file "reverse-192.168.2";
allow-update { key rndc-key; };
notify yes;
};
The include statement allows external files to be referenced so that potentially sensitive data such as key
hashes can be placed in a separate file with restricted permissions.
The controls statement defines access information and the security requirements that are necessary to
use the rndc command with the named server:
inet
Specifies which hosts can run rndc to control named. In this example, rndc must be run on the
local host (127.0.0.1).
keys
Specifies the names of the keys that can be used. The example specifies using the key named
rndc-key, which is defined in /etc/rndc.key. Keys authenticate various actions by named and
are the primary method of controlling remote access and administration.
The zone statements define the role of the server in different zones.
The following zone options are used:
type
Specifies that this system is the master name server for the zone us.mydom.com and
a slave server for mydom.com. 2.168.192.in-addr.arpa is a reverse zone for
resolving IP addresses to host names. See Section 13.3.3, About Resource Records for
Reverse-name Resolution.
file
Specifies the path to the zone file relative to /var/named. The zone file for
us.mydom.com is stored in /var/named/master-data and the transferred zone data
for mydom.com is cached in /var/named/sec/slave-data.
allow-update
Specifies that a shared key must exist on both the master and a slave name server for
a zone transfer to take place from the master to the slave. The following is an example
record for a key in /etc/rndc.key:
key "rndc-key" {
algorithm hmac-md5;
secret "XQX8NmM41+RfbbSdcqOejg==";
};
notify
Specifies whether to notify the slave name servers when the zone information is
updated.
masters
Specifies the IP addresses of the master name servers from which a slave name
server requests zone data.
The next example is taken from the default /etc/named.conf file that is installed with the bind package,
and which configures a caching-only name server.
options {
    listen-on port 53 { 127.0.0.1; };
    listen-on-v6 port 53 { ::1; };
    directory           "/var/named";
    dump-file           "/var/named/data/cache_dump.db";
    statistics-file     "/var/named/data/named_stats.txt";
    memstatistics-file  "/var/named/data/named_mem_stats.txt";
    allow-query         { localnets; };
    recursion yes;
    dnssec-enable yes;
    dnssec-validation yes;
    dnssec-lookaside auto;
    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";
    managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
The options statement defines global server configuration options and sets defaults for other statements.
listen-on
The port and IPv4 addresses on which named listens for queries.
directory
The default directory for zone files and other working files.
dump-file
The file to which named dumps its cache if you run the rndc dumpdb command.
statistics-file
The file to which named writes statistics when you run the rndc stats command.
memstatistics-file
The file to which named writes its memory-usage statistics when it exits.
allow-query
The hosts that are allowed to query the name server.
recursion
Whether the name server performs recursive queries on behalf of clients.
dnssec-enable
Whether to use secure DNS (DNSSEC).
dnssec-validation
Whether the name server should validate replies from DNSSEC-enabled zones.
dnssec-lookaside
Whether to enable DNSSEC Lookaside Validation (DLV) using the key in the file
that bindkeys-file specifies.
A zone file contains resource records. Each record specifies a class (usually IN for Internet), a type, and
data appropriate to that type. The time to live (TTL) is the maximum time that a name server caches a
record before it checks whether a newer one is available. Commonly used record types include:
A (address)
IPv4 address that corresponds to a host name.
AAAA (address)
IPv6 address that corresponds to a host name.
MX (mail exchange)
Mail server that accepts mail for the domain.
NS (name server)
Authoritative name server for a zone.
PTR (pointer)
Host name that corresponds to an IP address, used for reverse-name resolution.
The following example shows the contents of a typical zone file such as /var/named/master-data:
$TTL 86400      ; 1 day
@ IN SOA dns.us.mydom.com. root.us.mydom.com. (
                57       ; serial
                28800    ; refresh (8 hours)
                7200     ; retry (2 hours)
                2419200  ; expire (4 weeks)
                86400    ; minimum (1 day)
                )
                IN NS     dns.us.mydom.com.
dns             IN A      192.168.2.1
us.mydom.com    IN A      192.168.2.1
svr01           IN A      192.168.2.2
www             IN CNAME  svr01
host01          IN A      192.168.2.101
host02          IN A      192.168.2.102
host03          IN A      192.168.2.103
...
dns.us.mydom.com.
The fully qualified domain name of the name server, including a trailing period
(.) for the root domain.
root.us.mydom.com.
The email address of the person who is responsible for the zone
(root@us.mydom.com), with the @ symbol replaced by a period.
serial
A counter that you should increment each time that you modify the zone file, so
that slave name servers know to refresh their copies of the zone data.
refresh
The time after which a master name server notifies slave name servers that they
should refresh their database.
retry
If a refresh fails, the time that a slave name server should wait before attempting
another refresh.
expire
The maximum elapsed time that a slave name server has to complete a refresh
before its zone records are no longer considered authoritative and it will stop
answering queries.
minimum
The minimum time for which other servers should cache information obtained
from this zone.
The characteristics for a zone's in-addr.arpa or ip6.arpa domains are usually defined in /etc/
named.conf, for example:
zone "2.168.192.in-addr.arpa" IN {
type master;
file "reverse-192.168.2";
allow-update { key rndc-key; };
notify yes;
};
The zone's name consists of in-addr.arpa preceded by the network portion of the IP address for the
domain with its dotted quads written in reverse order.
If your network does not have a prefix length that is a multiple of 8, see RFC 2317 for the format that you
should use instead.
The PTR records in in-addr.arpa or ip6.arpa domains define host names that correspond to the host
portion of the IP address. The following example is taken from the /var/named/reverse-192.168.2
zone file:
$TTL 86400
;
@ IN SOA dns.us.mydom.com. root.us.mydom.com. (
                57       ; serial
                28800    ; refresh
                7200     ; retry
                2419200  ; expire
                86400    ; minimum
                )
        IN NS   dns.us.mydom.com.
1       IN PTR  dns.us.mydom.com.
1       IN PTR  us.mydom.com.
2       IN PTR  svr01.us.mydom.com.
101     IN PTR  host01.us.mydom.com.
102     IN PTR  host02.us.mydom.com.
103     IN PTR  host03.us.mydom.com.
...
This line causes NetworkManager to add the following entry to /etc/resolv.conf when the
network service starts:
nameserver 127.0.0.1
5. Restart the network service, restart the named service, and configure named to start following system
reboots:
# systemctl restart network
# systemctl start named
# systemctl enable named
If you modify the named configuration file or zone files, rndc reload instructs named to reload the files:
# rndc reload
server reload successful
For more information, see the named(8), rndc(8) and rndc-confgen(8) manual pages.
Perform a reverse lookup for the domain name that corresponds to an IP address:
$ host 192.168.2.101
Use the -v and -t options to display verbose information about records of a certain type:
$ host -v -t MX www.mydom.com
Trying "www.mydom.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49643
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;www.mydom.com.
IN MX
;; ANSWER SECTION:
www.mydom.com. 135 IN CNAME www.mydom.com.acme.net.
www.mydom.com.acme.net. 1240 IN CNAME d4077.c.miscacme.net.
;; AUTHORITY SECTION:
c.miscacme.net. 2000 IN SOA m0e.miscacme.net. hostmaster.misc.com. ...
Received 163 bytes from 10.0.0.1#53 in 40 ms
The -a option (equivalent to -v -t ANY) displays all available records for a zone:
$ host -a www.us.mydom.com
Trying "www.us.mydom.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40030
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;www.us.mydom.com.
IN ANY
;; ANSWER SECTION:
www.us.mydom.com. 263 IN CNAME www.us.mydom.acme.net.
Received 72 bytes from 10.0.0.1#53 in 32 ms
This chapter describes how to configure a system to use the chrony, Network Time Protocol (NTP), or
Precision Time Protocol (PTP) daemons for setting the system time.
server NTP_server_1
server NTP_server_2
server NTP_server_3
driftfile /var/lib/chrony/drift
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
The commandkey directive specifies the keyfile entry that chronyd uses to authenticate both
chronyc commands and NTP packets. The generatecommandkey directive causes chronyd to
generate an SHA1-based password automatically when the service starts.
To configure chronyd to act as an NTP server for a specified client or subnet, use the allow directive,
for example:
server NTP_server_1
server NTP_server_2
server NTP_server_3
allow 192.168.2/24
driftfile /var/lib/chrony/drift
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
If a system has only intermittent access to NTP servers, the following configuration might be
appropriate:
server NTP_server_1 offline
server NTP_server_2 offline
server NTP_server_3 offline
driftfile /var/lib/chrony/drift
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
If you specify the offline keyword, chronyd does not poll the NTP servers until it is told that network
access is available. You can use the chronyc -a online and chronyc -a offline commands to
inform chronyd of the state of network access.
3. If remote access to the local NTP service is required, configure the system firewall to allow access to
the NTP service in the appropriate zones, for example:
# firewall-cmd --zone=zone --add-service=ntp
success
# firewall-cmd --zone=zone --permanent --add-service=ntp
success
4. Start the chronyd service and configure it to start following a system reboot.
# systemctl start chronyd
# systemctl enable chronyd
You can use the chronyc command to display information about the operation of chronyd or to change
its configuration, for example:
# chronyc -a
chrony version version
...
200 OK
chronyc> sources
210 Number of sources = 4
MS Name/IP address           Stratum Poll Reach LastRx Last sample
===============================================================================
^+ service1-eth3.debrecen.hp       2    6    37     21  -2117us[-2302us] +/-   50ms
^* ns2.telecom.lt                  2    6    37     21   -811us[ -997us] +/-   40ms
^+ strato-ssd.vpn0.de              2    6    37     21   +408us[ +223us] +/-   78ms
^+ kvm1.websters-computers.c       2    6    37     22  +2139us[+1956us] +/-   54ms
chronyc> sourcestats
210 Number of sources = 4
Name/IP Address             NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
service1-eth3.debrecen.hp    5   4   259     -0.394     41.803  -2706us    502us
ns2.telecom.lt               5   4   260     -3.948     61.422   +822us    813us
strato-ssd.vpn0.de           5   3   259      1.609     68.932   -581us    801us
kvm1.websters-computers.c    5   5   258     -0.263      9.586  +2008us    118us
chronyc> tracking
Reference ID    : 212.59.0.2 (ns2.telecom.lt)
Stratum         : 3
Ref time (UTC)  : Tue Sep 30 12:33:16 2014
System time     : 0.000354079 seconds slow of NTP time
Last offset     : -0.000186183 seconds
RMS offset      : 0.000186183 seconds
Frequency       : 28.734 ppm slow
Residual freq   : -0.489 ppm
Skew            : 11.013 ppm
Root delay      : 0.065965 seconds
Root dispersion : 0.007010 seconds
Update interval : 64.4 seconds
Leap status     : Normal
chronyc> exit
Using the -a option to chronyc is equivalent to entering the authhash and password subcommands,
and saves you from having to specify the hash type and password every time that you use chronyc:
# cat /etc/chrony.keys
1 SHA1 HEX:4701E4D70E44B8D0736C8A862CFB6B8919FE340E
# chronyc
...
chronyc> authhash SHA1
chronyc> password HEX:4701E4D70E44B8D0736C8A862CFB6B8919FE340E
200 OK
For more information, see the chrony(1) and chronyc(1) manual pages, /usr/share/doc/
chrony-version/chrony.txt, or use the info chrony command.
Note
The default configuration assumes that the system has network access to public
NTP servers with which it can synchronise. The firewall rules for your internal
networks might well prevent access to these servers but instead allow access to
local NTP servers.
The following example shows a sample NTP configuration for a system that can access three NTP
servers:
server NTP_server_1
server NTP_server_2
server NTP_server_3
server 127.127.1.0
fudge  127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
The server and fudge entries for 127.127.1.0 cause ntpd to use the local system clock if the remote
NTP servers are not available. The restrict entry allows remote systems only to synchronise their
time with the local NTP service.
For more information about configuring ntpd, see http://doc.ntp.org/4.2.6p5/manyopt.html.
3. Create the drift file.
# touch /var/lib/ntp/drift
4. If remote access to the local NTP service is required, configure the system firewall to allow access to
the NTP service in the appropriate zones, for example:
# firewall-cmd --zone=zone --add-service=ntp
success
# firewall-cmd --zone=zone --permanent --add-service=ntp
success
5. Start the ntpd service and configure it to start following a system reboot.
# systemctl start ntpd
# systemctl enable ntpd
You can use the ntpq and ntpstat commands to display information about the operation of ntpd, for
example:
# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ns1.proserve.nl 193.67.79.202    2 u   21   64  377   31.420   10.742   3.689
-pomaz.hu        84.2.46.19       3 u   22   64  377   59.133   13.719   5.958
+server.104media 193.67.79.202    2 u   24   64  377   32.110   13.436   5.222
+public-timehost 193.11.166.20    2 u   28   64  377   57.214    9.304   6.311
# ntpstat
synchronised to NTP server (80.84.224.85) at stratum 3
   time correct to within 76 ms
   polling server every 64 s
For more information, see the ntpd(8), ntpd.conf(5), ntpq(8), and ntpstat(8) manual pages and
http://doc.ntp.org/4.2.6p5/.
hardware-raw-clock
...
(SOF_TIMESTAMPING_RAW_HARDWARE)
The output from ethtool in this example shows that the em1 interface supports both hardware and
software time stamping capabilities.
With software time stamping, ptp4l synchronises the system clock to an external grandmaster clock.
If hardware time stamping is available, ptp4l can synchronise the PTP hardware clock to an external
grandmaster clock. In this case, you use the phc2sys daemon to synchronise the system clock with the
PTP hardware clock.
2. Edit /etc/sysconfig/ptp4l and define the start-up options for the ptp4l daemon.
Grandmaster clocks and slave clocks require that you define only one interface.
For example, to use hardware time stamping with interface em1 on a slave clock:
OPTIONS="-f /etc/ptp4l.conf -i em1 -s"
To use software time stamping instead of hardware time stamping, specify the -S option:
OPTIONS="-f /etc/ptp4l.conf -i em1 -S -s"
Note
The -s option specifies that the clock operates only as a slave (slaveOnly
mode). Do not specify this option for a grandmaster clock or a boundary clock.
For a grandmaster clock, omit the -s option, for example:
OPTIONS="-f /etc/ptp4l.conf -i em1"
A boundary clock requires that you define at least two interfaces, for example:
OPTIONS="-f /etc/ptp4l.conf -i em1 -i em2"
You might need to edit the file /etc/ptp4l.conf to make further adjustments to the configuration of
ptp4l, for example:
For a grandmaster clock, set the priority1 parameter to a value between 0 and 127,
where lower values have higher priority when the BMC algorithm selects the grandmaster clock. For
a configuration that has a single grandmaster clock, a value of 127 is suggested.
If you set the value of summary_interval to an integer value N instead of 0, ptp4l writes
summary clock statistics to /var/log/messages every 2^N seconds instead of every second
(2^0 = 1). For example, a value of 10 would correspond to an interval of 2^10 or 1024 seconds.
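The doubling relationship is easy to verify in a shell; this fragment computes the interval for a given summary_interval value:

```shell
# Interval in seconds for summary_interval value N is 2^N,
# computed here with a POSIX arithmetic left shift (1 << N equals 2^N).
N=10
echo $((1 << N))
```

With N set to 10, this prints 1024, matching the example above.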
The logging_level parameter controls the amount of logging information that ptp4l records.
The default value of logging_level is 6, which corresponds to LOG_INFO. To turn off logging
completely, set the value of logging_level to 0. Alternatively, specify the -q option to ptp4l.
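Putting these parameters together, a minimal [global] section of /etc/ptp4l.conf for a single-grandmaster configuration might look as follows (the values shown are illustrative, not defaults):

```
[global]
priority1        127
summary_interval 10
logging_level    6
```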
4. Start the ptp4l service and configure it to start following a system reboot.
# systemctl start ptp4l
# systemctl enable ptp4l
Note
The slave network interface on a boundary clock is the one that it uses to
communicate with the grandmaster clock.
The -w option specifies that phc2sys waits until ptp4l has synchronised the PTP hardware clock
before attempting to synchronise the system clock.
On a grandmaster clock, which derives its system time from a reference time source such as GPS,
CDMA, NTP, or a radio time signal, synchronise the network interface's PTP hardware clock from
the system clock, for example:
OPTIONS="-c em1 -s CLOCK_REALTIME -w"
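By contrast, on a slave clock that uses hardware time stamping, phc2sys synchronises the system clock from the PTP hardware clock of the network interface. A typical /etc/sysconfig/phc2sys setting for the em1 example interface would be (a sketch; em1 is the interface name assumed throughout this example):

```
OPTIONS="-s em1 -w"
```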
You can use the pmc command to query the status of ptp4l operation. The following example shows the
results of running pmc on a slave clock system that is directly connected to the grandmaster clock system
without any intermediate boundary clocks:
# pmc -u -b 0 'GET TIME_STATUS_NP'
sending: GET TIME_STATUS_NP
080027.fffe.7f327b-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP
master_offset              -98434
ingress_time               1412169090025854874
cumulativeScaledRateOffset +1.000000000
scaledLastGmPhaseChange    0
gmTimeBaseIndicator        0
lastGmPhaseChange          0x0000'0000000000000000.0000
gmPresent                  true
gmIdentity                 080027.fffe.d9e453
# pmc -u -b 0 'GET CURRENT_DATA_SET'
gmIdentity
The unique identifier of the grandmaster clock, which is based on the MAC address
of its network interface.
gmPresent
Whether ptp4l has detected a grandmaster clock on the network.
meanPathDelay
An estimate of the mean propagation delay, in nanoseconds, of the network path to
the grandmaster clock.
offsetFromMaster
The most recent measurement of the time difference in nanoseconds relative to the
grandmaster clock.
stepsRemoved
The number of network steps between this system and the grandmaster clock.
For more information, see the phc2sys(8), pmc(8), and ptp4l(8) manual pages, http://www.zhaw.ch/
en/engineering/institutes-centres/ines/downloads/documents.html, and IEEE 1588.
server 127.127.1.0
fudge 127.127.1.0 stratum 0
Note
Do not configure any additional server lines in the file.
For more information, see Section 14.2.1, Configuring the ntpd Service.
4. Create firewall rules to allow access to the ports on which the HTTP server listens, for example:
# firewall-cmd --zone=zone --add-service=http
# firewall-cmd --permanent --zone=zone --add-service=http
The main configuration file for the Apache HTTP server is /etc/httpd/conf/httpd.conf. You can
modify the directives in this file to customize Apache for your environment.
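After editing httpd.conf, it is good practice to validate the syntax before reloading the service. A suggested workflow (apachectl is installed with the httpd package):

```
# apachectl configtest
Syntax OK
# systemctl reload httpd
```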
DocumentRoot directorypath
The top-level directory for Apache server content. The apache user
requires read access to any files and read and execute access to the
directory and any of its subdirectories. Do not place a slash at the end
of the directory path.
For example:
DocumentRoot /var/www/html
2. Use the restorecon command to apply the file type to the entire
content directory hierarchy.
# /sbin/restorecon -R -v content_dir
ErrorLog filename | syslog[:facility]
The location of the error log: either a file name (taken as relative to
ServerRoot unless the path is absolute) or syslog with an optional
facility.
Listen [IP_address:]port
The port, and optionally the IP address, on which the server listens for
requests.
LoadModule module filename
The Apache HTTP server can load external modules (dynamic shared
objects or DSOs) to extend its functionality. The module argument is
the name of the DSO, and filename is the path name of the module
relative to ServerRoot.
For example:
LoadModule auth_basic_module modules/mod_auth_basic.so
Order deny,allow | allow,deny
The order in which the server evaluates Deny and Allow directives;
the second value given is the default access state.
ServerName FQDN[:port]
The fully qualified domain name, and optionally the port, that the
server uses to identify itself.
ServerRoot directorypath
The top of the directory hierarchy where the httpd server keeps its
configuration, error, and log files. Do not place a slash at the end of the
directory path.
For example:
ServerRoot /etc/httpd
Timeout seconds
The number of seconds that the server waits for network operations to
complete before failing a request.
UserDir directory-path ... | disabled [user ...] | enabled user ...
Whether, and from which directory, the server serves users' personal
content.
For example:
UserDir disabled root guest
UserDir enabled oracle alice
UserDir www http://www.mydom.com/
The root and guest users are disabled from content publishing.
Assuming that ServerName is set to www.mydom.com, browsing
http://www.mydom.com/~alice displays alice's web
page, which must be located at ~alice/www or http://
www.mydom.com/alice (that is, in the directory alice relative to
ServerRoot).
Note
You would usually change the settings in the
<IfModule mod_userdir.c> container to
allow users to publish user content.
For more information, see http://httpd.apache.org/docs/current/mod/directives.html.
Applies directives if the specified module has been loaded, or, when the
exclamation point (!) is specified, if the module has not been loaded.
The following example disallows user-published content if
mod_userdir.c has been loaded:
<IfModule mod_userdir.c>
UserDir disabled
</IfModule>
<Limit method ...>
Places limits on the specified HTTP methods for use with a Uniform
Resource Identifier (URI).
The following example allows only systems in the mydom.com domain to
use the GET and PUT methods:
<Limit GET PUT>
Order deny,allow
Deny from all
Allow from mydom.com
</Limit>
Systems outside mydom.com cannot use GET and PUT with the URI.
<LimitExcept method ...>
Places limits on all except the specified HTTP methods for use with a
Uniform Resource Identifier (URI).
The following example disallows any system from using any method
other than GET and POST:
<LimitExcept GET POST>
Order deny,allow
Deny from all
</LimitExcept>
<VirtualHost IP_address:port ...>
Specifies a container for directives that apply to a virtual host with the
given IP address and port.
In the example, the AllowOverride directive specifies the following directive classes:
AuthConfig
Permits the use of authorization directives such as AuthName,
AuthType, and Require.
FileInfo
Permits the use of directives that control document types.
Limit
Permits the use of the directives that control host access: Allow,
Deny, and Order.
The Options directive controls the features of the server for the directory hierarchy, for example:
FollowSymLinks
Allows the server to follow symbolic links in the directory hierarchy.
Includes
Allows the server to process server-side includes.
IncludesNoExec
Prevents the server from running #exec cmd and #exec cgi server-side
includes.
Indexes
Allows the server to generate a directory listing for a directory that
does not contain an index file.
MultiViews
Allows the server to determine the file to use that best matches the client's
requirements based on the MIME type when several versions of the file exist
with different extensions.
SymLinksIfOwnerMatch
Allows the server to follow a symbolic link if the file or directory being pointed
to has the same owner as the symbolic link.
To configure a virtual host, you use the <VirtualHost hostname> container. You must also divide all
served content between the virtual hosts that you configure.
The following example shows a simple name-based configuration for two virtual hosts:
NameVirtualHost *:80
<VirtualHost *:80>
ServerName websvr1.mydom.com
ServerAlias www.mydom-1.com
DocumentRoot /var/www/http/websvr1
ErrorLog websvr1.error_log
</VirtualHost>
<VirtualHost *:80>
ServerName websvr2.mydom.com
ServerAlias www.mydom-2.com
DocumentRoot /var/www/http/websvr2
ErrorLog websvr2.error_log
</VirtualHost>
This chapter describes email programs and protocols that are available with Oracle Linux, and how to set
up a basic Sendmail client.
main.cf
Contains the main global configuration settings for Postfix, such as the host
name, domain, and network information.
master.cf
Specifies how the Postfix master daemon and other Postfix processes interact to deliver
email.
transport
Specifies the mapping between destination email addresses and relay hosts.
By default, Postfix does not accept network connections from any system other than the local host. To
enable mail delivery for other hosts, edit /etc/postfix/main.cf and configure their domain, host
name, and network information.
Restart the Postfix service after making any configuration changes:
# systemctl restart postfix
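For example, the relevant main.cf parameters might be set as follows (the host, domain, and network values here are placeholders, not defaults):

```
# /etc/postfix/main.cf (illustrative values)
myhostname = mail.mydom.com
mydomain = mydom.com
myorigin = $mydomain
inet_interfaces = all
mynetworks = 192.168.2.0/24, 127.0.0.0/8
```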
For more information, see postfix(1) and other Postfix manual pages, Section 16.5, Forwarding
Email, /usr/share/doc/postfix-version, and http://www.postfix.org/documentation.html.
procmail
Contains Procmail, which acts as the default local MDA for Sendmail. This package is
installed as a dependency of the sendmail package.
sendmail
Contains the Sendmail MTA.
sendmail-cf
Contains configuration files for Sendmail.
To configure Sendmail to listen on network interfaces other than loopback, edit /etc/mail/sendmail.mc
and comment out the DAEMON_OPTIONS line by prefixing it with dnl, so that it reads:
dnl # DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
The leading dnl stands for delete to newline, and effectively comments out the line.
After you have edited sendmail.mc, restart the sendmail service to regenerate sendmail.cf:
# systemctl restart sendmail
Sendmail reads sendmail.cf only at startup, so configuration changes do not take effect until the service restarts.
Other important Sendmail configuration files in /etc/mail include:
access
Configures a relay host that processes outbound mail from the local host to other
systems. This is the default configuration:
Connect: localhost.localdomain    RELAY
Connect: localhost                RELAY
Connect: 127.0.0.1                RELAY
To configure Sendmail to relay mail from other systems on a local network, add an
entry such as the following:
Connect: 192.168.2    RELAY
mailertable
Configures forwarding of email from one domain to another. The following example
forwards email sent to the yourorg.org domain to the SMTP server for the
mydom.com domain:
yourorg.org    smtp:[mydom.com]
virtusertable
Configures serving of email to multiple domains. Each line starts with a destination
address followed by the address to which Sendmail forwards the email. For example,
the following entry forwards email addressed to any user at yourorg.org to the same
user name at mydom.com:
@yourorg.org    %[email protected]
Each of these configuration files has a corresponding database (.db) file in /etc/mail that Sendmail
reads. After making any changes to any of the configuration files, restart the sendmail service. To
regenerate the database files, run the /etc/mail/make all command. As for sendmail.cf, Sendmail
does not use the regenerated database files until you restart the server.
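For example, after editing the access file, a typical session to rebuild the database files and apply the change is:

```
# /etc/mail/make all
# systemctl restart sendmail
```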
root:      usr01, usr02, usr03, [email protected]
To direct email to a file, specify an absolute path name instead of the destination address. To specify a
command, precede it with a pipe character (|). The next example erases email sent to nemo by sending it
to /dev/null, and runs a script named aggregator to process emails sent to fixme:
nemo:   /dev/null
fixme:  |/usr/local/bin/aggregator
After changing the file, run the command newaliases to rebuild the indexed database file.
For more information, see the aliases(5) manual page.
b. In the auth directory, create a file smtp-auth that contains the authentication information for the
SMTP server, for example:
where smtp.isp.com is the FQDN of the SMTP server, and username and password are the
name and password of the account.
c. Create the database file from smtp-auth, and make both files read-writable only by root:
# cd /etc/mail/auth
# makemap hash smtp-auth < smtp-auth
# chmod 600 smtp-auth smtp-auth.db
so that the SMART_HOST definition reads:
define(`SMART_HOST', `smtp.isp.com')dnl
where port is the port number used by the SMTP server (for example, 587 for STARTTLS or 465 for
SSL/TLS).
4. Edit /etc/sysconfig/sendmail and set the value of DAEMON to no:
DAEMON=no
This entry disables sendmail from listening on port 25 for incoming email.
5. Restart the sendmail service:
# systemctl restart sendmail
This chapter describes how to configure the Keepalived and HAProxy technologies for balancing access to
network services while maintaining continuous access to those services.
2. Edit /etc/haproxy/haproxy.cfg to configure HAProxy on each server. See Section 17.2.1, About
the HAProxy Configuration File.
3. Enable IP forwarding and binding to non-local IP addresses:
# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
4. Enable access to the services or ports that you want HAProxy to handle.
For example, to enable access to HTTP and make this rule persist across reboots, enter the following
commands:
# firewall-cmd --zone=zone --add-service=http
success
# firewall-cmd --permanent --zone=zone --add-service=http
success
global
Defines global settings such as the syslog facility and level to use for
logging, the maximum number of concurrent connections allowed, and
how many processes to start in daemon mode.
defaults
Defines default settings for the sections that follow it.
listen
Defines a complete proxy, combining the functionality of the frontend
and backend sections in one place.
frontend
Defines the listening sockets that accept client connections.
backend
Defines the back-end servers to which the proxy forwards client
connections.
Figure 17.1 shows an HAProxy server (10.0.0.10), which is connected to an externally facing network
(10.0.0/24) and to an internal network (192.168.1/24). Two web servers, websvr1 (192.168.1.71) and
websvr2 (192.168.1.72), are accessible on the internal network. The IP address 10.0.0.10 is in the private
address range 10.0.0/24, which cannot be routed on the Internet. An upstream network address translation
(NAT) gateway or a proxy server provides access to and from the Internet.
Figure 17.1 Example HAProxy Configuration for Load Balancing
This configuration balances HTTP traffic between the two back-end web servers websvr1 and websvr2,
whose firewalls are configured to accept incoming TCP requests on port 80.
After implementing simple /var/www/html/index.html files on the web servers and using curl to test
connectivity, the following output demonstrates how HAProxy balances the traffic between the servers and
how it handles the httpd service stopping on websvr1:
$ while true; do curl http://10.0.0.10; sleep 1; done
This is HTTP server websvr1 (192.168.1.71).
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr1 (192.168.1.71).
This is HTTP server websvr2 (192.168.1.72).
...
This is HTTP server websvr2 (192.168.1.72).
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr2 (192.168.1.72).
...
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr1 (192.168.1.71).
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr1 (192.168.1.71).
...
^C
$
In this example, HAProxy detected that the httpd service had restarted on websvr1 and resumed using
that server in addition to websvr2.
By combining the load balancing capability of HAProxy with the high availability capability of Keepalived
or Oracle Clusterware, you can configure a backup load balancer that ensures continuity of service in the
event that the master load balancer fails. See Section 17.10, Making HAProxy Highly Available Using
Keepalived and Section 17.12, Making HAProxy Highly Available Using Oracle Clusterware.
See Section 17.2, Installing and Configuring HAProxy for details of how to install and configure HAProxy.
HAProxy includes an additional Set-Cookie: header that identifies the web server in its response
to the client, for example: Set-Cookie: WEBSVR=N; path=page_path. If a client subsequently
specifies the WEBSVR cookie in a request, HAProxy forwards the request to the web server whose server
cookie value matches the value of WEBSVR.
The following example demonstrates how an inserted cookie ensures session persistence:
$ while true; do curl http://10.0.0.10; sleep 1; done
This is HTTP server websvr1 (192.168.1.71).
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr1 (192.168.1.71).
^C
$ curl http://10.0.0.10 -D /dev/stdout
HTTP/1.1 200 OK
Date: ...
Server: Apache/2.4.6 ()
Last-Modified: ...
ETag: "26-5125afd089491"
Accept-Ranges: bytes
Content-Length: 38
Content-Type: text/html; charset=UTF-8
Set-Cookie: WEBSVR=2; path=/
This is HTTP server websvr2 (192.168.1.72).
$ while true; do curl --cookie "WEBSVR=2;" http://10.0.0.10; sleep 1; done
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr2 (192.168.1.72).
^C
To enable persistence selectively on a web server, use the cookie directive to specify that HAProxy
should expect the specified cookie, usually a session ID cookie or other existing cookie, to be prefixed with
the server cookie value and a ~ delimiter, for example:
cookie SESSIONID prefix
server websvr1 192.168.1.71:80 weight 1 maxconn 512 cookie 1 check
server websvr2 192.168.1.72:80 weight 1 maxconn 512 cookie 2 check
If the value of SESSIONID is prefixed with a server cookie value, for example: Set-Cookie:
SESSIONID=N~Session_ID;, HAProxy strips the prefix and delimiter from the SESSIONID cookie before
forwarding the request to the web server whose server cookie value matches the prefix.
The following example demonstrates how using a prefixed cookie enables session persistence:
$ while true; do curl --cookie "SESSIONID=2~1234;" http://10.0.0.10; sleep 1; done
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr2 (192.168.1.72).
This is HTTP server websvr2 (192.168.1.72).
^C
A real web application would usually set the session ID on the server side, in which case the first HAProxy
response would include the prefixed cookie in the Set-Cookie: header.
4. Add firewall rules to allow VRRP communication using the multicast IP address 224.0.0.18 and the
VRRP protocol (112) on each network interface that Keepalived will control, for example:
# firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 \
--in-interface enp0s8 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
success
# firewall-cmd --direct --permanent --add-rule ipv4 filter OUTPUT 0 \
--out-interface enp0s8 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
success
# firewall-cmd --reload
success
static_ipaddress, static_routes
Define static IP addresses and routes that Keepalived adds to the system
configuration and that VRRP does not move between servers.
vrrp_sync_group
Groups VRRP instances so that they fail over together as a unit.
vrrp_instance
Defines a VRRP instance, including its initial state, network interface,
virtual router ID, and the virtual IP addresses that it controls.
vrrp_script
Defines a tracking script that Keepalived runs at regular intervals.
virtual_server_group, virtual_server
Define virtual servers for IP load balancing, together with the real
back-end servers to which traffic is distributed.
The configuration of the backup server is the same except for the values of
notification_email_from, state, priority, and possibly interface if the system hardware
configuration is different:
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server localhost
smtp_connect_timeout 30
}
vrrp_instance VRRP1 {
state BACKUP
# Specify the network interface to which the virtual address is assigned
interface enp0s8
virtual_router_id 41
# Set the value of priority lower on the backup server than on the master server
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1066
}
virtual_ipaddress {
10.0.0.100/24
}
}
In the event that the master server (svr1) fails, keepalived assigns the virtual IP address 10.0.0.100/24
to the enp0s8 interface on the backup server (svr2), which becomes the master server.
To determine whether a server is acting as the master, you can use the ip command to see whether the
virtual address is active, for example:
# ip addr list enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:cb:a6:8d brd ff:ff:ff:ff:ff:ff
inet 10.0.0.72/24 brd 10.0.0.255 scope global enp0s8
inet 10.0.0.100/24 scope global enp0s8
inet6 fe80::a00:27ff:fecb:a68d/64 scope link
valid_lft forever preferred_lft forever
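This check can be scripted. The sketch below wraps the grep test in a small function and runs it against sample ip addr output; the VIP 10.0.0.100 is taken from the example above, and on a live system you would pipe in the output of ip addr show enp0s8 instead:

```shell
# has_vip: succeed if "ip addr show" output on stdin contains the example VIP.
has_vip() {
    grep -q "inet 10.0.0.100/"
}

# Sample output, as captured from the master server in the example above.
sample='inet 10.0.0.72/24 brd 10.0.0.255 scope global enp0s8
inet 10.0.0.100/24 scope global enp0s8'

# On a live system: ip addr show enp0s8 | has_vip && echo "This node is the MASTER"
printf '%s\n' "$sample" | has_vip && echo "This node is the MASTER"
```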
Alternatively, search for Keepalived messages in /var/log/messages that show transitions between
states, for example:
...51:55 ... VRRP_Instance(VRRP1) Entering BACKUP STATE
...
...53:08 ... VRRP_Instance(VRRP1) Transition to MASTER STATE
...53:09 ... VRRP_Instance(VRRP1) Entering MASTER STATE
...53:09 ... VRRP_Instance(VRRP1) setting protocol VIPs.
...53:09 ... VRRP_Instance(VRRP1) Sending gratuitous ARPs on enp0s8 for 10.0.0.100
Note
Only one server should be active as the master at any time. If more than one
server is configured as the master, it is likely that there is a problem with VRRP
communication between the servers. Check the network settings for each interface
on each server and check that the firewall allows both incoming and outgoing VRRP
packets for multicast IP address 224.0.0.18.
See Section 17.5, Installing and Configuring Keepalived for details of how to install and configure
Keepalived.
Figure 17.3 shows that the Keepalived master server has network addresses 192.168.1.10, 192.168.1.1
(virtual), 10.0.0.10, and 10.0.0.100 (virtual). The Keepalived backup server has network addresses
192.168.1.11 and 10.0.0.11. The web servers websvr1 and websvr2 have network addresses 10.0.0.71
and 10.0.0.72 respectively.
Figure 17.3 Example Keepalived Configuration for Load Balancing in NAT Mode
}
vrrp_instance internal {
state MASTER
interface enp0s9
virtual_router_id 92
priority 200
advert_int 1
authentication {
auth_type PASS
auth_pass 1215
}
# Define the virtual IP address for the internal network interface
virtual_ipaddress {
10.0.0.100/24
}
}
# Define a virtual HTTP server on the virtual IP address 192.168.1.1
virtual_server 192.168.1.1 80 {
delay_loop 10
protocol TCP
# Use round-robin scheduling in this example
lb_algo rr
# Use NAT to hide the back-end servers
lb_kind NAT
# Persistence of client sessions times out after 2 hours
persistence_timeout 7200
real_server 10.0.0.71 80 {
weight 1
TCP_CHECK {
connect_timeout 5
connect_port 80
}
}
real_server 10.0.0.72 80 {
weight 1
TCP_CHECK {
connect_timeout 5
connect_port 80
}
}
}
This configuration is similar to that given in Section 17.6, Configuring Simple Virtual IP Address Failover
Using Keepalived with the additional definition of a vrrp_sync_group section so that the network
interfaces are assigned together on failover, and a virtual_server section to define the real back-end
servers that Keepalived uses for load balancing. The value of lb_kind is set to NAT (Network Address
Translation), which means that the Keepalived server handles both inbound and outbound network traffic
from and to the client on behalf of the back-end servers.
The configuration of the backup server is the same except for the values of
notification_email_from, state, priority, and possibly interface if the system hardware
configuration is different:
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server localhost
smtp_connect_timeout 30
}
vrrp_sync_group VRRP1 {
# Group the external and internal VRRP instances so they fail over together
group {
external
internal
}
}
vrrp_instance external {
state BACKUP
interface enp0s8
virtual_router_id 91
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1215
}
# Define the virtual IP address for the external network interface
virtual_ipaddress {
192.168.1.1/24
}
}
vrrp_instance internal {
state BACKUP
interface enp0s9
virtual_router_id 92
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1215
}
# Define the virtual IP address for the internal network interface
virtual_ipaddress {
10.0.0.100/24
}
}
# Define a virtual HTTP server on the virtual IP address 192.168.1.1
virtual_server 192.168.1.1 80 {
delay_loop 10
protocol TCP
# Use round-robin scheduling in this example
lb_algo rr
# Use NAT to hide the back-end servers
lb_kind NAT
# Persistence of client sessions times out after 2 hours
persistence_timeout 7200
real_server 10.0.0.71 80 {
weight 1
TCP_CHECK {
connect_timeout 5
connect_port 80
}
}
real_server 10.0.0.72 80 {
weight 1
TCP_CHECK {
connect_timeout 5
connect_port 80
}
}
}
2. Configure NAT mode (masquerading) on the external network interface, for example:
# firewall-cmd --zone=public --add-masquerade
success
# firewall-cmd --permanent --zone=public --add-masquerade
success
# firewall-cmd --zone=public --query-masquerade
yes
# firewall-cmd --zone=internal --query-masquerade
no
3. If not already enabled for your firewall, configure forwarding rules between the external and internal
network interfaces, for example:
# firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 \
  -i enp0s8 -o enp0s9 -m state --state RELATED,ESTABLISHED -j ACCEPT
success
# firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 \
  -i enp0s9 -o enp0s8 -j ACCEPT
success
# firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 \
  -j REJECT --reject-with icmp-host-prohibited
success
# firewall-cmd --reload
4. Enable access to the services or ports that you want Keepalived to handle.
For example, to enable access to HTTP and make this rule persist across reboots, enter the following
commands:
# firewall-cmd --zone=public --add-service=http
success
# firewall-cmd --permanent --zone=public --add-service=http
success
To make the default route for enp0s8 persist across reboots, create the file /etc/sysconfig/
network-scripts/route-enp0s8:
# echo "default via 10.0.0.100 dev enp0s8" > /etc/sysconfig/network-scripts/route-enp0s8
}
}
real_server 10.0.0.72 80 {
weight 1
TCP_CHECK {
connect_timeout 5
connect_port 80
}
}
}
The virtual server configuration is similar to that given in Section 17.7, Configuring Load Balancing Using
Keepalived in NAT Mode except that the value of lb_kind is set to DR (Direct Routing), which means that
the Keepalived server handles all inbound network traffic from the client before routing it to the back-end
servers, which reply directly to the client, bypassing the Keepalived server. This configuration reduces the
load on the Keepalived server but is less secure as each back-end server requires external access and is
potentially exposed as an attack surface. Some implementations use an additional network interface with a
dedicated gateway for each web server to handle the response network traffic.
The configuration of the backup server is the same except for the values of
notification_email_from, state, priority, and possibly interface if the system hardware
configuration is different:
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server localhost
smtp_connect_timeout 30
}
vrrp_instance external {
state BACKUP
interface enp0s8
virtual_router_id 91
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1215
}
virtual_ipaddress {
10.0.0.1/24
}
}
virtual_server 10.0.0.1 80 {
delay_loop 10
protocol TCP
lb_algo rr
# Use direct routing
lb_kind DR
persistence_timeout 7200
real_server 10.0.0.71 80 {
weight 1
TCP_CHECK {
connect_timeout 5
connect_port 80
}
}
real_server 10.0.0.72 80 {
weight 1
TCP_CHECK {
connect_timeout 5
connect_port 80
}
}
}
2. To define a virtual IP address that persists across reboots, edit /etc/sysconfig/network-scripts/ifcfg-iface and add IPADDR1 and PREFIX1 entries for the virtual IP address, for
example:
...
NAME=enp0s8
...
IPADDR0=10.0.0.72
GATEWAY0=10.0.0.100
PREFIX0=24
IPADDR1=10.0.0.1
PREFIX1=24
...
This example defines the virtual IP address 10.0.0.1 for enp0s8 in addition to the existing real IP
address of the back-end server.
3. Reboot the system and verify that the virtual IP address has been set up:
# ip addr show enp0s8
2: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:cb:a6:8d brd ff:ff:ff:ff:ff:ff
inet 10.0.0.72/24 brd 10.0.0.255 scope global enp0s8
inet 10.0.0.1/24 brd 10.0.0.255 scope global secondary enp0s8
inet6 fe80::a00:27ff:fecb:a68d/64 scope link
valid_lft forever preferred_lft forever
These commands set a firewall mark value of 123 on packets that are destined for ports 80 or 443 at the
specified virtual IP address.
You must also declare the firewall mark (fwmark) value to Keepalived by setting it on the virtual server
instead of a destination virtual IP address and port, for example:
virtual_server fwmark 123 {
...
}
This configuration causes Keepalived to route the packets based on their firewall mark value rather than
the destination virtual IP address and port. When used in conjunction with session persistence, firewall
marks help ensure that all ports used by a client session are handled by the same server.
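The mark-setting commands themselves are not shown in this extract. With firewalld, a rule of the following general form would set such a mark (an illustrative sketch using the virtual IP address 10.0.0.1 from the earlier direct-routing example, not necessarily the exact commands from the original):

```
# firewall-cmd --direct --permanent --add-rule ipv4 mangle PREROUTING 0 \
  -d 10.0.0.1/32 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 123
# firewall-cmd --reload
```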
One HAProxy server (10.0.0.11) is configured as a Keepalived master server with the virtual IP address
10.0.0.10 and the other (10.0.0.12) is configured as a Keepalived backup server. Two web servers,
websvr1 (192.168.1.71) and websvr2 (192.168.1.72), are accessible on the internal network. The IP
address 10.0.0.10 is in the private address range 10.0.0/24, which cannot be routed on the Internet. An
upstream network address translation (NAT) gateway or a proxy server provides access to and from the
Internet.
Figure 17.5 Example of a Combined HAProxy and Keepalived Configuration with Web Servers on a
Separate Network
The HAProxy configuration on both 10.0.0.11 and 10.0.0.12 is very similar to Section 17.3, Configuring
Simple Load Balancing Using HAProxy. The IP address on which HAProxy listens for incoming requests is
the virtual IP address that Keepalived controls.
global
daemon
log 127.0.0.1 local0 debug
maxconn 50000
nbproc 1
defaults
mode http
timeout connect 5s
timeout client 25s
timeout server 25s
timeout queue 10s
# Handle Incoming HTTP Connection Requests on the virtual IP address controlled by Keepalived
listen http-incoming
mode http
bind 10.0.0.10:80
# Use each server in turn, according to its weight value
balance roundrobin
# Verify that service is available
option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
It is also possible to configure HAProxy and Keepalived directly on the web servers as shown in
Figure 17.6. As in the previous example, one HAProxy server (10.0.0.11) is configured as the Keepalived
master server with the virtual IP address 10.0.0.10 and the other (10.0.0.12) is configured as a Keepalived
backup server. The HAProxy service on the master listens on port 80 and forwards incoming requests to
one of the httpd services, which listen on port 8080.
Figure 17.6 Example of a Combined HAProxy and Keepalived Configuration with Integrated Web
Servers
The HAProxy configuration is the same as the previous example except for the IP addresses and ports of
the web servers.
...
server websvr1 10.0.0.11:8080 weight 1 maxconn 512 check
server websvr2 10.0.0.12:8080 weight 1 maxconn 512 check
The firewall on each server must be configured to accept incoming TCP requests on port 8080.
The Keepalived configuration for both example configurations is similar to that given in Section 17.6,
Configuring Simple Virtual IP Address Failover Using Keepalived.
The master server has the following Keepalived configuration:
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server localhost
smtp_connect_timeout 30
}
vrrp_instance VRRP1 {
state MASTER
# Specify the network interface to which the virtual address is assigned
interface enp0s8
# The virtual router ID must be unique to each VRRP instance that you define
virtual_router_id 41
# Set the value of priority higher on the master server than on a backup server
priority 200
advert_int 1
authentication {
auth_type PASS
auth_pass 1066
}
virtual_ipaddress {
10.0.0.10/24
}
}
The configuration of the backup server is the same except for the values of
notification_email_from, state, priority, and possibly interface if the system hardware
configuration is different:
global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server localhost
smtp_connect_timeout 30
}
vrrp_instance VRRP1 {
state BACKUP
# Specify the network interface to which the virtual address is assigned
interface enp0s8
virtual_router_id 41
#
Set the value of priority lower on the backup server than on the master server
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1066
}
virtual_ipaddress {
10.0.0.10/24
}
}
In the event that the master server (haproxy1) fails, keepalived assigns the virtual IP address
10.0.0.10/24 to the enp0s8 interface on the backup server (haproxy2), which becomes the master
server.
See Section 17.2, Installing and Configuring HAProxy and Section 17.5, Installing and Configuring
Keepalived for details of how to install and configure HAProxy and Keepalived.
$2
    The name of the VRRP group or instance.
$3
    The end state of the transition: BACKUP, FAULT, or MASTER.
notify_backup program_path, or
notify_backup "program_path arg ..."
    Run the specified program when the server transitions to the BACKUP state.
notify_fault program_path, or
notify_fault "program_path arg ..."
    Run the specified program when the server transitions to the FAULT state.
notify_master program_path, or
notify_master "program_path arg ..."
    Run the specified program when the server transitions to the MASTER state.
The following executable script could be used to handle the general-purpose version of notify:
#!/bin/bash
ENDSTATE=$3
NAME=$2
TYPE=$1
case $ENDSTATE in
    "BACKUP") # Perform action for transition to BACKUP state
              exit 0
              ;;
    "FAULT")  # Perform action for transition to FAULT state
              exit 0
              ;;
    "MASTER") # Perform action for transition to MASTER state
              exit 0
              ;;
    *)        echo "Unknown state ${ENDSTATE} for VRRP ${TYPE} ${NAME}"
              exit 1
              ;;
esac
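Such a script is hooked into Keepalived with a notify clause in a vrrp_instance or vrrp_sync_group definition. The installation path below is a hypothetical example, not taken from this guide:

```
vrrp_instance VRRP1 {
    ...
    # Keepalived invokes the script with $1 set to GROUP or INSTANCE,
    # $2 set to the group or instance name, and $3 set to the end state
    notify /usr/local/bin/vrrp-notify.sh
}
```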
Tracking scripts are programs that Keepalived runs at regular intervals, according to a vrrp_script
definition:
vrrp_script script_name {
    script "program_path arg ..."
    interval i    # Run script every i seconds
    fall f        # If script returns non-zero f times in succession, enter FAULT state
    rise r        # If script returns zero r times in succession, exit FAULT state
    timeout t     # Wait up to t seconds for script before assuming non-zero exit code
    weight w      # Reduce priority by w on fall
}
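As a concrete sketch (the check command and thresholds are illustrative assumptions, not taken from this guide), a tracking script that detects whether an haproxy process is running might look like this:

```
vrrp_script chk_haproxy {
    # killall -0 exits non-zero if no haproxy process exists
    script "/usr/bin/killall -0 haproxy"
    interval 2    # Check every 2 seconds
    fall 2        # Two consecutive failures enter FAULT state
    rise 2        # Two consecutive successes exit FAULT state
    weight 50     # Reduce priority by 50 on fall
}
```

The script is then referenced by name from a track_script clause within a vrrp_instance definition.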
The following vrrp_instance definition shows how tracking scripts are referenced:
vrrp_instance VRRP1 {
state MASTER
interface enp0s8
virtual_router_id 21
priority 200
advert_int 1
virtual_ipaddress {
10.0.0.10/24
}
track_script {
script_name
...
}
}
If a configured script returns a non-zero exit code f times in succession, Keepalived changes the state of
the VRRP instance or group to FAULT, removes the virtual IP address 10.0.0.10 from enp0s8, reduces
the priority value by w and stops sending multicast VRRP packets. If the script subsequently returns a zero
exit code r times in succession, the VRRP instance or group exits the FAULT state and transitions to the
MASTER or BACKUP state depending on its new priority.
If you want a server to enter the FAULT state if one or more interfaces goes down, you can also use a
track_interface clause, for example:
track_interface {
enp0s8
enp0s9
}
A possible application of tracking scripts is to deal with a potential split-brain condition in the case that
some of the Keepalived servers lose communication. For example, a script could track the existence of
other Keepalived servers or use shared storage or a backup communication channel to implement a voting
mechanism. However, configuring Keepalived to avoid a split-brain condition is complex, and it is difficult to
avoid corner cases where a scripted solution might not work.
For an alternative solution, see Section 17.12, Making HAProxy Highly Available Using Oracle
Clusterware.
For a high-availability configuration, Oracle recommends that the network, heartbeat, and storage
connections are multiply redundant and that at least three voting disks are configured.
The following steps outline how to configure such a cluster:
1. Install Oracle Clusterware on each system that will serve as a cluster node.
2. Install the haproxy and httpd packages on each node.
3. Use the appvipcfg command to create a virtual IP address for HAProxy and a separate virtual IP
address for each HTTPD service instance. For example, if there are two HTTPD service instances, you
would need to create three different virtual IP addresses.
4. Implement cluster scripts to start, stop, clean, and check the HAProxy and HTTPD services on each
node. These scripts must return 0 for success and 1 for failure.
5. Use the shared storage to share the configuration files, HTML files, logs, and all directories and files
that the HAProxy and HTTPD services on each node require to start.
If you have an Oracle Linux Support subscription, you can use OCFS2 or ASM/ACFS with the shared
storage as an alternative to NFS or another type of shared file system.
6. Configure each HTTPD service instance so that it binds to the correct virtual IP address. Each service
instance must also have an independent set of configuration, log, and other required files, so that all of
the service instances can coexist on the same server if one node fails.
7. Use the crsctl command to create a cluster resource for HAProxy and for each HTTPD service
instance. If there are two or more HTTPD service instances, binding of these instances should initially
be distributed amongst the cluster nodes. The HAProxy service can be started on either node initially.
You can use Oracle Clusterware as the basis of a more complex solution that protects a multi-tiered
system consisting of front-end load balancers, web servers, database servers and other components.
For more information, see the Oracle Clusterware 11g Administration and Deployment Guide and the
Oracle Clusterware 12c Administration and Deployment Guide.
The password must contain at least six characters. If the password is longer than eight characters, only
the first eight characters are used for authentication. An obfuscated version of the password is stored in
$HOME/.vnc/passwd unless the name of a file is specified with the vncpasswd command.
3. Create a service unit configuration file for each VNC desktop that is to be made available on the
system.
a. Copy the [email protected] template file, for example:
# cp /lib/systemd/system/[email protected] \
/etc/systemd/system/vncserver@\:display.service
where display is the unique display number of the VNC desktop starting from 1. Use a backslash
character (\) to escape the colon (:) character.
Each VNC desktop is associated with a user account. For ease of administration, if you have
multiple VNC desktops, you can include the name of the VNC user in the name of the service unit
configuration file, for example:
# cp /lib/systemd/system/[email protected] \
/etc/systemd/system/vncserver-username@\:display.service
Optionally, you can add command-line arguments for the VNC server. In the following example, the
VNC server only accepts connections from localhost, which means the VNC desktop can only be
accessed locally or through an SSH tunnel:
ExecStart=/sbin/runuser -l vncuser -c "/usr/bin/vncserver %i -localhost"
PIDFile=/home/vncuser/.vnc/%H%i.pid
b. For each VNC desktop, start the service, and configure the service to start following a system
reboot:
# systemctl start vncserver@\:display.service
# systemctl enable vncserver@\:display.service
Note
If you make any changes to a service unit configuration file, you must reload the
configuration file and restart the service.
5. Configure the firewall to allow access to the VNC desktops.
If users will access the VNC desktops through an SSH tunnel and the SSH service is enabled on
the system, you do not need to open additional ports in the firewall. SSH is enabled by default. For
information on enabling SSH, see Section 27.3, Configuring an OpenSSH Server.
If users will access the VNC desktops directly, you must open the required port for each desktop. The
required ports can be calculated by adding the VNC desktop service display number to 5900 (the
default VNC server port). So if the display number is 1, the required port is 5901 and if the display
number is 67, the required port is 5967.
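The display-to-port arithmetic can be checked with simple shell arithmetic (display numbers 1 and 67 are the ones used in the examples above):

```shell
# A VNC desktop listens on port 5900 + display number.
base=5900
for display in 1 67; do
  echo "display $display -> port $((base + display))"
done
```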
To open ports 5900 to 5903, you can use the following commands:
# firewall-cmd --zone=zone --add-service=vnc-server
# firewall-cmd --zone=zone --add-service=vnc-server --permanent
To open additional ports, for example port 5967, use the following commands:
# firewall-cmd --zone=zone --add-port=5967/tcp
# firewall-cmd --zone=zone --add-port=5967/tcp --permanent
When the installation is complete, use the systemctl get-default command to check that
the default system state is multi-user.target (multi-user command-line environment). Use
the systemctl set-default command to reset the default system state or to change it to
graphical.target (multi-user graphical environment) if you prefer.
The $HOME/.vnc/xstartup file is a shell script that specifies the X applications to run when the VNC
desktop is started. For example, to run a KDE Plasma Workspace, you could edit the file as follows:
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
#exec /etc/X11/xinit/xinitrc
startkde &
If you make any changes to a user's $HOME/.vnc/xstartup file, you must restart the VNC desktop
for the changes to take effect:
# systemctl restart vncserver@\:display.service
See the vncserver(1), Xvnc(1), and vncpasswd(1) manual pages for more information.
To connect to a VNC desktop through an SSH tunnel, use the -via option for the vncviewer
command to specify the user name and host for the SSH connection, and use localhost:display
to specify the VNC desktop. For example:
Start the TigerVNC client, and connect to localhost:display, where display is the source
port number configured in the SSH tunnel. You might have to configure the firewall on the client to
permit the connection.
Table of Contents
19 Storage Management .................................................................................................................
19.1 About Disk Partitions .......................................................................................................
19.1.1 Managing Partition Tables Using fdisk ...................................................................
19.1.2 Managing Partition Tables Using parted ................................................................
19.1.3 Mapping Partition Tables to Devices .....................................................................
19.2 About Swap Space .........................................................................................................
19.2.1 Viewing Swap Space Usage .................................................................................
19.2.2 Creating and Using a Swap File ...........................................................................
19.2.3 Creating and Using a Swap Partition .....................................................................
19.2.4 Removing a Swap File or Swap Partition ...............................................................
19.3 About Logical Volume Manager .......................................................................................
19.3.1 Initializing and Managing Physical Volumes ...........................................................
19.3.2 Creating and Managing Volume Groups ................................................................
19.3.3 Creating and Managing Logical Volumes ...............................................................
19.3.4 Creating Logical Volume Snapshots ......................................................................
19.3.5 Creating and Managing Thinly-Provisioned Logical Volumes ...................................
19.3.6 Using snapper with Thinly-Provisioned Logical Volumes .........................................
19.4 About Software RAID ......................................................................................................
19.4.1 Creating Software RAID Devices ..........................................................................
19.5 Creating Encrypted Block Devices ...................................................................................
19.6 SSD Configuration Recommendations for btrfs, ext4, and swap .........................................
19.7 About Linux-IO Storage Configuration ..............................................................................
19.7.1 Configuring an iSCSI Target .................................................................................
19.7.2 Configuring an iSCSI Initiator ................................................................................
19.7.3 Updating the Discovery Database .........................................................................
19.8 About Device Multipathing ...............................................................................................
19.8.1 Configuring Multipathing .......................................................................................
20 File System Administration .........................................................................................................
20.1 Making File Systems .......................................................................................................
20.2 Mounting File Systems ....................................................................................................
20.2.1 About Mount Options ...........................................................................................
20.3 About the File System Mount Table .................................................................................
20.4 Configuring the Automounter ...........................................................................................
20.5 Mounting a File Containing a File System Image ..............................................................
20.6 Creating a File System on a File .....................................................................................
20.7 Checking and Repairing a File System ............................................................................
20.7.1 Changing the Frequency of File System Checking .................................................
20.8 About Access Control Lists .............................................................................................
20.8.1 Configuring ACL Support ......................................................................................
20.8.2 Setting and Displaying ACLs ................................................................................
20.9 About Disk Quotas ..........................................................................................................
20.9.1 Enabling Disk Quotas on File Systems ..................................................................
20.9.2 Assigning Disk Quotas to Users and Groups .........................................................
20.9.3 Setting the Grace Period ......................................................................................
20.9.4 Displaying Disk Quotas ........................................................................................
20.9.5 Enabling and Disabling Disk Quotas .....................................................................
20.9.6 Reporting on Disk Quota Usage ...........................................................................
20.9.7 Maintaining the Accuracy of Disk Quota Reporting .................................................
21 Local File System Administration ................................................................................................
21.1 About Local File Systems ................................................................................................
21.2 About the Btrfs File System .............................................................................................
This chapter describes how to configure and manage disk partitions, swap space, logical volumes,
software RAID, block device encryption, iSCSI storage, and multipathing.
up to 11 logical partitions. The primary partition that contains the logical partitions is known as an extended
partition. The MBR scheme supports disks up to 2 TB in size.
On hard disks with a GUID Partition Table (GPT), you can configure up to 128 partitions and there is no
concept of extended or logical partitions. You should configure a GPT if the disk is larger than 2 TB.
You can create and manage MBRs by using the fdisk command. If you want to create a GPT, use
parted instead.
Note
When partitioning a block storage device, align primary and logical partitions on
one-megabyte (1048576 bytes) boundaries. If partitions, file system blocks, or
RAID stripes are incorrectly aligned and overlap the boundaries of the underlying
storage's sectors or pages, the device controller has to modify twice as many
sectors or pages as it would if correct alignment were used. This recommendation
applies to most block storage devices, including hard disk drives (spinning rust),
solid state drives (SSDs), LUNs on storage arrays, and host RAID adapters.
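With conventional 512-byte sectors, a one-megabyte boundary corresponds to 2048 sectors, which is why the fdisk examples later in this chapter start the first partition at sector 2048. A quick check:

```shell
# 1 MiB (1048576 bytes) expressed as a count of 512-byte sectors
boundary_bytes=1048576
sector_bytes=512
echo $((boundary_bytes / sector_bytes))  # prints 2048
```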
Device     Boot      Start        End     Blocks  Id  System
/dev/sda1  *          2048    1026047     512000  83  Linux
/dev/sda2          1026048   83886079   41430016  8e  Linux LVM
The example output shows that /dev/sda is a 42.9 GB disk. As modern hard disks support logical block
addressing (LBA), any information about the numbers of heads and sectors per track is irrelevant and
probably fictitious. The start and end offsets of each partition from the beginning of the disk are shown in
units of sectors. The partition table is displayed after the device summary, and shows:
Device         The name of the partition's device file, for example /dev/sda1.
Boot           Specifies * if the partition contains the files that the GRUB
               bootloader needs to boot the system. Only one partition can be
               bootable.
Start and End  The start and end offsets of the partition in sectors. All
               partitions are aligned on one-megabyte boundaries.
Blocks         The size of the partition in one-kilobyte blocks.
Id and System  The partition type. The following partition types are typically
               used with Oracle Linux:
               5  Extended     Extended partition that can contain logical
                               partitions.
               82 Linux swap   Swap space partition.
               83 Linux        Linux partition for a file system that is not
                               managed by LVM. This is the default partition
                               type.
               8e Linux LVM    Linux partition that is managed by LVM.
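fdisk reports partition sizes in one-kilobyte blocks, so a Blocks value can be recomputed from the Start and End sector offsets (two 512-byte sectors per block). Using the first partition's figures from the listing above:

```shell
start=2048
end=1026047
# Inclusive sector count, halved to give one-kilobyte blocks
echo $(( (end - start + 1) / 2 ))  # prints 512000
```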
The n command creates a new partition. For example, to create partition table entries for two Linux
partitions on /dev/sdc, one of which is 5 GB in size and the other occupies the remainder of the disk:
# fdisk -cu /dev/sdc
...
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-25165823, default 2048): 2048
Last sector, +sectors or +size{K,M,G} (2048-25165823, default 25165823): +5G
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First sector (10487808-25165823, default 10487808): <Enter>
Using default value 10487808
Device     Boot     Start       End   Blocks  Id  System
/dev/sdc1            2048  10487807  5242880  83  Linux
/dev/sdc2        10487808  25165823  7339008  83  Linux
The t command allows you to change the type of a partition. For example, to change the partition type of
partition 2 to Linux LVM:
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Command (m for help): p
...
Device     Boot     Start       End   Blocks  Id  System
/dev/sdc1            2048  10487807  5242880  83  Linux
/dev/sdc2        10487808  25165823  7339008  8e  Linux LVM
After creating the new partition table, use the w command to write the table to the disk and exit fdisk.
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
If you enter q instead, fdisk exits without committing the changes to disk.
For more information, see the cfdisk(8) and fdisk(8) manual pages.
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  525MB   524MB   primary  ext4         boot
 2      525MB   42.9GB  42.4GB  primary               lvm
Typically, you would set the disk label type to gpt or msdos for an Oracle Linux system, depending on
whether the disk device supports GPT. You are prompted to confirm that you want to overwrite the existing
disk label.
The mkpart command creates a new partition:
(parted) mkpart
Partition name? []? <Enter>
File system type? [ext2]? ext4
Start? 1
End? 5GB
For disks with an msdos label, you are also prompted to enter the partition type, which can be primary,
extended, or logical. The file system type is typically set to one of fat16, fat32, ext4, or
linux-swap for an Oracle Linux system. If you are going to create a btrfs, ext*, ocfs2, or xfs file system on
the partition, specify ext4. Unless you specify units such as GB for gigabytes, the start and end offsets of
a partition are assumed to be in megabytes. To specify the end of the disk for End, enter a value of -0.
To display the new partition, use the print command:
(parted) print
Number  Start   End     Size    File system  Name  Flags
 1      1049kB  5000MB  4999MB  ext4
# kpartx -l system.img
loop0p1 : 0 204800 /dev/loop0 2048
loop0p2 : 0 12288000 /dev/loop0 206848
loop0p3 : 0 4096000 /dev/loop0 12494848
loop0p4 : 0 2 /dev/loop0 16590848
This output shows that the drive image contains four partitions; the first column shows the names of the
device files that can be created in /dev/mapper.
The -a option creates the device mappings:
# kpartx -a system.img
# ls /dev/mapper
control loop0p1 loop0p2
loop0p3
loop0p4
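Each line of the kpartx output gives a partition's length and start offset in 512-byte sectors, so adjacent extents can be sanity-checked with shell arithmetic. For example, the second partition (12288000 sectors starting at 206848) ends just before sector 12494848, where the next extent begins:

```shell
p2_start=206848
p2_len=12288000
# First sector after the second partition
echo $((p2_start + p2_len))  # prints 12494848
```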
If a partition contains a file system, you can mount it and view the files that it contains, for example:
# mkdir /mnt/sysimage
# mount /dev/mapper/loop0p1 /mnt/sysimage
# ls /mnt/sysimage
config-2.6.32-220.el6.x86_64
config-2.6.32-300.3.1.el6uek.x86_64
efi
grub
initramfs-2.6.32-220.el6.x86_64.img
initramfs-2.6.32-300.3.1.el6uek.x86_64.img
...
# umount /mnt/sysimage
# swapon -s
Filename   Type        Size     Used  Priority
/dev/sda2  partition   4128760  388   -1
/swapfile  file        999992   0     -2
In this example, the system is using both a 4-gigabyte swap partition on /dev/sda2 and a one-gigabyte
swap file, /swapfile. The Priority column shows that the system preferentially swaps to the swap
partition rather than to the swap file.
You can also view /proc/meminfo or use utilities such as free, top, and vmstat to view swap space
usage, for example:
# grep Swap /proc/meminfo
SwapCached:          248 kB
SwapTotal:       5128752 kB
SwapFree:        5128364 kB
# free | grep Swap
Swap:      5128752     388    5128364
4. Add an entry to /etc/fstab for the swap file so that the system uses it following the next reboot:
/swapfile    swap    swap    defaults    0 0
# swapon /swapfile
4. Add an entry to /etc/fstab for the swap partition so that the system uses it following the next reboot:
/dev/sda2    swap    swap    defaults    0 0
2. Remove the entry for the swap file or swap partition from /etc/fstab.
3. Optionally, remove the swap file or swap partition if you do not want to use it in future.
For example, set up /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde as physical volumes:
# pvcreate -v /dev/sd[bcde]
To display information about physical volumes, you can use the pvdisplay, pvs, and pvscan
commands.
To remove a physical volume from the control of LVM, use the pvremove command:
# pvremove device
Other commands that are available for managing physical volumes include pvchange, pvck, pvmove,
and pvresize.
For more information, see the lvm(8), pvcreate(8), and other LVM manual pages.
For example, create the volume group myvg from the physical volumes /dev/sdb, /dev/sdc, /dev/
sdd, and /dev/sde:
# vgcreate -v myvg /dev/sd[bcde]
Wiping cache of LVM-capable devices
Adding physical volume /dev/sdb to volume group myvg
Adding physical volume /dev/sdc to volume group myvg
Adding physical volume /dev/sdd to volume group myvg
Adding physical volume /dev/sde to volume group myvg
Archiving volume group myvg metadata (seqno 0).
Creating volume group backup /etc/lvm/backup/myvg (seqno 1).
Volume group myvg successfully created
LVM divides the storage space within a volume group into physical extents, which are the smallest unit that
LVM uses when allocating storage to logical volumes. The default size of an extent is 4 MB.
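For example, with the default 4 MB extent size, a 2 GB logical volume occupies 512 extents:

```shell
lv_mib=2048      # 2 GiB logical volume
extent_mib=4     # default LVM extent size
echo $((lv_mib / extent_mib))  # prints 512
```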
The allocation policy for the volume group and logical volume determines how LVM allocates extents from
a volume group. The default allocation policy for a volume group is normal, which applies rules such as
not placing parallel stripes on the same physical volume. The default allocation policy for a logical volume
is inherit, which means that the logical volume uses the same policy as for the volume group. You
can change the default allocation policies by using the lvchange or vgchange commands, or you can
override the allocation policy when you create a volume group or logical volume. Other allocation policies
include anywhere, contiguous and cling.
To add physical volumes to a volume group, use the vgextend command:
# vgextend [options] volume_group physical_volume ...
To remove physical volumes from a volume group, use the vgreduce command:
# vgreduce [options] volume_group physical_volume ...
To display information about volume groups, you can use the vgdisplay, vgs, and vgscan commands.
Other commands that are available for managing volume groups include vgchange, vgck, vgexport,
vgimport, vgmerge, vgrename, and vgsplit.
For more information, see the lvm(8), vgcreate(8), and other LVM manual pages.
For example, create the logical volume mylv of size 2 GB in the volume group myvg:
# lvcreate -v --size 2g --name mylv myvg
Setting logging type to disk
Finding volume group myvg
Archiving volume group myvg metadata (seqno 1).
Creating logical volume mylv
Creating volume group backup /etc/lvm/backup/myvg (seqno 2).
...
lvcreate uses the device mapper to create a block device file entry under /dev for each logical volume
and uses udev to set up symbolic links to this device file from /dev/mapper and /dev/volume_group.
For example, the device that corresponds to the logical volume mylv in the volume group myvg might be
/dev/dm-3, which is symbolically linked by /dev/mapper/myvg-mylv and /dev/myvg/mylv.
Note
Always use the devices in /dev/mapper or /dev/volume_group. These names
are persistent and are created automatically by the device mapper early in the
boot process. The /dev/dm-* devices are not guaranteed to be persistent across
reboots.
Having created a logical volume, you can configure and use it in the same way as you would a physical
storage device. For example, you can configure a logical volume as a file system, swap partition,
Automatic Storage Management (ASM) disk, or raw device.
To display information about logical volumes, you can use the lvdisplay, lvs, and lvscan commands.
To remove a logical volume from a volume group, use the lvremove command:
# lvremove volume_group/logical_volume
Note
You must specify both the name of the volume group and the logical volume.
Other commands that are available for managing logical volumes include lvchange, lvconvert,
lvmdiskscan, lvmsadc, lvmsar, lvrename, and lvresize.
For more information, see the lvm(8), lvcreate(8), and other LVM manual pages.
You can mount and modify the contents of the snapshot independently of the original volume or preserve it
as a record of the state of the original volume at the time that you took the snapshot. The snapshot usually
takes up less space than the original volume, depending on how much the contents of the volumes diverge
over time. In the example, we assume that the snapshot only requires one quarter of the space of the
original volume. You can use the value shown by the Snap% column in the output from the lvs command
to see how much data is allocated to the snapshot. If the value of Snap% approaches 100%, indicating that
a snapshot is running out of storage, use lvresize to grow it. Alternatively, you can reduce a snapshot's
size to save storage space. To merge a snapshot with its original volume, use the lvconvert command,
specifying the --merge option.
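As a hypothetical illustration of monitoring Snap% (the figures below are invented, not from this guide): if lvs reports that a 512 MiB snapshot has 480 MiB of allocated data, the snapshot is about 93% full and is a candidate for growing with lvresize:

```shell
alloc_mib=480    # hypothetical allocated data in the snapshot
snap_mib=512     # hypothetical snapshot size
echo $((alloc_mib * 100 / snap_mib))  # prints 93
```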
To remove a logical volume snapshot from a volume group, use the lvremove command as you would for
a logical volume:
# lvremove volume_group/logical_volume_snapshot
For more information, see the lvcreate(8) and lvremove(8) manual pages.
For example, create the thin pool mytp of size 1 GB in the volume group myvg:
# lvcreate --size 1g --thin myvg/mytp
Logical volume "mytp" created
You can then use lvcreate with the --thin option to create a thinly-provisioned logical volume with a
size specified by the --virtualsize option, for example:
# lvcreate --virtualsize size --thin volume_group/thin_pool_name \
--name logical_volume
For example, create the thinly-provisioned logical volume mytv with a virtual size of 2 GB using the thin
pool mytp, whose size is currently less than the size of the volume:
# lvcreate --virtualsize 2g --thin myvg/mytp --name mytv
Logical volume "mytv" created
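In this example the thin pool is overcommitted by a factor of two: mytv presents 2 GB of virtual capacity backed by a 1 GB pool, so the pool's data usage must be monitored and the pool extended before it fills. The overcommit percentage can be computed as:

```shell
pool_mib=1024    # physical size of the thin pool mytp
virt_mib=2048    # virtual size of the thin volume mytv
echo $((virt_mib * 100 / pool_mib))  # prints 200
```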
If you create a thin snapshot of a thinly-provisioned logical volume, do not specify the size of the snapshot,
for example:
If you were to specify a size for the thin snapshot, its storage would not be provisioned from the thin pool.
If there is sufficient space in the volume group, you can use the lvresize command to increase the size
of a thin pool, for example:
# lvresize -L+1G myvg/mytp
Extending logical volume mytp to 2 GiB
Logical volume mytp successfully resized
For details of how to use the snapper command to create and manage thin snapshots, see
Section 19.3.6, Using snapper with Thinly-Provisioned Logical Volumes.
For more information, see the lvcreate(8) and lvresize(8) manual pages.
Here config_name is the name of the configuration, fs_type is the file system type (ext4 or xfs),
and fs_name is the path of the file system. The command adds an entry for config_name to /etc/
sysconfig/snapper, creates the configuration file /etc/snapper/configs/config_name, and sets
up a .snapshots subdirectory for the snapshots.
By default, snapper sets up a cron.hourly job to create snapshots in the .snapshots subdirectory of
the volume and a cron.daily job to clean up old snapshots. You can edit the configuration file to disable
or change this behavior. For more information, see the snapper-configs(5) manual page.
There are three types of snapshot that you can create using snapper:
post
You use a post snapshot to record the state of a volume after a modification. A post snapshot
should always be paired with a pre snapshot that you take immediately before you make the
modification.
pre
You use a pre snapshot to record the state of a volume before a modification. A pre snapshot
should always be paired with a post snapshot that you take immediately after you have
completed the modification.
single
You can use a single snapshot to record the state of a volume but it does not have any
association with other snapshots of the volume.
For example, the following commands create pre and post snapshots of a volume:
# snapper -c config_name create -t pre -p
N
... Modify the volume's contents ...
# snapper -c config_name create -t post --pre-num N -p
N'
The -p option causes snapper to display the number of the snapshot so that you can reference it when
you create the post snapshot or when you compare the contents of the pre and post snapshots.
To display the files and directories that have been added, removed, or modified between the pre and post
snapshots, use the status subcommand:
# snapper -c config_name status N..N'
To display the differences between the contents of the files in the pre and post snapshots, use the diff
subcommand:
# snapper -c config_name diff N..N'
To undo the changes in the volume from post snapshot N' to pre snapshot N:
# snapper -c config_name undochange N..N'
RAID-0 (striping)
RAID-1 (mirroring)
A more resilient variant of RAID-5 that can recover from the loss of
two drives in an array. RAID-6 is used when data redundancy and
resilience are important, but performance is not. RAID-6 is intermediate
in expense between RAID-5 and RAID-1.
For example, to create a RAID-1 device /dev/md0 from /dev/sdf and /dev/sdg:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[fg]
If you want to include spare devices that are available for expansion, reconfiguration, or replacing failed
drives, use the --spare-devices option to specify their number, for example:
# mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[bcde]
Note
The total number of RAID and spare devices must equal the number of devices that
you specify on the command line.
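For the second example above, the count works out as three RAID devices plus one spare, matching the four devices /dev/sdb through /dev/sde:

```shell
raid_devices=3
spare_devices=1
echo $((raid_devices + spare_devices))  # prints 4
```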
2. Add the RAID configuration to /etc/mdadm.conf:
# mdadm --examine --scan >> /etc/mdadm.conf
Note
This step is optional. It helps mdadm to assemble the arrays at boot time.
For example, the following entries in /etc/mdadm.conf define the devices and arrays that
correspond to /dev/md0 and /dev/md1:
DEVICE /dev/sd[c-g]
ARRAY /dev/md0 devices=/dev/sdf,/dev/sdg
ARRAY /dev/md1 spares=1 devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
To display summary and detailed information about MD RAID devices, you can use the --query and
--detail options with mdadm.
For more information, see the md(4), mdadm(8), and mdadm.conf(5) manual pages.
<source device>    /dev/sdd
<key file>         none
<options>          luks
This entry causes the operating system to prompt you to enter the passphrase at boot time.
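Assembled into a single line, the fields above form an /etc/crypttab entry such as the following sketch (the mapping name cryptfs matches the cryptsetup status example later in this section):

```
# /etc/crypttab: <target name>  <source device>  <key file>  <options>
cryptfs  /dev/sdd  none  luks
```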
Having created an encrypted volume and its device mapping, you can configure and use it in the same way
as you would a physical storage device. For example, you can configure it as an LVM physical volume,
file system, swap partition, Automatic Storage Management (ASM) disk, or raw device. Note that any
entry that you create in /etc/fstab must mount the mapped device (/dev/mapper/cryptfs), not
the physical device (/dev/sdd).
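For example, a sketch of such an /etc/fstab entry might look like the following (the mount point /secure is hypothetical):

```
# Mount the mapped (decrypted) device; never list the underlying /dev/sdd here
/dev/mapper/cryptfs  /secure  ext4  defaults  0 0
```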
To verify the status of an encrypted volume, use the following command:
# cryptsetup status cryptfs
/dev/mapper/cryptfs is active.
type: LUKS1
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/xvdd1
offset: 4096 sectors
size: 6309386 sectors
mode: read/write
Should you need to remove the device mapping, unmount any file system that the encrypted volume
contains, and run the following command:
# cryptsetup luksClose /dev/mapper/cryptfs
For more information, see the cryptsetup(8) and crypttab(5) manual pages.
Setting the ssd option does not imply that discard is also set.
If you configure swap files or partitions on an SSD, reduce the tendency of the kernel to perform
anticipatory writes to swap, which is controlled by the vm.swappiness kernel parameter and
displayed in /proc/sys/vm/swappiness. The value of vm.swappiness can be in the range 0 to 100,
where a higher value implies a greater propensity to write to swap. The default value is 60. The suggested
value when swap has been configured on an SSD is 1. You can use the following commands to change the
value:
# echo "vm.swappiness = 1" >> /etc/sysctl.conf
# sysctl -p
...
vm.swappiness = 1
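You can confirm the running value at any time by reading the corresponding proc file (no root privileges are needed to read it):

```shell
# Print the kernel's current swappiness setting
cat /proc/sys/vm/swappiness
```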
A hardware-based iSCSI initiator uses a dedicated iSCSI HBA. Oracle Linux supports iSCSI initiator
functionality in software. The kernel-resident device driver uses the existing network interface card (NIC)
and network stack to emulate a hardware iSCSI initiator. As the iSCSI initiator functionality is not available
at the level of the system BIOS, you cannot boot an Oracle Linux system from iSCSI storage.
To improve performance, some network cards implement TCP/IP Offload Engines (TOE) that can create a
TCP frame for the iSCSI packet in hardware. Oracle Linux does not support TOE, although suitable drivers
may be available directly from some card vendors.
For more information about LIO, see http://linux-iscsi.org/wiki/Main_Page.
2. Change to the /backstores/block directory and create a block storage object for the disk partitions
that you want to provide as LUNs, for example:
/> cd /backstores/block
/backstores/block> create name=LUN_0 dev=/dev/sdb
Created block storage object LUN_0 using /dev/sdb.
/backstores/block> create name=LUN_1 dev=/dev/sdc
Created block storage object LUN_1 using /dev/sdc.
The names that you assign to the storage objects are arbitrary.
3. Change to the /iscsi directory and create an iSCSI target:
/> cd /iscsi
/iscsi> create
Created target iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344.
Created TPG 1.
List the target portal group (TPG) hierarchy, which is initially empty:
/iscsi> ls
o- iscsi .......................................................... [Targets: 1]
o- iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 .............. [TPGs: 1]
o- tpg1 ............................................. [no-gen-acls, no-auth]
o- acls ........................................................ [ACLs: 0]
o- luns ........................................................ [LUNs: 0]
o- portals .................................................. [Portals: 0]
4. Change to the luns subdirectory of the TPG directory hierarchy and add the LUNs to the target portal
group:
/iscsi> cd iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344/tpg1/luns
/iscsi/iqn.20...344/tpg1/luns> create /backstores/block/LUN_0
Created LUN 0.
/iscsi/iqn.20...344/tpg1/luns> create /backstores/block/LUN_1
Created LUN 1.
5. Change to the portals subdirectory of the TPG directory hierarchy and specify the IP address and
port of the iSCSI endpoint:
/iscsi/iqn.20...344/tpg1/luns> cd ../portals
/iscsi/iqn.20.../tpg1/portals> create 10.150.30.72 3260
Using default IP port 3260
Created network portal 10.150.30.72:3260.
6. Configure the access rights for logins by initiators. For example, to configure demonstration
mode that does not require authentication, change to the TPG directory and set the values of the
authentication and demo_mode_write_protect attributes to 0, and the generate_node_acls
and cache_dynamic_acls attributes to 1:
/iscsi/iqn.20.../tpg1/portals> cd ..
/iscsi/iqn.20...14f87344/tpg1> set attribute authentication=0 demo_mode_write_protect=0 \
generate_node_acls=1 cache_dynamic_acls=1
Parameter authentication is now '0'.
Parameter demo_mode_write_protect is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
Caution
Demonstration mode is inherently insecure. For information about
configuring secure authentication modes, see http://linux-iscsi.org/wiki/
ISCSI#Define_access_rights.
7. Change to the root directory and save the configuration so that it persists across reboots of the system:
/iscsi/iqn.20...14f87344/tpg1> cd /
/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
2. Use the SendTargets discovery method to discover the iSCSI targets at a specified IP address:
# iscsiadm -m discovery -t sendtargets -p 10.150.30.72
10.150.30.72:3260,1 iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344
Note
An alternate discovery method is Internet Storage Name Service (iSNS).
The command also starts the iscsid service if it is not already running.
The following command displays information about the targets that is now stored in the discovery
database:
# iscsiadm -m discoverydb -t st -p 10.150.30.72
# BEGIN RECORD 6.2.0.873-14
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 10.150.30.72
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.auth.username = <empty>
discovery.sendtargets.auth.password = <empty>
discovery.sendtargets.auth.username_in = <empty>
discovery.sendtargets.auth.password_in = <empty>
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# END RECORD
4. Verify that the session is active, and display the available LUNs:
# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-14
Target: iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 (non-flash)
Current Portal: 10.0.0.2:3260,1
Persistent Portal: 10.0.0.2:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.mydom:ed7021225d52
Iface IPaddress: 10.0.0.2
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 5
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
.
.
.
************************
Attached SCSI devices:
************************
Host Number: 8 State: running
scsi8 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running
scsi8 Channel 00 Id 0 Lun: 1
Attached scsi disk sdc State: running
The LUNs are represented as SCSI block devices (sd*) in the local /dev directory, for example:
# fdisk -l | grep /dev/sd[bc]
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
You can view the initialization messages for the LUNs in the /var/log/messages file, for example:
# grep sdb /var/log/messages
...
May 18 14:19:36 localhost kernel: [12079.963376] sd 8:0:0:0: [sdb] Attached SCSI disk
...
You can configure and use a LUN in the same way as you would any other physical storage device.
For example, you can configure it as an LVM physical volume, file system, swap partition, Automatic
Storage Management (ASM) disk, or raw device.
Specify the _netdev option when creating mount entries for iSCSI LUNs in /etc/fstab, for example:
UUID=084591f8-6b8b-c857-f002-ecf8a3b387f3  /iscsi_mount_point  ext4  _netdev
This option indicates that the file system resides on a device that requires network access, and prevents
the system from attempting to mount the file system until the network has been enabled.
Note
Specify an iSCSI LUN in /etc/fstab by using UUID=UUID rather than the
device path. A device path can change after re-connecting the storage or
rebooting the system. You can use the blkid command to display the UUID of
a block device.
Any discovered LUNs remain available across reboots provided that the target
continues to serve those LUNs and you do not log the system off the target.
For more information, see the iscsiadm(8) and iscsid(8) manual pages.
To delete records from the database that are no longer supported by the target:
# iscsiadm -m discoverydb -t st -p 10.150.30.72 -o delete --discover
Configuring Multipathing
Without DM-Multipath, the system treats each path as being separate even though it connects the server
to the same storage device. DM-Multipath creates a single multipath device, /dev/mapper/mpathN, that
subsumes the underlying devices, /dev/sdc and /dev/sdf.
You can configure the multipathing service (multipathd) to handle I/O from and to a multipathed device
in one of the following ways:
Active/Active
Active/Passive (standby failover)
I/O uses only one path. If the active path fails, DM-Multipath switches
I/O to a standby path. This is the default configuration.
Note
DM-Multipath can provide failover in the case of path failure, such as in a SAN
fabric. Disk media failure must be handled by using either a software or hardware
RAID solution.
This command also starts the multipathd service and configures the service to start after system
reboots.
Skip the remaining steps of this procedure.
To edit /etc/multipath.conf and set up a more complex configuration such as active/active,
follow the remaining steps in this procedure.
3. Initialize the /etc/multipath.conf file:
# mpathconf --enable
defaults {
    udev_dir                /dev
    polling_interval        10
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    prio                    alua
    path_checker            readsector0
    rr_min_io               100
    max_fds                 8192
    rr_weight               priorities
    failback                immediate
    no_path_retry           fail
    user_friendly_names     yes
}
blacklist {
    # Blacklist by WWID
    wwid "*"
    # Blacklist by device name
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    # Blacklist by device type
    device {
        vendor   "COMPAQ  "
        product  "HSV110 (C)COMPAQ"
    }
}
blacklist_exceptions {
wwid "3600508b4000156d700012000000b0000"
wwid "360000970000292602744533032443941"
}
multipaths {
    multipath {
        wwid                  3600508b4000156d700012000000b0000
        alias                 blue
        path_grouping_policy  multibus
        path_checker          readsector0
        path_selector         "round-robin 0"
        failback              manual
        rr_weight             priorities
        no_path_retry         5
    }
    multipath {
        wwid   360000970000292602744533032443941
        alias  green
    }
}
devices {
    device {
        vendor                "SUN"
        product               "(StorEdge 3510|T4"
        path_grouping_policy  multibus
        getuid_callout        "/sbin/scsi_id --whitelisted --device=/dev/%n"
        path_selector         "round-robin 0"
        features              "0"
        hardware_handler      "0"
        path_checker          directio
        prio                  const
        rr_weight             uniform
        rr_min_io             1000
    }
}
In addition to the defaults section, the /etc/multipath.conf file can contain blacklist,
blacklist_exceptions, multipaths, and devices sections.
defaults {
    user_friendly_names  yes
    getuid_callout       "/bin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
}
multipaths {
    multipath {
        wwid 360000970000292602744533030303730
    }
}
In this standby failover configuration, I/O continues through a remaining active network interface if a
network interface on the iSCSI initiator fails.
For more information about configuring entries in /etc/multipath.conf, refer to the
multipath.conf(5) manual page.
5. Start the multipathd service and configure the service to start after system reboots:
# systemctl start multipathd
# systemctl enable multipathd
Multipath devices are identified in /dev/mapper by their World Wide Identifier (WWID), which is globally
unique. Alternatively, if you set the value of user_friendly_names to yes in the defaults section of
/etc/multipath.conf, or specify the --user_friendly_names y option to mpathconf, the device
is named mpathN, where N is the multipath group number. An alias attribute in the multipaths
section of /etc/multipath.conf specifies the name of the multipath device instead of a name based
on either the WWID or the multipath group number.
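For example, a minimal sketch of an alias entry might look like the following (the alias name oradata1 is hypothetical; the WWID is one of those used in the examples in this section):

```
# /etc/multipath.conf fragment: name the device with this WWID "oradata1" instead of mpathN
multipaths {
    multipath {
        wwid   360000970000292602744533030303730
        alias  oradata1
    }
}
```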
You can use the multipath device in /dev/mapper to reference the storage in the same way as you would
any other physical storage device. For example, you can configure it as an LVM physical volume, file
system, swap partition, Automatic Storage Management (ASM) disk, or raw device.
To display the status of DM-Multipath, use the mpathconf command, for example:
# mpathconf
multipath is enabled
find_multipaths is enabled
user_friendly_names is enabled
dm_multipath module is loaded
multipathd is running
To display the current multipath configuration, specify the -ll option to the multipath command, for
example:
# multipath -ll
mpath1 (360000970000292602744533030303730) dm-0 SUN,(StorEdge 3510|T4
size=20G features=0 hwhandler=0 wp=rw
|-+- policy=round-robin 0 prio=1 status=active
| `- 5:0:0:2 sdb 8:16 active ready running
`-+- policy=round-robin 0 prio=1 status=active
  `- 5:0:0:3 sdc 8:32 active ready running
If you edit /etc/multipath.conf, restart the multipathd service to make it re-read the file:
# systemctl restart multipathd
This chapter describes how to create, mount, check, and repair file systems, how to configure Access
Control Lists, and how to configure and manage disk quotas.
mkfs is a front end for builder utilities in /sbin such as mkfs.ext4. You can use either the mkfs
command with the -t fstype option or the builder utility to specify the type of file system to build. For
example, the following commands are equivalent ways of creating an ext4 file system with the label
Projects on the device /dev/sdb1:
# mkfs -t ext4 -L Projects /dev/sdb1
# mkfs.ext4 -L Projects /dev/sdb1
If you do not specify the file system type to mkfs, it creates an ext2 file system.
To display the type of a file system, use the blkid command:
# blkid /dev/sdb1
/dev/sdb1: UUID="ad8113d7-b279-4da8-b6e4-cfba045f66ff" TYPE="ext4" LABEL="Projects"
The blkid command also displays information about the device, such as its UUID and label.
Each file system type supports a number of features that you can enable or disable by specifying additional
options to mkfs or the build utility. For example, you can use the -J option to specify the size and location
of the journal used by the ext3 and ext4 file system types.
For more information, see the blkid(8), mkfs(8), and mkfs.fstype(8) manual pages.
You can use an existing directory as a mount point, but its contents are hidden until you unmount the
overlying file system.
The mount command attaches the device containing the file system to the mount point:
# mount [options] device mount_point
You can specify the device by its name, UUID, or label. For example, the following commands are
equivalent ways of mounting the file system on the block device /dev/sdb1:
# mount /dev/sdb1 /var/projects
# mount UUID="ad8113d7-b279-4da8-b6e4-cfba045f66ff" /var/projects
# mount LABEL="Projects" /var/projects
If you do not specify any arguments, mount displays all file systems that the system currently has
mounted, for example:
# mount
/dev/mapper/vg_host01-lv_root on / type ext4 (rw)
...
In this example, the LVM logical volume /dev/mapper/vg_host01-lv_root is mounted on /. The file
system type is ext4 and is mounted for both reading and writing. (You can also use the command cat /
proc/mounts to display information about mounted file systems.)
The df command displays information about how much space remains on mounted file systems, for
example:
# df -h
Filesystem                     Size ...
/dev/mapper/vg_host01-lv_root   36G ...
...
You can use the -B (bind) option to the mount command to attach a block device at multiple mount points.
You can also remount part of a directory hierarchy, which need not be a complete file system, somewhere
else. For example, the following command mounts /var/projects/project1 on /mnt:
# mount -B /var/projects/project1 /mnt
Each directory hierarchy acts as a mirror of the other. The same files are accessible in either location,
although any submounts are not replicated. These mirrors do not provide data redundancy.
You can also mount a file over another file, for example:
# touch /mnt/foo
# mount -B /etc/hosts /mnt/foo
In this example, /etc/hosts and /mnt/foo represent the same file. The existing file that acts as a
mount point is not accessible until you unmount the overlying file.
The -B option does not recursively attach any submounts below a directory hierarchy. To include
submounts in the mirror, use the -R (recursive bind) option instead.
When you use -B or -R, the file system mount options remain the same as those for the original mount
point. To modify the mount options, use a separate remount command.
You can mark the submounts below a mount point as being shared, private, or slave:
# mount --make-shared mount_point
# mount --make-private mount_point
# mount --make-slave mount_point
To prevent a mount from being mirrored by using the -B or -R options, mark its mount point as being
unbindable:
# mount --make-unbindable mount_point
To move a mounted file system, directory hierarchy, or file between mount points, use the -M option, for
example:
# touch /mnt/foo
# mount -M /mnt/foo /mnt/bar
Alternatively, you can specify the block device provided that it is mounted on only one mount point.
For more information, see the mount(8) and umount(8) manual pages.
auto
Allows the file system to be mounted automatically by using the mount -a command.
exec
Allows the execution of any binary files located in the file system.
loop
Uses a loop device (/dev/loop*) to mount a file that contains a file system image. See
Section 20.5, Mounting a File Containing a File System Image, Section 20.6, Creating a File
System on a File, and the losetup(8) manual page.
Note
The default number of available loop devices is 8. You can use the
kernel boot parameter max_loop=N to configure up to 255 devices.
Alternatively, add the following entry to /etc/modprobe.conf:
options loop max_loop=N
where N is the number of loop devices that you require (from 0 to 255),
and reboot the system.
225
noauto
Disallows the file system from being mounted automatically by using mount -a.
noexec
Disallows the execution of any binary files located in the file system.
nouser
Disallows any user other than root from mounting or unmounting the file system.
remount
Remounts the file system if it is already mounted. You would usually combine this option with
another option such as ro or rw to change the behavior of a mounted file system.
ro
Mounts the file system read-only.
rw
Mounts the file system for reading and writing.
user
Allows any user to mount the file system, but only that user or root can unmount it.
For example, mount /dev/sdd1 as /test with read-only access and only root permitted to mount or
unmount the file system:
# mount -o nouser,ro /dev/sdd1 /test
Mount an ISO image file on /media/cdrom with read-only access by using the loop device:
# mount -o ro,loop ./OracleLinux-R6-U1-Server-x86_64-dvd.iso /media/cdrom
Remount the /test file system with both read and write access, but do not permit the execution of any
binary files that are located in the file system:
# mount -o remount,rw,noexec /test
/boot   ext4   defaults   1 2
/       ext4   defaults   1 1
swap    swap   defaults   0 0
The first field is the device to mount specified by the device name, UUID, or device label, or the
specification of a remote file system. A UUID or device label is preferable to a device name if the device
name could change, for example:
LABEL=Projects   /var/projects   ext4   defaults   1 2
The second field is either the mount point for a file system or swap to indicate a swap partition.
The third field is the file system type, for example ext4 or swap.
The fourth field specifies any mount options.
The fifth column is used by the dump command. A value of 1 means dump the file system; 0 means the file
system does not need to be dumped.
The sixth column is used by the file system checker, fsck, to determine in which order to perform file
system checks at boot time. The value should be 1 for the root file system, 2 for other file systems. A
value of 0 skips checking, as is appropriate for swap, file systems that are not mounted at boot time, or for
binding of existing mounts.
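Putting the six fields together, a complete entry might look like the following sketch, which reuses the UUID and mount point from the earlier examples (the dump and fsck values shown are illustrative):

```
# device (by UUID)                          mount point    type  options   dump fsck
UUID=ad8113d7-b279-4da8-b6e4-cfba045f66ff   /var/projects  ext4  defaults  0    2
```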
For bind mounts, only the first four fields are specified, for example:
path   mount_point   none   bind
The first field specifies the path of the file system, directory hierarchy, or file that is to be mounted on
the mount point specified by the second field. The mount point must be a file if the path specifies a file;
otherwise, it must be a directory. The third and fourth fields are specified as none and bind.
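For example, to make the bind mount of /var/projects/project1 on /mnt shown earlier persistent, a sketch of the entry would be:

```
/var/projects/project1   /mnt   none   bind
```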
For more information, see the fstab(5) manual page.
2. Edit the /etc/auto.master configuration file to define map entries. Each map entry specifies a
mount point and a map file that contains definitions of the remote file systems that can be mounted, for
example:
/-      /etc/auto.direct
/misc   /etc/auto.misc
/net    -hosts
Here, the /-, /misc, and /net entries are examples of a direct map, an indirect map, and a host map
respectively. Direct map entries always specify /- as the mount point. Host maps always specify the
keyword -hosts instead of a map file.
A direct map contains definitions of directories that are automounted at the specified absolute path. In
the example, the auto.direct map file might contain an entry such as:
/usr/man   -fstype=nfs,ro,soft   host01:/usr/man
This entry mounts the file system /usr/man exported by host01 using the options ro and soft, and
creates the /usr/man mount point if it does not already exist. If the mount point already exists, the
mounted file system hides any existing files that it contains.
As the default file system type is NFS, the previous example can be shortened to read:
/usr/man   -ro,soft   host01:/usr/man
An indirect map contains definitions of directories (keys) that are automounted relative to the mount
point (/misc) specified in /etc/auto.master. In the example, the /etc/auto.misc map file might
contain entries such as the following:
xyz        -ro,soft                            host01:/xyz
cd         -fstype=iso9660,ro,nosuid,nodev     :/dev/cdrom
abc        -fstype=ext3                        :/dev/hda1
fenetres   -fstype=cifs,credentials=credfile   ://fenetres/c
The /misc directory must already exist, but the automounter creates a mount point for the keys xyz,
cd, and so on if they do not already exist, and removes them when it unmounts the file system. For
example, entering a command such as ls /misc/xyz causes the automounter to mount the /
xyz directory exported by host01 as /misc/xyz.
The cd and abc entries mount local file systems: an ISO image from the CD-ROM drive on /misc/cd
and an ext3 file system from /dev/hda1 on /misc/abc. The fenetres entry mounts a Samba share
as /misc/fenetres.
If a host map entry exists and a command references an NFS server by name relative to the mount
point (/net), the automounter mounts all directories that the server exports below a subdirectory of
the mount point named for the server. For example, the command cd /net/host03 causes the
automounter to mount all exports from host03 below the /net/host03 directory. By default, the
automounter uses the mount options nosuid,nodev,intr unless you override them in
the host map entry, for example:
/net   -hosts   -suid,dev,nointr
Note
The name of the NFS server must be resolvable to an IP address in DNS or in
the /etc/hosts file.
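For example, if host03 is not defined in DNS, a hypothetical /etc/hosts entry such as the following would make the name resolvable (the address shown is from the documentation range):

```
# Map the NFS server name used in the host map to an IP address
192.0.2.10   host03
```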
For more information, including details of using maps with NIS, NIS+, and LDAP, see the
auto.master(5) manual page.
3. Start the autofs service, and configure the service to start following a system reboot:
# systemctl start autofs
# systemctl enable autofs
You can configure various settings for autofs in /etc/sysconfig/autofs, such as the idle timeout
value after which a file system is automatically unmounted.
If you modify /etc/auto.master or /etc/sysconfig/autofs, restart the autofs service to make it
re-read these files:
# systemctl restart autofs
For more information, see the automount(8), autofs(5), and auto.master(5) manual pages.
/ISO   iso9660   ro,loop   0 0
/mnt   ext4      rw,loop   0 0
You can specify the file system as a device name, a mount point, or a label or UUID specifier, for example:
# fsck UUID=ad8113d7-b279-4da8-b6e4-cfba045f66ff
By default, fsck prompts you to choose whether it should apply a suggested repair to the file system. If
you specify the -y option, fsck assumes a yes response to all such questions.
For the ext2, ext3, and ext4 file system types, other commands that are used to perform file system
maintenance include dumpe2fs and debugfs. dumpe2fs prints super block and block group information
for the file system on a specified device. debugfs is an interactive file system debugger that requires
expert knowledge of the file system architecture. Similar commands exist for most file system types and
also require expert knowledge.
For more information, see the fsck(8) manual page.
To specify the number of mounts after which the file system is automatically checked:
# tune2fs -c mount_count device
where device specifies the block device corresponding to the file system.
A mount_count of 0 or -1 disables automatic checking based on the number of mounts.
Tip
Specifying a different value of mount_count for each file system reduces the
probability that the system checks all the file systems at the same time.
To specify the maximum interval between file system checks:
# tune2fs -i interval[unit] device
The unit can be d, w, or m for days, weeks, or months. The default unit is d for days. An interval of 0
disables checking that is based on the time that has elapsed since the last check. Even if the interval is
exceeded, the file system is not checked until it is next mounted.
For more information, see the tune2fs(8) manual page.
directory. A default ACL entry is set on directories only, and specifies default access information for any file
within the directory that does not have an access ACL.
2. Edit /etc/fstab and change the entries for the file systems with which you want to use ACLs so that
they include the appropriate option that supports ACLs, for example:
LABEL=/work   /work   ext4   acl   0 0
For mounted Samba shares, use the cifsacl option instead of acl.
3. Remount the file systems, for example:
# mount -o remount /work
[d:]u:user[:permissions]
Sets the access ACL for the user specified by name or user ID. The
permissions apply to the owner if a user is not specified.
[d:]g:group[:permissions]
Sets the access ACL for a group specified by name or group ID. The
permissions apply to the owning group if a group is not specified.
[d:]m[:][:permissions]
Sets the effective rights mask, which is the union of all permissions of
the owning group and all of the user and group entries.
[d:]o[:][:permissions]
Sets the access ACL for other (everyone else to whom no other rule
applies).
The permissions are r, w, and x for read, write, and execute as used with chmod.
The d: prefix is used to apply the rule to the default ACL for a directory.
To display a file's ACL, use the getfacl command, for example:
# getfacl foofile
# file: foofile
# owner: bob
# group: bob
user::rw-
user:fiona:r--
user:jack:rw-
user:jill:rw-
group::r--
mask::r--
other::r--
If extended ACLs are active on a file, the -l option to ls displays a plus sign (+) after the permissions, for
example:
# ls -l foofile
-rw-r--r--+ 1 bob bob
The following are examples of how to set and display ACLs for directories and files.
Grant read access to a file or directory by a user.
# setfacl -m u:user:r file
Display the name, owner, group, and ACL for a file or directory.
# getfacl file
Remove write access to a file for all groups and users by modifying the effective rights mask rather than
the ACL.
# setfacl -m m::rx file
The -b option removes all extended ACL entries from a file or directory.
# setfacl -b file
Set a default ACL of read and execute access for other on a directory:
# setfacl -m d:o:rx directory
Promote the ACL settings of a directory to default ACL settings that can be inherited.
# getfacl --access directory | setfacl -d -M- directory
For more information, see the acl(5), setfacl(1), and getfacl(1) manual pages.
specified limit. A hard limit specifies the maximum number of blocks or inodes available to a user or group
on the file system. Users or groups can exceed a soft limit for a period of time known as a grace period.
2. Include the usrquota or grpquota options in the file system's /etc/fstab entry, for example:
/dev/sdb1
/home
ext4
usrquota,grpquota
0 0
This command creates the files aquota.user and aquota.group in the root of the file system (/
home in this example).
For more information, see the quotacheck(8) manual page.
or for a group:
# edquota -g group
The command opens a text file in the default editor defined by the EDITOR environment variable,
allowing you to specify the limits for the user or group, for example:
Disk quotas for user guest (uid 501)
  Filesystem   blocks   soft   hard   inodes   soft   hard
  /dev/sdb1     10325      0      0     1054      0      0
The blocks and inodes entries show the user's current usage on a file system.
Tip
Setting a limit to 0 disables quota checking and enforcement for the
corresponding blocks or inodes category.
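The block counts that edquota and quota report are in units of 1 KB, so you can convert a limit to megabytes with simple shell arithmetic; for example, for a hypothetical soft limit of 10240 blocks:

```shell
# Quota block counts are 1 KB units, so 10240 blocks is 10 MB
soft_limit_blocks=10240
echo $((soft_limit_blocks / 1024))
```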
2. Edit the soft and hard block limits for number of blocks and inodes, and save and close the file.
Alternatively, you can use the setquota command to configure quota limits from the command line. The
-p option allows you to apply quota settings from one user or group to another user or group.
For more information, see the edquota(8) and setquota(8) manual pages.
The command opens a text file in the default editor defined by the EDITOR environment variable,
allowing you to specify the grace period, for example:
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem   Block grace period   Inode grace period
  /dev/sdb1         7days                 7days
2. Edit the grace periods for the soft limits on the number of blocks and inodes, and save and close the
file.
For more information, see the edquota(8) manual page.
To display information about file systems where usage is over the quota limits:
# quota -q
Users can also use the quota command to display their own and their group's usage.
For more information, see the quota(1) manual page.
To disable disk quotas for all users, groups, and file systems:
# quotaoff -aguv
To re-enable disk quotas for all users, groups, and file systems:
# quotaon -aguv
This chapter describes administration tasks for the btrfs, ext3, ext4, OCFS2, and XFS local file systems.
ext4
In addition to the features of ext3, the ext4 file system supports extents
(contiguous physical blocks), pre-allocation, delayed allocation, faster
file system checking, more robust journaling, and other enhancements.
The maximum supported file or file system size is 50 TB.
ocfs2
Although intended as a general-purpose, high-performance, high-availability, shared-disk file
system for use in clusters, it is possible to use Oracle Cluster File System version 2 (OCFS2) as a
standalone, non-clustered file system.
Although it might seem that there is no benefit in mounting OCFS2
locally as compared to alternative file systems such as ext4 or btrfs, you
can use the reflink command with OCFS2 to create copy-on-write
clones of individual files in a similar way to using the cp --reflink
command with the btrfs file system. Typically, such clones allow you to
save disk space when storing multiple copies of very similar files, such
as VM images or Linux Containers. In addition, mounting a local OCFS2
file system allows you to subsequently migrate it to a cluster file system
without requiring any conversion.
See Section 21.16, Creating a Local OCFS2 File System.
The maximum supported file or file system size is 16 TB.
vfat
The vfat file system (also known as FAT32) was originally developed for
MS-DOS. It does not support journaling and lacks many of the features
that are available with other file system types. It is mainly used to
exchange data between Microsoft Windows and Oracle Linux systems.
The maximum supported file size or file system size is 2 GB.
xfs
To see what file system types your system supports, use the following command:
# ls /sbin/mkfs.*
/sbin/mkfs.btrfs
/sbin/mkfs.cramfs
/sbin/mkfs.ext2
/sbin/mkfs.ext3
/sbin/mkfs.ext4
/sbin/mkfs.ext4dev
/sbin/mkfs.msdos
/sbin/mkfs.vfat
/sbin/mkfs.xfs
These executables are used to make the file system type specified by their extension. mkfs.msdos and
mkfs.vfat are alternate names for mkdosfs. mkfs.cramfs creates a compressed ROM, read-only
cramfs file system for use by embedded or small-footprint systems.
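As a small illustration of how these helper names encode the file system type, this sketch strips the mkfs. prefix from each helper's path to list the types supported on the host:

```shell
#!/bin/sh
# Sketch: list the file system types supported on this host by stripping
# the "mkfs." prefix from each helper's file name.
fstype_of() {
    # e.g. /sbin/mkfs.ext4 -> ext4
    printf '%s\n' "${1##*/mkfs.}"
}
for helper in /sbin/mkfs.*; do
    # guard against the literal glob when no helpers exist
    [ -e "$helper" ] && fstype_of "$helper"
done
```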
Command                                      Description

mkfs.btrfs -L label block_device             Create a btrfs file system with a label that you can
                                             use when mounting the file system. For example:

                                                 # mkfs.btrfs -L myvolume /dev/sdb2

                                             Note: The device must correspond to a partition if
                                             you intend to mount it by specifying the name of its
                                             label.

mkfs.btrfs block_device1 block_device2 ...   Stripe the file system data and mirror the file
                                             system metadata across several devices. For example:

                                                 # mkfs.btrfs /dev/sdd /dev/sde
For example, to stripe both the file system data and metadata as RAID 10 across six devices:

# mkfs.btrfs -d raid10 -m raid10 /dev/sd[fghijk]

When you want to mount the file system, you can specify it by any of its component devices, for example:

# mount /dev/sdf /raid10_mountpoint
To find out the RAID configuration of a mounted btrfs file system, use this command:
# btrfs filesystem df mountpoint
Note
The btrfs filesystem df command displays more accurate information about
the space used by a btrfs file system than the df command does.
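To illustrate reading that output, this sketch parses a captured btrfs filesystem df listing. The sample text and figures are invented for the example, not taken from a real system:

```shell
#!/bin/sh
# Sketch: summarize captured 'btrfs filesystem df' output. The sample
# listing below is illustrative, not from a real system.
sample='Data, RAID10: total=2.00GiB, used=512.00MiB
Metadata, RAID10: total=1.00GiB, used=208.00MiB
System, RAID10: total=64.00MiB, used=16.00KiB'

# Extract the "used" figure from the Data line only.
data_used() {
    printf '%s\n' "$sample" | awk -F'used=' '/^Data/ {print $2}'
}
data_used
```

Parsing a captured listing like this is useful in monitoring scripts, where the per-class (Data, Metadata, System) figures matter more than the aggregate that df reports.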
Use the following form of the btrfs command to display information about all the btrfs file systems on a
system:
Mount Option      Description

compress=lzo      Compress data using LZO compression.

compress=zlib     Compress data using zlib compression.

LZO offers faster compression, while zlib offers a better compression ratio.
You can also compress a btrfs file system at the same time that you defragment it.
To defragment a btrfs file system, use the following command:
# btrfs filesystem defragment filesystem_name
You can also defragment, and optionally compress, individual file system objects, such as directories and
files, within a btrfs file system.
# btrfs filesystem defragment [-c] file_name ...
Note
You can set up automatic defragmentation by specifying the autodefrag option
when you mount the file system. However, automatic defragmentation is not
recommended for large databases or for images of virtual machines.
Defragmenting a file or a subvolume that has a copy-on-write copy breaks the link between the
file and its copy. For example, if you defragment a subvolume that has a snapshot, the disk
usage by the subvolume and its snapshot will increase because the snapshot is no longer a
copy-on-write image of the subvolume.
Snapshots are a type of subvolume that records the contents of their parent subvolumes at the time that
you took the snapshot. If you take a snapshot of a btrfs file system and do not write to it, the snapshot
records the state of the original file system and forms a stable image from which you can make a backup.
If you make a snapshot writable, you can treat it as an alternate version of the original file system. The
copy-on-write functionality of the btrfs file system means that snapshots are quick to create, and consume
very little disk space initially.
Note
Taking snapshots of a subvolume is not a recursive process. If you create a
snapshot of a subvolume, every subvolume or snapshot that the subvolume
contains is mapped to an empty directory of the same name inside the snapshot.
The following table shows how to perform some common snapshot operations:

Command                                                     Description

btrfs subvolume snapshot pathname pathname/snapshot_path    Create a snapshot snapshot_path of a
                                                            parent subvolume or volume at pathname.

btrfs subvolume list pathname                               List the subvolumes or snapshots of a
                                                            subvolume or volume at pathname.

btrfs subvolume set-default ID pathname                     By default, mount the snapshot or
                                                            subvolume with the specified ID instead
                                                            of the parent volume.

btrfs subvolume get-default pathname                        Display the ID of the default subvolume
                                                            that is mounted for the specified volume.
You can mount a btrfs subvolume as though it were a disk device. If you mount a snapshot instead of its
parent subvolume, you effectively roll back the state of the file system to the time that the snapshot was
taken. By default, the operating system mounts the parent btrfs volume, which has an ID of 0, unless you
use set-default to change the default subvolume. If you set a new default subvolume, the system will
mount that subvolume instead in the future. You can override the default setting by specifying either of the
following mount options:
Mount Option                      Description

subvolid=snapshot-ID              Mount the snapshot or subvolume that has the specified ID.

subvol=pathname/snapshot_path     Mount the snapshot or subvolume at the specified path.
When you have rolled back a file system by mounting a snapshot, you can take snapshots of the snapshot
itself to record its state.
When you no longer require a subvolume or snapshot, use the following command to delete it:
# btrfs subvolume delete subvolume_path
Note
Deleting a subvolume deletes all subvolumes that are below it in the b-tree
hierarchy. For this reason, you cannot remove the topmost subvolume of a btrfs file
system, which has an ID of 0.
For details of how to use the snapper command to create and manage btrfs snapshots, see
Section 21.7.1, Using snapper with Btrfs Subvolumes.
To set up the snapper configuration for a mounted btrfs subvolume, use a command of the following form:

# snapper -c config_name create-config -f btrfs fs_name

Here config_name is the name of the configuration and fs_name is the path of the mounted btrfs
subvolume. The command adds an entry for config_name to /etc/sysconfig/snapper, creates the
configuration file /etc/snapper/configs/config_name, and sets up a .snapshots subvolume for
the snapshots.
For example, the following command sets up the snapper configuration for a btrfs root file system:
# snapper -c root create-config -f btrfs /
By default, snapper sets up a cron.hourly job to create snapshots in the .snapshots subvolume of
the subvolume and a cron.daily job to clean up old snapshots. You can edit the configuration file to
disable or change this behavior. For more information, see the snapper-configs(5) manual page.
There are three types of snapshot that you can create using snapper:

pre       Records the state of a subvolume before a modification. A pre snapshot should be paired
          with a post snapshot that you take after the modification.

post      Records the state of a subvolume after a modification. A post snapshot should be paired
          with the pre snapshot that you took before the modification.

single    Records the state of a subvolume but does not have any association with other snapshots
          of the subvolume.
For example, the following commands create pre and post snapshots of a subvolume:
# snapper -c config_name create -t pre -p
N
... Modify the subvolume's contents ...
# snapper -c config_name create -t post --pre-num N -p
N'

The -p option causes snapper to display the number of the snapshot so that you can reference it when
you create the post snapshot or when you compare the contents of the pre and post snapshots.
To display the files and directories that have been added, removed, or modified between the pre and post
snapshots, use the status subcommand:
# snapper -c config_name status N..N'
To display the differences between the contents of the files in the pre and post snapshots, use the diff
subcommand:
# snapper -c config_name diff N..N'
To undo the changes in the subvolume from post snapshot N' to pre snapshot N:
# snapper -c config_name undochange N..N'
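As a sketch of the full pre/post cycle, the following script strings the commands above together. The run() wrapper only prints each command (drop it to execute for real, as root); the configuration name root, the edited file, and the snapshot numbers are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the snapper pre/post workflow around a configuration change.
# run() is a dry-run wrapper: it prints each command instead of executing
# it. The config name "root" and the edited file are illustrative.
run() { echo "+ $*"; }

run snapper -c root create -t pre -p               # prints the pre snapshot number N
run sed -i 's/old/new/' /etc/example.conf          # the change being tracked (hypothetical file)
run snapper -c root create -t post --pre-num N -p  # prints the post snapshot number N2
run snapper -c root status N..N2                   # list files added/removed/changed
run snapper -c root undochange N..N2               # roll the change back if needed
```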
You can specify multiple instances of the -v option to display increasing amounts of debugging output. The
-f option allows you to save the output to a file. Both of these options are implicit in the following usage
examples.
The following form of the send operation writes a complete description of how to convert one subvolume
into another:
# btrfs send -p parent_subvol sent_subvol
If a subvolume, such as a snapshot of the parent volume, will be available during the receive
operation and some of the data can be recovered from it, you can specify that subvolume as a
clone source to reduce the size of the output file:
# btrfs send [-p parent_subvol] -c clone_src [-c clone_src] ... subvol
You can specify the -c option multiple times if there is more than one clone source. If you do not specify
the parent subvolume, btrfs chooses a suitable parent from the clone sources.
You use the receive operation to regenerate the sent subvolume at a specified path:
# btrfs receive [-f sent_file] mountpoint
2. Run sync to ensure that the snapshot has been written to disk:
# sync
3. Create a subvolume or directory on a btrfs file system as a backup area to receive the snapshot, for
example, /backupvol.
4. Send the snapshot to /backupvol:
# btrfs send /vol/backup_0 | btrfs receive /backupvol
b. Run sync to ensure that the snapshot has been written to disk:
# sync
c. Send only the differences between the reference backup and the new backup to the backup area:
# btrfs send -p /vol/backup_0 /vol/backup_1 | btrfs receive /backupvol
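The full-plus-incremental cycle above can be sketched end to end as follows. The run() wrapper prints each command instead of executing it; the read-only snapshot step (-r) and the /vol paths are assumptions based on the example:

```shell
#!/bin/sh
# Sketch of a full-plus-incremental backup cycle with btrfs send/receive.
# run() prints instead of executing; paths mirror the example above, and
# the -r (read-only snapshot) step is an assumption: send requires a
# read-only source.
run() { echo "+ $*"; }

run btrfs subvolume snapshot -r /vol /vol/backup_0      # reference (level-0) snapshot
run sync                                                # flush the snapshot to disk
run 'btrfs send /vol/backup_0 | btrfs receive /backupvol'
# ... later, after further changes to /vol ...
run btrfs subvolume snapshot -r /vol /vol/backup_1
run sync
run 'btrfs send -p /vol/backup_0 /vol/backup_1 | btrfs receive /backupvol'
```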
For example:
# btrfs qgroup limit 1g /myvol/subvol1
# btrfs qgroup limit 512m /myvol/subvol2
To find out the quota usage for a subvolume, use the btrfs qgroup show path command:
To replace a device in a mounted btrfs file system, use the following form of the btrfs replace command:

# btrfs replace start [-r] source_dev target_dev mountpoint

Here source_dev and target_dev specify the device to be replaced (source device) and the replacement
device (target device), and mountpoint specifies the file system that is using the source device. The target
device must be the same size as or larger than the source device. If the source device is no longer
available or you specify the -r option, the data is reconstructed by using redundant data obtained from
other devices (such as another available mirror). The source device is removed from the file system when
the operation is complete.
You can use the btrfs replace status mountpoint and btrfs replace cancel mountpoint
commands to check the progress of the replacement operation or to cancel the operation.
To convert an ext2, ext3, or ext4 file system other than the root file system to btrfs:
1. Unmount the file system.
# umount mountpoint
2. Run the correct version of fsck (for example, fsck.ext4) on the underlying device to check and
correct the integrity of file system.
# fsck.extN -f device
3. Convert the file system to btrfs:

# btrfs-convert device
4. Edit the file /etc/fstab, and change the file system type of the file system to btrfs, for example:

/dev/sdb    /myfs    btrfs    defaults    0 0
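The whole conversion flow can be sketched as follows. The run() wrapper prints instead of executing; the device name, mount point, and the use of btrfs-convert for the conversion step itself are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of converting a non-root ext4 file system to btrfs. run() is a
# dry-run wrapper (prints instead of executing); /dev/sdb, /myfs, and the
# btrfs-convert step are illustrative assumptions.
run() { echo "+ $*"; }

run umount /myfs
run fsck.ext4 -f /dev/sdb      # check the source file system first
run btrfs-convert /dev/sdb     # in-place conversion (assumed tool)
# Update the /etc/fstab entry so its type field reads btrfs, then:
run mount /myfs
```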
In this example, the installation root file system subvolume has an ID of 5. The subvolume with ID 258
(install) is currently mounted as /. Figure 21.1, Layout of the root File System Following Installation
illustrates the layout of the file system:
Figure 21.1 Layout of the root File System Following Installation
The top-level subvolume with ID 5 records the contents of the root file system at the end of
installation. The default subvolume (install) with ID 258 is currently mounted as the active root file
system.
The mount command shows the device that is currently mounted as the root file system:
# mount
/dev/mapper/vg_btrfs-lv_root on / type btrfs (rw)
...
To mount the installation root file system volume, you can use the following commands:
# mkdir /instroot
# mount -o subvolid=5 /dev/mapper/vg_btrfs-lv_root /instroot
If you list the contents of /instroot, you can see both the contents of the installation root file system
volume and the install snapshot, for example:
# ls /instroot
bin   cgroup  etc   install  lib64  misc  net  proc  sbin     srv  tmp  var
boot  dev     home  lib      media  mnt   opt  root  selinux  sys  usr
The contents of / and /instroot/install are identical, as demonstrated in the following example
where a file (foo) created in /instroot/install is also visible in /:

# touch /instroot/install/foo
# ls /
bin   cgroup  etc  home      lib    media  mnt  opt   root  selinux  sys  usr
boot  dev     foo  instroot  lib64  misc   net  proc  sbin  srv      tmp  var
# ls /instroot/install
bin   cgroup  etc  home      lib    media  mnt  opt   root  selinux  sys  usr
boot  dev     foo  instroot  lib64  misc   net  proc  sbin  srv      tmp  var
# rm -f /foo
# ls /
bin   cgroup  etc   instroot  lib64  misc  net  proc  sbin     srv  tmp  var
boot  dev     home  lib       media  mnt   opt  root  selinux  sys  usr
# ls /instroot/install
bin   cgroup  etc   instroot  lib64  misc  net  proc  sbin     srv  tmp  var
boot  dev     home  lib       media  mnt   opt  root  selinux  sys  usr
2. Change directory to the mount point and take the snapshot. In this example, the install subvolume
is currently mounted as the root file system.
# cd /mnt
# btrfs subvolume snapshot install root_snapshot_1
Create a snapshot of 'install' in './root_snapshot_1'
3. Change directory to / and unmount the top level of the file system.
# cd /
# umount /mnt
3. Use the following command with the block device corresponding to the ext2 file system:
# tune2fs -j device
5. Correct any entry for the file system in /etc/fstab so that its type is defined as ext3 instead of
ext2.
6. You can now remount the file system whenever convenient:
# mount filesystem
The command adds an ext3 journal to the file system as the file /.journal.
2. Run the mount command to determine the device that is currently mounted as the root file system.
In the following example, the root file system corresponds to the disk partition /dev/sda2:
# mount
/dev/sda2 on / type ext2 (rw)
where device is the root file system device (for example, /dev/sda2).
The command moves the .journal file to the journal inode.
9. Create a mount point (/mnt1) and mount the converted root file system on it.
bash-4.1# mkdir /mnt1
bash-4.1# mount -t ext3 device /mnt1
10. Use the vi command to edit /mnt1/etc/fstab, and change the file system type of the root file
system to ext3, for example:

/dev/sda2    /    ext3    defaults    1 1
11. Create the file .autorelabel in the root of the mounted file system.
bash-4.1# touch /mnt1/.autorelabel
The presence of the .autorelabel file in / instructs SELinux to recreate the security attributes of all
files on the file system.
Note
If you do not create the .autorelabel file, you might not be able to boot
the system successfully. If you forget to create the file and the reboot fails,
either disable SELinux temporarily by specifying selinux=0 to the kernel boot
parameters, or run SELinux in permissive mode by specifying enforcing=0.
12. Unmount the converted root file system.
bash-4.1# umount /mnt1
13. Remove the boot CD, DVD, or ISO, and reboot the system.
For more information, see the tune2fs(8) manual page.
For example, create a locally mountable OCFS2 volume on /dev/sdc1 with one node slot and the label
localvol:
# mkfs.ocfs2 -M local --fs-features=local -N 1 -L "localvol" /dev/sdc1
You can use the tunefs.ocfs2 utility to convert a local OCFS2 file system to cluster use, for example:
# umount /dev/sdc1
# tunefs.ocfs2 -M cluster --fs-features=cluster -N 8 /dev/sdc1
This example also increases the number of node slots from 1 to 8 to allow up to eight nodes to mount the
file system.
For information about using OCFS2 with clusters, see Chapter 23, Oracle Cluster File System Version 2.
XFS has a large number of features that make it suitable for deployment in an enterprise-level computing
environment that requires the implementation of very large file systems:
XFS implements journaling for metadata operations, which guarantees the consistency of the file
system following loss of power or a system crash. XFS records file system updates asynchronously
to a circular buffer (the journal) before it commits the actual data updates to disk. The journal can
be located either internally in the data section of the file system, or externally on a separate device to
reduce contention for disk access. If the system crashes or loses power, XFS reads the journal when the file
system is remounted, and replays any pending metadata operations to ensure the consistency of the file
system. The speed of this recovery does not depend on the size of the file system.
XFS is internally partitioned into allocation groups, which are virtual storage regions of fixed size. Any
files and directories that you create can span multiple allocation groups. Each allocation group manages
its own set of inodes and free space independently of other allocation groups to provide both scalability
and parallelism of I/O operations. If the file system spans many physical devices, allocation groups
can optimize throughput by taking advantage of the underlying separation of channels to the storage
components.
XFS is an extent-based file system. To reduce file fragmentation and file scattering, each file's blocks
can have variable length extents, where each extent consists of one or more contiguous blocks. XFS's
space allocation scheme is designed to efficiently locate free extents that it can use for file system
operations. XFS does not allocate storage to the holes in sparse files. If possible, the extent allocation
map for a file is stored in its inode. Large allocation maps are stored in a data structure maintained by
the allocation group.
To maximize throughput for XFS file systems that you create on an underlying striped, software or
hardware-based array, you can use the su and sw arguments to the -d option of the mkfs.xfs
command to specify the size of each stripe unit and the number of units per stripe. XFS uses the
information to align data, inodes, and journal appropriately for the storage. On lvm and md volumes and
some hardware RAID configurations, XFS can automatically select the optimal stripe parameters for you.
To reduce fragmentation and increase performance, XFS implements delayed allocation, reserving file
system blocks for data in the buffer cache, and allocating the block when the operating system flushes
that data to disk.
XFS supports extended attributes for files, where the size of each attribute's value can be up to 64 KB,
and each attribute can be allocated to either a root or a user name space.
Direct I/O in XFS implements high throughput, non-cached I/O by performing DMA directly between an
application and a storage device, utilizing the full I/O bandwidth of the device.
To support the snapshot facilities that volume managers, hardware subsystems, and databases provide,
you can use the xfs_freeze command to suspend and resume I/O for an XFS file system. See
Section 21.22, Freezing and Unfreezing an XFS File System.
To defragment individual files in an active XFS file system, you can use the xfs_fsr command. See
Section 21.25, Defragmenting an XFS File System.
To grow an XFS file system, you can use the xfs_growfs command. See Section 21.21, Growing an
XFS File System.
To back up and restore a live XFS file system, you can use the xfsdump and xfsrestore commands.
See Section 21.24, Backing up and Restoring XFS File Systems.
XFS supports user, group, and project disk quotas on block and inode usage that are initialized when
the file system is mounted. Project disk quotas allow you to set limits for individual directory hierarchies
within an XFS file system without regard to which user or group has write access to that directory
hierarchy.
You can find more information about XFS at http://xfs.org/index.php/XFS_Papers_and_Documentation.
3. If you require the XFS development and QA packages, additionally subscribe your system to the
ol7_x86_64_optional channel and use yum to install them:
When you create an XFS file system, mkfs.xfs reports the geometry that it has chosen, including the
inode size (isize), sector size (sectsz), block size (bsize), stripe unit (sunit), and extent size (extsz).
To create an XFS file system with a stripe-unit size of 32 KB and 6 units per stripe, you would specify the
su and sw arguments to the -d option, for example:
# mkfs.xfs -d su=32k,sw=6 /dev/vg0/lv1
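The stripe arguments follow directly from the array geometry. The following hedged sketch derives them for a hypothetical array with a 32 KB chunk size and six data disks:

```shell
#!/bin/sh
# Sketch: derive mkfs.xfs stripe arguments from hypothetical RAID
# geometry. su = per-disk chunk size, sw = number of data-bearing disks
# (for example, an 8-disk RAID 6 array has 6 data disks).
chunk_kb=32       # RAID chunk size per disk, in KB (assumed)
data_disks=6      # data-bearing disks in the stripe (assumed)

mkfs_args() {
    printf 'mkfs.xfs -d su=%sk,sw=%s %s' "$chunk_kb" "$data_disks" "$1"
}
mkfs_args /dev/vg0/lv1
```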
To display the existing label and then set a new label on an unmounted device:

# xfs_admin -l /dev/sdb
label = ""
# xfs_admin -L "VideoRecords" /dev/sdb
writing all SBs
new label = "VideoRecords"
Note
The label can be a maximum of 12 characters in length.
To display the existing UUID and then generate a new UUID:
# xfs_admin -u /dev/sdb
UUID = cd4f1cc4-15d8-45f7-afa4-2ae87d1db2ed
# xfs_admin -U generate /dev/sdb
writing all SBs
new UUID = c1b9d5a2-f162-11cf-9ece-0020afc76f16
To disable and then re-enable lazy counters on an unmounted device:

# xfs_admin -c 0 /dev/sdb
Disabling lazy-counters
# xfs_admin -c 1 /dev/sdb
Enabling lazy-counters
To increase the size of the file system to the maximum size that the underlying device supports, specify the
-d option:
# xfs_growfs -d /myxfs1
Note
You can also use the xfs_freeze command with btrfs, ext3, and ext4 file
systems.
Mount Option     Description

gqnoenforce      Enable group quotas. Report usage, but do not enforce usage limits.

gquota           Enable group quotas and enforce usage limits.

pqnoenforce      Enable project quotas. Report usage, but do not enforce usage limits.

pquota           Enable project quotas and enforce usage limits.

uqnoenforce      Enable user quotas. Report usage, but do not enforce usage limits.

uquota           Enable user quotas and enforce usage limits.
To show the block usage limits and the current usage in the myxfs file system for all users, use the
xfs_quota command:

# xfs_quota -x -c 'report -h' /myxfs
User quota on /myxfs (/dev/vg0/lv0)
                        Blocks
User ID      Used   Soft   Hard  Warn/Grace
---------- ---------------------------------
root            0      0      0   00 [------]
guest           0   200M   250M   00 [------]
The following forms of the command display the free and used counts for blocks and inodes respectively in
the manner of the df -h command:

# xfs_quota -c 'df -h' /myxfs
Filesystem          Size   Used  Avail  Use%  Pathname
/dev/vg0/lv0      200.0G  32.2M  20.0G    1%  /myxfs

# xfs_quota -c 'df -ih' /myxfs
Filesystem        Inodes   Used   Free  Use%  Pathname
/dev/vg0/lv0       21.0m      4  21.0m    1%  /myxfs
If you specify the -x option to enter expert mode, you can use subcommands such as limit to set soft
and hard limits for block and inode usage by an individual user, for example:
# xfs_quota -x -c 'limit bsoft=200m bhard=250m isoft=200 ihard=250 guest' /myxfs
This command requires that you have mounted the file system with user quotas enabled.
To set limits for a group on an XFS file system that you have mounted with group quotas enabled, specify
the -g option to limit, for example:
# xfs_quota -x -c 'limit -g bsoft=5g bhard=6g devgrp' /myxfs
quota limits for a privileged user (for example, /var/log) or if many users or groups have write access to
a directory (for example, /var/tmp).
To define a project and set quota limits on it:
1. Mount the XFS file system with project quotas enabled:
# mount -o pquota device mountpoint
For example, to enable project quotas for the /myxfs file system:
# mount -o pquota /dev/vg0/lv0 /myxfs
2. Define a unique project ID for the directory hierarchy in the /etc/projects file:
# echo project_ID:mountpoint/directory >> /etc/projects
3. Create an entry in the /etc/projid file that maps a project name to the project ID:
# echo project_name:project_ID >> /etc/projid
For example, to map the project name testproj to the project with ID 51:
# echo testproj:51 >> /etc/projid
4. Use the project subcommand of xfs_quota to define a managed tree in the XFS file system for the
project:

# xfs_quota -x -c 'project -s project_name' mountpoint

For example, to define a managed tree in the /myxfs file system for the project testproj, which
corresponds to the directory hierarchy /myxfs/testdir:

# xfs_quota -x -c 'project -s testproj' /myxfs
5. Use the limit subcommand to set limits on the disk usage of the project:

# xfs_quota -x -c 'limit -p arguments project_name' mountpoint

For example, to set a hard limit of 10 GB of disk space for the project testproj:

# xfs_quota -x -c 'limit -p bhard=10g testproj' /myxfs
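The files used in steps 2 and 3 are simple colon-separated maps. This hedged sketch stages equivalent entries in a scratch directory and looks up a project ID, without touching the real /etc files; the ID 51, name testproj, and directory echo the example above:

```shell
#!/bin/sh
# Sketch: stage /etc/projects and /etc/projid entries in a scratch
# directory. Project ID 51, name testproj, and the directory path mirror
# the example in the text.
tmpdir=$(mktemp -d)
echo '51:/myxfs/testdir' >> "$tmpdir/projects"   # project_ID:directory
echo 'testproj:51'       >> "$tmpdir/projid"     # project_name:project_ID

# Look up the numeric ID for a project name, as the quota tools would:
proj_id() { awk -F: -v n="$1" '$1 == n {print $2}' "$tmpdir/projid"; }
id=$(proj_id testproj)
echo "$id"
rm -rf "$tmpdir"
```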
For more information, see the projects(5), projid(5), and xfs_quota(8) manual pages.
You can use the xfsdump command to create a backup of an XFS file system on a device such as a tape
drive, or in a backup file on a different file system. A backup can span multiple physical media that are
written on the same device, and you can write multiple backups to the same medium. You can write only
a single backup to a file. The command does not overwrite existing XFS backups that it finds on physical
media. You must use the appropriate command to erase a physical medium if you need to overwrite any
existing backups.
For example, the following command writes a level 0 (base) backup of the XFS file system /myxfs to the
device /dev/st0 and assigns a session label to the backup:
# xfsdump -l 0 -L "Backup level 0 of /myxfs `date`" -f /dev/st0 /myxfs
You can make incremental dumps relative to an existing backup by using the command:
# xfsdump -l level -L "Backup level level of /myxfs `date`" -f /dev/st0 /myxfs
A level 1 backup records only file system changes since the level 0 backup, a level 2 backup records only
the changes since the latest level 1 backup, and so on up to level 9.
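A common way to use dump levels is a rotation script. This is a minimal sketch, assuming a weekly level 0 on Sunday and daily level 1 incrementals; the device and file system names echo the examples here:

```shell
#!/bin/sh
# Sketch: dump-level rotation -- a full (level 0) dump on Sunday and a
# level 1 incremental on other days. /dev/st0 and /myxfs mirror the
# examples in the text; run the printed command as root in real use.
level_for_day() {
    case "$1" in
        Sun) echo 0 ;;   # weekly base dump
        *)   echo 1 ;;   # daily incremental against the base
    esac
}
lvl=$(level_for_day "$(date +%a)")   # note: %a output is locale-dependent
echo "xfsdump -l $lvl -L \"Backup level $lvl of /myxfs $(date)\" -f /dev/st0 /myxfs"
```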
If you interrupt a backup by typing Ctrl-C and you did not specify the -J option (suppress the dump
inventory) to xfsdump, you can resume the dump at a later date by specifying the -R option:
# xfsdump -R -l 1 -L "Backup level 1 of /myxfs `date`" -f /dev/st0 /myxfs
In this example, the backup session label from the earlier, interrupted session is overridden.
You use the xfsrestore command to find out information about the backups you have made of an XFS
file system or to restore data from a backup.
The xfsrestore -I command displays information about the available backups, including the session ID
and session label. If you want to restore a specific backup session from a backup medium, you can specify
either the session ID or the session label.
For example, to restore an XFS file system from a level 0 backup by specifying the session ID:
# xfsrestore -f /dev/st0 -S c76b3156-c37c-5b6e-7564-a0963ff8ca8f /myxfs
If you specify the -r option, you can cumulatively recover all data from a level 0 backup and the higher-level backups that are based on that backup:
# xfsrestore -r -f /dev/st0 -v silent /myxfs
The command searches the archive looking for backups based on the level 0 backup, and prompts you to
choose whether you want to restore each backup in turn. After restoring the backup that you select, the
command exits. You must run this command multiple times, first selecting to restore the level 0 backup,
and then subsequent higher-level backups up to and including the most recent one that you require to
restore the file system data.
Note
After completing a cumulative restoration of an XFS file system, you should delete
the housekeeping directory that xfsrestore creates in the destination directory.
You can recover a selected file or subdirectory contents from the backup medium, as shown in the
following example, which recovers the contents of /myxfs/profile/examples to /tmp/profile/
examples from the backup with a specified session label:
# xfsrestore -f /dev/sr0 -L "Backup level 0 of /myxfs Sat Mar 2 14:47:59 GMT 2013" \
-s profile/examples /usr/tmp
If you specify the -i option, xfsrestore enters interactive mode, which allows you to browse a backup
as though it were a file system. You can change directories, list files, add files, delete files, or extract files
from a backup.
To copy the entire contents of one XFS file system to another, you can combine xfsdump and
xfsrestore, using the -J option to suppress the usual dump inventory housekeeping that the commands
perform:
# xfsdump -J - /myxfs | xfsrestore -J - /myxfsclone
For more information, see the xfsdump(8) and xfsrestore(8) manual pages.
If you run the xfs_fsr command without any options, the command defragments all currently mounted,
writeable XFS file systems that are listed in /etc/mtab. For a period of two hours, the command
passes over each file system in turn, attempting to defragment the top ten percent of files that have
the greatest number of extents. After two hours, the command records its progress in the file
/var/tmp/.fsrlast_xfs, and it resumes from that point if you run the command again.
For more information, see the xfs_fsr(8) manual page.
If you can mount the file system and you do not have a suitable backup, you can use xfsdump to attempt
to back up the existing file system data. However, the command might fail if the file system's metadata has
become too corrupted.
You can use the xfs_repair command to attempt to repair an XFS file system specified by its device
file. The command replays the journal log to fix any inconsistencies that might have resulted from the
file system not being cleanly unmounted. Unless the file system has an inconsistency, it is usually not
necessary to use the command, as the journal is replayed every time that you mount an XFS file system.
# xfs_repair device
If the journal log has become corrupted, you can reset the log by specifying the -L option to xfs_repair.
Warning
Resetting the log can leave the file system in an inconsistent state, resulting in data
loss and data corruption. Unless you are experienced in debugging and repairing
XFS file systems using xfs_db, it is recommended that you instead recreate the
file system and restore its contents from a backup.
If you cannot mount the file system or you do not have a suitable backup, running xfs_repair is the only
viable option unless you are experienced in using xfs_db.
xfs_db provides an internal command set that allows you to debug and repair an XFS file system
manually. The commands allow you to perform scans on the file system, and to navigate and display its
data structures. If you specify the -x option to enable expert mode, you can modify the data structures.
# xfs_db [-x] device
For more information, see the xfs_check(8), xfs_db(8) and xfs_repair(8) manual pages, and the
help command within xfs_db.
This chapter describes administration tasks for the NFS and Samba shared file systems.
NFS
The Network File System (NFS) is a distributed file system that allows a client computer to access
files over a network as though the files were on local storage. See Section 22.2, About NFS.
Samba
Samba enables the provision of file and print services for Microsoft Windows clients and can
integrate with a Windows workgroup, NT4 domain, or Active Directory domain. See Section 22.3,
About Samba.
2. Edit the /etc/exports file to define the directories that the server will make available for clients to
mount, for example:
/var/folder 192.0.2.102(rw,async)
/usr/local/apps *(all-squash,anonuid=501,anongid=501,ro)
/var/projects/proj1 192.168.1.0/24(ro) mgmtpc(rw)
Each entry consists of the local path to the exported directory, followed by a list of clients that can
mount the directory with client-specific mount options in parentheses. In this example:
The client system with the IP address 192.0.2.102 can mount /var/folder with read and write
permissions. All writes to the disk are asynchronous, which means that the server does not wait for
write requests to be written to disk before responding to further requests from the client.
All clients can mount /usr/local/apps read-only, and all connecting users including root are
mapped to the local unprivileged user with UID 501 and GID 501.
All clients on the 192.168.1.0 subnet can mount /var/projects/proj1 read-only, and the client
system named mgmtpc can mount the directory with read-write permissions.
Note
There is no space between a client specifier and the parenthesized list of
options.
For more information, see the exports(5) manual page.
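The exports entry format (path, client specifier, parenthesized options) can be pulled apart with POSIX parameter expansion, which is handy in audit scripts. A minimal sketch, using the first example entry above:

```shell
#!/bin/sh
# Sketch: split one /etc/exports entry into its three parts with POSIX
# parameter expansion. The entry mirrors the first example in the text;
# entries with multiple client specifiers would need a loop.
entry='/var/folder 192.0.2.102(rw,async)'

path=${entry%% *}                          # everything before the first space
client=${entry#* }; client=${client%%(*}   # between the space and the '('
opts=${entry#*(};   opts=${opts%)}         # inside the parentheses
echo "path=$path client=$client opts=$opts"
```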
3. Start the nfs-server service, and configure the service to start following a system reboot:
# systemctl start nfs-server
# systemctl enable nfs-server
4. If the server will serve NFSv4 clients, edit /etc/idmapd.conf and edit the definition for the Domain
parameter to specify the DNS domain name of the server, for example:
Domain = mydom.com
This setting prevents the owner and group being unexpectedly listed as the anonymous user or group
(nobody or nogroup) on NFS clients when the all_squash mount option has not been specified.
5. If you need to allow access through the firewall for NFSv4 clients only, use the following commands:
# firewall-cmd --zone=zone --add-service=nfs
# firewall-cmd --permanent --zone=zone --add-service=nfs
This configuration assumes that rpc.nfsd listens for client requests on TCP port 2049.
6. If you need to allow access through the firewall for NFSv3 clients as well as NFSv4 clients:
a. Edit /etc/sysconfig/nfs and create port settings for handling network mount requests and
status monitoring:
# Port rpc.mountd should listen on.
MOUNTD_PORT=892
# Port rpc.statd should listen on.
STATD_PORT=662
The port values shown in this example are the default settings that are commented-out in the file.
b. Edit /etc/sysctl.conf and configure settings for the TCP and UDP ports on which the network
lock manager should listen:
fs.nfs.nlm_tcpport = 32803
fs.nfs.nlm_udpport = 32769
c. To verify that none of the ports that you have specified in /etc/sysconfig/nfs or /etc/
sysctl.conf is in use, enter the following commands:
# lsof -i tcp:32803
# lsof -i udp:32769
# lsof -i :892
# lsof -i :662
If any port is in use, use the lsof -i command to determine an unused port and amend the
setting in /etc/sysconfig/nfs or /etc/sysctl.conf as appropriate.
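The checks in this step can be looped over all candidate ports. The following sketch uses bash's built-in /dev/tcp device to probe each port on the local host, so it works even where lsof is not installed; it only detects TCP listeners, so treat it as a quick first pass rather than a replacement for lsof:

```shell
# Report whether anything is listening locally on each candidate NFS port.
for p in 32803 32769 892 662; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
    echo "port $p is in use"
  else
    echo "port $p is free"
  fi
done
```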
d. Shut down and reboot the server.
# systemctl reboot
NFS fails to start if one of the specified ports is in use, and reports an error in
/var/log/messages. Edit /etc/sysconfig/nfs or /etc/sysctl.conf as appropriate to use a different
port number for the service that could not start, and attempt to restart the nfslock and nfs-server
services. You can use the rpcinfo -p command to confirm on which ports RPC services
are listening.
e. Restart the firewall service and configure the firewall to allow NFSv3 connections:
# systemctl restart firewalld
# firewall-cmd --zone=zone \
--add-port=2049/tcp --add-port=2049/udp \
--add-port=111/tcp --add-port=111/udp \
--add-port=32803/tcp --add-port=32769/udp \
--add-port=892/tcp --add-port=892/udp \
--add-port=662/tcp --add-port=662/udp
# firewall-cmd --permanent --zone=zone \
--add-port=2049/tcp --add-port=2049/udp \
--add-port=111/tcp --add-port=111/udp \
--add-port=32803/tcp --add-port=32769/udp \
--add-port=892/tcp --add-port=892/udp \
--add-port=662/tcp --add-port=662/udp
The port values shown in this example assume that the default port settings in /etc/sysconfig/
nfs and /etc/sysctl.conf are available for use by RPC services. This configuration also
assumes that rpc.nfsd and rpcbind listen on ports 2049 and 111 respectively.
7. Use the showmount -e command to display a list of the exported file systems, for example:
# showmount -e
Export list for host01.mydom.com
/var/folder 192.0.2.102
/usr/local/apps *
/var/projects/proj1 192.168.1.0/24 mgmtpc
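Output in this form is easy to post-process. As a sketch, the following extracts just the exported paths from a captured showmount -e listing (the sample text is the output shown above):

```shell
# Sample output captured from showmount -e (the first line is a banner).
exports='Export list for host01.mydom.com
/var/folder 192.0.2.102
/usr/local/apps *
/var/projects/proj1 192.168.1.0/24 mgmtpc'
# Skip the banner line and print only the path column.
awk 'NR > 1 {print $1}' <<< "$exports"
```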
showmount -a lists the current clients and the file systems that they have mounted, for example:
# showmount -a
mgmtpc.mydom.com:/var/projects/proj1
Note
To be able to use the showmount command from NFSv4 clients,
MOUNTD_PORT must be defined in /etc/sysconfig/nfs and a firewall rule
must allow access on this TCP port.
If you want to export or unexport directories without editing /etc/exports and restarting the NFS
service, use the exportfs command. The following example makes /var/dev available with read-only
access by all clients, and ignores any existing entries in /etc/exports.
# exportfs -i -o ro *:/var/dev
For more information, see the exportfs(8), exports(5), and showmount(8) manual pages.
2. Use showmount -e to discover what file systems an NFS server exports, for example:
# showmount -e host01.mydom.com
Export list for host01.mydom.com
/var/folder 192.0.2.102
/usr/local/apps *
/var/projects/proj1 192.168.1.0/24 mgmtpc
3. Use the mount command to mount an exported NFS file system on an available mount point:
# mount -t nfs -o ro,nosuid host01.mydom.com:/usr/local/apps /apps
To mount the file system automatically at boot time, add the corresponding entry to /etc/fstab:
host01.mydom.com:/usr/local/apps  /apps  nfs  ro,nosuid  0 0
For more information, see the mount(8), nfs(5), and showmount(8) manual pages.
2. Edit /etc/samba/smb.conf and configure the sections to support the required services, for example:
[global]
security = ADS
realm = MYDOM.REALM
password server = krbsvr.mydom.com
load printers = yes
printing = cups
printcap name = cups
[printers]
comment = All Printers
path = /var/spool/samba
browseable = no
guest ok = yes
writable = no
printable = yes
printer admin = root, @ntadmins, @smbprintadm
[homes]
comment = User home directories
valid users = @smbusers
browsable = no
writable = yes
guest ok = no
[apps]
comment = Shared /usr/local/apps directory
path = /usr/local/apps
browsable = yes
writable = no
guest ok = yes
The [global] section contains settings for the Samba server. In this example, the server is assumed
to be a member of an Active Directory (AD) domain that is running in native mode. Samba relies on
tickets issued by the Kerberos server to authenticate clients who want to access local services.
For more information, see Section 22.3.2, About Samba Configuration for Windows Workgroups and
Domains.
The [printers] section specifies support for print services. The path parameter specifies the
location of a spooling directory that receives print jobs from Windows clients before submitting them to
the local print spooler. Samba advertises all locally configured printers on the server.
The [homes] section provides a personal share for each user in the smbusers group. The settings for
browsable and writable prevent other users from browsing home directories, while allowing full
access to valid users.
The [apps] section specifies a share named apps, which grants Windows users browsing and read-only permission to the /usr/local/apps directory.
3. Configure the system firewall to allow incoming TCP connections to ports 139 and 445, and incoming
UDP datagrams on ports 137 and 138:
# firewall-cmd --zone=zone \
--add-port=139/tcp --add-port=445/tcp --add-port=137-138/udp
# firewall-cmd --permanent --zone=zone \
--add-port=139/tcp --add-port=445/tcp --add-port=137-138/udp
Add similar rules for other networks from which Samba clients can connect.
The nmbd daemon services NetBIOS Name Service requests on UDP port 137 and NetBIOS Datagram
Service requests on UDP port 138.
The smbd daemon services NetBIOS Session Service requests on TCP port 139 and Microsoft
Directory Service requests on TCP port 445.
4. Start the smb service, and configure the service to start following a system reboot:
# systemctl start smb
# systemctl enable smb
If you change the /etc/samba/smb.conf file and any files that it references, the smb service will
reload its configuration automatically after a delay of up to one minute. You can force smb to reload its
configuration by sending a SIGHUP signal to the service daemon:
# killall -SIGHUP smbd
Making smb reload its configuration has no effect on established connections. You must restart the smb
service or the existing users of the service must disconnect and then reconnect.
To restart the smb service, use the following command:
# systemctl restart smb
For more information, see the smb.conf(5) and smbd(8) manual pages and
http://www.samba.org/samba/docs/.
[global]
security = share
workgroup = workgroup_name
netbios name = netbios_name
The client provides only a password and not a user name to the server. Typically, each share is associated
with a valid users parameter and the server validates the password against the hashed passwords
stored in /etc/passwd, /etc/shadow, NIS, or LDAP for the listed users. Using share-level security is
discouraged in favor of user-level security, for example:
[global]
security = user
workgroup = workgroup_name
netbios name = netbios_name
In the user security model, a client must supply a valid user name and password. This model supports
encrypted passwords. If the server successfully validates the client's user name and password, the client
can mount multiple shares without being required to specify a password. Use the smbpasswd command to
create an entry for a user in the Samba password file, for example:
# smbpasswd -a guest
New SMB password: password
Retype new SMB password: password
Added user guest.
The user must already exist as a user on the system. If a user is permitted to log into the server, he or she
can use the smbpasswd command to change his or her password.
If a Windows user has a different user name from his or her user name on the Samba server, create a
mapping between the names in the /etc/samba/smbusers file, for example:
root = admin administrator root
nobody = guest nobody pcguest smbguest
eddie = ejones
fiona = fchau
The first entry on each line is the user name on the Samba server. The entries after the equals sign (=) are
the equivalent Windows user names.
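Because the mapping format is regular, it can be searched programmatically. The following sketch finds which Samba account a given Windows user name maps to, using the sample data from the example above:

```shell
# smbusers format: samba_user = windows_name [windows_name ...]
map='root = admin administrator root
nobody = guest nobody pcguest smbguest
eddie = ejones
fiona = fchau'
# Print the Samba user whose list of Windows names contains "ejones".
awk -v win='ejones' '{for (i = 3; i <= NF; i++) if ($i == win) print $1}' <<< "$map"
```

The loop starts at field 3 because field 1 is the Samba user name and field 2 is the equals sign.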
Note
Only the user security model uses Samba passwords.
The server security model, where the Samba server relies on another server to authenticate user names
and passwords, is deprecated as it has numerous security and interoperability issues.
security = ADS
realm = KERBEROS.REALM
It might also be necessary to specify the password server explicitly if different servers support AD
services and Kerberos authentication:
password server = kerberos_server.your_domain
3. Create a Kerberos ticket for the Administrator account in the Kerberos domain, for example:
# kinit [email protected]
This command creates the Kerberos ticket that is required to join the server to the AD domain.
4. Join the server to the AD domain:
# net ads join -S winads.mydom.com -U Administrator%password
In this example, the AD server is winads.mydom.com and password is the password for the
Administrator account.
The command creates a machine account in Active Directory for the Samba server and allows it to join
the domain.
5. Restart the smb service:
# systemctl restart smb
In this example, the primary domain controller is winpdc.mydom.com and password is the password
for the Administrator account.
4. Restart the smb service:
# systemctl restart smb
5. Create an account for each user who is allowed access to shares or printers:
# useradd -s /sbin/nologin username
# passwd username
In this example, the account's login shell is set to /sbin/nologin to prevent direct logins.
If you enter \\server_name, Windows displays the directories and printers that the server is sharing. You
can also use the same syntax to map a network drive to a share name.
After logging in, enter help at the smb:\> prompt to display a list of available commands.
To mount a Samba share, use a command such as the following:
# mount -t cifs //server_name/share_name mountpoint -o credentials=credfile
where the credentials file contains settings for username, password, and domain, for example:
username=eddie
password=clydenw
domain=MYDOMWKG
Caution
As the credentials file contains a plain-text password, use chmod to make it
readable only by you, for example:
# chmod 400 credfile
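One way to avoid the file ever being world-readable is to restrict permissions before the password is written, rather than afterwards. The following sketch uses a temporary file and sample credentials purely for illustration:

```shell
# Create the credentials file under a restrictive umask so it is never world-readable.
credfile=$(mktemp)          # placeholder path for illustration
umask 077
cat > "$credfile" <<'EOF'
username=eddie
password=clydenw
domain=MYDOMWKG
EOF
chmod 400 "$credfile"
stat -c '%a' "$credfile"    # show the resulting octal mode
```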
If the Samba server is a domain member server in an AD domain and your current login session was
authenticated by the Kerberos server in the domain, you can use your existing session credentials by
specifying the sec=krb5 option instead of a credentials file:
# mount -t cifs //server_name/share_name mountpoint -o sec=krb5
For more information, see the findsmb(1), mount.cifs(8), smbclient(1), and smbtree(1)
manual pages.
This chapter describes how to configure and use the Oracle Cluster File System Version 2 (OCFS2) file
system.
It is possible to configure and use OCFS2 without using a private network but such a configuration
increases the probability of a node fencing itself out of the cluster due to an I/O heartbeat timeout.
The command creates the configuration file /etc/ocfs2/cluster.conf if it does not already exist.
2. For each node, use the following command to define the node.
# o2cb add-node cluster_name node_name --ip ip_address
The name of the node must be the same as the value of the system's HOSTNAME that is configured in
/etc/sysconfig/network. The IP address is the one that the node will use for private communication in
the cluster.
For example, to define a node named node0 with the IP address 10.1.0.100 in the cluster mycluster:
# o2cb add-node mycluster node0 --ip 10.1.0.100
3. If you want the cluster to use global heartbeat devices, use the following commands.
# o2cb add-heartbeat cluster_name device1
.
.
.
# o2cb heartbeat-mode cluster_name global
Note
You must configure global heartbeat to use whole disk devices. You cannot
configure a global heartbeat device on a disk partition.
For example, to use /dev/sdd, /dev/sdg, and /dev/sdj as global heartbeat devices:
# o2cb add-heartbeat mycluster /dev/sdd
# o2cb add-heartbeat mycluster /dev/sdg
# o2cb add-heartbeat mycluster /dev/sdj
# o2cb heartbeat-mode mycluster global
4. Copy the cluster configuration file /etc/ocfs2/cluster.conf to each node in the cluster.
Note
Any changes that you make to the cluster configuration file do not take effect
until you restart the cluster stack.
The following sample configuration file /etc/ocfs2/cluster.conf defines a 4-node cluster named
mycluster with a local heartbeat.
node:
name = node0
cluster = mycluster
number = 0
ip_address = 10.1.0.100
ip_port = 7777
node:
name = node1
cluster = mycluster
number = 1
ip_address = 10.1.0.101
ip_port = 7777
node:
name = node2
cluster = mycluster
number = 2
ip_address = 10.1.0.102
ip_port = 7777
node:
name = node3
cluster = mycluster
number = 3
ip_address = 10.1.0.103
ip_port = 7777
cluster:
name = mycluster
heartbeat_mode = local
node_count = 4
If you configure your cluster to use a global heartbeat, the file also includes entries for the global heartbeat
devices.
node:
name = node0
cluster = mycluster
number = 0
ip_address = 10.1.0.100
ip_port = 7777
node:
name = node1
cluster = mycluster
number = 1
ip_address = 10.1.0.101
ip_port = 7777
node:
name = node2
cluster = mycluster
number = 2
ip_address = 10.1.0.102
ip_port = 7777
node:
name = node3
cluster = mycluster
number = 3
ip_address = 10.1.0.103
ip_port = 7777
cluster:
name = mycluster
heartbeat_mode = global
node_count = 4
heartbeat:
cluster = mycluster
region = 7DA5015346C245E6A41AA85E2E7EA3CF
heartbeat:
cluster = mycluster
region = 4F9FBB0D9B6341729F21A8891B9A05BD
heartbeat:
cluster = mycluster
region = B423C7EEE9FC426790FC411972C91CC3
The cluster heartbeat mode is now shown as global, and the heartbeat regions are represented by the
UUIDs of their block devices.
If you edit the configuration file manually, ensure that you use the following layout:
The cluster:, heartbeat:, and node: headings must start in the first column.
Each parameter entry must be indented by one tab space.
A blank line must separate each section that defines the cluster, a heartbeat device, or a node.
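The layout rules can be demonstrated with a small stanza generator. The node names and IP addresses below are placeholders, and the output follows the format described above (headings in the first column, parameters indented by one tab, a blank line between sections):

```shell
# Emit node: stanzas for cluster.conf in the required layout.
i=0
for ip in 10.1.0.100 10.1.0.101; do
  printf 'node:\n\tname = node%d\n\tcluster = mycluster\n\tnumber = %d\n\tip_address = %s\n\tip_port = 7777\n\n' \
    "$i" "$i" "$ip"
  i=$((i + 1))
done
```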
The following table describes the values for which you are prompted.
Prompt: Cluster to start at boot (Enter "none" to clear)
    Enter the name of your cluster that you defined in the cluster configuration file,
    /etc/ocfs2/cluster.conf.
Prompt: Specify heartbeat dead threshold (>=7)
    The number of 2-second heartbeats that must elapse without response before a node
    is considered dead. To calculate the value to enter, divide the required threshold
    time period by 2 and add 1. For example, to set the threshold time period to 120
    seconds, enter a value of 61. The default value is 31, which corresponds to a
    threshold time period of 60 seconds.
    Note: If your system uses multipathed storage, the recommended value is 61 or
    greater.
Prompt: Specify network idle timeout in ms (>=5000)
    The time in milliseconds that must elapse before a network connection is
    considered dead. The default value is 30,000 milliseconds.
    Note: For bonded network interfaces, the recommended value is 30,000 milliseconds
    or greater.
Prompt: Specify network keepalive delay in ms (>=1000)
    The maximum delay in milliseconds between sending keepalive packets to another
    node. The default and recommended value is 2,000 milliseconds.
Prompt: Specify network reconnect delay in ms (>=2000)
    The minimum delay in milliseconds between reconnection attempts if a network
    connection goes down. The default and recommended value is 2,000 milliseconds.
To verify the settings for the cluster stack, enter the systemctl status o2cb command:
# systemctl status o2cb
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "mycluster": Online
Heartbeat dead threshold: 61
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Heartbeat mode: Local
Checking O2CB heartbeat: Active
In this example, the cluster is online and is using local heartbeat mode. If no volumes have been
configured, the O2CB heartbeat is shown as Not active rather than Active.
The next example shows the command output for an online cluster that is using three global heartbeat
devices:
# systemctl status o2cb
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "mycluster": Online
Heartbeat dead threshold: 61
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Heartbeat mode: Global
Checking O2CB heartbeat: Active
  7DA5015346C245E6A41AA85E2E7EA3CF /dev/sdd
  4F9FBB0D9B6341729F21A8891B9A05BD /dev/sdg
  B423C7EEE9FC426790FC411972C91CC3 /dev/sdj
2. Configure the o2cb and ocfs2 services so that they start at boot time after networking is enabled:
# systemctl enable o2cb
# systemctl enable ocfs2
These settings allow the node to mount OCFS2 volumes automatically when the system starts.
Parameter      Description
panic          Specifies the number of seconds after a panic before a system will
               automatically reset itself.
               If the value is 0, the system hangs, which allows you to collect detailed
               information about the panic for troubleshooting. This is the default value.
               To enable automatic reset, set a non-zero value. If you require a memory
               image (vmcore), allow enough time for Kdump to create this image. The
               suggested value is 30 seconds, although large systems will require a longer
               time.
panic_on_oops  Specifies that a system must panic if a kernel oops occurs. If a kernel
               thread required for cluster operation crashes, the system must reset itself.
               Otherwise, another node might not be able to tell whether a node is slow to
               respond or unable to respond, causing cluster operations to hang.
On each node, enter the following commands to set the recommended values for panic and
panic_on_oops:
# sysctl kernel.panic=30
# sysctl kernel.panic_on_oops=1
To make the change persist across reboots, add the following entries to the /etc/sysctl.conf file:
kernel.panic = 30
kernel.panic_on_oops = 1
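You can confirm the running values directly from /proc, which needs no special tooling or root privileges. A read-only sketch:

```shell
# Read the current values of the panic parameters from /proc/sys.
echo "kernel.panic = $(cat /proc/sys/kernel/panic)"
echo "kernel.panic_on_oops = $(cat /proc/sys/kernel/panic_on_oops)"
```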
Command                     Description
/etc/init.d/o2cb online     Brings the cluster online.
/etc/init.d/o2cb offline    Takes the cluster offline.
/etc/init.d/o2cb unload     Unloads the cluster stack modules from the kernel.
-b block-size, --block-size block-size
    Specifies the unit size for I/O transactions to and from the file system, and the
    size of inode and extent blocks. The supported block sizes are 512 (512 bytes),
    1K, 2K, and 4K. The default and recommended block size is 4K (4 kilobytes).
-C cluster-size, --cluster-size cluster-size
    Specifies the unit size for space used to allocate file data. The supported
    cluster sizes are 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, and 1M (1 megabyte).
    The default cluster size is 4K (4 kilobytes).
--fs-feature-level=feature-level
    Allows you to select a set of file-system features: default, max-compat, or
    max-features.
--fs_features=feature
    Allows you to enable or disable individual file-system features, such as
    refcount trees.
-J size=journal-size, --journal-options size=journal-size
    Specifies the size of the write-ahead journal. If not specified, the size is
    determined from the file system usage type that you specify to the -T option,
    and, otherwise, from the volume size. The default size of the journal is 64M
    (64 MB) for datafiles, 256M (256 MB) for mail, and 128M (128 MB) for vmstore.
-L volume-label, --label volume-label
    Specifies a descriptive name for the volume that allows you to identify it
    easily on different cluster nodes.
-N number, --node-slots number
    Determines the maximum number of nodes that can concurrently access the volume.
    The default is eight node slots.
-T file-system-usage-type
    Specifies the type of usage for the file system: datafiles, mail, or vmstore.
For example, create an OCFS2 volume on /dev/sdc1 labeled as myvol using all the default settings for
generic usage on file systems that are no larger than a few gigabytes. The default values are a 4 KB block
and cluster size, eight node slots, a 256 MB journal, and support for default file-system features.
# mkfs.ocfs2 -L "myvol" /dev/sdc1
Create an OCFS2 volume on /dev/sdd2 labeled as dbvol for use with database files. In this case, the
cluster size is set to 128 KB and the journal size to 32 MB.
# mkfs.ocfs2 -L "dbvol" -T datafiles /dev/sdd2
Create an OCFS2 volume on /dev/sde1 with a 16 KB cluster size, a 128 MB journal, 16 node slots, and
support enabled for all features except refcount trees.
# mkfs.ocfs2 -C 16K -J size=128M -N 16 --fs-feature-level=max-features \
--fs-features=norefcount /dev/sde1
Note
Do not create an OCFS2 volume on an LVM logical volume. LVM is not cluster-aware.
You cannot change the block and cluster size of an OCFS2 volume after it
has been created. You can use the tunefs.ocfs2 command to modify other
settings for the file system with certain restrictions. For more information, see the
tunefs.ocfs2(8) manual page.
If you intend the volume to store database files, do not specify a cluster size that is
smaller than the block size of the database.
The default cluster size of 4 KB is not suitable if the file system is larger than a few
gigabytes. The following table suggests minimum cluster size settings for different
file system size ranges:
File System Size    Suggested Minimum Cluster Size
1 GB - 10 GB        8K
10 GB - 100 GB      16K
100 GB - 1 TB       32K
1 TB - 10 TB        64K
10 TB - 16 TB       128K
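The table translates into a simple helper. The thresholds below mirror the table, with sizes expressed in gigabytes; this is an illustrative sketch, not a supported tool:

```shell
# Suggest a minimum OCFS2 cluster size for a file system size given in GB.
suggest_cluster_size() {
  local gb=$1
  if   [ "$gb" -le 10 ];    then echo 8K
  elif [ "$gb" -le 100 ];   then echo 16K
  elif [ "$gb" -le 1024 ];  then echo 32K
  elif [ "$gb" -le 10240 ]; then echo 64K
  else                           echo 128K
  fi
}
suggest_cluster_size 500    # a 500 GB file system falls in the 100 GB - 1 TB band
```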
For example, the following /etc/fstab entry mounts an OCFS2 volume at /dbvol1; the _netdev option
ensures that the system mounts the file system only after networking has been started:
device  /dbvol1  ocfs2  _netdev,defaults  0 0
Note
The file system will not mount unless you have enabled the o2cb and ocfs2
services to start after networking is started. See Section 23.2.5, Configuring the
Cluster Stack.
You can use the debugfs.ocfs2 command, which is similar in behavior to the debugfs command for
the ext3 file system, and allows you to trace events in the OCFS2 driver, determine lock statuses, walk
directory structures, examine inodes, and so on.
For more information, see the debugfs.ocfs2(8) manual page.
The o2image command saves an OCFS2 file system's metadata (including information about inodes,
file names, and directory names) to an image file on another file system. As the image file contains only
metadata, it is much smaller than the original file system. You can use debugfs.ocfs2 to open the image
file, and analyze the file system layout to determine the cause of a file system corruption or performance
problem.
For example, the following command creates the image /tmp/sda2.img from the OCFS2 file system on
the device /dev/sda2:
# o2image /dev/sda2 /tmp/sda2.img
To mount the debugfs file system automatically at boot time, add the following line to /etc/fstab:
debugfs  /sys/kernel/debug  debugfs  defaults  0 0
Command                           Description
debugfs.ocfs2 -l                  Lists all trace bits and their statuses.
debugfs.ocfs2 -l HEARTBEAT allow  Enables tracing for the heartbeat trace bit.
debugfs.ocfs2 -l HEARTBEAT off    Disables tracing for the heartbeat trace bit.
One method for obtaining a trace is to enable the trace, sleep for a short while, and then disable the trace.
As shown in the following example, to avoid seeing unnecessary output, you should reset the trace bits to
their default settings after you have finished.
# debugfs.ocfs2 -l ENTRY EXIT NAMEI INODE allow && sleep 10 && \
debugfs.ocfs2 -l ENTRY EXIT deny NAMEI INODE off
To limit the amount of information displayed, enable only the trace bits that you believe are relevant to
understanding the problem.
If you believe a specific file system command, such as mv, is causing an error, the following example
shows the commands that you can use to help you trace the error.
# debugfs.ocfs2 -l ENTRY EXIT NAMEI INODE allow
# mv source destination & CMD_PID=$(jobs -p %-)
# echo $CMD_PID
# debugfs.ocfs2 -l ENTRY EXIT deny NAMEI INODE off
As the trace is enabled for all mounted OCFS2 volumes, knowing the correct process ID can help you to
interpret the trace.
For more information, see the debugfs.ocfs2(8) manual page.
2. Dump the lock statuses for the file system device (/dev/sdx1 in this example).
# echo "fs_locks" | debugfs.ocfs2 /dev/sdx1 >/tmp/fslocks
Lockres: M00000000000006672078b84822 Mode: Protected Read
Flags: Initialized Attached
RO Holders: 0 EX Holders: 0
Pending Action: None Pending Unlock Action: None
Requested Mode: Protected Read Blocking Mode: Invalid
The Lockres field is the lock name used by the DLM. The lock name is a combination of a lock-type
identifier, an inode number, and a generation number. The following table shows the possible lock
types.
Identifier    Lock Type
D             File data.
M             Metadata.
R             Rename.
S             Superblock.
W             Read-write.
3. Use the Lockres value to obtain the inode number and generation number for the lock.
# echo "stat <M00000000000006672078b84822>" | debugfs.ocfs2 -n /dev/sdx1
Inode: 419616
Mode: 0666
Generation: 2025343010 (0x78b84822)
...
4. Determine the file system object to which the inode number relates by using the following command.
# echo "locate <419616>" | debugfs.ocfs2 -n /dev/sdx1
419616 /linux-2.6.15/arch/i386/kernel/semaphore.c
5. Obtain the lock names that are associated with the file system object.
# echo "encode /linux-2.6.15/arch/i386/kernel/semaphore.c" | \
debugfs.ocfs2 -n /dev/sdx1
M00000000000006672078b84822 D00000000000006672078b84822 W00000000000006672078b84822
In this example, a metadata lock, a file data lock, and a read-write lock are associated with the file
system object.
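Because a lock name encodes the type, inode number, and generation number, it can be decoded without debugfs.ocfs2. The following sketch assumes the layout shown in steps 2 and 3 (a one-letter type identifier, a zero-padded inode number in hexadecimal, then an eight-digit hexadecimal generation number):

```shell
# Decode an OCFS2 lock name into its type, inode number, and generation.
lock='M00000000000006672078b84822'
type=${lock:0:1}                   # first character: lock type identifier
gen=${lock: -8}                    # last 8 hex digits: generation number
inode_hex=${lock:1:${#lock}-9}     # the remainder: inode number in hex
printf 'type=%s inode=%d generation=0x%s\n' "$type" "$((16#$inode_hex))" "$gen"
```

For the lock name above, this yields inode 419616 and generation 0x78b84822, matching the values that debugfs.ocfs2 reports in step 3.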
6. Determine the DLM domain of the file system.
# echo "stats" | debugfs.ocfs2 -n /dev/sdx1 | grep UUID: | while read a b ; do echo $b ; done
82DA8137A49A47E4B187F74E09FBBB4B
7. Use the values of the DLM domain and the lock name with the following command, which enables
debugging for the DLM.
# echo R 82DA8137A49A47E4B187F74E09FBBB4B \
M00000000000006672078b84822 > /proc/fs/ocfs2_dlm/debug
The DLM supports 3 lock modes: no lock (type=0), protected read (type=3), and exclusive (type=5).
In this example, the lock is mastered by node 1 (owner=1) and node 3 has been granted a
protected-read lock on the file-system resource.
9. Run the following command, and look for processes that are in an uninterruptible sleep state as shown
by the D flag in the STAT column.
# ps -e -o pid,stat,comm,wchan=WIDE-WCHAN-COLUMN
At least one of the processes that are in the uninterruptible sleep state will be responsible for the hang
on the other node.
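The ps output can be filtered so that only D-state rows remain. The sample input below is illustrative; the same awk filter works when fed live ps output:

```shell
# Keep the header line plus any rows whose STAT column begins with D.
ps_sample='  PID STAT COMMAND         WIDE-WCHAN-COLUMN
   10 D    jbd2/sda2-8     wait_on_buffer
   11 Ss   bash            do_wait'
awk 'NR == 1 || $2 ~ /^D/' <<< "$ps_sample"
```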
If a process is waiting for I/O to complete, the problem could be anywhere in the I/O subsystem from
the block device layer through the drivers to the disk array. If the hang concerns a user lock (flock()),
the problem could lie in the application. If possible, kill the holder of the lock. If the hang is due to lack of
memory or fragmented memory, you can free up memory by killing non-essential processes. The most
immediate solution is to reset the node that is holding the lock. The DLM recovery process can then clear
all the locks that the dead node owned, letting the cluster continue to operate.
where cluster_name is the name of the cluster. To set the value after each reboot of the system, add
this line to /etc/rc.local. To restore the default behavior, use the value reset instead of panic.
Oracle Databases
As both CSS and O2CB use the lowest node number as a tie breaker in quorum calculations, you should
ensure that the node numbers are the same in both clusters. If necessary, edit the O2CB configuration file
/etc/ocfs2/cluster.conf to make the node numbering consistent, and update this file on all nodes.
The change takes effect when the cluster is restarted.
Table of Contents
24 Authentication Configuration .......................................................................................................
24.1 About Authentication .......................................................................................................
24.2 About Local Oracle Linux Authentication ..........................................................................
24.2.1 Configuring Local Access .....................................................................................
24.2.2 Configuring Fingerprint Reader Authentication .......................................................
24.2.3 Configuring Smart Card Authentication ..................................................................
24.3 About IPA Authentication .................................................................................................
24.3.1 Configuring IPA Authentication ..............................................................................
24.4 About LDAP Authentication .............................................................................................
24.4.1 About LDAP Data Interchange Format ..................................................................
24.4.2 Configuring an LDAP Server .................................................................................
24.4.3 Replacing the Default Certificates .........................................................................
24.4.4 Creating and Distributing Self-signed CA Certificates .............................................
24.4.5 Initializing an Organization in LDAP ......................................................................
24.4.6 Adding an Automount Map to LDAP .....................................................................
24.4.7 Adding a Group to LDAP ......................................................................................
24.4.8 Adding a User to LDAP ........................................................................................
24.4.9 Adding Users to a Group in LDAP ........................................................................
24.4.10 Enabling LDAP Authentication ............................................................................
24.5 About NIS Authentication ................................................................................................
24.5.1 About NIS Maps ..................................................................................................
24.5.2 Configuring an NIS Server ....................................................................................
24.5.3 Adding User Accounts to NIS ...............................................................................
24.5.4 Enabling NIS Authentication .................................................................................
24.6 About Kerberos Authentication ........................................................................................
24.6.1 Configuring a Kerberos Server ..............................................................................
24.6.2 Configuring a Kerberos Client ...............................................................................
24.6.3 Enabling Kerberos Authentication ..........................................................................
24.7 About Pluggable Authentication Modules ..........................................................................
24.7.1 Configuring Pluggable Authentication Modules .......................................................
24.8 About the System Security Services Daemon ...................................................................
24.8.1 Configuring an SSSD Server ................................................................................
24.9 About Winbind Authentication ..........................................................................................
24.9.1 Enabling Winbind Authentication ...........................................................................
25 Local Account Configuration .......................................................................................................
25.1 About User and Group Configuration ...............................................................................
25.2 Changing Default Settings for User Accounts ...................................................................
25.3 Creating User Accounts ..................................................................................................
25.3.1 About umask and the setgid and Restricted Deletion Bits .......................................
25.4 Locking an Account ........................................................................................................
25.5 Modifying or Deleting User Accounts ...............................................................................
25.6 Creating Groups .............................................................................................................
25.7 Modifying or Deleting Groups ..........................................................................................
25.8 Configuring Password Ageing ..........................................................................................
25.9 Granting sudo Access to Users .......................................................................................
26 System Security Administration ..................................................................................................
26.1 About System Security ....................................................................................................
26.2 Configuring and Using SELinux .......................................................................................
26.2.1 About SELinux Administration ...............................................................................
26.2.2 About SELinux Modes ..........................................................................................
26.2.3 Setting SELinux Modes ........................................................................................
This chapter describes how to configure various authentication methods that Oracle Linux can use,
including NIS, LDAP, Kerberos, and Winbind, and how you can configure the System Security Services
Daemon feature to provide centralized identity and authentication management.
Directory Access Protocol (LDAP), the Network Information Service (NIS), or Winbind. In addition, IPAv2,
LDAP, and NIS data files can use the Kerberos authentication protocol, which allows nodes communicating
over a non-secure network to prove their identity to one another in a secure manner.
You can use the Authentication Configuration GUI (system-config-authentication) to select the
authentication mechanism and to configure any associated authentication options. Alternatively, you can
use the authconfig command. Both the Authentication Configuration GUI and authconfig adjust
settings in the PAM configuration files that are located in the /etc/pam.d directory. The Authentication
Configuration GUI is available if you install the authconfig-gtk package.
Figure 24.1 shows the Authentication Configuration GUI with Local accounts only selected.
Figure 24.1 Authentication Configuration of Local Accounts
The /etc/passwd file stores account information for each user, such as his or her unique user ID (or UID,
which is an integer), user name, home directory, and login shell. A user logs in using a user name,
but the operating system uses the associated UID. When the user logs in, he or she is placed in the
home directory and the login shell runs.
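Each line in /etc/passwd is a colon-separated record of seven fields. The following sketch, which uses a sample entry rather than a real account, shows how the UID and login shell fields can be extracted with awk:

```shell
# Sample record: name:password:UID:GID:comment:home-directory:login-shell
entry='arc815:x:5159:626:John Beck:/nethome/arc815:/bin/bash'

# Field 3 is the UID and field 7 is the login shell
uid=$(printf '%s\n' "$entry" | awk -F: '{print $3}')
login_shell=$(printf '%s\n' "$entry" | awk -F: '{print $7}')
echo "UID=$uid SHELL=$login_shell"
```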
The /etc/group file stores information about groups of users. A user can belong to one or more
groups, and each group can contain one or more users. If you grant access privileges to a group, all
members of the group receive the same access privileges. Each group account has a unique group ID
(GID, again an integer) and an associated group name.
By default, Oracle Linux implements the user private group (UPG) scheme where adding a user account
also creates a corresponding UPG with the same name as the user, and of which the user is the only
member.
Only the root user can add, modify, or delete user and group accounts. By default, both users and groups
use shadow passwords, which are cryptographically hashed and stored in /etc/shadow and /etc/
gshadow respectively. These shadow password files are readable only by the root user. root can set a
group password that a user must enter to become a member of the group by using the newgrp command.
If a group does not have a password, a user can join the group only if root adds him or her as a
member.
The /etc/login.defs file defines parameters for password aging and related security policies.
For more information about the content of these files, see the group(5), gshadow(5), login.defs(5),
passwd(5), and shadow(5) manual pages.
where:
permission
users
origins
For example, the following rule denies login access by anyone except root from the network
192.168.2.0/24:
- : ALL except root : 192.168.2.0/24
For more information, see the access.conf(5) manual page and Chapter 25, Local Account
Configuration.
2. Use the following command to install the root CA certificates in the NSS database:
# certutil -A -d /etc/pki/nssdb -t "TC,C,C" -n "Root CA certificates" -i CACert.pem
4. On the Advanced Options tab, select the Enable smart card support check box.
5. If you want to disable all other login authentication methods, select the Require smart card for login
check box.
Caution
Do not select this option until you have tested that you can use a smart card to
authenticate with the system.
6. From the Card removal action menu, select the system's response if a user removes a smart card
while logged in to a session:
Ignore
Lock
You can also use the following command to configure smart card authentication:
# authconfig --enablesmartcard --update
To specify the system's response if a user removes a smart card while logged in to a session:
# authconfig --smartcardaction=0|1 --update
Specify a value of 0 to --smartcardaction to lock the system if a card is removed. To ignore card
removal, use a value of 1.
Once you have tested that you can use a smart card to authenticate with the system, you can disable all
other login authentication methods.
# authconfig --enablerequiresmartcard --update
or more values. Examples of types are domain component (dc), common name (cn), organizational unit
(ou) and email address (mail). The objectClass attribute allows you to specify whether an attribute
is required or optional. An objectClass attribute's value specifies the schema rules that an entry must
obey.
A distinguished name (dn) uniquely identifies an entry in LDAP. The distinguished name consists of the
name of the entry (the relative distinguished name or RDN) concatenated with the names of its ancestor
entries in the LDAP directory hierarchy. For example, the distinguished name of a user with the RDN
uid=arc815 might be uid=arc815,ou=staff,dc=mydom,dc=com.
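The hierarchy encoded in a DN can be seen by splitting it on commas. The sketch below uses the example DN from the text; the values are illustrative:

```shell
# Example DN: the entry's own RDN first, then each ancestor in turn
dn='uid=arc815,ou=staff,dc=mydom,dc=com'
printf '%s\n' "$dn" | tr ',' '\n'

# The first component is the entry's relative distinguished name (RDN)
rdn=$(printf '%s\n' "$dn" | cut -d, -f1)
```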
The following are examples of information stored in LDAP for a user:
# User arc815
dn: uid=arc815,ou=People,dc=mydom,dc=com
cn: John Beck
givenName: John
sn: Beck
uid: arc815
uidNumber: 5159
gidNumber: 626
homeDirectory: /nethome/arc815
loginShell: /bin/bash
mail: [email protected]
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword: {SSHA}QYrFtKkqOrifgk8H4EYf68B0JxIIaLga
The optional id number is determined by the application that you use to edit the entry. Each attribute type
for an entry contains either a value or a comma-separated list of attribute and value pairs as defined in the
LDAP directory schema.
There must be a blank line between each dn definition section or include: line. There must not be any
other blank lines or any white space at the ends of lines. White space at the start of a line indicates a
continuation of the previous line.
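The folding rule described above, where leading white space continues the previous line, can be demonstrated with a small awk sketch that unfolds a wrapped LDIF line:

```shell
# A line that starts with a space continues the previous line;
# join such folded lines back together before processing.
unfolded=$(printf 'dn: uid=arc815,ou=People,\n dc=mydom,dc=com\n' |
  awk '/^ / { sub(/^ /, ""); prev = prev $0; next }
       { if (prev != "") print prev; prev = $0 }
       END { print prev }')
echo "$unfolded"
```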
slapd.d/cn=config.ldif
slapd.d/cn=config/*.ldif
slapd.d/cn=config/cn=schema/*.ldif
Note
You should never need to edit any files under /etc/openldap/slapd.d as
you can reconfigure OpenLDAP while the slapd service is running.
2. If you want to configure slapd to listen on port 636 for connections over an SSL tunnel (ldaps://), edit
/etc/sysconfig/slapd, and change the value of SLAPD_LDAPS to yes:
SLAPD_LDAPS=yes
If required, you can prevent slapd from listening on port 389 for ldap:// connections by changing the
value of SLAPD_LDAP to no:
SLAPD_LDAP=no
Ensure that you also define the correct SLAPD_URLS for the ports that are enabled. For instance, if
you intend to use SSL and you wish slapd to listen on port 636, you must specify ldaps:// as one of
the supported URLS. For example:
SLAPD_URLS="ldapi:/// ldap:/// ldaps:///"
3. Configure the system firewall to allow incoming TCP connections on port 389, for example:
# firewall-cmd --zone=zone --add-port=389/tcp
# firewall-cmd --permanent --zone=zone --add-port=389/tcp
The primary TCP port for LDAP is 389. If you configure LDAP to use an SSL tunnel (ldaps), substitute
the port number that the tunnel uses, which is usually 636, for example:
# firewall-cmd --zone=zone --add-port=636/tcp
# firewall-cmd --permanent --zone=zone --add-port=636/tcp
4. Change the user and group ownership of /var/lib/ldap and any files that it contains to ldap:
# cd /var/lib/ldap
# chown ldap:ldap ./*
5. Start the slapd service and configure it to start following system reboots:
# systemctl start slapd
# systemctl enable slapd
300
6. Generate a hash of the LDAP password that you will use with the olcRootPW entry in the configuration
file for your domain database, for example:
# slappasswd -h {SSHA}
New password: password
Re-enter new password: password
{SSHA}lkMShz73MZBic19Q4pfOaXNxpLN3wLRy
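The {SSHA} scheme stores the base64 encoding of the SHA-1 digest of the password concatenated with a random salt, followed by the salt itself. The following sketch illustrates that structure with an arbitrary password and a fixed 4-byte salt; a real implementation such as slappasswd uses a random salt:

```shell
# {SSHA} value = base64( SHA1(password + salt) + salt )
password='secret'
salt='abcd'
ssha=$(
  { printf '%s%s' "$password" "$salt" | openssl dgst -sha1 -binary
    printf '%s' "$salt"
  } | base64
)
# The decoded value is 24 bytes: a 20-byte SHA-1 digest plus the 4-byte salt
printf '%s' "$ssha" | base64 -d | wc -c
```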
7. Create an LDIF file with a name such as config-mydom-com.ldif that contains configuration
entries for your domain database based on the following example:
# Load the schema files required for accounts
include file:///etc/openldap/schema/cosine.ldif
include file:///etc/openldap/schema/nis.ldif
include file:///etc/openldap/schema/inetorgperson.ldif
# Load the HDB (hierarchical database) backend modules
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/lib64/openldap
olcModuleload: back_hdb
# Configure the database settings
dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {1}hdb
olcSuffix: dc=mydom,dc=com
# The database directory must already exist
# and it should only be owned by ldap:ldap.
# Setting its mode to 0700 is recommended
olcDbDirectory: /var/lib/ldap
olcRootDN: cn=admin,dc=mydom,dc=com
olcRootPW: {SSHA}lkMShz73MZBic19Q4pfOaXNxpLN3wLRy
olcDbConfig: set_cachesize 0 10485760 0
olcDbConfig: set_lk_max_objects 2000
olcDbConfig: set_lk_max_locks 2000
olcDbConfig: set_lk_max_lockers 2000
olcDbIndex: objectClass eq
olcLastMod: TRUE
olcDbCheckpoint: 1024 10
# Set up access control
olcAccess: to attrs=userPassword
  by dn="cn=admin,dc=mydom,dc=com" write
  by anonymous auth
  by self write
  by * none
olcAccess: to attrs=shadowLastChange
  by self write
  by * read
olcAccess: to dn.base=""
  by * read
olcAccess: to *
  by dn="cn=admin,dc=mydom,dc=com" write
  by * read
Note
This configuration file allows you to reconfigure slapd while it is running. If you
use a slapd.conf configuration file, you can also update slapd dynamically,
but such changes do not persist if you restart the server.
For more information, see the slapd-config(5) manual page.
8. Use the ldapadd command to add the LDIF file:
# ldapadd -Y EXTERNAL -H ldapi:/// -f config-mydom-com.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=module,cn=config"
adding new entry "olcDatabase=hdb,cn=config"
For more information about configuring OpenLDAP, see the slapadd(8C), slapd(8C), slapd-config(5),
and slappasswd(8C) manual pages, the OpenLDAP Administrator's Guide (/usr/share/
doc/openldap-servers-version/guide.html), and the latest OpenLDAP documentation at http://
www.openldap.org/doc/.
dn: cn=config
changetype: modify
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/server-cert.pem
dn: cn=config
changetype: modify
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/certs/server-key.pem
dn: cn=config
changetype: modify
add: olcTLSCipherSuite
olcTLSCipherSuite: TLSv1+RSA:!NULL
dn: cn=config
changetype: modify
add: olcTLSVerifyClient
olcTLSVerifyClient: never
If you generate only a self-signed certificate and its corresponding key file, you do not need to specify a
root CA certificate.
2. Use the ldapmodify command to apply the LDIF file:
# ldapmodify -Y EXTERNAL -H ldapi:/// -f mod-TLS.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"
modifying entry "cn=config"
modifying entry "cn=config"
...
For more information, see the ldapmodify(1), ldapsearch(1) and openssl(1) manual pages.
The following procedure describes how to use openssl to create a self-signed CA certificate and private
key file, and then use these files to sign server certificates.
To create the CA certificate and use it to sign a server certificate:
1. Change directory to /etc/openldap/certs on the LDAP server:
# cd /etc/openldap/certs
Note
If you intend to generate server certificates for several servers, name the
certificate, its key file, and the certificate request so that you can easily
identify both the server and the service, for example, ldap_host02-cert.pem, ldap_host02-key.pem, and ldap_host02-cert.csr.
b. Change the mode on the key file to 0400, and change its user and group ownership to ldap:
# chmod 0400 server-key.pem
# chown ldap:ldap server-key.pem
Note
For the Common Name, specify the Fully Qualified Domain Name (FQDN)
of the server. If the FQDN of the server does not match the common name
specified in the certificate, clients cannot obtain a connection to the server.
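You can confirm the Common Name in a certificate with openssl x509 -noout -subject. The following sketch creates a throwaway self-signed certificate and reads its subject back; the FQDN ldap.mydom.com and the /tmp file names are placeholders:

```shell
# Create a short-lived self-signed certificate whose CN is the server FQDN
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj '/CN=ldap.mydom.com' 2>/dev/null

# Read the subject back to verify the Common Name
subject=$(openssl x509 -in /tmp/demo-cert.pem -noout -subject)
echo "$subject"
```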
d. Use the CA certificate and its corresponding key file to sign the certificate request and generate the
server certificate:
# openssl x509 -req -days 1095 -CAcreateserial \
-in server-cert.csr -CA CAcert.pem -CAkey CAcert-key.pem \
-out server-cert.pem
Signature ok
subject=/C=US/ST=California/L=Redwood City/O=Mydom Inc/OU=Org/CN=ldap.mydom.com/[email protected]
Getting CA Private Key
7. If you generate server certificates for other LDAP servers, copy the appropriate server certificate, its
corresponding key file, and the CA certificate to /etc/openldap/certs on those servers.
8. Set up a web server to host the CA certificate for access by clients. The following steps assume that
the LDAP server performs this function. You can use any suitable, alternative server instead.
a. Install the Apache HTTP server.
# yum install httpd
Caution
Do not copy the key files.
d. Edit the HTTP server configuration file, /etc/httpd/conf/httpd.conf, and specify the
resolvable domain name of the server in the argument to ServerName.
ServerName server_addr:80
If the server does not have a resolvable domain name, enter its IP address instead.
Verify that the setting of the Options directive in the <Directory "/var/www/html"> section
specifies Indexes and FollowSymLinks to allow you to browse the directory hierarchy, for
example:
Options Indexes FollowSymLinks
e. Start the Apache HTTP server, and configure it to start after a reboot.
# systemctl start httpd
# systemctl enable httpd
f. If you have enabled the firewall on your system, configure it to allow incoming HTTP connection
requests on TCP port 80, for example:
# firewall-cmd --zone=zone --add-port=80/tcp
# firewall-cmd --permanent --zone=zone --add-port=80/tcp
ou: groups
2. If you have configured LDAP authentication, use the ldapadd command to add the organization to
LDAP:
# ldapadd -cxWD "cn=admin,dc=mydom,dc=com" -f mydom-com-organization.ldif
Enter LDAP Password: admin_password
adding new entry "dc=mydom,dc=com"
adding new entry "ou=People,dc=mydom,dc=com"
adding new entry "ou=Groups,dc=mydom,dc=com"
If you have configured Kerberos authentication, use kinit to obtain a ticket granting ticket (TGT) for
the admin principal, and use this form of the ldapadd command:
# ldapadd -f mydom-com-organization.ldif
where nfssvr is the host name or IP address of the NFS server that exports the users' home
directories.
2. If you have configured LDAP authentication, use the following command to add the map to LDAP:
# ldapadd -xcWD "cn=admin,dc=mydom,dc=com" \
-f auto-home.ldif
Enter LDAP Password: user_password
adding new entry "nisMapName=auto.home,dc=mydom,dc=com"
adding new entry "cn=*,nisMapName=auto.home,dc=mydom,dc=com"
If you have configured Kerberos authentication, use kinit to obtain a ticket granting ticket (TGT) for
the admin principal, and use this form of the command:
# ldapmodify -f auto-home.ldif
objectClass: top
objectClass: nisMap
nisMapName: auto.home
dn: cn=*,nisMapName=auto.home,dc=mydom,dc=com
objectClass: nisObject
cn: *
nisMapEntry: -rw,sync nfssvr.mydom.com:/nethome/&
nisMapName: auto.home
2. If you have configured LDAP authentication, use the following command to add the group to LDAP:
# ldapadd -cxWD "cn=admin,dc=mydom,dc=com" -f employees-group.ldif
Enter LDAP Password: admin_password
adding new entry "cn=employees,ou=Groups,dc=mydom,dc=com"
If you have configured Kerberos authentication, use kinit to obtain a ticket granting ticket (TGT) for
the admin principal, and use this form of the ldapadd command:
# ldapadd -f employees-group.ldif
For more information, see the ldapadd(1) and ldapsearch(1) manual pages.
1. If the LDAP server does not already export the base directory of the users' home directories, perform
the following steps on the LDAP server:
a. Create the base directory for user directories, for example /nethome:
# mkdir /nethome
b. Edit /etc/exports and add an entry that exports the base directory, for example:
/nethome    *(rw,sync)
You might prefer to restrict which clients can mount the file system. For example, the following entry
allows only clients in the 192.168.1.0/24 subnet to mount /nethome:
/nethome    192.168.1.0/24(rw,sync)
For example:
# useradd -b /nethome -s /sbin/nologin -u 5159 -U arc815
The command updates the /etc/passwd file and creates a home directory under /nethome on the
LDAP server.
The user's login shell will be overridden by the loginShell value set in LDAP.
3. Use the id command to list the user and group IDs that have been assigned to the user, for example:
# id arc815
uid=5159(arc815) gid=5159(arc815) groups=5159(arc815)
4. Create an LDIF file that defines the user, for example arc815-user.ldif:
# UPG arc815
dn: cn=arc815,ou=Groups,dc=mydom,dc=com
cn: arc815
gidNumber: 5159
objectclass: top
objectclass: posixGroup
# User arc815
dn: uid=arc815,ou=People,dc=mydom,dc=com
cn: John Beck
givenName: John
sn: Beck
uid: arc815
uidNumber: 5159
gidNumber: 5159
homeDirectory: /nethome/arc815
loginShell: /bin/bash
mail: [email protected]
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword: {SSHA}x
In this example, the user belongs to a user private group (UPG), which is defined in the same file.
The user's login shell attribute loginShell is set to /bin/bash. The user's password attribute
userPassword is set to a placeholder value. If you use Kerberos authentication with LDAP, this
attribute is not used.
5. If you have configured LDAP authentication, use the following command to add the user to LDAP:
# ldapadd -cxWD cn=admin,dc=mydom,dc=com -f arc815-user.ldif
Enter LDAP Password: admin_password
adding new entry "cn=arc815,ou=Groups,dc=mydom,dc=com"
adding new entry "uid=arc815,ou=People,dc=mydom,dc=com"
If you have configured Kerberos authentication, use kinit to obtain a ticket granting ticket (TGT) for
the admin principal, and use this form of the ldapadd command:
# ldapadd -f arc815-user.ldif
6. Verify that you can locate the user and his or her UPG in LDAP:
# ldapsearch -LLL -x -b "dc=mydom,dc=com" '(|(uid=arc815)(cn=arc815))'
dn: cn=arc815,ou=Groups,dc=mydom,dc=com
cn: arc815
gidNumber: 5159
objectClass: top
objectClass: posixGroup
dn: uid=arc815,ou=People,dc=mydom,dc=com
cn: John Beck
givenName: John
sn: Beck
uid: arc815
uidNumber: 5159
gidNumber: 5159
homeDirectory: /nethome/arc815
loginShell: /bin/bash
mail: [email protected]
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
7. If you have configured LDAP authentication, set the user password in LDAP:
# ldappasswd -xWD "cn=admin,dc=mydom,dc=com" \
-S "uid=arc815,ou=people,dc=mydom,dc=com"
New password: user_password
Re-enter new password: user_password
Enter LDAP Password: admin_password
If you have configured Kerberos authentication, use kinit to obtain a ticket granting ticket (TGT) for
the admin principal, and use the kadmin command to add the user (principal) and password to the
database for the Kerberos domain, for example:
# kadmin -q "addprinc [email protected]"
For more information, see the kadmin(1), ldapadd(1), ldappasswd(1), and ldapsearch(1)
manual pages.
1. Create an LDIF file that defines the users that should be added to the memberuid attribute for the
group, for example employees-add-users.ldif:
dn: cn=employees,ou=Groups,dc=mydom,dc=com
changetype: modify
add: memberUid
memberUid: arc815
dn: cn=employees,ou=Groups,dc=mydom,dc=com
changetype: modify
add: memberUid
memberUid: arc891
...
2. If you have configured LDAP authentication, use the following command to add the group to LDAP:
# ldapmodify -xcWD "cn=admin,dc=mydom,dc=com" \
-f employees-add-users.ldif
Enter LDAP Password: user_password
modifying entry "cn=employees,ou=Groups,dc=mydom,dc=com"
...
If you have configured Kerberos authentication, use kinit to obtain a ticket granting ticket (TGT) for
the admin principal, and use this form of the command:
# ldapmodify -f employees-add-users.ldif
3. Select LDAP as the user account database and enter values for:
LDAP Search Base DN
LDAP Server
The URL of the LDAP server including the port number. For
example, ldap://ldap.mydom.com:389 or ldaps://
ldap.mydom.com:636.
LDAP authentication requires that you use either LDAP over SSL (ldaps) or Transport Layer Security
(TLS) to secure the connection to the LDAP server.
4. If you use TLS, click Download CA Certificate and enter the URL from which to download the CA
certificate that provides the basis for authentication within the domain.
5. Select either LDAP password or Kerberos password for authentication.
6. If you select Kerberos authentication, enter values for:
Realm
KDCs
Admin Servers
Select the Use DNS to resolve hosts to realms check box if you want to use TXT records in DNS to
map host names to Kerberos realms, for example:
_kerberos.mydom.com    IN TXT "MYDOM.COM"
Select the Use DNS to locate KDCs for realms check box to look up the KDCs and administration
servers defined as SRV records in DNS, for example:
_kerberos._tcp.mydom.com        IN SRV 1 0 88   krbsvr.mydom.com
_kerberos._udp.mydom.com        IN SRV 1 0 88   krbsvr.mydom.com
_kpasswd._udp.mydom.com         IN SRV 1 0 464  krbsvr.mydom.com
_kerberos-adm._tcp.mydom.com    IN SRV 1 0 749  krbsvr.mydom.com
If you want to use TLS, additionally specify the --enableldaptls option and the download URL of the
CA certificate, for example:
# authconfig --enableldap --enableldapauth \
--ldapserver=ldap://ldap.mydom.com:389 \
--ldapbasedn="ou=people,dc=mydom,dc=com" \
--enableldaptls \
--ldaploadcacert=https://ca-server.mydom.com/CAcert.pem \
--update
The --enableldap option configures /etc/nsswitch.conf to enable the system to use LDAP
and SSSD for information services. The --enableldapauth option enables LDAP authentication by
modifying the PAM configuration files in /etc/pam.d to use the pam_ldap.so module.
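As an illustration, after running authconfig with --enableldap, the name-service entries in /etc/nsswitch.conf typically resemble the following. This is a sketch; the exact entries on your system may differ:

```
passwd:     files sss
shadow:     files sss
group:      files sss
```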
For more information, see the authconfig(8), pam_ldap(5), and nsswitch.conf(5) manual pages.
For information about using Kerberos authentication with LDAP, see Section 24.6.3, Enabling Kerberos
Authentication.
Note
You must also configure SSSD to be able to access information in LDAP. See
Section 24.4.10.1, Configuring an LDAP Client to use SSSD.
If your client uses automount maps stored in LDAP, you must configure autofs
to work with LDAP. See Section 24.4.10.2, Configuring an LDAP Client to Use
Automount Maps.
2. Edit the /etc/sssd/sssd.conf configuration file and configure the sections to support the required
services, for example:
[sssd]
config_file_version = 2
domains = default
services = nss, pam
[domain/default]
id_provider = ldap
ldap_uri = ldap://ldap.mydom.com
ldap_id_use_start_tls = true
ldap_search_base = dc=mydom,dc=com
ldap_tls_cacertdir = /etc/openldap/cacerts
auth_provider = krb5
chpass_provider = krb5
krb5_realm = MYDOM.COM
krb5_server = krbsvr.mydom.com
krb5_kpasswd = krbsvr.mydom.com
cache_credentials = true
[domain/LDAP]
id_provider = ldap
ldap_uri = ldap://ldap.mydom.com
ldap_search_base = dc=mydom,dc=com
auth_provider = krb5
krb5_realm = MYDOM.COM
krb5_server = kdcsvr.mydom.com
cache_credentials = true
min_id = 5000
max_id = 25000
enumerate = false
[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3
entry_cache_timeout = 300
[pam]
reconnection_retries = 3
offline_credentials_expiration = 2
offline_failed_login_attempts = 3
offline_failed_login_delay = 5
For more information, see the sssd.conf(5) manual page and Section 24.8, About the System Security
Services Daemon.
In this example, the map is available. For details of how to make this map available, see Section 24.4.6,
Adding an Automount Map to LDAP.
3. If the auto.home map is available, edit /etc/auto.master and create an entry that tells autofs
where to find the auto.home map in LDAP, for example:
/nethome    ldap:nisMapName=auto.home,dc=mydom,dc=com
<autofs_ldap_sasl_conf
usetls="yes"
tlsrequired="no"
authrequired="autodetect"
authtype="GSSAPI"
clientprinc="host/[email protected]"
/>
This example assumes that Kerberos authentication with the LDAP server uses TLS for the connection.
The principal for the client system must exist in the Kerberos database. You can use the klist -k
command to verify this. If the principal for the client does not exist, use kadmin to add the principal.
5. If you use Kerberos Authentication, use kadmin to add a principal for the LDAP service on the LDAP
server, for example:
# kadmin -q "addprinc ldap/[email protected]"
6. Restart the autofs service, and configure the service to start following a system reboot:
# systemctl restart autofs
# systemctl enable autofs
The autofs service creates the directory /nethome. When a user logs in, the automounter mounts
his or her home directory under /nethome.
If the owner and group for the user's files are unexpectedly listed as the anonymous user or group
(nobody or nogroup) and all_squash has not been specified as a mount option, verify that the
Domain setting in /etc/idmapd.conf on the NFS server is set to the DNS domain name. Restart the
NFS services on the NFS server if you change this file.
For more information, see the auto.master(5) and autofs_ldap_auth.conf(5) manual pages.
passwd.byuid
The /var/yp/nicknames file contains a list of commonly used short names for maps such as passwd
for passwd.byname and group for group.byname.
You can use the ypcat command to display the contents of an NIS map, for example:
# ypcat passwd | grep 1500
guest:$6$gMIxsr3W$LaAo...6EE6sdsFPI2mdm7/NEm0:1500:1500::/nethome/guest:/bin/bash
Note
As the ypcat command displays password hashes to any user, this example
demonstrates that NIS authentication is inherently insecure against password-hash
cracking programs. If you use Kerberos authentication, you can configure password
hashes not to appear in NIS maps, although other information that ypcat displays
could also be useful to an attacker.
For more information, see the ypcat(1) manual page.
2. Edit /etc/sysconfig/network and add an entry to define the NIS domain, for example:
NISDOMAIN=mynisdom
3. Edit /etc/ypserv.conf to configure NIS options and to add rules for which hosts and domains can
access which NIS maps.
For example, the following entries allow access only to NIS clients in the mynisdom domain on the
192.168.1 subnet:
192.168.1.0/24: mynisdom : * : none
* : * : * : deny
For more information, see the ypserv.conf(5) manual page and the comments in /etc/
ypserv.conf.
4. Create the file /var/yp/securenets and add entries for the networks for which the server should
respond to requests, for example:
# cat > /var/yp/securenets <<!
255.255.255.255 127.0.0.1
255.255.255.0   192.168.1.0
!
# cat /var/yp/securenets
255.255.255.255 127.0.0.1
255.255.255.0   192.168.1.0
In this example, the server accepts requests from the local loopback interface and the 192.168.1
subnet.
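Each line in /var/yp/securenets pairs a netmask with a network address; ypserv accepts a client when the client address ANDed with the netmask equals the network. A shell sketch of that check, using the addresses from the example (the client address 192.168.1.42 is illustrative):

```shell
# Convert a dotted quad to a 32-bit integer
ip2int() {
  old_ifs=$IFS
  IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

mask=$(ip2int 255.255.255.0)
net=$(ip2int 192.168.1.0)
client=$(ip2int 192.168.1.42)

# securenets rule: accept when (client AND netmask) == network
if [ $(( client & mask )) -eq "$net" ]; then
  result=accepted
else
  result=rejected
fi
echo "$result"
```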
5. Edit /var/yp/Makefile:
a. Set any required map options and specify which NIS maps to create using the all target, for
example:
all:  passwd group auto.home \
# hosts rpc services netid protocols mail \
# netgrp shadow publickey networks ethers bootparams printcap \
# amd.home auto.local. passwd.adjunct \
# timezone locale netmasks
This example allows NIS to create maps for the /etc/passwd, /etc/group, and /etc/
auto.home files. By default, the information from the /etc/shadow file is merged with the
passwd maps, and the information from the /etc/gshadow file is merged with the group maps.
For more information, see the comments in /var/yp/Makefile.
b. If you intend to use Kerberos authentication instead of NIS authentication, change the values of
MERGE_PASSWD and MERGE_GROUP to false:
MERGE_PASSWD=false
MERGE_GROUP=false
Note
These settings prevent password hashes from appearing in the NIS maps.
c. If you configure any NIS slave servers in the domain, set the value of NOPUSH to false:
NOPUSH=false
If you update the maps, this setting allows the master server to automatically push the maps to the
slave servers.
6. Configure the NIS services:
a. Start the ypserv service and configure it to start after system reboots:
# systemctl start ypserv
# systemctl enable ypserv
The ypserv service runs on the NIS master server and any slave servers.
b. If the server will act as the master NIS server and there will be at least one slave NIS server, start
the ypxfrd service and configure it to start after system reboots:
# systemctl start ypxfrd
# systemctl enable ypxfrd
The ypxfrd service speeds up the distribution of very large NIS maps from an NIS master to any
NIS slave servers. The service runs on the master server only, and not on any slave servers. You
do not need to start this service if there are no slave servers.
c. Start the yppasswdd service and configure it to start after system reboots:
# systemctl start yppasswdd
# systemctl enable yppasswdd
The yppasswdd service allows NIS users to change their password in the shadow map. The
service runs on the NIS master server and any slave servers.
7. Configure the firewall settings:
a. Edit /etc/sysconfig/network and add the following entries that define the ports on which the
ypserv and ypxfrd services listen:
YPSERV_ARGS="-p 834"
YPXFRD_ARGS="-p 835"
These entries fix the ports on which ypserv and ypxfrd listen.
b. Allow incoming TCP connections to ports 111 and 834 and incoming UDP datagrams on ports 111
and 834:
# firewall-cmd --zone=zone --add-port=111/tcp --add-port=111/udp \
--add-port=834/tcp --add-port=834/udp
# firewall-cmd --permanent --zone=zone --add-port=111/tcp --add-port=111/udp \
--add-port=834/tcp --add-port=834/udp
portmapper services requests on TCP port 111 and UDP port 111, and ypserv services requests
on TCP port 834 and UDP port 834.
c. On the master server, if you run the ypxfrd service to support transfers to slave servers, allow
incoming TCP connections to port 835 and incoming UDP datagrams on port 835:
# firewall-cmd --zone=zone --add-port=835/tcp --add-port=835/udp
# firewall-cmd --permanent --zone=zone --add-port=835/tcp --add-port=835/udp
d. On the master server, allow incoming UDP datagrams on the port on which the yppasswdd service
listens, for example:
# firewall-cmd --zone=zone \
--add-port=`rpcinfo -p | gawk '/yppasswdd/ {print $4}'`/udp
Note
Do not make this rule permanent. The UDP port number that yppasswdd
uses is different every time that it restarts.
e. Edit /etc/rc.local and add the following line:
firewall-cmd --zone=zone \
--add-port=`rpcinfo -p | gawk '/yppasswdd/ {print $4}'`/udp
This entry creates a firewall rule for the yppasswdd service when the system reboots. If you restart
yppasswdd, you must correct the firewall rules manually unless you modify the
/etc/init.d/yppasswdd script.
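The back-quoted expression above extracts the port number (field 4) from rpcinfo -p output. The sketch below runs the same awk pattern over a canned sample line; the port 877 is illustrative, and gawk and awk behave identically for this pattern:

```shell
# One line in `rpcinfo -p` format: program, version, protocol, port, service
sample='    100009    1    udp    877  yppasswdd'
port=$(printf '%s\n' "$sample" | awk '/yppasswdd/ {print $4}')
echo "$port"
```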
8. After you have started all the servers, create the NIS maps on the master NIS server:
# /usr/lib64/yp/ypinit -m
At this point, we have to construct a list of the hosts which will run NIS
servers. nismaster is in the list of NIS server hosts. Please continue to add
the names for the other hosts, one per line. When you are done with the
list, type a <control D>.
next host to add: nismaster
next host to add: nisslave1
next host to add: nisslave2
next host to add: ^D
Enter the host names of the NIS slave servers (if any), type Ctrl-D to finish, and enter y to confirm the
list of NIS servers. The host names must be resolvable to IP addresses in DNS or by entries in
/etc/hosts.
The ypinit utility builds the domain subdirectory in /var/yp and makes the NIS maps that are
defined for the all target in /var/yp/Makefile. If you have configured NOPUSH=false in
/var/yp/Makefile and the names of the slave servers in /var/yp/ypservers, the command also
pushes the updated maps to the slave servers.
9. On each NIS slave server, run the following command to initialize the server:
# /usr/lib64/yp/ypinit -s nismaster
where nismaster is the host name or IP address of the NIS master server.
For more information, see the ypinit(8) manual page.
Note
If you update any of the source files on the master NIS server that are used to build
the maps, use the following command on the master NIS server to remake the map
and push the changes out to the slave servers:
# make -C /var/yp
a. Create the base directory for user directories, for example /nethome:
# mkdir /nethome
b. Edit /etc/exports and add an entry for the directory, for example:
/nethome *(rw,sync)
You might prefer to restrict which clients can mount the file system. For example, the following entry
allows only clients in the 192.168.1.0/24 subnet to mount /nethome:
/nethome 192.168.1.0/24(rw,sync)
d. If you have configured /var/yp/Makefile to make the auto.home map available to NIS clients,
create the following entry in /etc/auto.home:
* -rw,sync nissvr:/nethome/&
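In this map entry, the automounter replaces the wildcard key (*) with the name being looked up and substitutes that name for each &, so a lookup for user alice resolves to nissvr:/nethome/alice. The substitution can be sketched with sed (the entry and the user name are the ones from the example):

```shell
# The wildcard map entry from /etc/auto.home.
entry='* -rw,sync nissvr:/nethome/&'
key=alice

# autofs replaces the "*" key and each "&" with the looked-up name.
expanded=$(printf '%s\n' "$entry" | sed "s/^\*/$key/; s/&/$key/")
echo "$expanded"    # alice -rw,sync nissvr:/nethome/alice
```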
The command updates the /etc/passwd file and creates a home directory on the NIS server.
3. Depending on the type of authentication that you have configured:
For Kerberos authentication, on the Kerberos server or a client system with kadmin access, use
kadmin to create a principal for the user in the Kerberos domain, for example:
# kadmin -q "addprinc username@KRBDOMAIN"
The command prompts you to set a password for the user, and adds the principal to the Kerberos
database.
For NIS authentication, use the passwd command:
# passwd username
The command updates the /etc/shadow file with the hashed password.
4. Update the NIS maps:
# make -C /var/yp
This command makes the NIS maps that are defined for the all target in /var/yp/Makefile. If you
have configured NOPUSH=false in /var/yp/Makefile and the names of the slave servers in
/var/yp/ypservers, the command also pushes the updated maps to the slave servers.
Note
A Kerberos-authenticated user can use either kpasswd or passwd to change his or
her password. An NIS-authenticated user must use the yppasswd command rather
than passwd to change his or her password.
3. Select NIS as the user account database and enter values for:
NIS Domain
NIS Server
KDCs
Admin Servers
Select the Use DNS to resolve hosts to realms check box to look up the name of the realm defined as a
TXT record in DNS, for example:
_kerberos.mydom.com IN TXT "MYDOM.COM"
Select the Use DNS to locate KDCs for realms check box to look up the KDCs and administration
servers defined as SRV records in DNS, for example:
_kerberos._tcp.mydom.com     IN SRV 1 0 88  krbsvr.mydom.com
_kerberos._udp.mydom.com     IN SRV 1 0 88  krbsvr.mydom.com
_kpasswd._udp.mydom.com      IN SRV 1 0 464 krbsvr.mydom.com
_kerberos-adm._tcp.mydom.com IN SRV 1 0 749 krbsvr.mydom.com
You can also enable and configure NIS or Kerberos authentication by using the authconfig command.
For example, to use NIS authentication, specify the --enablenis option together with the NIS domain
name and the host name or IP address of the master server, as shown in the following example:
# authconfig --enablenis --nisdomain mynisdom \
--nisserver nissvr.mydom.com --update
The --enablenis option configures /etc/nsswitch.conf to enable the system to use NIS for
information services. The --nisdomain and --nisserver settings are added to /etc/yp.conf.
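Based on the options shown above, the resulting /etc/yp.conf entry would look something like the following (a sketch; the exact form of the line depends on your authconfig version):

```
domain mynisdom server nissvr.mydom.com
```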
For more information, see the authconfig(8), nsswitch.conf(5), and yp.conf(5) manual pages.
For information about using Kerberos authentication with NIS, see Section 24.6.3, Enabling Kerberos
Authentication.
/etc/auto.home
In this example, the map is available. For details of how to make this map available, see Section 24.5.3,
Adding User Accounts to NIS.
4. If the auto.home map is available, edit the file /etc/auto.home to contain the following entry:
+auto.home
The autofs service creates the directory /nethome. When a user logs in, the automounter mounts
his or her home directory under /nethome.
If the owner and group for the user's files are unexpectedly listed as the anonymous user or group
(nobody or nogroup) and all_squash has not been specified as a mount option, verify that the
Domain setting in /etc/idmapd.conf on the NFS server is set to the DNS domain name. Restart the
NFS services on the NFS server if you change this file.
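For reference, the relevant setting is a single line in the [General] section of /etc/idmapd.conf (the domain name here is an assumed example):

```
[General]
Domain = mydom.com
```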
4. When the client wants to use a service, usually to obtain access to a local or remote host system, it uses
the session key to encrypt a copy of the encrypted TGT, the client's IP address, a time stamp, and a
service ticket request, and it sends this item to the KDC.
The KDC uses its copies of the session key and the TGS key to extract the TGT, IP address, and
time stamp, which allow it to validate the client. Provided that both the client and its service request
are valid, the KDC generates a service session key and a service ticket that contains the client's IP
address, a time stamp, and a copy of the service session key, and it uses the service key to encrypt the
service ticket. It then uses the session key to encrypt both the service ticket and another copy of the
service session key.
The service key is usually the host principal's key for the system on which the service provider runs.
5. The KDC sends the encrypted combination of the service session key and the encrypted service ticket
to the client.
The client uses its copy of the session key to extract the encrypted service ticket and the service
session key.
6. The client sends the encrypted service ticket to the service provider together with the principal name
and a time stamp encrypted with the service session key.
The service provider uses the service key to extract the data in the service session ticket, including the
service session key.
7. The service provider enables the service for the client, which is usually to grant access to its host
system.
If the client and service provider are hosted on different systems, they can each use their own copy of
the service session key to secure network communication for the service session.
Note the following points about the authentication handshake:
Steps 1 through 3 correspond to using the kinit command to obtain and cache a TGT.
Steps 4 through 7 correspond to using a TGT to gain access to a Kerberos-aware service.
Authentication relies on pre-shared keys.
Keys are never sent in the clear over any communications channel between the client, the KDC, and the
service provider.
At the start of the authentication process, the client and the KDC share the principal's key, and the KDC
and the service provider share the service key. Neither the principal nor the service provider knows the
TGS key.
At the end of the process, both the client and the service provider share a service session key that they
can use to secure the service session. The client does not know the service key and the service provider
does not know the principal's key.
The client can use the TGT to request access to other service providers for the lifetime of the ticket,
which is usually one day. The session manager renews the TGT if it expires while the session is active.
Note
Keep any system that you configure as a Kerberos server very secure, and do not
configure it to perform any other service function.
To configure a Kerberos server that can act as a key distribution center (KDC) and a Kerberos
administration server:
1. Configure the server to use DNS, and verify that both direct and reverse name lookups of the server's
domain name and IP address work.
For more information about configuring DNS, see Chapter 13, Name Service Configuration.
2. Configure the server to use a network time synchronization mechanism such as the Network Time
Protocol (NTP), Precision Time Protocol (PTP), or chrony. Kerberos requires that the system time on
Kerberos servers and clients is synchronized as closely as possible. If the system times of the server
and a client differ by more than 300 seconds (by default), authentication fails.
For more information, see Chapter 14, Network Time Configuration.
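The 300-second limit can be illustrated with simple shell arithmetic. The sketch below compares two simulated epoch timestamps rather than querying real hosts:

```shell
client_time=1463740000   # seconds since the epoch on the client (simulated)
server_time=1463740120   # seconds since the epoch on the KDC (simulated)

# Clock skew in seconds; ${skew#-} strips a leading minus sign to
# obtain the absolute value.
skew=$((client_time - server_time))
if [ "${skew#-}" -lt 300 ]; then
    echo "clock skew OK"             # within the default 300-second limit
else
    echo "authentication would fail"
fi
```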
3. Install the krb5-libs, krb5-server, and krb5-workstation packages:
# yum install krb5-libs krb5-server krb5-workstation
4. Edit /etc/krb5.conf and configure settings for the Kerberos realm, for example:
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = MYDOM.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
MYDOM.COM = {
kdc = krbsvr.mydom.com
admin_server = krbsvr.mydom.com
}
[domain_realm]
.mydom.com = MYDOM.COM
mydom.com = MYDOM.COM
[appdefaults]
pam = {
debug = true
validate = false
}
In this example, the Kerberos realm is MYDOM.COM in the DNS domain mydom.com and
krbsvr.mydom.com (the local system) acts as both a KDC and an administration server. The
[appdefaults] section configures options for the pam_krb5.so module.
For more information, see the krb5.conf(5) and pam_krb5(5) manual pages.
5. Edit /var/kerberos/krb5kdc/kdc.conf and configure settings for the key distribution center, for
example:
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
MYDOM.COM = {
#master_key_type = aes256-cts
master_key_type = des-hmac-sha1
default_principal_flags = +preauth
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /etc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal \
arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
In this example, any principal who has an instance of admin, such as alice/[email protected],
has full administrative control of the Kerberos database for the MYDOM.COM domain. Ordinary users
in the database usually have an empty instance, for example [email protected]. These users have no
administrative control other than being able to change their password, which is stored in the database.
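The grant itself lives in the acl_file named in the example (/var/kerberos/krb5kdc/kadm5.acl). A minimal entry giving every admin instance in the realm full rights might look like this (a sketch consistent with the example realm):

```
*/[email protected] *
```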
8. Create a principal for each user who should have the admin instance, for example:
# kadmin.local -q "addprinc alice/admin"
9. Cache the keys that kadmind uses to decrypt administration Kerberos tickets in
/etc/kadm5.keytab:
# kadmin.local -q "ktadd -k /etc/kadm5.keytab kadmin/admin"
# kadmin.local -q "ktadd -k /etc/kadm5.keytab kadmin/changepw"
10. Start the KDC and administration services and configure them to start following system reboots:
# systemctl start krb5kdc
# systemctl start kadmin
# systemctl enable krb5kdc
# systemctl enable kadmin
11. Add principals for users and the Kerberos server and cache the key for the server's host principal in
/etc/kadm5.keytab by using either kadmin.local or kadmin, for example:
# kadmin.local -q "addprinc bob"
# kadmin.local -q "addprinc -randkey host/krbsvr.mydom.com"
# kadmin.local -q "ktadd -k /etc/kadm5.keytab host/krbsvr.mydom.com"
12. Allow incoming TCP connections to ports 88, 464, and 749 and incoming UDP datagrams on ports 88,
464, and 749:
# firewall-cmd --zone=zone --add-port=88/tcp --add-port=88/udp \
--add-port=464/tcp --add-port=464/udp \
--add-port=749/tcp --add-port=749/udp
# firewall-cmd --permanent --zone=zone --add-port=88/tcp --add-port=88/udp \
--add-port=464/tcp --add-port=464/udp \
--add-port=749/tcp --add-port=749/udp
krb5kdc services requests on TCP port 88 and UDP port 88, and kadmind services requests on TCP
ports 464 and 749 and UDP ports 464 and 749.
In addition, you might need to allow TCP and UDP access on different ports for other applications.
For more information, see the kadmin(1) manual page.
b. Edit /etc/ntp.conf and configure the settings as required. See the ntp.conf(5) manual page
and http://www.ntp.org.
c. Start the ntpd service and configure it to start following system reboots.
# systemctl start ntpd
# systemctl enable ntpd
4. Copy the /etc/krb5.conf file to the system from the Kerberos server.
5. Use the Authentication Configuration GUI or authconfig to set up the system to use Kerberos with
either NIS or LDAP, for example:
# authconfig --enablenis --enablekrb5 --krb5realm=MYDOM.COM \
--krb5adminserver=krbsvr.mydom.com --krb5kdc=krbsvr.mydom.com \
--update
6. On the Kerberos KDC, use either kadmin or kadmin.local to add a host principal for the client, for
example:
# kadmin.local -q "addprinc -randkey host/client.mydom.com"
7. On the client system, use kadmin to cache the key for its host principal in /etc/kadm5.keytab, for
example:
# kadmin -q "ktadd -k /etc/kadm5.keytab host/client.mydom.com"
8. To use ssh and related OpenSSH commands to connect from one Kerberos client system to another
Kerberos client system:
a. On the remote Kerberos client system, verify that GSSAPIAuthentication is enabled in
/etc/ssh/sshd_config:
GSSAPIAuthentication yes
To allow use of the Kerberos versions of rlogin, rsh, and telnet, which are provided in the
krb5-appl-clients package, you must enable the corresponding services on the remote client.
For more information, see the kadmin(1) manual page.
KDCs
Admin Servers
Select the Use DNS to resolve hosts to realms check box to look up the name of the realm defined as a
TXT record in DNS, for example:
_kerberos.mydom.com IN TXT "MYDOM.COM"
Select the Use DNS to locate KDCs for realms check box to look up the KDCs and administration
servers defined as SRV records in DNS, for example:
_kerberos._tcp.mydom.com     IN SRV 1 0 88  krbsvr.mydom.com
_kerberos._udp.mydom.com     IN SRV 1 0 88  krbsvr.mydom.com
_kpasswd._udp.mydom.com      IN SRV 1 0 464 krbsvr.mydom.com
_kerberos-adm._tcp.mydom.com IN SRV 1 0 749 krbsvr.mydom.com
Figure 24.6 shows the Authentication Configuration GUI with LDAP selected as the user account database
and Kerberos selected for authentication.
Figure 24.6 Authentication Configuration of LDAP with Kerberos Authentication
Alternatively, you can use the authconfig command to configure Kerberos authentication with LDAP, for
example:
# authconfig --enableldap \
--ldapbasedn="dc=mydom,dc=com" --ldapserver=ldap://ldap.mydom.com:389 \
[--enableldaptls --ldaploadcacert=https://ca-server.mydom.com/CAcert.pem] \
--enablekrb5 \
--krb5realm=MYDOM.COM | --enablekrb5realmdns \
--krb5kdc=krbsvr.mydom.com --krb5adminserver=krbsvr.mydom.com | --enablekrb5kdcdns \
--update
or with NIS:
# authconfig --enablenis \
--enablekrb5 \
--krb5realm=MYDOM.COM | --enablekrb5realmdns \
--krb5kdc=krbsvr.mydom.com --krb5adminserver=krbsvr.mydom.com | --enablekrb5kdcdns \
--update
The --enablekrb5 option enables Kerberos authentication by modifying the PAM configuration files in
/etc/pam.d to use the pam_krb5.so module. The --enableldap and --enablenis options configure
/etc/nsswitch.conf to enable the system to use LDAP or NIS for information services.
For more information, see the authconfig(8), nsswitch.conf(5), and pam_krb5(5) manual pages.
Comments in the file start with a # character. The remaining lines each define an operation type, a control
flag, the name of a module such as pam_rootok.so or the name of an included configuration file such
as system-auth, and any arguments to the module. PAM provides authentication modules as shared
libraries in /usr/lib64/security.
For a particular operation type, PAM reads the stack from top to bottom and calls the modules listed in the
configuration file. Each module generates a success or failure result when called.
The following operation types are defined for use:
auth        Authenticates the user, for example by requesting and checking a password.
account     Checks that the user's account is valid, for example by testing for password
            expiry or access restrictions.
password    Updates the user's authentication token, such as a password.
session     Sets up and tears down user sessions, for example by mounting the user's
            home directory.
If the operation type is preceded with a dash (-), PAM does not create a system log entry if the
module is missing.
With the exception of include, the control flags tell PAM what to do with the result of running a module.
The following control flags are defined for use:
optional    The result of the module is ignored unless it is the only module that is
            specified for the operation type.
required    If the module fails, the overall result is failure, but PAM still processes
            the remaining modules of the same operation type.
requisite   If the module fails, the overall result is failure and PAM does not process
            any remaining modules of the same operation type.
sufficient  If the module succeeds, PAM does not process any remaining modules
            of the same operation type. If the module fails, PAM processes the
            remaining modules of the same operation type to determine overall
            success or failure.
The control flag field can also define one or more rules that specify the action that PAM should take
depending on the value that a module returns. Each rule takes the form value=action, and the rules are
enclosed in square brackets, for example:
[user_unknown=ignore success=ok ignore=ignore default=bad]
If the result returned by a module matches a value, PAM uses the corresponding action, or, if there is no
match, it uses the default action.
The include flag specifies that PAM must also consult the PAM configuration file specified as the
argument.
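As an illustration of how these flags and include combine, a minimal auth stack might read as follows (an invented fragment, not a stock Oracle Linux file):

```
auth    sufficient    pam_rootok.so
auth    include       system-auth
auth    required      pam_deny.so
```

Here root succeeds immediately via pam_rootok.so, other users are evaluated by the included system-auth stack, and pam_deny.so fails any request that falls through.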
Most authentication modules and PAM configuration files have their own manual pages. In addition, the
/usr/share/doc/pam-version directory contains the PAM System Administrators Guide
(html/Linux-PAM_SAG.html or Linux-PAM_SAG.txt) and a copy of the PAM standard (rfc86.0.txt).
For more information, see the pam(8) manual page. In addition, each PAM module has its own manual
page, for example pam_unix(8), postlogin(5), and system-auth(5).
2. Edit the /etc/sssd/sssd.conf configuration file and configure the sections to support the required
services, for example:
[sssd]
config_file_version = 2
domains = LDAP
services = nss, pam
[domain/LDAP]
id_provider = ldap
ldap_uri = ldap://ldap.mydom.com
ldap_search_base = dc=mydom,dc=com
auth_provider = krb5
krb5_server = krbsvr.mydom.com
krb5_realm = MYDOM.COM
cache_credentials = true
min_id = 5000
max_id = 25000
enumerate = false
[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3
entry_cache_timeout = 300
[pam]
reconnection_retries = 3
offline_credentials_expiration = 2
offline_failed_login_attempts = 3
offline_failed_login_delay = 5
The [sssd] section contains configuration settings for SSSD monitor options, domains, and services.
The SSSD monitor service manages the services that SSSD provides.
The services entry defines the supported services, which should include nss for the Name Service
Switch and pam for Pluggable Authentication Modules.
The domains entry specifies the name of the sections that define authentication domains.
The [domain/LDAP] section defines a domain for an LDAP identity provider that uses Kerberos
authentication. Each domain defines where user information is stored, the authentication method, and
any configuration options. SSSD can work with LDAP identity providers such as OpenLDAP, Red Hat
Directory Server, IPA, and Microsoft Active Directory, and it can use either native LDAP or Kerberos
authentication.
The id_provider entry specifies the type of provider (in this example, LDAP). ldap_uri specifies
a comma-separated list of the Universal Resource Identifiers (URIs) of the LDAP servers, in order of
preference, to which SSSD can connect. ldap_search_base specifies the base distinguished name
(dn) that SSSD should use when performing LDAP user operations on a relative distinguished name
(RDN) such as a common name (cn).
The auth_provider entry specifies the authentication provider (in this example, Kerberos).
krb5_server specifies a comma-separated list of Kerberos servers, in order of preference, to which
SSSD can connect. krb5_realm specifies the Kerberos realm. cache_credentials specifies
if SSSD caches user credentials such as tickets, session keys, and other identifying information to
support offline authentication and single sign-on.
Note
To allow SSSD to use Kerberos authentication with an LDAP server, you must
configure the LDAP server to use both Simple Authentication and Security
Layer (SASL) and the Generic Security Services API (GSSAPI). For more
information about configuring SASL and GSSAPI for OpenLDAP, see
http://www.openldap.org/doc/admin24/sasl.html.
The min_id and max_id entries specify lower and upper limits on the values of user and group IDs.
enumerate specifies whether SSSD caches the complete list of users and groups that are available
on the provider. The recommended setting is False unless a domain contains relatively few users or
groups.
The [nss] section configures the Name Service Switch (NSS) module that integrates the SSS
database with NSS. The filter_users and filter_groups entries prevent NSS from retrieving
information about the specified users and groups from SSS. reconnection_retries
specifies the number of times that SSSD should attempt to reconnect if a data provider crashes.
entry_cache_timeout specifies the number of seconds for which SSSD considers cached entries
valid before it queries the back end again.
The [pam] section configures the PAM module that integrates SSS with PAM. The
offline_credentials_expiration entry specifies the number of days for which to allow
cached logins if the authentication provider is offline. offline_failed_login_attempts
specifies how many failed login attempts are allowed if the authentication provider
is offline. offline_failed_login_delay specifies how many minutes after
offline_failed_login_attempts failed login attempts that a new login attempt is permitted.
3. Change the mode of /etc/sssd/sssd.conf to 0600:
# chmod 0600 /etc/sssd/sssd.conf
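You can confirm the resulting mode with stat. The sketch below performs the same chmod on a scratch file so that it is safe to run anywhere, rather than touching the real sssd.conf:

```shell
tmpfile=$(mktemp)                 # stand-in for /etc/sssd/sssd.conf
chmod 0600 "$tmpfile"             # owner read/write only
mode=$(stat -c '%a' "$tmpfile")   # numeric mode, GNU stat syntax
echo "$mode"                      # 600
rm -f "$tmpfile"
```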
4. Enable SSSD by using the authconfig command:
# authconfig --update --enablesssd --enablesssdauth
Note
If you edit /etc/sssd/sssd.conf, use this command to update the service.
The --enablesssd option updates /etc/nsswitch.conf to support SSS.
The --enablesssdauth option updates /etc/pam.d/system-auth to include the required
pam_sss.so entries to support SSSD.
domain
In the domain security model, the local Samba server has a machine
account (a domain security trust account) and Samba authenticates
user names and passwords with a domain controller in a domain that
implements Windows NT4 security.
Warning
If the local machine acts as a Primary or Backup
Domain Controller, do not use the domain
security model. Use the user security model
instead.
server
In the server security model, the local Samba server authenticates user
names and passwords with another server, such as a Windows NT
server.
Warning
The server security model is deprecated as it
has numerous security issues.
user
In the user security model, a client must log in with a valid user name
and password. This model supports encrypted passwords. If the server
successfully validates the client's user name and password, the client
can mount multiple shares without being required to specify a password.
Depending on the security model that you choose, you might also need to specify the following information:
The name of the ADS realm that the Samba server is to join (ADS security model only).
The names of the domain controllers. If there are several domain controllers, separate the names with
spaces.
The login template shell to use for the Windows NT user account (ADS and domain security models
only).
Whether to allow user authentication using information that has been cached by the System Security
Services Daemon (SSSD) if the domain controllers are offline.
Your selection updates the security directive in the [global] section of the /etc/samba/smb.conf
configuration file.
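For example, choosing the user security model results in a [global] entry along these lines (a minimal sketch; your smb.conf will contain many other directives):

```
[global]
        security = user
```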
If you have initialized Kerberos, you can click Join Domain to create a machine account on the Active
Directory server and grant permission for the Samba domain member server to join the domain.
You can also use the authconfig command to configure Winbind authentication. To use the
user-level security model, specify the name of the domain or workgroup and the host names of the domain
controllers, for example:
# authconfig --enablewinbind --enablewinbindauth --smbsecurity user \
[--enablewinbindoffline] --smbservers="ad1.mydomain.com ad2.mydomain.com" \
--smbworkgroup=MYDOMAIN --update
To allow user authentication using information that has been cached by the System Security Services
Daemon (SSSD) if the domain controllers are offline, specify the --enablewinbindoffline option.
For the domain security model, additionally specify the template shell, for example:
# authconfig --enablewinbind --enablewinbindauth --smbsecurity domain \
[--enablewinbindoffline] --smbservers="ad1.mydomain.com ad2.mydomain.com" \
--smbworkgroup=MYDOMAIN --winbindtemplateshell=/bin/bash --update
For the ADS security model, additionally specify the ADS realm and template shell, for example:
# authconfig --enablewinbind --enablewinbindauth --smbsecurity ads \
[--enablewinbindoffline] --smbservers="ad1.mydomain.com ad2.mydomain.com" \
--smbworkgroup=MYDOMAIN --smbrealm MYDOMAIN.COM \
--winbindtemplateshell=/bin/bash --update
This chapter describes how to configure and manage local user and group accounts.
In an enterprise environment that might have hundreds of servers and thousands of users, user and group
account information is more likely to be held in a central repository rather than in files on individual servers.
You can configure user and group information on a central server and retrieve this information by using
services such as Lightweight Directory Access Protocol (LDAP) or Network Information Service (NIS). You
can also create users' home directories on a central server and automatically mount, or access, these
remote file systems when a user logs in to a system.
INACTIVE specifies after how many days the system locks an account if a user's password expires. If set
to 0, the system locks the account immediately. If set to -1, the system does not lock the account.
SKEL defines a template directory, whose contents are copied to a newly created user's home directory.
The contents of this directory should match the default shell defined by SHELL.
You can specify options to useradd -D to change the default settings for user accounts. For example, to
change the defaults for INACTIVE, HOME, and SHELL:
# useradd -D -f 3 -b /home2 -s /bin/sh
Note
If you change the default login shell, you would usually also create a new SKEL
template directory with contents that are appropriate to the new shell.
If you specify /sbin/nologin for a user's SHELL, that user cannot log into the
system directly but processes can run with that user's ID. This setting is typically
used for services that run as users other than root.
The default settings are stored in the /etc/default/useradd file.
For more information, see Section 25.8, Configuring Password Ageing and the useradd(8) manual
page.
You can specify options to change the account's settings from the default ones.
By default, if you specify a user name argument but do not specify any options, useradd creates a
locked user account using the next available UID and assigns a user private group (UPG) rather than
the value defined for GROUP as the user's group.
Alternatively, you can use the newusers command to create a number of user accounts at the same time.
For more information, see the chpasswd(8), newusers(8), passwd(1), and useradd(8) manual
pages.
25.3.1 About umask and the setgid and Restricted Deletion Bits
Users whose primary group is not a UPG have a umask of 0022 set by /etc/profile or /etc/bashrc,
which prevents other users, including other members of the primary group, from modifying any file that the
user owns.
A user whose primary group is a UPG has a umask of 0002. It is assumed that no other user has the same
group.
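The difference between the two umask values shows up in the mode of newly created files. The sketch below runs in a scratch directory and only demonstrates the mode arithmetic (0666 & ~umask):

```shell
tmpdir=$(mktemp -d)
( cd "$tmpdir" && umask 0022 && touch f022 )   # non-UPG default umask
( cd "$tmpdir" && umask 0002 && touch f002 )   # UPG default umask

m1=$(stat -c '%a' "$tmpdir/f022")   # group cannot write
m2=$(stat -c '%a' "$tmpdir/f002")   # group may write
echo "$m1 $m2"    # 644 664
rm -rf "$tmpdir"
```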
To grant users in the same group write access to files within the same directory, change the group
ownership on the directory to the group, and set the setgid bit on the directory:
# chgrp groupname directory
# chmod g+s directory
Files created in such a directory have their group set to that of the directory rather than the primary group
of the user who creates the file.
The restricted deletion bit prevents unprivileged users from removing or renaming a file in the directory
unless they own either the file or the directory.
To set the restricted deletion bit on a directory:
# chmod a+t directory
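Both bits are visible in the symbolic mode that stat or ls -l reports. The sketch below applies them to a scratch directory; the s in the group field is the setgid bit and the trailing t is the restricted deletion bit:

```shell
dir=$(mktemp -d)
chmod 2775 "$dir"             # rwxrwxr-x plus the setgid bit
chmod a+t "$dir"              # add the restricted deletion (sticky) bit
mode=$(stat -c '%A' "$dir")   # symbolic mode, GNU stat syntax
echo "$mode"                  # drwxrwsr-t
rm -rf "$dir"
```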
Creating Groups
For example, to add a user to a supplementary group (other than his or her login group):
# usermod -aG groupname username
You can use the groups command to display the groups to which a user belongs, for example:
# groups root
root : root bin daemon sys adm disk wheel
For more information, see the groups(1), userdel(8), and usermod(8) manual pages.
Typically, you might want to use the -g option to specify the group ID (GID). For example:
# groupadd -g 1000 devgrp
For more information, see the groupdel(8) and groupmod(8) manual pages.
Setting         Description
PASS_MAX_DAYS   Maximum number of days for which a password can be used before it must be
                changed. The default value is 99,999 days.
PASS_MIN_DAYS   Minimum number of days that must elapse between password changes. The
                default value is 0 days.
PASS_WARN_AGE   Number of days warning that is given before a password expires. The default
                value is 7 days.
To change the inactivity period for an existing user account, use the usermod command:
# usermod -f 30 username
To change the default inactivity period for new user accounts, use the useradd command:
# useradd -D -f 30
A value of -1 specifies that user accounts are not locked due to inactivity.
For more information, see the useradd(8) and usermod(8) manual pages.
ALL=(ALL) ALL
ALL= SERVICES, SOFTWARE
For more information, see the su(1), sudo(8), sudoers(5), and visudo(8) manual pages.
This chapter describes the subsystems that you can use to administer system security, including SELinux,
the Netfilter firewall, TCP Wrappers, chroot jails, auditing, system logging, and process accounting.
Oracle Linux has evolved into a secure enterprise-class operating system that can provide the
performance, data integrity, and application uptime necessary for business-critical production
environments.
Thousands of production systems at Oracle run Oracle Linux and numerous internal developers use it
as their development platform. Oracle Linux is also at the heart of several Oracle engineered systems,
including the Oracle Exadata Database Machine, Oracle Exalytics In-Memory Machine, Oracle Exalogic
Elastic Cloud, and Oracle Database Appliance.
Oracle On Demand services, which deliver software as a service (SaaS) at a customer's site, via an Oracle
data center, or at a partner site, use Oracle Linux at the foundation of their solution architectures. Backed
by Oracle support, these mission-critical systems and deployments depend fundamentally on the built-in
security and reliability features of the Oracle Linux operating system.
Released under an open-source license, Oracle Linux includes the Unbreakable Enterprise Kernel that
provides the latest Linux innovations while offering tested performance and stability. Oracle has been
a key participant in the Linux community, contributing code enhancements such as Oracle Cluster File
System and the Btrfs file system. From a security perspective, having roots in open source is a significant
advantage. The Linux community, which includes many experienced developers and security experts,
reviews posted Linux code extensively prior to its testing and release. The open-source Linux community
has supplied many security improvements over time, including access control lists (ACLs), cryptographic
libraries, and trusted utilities.
Package                  Description
policycoreutils
libselinux               Provides the API that SELinux applications use to get and set process and
                         file security contexts, and to obtain security policy decisions.
selinux-policy           Provides the SELinux Reference Policy, which is used as the basis for other
                         policies, such as the SELinux targeted policy.
selinux-policy-targeted  Provides support for the SELinux targeted policy, where objects outside the
                         targeted domains run under DAC.
libselinux-python
libselinux-utils
The following table describes a selection of useful SELinux packages that are not installed by default:
Package
mcstrans
policycoreutils-gui
policycoreutils-python
selinux-policy-mls
setroubleshoot
setroubleshoot-server
setools-console
Use yum or another suitable package manager to install the SELinux packages that you require on your
system.
For more information about SELinux, refer to the SELinux Project Wiki, the selinux(8) manual page,
and the manual pages for the SELinux commands.
Utility             Package                  Description

audit2allow         policycoreutils-python   Generates SELinux policy allow rules from logs of denied
                                             operations.
audit2why           policycoreutils-python   Explains why an operation was denied (equivalent to
                                             audit2allow -w).
avcstat             libselinux-utils         Displays statistics for the SELinux Access Vector Cache.
chcat               policycoreutils-python   Changes or removes the security categories of files or
                                             users.
findcon             setools-console          Searches for file contexts that match given criteria.
fixfiles            policycoreutils          Fixes the security contexts of file systems.
getenforce          libselinux-utils         Reports the current SELinux mode.
getsebool           libselinux-utils         Reports SELinux boolean values.
indexcon            setools-console          Indexes file contexts in a database for faster searching.
load_policy         policycoreutils          Loads a new SELinux policy into the kernel.
matchpathcon        libselinux-utils         Reports the default security context for a file path.
replcon             setools-console          Searches for and replaces file contexts.
restorecon          policycoreutils          Restores the default security contexts of files.
restorecond         policycoreutils          Daemon that watches for file creation and applies the
                                             default security context.
sandbox             policycoreutils-python   Runs a command in an SELinux sandbox.
sealert             setroubleshoot-server,   Analyzes and explains SELinux denial messages (the
                    setroubleshoot           setroubleshoot client).
seaudit-report      setools-console          Generates reports from SELinux audit logs.
sechecker           setools-console          Runs modular checks on an SELinux policy.
secon               policycoreutils          Displays the SELinux context of a file, program, or user
                                             input.
sediff              setools-console          Compares two SELinux policies.
seinfo              setools-console          Queries the components of an SELinux policy.
selinuxconlist      libselinux-utils         Lists the contexts that are reachable by a user.
selinuxdefcon       libselinux-utils         Reports the default context for a user.
selinuxenabled      libselinux-utils         Indicates whether SELinux is enabled.
semanage            policycoreutils-python   Manages SELinux policy configuration, including booleans,
                                             ports, and user mappings.
semodule            policycoreutils          Manages SELinux policy modules.
semodule_deps       policycoreutils          Lists the dependencies between SELinux policy packages.
semodule_expand     policycoreutils          Expands an SELinux policy module package.
semodule_link       policycoreutils          Links SELinux policy module packages together.
semodule_package    policycoreutils          Creates an SELinux policy module package.
sesearch            setools-console          Queries the rules in an SELinux policy.
sestatus            policycoreutils          Reports the status of SELinux.
setenforce          libselinux-utils         Switches SELinux between enforcing and permissive modes.
setsebool           policycoreutils          Sets SELinux boolean values.
setfiles            policycoreutils          Initializes the security contexts of file systems.
system-config-selinux  policycoreutils-gui   Graphical tool for managing SELinux.
togglesebool        libselinux-utils         Toggles SELinux boolean values.
Disabled      The kernel uses only DAC rules for access control. SELinux does not
              enforce any security policy because no policy is loaded into the kernel.

Enforcing     The kernel denies access to users and programs unless permitted by
              SELinux security policy rules. All denial messages are logged.

Permissive    The kernel does not enforce security policy rules but SELinux sends
              denial messages to a log file. This allows you to see what actions would
              have been denied if SELinux were running in enforcing mode. This
              mode is intended to be used for diagnosing the behavior of SELinux.

The current value that you set for a mode using setenforce does not persist across reboots. To
configure the default SELinux mode, edit the configuration file for SELinux, /etc/selinux/config, and
set the value of the SELINUX directive to disabled, enforcing, or permissive.
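For reference, a minimal /etc/selinux/config that makes enforcing mode the default might look like the following sketch (the SELINUXTYPE line assumes the targeted policy, which is the Oracle Linux default):

```
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing  - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled   - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected.
#     mls      - Multi Level Security protection.
SELINUXTYPE=targeted
```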
confined domain, which restricts access to files that an attacker could exploit. If SELinux detects that a
targeted process is trying to access resources outside the confined domain, it denies access to those
resources and logs the denial. Only specific services run in confined domains. Examples are services that
listen on a network for client requests, such as httpd, named, and sshd, and processes that run as root
to perform tasks on behalf of users, such as passwd. Other processes, including most user processes, run
in an unconfined domain where only DAC rules apply. If an attack compromises an unconfined process,
SELinux does not prevent access to system resources and data.
The following table lists examples of SELinux domains.
Domain          Description

init_t          systemd and init processes.
httpd_t         Apache HTTP server (httpd) processes.
kernel_t        Kernel threads.
syslogd_t       System logger (rsyslogd) processes.
unconfined_t    Processes that run in the unconfined domain, to which only DAC
                rules apply.
You can set the boolean values in the Boolean view of the SELinux Administration GUI.
Alternatively, to display all boolean values together with a short description, use the following command:
# semanage boolean -l
SELinux boolean                State  Default Description
ftp_home_dir                   (off  ,  off)  Determine whether ftpd can read and
                                              write files in user home directories.
smartmon_3ware                 (off  ,  off)  Determine whether smartmon can support
                                              devices on 3ware controllers.
mpd_enable_homedirs            (off  ,  off)  Determine whether mpd can traverse
                                              user home directories.
...
You can use the getsebool and setsebool commands to display and set the value of a specific
boolean.
# getsebool boolean
# setsebool boolean on|off
For example, to display and set the value of the ftp_home_dir boolean:
# getsebool ftp_home_dir
ftp_home_dir --> off
# setsebool ftp_home_dir on
# getsebool ftp_home_dir
ftp_home_dir --> on
To toggle the value of a boolean, use the togglesebool command as shown in this example:
# togglesebool ftp_home_dir
ftp_home_dir: inactive
To make the value of a boolean persist across reboots, specify the -P option to setsebool, for example:
# setsebool -P ftp_home_dir on
# getsebool ftp_home_dir
ftp_home_dir --> on
Login Name                SELinux User              MLS/MCS Range             Service

__default__               unconfined_u              s0-s0:c0.c1023            *
root                      unconfined_u              s0-s0:c0.c1023            *
system_u                  system_u                  s0-s0:c0.c1023            *
By default, SELinux maps Linux users other than root and the default system-level user, system_u, to
the Linux __default__ user, and in turn to the SELinux unconfined_u user. The MLS/MCS Range is
the security level used by Multilevel Security (MLS) and Multicategory Security (MCS).
# ls -Z
root root system_u:object_r:admin_home_t:s0      anaconda-ks.cfg
root root unconfined_u:object_r:admin_home_t:s0  config
root root system_u:object_r:admin_home_t:s0      initial-setup-ks.cfg
root root unconfined_u:object_r:admin_home_t:s0  jail
root root unconfined_u:object_r:admin_home_t:s0  team0.cfg
To display the context information that is associated with a specified file or directory:
# ls -Z /etc/selinux/config
-rw-r--r--. root root system_u:object_r:selinux_config_t:s0 /etc/selinux/config
To display the context information that is associated with processes, use the ps -Z command:
# ps -Z
LABEL                                                  PID  TTY    TIME     CMD
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023  3038 pts/0  00:00:00 su
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023  3044 pts/0  00:00:00 bash
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023  3322 pts/0  00:00:00 ps
To display the context information that is associated with the current user, use the id -Z command:
# id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
2. Use the restorecon command to apply the new file type to the entire directory hierarchy.
# /sbin/restorecon -R -v /var/webcontent
2. Use the restorecon command to apply the default file type to the entire directory hierarchy.
# /sbin/restorecon -R -v /var/webcontent
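The two numbered steps above belong to separate procedures; in the first procedure, a typical preceding step registers the new file type with semanage before restorecon applies it. A sketch of the complete sequence, reusing the same path and file type shown above:

```
# semanage fcontext -a -t httpd_sys_content_t "/var/webcontent(/.*)?"
# /sbin/restorecon -R -v /var/webcontent
```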
Login Name                SELinux User              MLS/MCS Range             Service

__default__               unconfined_u              s0-s0:c0.c1023            *
root                      unconfined_u              s0-s0:c0.c1023            *
system_u                  system_u                  s0-s0:c0.c1023            *
The MLS/MCS Range column displays the level used by MLS and MCS.
By default, Oracle Linux users are mapped to the SELinux user unconfined_u.
You can configure SELinux to confine Oracle Linux users by mapping them to SELinux users in confined
domains, which have predefined security rules and mechanisms as listed in the following table.
SELinux User  SELinux   Permit Running  Permit Network  Permit Logging in  Permit Executing
              Domain    su and sudo?    Access?         Using X Window     Applications in
                                                        System?            $HOME and /tmp?

guest_u       guest_t   No              Yes             No                 No
staff_u       staff_t   sudo            Yes             Yes                Yes
system_u      system_t  Yes             Yes             Yes                Yes
user_u        user_t    No              Yes             Yes                Yes
xguest_u      xguest_t  No              Firefox only    Yes                No
To allow Linux users in the xguest_t domain to execute applications in directories to which they have
write access:
# setsebool -P allow_xguest_exec_content on
To prevent Linux users in the staff_t and user_t domains from executing applications in directories to
which they have write access:
# setsebool -P allow_staff_exec_content off
# setsebool -P allow_user_exec_content off
A service attempts to access a port to which a security policy does not allow access.
If the service's use of the port is valid, a solution is to use semanage to add the port to the policy
configuration. For example, to allow the Apache HTTP server to listen on port 8000:
# semanage port -a -t http_port_t -p tcp 8000
An update to a package causes an application to behave in a way that breaks an existing security policy.
You can use the audit2allow -w -a command to view the reason why an access denial occurred.
If you then run the audit2allow -a -M module command, it creates a type enforcement (.te)
file and a policy package (.pp) file. You can use the policy package file with the semodule -i
module.pp command to stop the error from reoccurring. This procedure is usually intended to allow
package updates to function until an amended policy is available. If used incorrectly, it can create
potential security holes on your system.
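Taken together, the workflow that this paragraph describes can be sketched as follows (mypolicy is a hypothetical module name):

```
# audit2allow -w -a              # display why recent accesses were denied
# audit2allow -a -M mypolicy     # generate mypolicy.te and mypolicy.pp
# semodule -i mypolicy.pp        # install the policy package
```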
To create or modify a firewall configuration from the command line, use the firewall-cmd utility (or, if
you prefer, the iptables or ip6tables utilities) to configure the packet filtering rules.
The packet filtering rules are recorded in the /etc/firewalld hierarchy for firewalld and in the
/etc/sysconfig/iptables and /etc/sysconfig/ip6tables files for iptables and ip6tables.
The command does not display any results if the system has not been assigned to a zone.
To configure your system for the work zone on a local network connected via the em1 interface:
# firewall-cmd --zone=work --change-interface=em1
success
Querying the current zone now shows that the firewall is configured on the interface em1 for the work
zone:
# firewall-cmd --get-active-zone
work
interfaces: em1
To make the change permanent, you can change the default zone for the system, for example:
# firewall-cmd --get-default-zone
public
# firewall-cmd --set-default-zone=work
success
# firewall-cmd --get-default-zone
work
In this example, the system allows access by SSH and Samba clients.
To permit access by NFS and HTTP clients when the work zone is active, use the --add-service
option:
# firewall-cmd --zone=work --add-service=http --add-service=nfs
success
# firewall-cmd --zone=work --list-services
http nfs ssh samba
Note
If you do not specify the zone, the change is applied to the default zone, not the
currently active zone.
To make rule changes persist across reboots, run the command again, additionally specifying the
--permanent option:
# firewall-cmd --permanent --zone=work --add-service=http --add-service=nfs
success
Similarly, the --remove-port option removes access to a port. Remember to rerun the command with
the --permanent option if you want to make the change persist.
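For example, the following hypothetical commands open TCP port 8080 in the work zone and then close it again; repeat each command with the --permanent option to update the stored configuration as well:

```
# firewall-cmd --zone=work --add-port=8080/tcp
success
# firewall-cmd --zone=work --remove-port=8080/tcp
success
```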
To display all the firewall rules that are defined for a zone, use the --list-all option:
# firewall-cmd --zone=work --list-all
work (default,active)
interfaces: em1
sources:
services: http nfs ssh
ports: 5353/udp 3689/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
# systemctl stop firewalld
# systemctl disable firewalld
# systemctl start iptables
# systemctl enable iptables
To save any changes that you have made to the firewall rules to /etc/sysconfig/iptables, so that
the service loads them when it next starts:
# /etc/init.d/iptables save
Filter     The default table, which is mainly used to drop or accept packets
           based on their content.
Mangle     Used to alter fields in the headers of network packets.
NAT        Used to perform Network Address Translation on packets that create
           new connections.

The kernel uses the rules stored in these tables to make decisions about network packet filtering. Each
rule consists of one or more criteria and a single action. If a criterion in a rule matches the information in a
network packet header, the kernel applies the action to the packet. Examples of actions include:

ACCEPT     Allow the packet to proceed.
DROP       Discard the packet without notifying the sending system.
REJECT     As DROP, and additionally notify the sending system that the packet
           was blocked.
Rules are stored in chains, where each chain is composed of a default policy plus zero or more rules. The
kernel applies each rule in a chain to a packet until a match is found. If there is no matching rule, the kernel
applies the chain's default action (policy) to the packet.
Each netfilter table has several predefined chains. The filter table contains the following chains:

FORWARD    Packets that are not addressed to the local system pass through this
           chain.
INPUT      Packets that are addressed to the local system pass through this
           chain.
OUTPUT     Packets that are created locally pass through this chain.

The chains are permanent and you cannot delete them. However, you can create additional chains in the
filter table.
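As an illustration, the following hypothetical commands create a custom chain in the filter table and pass traffic arriving on a chosen TCP port from the INPUT chain through it:

```
# iptables -N in_custom
# iptables -A INPUT -p tcp --dport 2222 -j in_custom
# iptables -A in_custom -s 192.168.2.0/24 -j ACCEPT
# iptables -A in_custom -j DROP
```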
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:ipp
ACCEPT     udp  --  anywhere             224.0.0.251         state NEW udp dpt:mdns
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ipp
ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:ipp
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
In this example, the default policy for each chain is ACCEPT. A more secure system could have a default
policy of DROP, and the additional rules would only allow specific packets on a case-by-case basis.
If you want to modify the chains, specify the --line-numbers option to see how the rules are numbered.
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
2    ACCEPT     icmp --  anywhere             anywhere
3    ACCEPT     all  --  anywhere             anywhere
4    ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
5    ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:ipp
6    ACCEPT     udp  --  anywhere             224.0.0.251         state NEW udp dpt:mdns
7    ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ipp
8    ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:ipp
9    REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination
1    REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination

For example, the following command inserts a rule in the INPUT chain to allow access by TCP on port 80
(HTTP):

# iptables -I INPUT 4 -p tcp -m tcp --dport 80 -j ACCEPT
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
2    ACCEPT     icmp --  anywhere             anywhere
3    ACCEPT     all  --  anywhere             anywhere
4    ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:http
5    ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
6    ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:ipp
7    ACCEPT     udp  --  anywhere             224.0.0.251         state NEW udp dpt:mdns
8    ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ipp
9    ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:ipp
10   REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited
The output from iptables -L shows that the new entry has been inserted as rule 4, and the old rules
4 through 9 are pushed down to positions 5 through 10. The TCP destination port of 80 is represented as
http, which corresponds to the following definition in the /etc/services file (the HTTP daemon listens
for client requests on port 80):
http            80/tcp          www www-http    # WorldWideWeb HTTP
To replace the rule in a chain, use the iptables -R command. For example, the following command
replaces rule 4 in the INPUT chain to allow access by TCP on port 443:
# iptables -R INPUT 4 -p tcp -m tcp --dport 443 -j ACCEPT
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
...
4    ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:https
...
The TCP destination port of 443 is represented as https, which corresponds to the following definition in
the /etc/services file for secure HTTP on port 443:
https           443/tcp
The /etc/init.d/iptables save command saves the rules to /etc/sysconfig/iptables. For
IPv6, you can use /etc/init.d/ip6tables save to save the rules to /etc/sysconfig/ip6tables.
When a remote client attempts to connect to a network service on the system, the wrapper consults the
rules in the /etc/hosts.allow and /etc/hosts.deny configuration files to determine if access is
permitted.
The wrapper for a service first reads /etc/hosts.allow from top to bottom. If the daemon and client
combination matches an entry in the file, access is allowed. If the wrapper does not find a match in
/etc/hosts.allow, it reads /etc/hosts.deny from top to bottom. If the daemon and client combination
matches an entry in the file, access is denied. If no rules for the daemon and client combination are found
in either file, or if neither file exists, access to the service is allowed.
The wrapper first applies the rules specified in /etc/hosts.allow, so these rules take precedence over
the rules specified in /etc/hosts.deny. If a rule defined in /etc/hosts.allow permits access to a
service, any rule in /etc/hosts.deny that forbids access to the same service is ignored.
The rules take the following form:
daemon_list : client_list [: command] [: deny]
where daemon_list and client_list are comma-separated lists of daemons and clients, and
the optional command is run when a client tries to access a daemon. You can use the keyword ALL to
represent all daemons or all clients. Subnets can be represented by using the * wildcard, for example
192.168.2.*. Domains can be represented by prefixing the domain name with a period (.), for example
.mydomain.com. The optional deny keyword causes a connection to be denied even for rules specified in
the /etc/hosts.allow file.
The following are some sample rules.
Match all clients for scp, sftp, and ssh access (sshd).
sshd : ALL
Match all clients on the 192.168.2 subnet for FTP access (vsftpd).
vsftpd : 192.168.2.*
Match all clients in the mydomain.com domain for access to all wrapped services.
ALL : .mydomain.com
Match all clients for FTP access, and display the contents of the banner file /etc/banners/vsftpd (the
banner file must have the same name as the daemon).
vsftpd : ALL : banners /etc/banners/
Match all clients on the 200.182.68 subnet for all wrapped services, and log all such events. The %c and
%d tokens are expanded to the names of the client and the daemon.
ALL : 200.182.68.* : spawn /usr/bin/echo `date` "Attempt by %c to connect to %d" >> /var/log/tcpwr.log
Match all clients for scp, sftp, and ssh access, and log the event as an emerg message, which is
displayed on the console.
sshd : ALL : severity emerg
Match all clients in the forbid.com domain for scp, sftp, and ssh access, log the event, and deny
access (even if the rule appears in /etc/hosts.allow).
sshd : .forbid.com : spawn /usr/bin/echo `date` "sshd access denied for %c" >>/var/log/sshd.log : deny
Note
The chroot mechanism cannot defend against intentional tampering or low-level
access to system devices by privileged users. For example, a chroot root user
could create device nodes and mount file systems on them. A program can also
break out of a chroot jail if it can gain root privilege and use chroot() to change
its current working directory to the real root directory. For this reason, you should
ensure that a chroot jail does not contain any setuid or setgid executables that
are owned by root.
For a chroot process to be able to start successfully, you must populate the chroot directory with all
required program files, configuration files, device nodes, and shared libraries at their expected locations
relative to the top level of the chroot directory.
2. Use the ldd command to find out which libraries are required by the command that you intend to run in
the chroot jail, for example /usr/bin/bash:
# ldd /usr/bin/bash
linux-vdso.so.1 => (0x00007fffdedfe000)
libtinfo.so.5 => /lib64/libtinfo.so.5 (0x0000003877000000)
libdl.so.2 => /lib64/libdl.so.2 (0x0000003861c00000)
libc.so.6 => /lib64/libc.so.6 (0x0000003861800000)
/lib64/ld-linux-x86-64.so.2 (0x0000003861000000)
Note
Although the path is displayed as /lib64, the actual path is /usr/lib64
because /lib64 is a symbolic link to /usr/lib64. Similarly, /bin is a
symbolic link to /usr/bin. You need to recreate such symbolic links within the
chroot jail.
3. Create subdirectories of the chroot jail's root directory that have the same relative paths as the
command binary and its required libraries have to the real root directory, for example:
# mkdir -p /home/oracle/jail/usr/bin
# mkdir -p /home/oracle/jail/usr/lib64
4. Create the symbolic links that link to the binary and library directories, in the same manner as the
symbolic links that exist in the real root directory. Use relative link targets so that the links also
resolve correctly inside the chroot jail:
# ln -s usr/bin /home/oracle/jail/bin
# ln -s usr/lib64 /home/oracle/jail/lib64
5. Copy the binary and the shared libraries to the directories under the chroot jail's root directory, for
example:
# cp /usr/bin/bash /home/oracle/jail/usr/bin
# cp /usr/lib64/{libtinfo.so.5,libdl.so.2,libc.so.6,ld-linux-x86-64.so.2} \
/home/oracle/jail/usr/lib64
If you do not specify a command argument, chroot runs the value of the SHELL environment variable or
/usr/bin/sh if SHELL is not set.
For example, to run /usr/bin/bash in a chroot jail (having previously set it up as described in
Section 26.5.2, Creating a Chroot Jail):
# chroot /home/oracle/jail
bash-4.2# pwd
/
bash-4.2# ls
bash: ls: command not found
bash-4.2# exit
exit
#
You can run built-in shell commands such as pwd in this shell, but not other commands unless you have
copied their binaries and any required shared libraries to the chroot jail.
For more information, see the chroot(1) manual page.
Record all unsuccessful exits from open and truncate system calls for files in the /etc directory
hierarchy.
-a exit,always -S open -S truncate -F dir=/etc -F success=0
Record all files that have been written to or that have their attributes changed by any user who originally
logged in with a UID of 500 or greater.
-a exit,always -S open -F auid>=500 -F perm=wa
Record requests for write or file attribute change access to /etc/sudoers, and tag such record with the
string sudoers-change.
-w /etc/sudoers -p wa -k sudoers-change
Record requests for write and file attribute change access to the /etc directory hierarchy.
-w /etc/ -p wa
Require a reboot after changing the audit configuration. If specified, this rule should appear at the end of
the /etc/audit/audit.rules file.
-e 2
The aureport command generates summaries of audit data. You can set up cron jobs that run
aureport periodically to generate reports of interest. For example, the following command generates a
report that shows every login event from 1 second after midnight on the previous day until the current
time:
# aureport -l -i -ts yesterday -te now
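For example, a cron job that produces this report once a day might be defined as follows (the file names are hypothetical):

```
# /etc/cron.d/aureport-logins: generate a daily login report at 00:05
5 0 * * * root /sbin/aureport -l -i -ts yesterday -te now > /var/log/aureport-logins.txt
```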
For more information, see the ausearch(8) and aureport(8) manual pages.
For more information, see the journalctl(1) and systemd-journald.service(8) manual pages.
The configuration file for rsyslogd is /etc/rsyslog.conf, which contains global directives, module
directives, and rules. By default, rsyslog processes and archives only syslog messages. If required,
you can configure rsyslog to archive any other messages that journald forwards, including kernel,
boot, initrd, stdout, and stderr messages.
Global directives specify configuration options that apply to the rsyslogd daemon. All configuration
directives must start with a dollar sign ($) and only one directive can be specified on each line. The
following example specifies the maximum size of the rsyslog message queue:
$MainMsgQueueSize 50000
Expression-based filters, written in the rsyslog scripting language, select messages according to
arithmetic, boolean, or string values.
Facility/priority-based filters filter messages based on facility and priority values that take the form
facility.priority.
Property-based filters filter messages by properties such as timegenerated or syslogtag.
The following table lists the available facility keywords for facility/priority-based filters:
Facility Keyword    Description

auth, authpriv      Security and authentication messages.
cron                crond messages.
daemon              System daemons.
kern                Kernel messages.
lpr                 Line printer subsystem.
mail                Mail system.
news                Network news subsystem.
syslog              Messages generated internally by syslogd.
user                User-level messages.
uucp                UUCP subsystem.
local0 - local7     Local use.
The following table lists the available priority keywords for facility/priority-based filters, in ascending order
of importance:
Priority Keyword    Description

debug               Debug-level messages.
info                Informational messages.
notice              Normal but significant condition.
warning             Warning conditions.
err                 Error conditions.
crit                Critical conditions.
alert               Action must be taken immediately.
emerg               System is unusable.
All messages of the specified priority and higher are logged according to the specified action. An asterisk
(*) wildcard specifies all facilities or priorities. Separate the names of multiple facilities and priorities on a
line with commas (,). Separate multiple filters on one line with semicolons (;). Precede a priority with an
exclamation mark (!) to select all messages except those with that priority.
The following are examples of facility/priority-based filters.
Select all kernel messages with any priority.
kern.*
Select all mail messages with crit or higher priority.
mail.crit
Select all daemon and kern messages with warning or err priority.
daemon,kern.warning,err
Select all cron messages except those with info or debug priority.
cron.!info,!debug
kern.*                                                  /dev/console
*.info;mail.none;authpriv.none;cron.none                /var/log/messages
authpriv.*                                              /var/log/secure
mail.*                                                  -/var/log/maillog
cron.*                                                  /var/log/cron
local7.*                                                /var/log/boot.log
You can send the logs to a central log server over TCP by adding the following entry to the forwarding
rules section of /etc/rsyslog.conf on each log client:
*.*
@@logsvr:port
where logsvr is the domain name or IP address of the log server and port is the port number (usually,
514).
On the log server, add the following entry to the MODULES section of /etc/rsyslog.conf:
$ModLoad imtcp
$InputTCPServerRun port
where port corresponds to the port number that you set on the log clients.
To manage the rotation and archival of the correct logs, edit /etc/logrotate.d/syslog so that it
references each of the log files that are defined in the RULES section of /etc/rsyslog.conf. You can
configure how often the logs are rotated and how many past copies of the logs are archived by editing /
etc/logrotate.conf.
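For example, hypothetical settings in /etc/logrotate.conf that rotate the logs weekly and keep four weeks of archives might look like this:

```
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# compress rotated log files
compress
```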
It is recommended that you configure Logwatch on your log server to monitor the logs for suspicious
messages, and disable Logwatch on log clients. However, if you do use Logwatch, disable high precision
timestamps by adding the following entry to the GLOBAL DIRECTIVES section of /etc/rsyslog.conf
on each system:
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
For more information, see the logrotate(8), logwatch(8), rsyslogd(8) and rsyslog.conf(5)
manual pages, the HTML documentation in the /usr/share/doc/rsyslog-5.8.10 directory, and the
documentation at http://www.rsyslog.com/doc/manual.html.
accton          Turns process accounting on or off.
lastcomm        Displays information about previously executed commands.
sa              Summarizes information about previously executed commands.
For more information, see the ac(1), accton(8), lastcomm(1), and sa(8) manual pages.
To display the files that a package provides, use the repoquery utility, which is included in the
yum-utils package. For example, the following command lists the files that the btrfs-progs package
provides.
# repoquery -l btrfs-progs
/sbin/btrfs
/sbin/btrfs-convert
/sbin/btrfs-debug-tree
.
.
.
To uninstall a package, use the yum remove command, as shown in this example:
# yum remove xinetd
Loaded plugins: refresh-packagekit, security
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package xinetd.x86_64 2:2.3.14-35.el6_3 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package         Arch          Version                 Repository          Size
================================================================================
Removing:
 xinetd          x86_64        2:2.3.14-35.el6_3       @ol6_latest         259 k

Transaction Summary
================================================================================
Remove        1 Package(s)

Installed size: 259 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Erasing    : 2:xinetd-2.3.14-35.el6_3.x86_64                              1/1
  Verifying  : 2:xinetd-2.3.14-35.el6_3.x86_64                              1/1

Removed:
  xinetd.x86_64 2:2.3.14-35.el6_3

Complete!
The following table lists packages that you should not install or that you should remove using yum remove
if they are already installed.
Package                  Description

krb5-appl-clients        Kerberos-aware versions of the ftp, rcp, rlogin, rsh, and
                         telnet clients.
rsh, rsh-server          Remote shell tools (rcp, rlogin, rsh), which transmit data,
                         including passwords, unencrypted.
samba                    SMB file and print services for Microsoft Windows clients.
talk, talk-server        The talk chat client and server.
telnet, telnet-server    The telnet client and server, which transmit login
                         credentials unencrypted.
tftp, tftp-server        The Trivial File Transfer Protocol client and server.
xinetd                   The extended Internet services daemon.
ypbind, ypserv           NIS client and server utilities, which transmit data
                         unencrypted.
If the service is not running, start it and enable it to start when the system is rebooted:
# systemctl start rsyslog
# systemctl enable rsyslog
Ensure that each log file referenced in /etc/rsyslog.conf exists and is owned and only readable by
root:
# touch logfile
# chown root:root logfile
# chmod 0600 logfile
It is also recommended that you use a central log server and that you configure Logwatch on that server.
See Section 26.7, About System Logging.
To prevent users from creating core dumps, you can set a hard limit of 0 on the size of core files in
/etc/security/limits.conf:

*        hard        core        0
You can restrict access to core dumps to certain users or groups, as described in the limits.conf(5)
manual page.
By default, the system prevents setuid and setgid programs, programs that have changed credentials,
and programs whose binaries do not have read permission from dumping core. To ensure that the setting
is permanently recorded, add the following lines to /etc/sysctl.conf:
# Disallow core dumping by setuid and setgid programs
fs.suid_dumpable = 0
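To apply the setting to the running system without a reboot, you can reload the configuration with sysctl:

```
# sysctl -p                      # reload settings from /etc/sysctl.conf
# sysctl fs.suid_dumpable        # display the running value to verify
```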
Unless specifically stated otherwise, consider disabling the services in the following table if they are not
used on your system:
Service
Description
anacron
Executes commands periodically. Primarily intended for use on laptop and user
desktop machines that do not run continuously.
automount
Manages mount points for the automatic file-system mounter. Disable this
service on servers that do not require automounter functionality.
bluetooth        Supports Bluetooth devices. Disable this service if the system does
                 not use Bluetooth.
gpm              (General Purpose Mouse) Provides support for the mouse pointer in a
                 text console.
hidd             Provides support for Bluetooth input devices. Disable this service
                 if the system does not use Bluetooth.
irqbalance       Distributes hardware interrupts across processors. Disable this
                 service on systems that have a single CPU core.
iscsi            Controls logging in to iSCSI targets and scanning of iSCSI devices.
                 Disable this service on servers that do not access iSCSI devices.
iscsid           Implements control and management for the iSCSI protocol. Disable
                 this service on servers that do not access iSCSI devices.
kdump            Allows a kdump kernel to be loaded into memory at boot time or a
                 kernel dump to be saved if the system panics. Disable this service
                 on servers that you do not use for debugging or testing.
mcstrans         Translates SELinux category and level numbers into a human-readable
                 form. Disable this service if you do not use the MLS or MCS
                 features of SELinux.
mdmonitor        Checks the status of all software RAID arrays on the system.
                 Disable this service on servers that do not use software RAID.
pcscd            Provides access to smart card readers. Disable this service if the
                 system does not use smart cards.
sandbox          Supports the SELinux sandbox utility. Disable this service if you
                 do not use SELinux sandboxing.
setroubleshoot   Analyzes SELinux AVC denial messages. Disable this service if you
                 do not require SELinux troubleshooting.
smartd           Monitors the health of disk drives by using SMART. Disable this
                 service if you do not monitor disk health.
xfs              (X Font Server) Serves fonts to X servers. Disable this service on
                 servers that do not run the X Window System.
You should consider disabling the following network services if they are not used on your system:
Service
Description
avahi-daemon     Implements zero-configuration networking (mDNS/DNS-SD) service
                 discovery. Disable this service if it is not required.
cups             (Common UNIX Printing System) Provides printing services. Disable
                 this service on servers that do not need to support printing.
hplip            Provides support for printing to HP printers. Disable this service
                 on servers that do not need to support printing.
isdn             Supports ISDN network devices. Disable this service on systems that
                 do not use ISDN.
netfs
Mounts and unmounts network file systems, including NCP, NFS, and SMB.
Disable this service on servers that do not require this functionality.
network
Activates all network interfaces that are configured to start at boot time.
NetworkManager
Configures and monitors network connections dynamically. Disable this
service on servers that use a static network configuration.
nfslock
Implements the Network Status Monitor (NSM) used by NFS. Disable this
service on servers that do not require this functionality.
nmb
Provides NetBIOS name services used by Samba. Disable this service and
remove the samba package if the system is not acting as an Active Directory
server, a domain controller, or as a domain member, and it does not provide
Microsoft Windows file and print sharing functionality.
portmap
Implements Remote Procedure Call (RPC) support for NFS. Disable this
service on servers that do not require this functionality.
rhnsd
Queries the Unbreakable Linux Network (ULN) for updates and information.
rpcgssd
Used by NFS. Disable this service on servers that do not require this
functionality.
rpcidmapd
Used by NFS. Disable this service on servers that do not require this
functionality.
smb
Provides SMB network services used by Samba. Disable this service and
remove the samba package if the system is not acting as an Active Directory
server, a domain controller, or as a domain member, and it does not provide
Microsoft Windows file and print sharing functionality.
To stop a service and prevent it from starting when you reboot the system, use the following commands:
# systemctl stop service_name
# systemctl disable service_name
For more information, see the xinetd(8) and xinetd.conf(5) manual pages.
You can restrict remote access to certain users and groups by specifying the AllowUsers,
AllowGroups, DenyUsers, and DenyGroups settings, for example:
DenyUsers carol dan
AllowUsers alice bob
The ClientAliveInterval and ClientAliveCountMax settings cause sshd to disconnect a client
automatically after a period of inactivity, for example:
# Disconnect client after 300 seconds of inactivity
ClientAliveCountMax 0
ClientAliveInterval 300
After making changes to the configuration file, restart the sshd service for your changes to take effect.
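On Oracle Linux 7, you restart the service with systemctl:

```
# systemctl restart sshd
```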
For more information, see the sshd_config(5) manual page.
Use the find command to check for unowned files and directories on each file system, for example:
# find mount_point -mount -type f \( -nouser -o -nogroup \) -exec ls -l {} \;
# find mount_point -mount -type d \( -nouser -o -nogroup \) -exec ls -l {} \;
Unowned files and directories might be associated with a deleted user account, they might indicate an
error with software installation or removal, or they might be a sign of an intrusion on the system. Correct
the permissions and ownership of the files and directories that you find, or remove them. If possible,
investigate and correct the problem that led to their creation.
Use the find command to check for world-writable directories on each file system, for example:
# find mount_point -mount -type d -perm /o+w -exec ls -l {} \;
Investigate any world-writable directory that is owned by a user other than a system user. The user can
remove or change any file that other users write to the directory. Correct the permissions and ownership of
the directories that you find, or remove them.
You can also use find to check for setuid and setgid executables.
# find path -type f \( -perm -4000 -o -perm -2000 \) -exec ls -l {} \;
If the setuid and setgid bits are set, an executable can perform a task that requires other rights, such
as root privileges. However, buffer overrun attacks can exploit such executables to run unauthorized code
with the rights of the exploited process.
If you want to stop a setuid or setgid executable from being used by non-root users, you can use
the following commands to unset the setuid or setgid bit:
# chmod u-s file
# chmod g-s file
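The interplay between the find test and the chmod fix can be demonstrated safely in a scratch directory; the file name demo and the use of mktemp below are illustrative only, not part of the audit procedure itself:

```shell
# Create a throwaway file, set its setuid and setgid bits, locate it with
# find, then clear both bits with chmod and confirm it no longer matches.
tmpdir=$(mktemp -d)
touch "$tmpdir/demo"
chmod 6755 "$tmpdir/demo"   # 6755 = setuid (4000) + setgid (2000) + rwxr-xr-x
before=$(find "$tmpdir" -type f \( -perm -4000 -o -perm -2000 \) | wc -l)
chmod u-s,g-s "$tmpdir/demo"
after=$(find "$tmpdir" -type f \( -perm -4000 -o -perm -2000 \) | wc -l)
echo "before=$before after=$after"
rm -rf "$tmpdir"
```

Note that the parentheses around the -perm tests must be escaped from the shell, and that without them the -o operator would bind the -exec or pipeline action to only one of the two tests.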
The following table lists programs for which you might want to consider unsetting the setuid and
setgid bits:

Program File                        Bit Set    Description of Usage
/usr/bin/chage                      setuid     Changes password aging information
/usr/bin/chfn                       setuid     Changes a user's GECOS (finger) information
/usr/bin/chsh                       setuid     Changes a user's login shell
/usr/bin/crontab                    setuid     Edits a user's crontab file
/usr/bin/wall                       setgid     Broadcasts a message to all terminals
/usr/bin/write                      setgid     Sends a message to another user's terminal
/usr/bin/Xorg                       setuid     X server
/usr/libexec/openssh/ssh-keysign    setuid     Signs host keys for host-based SSH authentication
/usr/sbin/mount.nfs                 setuid     Mounts NFS file systems
/usr/sbin/netreport                 setgid     Requests notification of network interface changes
/usr/sbin/usernetctl                setuid     Allows ordinary users to control network interfaces
Note
This list is not exhaustive, as many optional packages contain setuid and setgid
programs.
In the output from the passwd -S -a command, the second field shows if a user account is locked (LK), does not have a
password (NP), or has a valid password (PS). The third field shows the date on which the user last changed
their password. The remaining fields show the minimum age, maximum age, warning period, and inactivity
period for the password and additional information about the password's status. The unit of time is days.
Use the passwd command to set passwords on any accounts that are not protected.
Use passwd -l to lock unused accounts. Alternatively, use userdel to remove the accounts entirely.
For more information, see the passwd(1) and userdel(8) manual pages.
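The status field can also be filtered with awk to find unprotected accounts. The sample data below is hypothetical; on a real system you would pipe the output of passwd -S -a (run as root) into the awk command instead:

```shell
# Hypothetical sample of `passwd -S -a` output; the second field is
# LK (locked), NP (no password), or PS (password set).
sample='root PS 2016-03-01 0 99999 7 -1
guest NP 2016-03-01 0 99999 7 -1
olduser LK 2015-01-15 0 99999 7 -1'
# Print the names of accounts that have no password
echo "$sample" | awk '$2 == "NP" { print $1 }'
```

Any account printed by this filter is a candidate for passwd, passwd -l, or userdel as described above.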
To specify how users' passwords are aged, edit the following settings in the /etc/login.defs file:

Setting            Description
PASS_MAX_DAYS      Maximum number of days for which a password can be used before it must be
                   changed. The default value is 99,999 days.
PASS_MIN_DAYS      Minimum number of days allowed between password changes. The default
                   value is 0 days.
PASS_WARN_AGE      Number of days warning that is given before a password expires. The default
                   value is 7 days.
To change the default inactivity period for new user accounts, use the useradd command:
# useradd -D -f 30
A value of -1 specifies that user accounts are not locked due to inactivity.
For more information, see the useradd(8) and usermod(8) manual pages.
Verify that no user accounts other than root have a user ID of 0.
# awk -F":" '$3 == 0 { print $1 }' /etc/passwd
root
If you install software that creates a default user account and password, change the vendor's default
password immediately. Centralized user authentication using an LDAP implementation such as OpenLDAP
can help to simplify user authentication and management tasks, and also reduces the risk arising from
unused accounts or accounts without a password.
By default, an Oracle Linux system is configured so that you cannot log in directly as root. You must log
in as a named user before using either su or sudo to perform tasks as root. This configuration allows
system accounting to trace the original login name of any user who performs a privileged administrative
action. If you want to grant certain users authority to be able to perform specific administrative tasks via
sudo, use the visudo command to modify the /etc/sudoers file. For example, the following entry
grants the user erin the same privileges as root when using sudo, but defines a limited set of privileges
to frank so that he can run commands such as systemctl, rpm, and yum:
erin    ALL=(ALL)       ALL
frank   ALL= SERVICES, SOFTWARE
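The SERVICES and SOFTWARE names in the entry for frank refer to command aliases, which must be defined elsewhere in /etc/sudoers. A minimal sketch of such definitions might look like the following; the exact command paths are assumptions for illustration:

```
## Hypothetical command aliases referenced by the entry for frank
Cmnd_Alias SERVICES = /usr/bin/systemctl
Cmnd_Alias SOFTWARE = /usr/bin/rpm, /usr/bin/yum
```

With these aliases in place, frank can run only the listed commands as root via sudo, while erin can run any command.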
password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so
The line for pam_pwquality.so defines that a user gets three attempts to choose a good password.
From the module's default settings, the password length must be a minimum of six characters, of which three
characters must be different from the previous password. The module only tests the quality of passwords
for users who are defined in /etc/passwd.
The line for pam_unix.so specifies that the module tests the password previously specified in the stack
before prompting for a password if necessary (pam_pwquality will already have performed such checks
for users defined in /etc/passwd), uses SHA-512 password hashing and the /etc/shadow file, and
allows access if the existing password is null.
You can modify the control flags and module parameters to change the checking that is performed when a
user changes his or her password, for example:
password    required      pam_pwquality.so retry=3 minlen=8 difok=5 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
password    required      pam_unix.so use_authtok sha512 shadow remember=5
password    required      pam_deny.so
The line for pam_pwquality.so defines that a user gets three attempts to choose a good password with
a minimum of eight characters, of which five characters must be different from the previous password, and
which must contain at least one upper case letter, one lower case letter, one numeric digit, and one
non-alphanumeric character.
The line for pam_unix.so specifies that the module does not perform password checking, uses SHA-512
password hashing and the /etc/shadow file, and saves information about the previous five passwords for
each user in the /etc/security/opasswd file. As nullok is not specified, a user cannot change his or
her password if the existing password is null.
The omission of the try_first_pass keyword means that the user is always asked for their existing
password, even if he or she entered it for the same module or for a previous module in the stack.
For more information, see Section 24.7, About Pluggable Authentication Modules and the pam_deny(8),
pam_pwquality(8), and pam_unix(8) manual pages.
An alternate way of defining password requirements is available by selecting the Password Options tab in
the Authentication Configuration GUI (system-config-authentication).
Figure 26.2 shows the Authentication Configuration GUI with the Password Options tab selected.
Figure 26.2 Password Options
You can specify the minimum password length, minimum number of required character classes, which
character classes are required, and the maximum number of consecutive characters and consecutive
characters from the same class that are permitted.
This chapter describes how to configure OpenSSH to support secure communication between networked
systems.
The OpenSSH suite of tools includes:

sftp          Secure file transfer program
ssh           Secure shell client for logging in to, and running commands on, a remote system
sshd          Daemon that handles incoming SSH connections
ssh-keygen    Utility for creating and managing SSH authentication keys
Unlike utilities such as rcp, ftp, telnet, rsh, and rlogin, the OpenSSH tools encrypt all network
packets between the client and server, including password authentication.
OpenSSH supports SSH protocol version 1 (SSH1) and version 2 (SSH2). In addition, OpenSSH provides
a secure way of using graphical applications over a network by using X11 forwarding. It also provides a
way to secure otherwise insecure TCP/IP protocols by using port forwarding.
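Port forwarding can be configured per host in a client's ~/.ssh/config file as well as on the command line. The following sketch uses real ssh_config keywords, but the host and port names are illustrative only:

```
# Hypothetical entry: when connecting to "gateway", forward local port 8080
# to port 80 on a host "intranet" that is reachable from gateway, and
# enable X11 forwarding for graphical applications.
Host gateway
    LocalForward 8080 intranet:80
    ForwardX11 yes
```

After connecting with ssh gateway, a browser pointed at http://localhost:8080 on the client reaches port 80 on intranet through the encrypted tunnel.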
ssh_config                Contains default OpenSSH client configuration settings.
ssh_host_dsa_key          Contains the DSA private host key for SSH2.
ssh_host_dsa_key.pub      Contains the DSA public host key for SSH2.
ssh_host_key              Contains the RSA private host key for SSH1.
ssh_host_key.pub          Contains the RSA public host key for SSH1.
ssh_host_rsa_key          Contains the RSA private host key for SSH2.
ssh_host_rsa_key.pub      Contains the RSA public host key for SSH2.
sshd_config               Contains default OpenSSH daemon (sshd) configuration settings.
Other files can be configured in this directory. For details, see the sshd(8) manual page.
For more information, see the ssh_config(5), sshd(8), and sshd_config(5) manual pages.
Contains a user's SSH2 RSA private and public keys. SSH2 RSA is the
most commonly used key-pair type.
Caution
The private key file can be readable and writable by the user but must not be
accessible to other users.
The optional config file contains client configuration settings.
Caution
A config file can be readable and writable by the user but must not be accessible
to other users.
For more information, see the ssh(1) and ssh-keygen(1) manual pages.
authorized_keys
Contains your authorized public keys. The server uses the signed public key in this
file to authenticate a client.
config
Contains client configuration settings. This file is optional.
environment
Contains definitions of environment variables. This file is optional.
rc
Contains commands that ssh executes when a user logs in, before the user's shell
or command runs. This file is optional.
For more information, see the ssh(1) and ssh_config(5) manual pages.
2. Start the sshd service and configure it to start following a system reboot:
# systemctl start sshd
# systemctl enable sshd
You can set sshd configuration options for features such as Kerberos authentication, X11 forwarding, and
port forwarding in the /etc/ssh/sshd_config file.
For more information, see the sshd(8) and sshd_config(5) manual pages.
When you enter yes to accept the connection to the server, the client adds the server's public host key to
your ~/.ssh/known_hosts file. When you next connect to the remote server, the client compares the
key in this file to the one that the server supplies. If the keys do not match, you see a warning such as the
following:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@       WARNING: POSSIBLE DNS SPOOFING DETECTED!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
The RSA host key for host has changed,
and the key for the according IP address IP_address
is unchanged. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
Offending key for IP in /home/user/.ssh/known_hosts:10
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is fingerprint
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending key in /home/user/.ssh/known_hosts:53
RSA host key for host has changed and you have requested strict checking.
Host key verification failed.
Unless there is a reason for the remote server's host key to have changed, such as an upgrade of either
the SSH software or the server, you should not try to connect to that machine until you have contacted its
administrator about the situation.
host is the name of the remote OpenSSH server to which you want to connect.
For example, to log in to host04 with the same user name as on the local system, enter:
$ ssh host04
The remote system prompts you for your password on that system.
To connect as a different user, specify the user name and @ symbol before the remote host name, for
example:
$ ssh joe@host04
To execute a command on the remote system, specify the command as an argument, for example:
$ ssh joe@host04 ls ~/.ssh
ssh logs you in, executes the command, and then closes the connection.
For more information, see the ssh(1) manual page.
Copy testfile to your home directory on host04, changing its name to new_testfile:
$ scp testfile host04:new_testfile
The -r option allows you to recursively copy the contents of directories. For example, copy the directory
remdir and its contents from your home directory on remote host04 to your local home directory:
$ scp -r host04:~/remdir ~
The sftp command is a secure alternative to ftp for file transfer between systems. Unlike scp, sftp
allows you to browse the file system on the remote server before you copy any files.
To open an FTP connection to a remote system over SSH:
$ sftp [options] [user@]host
For example:
$ sftp host04
Connecting to host04...
guest@host04's password: password
sftp>
Enter sftp commands at the sftp> prompt. For example, use put to upload the file newfile from the
local system to the remote system and ls to list it:
sftp> put newfile
Uploading newfile to /home/guest/newfile
newfile                  100% 1198     1.2KB/s   00:01
sftp> ls newfile
newfile
Enter help or ? to display a list of available commands. Enter bye, exit, or quit to close the connection
and exit sftp.
For more information, see the ssh(1) and sftp(1) manual pages.
To generate an SSH1 RSA or SSH2 DSA key pair, specify the -t rsa1 or -t dsa options.
For security, in case an attacker gains access to your private key, you can specify a passphrase to
encrypt your private key. If you encrypt your private key, you must enter this passphrase each time that
you use the key. If you do not specify a passphrase, you are not prompted.
ssh-keygen generates a private key file and a public key file in ~/.ssh (unless you specify an alternate
directory for the private key file):
$ ls -l ~/.ssh
total 8
-rw-------. 1 guest guest 1743 Apr 13 12:07 id_rsa
-rw-r--r--. 1 guest guest 397 Apr 13 12:07 id_rsa.pub
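You can also exercise ssh-keygen non-interactively in a scratch directory. The -N '' option (an empty passphrase) is used here purely so that the example runs without prompting, and is not recommended for a real login key:

```shell
# Generate an SSH2 RSA key pair with no passphrase into a temporary
# directory (-q suppresses the usual informational output).
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmpdir/id_rsa"
# The private key (id_rsa) and public key (id_rsa.pub) now exist:
ls "$tmpdir"
rm -rf "$tmpdir"
```

The -f option plays the same role as accepting or changing the default file name at the interactive prompt.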
...
Press Enter each time that the command prompts you to enter a passphrase.
2. Use the ssh-copy-id script to append the public key in the local ~/.ssh/id_rsa.pub file to the
~/.ssh/authorized_keys file on the remote system, for example:
$ ssh-copy-id remote_user@host
remote_user@host's password: remote_password
Now try logging into the machine, with "ssh 'remote_user@host'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
3. Verify that the permissions on the remote ~/.ssh directory and ~/.ssh/authorized_keys file allow
access only by you:
$ ssh remote_user@host ls -al .ssh
total 4
drwx------+ 2 remote_user group   5 Jun 12 08:33 .
drwxr-xr-x+ 3 remote_user group   9 Jun 12 08:32 ..
-rw-------+ 1 remote_user group 397 Jun 12 08:33 authorized_keys
$ ssh remote_user@host getfacl .ssh
# file: .ssh
# owner: remote_user
# group: group
user::rwx
group::---
mask::rwx
other::---
$ ssh remote_user@host getfacl .ssh/authorized_keys
# file: .ssh/authorized_keys
# owner: remote_user
# group: group
user::rw-
group::---
mask::rwx
other::---
Note
If your user names are the same on the client and the server systems, you do
not need to specify your remote user name and the @ symbol.
4. If your user names are different on the client and the server systems, create a ~/.ssh/config file
with permissions 600 on the remote system that defines your local user name, for example:
$ ssh remote_user@host echo -e "Host *\\\nUser local_user" '>>' .ssh/config
$ ssh remote_user@host cat .ssh/config
Host *
User local_user
$ ssh remote_user@host 'umask 077; /sbin/restorecon .ssh/config'
You should now be able to access the remote system without needing to specify your remote user
name, for example:
$ ssh host ls -l .ssh/config
-rw-------+ 1 remote_user group 37 Jun 12 08:34 .ssh/config
$ ssh host getfacl .ssh/config
# file: .ssh/config
# owner: remote_user
# group: group
user::rw-
group::---
mask::rwx
other::---
For more information, see the ssh-copy-id(1), ssh-keygen(1), and ssh_config(5) manual pages.
Part V Containers
This section contains the following chapters:
Chapter 28, Linux Containers describes how to use Linux Containers (LXC) to isolate applications and entire
operating system images from the other processes that are running on a host system.
Note
Information on using the Docker engine to manage containers and images under Oracle Linux
is provided in the Oracle Linux Docker User's Guide available at http://docs.oracle.com/cd/E52668_01/E75728/html/.
Table of Contents
28 Linux Containers ........................................................................................................................
28.1 About Linux Containers ...................................................................................................
28.1.1 Supported Oracle Linux Container Versions ..........................................................
28.2 Configuring Operating System Containers ........................................................................
28.2.1 Installing and Configuring the Software .................................................................
28.2.2 Setting up the File System for the Containers ........................................................
28.2.3 Creating and Starting a Container .........................................................................
28.2.4 About the lxc-oracle Template Script .....................................................................
28.2.5 About Veth and Macvlan ......................................................................................
28.2.6 Modifying a Container to Use Macvlan ..................................................................
28.3 Logging in to Containers .................................................................................................
28.4 Creating Additional Containers ........................................................................................
28.5 Monitoring and Shutting Down Containers ........................................................................
28.6 Starting a Command Inside a Running Container .............................................................
28.7 Controlling Container Resources ......................................................................................
28.8 Configuring ulimit Settings for an Oracle Linux Container ..................................................
28.9 Configuring Kernel Parameter Settings for Oracle Linux Containers ...................................
28.10 Deleting Containers .......................................................................................................
28.11 Running Application Containers .....................................................................................
28.12 For More Information About Linux Containers .................................................................
This chapter describes how to use Linux Containers (LXC) to isolate applications and entire operating
system images from the other processes that are running on a host system. The version of LXC described
here is 1.0.7 or later, which has some significant enhancements over previous versions.
For information about how to use the Docker Engine to create application containers, see the Oracle Linux
Docker User's Guide.
Running Oracle Linux 5, Oracle Linux 6, and Oracle Linux 7 containers in parallel. You can run an
Oracle Linux 5 container on an Oracle Linux 7 system with the UEK R3 or UEK R4 kernel, even though
UEK R3 and UEK R4 are not supported for Oracle Linux 5. You can also run an i386 container on an
x86_64 kernel. For more information, see Section 28.1.1, Supported Oracle Linux Container Versions.
Running applications that are supported only by Oracle Linux 5 in an Oracle Linux 5 container on an
Oracle Linux 7 host. However, incompatibilities might exist in the modules and drivers that are available.
Running many copies of application configurations on the same system. An example configuration would
be a LAMP stack, which combines Linux, Apache HTTP server, MySQL, and Perl, PHP, or Python
scripts to provide specialised web services.
Creating sandbox environments for development and testing.
Providing user environments whose resources can be tightly controlled, but which do not require the
hardware resources of full virtualization solutions.
Creating containers where each container appears to have its own IP address. For example, you can
use the lxc-sshd template script to create isolated environments for untrusted users. Each container
runs an sshd daemon to handle logins. By bridging a container's Virtual Ethernet interface to the host's
network interface, each container can appear to have its own IP address on a LAN.
When you use the lxc-start command to start a system container, by default the copy of /sbin/
init (for an Oracle Linux 6 or earlier container) or /usr/lib/systemd/systemd (for an Oracle Linux
7 container) in the container is started to spawn other processes in the container's process space. Any
system calls or device access are handled by the kernel running on the host. If you need to run different
kernel versions or different operating systems from the host, use a full virtualization solution such as Oracle
VM or Oracle VM VirtualBox instead of Linux Containers.
There are a number of configuration steps that you need to perform on the file system image for a
container so that it can run correctly:
Disable any init or systemd scripts that load modules to access hardware directly.
Disable udev and instead create static device nodes in /dev for any hardware that needs to be
accessible from within the container.
Configure the network interface so that it is bridged to the network interface of the host system.
LXC provides a number of template scripts in /usr/share/lxc/templates that perform much of the
required configuration of system containers for you. However, it is likely that you will need to modify the
script to allow the container to work correctly as the scripts cannot anticipate the idiosyncrasies of your
system's configuration. You use the lxc-create command to create a system container by invoking a
template script. For example, the lxc-busybox template script creates a lightweight BusyBox system
container.
The example system container in this chapter uses the template script for Oracle Linux (lxc-oracle).
The container is created on a btrfs file system (/container) to take advantage of its snapshot feature.
A btrfs file system allows you to create a subvolume that contains the root file system (rootfs) of a
container, and to quickly create new containers by cloning this subvolume.
You can use control groups to limit the system resources that are available to applications such as web
servers or databases that are running in the container.
Application containers are not created by using template scripts. Instead, an application container mounts
all or part of the host's root file system to provide access to the binaries and libraries that the application
requires. You use the lxc-execute command to invoke /usr/sbin/init.lxc (a cut-down version of
/sbin/init) in the container. init.lxc mounts any required directories such as /proc, /dev/shm,
and /dev/mqueue, executes the specified application program, and then waits for it to finish executing.
When the application exits, the container instance ceases to exist.
This command installs all of the required packages, such as libvirt and lxc-libs. The LXC
template scripts are installed in /usr/share/lxc/templates. LXC uses the virtualization
management service to support network bridging for containers. LXC uses wget to download packages
from Oracle Public Yum.
3. Start the virtualization management service, libvirtd, and configure the service to start at boot time.
[root@host ~]# systemctl start libvirtd
[root@host ~]# systemctl enable libvirtd
LXC uses the virtualization management service to support network bridging for containers.
4. If you are going to compile applications that require the LXC header files and libraries, install the lxc-devel package.
[root@host ~]# yum install lxc-devel
device    /container    btrfs    defaults    0 0
For more information, see Section 21.2, About the Btrfs File System.
Note
For LXC version 1.0 and later, you must specify the -B btrfs option if you
want to use the snapshot features of btrfs. For more information, see the
lxc-create(1) manual page.
The lxc-create command runs the template script lxc-oracle to create the container in
/container/ol6ctr1 with the btrfs subvolume /container/ol6ctr1/rootfs as its root file
system. The command then uses yum to install the latest available update of Oracle Linux 6 from
Oracle Public Yum. It also writes the container's configuration settings to the file
/container/ol6ctr1/config and its fstab file to /container/ol6ctr1/fstab. The default log file for the
container is /container/ol6ctr1/ol6ctr1.log.
You can specify the following template options after the -- option to lxc-create:
-a | --arch=i386|x86_64
Specifies the architecture of the packages to install: i386 or
x86_64. The default is the architecture of the host.
--baseurl=pkg_repo
Specify the file URI of a package repository. You must also use the
--arch and --release options to specify the architecture and the
release, for example:
# mount -o loop OracleLinux-R7-GA-Everything-x86_64-dvd.iso /mnt
# lxc-create -n ol70beta -B btrfs -t oracle -- -R 7.0 -a x86_64 \
--baseurl=file:///mnt/Server
-P | --patch=path
Patches the rootfs of the container at the specified path.
--privileged[=rt]
Allows the container to modify certain kernel parameters (see
Section 28.9, Configuring Kernel Parameter Settings for Oracle Linux
Containers). Specifying the rt argument additionally allows the
container to set real-time scheduling parameters.
-R | --release=major.minor
Specifies the major release number and minor update number of the
Oracle release to install. The value of major can be set to 4, 5, 6,
or 7. If you specify latest for minor, the latest available release
packages for the major release are installed. If the host is running
Oracle Linux, the default release is the same as the release installed
on the host. Otherwise, the default release is the latest update of
Oracle Linux 6.
-r | --rpms=rpm_name
Installs the specified RPM in the container.
-t | --templatefs=rootfs
Specifies the path to a root file system from which to copy the
container's rootfs.
-u | --url=repo_URL
Specifies the URL of a yum repository from which to install the
release packages.
2. If you want to create additional copies of the container in its initial state, create a snapshot of the
container's root file system, for example:
# btrfs subvolume snapshot /container/ol6ctr1/rootfs /container/ol6ctr1/rootfs_snap
See Section 21.2, About the Btrfs File System and Section 28.4, Creating Additional Containers.
3. Start the container ol6ctr1 as a daemon that writes its diagnostic output to a log file other than the
default log file.
[root@host ~]# lxc-start -n ol6ctr1 -d -o /container/ol6ctr1_debug.log -l DEBUG
Note
If you omit the -d option, the container's console opens in the current shell.
The following logging levels are available: FATAL, CRIT, WARN, ERROR,
NOTICE, INFO, and DEBUG. You can set a logging level for all lxc-*
commands.
If you run the ps -ef --forest command on the host system and the process tree below the
lxc-start process shows that the /usr/sbin/sshd and /sbin/mingetty processes have
started in the container, you can log in to the container from the host. See Section 28.3, Logging in to
Containers.
By default, the template script installs the following packages in the container: chkconfig,
dhclient, initscripts, openssh-server, oraclelinux-release, passwd, policycoreutils,
rootfiles, rsyslog, vim-minimal, and yum.
The template script edits the system configuration files under rootfs to set up networking in the container
and to disable unnecessary services including volume management (LVM), device management (udev),
the hardware clock, readahead, and the Plymouth boot system.
If you want to allow network connections from outside the host to be able to connect to the container,
the container needs to have an IP address on the same network as the host. One way to achieve this
configuration is to use a macvlan bridge to create an independent logical network for the container. This
network is effectively an extension of the local network that is connected to the host's network interface.
External systems can access the container as though it were an independent system on the network, and
the container has network access to other containers that are configured on the bridge and to external
systems. The container can also obtain its IP address from an external DHCP server on your local network.
However, unlike a veth bridge, the host system does not have network access to the container.
Figure 28.2 illustrates a host system with two containers that are connected via a macvlan bridge.
Figure 28.2 Network Configuration of Containers Using a Macvlan Bridge
If you do not want containers to be able to see each other on the network, you can configure the Virtual
Ethernet Port Aggregator (VEPA) mode of macvlan. Figure 28.3 illustrates a host system with two
containers that are separately connected to a network by a macvlan VEPA. In effect, each container is
connected directly to the network, but neither container can access the other container nor the host via the
network.
Figure 28.3 Network Configuration of Containers Using a Macvlan VEPA
For information about configuring macvlan, see Section 28.2.6, Modifying a Container to Use Macvlan
and the lxc.conf(5) manual page.
In these sample configurations, the setting for lxc.network.link assumes that you want the container's
network interface to be visible on the network that is accessible via the host's eth0 interface.
to read:
BOOTPROTO=none
If you do not specify a tty number, you log in to the first available terminal.
For example, log in to a terminal on ol6ctr1:
[root@host ~]# lxc-console -n ol6ctr1
Alternatively, you can use the lxc-create command to create a container by copying the root file system
from an existing system, container, or Oracle VM template. Specify the path of the root file system as the
argument to the --templatefs template option:
[root@host ~]# lxc-create -n ol6ctr3 -B btrfs -t oracle -- --templatefs=/container/ol6ctr1/rootfs_snap
This example copies the new container's rootfs from a snapshot of the rootfs that belongs to container
ol6ctr1. The additional container is created in /container/ol6ctr3 and a new rootfs snapshot is
created in /container/ol6ctr3/rootfs.
Note
For LXC version 1.0 and later, you must specify the -B btrfs option if you want to
use the snapshot features of btrfs. For more information, see the lxc-create(1)
manual page.
To change the host name of the container, edit the HOSTNAME settings
in /container/name/rootfs/etc/sysconfig/network and
/container/name/rootfs/etc/sysconfig/network-scripts/ifcfg-iface,
where iface is the name of the network interface, such as eth0.
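An edit like this can also be scripted with sed. The following sketch operates on a scratch copy rather than a real container's file; the file contents and host names are illustrative only:

```shell
# Rewrite the HOSTNAME= line in a throwaway copy of a network file.
# On a real system the file would be /container/name/rootfs/etc/sysconfig/network.
tmpfile=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=ol6ctr1.localdomain\n' > "$tmpfile"
sed -i 's/^HOSTNAME=.*/HOSTNAME=ol6ctr2.localdomain/' "$tmpfile"
grep '^HOSTNAME' "$tmpfile"
rm -f "$tmpfile"
```

The same sed expression, pointed at both files named above, renames a cloned container in one step.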
To display the containers that are running on the host system, specify the --active option.
[root@host ~]# lxc-ls --active
ol6ctr1
To display the state of a container, use the lxc-info command on the host.
[root@host ~]# lxc-info -n ol6ctr1
Name:           ol6ctr1
State:          RUNNING
PID:            5662
IP:             192.168.122.188
CPU use:        1.63 seconds
BlkIO use:      18.95 MiB
Memory use:     11.53 MiB
KMem use:       0 bytes
Link:           vethJHU5OA
 TX bytes:      1.42 KiB
 RX bytes:      6.29 KiB
 Total bytes:   7.71 KiB
A container can be in one of the following states: ABORTING, RUNNING, STARTING, STOPPED, or
STOPPING. Although lxc-info might show your container to be in the RUNNING state, you cannot log in
to it unless the /usr/sbin/sshd or /sbin/mingetty processes have started running in the container.
You must allow time for the init or systemd process in the container to first start networking and the
various other services that you have configured.
To view the state of the processes in the container from the host, either run ps -ef --forest and
look for the process tree below the lxc-start process or use the lxc-attach command to run the ps
command in the container.
[root@host ~]# ps -ef --forest
UID        PID  PPID  C STIME TTY          TIME CMD
...
root      3171     1  0 09:57 ?        00:00:00 lxc-start -n ol6ctr1 -d
root      3182  3171  0 09:57 ?        00:00:00  \_ /sbin/init
root      3441  3182  0 09:57 ?        00:00:00      \_ /sbin/dhclient ...
root      3464  3182  0 09:57 ?        00:00:00      \_ /sbin/rsyslogd ...
root      3493  3182  0 09:57 ?        00:00:00      \_ /usr/sbin/sshd
root      3500  3182  0 09:57 pts/5    00:00:00      \_ /sbin/mingetty ... /dev/console
root      3504  3182  0 09:57 pts/1    00:00:00      \_ /sbin/mingetty ... /dev/tty1
root      3506  3182  0 09:57 pts/2    00:00:00      \_ /sbin/mingetty ... /dev/tty2
root      3508  3182  0 09:57 pts/3    00:00:00      \_ /sbin/mingetty ... /dev/tty3
root      3510  3182  0 09:57 pts/4    00:00:00      \_ /sbin/mingetty ... /dev/tty4
...
[root@host ~]# lxc-attach -n ol6ctr1 -- /bin/ps aux
USER       PID  ...    TIME COMMAND
root         1  ...    0:00 /sbin/init
root       ...  ...    0:00 /sbin/dhclient
root       ...  ...    0:00 /sbin/rsyslogd
root       ...  ...    0:00 /usr/sbin/sshd
root       ...  ...    0:00 /sbin/mingetty
root       ...  ...    0:00 /sbin/mingetty
root       ...  ...    0:00 /sbin/mingetty
root       ...  ...    0:00 /sbin/mingetty
root       ...  ...    0:00 /sbin/mingetty
root       ...  ...    0:00 /bin/ps aux
Tip
If a container appears not to be starting correctly, examining its process tree from
the host will often reveal where the problem might lie.
If you were logged into the container, the output from the ps -ef command would look similar to the
following.
[root@ol6ctr1 ~]# ps -ef
UID        PID  PPID  C STIME TTY              TIME CMD
root         1     0  0 11:54 ?            00:00:00 /sbin/init
root       193     1  0 11:54 ?            00:00:00 /sbin/dhclient -H ol6ctr1 ...
root       216     1  0 11:54 ?            00:00:00 /sbin/rsyslogd -i ...
root       258     1  0 11:54 ?            00:00:00 /usr/sbin/sshd
root       265     1  0 11:54 lxc/console  00:00:00 /sbin/mingetty ... /dev/console
root       271     1  0 11:54 lxc/tty2     00:00:00 /sbin/mingetty ... /dev/tty2
root       273     1  0 11:54 lxc/tty3     00:00:00 /sbin/mingetty ... /dev/tty3
root       275     1  0 11:54 lxc/tty4     00:00:00 /sbin/mingetty ... /dev/tty4
root       297     1  0 11:57 ?            00:00:00 login -- root
root       301   297  0 12:08 lxc/tty1     00:00:00 -bash
root       312   301  0 12:08 lxc/tty1     00:00:00 ps -ef
Note that the process numbers differ from those of the same processes on the host, and that they all
descend from process 1, /sbin/init, in the container.
To suspend or resume the execution of a container, use the lxc-freeze and lxc-unfreeze commands
on the host.
[root@host ~]# lxc-freeze -n ol6ctr1
[root@host ~]# lxc-unfreeze -n ol6ctr1
From the host, you can use the lxc-stop command with the --nokill option to shut down the container
in an orderly manner.
[root@host ~]# lxc-stop --nokill -n ol6ctr1
Alternatively, you can run a command such as halt while logged in to the container.
[root@ol6ctr1 ~]# halt
Broadcast message from root@ol6ctr1 (/dev/tty2) at 22:52 ...
...
When the container finishes shutting down, you are returned to the shell prompt on the host.
To shut down a container by terminating its processes immediately, use lxc-stop with the -k option.
[root@host ~]# lxc-stop -k -n ol6ctr1
This is the quickest method when you are debugging a container, as you would usually destroy it and create a new version after modifying the template script.
To monitor the state of a container, use the lxc-monitor command.
[root@host ~]# lxc-monitor -n ol6ctr1
'ol6ctr1' changed state to [STARTING]
'ol6ctr1' changed state to [RUNNING]
'ol6ctr1' changed state to [STOPPING]
'ol6ctr1' changed state to [STOPPED]
To wait for a container to change to a specified state, use the lxc-wait command.
lxc-wait -n $CTR -s ABORTING && lxc-wait -n $CTR -s STOPPED && \
echo "Container $CTR terminated with an error."
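lxc-wait blocks until the container reaches one of the named states. If you want a timeout instead of waiting indefinitely, you can poll the state yourself. The following standalone sketch shows the polling pattern; check_state is a stand-in stubbed out here so that the script runs anywhere (a real version might parse the State: field from lxc-info output).

```shell
#!/bin/sh
# Poll until a container reaches the desired state or a timeout expires.
# check_state is a placeholder; in practice it might run:
#   lxc-info -n "$1" | awk '/^State:/ {print $2}'
check_state() {
    echo "STOPPED"    # stubbed so that the sketch is self-contained
}

wait_for_state() {
    ctr=$1 want=$2 timeout=$3
    while [ "$timeout" -gt 0 ]; do
        [ "$(check_state "$ctr")" = "$want" ] && return 0
        sleep 1
        timeout=$((timeout - 1))
    done
    return 1
}

wait_for_state ol6ctr1 STOPPED 30 && echo "ol6ctr1 reached STOPPED"
```

The function returns a non-zero status on timeout, so it composes with `&&` and `||` in the same way as the lxc-wait example above.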
To restrict a container to cores 0 and 1, you would enter a command such as the following:
[root@host ~]# lxc-cgroup -n ol6ctr1 cpuset.cpus 0,1
To change a container's share of CPU time and block I/O access, you would enter:
[root@host ~]# lxc-cgroup -n ol6ctr2 cpu.shares 256
[root@host ~]# lxc-cgroup -n ol6ctr2 blkio.weight 500
To limit a container to 256 MB of memory when the system detects memory contention or low memory, and otherwise to enforce a hard limit of 512 MB, you would enter:
[root@host ~]# lxc-cgroup -n ol6ctr2 memory.soft_limit_in_bytes 268435456
[root@host ~]# lxc-cgroup -n ol6ctr2 memory.limit_in_bytes 536870912
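The memory cgroup parameters take byte counts, which makes a dropped digit easy to miss. A quick shell check of the mebibyte arithmetic (no lxc commands involved; the variable names are illustrative):

```shell
#!/bin/sh
# memory.soft_limit_in_bytes and memory.limit_in_bytes take byte counts;
# computing them from MiB keeps the values easy to audit.
soft_mib=256
hard_mib=512
echo "soft_limit_in_bytes = $((soft_mib * 1024 * 1024))"   # 268435456
echo "limit_in_bytes      = $((hard_mib * 1024 * 1024))"   # 536870912
```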
To make the changes to a container's configuration permanent, add the settings to the file /container/name/config, for example:
# Permanently tweaked resource settings
lxc.cgroup.cpu.shares=256
lxc.cgroup.blkio.weight=500
For more information about the resources that can be controlled, see http://www.kernel.org/doc/Documentation/cgroups/.
<type>    <item>      <value>
soft      memlock     1048576
hard      memlock     2097152
soft      nofile      5120
hard      nofile      10240
A process can use the ulimit built-in shell command or the setrlimit() system call to raise the
current limit for a shell above the soft limit. However, the new value cannot exceed the hard limit unless the
process is owned by root.
You can use ulimit to set or display the current soft and hard values on the host or from inside the
container, for example:
[root@host ~]# echo "host: nofile = $(ulimit -S -n)"
host: nofile = ...
[root@host ~]# echo "host: nofile = $(ulimit -H -n)"
host: nofile = ...
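To see the soft/hard distinction in action, you can lower a soft limit in a subshell; the change is confined to that subshell and its children, and the parent shell is unaffected. The value 512 here is an arbitrary example below the usual defaults.

```shell
#!/bin/sh
# A subshell may lower its own soft nofile limit (to 512 here, an arbitrary
# illustrative value); the parent shell's limit is unchanged.
( ulimit -S -n 512; echo "subshell soft nofile: $(ulimit -S -n)" )
echo "parent soft nofile: $(ulimit -S -n)"
```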
Note
Log out and log in again or, if possible, reboot the host before starting the container
in a shell that uses the new soft and hard values for ulimit.
/proc/sys/net/ipv4/tcp_syncookies
With UEK R3 QU6 and later, these parameters are read-only within the container to allow Oracle Database and other applications to be installed. You can change the values of these parameters only from the host. Any changes that you make to host-only parameters apply to all containers on the host.
While the container is active, you can monitor it by running commands such as lxc-ls --active and
lxc-info -n guest from another window.
[root@host ~]# lxc-ls --active
guest
[root@host ~]# lxc-info -n guest
Name:           guest
State:          RUNNING
PID:            11220
CPU use:        0.02 seconds
BlkIO use:      0 bytes
Memory use:     544.00 KiB
KMem use:       0 bytes
If you need to customize an application container, you can use a configuration file. For example, you might
want to change the container's network configuration or the system directories that it mounts.
The following example shows settings from a sample configuration file, in which most of the host's root file system is not shared with the container, apart from mount entries that make init.lxc and certain library and binary directories available.
lxc.utsname = guest
lxc.tty = 1
lxc.pts = 1
lxc.rootfs = /tmp/guest/rootfs
lxc.mount.entry=/usr/lib usr/lib none ro,bind 0 0
lxc.mount.entry=/usr/lib64 usr/lib64 none ro,bind 0 0
lxc.mount.entry=/usr/bin usr/bin none ro,bind 0 0
lxc.mount.entry=/usr/sbin usr/sbin none ro,bind 0 0
lxc.cgroup.cpuset.cpus=1
The mount entry for /usr/sbin is required so that the container can access /usr/sbin/init.lxc on
the host system.
In practice, you should limit the host system directories that an application container mounts to only those
directories that the container needs to run the application.
Note
To avoid potential conflict with system containers, do not use the /container
directory for application containers.
You must also configure the required directories and symbolic links under the rootfs directory:
[root@host ~]# TMPDIR=/tmp/guest/rootfs
[root@host ~]# mkdir -p $TMPDIR/usr/lib $TMPDIR/usr/lib64 \
$TMPDIR/usr/bin $TMPDIR/usr/sbin \
$TMPDIR/dev/pts $TMPDIR/dev/shm $TMPDIR/proc
[root@host ~]# ln -s $TMPDIR/usr/lib $TMPDIR/lib
[root@host ~]# ln -s $TMPDIR/usr/lib64 $TMPDIR/lib64
[root@host ~]# ln -s $TMPDIR/usr/bin $TMPDIR/bin
[root@host ~]# ln -s $TMPDIR/usr/sbin $TMPDIR/sbin
In this example, the directories include /dev/pts, /dev/shm, and /proc in addition to the mount point
entries defined in the configuration file.
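The same skeleton can be scripted. This self-contained sketch builds the layout under a scratch directory (substitute /tmp/guest/rootfs to match the text) and creates the top-level links with targets relative to the rootfs, which matches the bin -> usr/bin style of link that ls -l / displays inside the container:

```shell
#!/bin/sh
# Build the minimal rootfs skeleton for an application container.
ROOTFS=$(mktemp -d)/rootfs           # scratch location for illustration
mkdir -p "$ROOTFS/usr/lib" "$ROOTFS/usr/lib64" \
         "$ROOTFS/usr/bin" "$ROOTFS/usr/sbin" \
         "$ROOTFS/dev/pts" "$ROOTFS/dev/shm" "$ROOTFS/proc"
# Relative targets keep the links valid after the container pivots
# into the rootfs.
ln -s usr/lib   "$ROOTFS/lib"
ln -s usr/lib64 "$ROOTFS/lib64"
ln -s usr/bin   "$ROOTFS/bin"
ln -s usr/sbin  "$ROOTFS/sbin"
readlink "$ROOTFS/bin"               # -> usr/bin
```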
You can then use the -f option to specify the configuration file (config) to lxc-execute:
[root@host ~]# lxc-execute -n guest -f config /usr/bin/bash
bash-4.2# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
0            1     0  0 14:17 ?        00:00:00 /usr/sbin/init.lxc -- /usr/bin/bash
0            4     1  0 14:17 ?        00:00:00 /usr/bin/bash
0            5     4  0 14:17 ?        00:00:00 ps -ef
bash-4.2# mount
/dev/sda3 on / type btrfs (rw,relatime,seclabel,space_cache)
/dev/sda3 on /usr/lib type btrfs (ro,relatime,seclabel,space_cache)
/dev/sda3 on /usr/lib64 type btrfs (ro,relatime,seclabel,space_cache)
/dev/sda3 on /usr/bin type btrfs (ro,relatime,seclabel,space_cache)
/dev/sda3 on /usr/sbin type btrfs (ro,relatime,seclabel,space_cache)
devpts on /dev/pts type devpts (rw,relatime,seclabel,gid=5,mode=620,ptmxmode=666)
proc on /proc type proc (rw,relatime)
shmfs on /dev/shm type tmpfs (rw,relatime,seclabel)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
bash-4.2# ls -l /
total 16
lrwxrwxrwx.   1 0 0  7 May 21 14:03 bin -> usr/bin
drwxr-xr-x.   1 0 0 52 May 21 14:27 dev
lrwxrwxrwx.   1 0 0  7 May 21 14:03 lib -> usr/lib
lrwxrwxrwx.   1 0 0  9 May 21 14:27 lib64 -> usr/lib64
dr-xr-xr-x. 230 0 0  0 May 21 14:27 proc
lrwxrwxrwx.   1 0 0  8 May 21 14:03 sbin -> usr/sbin
drwxr-xr-x.   1 0 0 30 May 21 12:58 usr
bash-4.2# touch /bin/foo
touch: cannot touch '/bin/foo': Read-only file system
bash-4.2# echo $?
1
In this example, running the ps command reveals that bash runs as a child of init.lxc. mount shows
the individual directories that the container mounts read-only, such as /usr/lib, and ls -l / displays
the symbolic links that you set up in rootfs.
Attempting to write to the read-only /bin file system results in an error. If you were to run the same lxc-execute command without specifying the configuration file, the entire root file system of the host would be available to the container in read/write mode.
As for system containers, you can set cgroup entries in the configuration file and use the lxc-cgroup
command to control the system resources to which an application container has access.
Note
lxc-execute is intended to run application containers that share the host's root
file system, and not to run system containers that you create using lxc-create.
Use lxc-start to run system containers.
For more information, see the lxc-execute(1) and lxc.conf(5) manual pages.