Red Hat Enterprise Linux 7 Beta Performance Tuning Guide
Performance Tuning Guide
Laura Bailey
Legal Notice
Copyright 2014-2015 Red Hat, Inc. and others.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0
Unported License. If you distribute this document, or a modified version of it, you must provide
attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red
Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity
Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other
countries.
Linux is the registered trademark of Linus Torvalds in the United States and other countries.
Java is a registered trademark of Oracle and/or its affiliates.
XFS is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United
States and/or other countries.
MySQL is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js is an official trademark of Joyent. Red Hat Software Collections is not formally
related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service
marks or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
Abstract
The Red Hat Enterprise Linux 7 Performance Tuning Guide explains how to optimize Red Hat
Enterprise Linux 7 performance. It also documents performance-related upgrades in Red Hat
Enterprise Linux 7. The Performance Tuning Guide presents only field-tested and proven
procedures. Nonetheless, all prospective configurations should be set up and tested in a
testing environment before being applied to a production system. Backing up all data and
configuration settings prior to tuning is also recommended. Note: this document is under active
development; is subject to substantial change; and is provided only as a preview. The included
information and instructions should not be considered complete and should be used with
caution.
Table of Contents
Chapter 1. Performance Features in Red Hat Enterprise Linux 7
    1.1. New in 7.1
    1.2. New in 7.0
Chapter 2. Performance Monitoring Tools
    2.1. /proc
    2.2. GNOME System Monitor
    2.3. Performance Co-Pilot (PCP)
    2.4. Tuna
    2.5. Built in command line tools
    2.6. tuned and tuned-adm
    2.7. perf
    2.8. turbostat
    2.9. iostat
    2.10. irqbalance
    2.11. ss
    2.12. numastat
    2.13. numad
    2.14. SystemTap
    2.15. OProfile
    2.16. Valgrind
Chapter 3. CPU
    3.1. Considerations
    3.2. Monitoring and diagnosing performance problems
    3.3. Configuration suggestions
Chapter 4. Memory
    4.1. Considerations
    4.2. Monitoring and diagnosing performance problems
    4.3. Configuration tools
Chapter 5. Storage and File Systems
    5.1. Considerations
    5.2. Monitoring and diagnosing performance problems
Chapter 6. Networking
    6.1. Considerations
    6.2. Monitoring and diagnosing performance problems
    6.3. Configuration tools
Appendix A. Tool Reference
    A.1. irqbalance
    A.2. Tuna
    A.3. ethtool
    A.4. ss
    A.5. tuned
    A.6. tuned-adm
    A.7. perf
    A.8. Performance Co-Pilot (PCP)
    A.9. vmstat
    A.10. x86_energy_perf_policy
    A.11. turbostat
    A.12. numastat
    A.14. numad
    A.15. OProfile
    A.16. taskset
    A.17. SystemTap
Appendix B. Revision History
Red Hat Enterprise Linux 7 provides support for automatic NUMA balancing. The kernel now
automatically detects which memory pages process threads are actively using, and groups the
threads and their memory into or across NUMA nodes. The kernel reschedules threads and
migrates memory to balance the system for optimal NUMA alignment and performance.
The performance penalty of enabling file system barriers is now negligible (less than 3%). As
such, tuned profiles no longer disable file system barriers.
OProfile adds support for profiling based on the Linux Performance Events subsystem with the
new operf tool. This new tool can be used to collect data in place of the opcontrol daemon.
Control groups remain available as a method of allocating resources to certain groups of
processes on your system. For detailed information about implementation in Red Hat
Enterprise Linux 7, see the Red Hat Enterprise Linux 7 Resource Management Guide, available from
http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/.
2.1. /proc
The /pro c " file system" is a directory that contains a hierarchy of files that represent the current state
of the Linux kernel. It allows users and applications to see the kernel's view of the system.
The /pro c directory also contains information about system hardware and any currently running
processes. Most files in the /pro c file system are read-only, but some files (primarily those in
/proc/sys) can be manipulated by users and applications to communicate configuration changes to
the kernel.
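For example, a tunable under /proc/sys can be read like any other file, and (with root privileges) changed at runtime. The kernel.pid_max tunable is used here purely as an illustration:

```shell
# Read the kernel's current maximum PID value through /proc.
cat /proc/sys/kernel/pid_max

# With root privileges, the same tunable can be changed at runtime,
# either by writing to the file or with the equivalent sysctl key:
# echo 65536 > /proc/sys/kernel/pid_max
# sysctl -w kernel.pid_max=65536
```

Changes made this way do not persist across reboots; persistent settings belong in /etc/sysctl.conf.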
For further information about viewing and editing files in the /proc directory, refer to the Red Hat
Enterprise Linux 7 System Administrator's Reference Guide, available from
http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/.
2.4. Tuna
Tuna adjusts configuration details such as scheduler policy, thread priority, and CPU and interrupt
affinity. The tuna package provides a command line tool and a graphical interface with equivalent
functionality.
Section 3.3.8, Configuring CPU, thread, and interrupt affinity with Tuna describes how to configure
your system with Tuna on the command line. For details about how to use Tuna, see Section A.2,
Tuna or the man page:
$ man tuna
2.5.1. top
The top tool, provided by the procps-ng package, gives a dynamic view of the processes in a running
system. It can display a variety of information, including a system summary and a list of tasks
currently being managed by the Linux kernel. It also has a limited ability to manipulate processes,
and to make configuration changes persistent across system restarts.
By default, the processes displayed are ordered according to the percentage of CPU usage, so that
you can easily see the processes consuming the most resources. Both the information top displays
and its operation are highly configurable to allow you to concentrate on different usage statistics as
required.
For detailed information about using top, see the man page:
$ man top
2.5.2. ps
The ps tool, provided by the procps-ng package, takes a snapshot of a select group of active
processes. By default, the group examined is limited to processes that are owned by the current user
and associated with the terminal in which ps is run.
ps can provide more detailed information about processes than top, but by default it provides a
single snapshot of this data, ordered by process identifier.
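As a sketch of widening that default view, ps can report every process with user-selected columns and a custom sort order (the exact columns chosen here are an illustration, not a recommendation from this guide):

```shell
# Snapshot all processes (-e) with selected columns (-o), sorted
# by CPU usage, highest first; head trims to the top five entries.
ps -eo pid,ppid,%cpu,%mem,comm --sort=-%cpu | head -6
```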
For detailed information about using ps, see the man page:
$ man ps
2.7. perf
The perf tool uses hardware performance counters and kernel tracepoints to track the impact of other
commands and applications on your system. Various perf subcommands display and record
statistics for common performance events, and analyze and report on the data recorded.
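A minimal sketch of the most common subcommand, using sleep as a stand-in workload (counter access may require elevated privileges on some systems):

```shell
# Count common hardware and software events for one run of a
# command; the summary is printed when the command exits.
perf stat -- sleep 1

# Record system-wide samples for five seconds, then summarize:
# perf record -a -- sleep 5
# perf report --stdio
```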
For detailed information about perf and its subcommands, see Section A.7, perf .
Alternatively, more information is available in the Red Hat Enterprise Linux 7 Developer Guide,
available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/.
2.8. turbostat
Turbostat is provided by the kernel-tools package. It reports on processor topology, frequency, idle
power-state statistics, temperature, and power usage on Intel 64 processors.
Turbostat is useful for identifying servers that are inefficient in terms of power usage or idle time. It
also helps to identify the rate of system management interrupts (SMIs) occurring on the system. It can
also be used to verify the effects of power management tuning.
Turbostat requires root privileges to run. It also requires processor support for the following:
invariant time stamp counters
APERF model-specific registers
MPERF model-specific registers
For more details about turbostat output and how to read it, see Section A.11, turbostat .
For more information about turbostat, see the man page:
$ man turbostat
2.9. iostat
The iostat tool is provided by the sysstat package. It monitors and reports on system input/output
device loading to help administrators make decisions about how to balance input/output load
between physical disks. It reports on processor or device utilization since iostat was last run, or since
boot. You can focus the output of these reports on specific devices by using the parameters defined
in the iostat man page:
$ man iostat
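For instance, one commonly used invocation (an assumption about typical usage, not taken from this guide) samples extended per-device statistics at short intervals:

```shell
# Extended per-device statistics (-x), reported every second,
# twice; the first report covers activity since boot.
iostat -x 1 2
```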
2.10. irqbalance
irqbalance is a command line tool that distributes hardware interrupts across processors to
improve system performance. For details about irqbalance, see Section A.1, irqbalance or the
man page:
$ man irqbalance
2.11. ss
ss is a command-line utility that prints statistical information about sockets, allowing administrators
to assess device performance over time. By default, ss lists open non-listening TCP sockets that have
established connections, but a number of useful options are provided to help administrators filter out
statistics about specific sockets.
Red Hat recommends using ss over netstat in Red Hat Enterprise Linux 7.
One common usage is ss -tmpie which displays detailed information (including internal
information) about TCP sockets, memory usage, and processes using the socket.
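Building on that, ss filters can narrow the report to particular states or ports; the destination port below is purely illustrative:

```shell
# Detailed TCP socket information: timers, memory, processes,
# and internal TCP state.
ss -tmpie

# Only established TCP sockets connected to remote port 443.
ss -t state established '( dport = :443 )'
```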
ss is provided by the iproute package. For more information, see the man page:
$ man ss
2.12. numastat
The numastat tool displays memory statistics for processes and the operating system on a
per-NUMA-node basis.
By default, numastat displays per-node NUMA hit and miss system statistics from the kernel memory
allocator. Optimal performance is indicated by high numa_hit values and low numa_miss values.
numastat also provides a number of command line options, which can show how system and
process memory is distributed across NUMA nodes in the system.
It can be useful to cross-reference per-node numastat output with per-CPU top output to verify that
process threads are running on the same node to which memory is allocated.
numastat is provided by the numactl package. For details about how to use numastat, see
Section A.12, numastat . For further information about numastat, see the man page:
$ man numastat
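A brief sketch of typical usage (the process name in the second command is a hypothetical example):

```shell
# Default per-node hit/miss counters from the kernel allocator.
numastat

# Per-node memory usage of matching processes, with columns
# compressed to fit the terminal (-c):
# numastat -c -p my_app
```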
2.13. numad
numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource
usage within a system in order to dynamically improve NUMA resource allocation and management
(and therefore system performance). Depending on system workload, numad can provide up to 50
percent improvements in performance benchmarks. It also provides a pre-placement advice service
that can be queried by various job management systems to provide assistance with the initial
binding of CPU and memory resources for their processes.
numad monitors available system resources on a per-node basis by periodically accessing
information in the /proc file system. It tries to maintain a specified resource usage level, and
rebalances resource allocation when necessary by moving processes between NUMA nodes. numad
attempts to achieve optimal NUMA performance by localizing and isolating significant processes on
a subset of the system's NUMA nodes.
numad primarily benefits systems with long-running processes that consume significant amounts of
resources, and are contained in a subset of the total system resources. It may also benefit
applications that consume multiple NUMA nodes' worth of resources; however, the benefits provided
by numad decrease as the consumed percentage of system resources increases.
numad is unlikely to improve performance when processes run for only a few minutes, or do not
consume many resources. Systems with continuous, unpredictable memory access patterns, such as
large in-memory databases, are also unlikely to benefit from using numad.
For further information about using numad, see Section 3.3.5, Automatic NUMA affinity management
with numad or Section A.14, numad , or refer to the man page:
$ man numad
2.15. OProfile
OProfile is a system-wide performance monitoring tool. It uses the processor's dedicated
performance monitoring hardware to retrieve information about the kernel and system executables to
determine the frequency of certain events, such as when memory is referenced, the number of second-level cache requests, and the number of hardware requests received. OProfile can also be used to
determine processor usage, and to determine which applications and services are used most often.
However, OProfile does have several limitations:
Performance monitoring samples may not be precise. Because the processor may execute
instructions out of order, samples can be recorded from a nearby instruction instead of the
instruction that triggered the interrupt.
OProfile expects processes to start and stop multiple times. As such, samples from multiple runs
are allowed to accumulate. You may need to clear the sample data from previous runs.
OProfile focuses on identifying problems with processes limited by CPU access. It is therefore not
useful for identifying processes that are sleeping while they wait on locks or for other events.
For more detailed information about OProfile, see Section A.15, OProfile , or the Red Hat
Enterprise Linux 7 System Administrator's Guide, available from
http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/. Alternatively, refer to the
documentation on your system, located in /usr/share/doc/oprofile-version.
2.16. Valgrind
Valgrind provides a number of detection and profiling tools to help improve the performance of your
applications. These tools can detect memory and thread-related errors, as well as heap, stack, and
array overruns, letting you easily locate and correct errors in your application code. They can also
profile the cache, the heap, and branch-prediction to identify factors that may increase application
speed and minimize memory usage.
Valgrind analyzes your application by running it on a synthetic CPU and instrumenting existing
application code as it is executed. It then prints commentary that clearly identifies each process
involved in application execution to a user-specified file, file descriptor, or network socket. Note that
executing instrumented code can take between four and fifty times longer than normal execution.
Valgrind can be used on your application as-is, without recompiling. However, because Valgrind
uses debugging information to pinpoint issues in your code, if your application and support libraries
were not compiled with debugging information enabled, Red Hat recommends recompiling to include
this information.
Valgrind also integrates with the GNU Project Debugger (gdb) to improve debugging efficiency.
Valgrind and its subordinate tools are useful for memory profiling. For detailed information about
using Valgrind to profile system memory, see Section 4.2.2, Profiling application memory usage with
Valgrind .
For detailed information about Valgrind, see the Red Hat Enterprise Linux 7 D eveloper Guide,
available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/.
For detailed information about using Valgrind, see the man page:
$ man valgrind
Accompanying documentation can also be found in /usr/share/doc/valgrind-version when
the valgrind package is installed.
Chapter 3. CPU
This chapter outlines CPU hardware details and configuration options that affect application
performance in Red Hat Enterprise Linux 7. Section 3.1, Considerations discusses the CPU related
factors that affect performance. Section 3.2, Monitoring and diagnosing performance problems
teaches you how to use Red Hat Enterprise Linux 7 tools to diagnose performance problems related
to CPU hardware or configuration details. Section 3.3, Configuration suggestions discusses the
tools and strategies you can use to solve CPU related performance problems in Red Hat
Enterprise Linux 7.
For example, lscpu reports CPU architecture details similar to the following:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                40
On-line CPU(s) list:   0-39
Thread(s) per core:    1
Core(s) per socket:    10
Socket(s):             4
NUMA node(s):          4
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 47
Model name:            Intel(R) Xeon(R) CPU E7- 4870  @ 2.40GHz
Stepping:              2
CPU MHz:               2394.204
BogoMIPS:              4787.85
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              30720K
NUMA node0 CPU(s):     0,4,8,12,16,20,24,28,32,36
NUMA node1 CPU(s):     2,6,10,14,18,22,26,30,34,38
NUMA node2 CPU(s):     1,5,9,13,17,21,25,29,33,37
NUMA node3 CPU(s):     3,7,11,15,19,23,27,31,35,39
The lstopo command, provided by the hwloc package, creates a graphical representation of your
system. The lstopo-no-graphics command provides detailed textual output.
3.1.2. Scheduling
In Red Hat Enterprise Linux, the smallest unit of process execution is called a thread. The system
scheduler determines which processor runs a thread, and for how long the thread runs. However,
because the scheduler's primary concern is to keep the system busy, it may not schedule threads
optimally for application performance.
For example, say an application on a NUMA system is running on Node A when a processor on
Node B becomes available. To keep the processor on Node B busy, the scheduler moves one of the
application's threads to Node B. However, the application thread still requires access to memory on
Node A. Because the thread is now running on Node B, and Node A memory is no longer local to the
thread, it will take longer to access. It may take longer for the thread to finish running on Node B than
it would have taken to wait for a processor on Node A to become available, and to execute the thread
on the original node with local memory access.
Performance sensitive applications often benefit from the designer or administrator determining
where threads are run. For details about how to ensure threads are scheduled appropriately for the
needs of performance sensitive applications, see Section 3.3.6, Tuning scheduling policy .
For more information about tuning interrupt requests, see Section 3.3.7, Setting interrupt affinity or
Section 3.3.8, Configuring CPU, thread, and interrupt affinity with Tuna . For information specific to
network interrupts, see Chapter 6, Networking.
3.2.1. turbostat
Turbostat prints counter results at specified intervals to help administrators identify unexpected
behavior in servers, such as excessive power usage, failure to enter deep sleep states, or system
management interrupts (SMIs) being created unnecessarily.
The turbostat tool is part of the kernel-tools package. It is supported for use on systems with AMD64
and Intel 64 processors. It requires root privileges to run, and processor support for invariant time
stamp counters, and APERF and MPERF model specific registers.
For usage examples, see the man page:
$ man turbostat
3.2.2. numastat
Important
This tool received substantial updates in the Red Hat Enterprise Linux 6 life cycle. While the
default output remains compatible with the original tool written by Andi Kleen, supplying any
options or parameters to numastat significantly changes the format of its output.
The numastat tool displays per-NUMA node memory statistics for processes and the operating
system and shows administrators whether process memory is spread throughout a system or
centralized on specific nodes.
Cross reference numastat output with per-processor top output to confirm that process threads are
running on the same node from which process memory is allocated.
numastat is provided by the numactl package. For further information about numastat output, see
the man page:
$ man numastat
Important
taskset does not guarantee local memory allocation. If you require the additional
performance benefits of local memory allocation, Red Hat recommends using numactl
instead of taskset.
For more information about taskset, see Section A.16, taskset or the man page:
$ man taskset
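As a sketch of taskset usage (the true command stands in for a real application, and PID 1234 is hypothetical):

```shell
# Launch a command restricted to CPUs 0 and 1.
taskset -c 0,1 true

# Show, then change, the affinity of an already-running process:
# taskset -cp 1234
# taskset -cp 0-3 1234
```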
Multi-threaded applications that are sensitive to performance may benefit from being configured to
execute on a specific NUMA node rather than a specific processor. Whether this is suitable depends
on your system and the requirements of your application. If multiple application threads access the
same cached data, then configuring those threads to execute on the same processor may be
suitable. However, if multiple threads that access and cache different data execute on the same
processor, each thread may evict cached data accessed by a previous thread. This means that each
thread 'misses' the cache, and wastes execution time fetching data from memory and replacing it in the
cache. You can use the perf tool, as documented in Section A.7, perf , to check for an excessive
number of cache misses.
numactl provides a number of options to assist you in managing processor and memory affinity.
See Section A.12, numastat or the man page for details:
$ man numactl
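For example, two commonly used invocations (./my_app is a placeholder binary, not from this guide):

```shell
# Display the system's NUMA topology and per-node free memory.
numactl --hardware

# Run a process with its CPUs and memory both bound to NUMA
# node 0, so its allocations stay node-local:
# numactl --cpunodebind=0 --membind=0 ./my_app
```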
Note
The numactl package includes the libnuma library. This library offers a simple
programming interface to the NUMA policy supported by the kernel, and can be used for more
fine-grained tuning than the numactl application. For more information, see the man page:
$ man numa
SCHED_FIFO (also called static priority scheduling) is a realtime policy that defines a fixed priority
for each thread. This policy allows administrators to improve event response time and reduce
latency, and is recommended for time sensitive tasks that do not run for an extended period of time.
When SCHED_FIFO is in use, the scheduler scans the list of all SCHED_FIFO threads in priority
order and schedules the highest priority thread that is ready to run. The priority level of a
SCHED_FIFO thread can be any integer from 1 to 99, with 99 treated as the highest priority. Red Hat
recommends starting at a low number and increasing priority only when you identify latency issues.
Warning
Because realtime threads are not subject to time slicing, Red Hat does not recommend setting
a priority of 99. This places your process at the same priority level as migration and watchdog
threads; if your thread goes into a computational loop and these threads are blocked, they will
not be able to run. Systems with a single processor will eventually hang in this situation.
Administrators can limit SCHED_FIFO bandwidth to prevent realtime application programmers from
initiating realtime tasks that monopolize the processor.
/proc/sys/kernel/sched_rt_period_us
This parameter defines the time period in microseconds that is considered to be one
hundred percent of processor bandwidth. The default value is 1000000 μs, or 1 second.
/proc/sys/kernel/sched_rt_runtime_us
This parameter defines the time period in microseconds that is devoted to running realtime
threads. The default value is 950000 μs, or 0.95 seconds.
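These limits can be inspected directly through /proc. The chrt command shown afterwards is the usual way to start a workload under SCHED_FIFO; priority 10 and ./my_app are arbitrary examples:

```shell
# Inspect the realtime bandwidth limits described above.
cat /proc/sys/kernel/sched_rt_period_us
cat /proc/sys/kernel/sched_rt_runtime_us

# Start a workload under SCHED_FIFO at priority 10 (requires
# appropriate privileges; "./my_app" is a placeholder):
# chrt -f 10 ./my_app
```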
3.3.6.1.2. Round robin priority scheduling with SCHED_RR
SCHED_RR is a round-robin variant of SCHED_FIFO. This policy is useful when multiple threads
need to run at the same priority level.
Like SCHED_FIFO, SCHED_RR is a realtime policy that defines a fixed priority for each thread. The
scheduler scans the list of all SCHED_RR threads in priority order and schedules the highest priority
thread that is ready to run. However, unlike SCHED_FIFO, threads that have the same priority are
scheduled round-robin style within a certain time slice.
You can set the value of this time slice in milliseconds with the sched_rr_timeslice_ms kernel
parameter (/proc/sys/kernel/sched_rr_timeslice_ms). The lowest value is 1 millisecond.
3.3.6.1.3. Normal scheduling with SCHED_OTHER
SCHED_OTHER is the default scheduling policy in Red Hat Enterprise Linux 7. This policy uses the
Completely Fair Scheduler (CFS) to allow fair processor access to all threads scheduled with this
policy. This policy is most useful when there are a large number of threads or data throughput is a
priority, as it allows more efficient scheduling of threads over time.
When this policy is in use, the scheduler creates a dynamic priority list based partly on the niceness
value of each process thread. Administrators can change the niceness value of a process, but
cannot change the scheduler's dynamic priority list directly.
For details about changing process niceness, see the Red Hat Enterprise Linux 7 Deployment Guide,
available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/.
Note
On systems that support interrupt steering, modifying the smp_affinity of an interrupt
request sets up the hardware so that the decision to service an interrupt with a particular
processor is made at the hardware level with no intervention from the kernel. For more
information about interrupt steering, see Chapter 6, Networking.
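As an illustration, each interrupt's affinity mask is exposed under /proc/irq; IRQ 0 is read below because it exists on most systems, and the IRQ number and mask in the write example are hypothetical:

```shell
# The smp_affinity file holds a hexadecimal bitmask of the CPUs
# allowed to service this interrupt.
cat /proc/irq/0/smp_affinity

# Restrict IRQ 32 (hypothetical) to CPU 1 (bitmask 0x2); root only:
# echo 2 > /proc/irq/32/smp_affinity
```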
3.3.8. Configuring CPU, thread, and interrupt affinity with Tuna
Tuna can control CPU, thread, and interrupt affinity, and provides a number of actions for each type
of entity it can control. For a full list of Tuna's capabilities, see Section A.2, Tuna .
To move all threads away from one or more specified CPUs, run the following command, replacing
CPUs with the number of the CPU you want to isolate.
# tuna --cpus CPUs --isolate
To include a CPU in the list of CPUs that can run certain threads, run the following command,
replacing CPUs with the number of the CPU you want to include.
# tuna --cpus CPUs --include
To move an interrupt request to a specified CPU, run the following command, replacing CPU with the
number of the CPU, and IRQs with the comma-delimited list of interrupt requests you want to move.
# tuna --irqs IRQs --cpus CPU --move
Alternatively, you can use the following command to move all interrupt requests that match the pattern sfc1* to CPU core 7.
# tuna -q sfc1* -c7 -m -x
To change the policy and priority of a thread, run the following command, replacing thread with the
thread you want to change, policy with the name of the policy you want the thread to operate under,
and level with an integer from 0 (lowest priority) to 99 (highest priority).
# tuna --threads thread --priority policy:level
Chapter 4. Memory
This chapter outlines the memory management capabilities of Red Hat Enterprise Linux 7. Section 4.1,
Considerations discusses memory related factors that affect performance. Section 4.2, Monitoring
and diagnosing performance problems teaches you how to use Red Hat Enterprise Linux 7 tools to
diagnose performance problems related to memory utilization or configuration details. Section 4.3,
Configuration tools discusses the tools and strategies you can use to solve memory related
performance problems in Red Hat Enterprise Linux 7.
Red Hat Enterprise Linux provides the Huge Translation Lookaside Buffer (HugeTLB), which allows
memory to be managed in very large segments. This lets a greater number of address mappings be
cached at one time, which reduces the likelihood of TLB misses, thereby improving performance in
applications with large memory requirements.
For details about configuring HugeTLB, see Section 4.3.1, Configuring huge pages .
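A quick way to see the current state of the static huge page pool, with a hedged example of resizing it (the value 128 is arbitrary):

```shell
# Current huge page pool size and usage.
grep -i '^huge' /proc/meminfo

# Reserve 128 static huge pages at runtime (requires root; make
# the setting persistent via /etc/sysctl.conf if needed):
# sysctl -w vm.nr_hugepages=128
```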
Note
Memcheck can only report these errors; it cannot prevent them from occurring. If your
program accesses memory in a way that would normally cause a segmentation fault, the
segmentation fault still occurs. However, memcheck will log an error message immediately
prior to the fault.
Because memcheck uses instrumentation, applications executed with memcheck run ten to thirty
times slower than usual.
To run memcheck on an application, execute the following command:
# valgrind --tool=memcheck application
You can also use the following options to focus memcheck output on specific types of problem.
--leak-check
After the application finishes executing, memcheck searches for memory leaks. The default
value is --leak-check=summary, which prints the number of memory leaks found. You
can specify --leak-check=yes or --leak-check=full to output details of each
individual leak. To disable, specify --leak-check=no.
--undef-value-errors
The default value is --undef-value-errors=yes, which reports errors when undefined
values are used. You can also specify --undef-value-errors=no, which will disable
this report and slightly speed up Memcheck.
--ignore-ranges
Specifies one or more ranges that memcheck should ignore when checking for memory
addressability, for example, --ignore-ranges=0xPP-0xQQ,0xRR-0xSS.
For a full list of memcheck options, see the documentation included at
/usr/share/doc/valgrind-version/valgrind_manual.pdf.
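Combining these options, a fuller invocation might look like the following (shown here against the trivial true binary; substitute your own application, ideally compiled with -g):

```shell
# Report every individual leak in full detail, keeping
# undefined-value checking enabled (its default).
valgrind --tool=memcheck --leak-check=full \
         --undef-value-errors=yes true
```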
- - D 1
Specifies the size, associativity, and line size of the first level data cache, like so: -D 1= size,associativity,line_size.
- - LL
Specifies the size, associativity, and line size of the last level cache, like so: -LL= size,associativity,line_size.
--cache-sim
Enables or disables the collection of cache access and miss counts. This is enabled
(--cache-sim=yes) by default. Disabling both this and --branch-sim leaves cachegrind
with no information to collect.
--branch-sim
Enables or disables the collection of branch instruction and incorrect prediction counts.
This is enabled (--branch-sim=yes) by default. Disabling both this and --cache-sim
leaves cachegrind with no information to collect.
Cachegrind writes detailed profiling information to a per-process cachegrind.out.pid
file, where pid is the process identifier. This detailed information can be further processed by
the companion cg_annotate tool, like so:
# cg_annotate cachegrind.out.pid
Cachegrind also provides the cg_diff tool, which makes it easier to chart program performance
before and after a code change. To compare output files, execute the following command, replacing
first with the initial profile output file, and second with the subsequent profile output file.
# cg_diff first second
The resulting output file can be viewed in greater detail with the cg_annotate tool.
For a full list of cachegrind options, see the documentation included at
/usr/share/doc/valgrind-version/valgrind_manual.pdf.
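As a sketch of the before-and-after workflow described above (the process IDs in the file names are illustrative, not taken from this guide):

```shell
# Profile the application before a code change; cachegrind writes
# its results to cachegrind.out.<pid>.
valgrind --tool=cachegrind ./application
# ... modify and rebuild the application, then profile it again ...
valgrind --tool=cachegrind ./application
# Compare the two runs and annotate the difference.
cg_diff cachegrind.out.1111 cachegrind.out.2222 > diff.profile
cg_annotate diff.profile
```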
--heap-admin
Specifies the number of bytes per block to use for administration when heap profiling is
enabled. The default value is 8 bytes.
--stacks
Specifies whether massif profiles the stack. The default value is --stacks=no, as stack
profiling can greatly slow massif. Set this option to --stacks=yes to enable stack
profiling. Note that massif assumes that the main stack starts with a size of zero in order to
better indicate the changes in stack size that relate to the application being profiled.
--time-unit
Specifies the interval at which massif gathers profiling data. The default value is i
(instructions executed). You can also specify ms (milliseconds, or realtime) and B (bytes
allocated or deallocated on the heap and stack). Examining bytes allocated is useful for
short-run applications and for testing purposes, as it is most reproducible across different
hardware.
Massif outputs profiling data to a massif.out.pid file, where pid is the process identifier of the
specified application. The ms_print tool graphs this profiling data to show memory consumption
over the execution of the application, as well as detailed information about the sites responsible for
allocation at points of peak memory allocation. To graph the data from the massif.out.pid file,
execute the following command:
# ms_print massif.out.pid
For a full list of massif options, see the documentation included at
/usr/share/doc/valgrind-version/valgrind_manual.pdf.
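The options above can be combined as in the following sketch (the process ID in the output file name is illustrative):

```shell
# Profile heap and stack usage, measuring time in bytes allocated,
# then graph the result with ms_print.
valgrind --tool=massif --stacks=yes --time-unit=B ./application
ms_print massif.out.3333 | less
```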
hugepages
Defines the number of persistent huge pages configured in the kernel at boot time. The
default value is 0. It is only possible to allocate (or deallocate) huge pages if there are
sufficient physically contiguous free pages in the system. Pages reserved by this parameter
cannot be used for other purposes.
This value can be adjusted after boot by changing the value of the
/proc/sys/vm/nr_hugepages file.
In a NUMA system, huge pages assigned with this parameter are divided equally between
nodes. You can assign huge pages to specific nodes at runtime by changing the value of
the node's /sys/devices/system/node/node_id/hugepages/hugepages-1048576kB/nr_hugepages file.
For more information, read the relevant kernel documentation, which is installed in
/usr/share/doc/kernel-doc-kernel_version/Documentation/vm/hugetlbpage.txt by default.
hugepagesz
Defines the size of persistent huge pages configured in the kernel at boot time. Valid values
are 2 MB and 1 GB. The default value is 2 MB.
default_hugepagesz
Defines the default size of persistent huge pages configured in the kernel at boot time. Valid
values are 2 MB and 1 GB. The default value is 2 MB.
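As an illustration, the three boot parameters might be combined on the kernel command line as follows. The counts and sizes here are arbitrary examples, not recommendations:

```shell
# Hypothetical kernel command-line fragment: 2 MB pages remain the
# default size, and four 1 GB pages are reserved at boot.
default_hugepagesz=2M hugepagesz=1G hugepages=4
```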
You can also use the following parameters to influence huge page behaviour during runtime.
/sys/devices/system/node/node_id/hugepages/hugepages-size/nr_hugepages
Defines the number of huge pages of the specified size assigned to the specified NUMA
node. This is supported as of Red Hat Enterprise Linux 7.1. The following example
adds twenty 2048 kB huge pages to node2.
# numastat -cm | egrep 'Node|Huge'
                 Node 0 Node 1 Node 2 Node 3  Total
AnonHugePages         0      2      0      8     10
HugePages_Total       0      0      0      0      0
HugePages_Free        0      0      0      0      0
HugePages_Surp        0      0      0      0      0
# echo 20 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
# numastat -cm | egrep 'Node|Huge'
                 Node 0 Node 1 Node 2 Node 3  Total
AnonHugePages         0      2      0      8     10
HugePages_Total       0      0     40      0     40
HugePages_Free        0      0     40      0     40
HugePages_Surp        0      0      0      0      0
max_map_count
Defines the maximum number of memory map areas that a process can use. The default
value (65530) is appropriate for most cases. Increase this value if your application needs
to map more than this number of files.
min_free_kbytes
Specifies the minimum number of kilobytes to keep free across the system. This is used to
determine an appropriate value for each low memory zone, each of which is assigned a
number of reserved free pages in proportion to its size.
Warning
Extreme values can damage your system. Setting min_free_kbytes to an
extremely low value prevents the system from reclaiming memory, which can result in
system hangs and the OOM-killing of processes. However, setting min_free_kbytes too
high (for example, to 5-10% of total system memory) causes the system to enter an
out-of-memory state immediately, resulting in the system spending too much time
reclaiming memory.
oom_adj
In the event that the system runs out of memory and the panic_on_oom parameter is set to
0, the oom_killer function kills processes until the system can recover, starting from the
process with the highest oom_score.
The oom_adj parameter helps determine the oom_score of a process. This parameter is
set per process identifier. A value of -17 disables the oom_killer for that process. Other
valid values range from -16 to 15.
Note
Processes spawned by an adjusted process inherit that process's oom_score.
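For example, oom_adj can be set through the proc file system. This is a sketch; the process ID is a placeholder:

```shell
# Hypothetical: exempt a critical process (PID 2468) from the OOM killer.
echo -17 > /proc/2468/oom_adj
# Inspect the resulting badness score; it should now be minimal.
cat /proc/2468/oom_score
```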
swappiness
A value from 0 to 100 which controls the degree to which the system favours anonymous
memory or the page cache. A high value improves file-system performance while
aggressively swapping less active processes out of RAM. A low value avoids swapping
processes out of memory, which usually decreases latency at the cost of I/O performance.
The default value is 60.
Warning
Setting swappiness=0 very aggressively avoids swapping out, which
increases the risk of OOM killing under strong memory and I/O pressure.
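The swappiness value can be inspected and changed as in the following sketch; the value 10 is purely illustrative:

```shell
# Check the current value (60 by default).
cat /proc/sys/vm/swappiness
# Change it for the running system.
sysctl -w vm.swappiness=10
# Persist the change across reboots.
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
```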
aio-max-nr
Defines the maximum allowed number of events in all active asynchronous input/output
contexts. The default value is 65536. Modifying this value does not pre-allocate or resize
any kernel data structures.
file-max
Defines the maximum number of file handles allocated by the kernel. The default value
matches the value of files_stat.max_files in the kernel, which is set to the largest
value out of either NR_FILE (8192 in Red Hat Enterprise Linux), or the result of the
following:
(mempages * (PAGE_SIZE / 1024)) / 10
Raising this value can resolve errors caused by a lack of available file handles.
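To make the formula concrete, the following sketch estimates the default for a hypothetical machine with 16 GiB of RAM and a 4 KiB page size (mempages is the number of physical pages):

```shell
awk 'BEGIN {
    page_size = 4096                                  # bytes, assumed
    mempages  = 16 * 1024 * 1024 * 1024 / page_size   # pages in 16 GiB
    print int(mempages * (page_size / 1024) / 10)     # the kernel formula
}'
```

For this hypothetical machine the formula yields 1677721 file handles.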
The I/O scheduler determines when and for how long I/O operations run on a storage device. It is
also known as the I/O elevator.
Red Hat Enterprise Linux 7 provides three I/O schedulers.
deadline
The default I/O scheduler for all block devices except SATA disks. Deadline attempts to
provide a guaranteed latency for requests from the point at which requests reach the I/O
scheduler. This scheduler is suitable for most use cases, but particularly those in which
read operations occur more often than write operations.
Queued I/O requests are sorted into a read or write batch and then scheduled for execution
in increasing LBA order. Read batches take precedence over write batches by default, as
applications are more likely to block on read I/O. After a batch is processed, deadline
checks how long write operations have been starved of processor time and schedules the
next read or write batch as appropriate. The number of requests to handle per batch, the
number of read batches to issue per write batch, and the amount of time before requests
expire are all configurable; see Section 5.3.4, Tuning the deadline scheduler for details.
cfq
The default scheduler only for devices identified as SATA disks. The Completely Fair
Queueing scheduler, cfq, divides processes into three separate classes: real time, best
effort, and idle. Processes in the real time class are always performed before processes in
the best effort class, which are always performed before processes in the idle class. This
means that processes in the real time class can starve both best effort and idle processes of
processor time. Processes are assigned to the best effort class by default.
Cfq uses historical data to anticipate whether an application will issue more I/O requests in
the near future. If more I/O is expected, cfq idles to wait for the new I/O, even if I/O from
other processes is waiting to be processed.
Because of this tendency to idle, the cfq scheduler should not be used in conjunction with
hardware that does not incur a large seek penalty unless it is tuned for this purpose. It
should also not be used in conjunction with other non-work-conserving schedulers, such
as a host-based hardware RAID controller, as stacking these schedulers tends to cause a
large amount of latency.
Cfq behavior is highly configurable; see Section 5.3.5, Tuning the cfq scheduler for
details.
noop
The noop I/O scheduler implements a simple FIFO (first-in first-out) scheduling algorithm.
Requests are merged at the generic block layer through a simple last-hit cache. This can be
the best scheduler for CPU-bound systems using fast storage.
For details on setting a different default I/O scheduler, or specifying a different scheduler for a
particular device, see Section 5.3, Configuration tools .
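For instance, the active scheduler for a single device can be inspected and changed at runtime through sysfs. This is a sketch: the device name is illustrative, and a change made this way does not persist across reboots:

```shell
# The active scheduler is shown in square brackets.
cat /sys/block/sda/queue/scheduler
# Switch this device to the deadline scheduler.
echo deadline > /sys/block/sda/queue/scheduler
# Confirm the change.
cat /sys/block/sda/queue/scheduler
```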
5.1.3.1. XFS
XFS is a robust and highly scalable 64-bit file system. It is the default file system in Red Hat
Enterprise Linux 7. XFS uses extent-based allocation, and features a number of allocation schemes,
including pre-allocation and delayed allocation, both of which reduce fragmentation and aid
performance. It also supports metadata journaling, which can facilitate crash recovery. XFS can be
defragmented and enlarged while mounted and active, and Red Hat Enterprise Linux 7 supports
several XFS-specific backup and restore utilities.
As of Red Hat Enterprise Linux 7.0 GA, XFS is supported to a maximum file system size of 500 TB,
and a maximum file offset of 8 EB (sparse files). For details about administering XFS, see the Red Hat
Enterprise Linux 7 Storage Administration Guide, available from
http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/. For assistance tuning XFS
for a specific purpose, see Section 5.3.7.1, Tuning XFS .
5.1.3.2. Ext4
Ext4 is a scalable extension of the ext3 file system. Its default behavior is optimal for most workloads.
However, it is supported only to a maximum file system size of 50 TB, and a maximum file size of
16 TB. For details about administering ext4, see the Red Hat Enterprise Linux 7 Storage Administration
Guide, available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/. For
assistance tuning ext4 for a specific purpose, see Section 5.3.7.2, Tuning ext4 .
5.1.3.4. GFS2
GFS2 is part of the High Availability Add-On, which provides clustered file system support to Red Hat
Enterprise Linux 7. GFS2 provides a consistent file system image across all servers in a cluster,
allowing servers to read from and write to a single shared file system.
GFS2 is supported to a maximum file system size of 250 TB.
For details about administering GFS2, see the Red Hat Enterprise Linux 7 Storage Administration Guide,
available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/. For
assistance tuning GFS2 for a specific purpose, see Section 5.3.7.4, Tuning GFS2 .
5.1.4.1. Considerations at format time
Some file system configuration decisions cannot be changed after the device is formatted. This
section covers the options available to you for decisions that must be made before you format your
storage device.
Size
Create an appropriately-sized file system for your workload. Smaller file systems have
proportionally shorter backup times and require less time and memory for file system
checks. However, if your file system is too small, its performance will suffer from high
fragmentation.
Block size
The block is the unit of work for the file system. The block size determines how much data
can be stored in a single block, and therefore the smallest amount of data that is written or
read at one time.
The default block size is appropriate for most use cases. However, your file system will
perform better and store data more efficiently if the block size (or the size of multiple blocks)
is the same as or slightly larger than the amount of data that is typically read or written at one
time. A small file will still use an entire block. Files can be spread across multiple blocks, but
this can create additional runtime overhead. Additionally, some file systems are limited to a
certain number of blocks, which in turn limits the maximum size of the file system.
Block size is specified as part of the file system options when formatting a device with the
mkfs command. The parameter that specifies the block size varies with the file system; see
the mkfs man page for your file system for details. For example, to see the options available
when formatting an XFS file system, execute the following command.
$ man mkfs.xfs
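For example, XFS accepts the block size through the -b option. This is a sketch only; the device name is a placeholder:

```shell
# Hypothetical: format a partition with a 4 KB block size
# (4096 bytes is also the XFS default).
mkfs.xfs -b size=4096 /dev/sdb1
```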
Geometry
File system geometry is concerned with the distribution of data across a file system. If your
system uses striped storage, such as RAID, you can improve performance by aligning data and
metadata with the underlying storage geometry when you format the device.
Many devices export recommended geometry, which is then set automatically when the
devices are formatted with a particular file system. If your device does not export these
recommendations, or you want to change the recommended settings, you must specify
geometry manually when you format the device with mkfs.
The parameters that specify file system geometry vary with the file system; see the mkfs man
page for your file system for details. For example, to see the options available when
formatting an ext4 file system, execute the following command.
$ man mkfs.ext4
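As an illustration for ext4, stripe geometry is passed through the -E extended options. Assuming a hypothetical 4-disk RAID 5 array with a 64 KB chunk size and 4 KB file system blocks, the stride is 64 / 4 = 16 blocks, and the stripe width is the stride multiplied by the 3 data-bearing disks:

```shell
# Hypothetical: align ext4 to a 4-disk RAID 5 with 64 KB chunks.
# stride = chunk / block = 16; stripe-width = stride * 3 data disks = 48.
mkfs.ext4 -E stride=16,stripe-width=48 /dev/md0
```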
External journals
Journaling file systems document the changes that will be made during a write operation in
a journal file prior to the operation being executed. This reduces the likelihood that a
storage device will become corrupted in the event of a system crash or power failure, and
speeds up the recovery process.
Metadata-intensive workloads involve very frequent updates to the journal. A larger journal
uses more memory, but reduces the frequency of write operations. Additionally, you can
improve the seek time of a device with a metadata-intensive workload by placing its journal
on dedicated storage that is as fast as, or faster than, the primary storage.
Warning
Ensure that external journals are reliable. Losing an external journal device will
cause file system corruption.
External journals must be created at format time, with journal devices being specified at
mount time. For details, see the mkfs and mo unt man pages.
$ man mkfs
$ man mount
For further information, see the Red Hat Enterprise Linux 7 Storage Administration Guide,
available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/.
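A sketch of creating an ext4 file system with an external journal on a faster device follows; the device names are placeholders, and the journal device must be created before the file system that uses it:

```shell
# Dedicate a fast device to the journal, then format the data device
# pointing at it (hypothetical device names).
mke2fs -O journal_dev /dev/nvme0n1p1
mkfs.ext4 -J device=/dev/nvme0n1p1 /dev/sdb1
```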
Access Time
Every time a file is read, its metadata is updated with the time at which access occurred
(atime). This involves additional write I/O. In most cases, this overhead is minimal, as by
default Red Hat Enterprise Linux 7 updates the atime field only when the previous access
time was older than the times of last modification (mtime) or status change (ctime).
However, if updating this metadata is time consuming, and if accurate access time data is
not required, you can mount the file system with the noatime mount option. This disables
updates to metadata when a file is read. It also enables nodiratime behavior, which
disables updates to metadata when a directory is read.
Read-ahead
Read-ahead behavior speeds up file access by pre-fetching data that is likely to be needed
soon and loading it into the page cache, where it can be retrieved more quickly than if it
were on disk. The higher the read-ahead value, the further ahead the system pre-fetches
data.
Red Hat Enterprise Linux attempts to set an appropriate read-ahead value based on what it
detects about your file system. However, accurate detection is not always possible. For
example, if a storage array presents itself to the system as a single LUN, the system detects
the single LUN, and does not set the appropriate read-ahead value for an array.
Workloads that involve heavy streaming of sequential I/O often benefit from high
read-ahead values. The storage-related tuned profiles provided with Red Hat
Enterprise Linux 7 raise the read-ahead value, as does using LVM striping, but these
adjustments are not always sufficient for all workloads.
The parameters that define read-ahead behavior vary with the file system; see the mount
man page for details.
$ man mount
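The read-ahead value for a block device can also be inspected and raised with blockdev. This is a sketch; the device name and value are illustrative, and the value is expressed in 512-byte sectors:

```shell
# Show the current read-ahead value, in 512-byte sectors.
blockdev --getra /dev/sda
# Raise it to 4096 sectors (2 MiB).
blockdev --setra 4096 /dev/sda
```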
This type of discard operation is configured at mount time with the discard option, and
runs in real time without user intervention. However, online discard only discards blocks
that are transitioning from used to free. Red Hat Enterprise Linux 7 supports online discard
on XFS and ext4 formatted devices.
Red Hat recommends batch discard except where online discard is required to maintain
performance, or where batch discard is not feasible for the system's workload.
Pre-allocation
Pre-allocation marks disk space as being allocated to a file without writing any data into
that space. This can be useful in limiting data fragmentation and poor read performance.
Red Hat Enterprise Linux 7 supports pre-allocating space on XFS, ext4, and GFS2 devices
at mount time; see the mount man page for the appropriate parameter for your file system.
Applications can also benefit from pre-allocating space by using the fallocate(2)
glibc call.
If analysis with vmstat shows that the I/O subsystem is responsible for reduced performance,
administrators can use iostat to determine the responsible I/O device.
vmstat is provided by the procps-ng package. For detailed information about using vmstat, see the
man page:
$ man vmstat
The seekwatcher tool can use blktrace output to graph I/O over time. It focuses on the Logical
Block Address (LBA) of disk I/O, throughput in megabytes per second, the number of seeks per
second, and I/O operations per second. This can help you to identify when you are hitting the
operations-per-second limit of a device.
For more detailed information about this tool, see the man page:
$ man seekwatcher
front_merges
Unless you have measured the overhead of this check, Red Hat recommends the default
value of 1.
read_expire
The number of milliseconds in which a read request should be scheduled for service. The
default value is 500 (0.5 seconds).
write_expire
The number of milliseconds in which a write request should be scheduled for service. The
default value is 5000 (5 seconds).
writes_starved
The number of read batches that can be processed before processing a write batch. The
higher this value is set, the greater the preference given to read batches.
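These tunables live under a device's iosched directory in sysfs. The following sketch tightens read latency on one device; the device name and values are illustrative, not recommendations:

```shell
# Hypothetical deadline tuning for /dev/sda:
echo 250 > /sys/block/sda/queue/iosched/read_expire     # 0.25 s read deadline
echo 4   > /sys/block/sda/queue/iosched/writes_starved  # more read batches per write batch
```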
This section covers the tuning parameters specific to each file system supported in Red Hat
Enterprise Linux 7. Parameters are divided according to whether their values should be configured
when you format the storage device, or when you mount the formatted device.
Where loss in performance is caused by file fragmentation or resource contention, performance can
generally be improved by reconfiguring the file system. However, in some cases the application may
need to be altered. In this case, Red Hat recommends contacting Customer Support for assistance.
Directory block size   Max. entries (read-heavy)   Max. entries (write-heavy)
4 KB                   100000-200000               1000000-2000000
16 KB                  100000-1000000              1000000-10000000
64 KB                  >1000000                    >10000000
For detailed information about the effect of directory block size on read and write workloads
in file systems of different sizes, see the XFS documentation.
To configure directory block size, use the mkfs.xfs -n option. See the mkfs.xfs man
page for details.
Allo cat io n g ro u p s
An allocation group is an independent structure that indexes free space and allocated
inodes across a section of the file system. Each allocation group can be modified
independently, allowing XFS to perform allocation and deallocation operations
Log size
Pending changes are aggregated in memory until a synchronization event is triggered, at
which point they are written to the log. The size of the log determines the number of
concurrent modifications that can be in-progress at one time. It also determines the
maximum amount of change that can be aggregated in memory, and therefore how often
logged data is written to disk. A smaller log forces data to be written back to disk more
frequently than a larger log. However, a larger log uses more memory to record pending
modifications, so a system with limited memory will not benefit from a larger log.
Logs perform better when they are aligned to the underlying stripe unit; that is, they start
and end at stripe unit boundaries. To align logs to the stripe unit, use the mkfs.xfs -d
option. See the mkfs.xfs man page for details.
To configure the log size, use the following mkfs.xfs option, replacing logsize with the
size of the log:
# mkfs.xfs -l size=logsize
For further details, see the mkfs.xfs man page:
$ man mkfs.xfs
Log stripe unit
Log writes on storage devices that use RAID 5 or RAID 6 layouts may perform better when
they start and end at stripe unit boundaries (are aligned to the underlying stripe unit).
mkfs.xfs attempts to set an appropriate log stripe unit automatically, but this depends on
the RAID device exporting this information.
Setting a large log stripe unit can harm performance if your workload triggers
synchronization events very frequently, because smaller writes need to be padded to the
size of the log stripe unit, which can increase latency. If your workload is bound by log write
latency, Red Hat recommends setting the log stripe unit to 1 block so that unaligned log
writes can be issued without padding.
The maximum supported log stripe unit is the size of the maximum log buffer size (256 KB).
It is therefore possible that the underlying storage may have a larger stripe unit than can be
configured on the log. In this case, mkfs.xfs issues a warning and sets a log stripe unit of
32 KB.
To configure the log stripe unit, use one of the following options, where N is the number of
blocks to use as the stripe unit, and size is the size of the stripe unit in KB.
mkfs.xfs -l sunit=Nb
mkfs.xfs -l su=size
For further details, see the mkfs.xfs man page:
$ man mkfs.xfs
5.3.7.1.2. Mount options
Inode allocation
Highly recommended for file systems greater than 1 TB in size. The inode64 parameter
configures XFS to allocate inodes and data across the entire file system. This ensures that
inodes are not allocated largely at the beginning of the file system, and data is not largely
allocated at the end of the file system, improving performance on large file systems.
Compression (Btrfs)
compress=zlib - Better compression ratio. It is the default and is safe for older kernels.
compress=lzo - Faster compression, but does not compress as much as zlib.
compress=no - Disables compression (starting with kernel 3.6).
compress-force=method - Enables compression even for files that do not compress well,
such as videos and dd images of disks. The options are compress-force=zlib and
compress-force=lzo.
Log buffer size and number
The larger the log buffer, the fewer I/O operations it takes to write all changes to the log. A
larger log buffer can improve performance on systems with I/O-intensive workloads that do
not have a non-volatile write cache.
The log buffer size is configured with the logbsize mount option, and defines the
maximum amount of information that can be stored in the log buffer; if a log stripe unit is not
set, buffer writes can be shorter than the maximum, and therefore there is no need to reduce
the log buffer size for synchronization-heavy workloads. The default size of the log buffer is
32 KB. The maximum size is 256 KB and other supported sizes are 64 KB, 128 KB or power
of 2 multiples of the log stripe unit between 32 KB and 256 KB.
The number of log buffers is defined by the logbufs mount option. The default value is 8
log buffers (the maximum), but as few as two log buffers can be configured. It is usually not
necessary to reduce the number of log buffers, except on memory-bound systems that
cannot afford to allocate memory to additional log buffers. Reducing the number of log
buffers tends to reduce log performance, especially on workloads sensitive to log I/O
latency.
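For example, the inode64 and log buffer options described above might be combined at mount time. This is a sketch; the device, mount point, and sizes are illustrative:

```shell
# Hypothetical XFS mount: 64-bit inode allocation, 256 KB log buffers,
# and the maximum of 8 buffers.
mount -o inode64,logbsize=256k,logbufs=8 /dev/sdb1 /mnt/data
```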
Delayed change logging
XFS has the option to aggregate changes in memory before writing them to the log. The
delaylog parameter allows frequently modified metadata to be written to the log
periodically instead of every time it changes. This option increases the potential number of
operations lost in a crash and increases the amount of memory used to track metadata.
However, it can also increase metadata modification speed and scalability by an order of
magnitude, and does not reduce data or metadata integrity when fsync, fd atasync, or
sync are used to ensure data and metadata is written to disk.
5.3.7.2. Tuning ext4
This section covers some of the tuning parameters available to ext4 file systems at format and at
mount time.
5.3.7.2.1. Formatting options
Inode table initialization
Initializing all inodes in the file system can take a very long time on very large file systems.
By default, the initialization process is deferred (lazy inode table initialization is enabled).
However, if your system does not have an ext4 driver, lazy inode table initialization is
disabled by default. It can be enabled by setting lazy_itable_init to 1. In this case,
kernel processes continue to initialize the file system after it is mounted.
This section describes only some of the options available at format time. For further formatting
parameters, see the mkfs.ext4 man page:
$ man mkfs.ext4
5.3.7.2.2. Mount options
Inode table initialization rate
When lazy inode table initialization is enabled, you can control the rate at which
initialization occurs by specifying a value for the init_itable parameter. The amount of
time spent performing background initialization is approximately equal to 1 divided by the
value of this parameter. The default value is 10 .
Automatic file synchronization
Some applications do not correctly perform an fsync after renaming an existing file, or
after truncating and rewriting. By default, ext4 automatically synchronizes files after each of
these operations. However, this can be time consuming.
If this level of synchronization is not required, you can disable this behavior by specifying
the noauto_da_alloc option at mount time. If noauto_da_alloc is set, applications
must explicitly use fsync to ensure data persistence.
Journal I/O priority
By default, journal I/O has a priority of 3, which is slightly higher than the priority of normal
I/O. You can control the priority of journal I/O with the journal_ioprio parameter at
mount time. Valid values for journal_ioprio range from 0 to 7, with 0 being the highest
priority I/O.
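The three mount options above might be combined as in the following sketch; the device, mount point, and values are illustrative:

```shell
# Hypothetical ext4 mount: faster background inode initialization,
# no automatic allocation-forcing sync, and higher journal I/O priority.
mount -o init_itable=20,noauto_da_alloc,journal_ioprio=1 /dev/sdb1 /mnt/data
```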
This section describes only some of the options available at mount time. For further mount options,
see the mount man page:
$ man mount
Chapter 6. Networking
The networking subsystem is composed of a number of different parts with sensitive connections.
Red Hat Enterprise Linux 7 networking is therefore designed to provide optimal performance for most
workloads, and to optimize its performance automatically. As such, it is not usually necessary to
manually tune network performance. This chapter discusses further optimizations that can be made
to functional networking systems.
Network performance problems are sometimes the result of hardware malfunction or faulty
infrastructure. Resolving these issues is beyond the scope of this document.
that are not copied to the requesting application, or by an increase in UDP input errors
(InErrors) in /proc/net/snmp. For information about monitoring your system for these
errors, see Section 6.2.1, ss and Section 6.2.5, /proc/net/snmp .
6.2.1. ss
ss is a command-line utility that prints statistical information about sockets, allowing administrators
to assess device performance over time. By default, ss lists open non-listening TCP sockets that
have established connections, but a number of useful options are provided to help administrators
filter out statistics about specific sockets.
Red Hat recommends ss over netstat in Red Hat Enterprise Linux 7.
ss is provided by the iproute package. For more information, see the man page:
$ man ss
6.2.2. ip
The ip utility lets administrators manage and monitor routes, devices, routing policies, and tunnels.
The ip monitor command can continuously monitor the state of devices, addresses, and routes.
ip is provided by the iproute package. For details about using ip, see the man page:
$ man ip
6.2.3. dropwatch
Dropwatch is an interactive tool that monitors and records packets that are dropped by the kernel.
For further information, see the dropwatch man page:
$ man dropwatch
6.2.4. ethtool
The ethtool utility allows administrators to view and edit network interface card settings. It is useful
for observing the statistics of certain devices, such as the number of packets dropped by that device.
You can view the status of a specified device's counters with ethtool -S and the name of the
device you want to monitor.
$ ethtool -S devname
For further information, see the man page:
$ man ethtool
Further, some network performance problems are better resolved by altering the application than by
reconfiguring your network subsystem. It is generally a good idea to configure your application to
perform frequent POSIX calls, even if this means queuing data in the application space, as this allows
data to be stored flexibly and swapped in or out of memory as required.
If analysis reveals high latency, your system may benefit from poll-based rather than interrupt-based
packet receipt.
If a socket queue receives a limited amount of traffic in bursts, increasing the depth of
the socket queue to match the size of the bursts of traffic may prevent packets from being
dropped.
# egrep 'CPU|p1p1' /proc/interrupts
        CPU0    CPU1    CPU2    CPU3    CPU4    CPU5
 ...
 93:       0       0       0       0     622       0   IR-PCI-MSI-edge   p1p1-4
 94:       0       0       0       0       0    2475   IR-PCI-MSI-edge   p1p1-5
The preceding output shows that the NIC driver created 6 receive queues for the p1p1 interface
(p1p1-0 through p1p1-5). It also shows how many interrupts were processed by each queue, and
which CPU serviced the interrupt. In this case, there are 6 queues because by default, this particular
NIC driver creates one queue per CPU, and this system has 6 CPUs. This is a fairly common pattern
amongst NIC drivers.
Alternatively, you can check the output of ls -1
/sys/devices/*/*/device_pci_address/msi_irqs after the network driver is loaded. For
example, if you are interested in a device with a PCI address of 0000:01:00.0, you can list the
interrupt request queues of that device with the following command:
# ls -1 /sys/devices/*/*/0000:01:00.0/msi_irqs
101
102
103
104
105
106
107
108
109
RSS is enabled by default. The number of queues (or the CPUs that should process network activity)
for RSS are configured in the appropriate network device driver. For the bnx2x driver, it is configured
in num_queues. For the sfc driver, it is configured in the rss_cpus parameter. Regardless, it is
typically configured in /sys/class/net/device/queues/rx-queue/, where device is the name
of the network device (such as eth1) and rx-queue is the name of the appropriate receive queue.
When configuring RSS, Red Hat recommends limiting the number of queues to one per physical CPU
core. Hyper-threads are often represented as separate cores in analysis tools, but configuring
queues for all cores, including logical cores such as hyper-threads, has not proven beneficial to
network performance.
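The sysfs layout described above can be inspected with a short script. This is a sketch: the default device name eth1 is an assumption, and the queues directory can be overridden as a second argument (which is handy for testing without the real hardware).

```shell
# Count the receive queues a driver has created for a device.
# Usage: count_rx_queues [device] [queues-dir]
count_rx_queues() {
  dev=${1:-eth1}                        # device name is an assumption
  qdir=${2:-/sys/class/net/$dev/queues}
  # Each RSS receive queue appears as an rx-N subdirectory.
  ls -1 "$qdir" 2>/dev/null | grep -c '^rx-'
}
```

On the system shown above, count_rx_queues p1p1 would report 6.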
When enabled, RSS distributes network processing equally between available CPUs based on the
amount of processing each CPU has queued. However, you can use the ethtool --show-rxfh-indir and --set-rxfh-indir parameters to modify how network activity is distributed, and to weight
certain types of network activity as more important than others.
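As a sketch, the indirection table can be inspected and reweighted as follows. The device name eth1 is an assumption, and the commands require root and driver support for RSS.

```shell
# Show the current RSS indirection table for eth1:
ethtool --show-rxfh-indir eth1
# Rebalance the table so the first receive queue gets twice the
# weight of the second (weights apply per queue, left to right):
ethtool --set-rxfh-indir eth1 weight 2 1
```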
The irqbalance daemon can be used in conjunction with RSS to reduce the likelihood of cross-node memory transfers and cache line bouncing. This lowers the latency of processing network
packets.
Data received from a single sender is not sent to more than one CPU. If the amount of data received
from a single sender is greater than a single CPU can handle, configure a larger frame size to reduce
the number of interrupts and therefore the amount of processing work for the CPU. Alternatively,
consider NIC offload options or faster CPUs.
Consider using numactl or taskset in conjunction with RFS to pin applications to specific cores,
sockets, or NUMA nodes. This can help prevent packets from being processed out of order.
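As a sketch, pinning a receive application to a single NUMA node might look like the following. The application name, node number, and PID are assumptions, and numactl requires the numactl package.

```shell
# Run the application with CPU and memory both bound to NUMA node 0:
numactl --cpunodebind=0 --membind=0 my_receive_app
# Alternatively, pin an already-running process (PID 1234) to cores 0-3:
taskset -cp 0-3 1234
```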
A.1. irqbalance
irqbalance is a command line tool that distributes hardware interrupts across processors to
improve system performance. It runs as a daemon by default, but can be run once only with the --oneshot option.
The following parameters are useful for improving performance.
--powerthresh
Sets the number of CPUs that can idle before a CPU is placed into powersave mode. If more
CPUs than the threshold are more than one standard deviation below the average softirq
workload, and no CPUs are more than one standard deviation above the average (and
have more than one irq assigned to them), a CPU is placed into powersave mode. In
powersave mode, a CPU is not part of irq balancing so that it is not woken unnecessarily.
--hintpolicy
Determines how irq kernel affinity hinting is handled. Valid values are exact (the irq affinity
hint is always applied), subset (the irq is balanced, but the assigned object is a subset of the
affinity hint), or ignore (the irq affinity hint is ignored completely).
--policyscript
Defines the location of a script to execute for each interrupt request, with the device path
and irq number passed as arguments, and a zero exit code expected by irqbalance. The
script defined can specify zero or more key value pairs to guide irqbalance in managing
the passed irq.
The following are recognized as valid key value pairs.
ban
Valid values are true (exclude the passed irq from balancing) or false (perform
balancing on this irq).
balance_level
Allows user override of the balance level of the passed irq. By default the balance
level is based on the PCI device class of the device that owns the irq. Valid values
are none, package, cache, or core.
numa_node
Allows user override of the NUMA node that is considered local to the passed irq.
If information about the local node is not specified in ACPI, devices are
considered equidistant from all nodes. Valid values are integers (starting from 0)
that identify a specific NUMA node, and -1, which specifies that an irq should be
considered equidistant from all nodes.
--banirq
The interrupt with the specified interrupt request number is added to the list of banned
interrupts.
You can also use the IRQBALANCE_BANNED_CPUS environment variable to specify a mask of CPUs
that are ignored by irqbalance.
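The mask is a hexadecimal bitmap with one bit per CPU. As a minimal sketch, the following builds a mask that bans CPUs 0 and 2 from balancing (the choice of CPUs is an assumption):

```shell
# Build a CPU mask with bits set for CPUs 0 and 2, then print it in
# the hexadecimal form that IRQBALANCE_BANNED_CPUS expects.
mask=0
for cpu in 0 2; do
  mask=$(( mask | (1 << cpu) ))
done
printf 'IRQBALANCE_BANNED_CPUS=%x\n' "$mask"
```

Exporting the resulting value (here 5, binary 101) in the environment before starting irqbalance excludes those CPUs from balancing.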
For further details, see the man page:
$ man irqbalance
A.2. Tuna
Tuna allows you to control processor and scheduling affinity. This section covers the command line
interface, but a graphical interface with the same range of functionality is also available. Launch the
graphical utility by running tuna at the command line.
Tuna accepts a number of command line parameters, which are processed in sequence. The
following command distributes load across a four socket system.
tuna --socket 0 --isolate \
--thread my_real_time_app --move \
--irq serial --socket 1 --move \
--irq eth* --socket 2 --spread \
--show_threads --show_irqs
--gui
Starts the graphical user interface.
--cpus
Takes a comma-delimited list of CPUs to be controlled by Tuna. The list remains in effect
until a new list is specified.
--config_file_apply
Takes the name of a profile to apply to the system.
--config_file_list
Lists the pre-loaded profiles.
--cgroup
Used in conjunction with --show_threads. Displays the type of control group that
processes displayed with --show_threads belong to, if control groups are enabled.
Requires -P.
--affect_children
When specified, Tuna affects child threads as well as parent threads.
--filter
Disables the display of selected CPUs in --gui. Requires -c.
--isolate
Takes a comma-delimited list of CPUs. Tuna migrates all threads away from the CPUs
specified. Requires -c or -s.
--include
Takes a comma-delimited list of CPUs. Tuna allows all threads to run on the CPUs
specified. Requires -c or -s
--no_kthreads
When this parameter is specified, Tuna does not affect kernel threads.
--move
Moves selected entities to CPU-List. Requires -c or -s.
--priority
Specifies the scheduler policy and priority for a thread. Valid scheduler policies are OTHER,
FIFO, RR, BATCH, or IDLE.
When the policy is FIFO or RR, valid priority values are integers from 1 (lowest) to 99
(highest). The default value is 1. For example, tuna --threads 7861 --priority=RR:40 sets a policy of RR (round-robin) and a priority of 40 for thread 7861.
When the policy is OTHER, BATCH, or IDLE, the only valid priority value is 0, which is also
the default. Requires -t.
--show_threads
Show the thread list.
--show_irqs
Show the IRQ list.
--irqs
Takes a comma-delimited list of IRQs that Tuna affects. The list remains in effect until a new
list is specified. IRQs can be added to the list by using + and removed from the list by using
-.
--save
Saves the kernel thread schedules to the specified file.
--sockets
Takes a comma-delimited list of CPU sockets to be controlled by Tuna. This option takes
into account the topology of the system, such as the cores that share a single processor
cache, and that are on the same physical chip.
--threads
Takes a comma-delimited list of threads to be controlled by Tuna. The list remains in effect
until a new list is specified. Threads can be added to the list by using + and removed from
the list by using -.
--no_uthreads
Prevents the operation from affecting user threads.
--what_is
A.3. ethtool
The ethtool utility allows administrators to view and edit network interface card settings. It is useful
for observing the statistics of certain devices, such as the number of packets dropped by that device.
ethtool, its options, and its usage, are comprehensively documented on the man page.
$ man ethtool
A.4. ss
ss is a command-line utility that prints statistical information about sockets, allowing administrators
to assess device performance over time. By default, ss lists open non-listening TCP sockets that
have established connections, but a number of useful options are provided to help administrators
filter out statistics about specific sockets.
One commonly used command is ss -tmpie, which displays all TCP sockets (t), socket
memory usage (m), processes using the socket (p), internal TCP information (i), and detailed
socket information (e).
Red Hat recommends ss over netstat in Red Hat Enterprise Linux 7.
ss is provided by the iproute package. For more information, see the man page:
$ man ss
A.5. tuned
Tuned is a tuning daemon that can adapt the operating system to perform better under certain
workloads by setting a tuning profile. It can also be configured to react to changes in CPU and
network use and adjust settings to improve performance in active devices and reduce power
consumption in inactive devices.
To configure dynamic tuning behavior, edit the dynamic_tuning parameter in the
/etc/tuned/tuned-main.conf file. You can also configure the amount of time in seconds
between tuned checking usage and updating tuning details with the update_interval parameter.
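A minimal sketch of the relevant excerpt of /etc/tuned/tuned-main.conf follows; the values shown are illustrative assumptions, not recommendations.

```shell
# /etc/tuned/tuned-main.conf (excerpt)
# Enable dynamic tuning (0 disables it):
dynamic_tuning = 1
# Seconds between usage checks and tuning updates:
update_interval = 10
```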
For further details about tuned, see the man page:
$ man tuned
A.6. tuned-adm
tuned-adm is a command line tool that provides a number of different profiles to improve
performance in a number of specific use cases. It also provides a sub-command (tuned-adm
recommend) that assesses your system and outputs a recommended tuning profile. This sub-command is also used to set
the default profile for your system at install time, so it can be used to return to the default profile.
As of Red Hat Enterprise Linux 7, tuned-adm includes the ability to run any command as part of
enabling or disabling a tuning profile. This allows you to add environment specific checks that are
not available in tuned-adm, such as checking whether the system is the master database node
before selecting which tuning profile to apply.
Red Hat Enterprise Linux 7 also provides the include parameter in profile definition files, allowing
you to base your own tuned-adm profiles on existing profiles.
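As a sketch, a custom profile that builds on an existing one might look like the following. The profile name, path, and the overridden sysctl value are assumptions.

```shell
# /etc/tuned/myprofile/tuned.conf -- hypothetical child profile
[main]
include=throughput-performance

[sysctl]
# Override a single setting inherited from the parent profile:
vm.dirty_ratio = 30
```

The profile would then be activated with tuned-adm profile myprofile.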
The following tuning profiles are provided with tuned-adm and are supported in Red Hat
Enterprise Linux 7.
throughput-performance
A server profile focused on improving throughput. This is the default profile, and is
recommended for most systems.
This profile favors performance over power savings by setting intel_pstate and
min_perf_pct=100. It enables transparent huge pages, uses cpupower to set the
performance cpufreq governor, and sets the input/output scheduler to deadline. It also
sets kernel.sched_min_granularity_ns to 10 μs,
kernel.sched_wakeup_granularity_ns to 15 μs, and vm.dirty_ratio to 40%.
latency-performance
A server profile focused on lowering latency. This profile is recommended for latency-sensitive workloads that benefit from c-state tuning and the increased TLB efficiency of
transparent huge pages.
This profile favors performance over power savings by setting intel_pstate and
max_perf_pct=100. It enables transparent huge pages, uses cpupower to set the
performance cpufreq governor, and requests a cpu_dma_latency value of 1.
network-latency
A server profile focused on lowering network latency.
This profile favors performance over power savings by setting intel_pstate and
min_perf_pct=100. It disables transparent huge pages, and automatic NUMA balancing.
It also uses cpupower to set the performance cpufreq governor, and requests a
cpu_dma_latency value of 1. It also sets busy_read and busy_poll times to 50 μs,
and tcp_fastopen to 3.
network-throughput
A server profile focused on improving network throughput.
This profile favors performance over power savings by setting intel_pstate and
max_perf_pct=100 and increasing kernel network buffer sizes. It enables transparent
huge pages, and uses cpupower to set the performance cpufreq governor. It also sets
kernel.sched_min_granularity_ns to 10 μs,
kernel.sched_wakeup_granularity_ns to 15 μs, and vm.dirty_ratio to 40%.
virtual-guest
A profile focused on optimizing performance in Red Hat Enterprise Linux 7 virtual machines.
This profile favors performance over power savings by setting intel_pstate and
max_perf_pct=100. It also decreases the swappiness of virtual memory. It enables
transparent huge pages, and uses cpupower to set the performance cpufreq governor. It
also sets kernel.sched_min_granularity_ns to 10 μs,
kernel.sched_wakeup_granularity_ns to 15 μs, and vm.dirty_ratio to 40%.
virtual-host
A profile focused on optimizing performance in Red Hat Enterprise Linux 7 virtualization
hosts.
This profile favors performance over power savings by setting intel_pstate and
max_perf_pct=100. It also decreases the swappiness of virtual memory. This profile
enables transparent huge pages and writes dirty pages back to disk more frequently. It
uses cpupower to set the performance cpufreq governor. It also sets
kernel.sched_min_granularity_ns to 10 μs,
kernel.sched_wakeup_granularity_ns to 15 μs, kernel.sched_migration_cost
to 5 μs, and vm.dirty_ratio to 40%.
For detailed information about the power saving profiles provided with tuned-adm, see the Red Hat
Enterprise Linux 7 Power Management Guide, available from
http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/.
For detailed information about using tuned-adm, see the man page:
$ man tuned-adm
A.7. perf
The perf tool provides a number of useful commands, some of which are listed in this section. For
detailed information about perf, see the Red Hat Enterprise Linux 7 Developer Guide, available from
http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/, or refer to the man pages.
perf stat
This command provides overall statistics for common performance events, including
instructions executed and clock cycles consumed. You can use the option flags to gather
statistics on events other than the default measurement events. As of Red Hat
Enterprise Linux 6.4, it is possible to use perf stat to filter monitoring based on one or
more specified control groups (cgroups).
For further information, read the man page:
$ man perf-stat
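As a sketch, counting events for a specific control group might look like the following. The cgroup name mygroup is an assumption, and the -G option requires that the cgroup exist on a mounted cgroup hierarchy; the command typically requires root.

```shell
# Count cycles and instructions system-wide for 5 seconds,
# restricted to tasks in the cgroup named mygroup:
perf stat -e cycles,instructions -G mygroup -a sleep 5
```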
perf record
This command records performance data into a file which can be later analyzed using
perf report. For further details, read the man page:
$ man perf-record
perf report
This command reads the performance data from a file and analyzes the recorded data. For
further details, read the man page:
$ man perf-report
perf list
This command lists the events available on a particular machine. These events vary based
on the performance monitoring hardware and the software configuration of the system. For
further information, read the man page:
$ man perf-list
perf top
This command performs a similar function to the top tool. It generates and displays a
performance counter profile in realtime. For further information, read the man page:
$ man perf-top
perf trace
This command performs a similar function to the strace tool. It monitors the system calls
used by a specified thread or process and all signals received by that application.
Additional trace targets are available; refer to the man page for a full list:
$ man perf-trace
A.9. vmstat
Vmstat outputs reports on your system's processes, memory, paging, block input/output, interrupts,
and CPU activity. It provides an instantaneous report of the average of these events since the
machine was last booted, or since the previous report.
-a
Displays active and inactive memory.
-f
Displays the number of forks since boot. This includes the fork, vfork, and clone
system calls, and is equivalent to the total number of tasks created. Each process is
represented by one or more tasks, depending on thread usage. This display does not
repeat.
-m
Displays slab information.
-n
Specifies that the header will appear once, not periodically.
-s
Displays a table of various event counters and memory statistics. This display does not
repeat.
delay
The delay between reports in seconds. If no delay is specified, only one report is printed,
with the average values since the machine was last booted.
count
The number of times to report on the system. If no count is specified and delay is defined,
vmstat reports indefinitely.
-d
Displays disk statistics.
-p
Takes a partition name as a value, and reports detailed statistics for that partition.
-S
Defines the units output by the report. Valid values are k (1000 bytes), K (1024 bytes), m
(1000000 bytes), or M (1048576 bytes).
-D
Reports summary statistics about disk activity.
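Putting delay and count together, a typical invocation might look like this; the interval, count, and unit shown are arbitrary choices, not recommendations.

```shell
# Print five reports, one second apart, with memory shown in megabytes:
vmstat -S M 1 5
```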
For detailed information about the output provided by each output mode, see the man page:
$ man vmstat
A.10. x86_energy_perf_policy
The x86_energy_perf_policy tool allows administrators to define the relative importance of
performance and energy efficiency. It is provided by the kernel-tools package.
To view the current policy, run the following command:
# x86_energy_perf_policy -r
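To change the policy, pass one of the recognized policy names. This sketch uses performance; the command requires root.

```shell
# Bias the processor toward performance rather than energy savings:
x86_energy_perf_policy performance
# Other accepted values include normal and powersave.
```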
A.11. turbostat
The turbostat tool provides detailed information about the amount of time that the system spends in
different states. Turbostat is provided by the kernel-tools package.
By default, turbostat prints a summary of counter results for the entire system, followed by counter
results every 5 seconds, under the following headings:
pkg
The processor package number.
core
The processor core number.
CPU
The Linux CPU (logical processor) number.
%c0
The percentage of the interval for which the CPU retired instructions.
GHz
The average clock speed while the CPU was in the c0 state. When this number is higher
than the value in TSC, the CPU is in turbo mode.
TSC
The average clock speed over the course of the entire interval.
%c1, %c3, and %c6
The percentage of the interval for which the processor was in the c1, c3, or c6 state,
respectively.
%pc3 or %pc6
The percentage of the interval for which the processor was in the pc3 or pc6 state,
respectively.
Specify a different period between counter results with the -i option; for example, run turbostat -i 10 to print results every 10 seconds instead.
Note
Upcoming Intel processors may add additional c-states. As of Red Hat Enterprise Linux 7.0,
turbostat provides support for the c7, c8, c9, and c10 states.
A.12. numastat
The numastat tool is provided by the numactl package, and displays memory statistics (such as
allocation hits and misses) for processes and the operating system on a per-NUMA-node basis. The
default tracking categories for the numastat command are outlined as follows:
numa_hit
The number of pages that were successfully allocated to this node.
numa_miss
The number of pages that were allocated on this node because of low memory on the
intended node. Each numa_miss event has a corresponding numa_foreign event on
another node.
numa_foreign
The number of pages initially intended for this node that were allocated to another node
instead. Each numa_foreign event has a corresponding numa_miss event on another
node.
interleave_hit
The number of interleave policy pages successfully allocated to this node.
local_node
The number of pages successfully allocated on this node, by a process on this node.
other_node
The number of pages allocated on this node, by a process on another node.
Supplying any of the following options changes the displayed units to megabytes of memory
(rounded to two decimal places), and changes other specific numastat behaviors as described
below.
-c
Horizontally condenses the displayed table of information. This is useful on systems with a
large number of NUMA nodes, but column width and inter-column spacing are somewhat
unpredictable. When this option is used, the amount of memory is rounded to the nearest
megabyte.
-m
Displays system-wide memory usage information on a per-node basis, similar to the
information found in /proc/meminfo.
-n
Displays the same information as the original numastat command (numa_hit,
numa_miss, numa_foreign, interleave_hit, local_node, and other_node), with
an updated format, using megabytes as the unit of measurement.
-p pattern
Displays per-node memory information for the specified pattern. If the value for pattern is
composed of digits, numastat assumes that it is a numerical process identifier. Otherwise,
numastat searches process command lines for the specified pattern.
Command line arguments entered after the value of the -p option are assumed to be
additional patterns for which to filter. Additional patterns expand, rather than narrow, the
filter.
-s
Sorts the displayed data in descending order so that the biggest memory consumers
(according to the total column) are listed first.
Optionally, you can specify a node, and the table will be sorted according to the node
column. When using this option, the node value must follow the -s option immediately, as
shown here:
numastat -s2
Do not include white space between the option and its value.
-v
Displays more verbose information. Namely, process information for multiple processes will
display detailed information for each process.
-V
Displays numastat version information.
-z
Omits table rows and columns with only zero values from the displayed information. Note
that some near-zero values that are rounded to zero for display purposes will not be
omitted from the displayed output.
A.13. numactl
Numactl lets administrators run a process with a specified scheduling or memory placement policy.
Numactl can also set a persistent policy for shared memory segments or files, and set the processor
affinity and memory affinity of a process.
Numactl provides a number of useful options. This appendix outlines some of these options and
gives suggestions for their use, but is not exhaustive.
--hardware
Displays an inventory of available nodes on the system, including relative distances
between nodes.
--membind
Ensures that memory is allocated only from specific nodes. If there is insufficient memory
available in the specified location, allocation fails.
--cpunodebind
Ensures that a specified command and its child processes execute only on the specified
node.
--physcpubind
Ensures that a specified command and its child processes execute only on the specified
processor.
--localalloc
Specifies that memory should always be allocated from the local node.
--preferred
Specifies a preferred node from which to allocate memory. If memory cannot be allocated
from this specified node, another node will be used as a fallback.
For further details about these and other parameters, see the man page:
$ man numactl
A.14. numad
numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and
resource usage within a system in order to dynamically improve NUMA resource allocation and
management.
Note that when numad is enabled, its behavior overrides the default behavior of automatic NUMA
balancing.
To stop numad, run the following command:
# numad -i 0
Stopping numad does not remove the changes it has made to improve NUMA affinity. If system use
changes significantly, running numad again will adjust affinity to improve performance under the
new conditions.
To restrict numad management to a specific process, start it with the following options.
# numad -S 0 -p pid
-p pid
This option adds the specified pid to an explicit inclusion list. The process specified will not
be managed until it meets the numad process significance threshold.
-S 0
This sets the type of process scanning to 0, which limits numad management to explicitly
included processes.
For further information about available numad options, refer to the numad man page:
$ man numad
statistics can eventually contradict each other after large amounts of cross-node merging. As such,
numad can become confused about the correct amounts and locations of available memory, after the
KSM daemon merges many memory pages. KSM is beneficial only if you are overcommitting the
memory on your system. If your system has sufficient free memory, you may achieve higher
performance by turning off and disabling the KSM daemon.
A.15. OProfile
OProfile is a low overhead, system-wide performance monitoring tool provided by the oprofile
package. It uses the performance monitoring hardware on the processor to retrieve information about
the kernel and executables on the system, such as when memory is referenced, the number of
second-level cache requests, and the number of hardware interrupts received. OProfile is also able to
profile applications that run in a Java Virtual Machine (JVM).
OProfile provides the following tools. Note that the legacy opcontrol tool and the new operf tool
are mutually exclusive.
ophelp
Displays available events for the system's processor along with a brief description of each.
opimport
Converts sample database files from a foreign binary format to the native format for the
system. Only use this option when analyzing a sample database from a different
architecture.
opannotate
Creates annotated source for an executable if the application was compiled with debugging
symbols.
opcontrol
Configures which data is collected in a profiling run.
operf
Intended to replace opcontrol. The operf tool uses the Linux Performance Events
subsystem, allowing you to target your profiling more precisely, as a single process or
system-wide, and allowing OProfile to co-exist better with other tools using the performance
monitoring hardware on your system. Unlike opcontrol, no initial setup is required, and it
can be used without root privileges unless the --system-wide option is in use.
opreport
Retrieves profile data.
oprofiled
Runs as a daemon to periodically write sample data to disk.
Legacy mode (opcontrol, oprofiled, and post-processing tools) remains available, but is no
longer the recommended profiling method.
For further information about any of these commands, see the OProfile man page:
$ man oprofile
A.16. taskset
The taskset tool is provided by the util-linux package. It allows administrators to retrieve and set the
processor affinity of a running process, or launch a process with a specified processor affinity.
Important
taskset does not guarantee local memory allocation. If you require the additional
performance benefits of local memory allocation, Red Hat recommends using numactl
instead of taskset.
To set the CPU affinity of a running process, run the following command:
# taskset -c processors pid
Replace processors with a comma delimited list of processors or ranges of processors (for example,
1,3,5-7). Replace pid with the process identifier of the process that you want to reconfigure.
To launch a process with a specified affinity, run the following command:
# taskset -c processors -- application
Replace processors with a comma delimited list of processors or ranges of processors. Replace
application with the command, options and arguments of the application you want to run.
For more information about taskset, see the man page:
$ man taskset
C h arlie B o yle
R evisio n 10.08- 35
Wed Au g 6 2015
Added BTRFS information to 5.3.7.3
C h arlie B o yle
R evisio n 04 .7- 33
Wed Au g 5 2015
Added BTRFS compression information to 5.1.3
C h arlie B o yle
R evisio n 03.6 - 32
Mo n Au g 3 2015
Changed heading level for file systems in 5.1.3 File Systems
C h arlie B o yle
R evisio n 03.4 - 32
Wed Ju n 5 2015
Updated SSD information 5.5.1 part of 1189350.
C h arlie B o yle
R evisio n 03.4 - 31
Wed Ap r 8 2015
C h arlie B o yle
Updated VMSTAT parameters in 5.2.1 to reflect man page, BZ 1131829.
R evisio n 02.4 - 30
Wed Ap r 8 2015
Updated VMSTAT parameters to reflect man page, BZ 1131829.
C h arlie B o yle
R evisio n 02.4 - 29
T u e Ap r 7 2015
updated Tuna help, BZ 1131829.
C h arlie B o yle
R evisio n 02.4 - 28
T u e Ap r 7 2015
Changed Turbostat Ghz description, BZ 992461.
C h arlie B o yle
R evisio n 02.4 - 27
Fri Mar 27 2015
Corrected parameter setting for, BZ 1145906.
C h arlie B o yle
R evisio n 02.4 - 26
T h u Mar 26 2015
C h arlie B o yle
Corrected details of Section 4.3 Configuration tools, BZ 1122400.
R evisio n 0.3- 25
T h u Mar 26 2015
Corrected details of throughput performance, BZ 1122132.
C h arlie B o yle
R evisio n 0.3- 24
Mo n Feb 23 2015
Corrected details of confirming busy poll support, BZ 1080703.
Lau ra B ailey
R evisio n 0.3- 23
Building for RHEL 7.1 GA.
T u e Feb 17 2015
Lau ra B ailey
R evisio n 0.3- 22
T u e Feb 17 2015
Added busy poll support check, BZ 1080703.
Lau ra B ailey
R evisio n 0.3- 21
T h u Feb 05 2015
Lau ra B ailey
Noted new tuned profile parameter, cmdline, BZ 1130818.
Added note to the 'New in 7.1' section re. new discard parameter in swapon command, BZ 1130826.
76
R evisio n 0.3- 20
Fri Jan 09 2015
Fix error in pgrep command. BZ 1155253.
Improve the description of vm.swappiness. BZ 1148419.
C h arles B o yle
R evisio n 0.3- 19
Fri D ec 05 2014
Updating sort_order for splash page presentation.
Lau ra B ailey
R evisio n 0.3- 18
Wed N o v 26 2014
Added link to PCP article index for 7.1 Beta, BZ 1083387.
Lau ra B ailey
R evisio n 0.3- 15
Mo n N o v 24 2014
Added note on idle balancing changes, BZ 1131851.
Lau ra B ailey
R evisio n 0.3- 14
Wed N o v 19 2014
Lau ra B ailey
Updated huge page allocation details, BZ 1131367.
Added example of how to allocate huge pages during runtime, BZ 1131367.
R evisio n 0.3- 12
T h u N o v 13 2014
Lau ra B ailey
Added new default values for SHMALL and SHMMAX in RHEL 7.1, BZ 1131848.
R evisio n 0.3- 11
Mo n N o v 10 2014
Lau ra B ailey
Added note about support status of clustered allocation / bigalloc for ext4, BZ 794607.
R evisio n 0.3- 10
Fri O ct 31 2014
D ocumented per-node static huge pages, BZ 1131832.
Lau ra B ailey
R evisio n 0.3- 9
T u e Ju l 22 2014
Added description of latencytap.stp script, BZ 988155.
Lau ra B ailey
R evisio n 0.3- 7
T h u Ju n 26 2014
Lau ra B ailey
Corrected typographical error in the CPU chapter; thanks Jiri Hladky.
Removed references to tuned altering the I/O scheduler; thanks Jiri Hladky.
R evisio n 0.3- 5
Wed Ju n 11 2014
Lau ra B ailey
Added trailing slash to access.redhat.com links that wouldn't redirect.
R evisio n 0.3- 4
T u e Ju n 10 2014
Lau ra B ailey
Added interrupt and CPU banning details to irqbalance appendix BZ 852981.
R evisio n 0.3- 3
Rebuilding for RHEL 7.0 GA.
Mo n Ap r 07 2014
Lau ra B ailey
R evisio n 0.3- 2
Mo n Ap r 07 2014
Updated book structure for RT#294949.
Lau ra B ailey
R evisio n 0.2- 38
Mo n Ap r 07 2014
Added updated OProfile data, BZ 955882.
Removing outdated comments.
Lau ra B ailey
R evisio n 0.2- 34
Lau ra B ailey
Fri Ap r 04 2014
77
R evisio n 0.2- 27
Fri Mar 28 2014
Lau ra B ailey
Corrected busy_poll section based on feedback from Jeremy Eder, RT276607.
Corrected nohz_full section and added details based on feedback from Jeremy Eder, RT284423.
Added further detail to SystemTap sections, BZ 955884.
Added further detail to the SSD section, BZ 955900.
Added further detail on the tuned-adm recommend command, BZ 794623.
Corrected note about automatic NUMA balancing in features section, BZ 794612.
Corrected a number of terminology issues and example output issues regarding NUMA, including a
new image, BZ 1042800.
Corrected details about irqbalance in conjunction with RSS based on feedback from Jeremy Eder.
Revision 0.2-19    Fri Mar 21 2014    Laura Bailey
Added details about transparent huge pages to the Memory chapter, BZ 794621.
Corrected use of terms related to NUMA nodes, BZ 1042800.
Updated kernel limits, BZ 955894.
Drafted tickless kernel section, RT284423.
Drafted busy polling section, RT276607.
Updated information about file system barriers.
Removed unclear information about per-node huge page assignment. BZ 1079079 created to add more useful information in future.
Added details about solid state disks, BZ 955900.
Removed review markers.
Revision 0.2-14    Thu Mar 13 2014    Laura Bailey
Applied feedback from Jeremy Eder and Joe Mario.
Noted updates to Tuna GUI from BZ 955872.
Added details about SystemTap to the Networking chapter and Tools Reference appendix, BZ 955884.
Revision 0.2-12    Fri Mar 07 2014    Laura Bailey
Noted support for automatic NUMA migration, as per BZ 794612.
Applied additional feedback from Jeremy Eder.
Revision 0.2-11    Fri Mar 07 2014    Laura Bailey
Applied feedback from Jeremy Eder.
Revision 0.2-10    Mon Feb 24 2014    Laura Bailey
Corrected Ext4 information based on feedback from Luk Czerner (BZ #794607).
Revision 0.2-9    Mon Feb 17 2014    Laura Bailey
Corrected the CPU chapter based on feedback from Bill Gray.
Corrected and added to the Memory chapter and Tools Reference based on feedback from Bill Gray.
Revision 0.2-8    Mon Feb 10 2014    Laura Bailey
Revision 0.1-10    Wed Nov 27 2013    Laura Bailey
Pre-Beta customer build.
Revision 0.1-9    Tue Oct 15 2013    Laura Bailey
Minor corrections based on customer feedback (BZ #1011676).
Revision 0.1-7    Mon Sep 09 2013    Laura Bailey
Merged new content from RHEL 6.5.
Applied editor feedback.
Revision 0.1-6    Wed May 29 2013    Laura Bailey
Updated ext4 file system limits (BZ #794607).
Corrected theoretical maximum of a 64-bit file system.
Added the New Features section to track performance-related changes.
Changed default I/O scheduler from cfq to deadline (BZ #794602).
Added draft content for BTRFS tuning (BZ #794604).
Updated XFS section to provide clearer recommendations about directory block sizes, and updated XFS supported limits (BZ #794616).
Revision 0.1-2    Thu Jan 31 2013
Updated and published as RHEL 7 draft.
Revision 0.1-1    Wed Jan 16 2013    Laura Bailey
Branched from the RHEL 6.4 version of this document.