Learn Ansible Quickly RHCE
I dedicate this book to my late friend, Mostafa. You will never be forgotten.
CONTENTS IN DETAIL
Acknowledgments
Introduction
About the Author
Why this Book?
How to use this Book
Contacting Me
Don’t Forget
Understanding variable precedence
Gathering and showing facts
Creating custom facts
Capturing output with registers
Knowledge Check
Chapter 9: Ansible Roles
Understanding Ansible Roles
Role directory structure
Storing and locating roles
Using roles in playbooks
Using Ansible Galaxy for Ready-Made Roles
Searching for roles
Getting role information
Installing and using roles
Using a requirements file to install multiple roles
Creating Custom Roles
Managing Order of Task Execution
Knowledge Check
Task 9: Scheduling Tasks
Task 10: Archiving Files
Task 11: Creating Custom Facts
Task 12: Creating Custom Roles
Task 13: Software Repositories
Task 14: Using Conditionals
Task 15: Installing Software
Acknowledgments
I am very thankful to everyone who’s supported me in the process of creating
this book.
First and foremost, I would like to thank Abhishek Prakash who served as the
main editor of the book. Without Abhishek, this book would be full of random
errors!
I am also very grateful for the support I have received from the Linux Foundation; I was the recipient of the 2016 & 2020 LiFT scholarship awards and was provided with free training courses from the Linux Foundation that have benefited me tremendously in my career.
A big shoutout also goes to all readers on Linux Handbook and my 160,000+
students on Udemy. You guys are all awesome. You gave me the motivation to
write this book and bring it to life.
Last but not least, thanks to Linus Torvalds, the creator of Linux. You have
changed the lives of billions on this earth for the better. God bless you!
Introduction
Learn Ansible Quickly is a fully practical hands-on guide for learning Ansible. It
will get you up and running with Ansible in no time.
Ahmed holds two BSc degrees in Computer Science and Mathematics from the
University of Regina. He also holds the following certifications:
• Red Hat Certified Engineer (RHCE).
• Red Hat Certified System Administrator (RHCSA).
• Linux Foundation Certified System Administrator (LFCS).
• AWS Certified DevOps Engineer - Professional.
• AWS Certified Solutions Architect - Associate.
• Azure DevOps Engineer Expert.
• Azure Solutions Architect Expert.
• Cisco Certified Network Associate Routing & Switching (CCNA).
To get the most out of this book, you should read it in chronological order, as every chapter prepares you for the next; start with the first chapter, then the second, and so on until you reach the finish line. Also, you need to write and run every playbook in the book yourself, do not just read an Ansible playbook!
Contacting Me
There is nothing I like more in this life than talking about Ansible, DevOps, and
Linux; you can reach me at my personal email [email protected]. I get
a lot of emails every day, so please include Learn Ansible Quickly or #LAQ in the
subject of your email.
Don't Forget
I would appreciate any reviews or feedback, so please don’t forget to leave a review
after reading the book. Cheers!
Chapter 1: Hello Ansible
In this chapter, you will get to understand what Ansible is all about and how it
differs from other automation and configuration management tools.
Next, you will create the Ansible environment that we are going to
use throughout the book to showcase all of Ansible's components and tools. Finally, you
will learn how to install Ansible on RHEL (Red Hat Enterprise Linux), CentOS,
and Ubuntu.
What is Ansible?
Ansible is an open-source configuration management, software provisioning, and
application deployment tool that makes automating your application deployments
and IT infrastructure operation very simple.
Ansible is very lightweight, easy to configure, and not resource hungry because it
doesn't need an agent to run (it is agentless), unlike other automation tools such as
Puppet, which is agent based and a bit complex to configure.
This explains why Ansible is growing in popularity every day and becoming the go-to
automation tool for many enterprises.
I created one RHEL 8 (Red Hat Enterprise Linux 8) virtual machine to serve as the
control node. A control node is, as its name suggests, a server that is used to
control other remote hosts (managed nodes).
I created three CentOS 8 virtual machines for managed nodes: node1, node2, and
node3. I also created an Ubuntu 18.04 virtual machine for the last managed node.
Figure 1: Our Ansible Setup
I don’t have enough resources on my computer to create all these virtual machines
without my computer crashing. So, I have used Microsoft Azure for all the virtual
machines I have created as you can see in Figure 2:
Needless to say, you are free to use any other cloud service provider like AWS, GCP
(Google Cloud Platform), Linode, DigitalOcean, UpCloud, etc. You are also free
to create virtual machines locally on your computer using virtualization software
like VirtualBox, VMware Player, or VMware Fusion (macOS).
Installing Ansible
Ansible relies on SSH and Python to do all the automation magic, so you only
need to install Ansible on the control node and make sure that OpenSSH and
Python are installed on both the control and the managed nodes.
Long story short, you don’t need to have Ansible installed on the managed nodes!
Checking the Linux version information, you can see I am running RHEL 8.2, which I am
going to use as the control node:
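On my setup, that check looks like this (the exact command and release string are an assumption; your output will vary):

[root@control ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.2 (Ootpa)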
To get Ansible installed on a RHEL 8 system, you first have to register your system
with the subscription-manager command:
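A minimal form of the command (it prompts interactively for your credentials):

[root@control ~]# subscription-manager register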
You will be prompted for a username and a password. If you don't
have a Red Hat account, you can create an account and obtain a free trial at the
following link: https://access.redhat.com/products/red-hat-enterprise-linux/evaluation
You would then need to attach the new subscription with the following command:
[root@control ~]# subscription-manager attach --auto
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64
Status: Subscribed
Product Name: Red Hat Enterprise Linux for x86_64 - Extended Update Support
Status: Subscribed
Notice that you could have registered and attached the subscription in a single
command:
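For example, a combined register-and-attach would look something like this:

[root@control ~]# subscription-manager register --auto-attach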
Now we have access to all the RHEL 8 repositories. You can list all the available
Ansible repositories by running the following command:
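One way to do that (filtering the full repo list for Ansible entries is my own choice here):

[root@control ~]# subscription-manager repos --list | grep ansible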
Now find the newest Ansible version repository and enable it. At the time of writing
this, ansible-2.9 is the latest version, so I am going to enable the
ansible-2.9-for-rhel-8-x86_64-rpms repository with the yum-config-manager command as follows:
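The enable command would be along these lines (yum-config-manager is provided by the yum-utils package):

[root@control ~]# yum-config-manager --enable ansible-2.9-for-rhel-8-x86_64-rpms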
You can now verify that the Ansible repository is indeed enabled by listing all the
enabled repositories on your system:
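For example:

[root@control ~]# yum repolist enabled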
rhel-8-for-x86_64-appstream-eus-rhui-rpms Red Hat Enterprise Linux 8 ...
rhel-8-for-x86_64-appstream-rpms Red Hat Enterprise Linux 8 ...
rhel-8-for-x86_64-baseos-eus-rhui-rpms Red Hat Enterprise Linux 8 ...
rhel-8-for-x86_64-baseos-rpms Red Hat Enterprise Linux 8 ...
All the preliminary work is done. You can now finally install Ansible:
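The install itself is a single yum command:

[root@control ~]# yum install -y ansible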
After the installation is complete, you can verify that Ansible is indeed installed
by running the command:
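For example:

[root@control ~]# ansible --version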
Awesome! You have now successfully installed Ansible on RHEL 8. I am sure you
may be thinking that was a lengthy process!
On the bright side, there will be no internet access on the exam, which means your
control system will come equipped with all the repositories that you will need, so
you won't have to worry about using the subscription manager.
On CentOS, you can install and enable the EPEL repo by installing the epel-release package
as follows:
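A sketch of the CentOS route (run as root on the CentOS node):

[root@node1 ~]# yum install -y epel-release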
[root@node1 ~]# yum install -y ansible
Keep in mind that we installed Ansible on one of the managed nodes here (node1)
only for learning purposes; you only need to install Ansible on the control node.
On Ubuntu, you can add and enable the ansible-2.9 PPA repository using the following commands:
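Something along these lines should work on Ubuntu (the exact PPA name is an assumption; adjust it to the release you want):

root@node4:~# apt-add-repository --yes --update ppa:ansible/ansible
root@node4:~# apt install -y ansible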
This takes us to the end of our first chapter. In the next chapter, you are going
to learn how to configure Ansible and get to run some really cool Ad-Hoc Ansible
commands.
Knowledge Check
If you haven’t already done so; Install the latest Ansible version on your RHEL 8
control node.
Chapter 2: Running Ad-Hoc Commands
In the first chapter, you got acquainted with Ansible and learned how to install it.
In this chapter, you will learn how to manage static inventory in Ansible. You will
also understand various Ansible configuration settings.
Furthermore, you will explore a few Ansible modules and get to run Ansible
ad-hoc commands.
For this reason, it’s recommended that you create a dedicated Ansible user with
sudo privileges (to all commands) on all hosts (control and managed hosts).
Remember, Ansible uses SSH and Python to do all the dirty work behind the scenes,
so here are the four steps you would have to follow after installing Ansible:

1. Create a dedicated Ansible user on all hosts.
2. Grant that user sudo privileges on all hosts.
3. Generate an SSH key pair for that user on the control node.
4. Copy the user's public SSH key to all managed hosts.
So, without further ado, let’s start with creating a new user named elliot on all
hosts:
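A quick sketch, run as root on each host:

# useradd elliot
# passwd elliot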
After setting elliot’s password on all hosts, you can move to step 2: grant
elliot sudo privileges to all commands without a password by adding the following
line to the /etc/sudoers file:
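The sudoers entry would look like this:

elliot ALL=(ALL) NOPASSWD: ALL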
Now, login as user elliot on your control node and generate an SSH key pair:
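For example (you can press ENTER to accept the defaults, or set a passphrase):

[elliot@control ~]$ ssh-keygen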
Finally, you can copy elliot’s public SSH key to all managed hosts using the
ssh-copy-id command as follows:
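One invocation per managed host, along these lines:

[elliot@control ~]$ ssh-copy-id node1
[elliot@control ~]$ ssh-copy-id node2
[elliot@control ~]$ ssh-copy-id node3
[elliot@control ~]$ ssh-copy-id node4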
You should now be able to SSH into all managed nodes without being prompted for
a password; you will only be asked to enter an SSH passphrase (if you didn’t leave it
empty, ha-ha).
Building your Ansible inventory
An Ansible inventory file is basically a file that contains a list of servers, groups
of servers, or IP addresses that references the hosts that you want to be managed
by Ansible (managed nodes).
The /etc/ansible/hosts file is the default inventory file. I will now show you how
to create your own inventory files in Ansible.
Now, let’s make a new Ansible project directory named plays in /home/elliot,
which you will use to store all your Ansible-related things (playbooks, inventory
files, roles, etc.) that you will create from this point onwards:
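A sketch of creating the directory and a myhosts inventory file (the group layout here matches the examples that follow):

[elliot@control ~]$ mkdir /home/elliot/plays
[elliot@control ~]$ cd /home/elliot/plays
[elliot@control plays]$ cat myhosts
[test]
node1
node2

[prod]
node3
node4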
Notice that everything you create from this point forward will be on
the control node.
You can now run the following ansible command to list all your hosts in the myhosts
inventory file:
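For example:

[elliot@control plays]$ ansible all -i myhosts --list-hosts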
The -i option was used to specify the myhosts inventory file. If you omit the -i
option, Ansible will look for hosts in the /etc/ansible/hosts inventory file instead.
Keep in mind that I am using hostnames here; all the nodes (VMs) I have
created on Azure are on the same subnet, and I don’t have to worry about DNS as
it’s handled by Azure.
If you don’t have a working DNS server, you can add your nodes' IP address/hostname
entries in /etc/hosts; Figure 3 is an example:
Figure 3: /etc/hosts
[prod]
node3
node4
You can list the hosts in the prod group by running the following command:
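For example:

[elliot@control plays]$ ansible prod -i myhosts --list-hosts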
1. all → contains all the hosts in the inventory
2. ungrouped → contains all the hosts that are not a member of any group
(aside from all).
Let’s add an imaginary host node5 to the myhosts inventory file to demonstrate
the ungrouped group:
node5

[test]
node1
node2
[prod]
node3
node4
Notice that I added node5 to the very beginning (and not the end), otherwise, it
would be considered a member of the prod group.
Now you can run the following command to list all the ungrouped hosts:
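For example:

[elliot@control plays]$ ansible ungrouped -i myhosts --list-hosts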
You can also create a group (parent) that contains subgroups (children). Take a
look at the following example:
[web_dev]
node1

[web_prod]
node2
[db_dev]
node3
[db_prod]
node4
[development:children]
web_dev
db_dev
[production:children]
web_prod
db_prod
The development group contains all the hosts that are in web_dev plus all the
members that are in db_dev. Similarly, the production group contains all the
hosts that are in web_prod plus all the members that are in db_prod.
Configuring Ansible
In this section, you will learn about the most important Ansible configuration
settings. Throughout the book, I will show you other configuration settings as
the need arises.
The ansible --version command will show you which configuration file you are
currently using:
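Output would be along these lines (the version and paths will differ on your system):

[elliot@control plays]$ ansible --version
ansible 2.9.x
  config file = /etc/ansible/ansible.cfg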
The two most important sections that you need to define in your Ansible
configuration file are:
1. [defaults]
2. [privilege_escalation]
In the [defaults] section, here are the most important settings you need to be aware
of:
• forks → specifies the number of hosts that Ansible can manage/process in
parallel; the default is 5.
• host_key_checking → specifies whether you want to turn on/off SSH host
key checking; the default is True.
In the [privilege_escalation] section, you can configure the following settings:
• become → specifies whether to allow privilege escalation; the default is
False.
• become_method → specifies the privilege escalation method; the default is sudo.
• become_user → specifies the user you become through privilege escalation;
the default is root.
• become_ask_pass → specifies whether to ask for the privilege escalation
password; the default is False.
Keep in mind, you don’t need to commit any of these settings to memory. They
are all documented in /etc/ansible/ansible.cfg.
Now create your own ansible.cfg configuration file in your Ansible project directory
/home/elliot/plays and set the following settings (as shown in Figure 4):
Figure 4: /home/elliot/plays/ansible.cfg
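A minimal ansible.cfg consistent with the settings discussed above (the specific values are assumptions; adjust the inventory and user names to your setup):

[defaults]
inventory = myhosts
remote_user = elliot
host_key_checking = False

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False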
Now run the ansible --version command one more time; you should see that your
new configuration file is now in effect:
Running AdHoc Commands in Ansible
Until this point, you have really just been installing, setting up your environment,
and configuring Ansible. Now, the real fun begins!
An Ansible ad-hoc command is a great tool that you can use to run a single task
on one or more managed nodes. A typical Ansible ad-hoc command follows the
general syntax:
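The general shape is:

ansible <host-pattern> -m <module> [-a '<module options>'] [-i <inventory>]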
The easiest way to understand how Ansible ad-hoc commands work is simply to
run one! So, go ahead and run the following ad-hoc command:
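For instance, a command-module ad-hoc that shows node1's uptime:

[elliot@control plays]$ ansible node1 -m command -a "uptime"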
I was prompted to enter my ssh key passphrase and then the server uptime of node1
was displayed! Now, check Figure 5 to help you understand each element of the
ad-hoc command you just ran:
You would probably have guessed it by now: Ansible modules are reusable,
standalone scripts that can be used by the Ansible API, or by the ansible or
ansible-playbook commands.
The command module is one of the many modules that Ansible has to offer. You
can run the ansible-doc -l command to see a list of all the available Ansible
modules:
Currently, there are 3,387 Ansible modules available, and more are added every day!
You can pass any command you wish to run as an option to the Ansible command
module.
If you don’t have an empty SSH key passphrase (just like me), then you would
have to run ssh-agent to avoid the unnecessary headache of being prompted for a
passphrase every single time Ansible tries to access your managed nodes:
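For example:

[elliot@control plays]$ eval $(ssh-agent)
[elliot@control plays]$ ssh-add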
Testing Connectivity
You may want to test if Ansible can connect to all your managed nodes before
getting into the more serious tasks; for this, you can use the ping module and
specify all your managed hosts as follows:
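For example:

[elliot@control plays]$ ansible all -m ping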
"ping": "pong"
}
As you can see from all the SUCCESS messages in the output, everything works. Notice that the
Ansible ping module doesn’t need any options. Some Ansible modules require
options and some do not, just like Linux commands.
If you want to know how to use a specific Ansible module, you can run ansible-doc
followed by the module name.
For example, you can view the description of the ping module and how to use it
by running:
This will open up the ping module documentation page as shown in Figure 6:
When reading module documentation, pay special attention to whether any option
is prefixed by the equal sign (=). If so, it’s a mandatory option that you
must include.
Also, if you scroll all the way down, you can see some examples of how to run the
ad-hoc commands or Ansible playbooks (that we will discuss later).
Command vs. Shell vs. Raw Modules
There are three Ansible modules that people often confuse with one another; these
are:
1. command
2. shell
3. raw
Those three modules achieve the same purpose: they run commands on the
managed nodes. But there are key differences that separate the three modules.
You can’t use piping or redirection with the command module. For example, the
following ad-hoc command will result in an error:
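For example, this pipeline fails under the command module (the lscpu pipeline is taken from the surrounding discussion):

[elliot@control plays]$ ansible node2 -m command -a "lscpu | head -n 5"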
That’s because the command module doesn’t support pipes or redirection. You
can use the shell module instead if you want to use pipes or redirection. Run the
same command again, but this time, use the shell module instead:
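The shell-module version:

[elliot@control plays]$ ansible node2 -m shell -a "lscpu | head -n 5"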
Works like a charm! It successfully displayed the first five lines of the lscpu
command output on node2.
Ansible uses SSH and Python scripts behind the scenes to do all the magic. The
raw module, however, just uses SSH and bypasses the Ansible module subsystem. This
way, the raw module would still work on a managed node even if Python
is not installed there.
root@node4:/usr/bin# mkdir hide
root@node4:/usr/bin# mv python* hide/
Now check what happens if I run an Ansible ad-hoc command with the shell or command
module targeting node4:
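For example (the exact command is an assumption; any command-module task against node4 would fail the same way with Python hidden):

[elliot@control plays]$ ansible node4 -m command -a "cat /etc/os-release"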
I get errors! Now I will try to achieve the same task, but this time using the
raw module:
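The raw-module version of the same sketch:

[elliot@control plays]$ ansible node4 -m raw -a "cat /etc/os-release"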
UBUNTU_CODENAME=bionic
Shared connection to node4 closed.
As you can see, the raw module was the only one of the three modules to carry
out the task successfully. Now I will go back and fix the mess I made on node4:
root@node4:/usr/bin/hide# mv * ..
Figure 7 summarizes the different use cases for the three modules:
Alright! This takes us to the end of the second chapter. In the next chapter, you
are going to learn how to create and run Ansible playbooks.
Knowledge Check
Create a bash script named adhoc-cmds.sh that will run a few ad-hoc commands
on your managed nodes.
3. Create the file /tmp/hello.txt with the contents “Hello, Friend!” on node3.
Hint: Use the shell or copy modules to create the /tmp/hello.txt file.
Solution to the exercise is provided at the end of the book.
Chapter 3: Ansible Playbooks
In the previous chapter, you learned how to use Ansible ad-hoc commands to run a
single task on your managed hosts. In this chapter, you will learn how to automate
multiple tasks on your managed hosts by creating and running Ansible playbooks.
To better understand the difference between Ansible ad-hoc commands and Ansible
playbooks, you can think of ad-hoc commands as Linux commands and
playbooks as bash scripts.
Ansible ad-hoc commands are ideal for tasks that are not executed frequently,
such as getting servers' uptime, retrieving system information, etc.
On the other hand, Ansible playbooks are ideal for automating complex tasks like
system patches, application deployments, firewall configurations, user management,
etc.
Please note that I have included all the playbooks, scripts, and files in this book in
this GitHub repository: https://github.com/kabary/rhce8
You should also be aware that YAML files must have either a .yaml or .yml
extension. I personally prefer .yml because it’s less typing, and I am lazy.
Also, YAML is indentation sensitive. Two-space indentation is the recommended
indentation to use in YAML; however, YAML will accept whatever indentation
scheme a file uses as long as it’s consistent.
It is beyond annoying to keep hitting two spaces on your keyboard, so do yourself
a favor and include the following line in your ~/.vimrc file:
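A common choice for that line (an assumption; tune it to taste):

autocmd FileType yaml setlocal ts=2 sw=2 et ai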
This will convert the tabs into two spaces whenever you are working on a YAML
file.
Now let’s create your first playbook. In your project directory, create a file named
first-playbook.yml that has the following contents:
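A playbook consistent with the description below (the play name is my own; the task name matches the run output later in this section):

---
- name: first play
  hosts: all
  tasks:
    - name: create a new file
      file:
        path: /tmp/foo.conf
        mode: 0664
        owner: elliot
        state: touch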
This playbook will run on all hosts and uses the file module to create a file named
/tmp/foo.conf; you also set the mode: 0664 and owner: elliot module options
to specify the file permissions and the owner of the file. Finally, you set the state:
touch option to make sure the file gets created if it doesn’t already exist.
To run the playbook, you can use the ansible-playbook command followed by the
playbook filename:
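For example:

[elliot@control plays]$ ansible-playbook first-playbook.yml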
TASK [create a new file] ***************
changed: [node4]
changed: [node3]
changed: [node1]
changed: [node2]
The output of the playbook run is pretty self-explanatory. For now, pay special
attention to changed=1 in the PLAY RECAP summary which means that one
change was executed successfully on the managed node.
Let’s run the following ad-hoc command to verify that the file /tmp/foo.conf is
indeed created on all the managed hosts:
Notice that the following ad-hoc command will accomplish the same task as the
first-playbook.yml playbook:
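Something along these lines:

[elliot@control plays]$ ansible all -m file -a "path=/tmp/foo.conf mode=0664 owner=elliot state=touch"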
To read more about the file module, check its Ansible documentation page.
Running multiple plays with Ansible Playbook
You have only created one play that contains one task in the first-playbook.yml
playbook. A playbook can contain multiple plays, and each play can in turn contain
multiple tasks.
Let’s create a playbook named multiple-plays.yml that has the following content:
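A sketch consistent with the description below (the play names and the exact package installed in the first play are assumptions):

---
- name: first play
  hosts: all
  tasks:
    - name: install tree
      package:
        name: tree
        state: latest
    - name: archive the log files
      archive:
        path: /var/log
        dest: /tmp/logs.tar.gz
        format: gz

- name: second play
  hosts: node4
  tasks:
    - name: install git
      apt:
        name: git
        state: latest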
Notice that I used the package module in the first play as it is the generic module
to manage packages, and it autodetects the default package manager on the
managed nodes. I used the apt module in the second play as I am only running it on
an Ubuntu host (node4).
The yum and dnf modules also exist, and they work on CentOS and RHEL
systems.
I also used the archive module to create a gzip compressed archive /tmp/logs.tar.gz
that contains all the files in the /var/log directory.
Go ahead and run the multiple-plays.yml playbook:
Everything looks good. You can quickly check if the /tmp/logs.tar.gz archive
exists on all nodes by running the following ad-hoc command:
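For example:

[elliot@control plays]$ ansible all -m command -a "ls -l /tmp/logs.tar.gz"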
I also recommend you look at the following Ansible documentation pages and check
their examples sections:
Verifying your playbooks
Although I have already shown you the steps to run Ansible playbooks, it is always
a good idea to verify your playbook before actually running it. This ensures that
your playbook is free of potential errors.
You can use the --syntax-check option to check if your playbook has syntax errors:
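For example:

[elliot@control plays]$ ansible-playbook --syntax-check multiple-plays.yml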
You may also want to use the --check option to do a dry run of your playbook
before actually running the playbook:
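For example:

[elliot@control plays]$ ansible-playbook --check multiple-plays.yml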
Notice that doing a dry run of the playbook will not commit any change on the
managed nodes.
You can use the --list-hosts option to list the hosts of each play in your playbook:
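For example:

[elliot@control plays]$ ansible-playbook multiple-plays.yml --list-hosts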
playbook: multiple-plays.yml
play #1 (all): first play TAGS: []
pattern: ['all']
hosts (4):
node4
node2
node1
node3
You can also list the tasks of each play in your playbook by using the --list-tasks
option:
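For example:

[elliot@control plays]$ ansible-playbook multiple-plays.yml --list-tasks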
playbook: multiple-plays.yml
You can also check the ansible-playbook man page for a comprehensive list of
options.
Reusing tasks and playbooks
You may find yourself writing multiple playbooks that all share a common list of
tasks. In this case, it’s better to create a file that contains a list of all the common
tasks and then you can reuse them in your playbooks.
To demonstrate, let’s create a file named group-tasks.yml that contains the
following tasks:
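The file contains nothing but a task list; a hypothetical example (the group and user names here are invented for illustration):

- name: create developers group
  group:
    name: developers
- name: create user sam
  user:
    name: sam
    groups: developers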
Now you can use the import_tasks module to run all the tasks in group-tasks.yml
in your first playbook as follows:
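The usage would be along these lines:

  tasks:
    - import_tasks: group-tasks.yml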
You can also use the import_playbook module to reuse an entire playbook. For
example, you can create a new playbook named reuse-playbook.yml that has the
following content:
---
- hosts: all
  tasks:
    - name: Reboot the servers
      reboot:
        msg: Server is rebooting ...
Also notice that you can only import a playbook on a new play level; that is, you
can’t import a play within another play.
You can also use the include module to reuse tasks and playbooks. For example,
you can replace the import_playbook statement with the include statement as
follows:
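For example (the playbook name here is just for illustration):

- include: first-playbook.yml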
The only difference is that the import statements are pre-processed at the time
playbooks are parsed. On the other hand, include statements are processed as they
are encountered during the execution of the playbook. So, in summary, import is
static while include is dynamic.
Running selective tasks and plays
You can choose not to run a whole playbook and instead may want to run specific
task(s) or play(s) in a playbook. To do this, you can use tags.
For example, you can tag the install git task in the multiple-plays.yml playbook
as follows:
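The tagged task would look something like this:

    - name: install git
      apt:
        name: git
        state: latest
      tags: git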
Now you can use the --tags option followed by the tag name git to only run the
install git task:
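For example:

[elliot@control plays]$ ansible-playbook multiple-plays.yml --tags git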
As you can see, the first two plays were skipped and only the install git task did
run. You can also see changed=0 in the PLAY RECAP and that’s because git
is already installed on node4.
Alright! This takes us to the end of this chapter. In the next chapter, you are going
to learn how to work with Ansible variables, facts, and registers.
Knowledge Check
Create a playbook named lab3.yml that will accomplish the following tasks:
Chapter 4: Ansible Variables, Facts, and Registers
There will always be a lot of variance across your managed systems. For this
reason, you need to learn how to work with Ansible variables.
In this chapter, you will learn how to define and reference variables in Ansible. You
will also learn how to use Ansible facts to retrieve information on your managed
nodes.
Furthermore, you will also learn how to use registers to capture task output.
For example, you can define a fav_color variable and set its value to yellow as
follows:
---
- name: Working with variables
hosts: all
vars:
fav_color: yellow
Now, how do you use (reference) the fav_color variable? Ansible uses the Jinja2
templating system to work with variables. You will learn all about Jinja2 in
Chapter 7, but for now you just need to know the very basics.
To get the value of the fav_color variable; you need to surround it by a pair of
curly brackets as follows:
Notice that if your variable is the first element (or only element) in the line, then
using quotes is mandatory as follows:
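For example, both of these lines are valid, but the quotes in the second are required because the variable starts the value:

    msg: My favorite color is {{ fav_color }}
    msg: "{{ fav_color }} is my favorite color"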
Now let’s write a playbook named variables-playbook.yml that puts all this
together:
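A sketch of the playbook (the task name is my own):

---
- name: Working with variables
  hosts: all
  vars:
    fav_color: yellow
  tasks:
    - name: Show favorite color
      debug:
        msg: My favorite color is {{ fav_color }}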
I have used the debug module along with the msg module option to print the value
of the fav_color variable.
Now run the playbook and you shall see your favorite color displayed as follows:
vars:
port_nums: [21,22,23,25,80,443]
You could have also defined port_nums the following equivalent way:
vars:
port_nums:
- 21
- 22
- 23
- 25
- 80
- 443
vars:
users:
bob:
username: bob
uid: 1122
shell: /bin/bash
lisa:
username: lisa
uid: 2233
shell: /bin/sh
There are two different ways you can use to access dictionary elements:
• dict_name['key'] → users['bob']['shell']
• dict_name.key → users.bob.shell
Now you can edit the variables-playbook.yml playbook to show lists and
dictionaries in action:
users:
bob:
username: bob
uid: 1122
shell: /bin/bash
lisa:
username: lisa
uid: 2233
shell: /bin/sh
tasks:
- name: Show 2nd item in port_nums
debug:
msg: SSH port is {{ port_nums[1] }}
You can now run the playbook to display the second element in port_nums and
show bob’s uid:
Including external variables
Just like you can import (or include) tasks in a playbook, you can do the same
thing with variables as well. That is, in a playbook, you can include variables
defined in an external file.
To demonstrate, let’s create a file named myvars.yml that contains our port_nums
list and users dictionary:
port_nums: [21,22,23,25,80,443]

users:
bob:
username: bob
uid: 1122
shell: /bin/bash
lisa:
username: lisa
uid: 2233
shell: /bin/sh
Keep in mind that vars_files preprocesses and loads the variables right at the start
of the playbook. You can also use the include_vars module to dynamically load
your variables in your playbook:
[elliot@control plays]$ cat variables-playbook.yml
---
- name: Working with variables
hosts: node1
tasks:
- name: Load the variables
include_vars: myvars.yml
For example, the following greet.yml playbook asks the running user to enter his
name and then displays a personalized greeting message:
---
- name: Greet the user
  hosts: node1
  vars_prompt:
    - name: username
      prompt: What is your name
      private: no
  tasks:
    - name: Greet the user
      debug:
        msg: Hello {{ username }}
Notice I used private: no so that you can see your input on the screen as you type
it; by default, it’s hidden.
TASK [Greet the user] **************************
ok: [node1] => {
"msg": "Hello Elliot"
}
To demonstrate, edit your inventory file so that your managed nodes are grouped
in the following three groups:
[webservers]
node2
node3
[dbservers]
node4
Now, to create variables that are specific to your managed nodes, first you need
to create a directory named host_vars. Then, inside host_vars, you can create
variable files with filenames that correspond to your nodes' hostnames as follows:
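For example (the message text is an assumption):

[elliot@control plays]$ mkdir host_vars
[elliot@control plays]$ echo 'message: Welcome to node1' > host_vars/node1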
Now let’s create a playbook named motd.yml that demonstrates how host_vars
work:
---
- hosts: all
  tasks:
    - name: Set motd = value of message variable.
      copy:
        content: "{{ message }}"
        dest: /etc/motd
I used the copy module to copy the contents of the message variable into the
/etc/motd file on all nodes. After running the playbook, you should see that
the contents of /etc/motd have been updated on all nodes with the corresponding
message value:
Awesome! Similarly, you can create a group_vars directory and then include
all group-related variables in a file whose name corresponds to the group name as
follows:
I will let you create a playbook that runs on all nodes; each node will install the
package that is set in its group's pkg variable.
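For example, a hypothetical pkg variable for the webservers group (the package value is invented for illustration):

[elliot@control plays]$ mkdir group_vars
[elliot@control plays]$ echo 'pkg: httpd' > group_vars/webservers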
If the same variable is set at different levels; the most specific level gets precedence.
For example, a variable that is set on a play level takes precedence over the same
variable set on a host level (host_vars).
Furthermore, a variable that is set on the command line using --extra-vars
takes the highest precedence; that is, it will overwrite anything else.
Now let’s run the playbook while using the -e (--extra-vars) option to set the
value of the fav_distro variable to “CentOS” from the command line:
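Something along these lines (the playbook name is an assumption):

[elliot@control plays]$ ansible-playbook variables-playbook.yml -e "fav_distro=CentOS"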
Notice how the command line’s fav_distro value “CentOS” took precedence over
the play’s fav_distro value “Ubuntu”.
Gathering and showing facts
You can retrieve or discover variables that contain information about your managed
hosts. These variables are called facts and Ansible uses the setup module to gather
these facts. The IP address on one of your managed nodes is an example of a fact.
You can run the following ad-hoc command to gather and show all the facts on
node1:
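Since facts are gathered by the setup module, the ad-hoc command would be:

```shell
[elliot@control plays]$ ansible node1 -m setup
```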
This is only a fraction of all the facts related to node1 that you are going to see
displayed on your terminal. Notice how the facts are stored in dictionaries or lists
and they all belong to the ansible_facts dictionary.
[elliot@control plays]$ ansible-playbook motd.yml
You can turn off facts gathering by setting gather_facts boolean to false right in
your play header as follows:
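A sketch of such a play header (the play name is illustrative):

```yaml
---
- name: Update motd
  hosts: all
  gather_facts: false
```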
If you run the motd.yml playbook again, it will skip facts gathering:
The same way you show a variable's value, you can also use the debug module to show a fact's value.
The following show-facts.yml playbook displays the values of a few facts on node1:
tasks:
  - name: display node1 ipv4 address
    debug:
      msg: IPv4 address is {{ ansible_facts.default_ipv4.address }}
Creating custom facts
You may want to create your own custom facts. To do this, you can either use the
set_fact module to add temporary facts or the /etc/ansible/facts.d directory
to add permanent facts on your managed nodes.
I am going to show you how to add permanent facts to your managed nodes. It's a
three-step process:
So, first, let’s create a cool.fact file on your control node that includes some cool
facts:
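Consistent with the ad-hoc output shown later in this section, cool.fact would be an INI-style file along these lines:

```ini
[fun]
matrix=movie
octupus='8 legs'
kiwi=fruit
```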
Notice that your facts filename must have the .fact extension.
For the second step, you are going to use the file module to create the
/etc/ansible/facts.d directory on the managed node(s). And lastly, for the third step, you
are going to use the copy module to copy the cool.fact file from the control node to
the managed node(s).
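Putting the second and third steps together, the playbook might look like this (the playbook and task names are assumptions):

```yaml
---
- name: Add custom facts to node1
  hosts: node1
  tasks:
    - name: Create the custom facts directory
      file:
        path: /etc/ansible/facts.d
        state: directory

    - name: Copy cool.fact to node1
      copy:
        src: cool.fact
        dest: /etc/ansible/facts.d/cool.fact
```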
Now run the playbook:
The cool facts are now permanently part of node1 facts; you can verify with the
following ad-hoc command:
"ansible_local": {
"cool": {
"fun": {
"kiwi": "fruit",
"matrix": "movie",
"octupus": "'8 legs'"
}
}
}
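An ad-hoc command like the following, filtering on ansible_local, would produce output like the above:

```shell
[elliot@control plays]$ ansible node1 -m setup -a "filter=ansible_local"
```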
Capturing output with registers
Some tasks will not show any output when running a playbook. For instance,
running commands on your managed nodes using the command, shell, or raw
modules will not display any output when running a playbook.
You can use a register to capture the output of a task and save it to a variable.
This allows you to make use of a task output elsewhere in a playbook by simply
addressing the registered variable.
The playbook starts by running the uptime command on the proxy group hosts
(node1) and registers the command output to the server_uptime variable.
Then, you use the debug module along with the var option to inspect the
server_uptime variable. Notice that you don't need to surround the variable
with curly brackets here.
Finally, the last task in the playbook shows the output (stdout) of the registered
variable server_uptime.
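Based on this description, the playbook would look something like this (the play and task names are assumptions):

```yaml
---
- name: Capture output with a register
  hosts: proxy
  tasks:
    - name: Run the uptime command
      command: uptime
      register: server_uptime

    - name: Inspect the server_uptime variable
      debug:
        var: server_uptime

    - name: Show only the command output
      debug:
        msg: "{{ server_uptime.stdout }}"
```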
Alright! This takes us to the end of this chapter. In the next chapter, you are going
to learn how to use loops in Ansible.
Knowledge Check
Create a playbook named lab4.yml that will accomplish the following tasks:
1. Displays the output of the “free -h” command on all managed nodes.
Chapter 5: Ansible Loops
You may sometimes want to repeat a task multiple times. For example, you may
want to create multiple users, start/stop multiple services, or change ownership on
several files on your managed hosts.
In this chapter, you will learn how to use Ansible loops to repeat a task multiple
times without having to rewrite the whole task over and over again.
Notice that I use the item variable with Ansible loops. The task would run five
times which is equal to the number of elements in the prime list.
On the first run, the item variable is set to the first element in the prime list
(2). On the second run, it is set to the second element in the prime list (3), and
so on.
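The playbook being described looks something like this:

```yaml
---
- name: Print the prime list
  hosts: node1
  vars:
    prime: [2, 3, 5, 7, 11]
  tasks:
    - name: Show all the elements of the prime list
      debug:
        msg: "{{ item }}"
      loop: "{{ prime }}"
```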
Go ahead and run the playbook to see all the elements of the prime list displayed:
ok: [node1] => (item=3) => {
"msg": 3
}
ok: [node1] => (item=5) => {
"msg": 5
}
ok: [node1] => (item=7) => {
"msg": 7
}
ok: [node1] => (item=11) => {
"msg": 11
}
Now let’s apply loops to a real life application. For example, you can create an
add-users.yml playbook that would add multiple users on all the hosts in the
dbservers host group:
Notice that I also used the dotted notation item.username and item.pass to access
the key values inside the hashes/dictionaries of the dbusers list.
It is also worth noting that I used the password_hash('sha512') filter to encrypt
the user passwords with the sha512 hashing algorithm, as the user module
doesn't allow setting unencrypted user passwords.
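Based on this description, add-users.yml might look like the following (the usernames and passwords are illustrative):

```yaml
---
- name: Add multiple users
  hosts: dbservers
  vars:
    dbusers:
      - username: brad
        pass: pass1
      - username: david
        pass: pass2
      - username: jason
        pass: pass3
  tasks:
    - name: Add users from the dbusers list
      user:
        name: "{{ item.username }}"
        password: "{{ item.pass | password_hash('sha512') }}"
      loop: "{{ dbusers }}"
```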
RHCE Exam Tip: You will have access to the docs.ansible.com page
on your exam. It is a very valuable resource, especially the "Frequently
Asked Questions" section, where you will find numerous how-to questions with
answers and explanations.
You can verify that the three users are added by following up with an Ansible ad-hoc
command:
Looping over dictionaries
You can only use loops with lists. You will get an error if you try to loop over a
dictionary.
As you can see in the error message, it explicitly says that it requires a list.
To fix this error; you can use the dict2items filter to convert a dictionary to a list.
So, in the print-dict.yml playbook, edit the loop line
and apply the dict2items filter as follows:
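For example, if the playbook was looping over a dictionary variable named users (the variable name is an assumption), the change would be:

```yaml
# before -- fails because users is a dictionary:
#   loop: "{{ users }}"
# after -- dict2items turns it into a list of key/value pairs:
loop: "{{ users | dict2items }}"
```

Each item then exposes item.key and item.value.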
Looping over a range of numbers
You can use the range() function along with the list filter to loop over a range of
numbers.
For example, the following task would print all the numbers from 0 → 9:
You can also start your range at a number other than zero. For example, the
following task would print all the numbers from 5 → 14:
By default, the stride is set to 1. However, you can set a different stride.
For example, the following task would print all the even IP addresses in the 192.168.1.x
subnet:
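Sketches of the three tasks just described (the exact range bounds used for the IP example are an assumption):

```yaml
- name: Print 0 through 9
  debug:
    msg: "{{ item }}"
  loop: "{{ range(10) | list }}"

- name: Print 5 through 14
  debug:
    msg: "{{ item }}"
  loop: "{{ range(5, 15) | list }}"

- name: Print the even IP addresses in 192.168.1.x
  debug:
    msg: "192.168.1.{{ item }}"
  loop: "{{ range(2, 255, 2) | list }}"
```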
Looping over inventories
You can use the Ansible built-in groups variable to loop over all your inventory
hosts or just a subset of it. For instance, to loop over all your inventory hosts; you
can use:
If you want to loop over all the hosts in the webservers group, you can use:
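The two loop expressions look like this:

```yaml
loop: "{{ groups['all'] }}"
# or, for a single group:
loop: "{{ groups['webservers'] }}"
```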
To see how this works in a playbook; take a look at the following loop-inventory.yml
playbook:
This playbook tests if node1 is able to ping all other hosts in your inventory. Go
ahead and run the playbook:
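A sketch of loop-inventory.yml consistent with this description:

```yaml
---
- name: Ping every host in the inventory
  hosts: node1
  tasks:
    - name: Ping all other hosts
      command: "ping -c 1 {{ item }}"
      loop: "{{ groups['all'] }}"
```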
If you get any errors; this would mean that your managed hosts are not able to ping
(reach) each other.
ok: [node1] => (item=3) => {
"msg": "7 seconds remaining ..."
}
ok: [node1] => (item=4) => {
"msg": "6 seconds remaining ..."
}
ok: [node1] => (item=5) => {
"msg": "5 seconds remaining ..."
}
ok: [node1] => (item=6) => {
"msg": "4 seconds remaining ..."
}
ok: [node1] => (item=7) => {
"msg": "3 seconds remaining ..."
}
ok: [node1] => (item=8) => {
"msg": "2 seconds remaining ..."
}
ok: [node1] => (item=9) => {
"msg": "1 seconds remaining ..."
}
I will end this chapter on the happy birthday note! In the next chapter, you are
going to learn how to add decision-making skills to your Ansible playbooks.
Knowledge Check
Create a playbook named lab5.yml that will only run on the localhost and display
all the even numbers that are between 20 and 40 (Inclusive).
Chapter 6: Decision Making in Ansible
In this chapter, you will learn how to add decision making skills to your Ansible
playbooks.
skipping: [node2]
skipping: [node3]
ok: [node4] => {
"msg": "This is an Ubuntu Server."
}
In the playbook output, notice how TASK [Detect Ubuntu Servers] skipped
the first three nodes as they are all running CentOS and only ran on node4 as it
is running Ubuntu.
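The task being described would use a when condition on the distribution fact, along these lines:

```yaml
- name: Detect Ubuntu Servers
  debug:
    msg: "This is an Ubuntu Server."
  when: ansible_facts['distribution'] == "Ubuntu"
```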
The playbook first starts by saving the contents of the /etc/os-release file into the
os_release register variable. Then the second task displays the message "Running
CentOS ..." only if the word 'CentOS' is found in the os_release standard output.
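A sketch of those two tasks:

```yaml
- name: Save the contents of /etc/os-release
  command: cat /etc/os-release
  register: os_release

- name: Detect CentOS Servers
  debug:
    msg: "Running CentOS ..."
  when: "'CentOS' in os_release.stdout"
```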
[elliot@control plays]$ ansible-playbook centos-servers.yml
Notice how TASK [Detect CentOS Servers] only ran on the first three nodes
and skipped node4 (Ubuntu).
    reboot:
      msg: "Server is rebooting ..."
    when: >
      ansible_facts['distribution'] == "CentOS" and
      ansible_facts['distribution_major_version'] == "8"
You can also use the logical or operator to run a task if any of the conditions is
true. For example, the following task would reboot servers that are running either
CentOS or RedHat:
tasks:
  - name: Reboot CentOS and RedHat Servers
    reboot:
      msg: "Server is rebooting ..."
    when: >
      ansible_facts['distribution'] == "CentOS" or
      ansible_facts['distribution'] == "RedHat"
For example, the following print-even.yml playbook will print all the even num-
bers in the range(1,11):
Go ahead and run the playbook to see the list of all even numbers in the range(1,11):
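The task in print-even.yml combines a loop with a when condition, roughly as follows:

```yaml
- name: Print even numbers
  debug:
    msg: "{{ item }}"
  loop: "{{ range(1, 11) | list }}"
  when: item % 2 == 0
```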
msg: "You are free!"
when: weekend and not on_call | bool
Notice that I used the bool filter here to convert the on_call value to its boolean
equivalent (no → false).
Also, you should be well aware that not false is true, and so the whole condition
evaluates to true in this case; you are free!
You can also test to see whether a variable has been set or not; for example, the
following task will only run if the car variable is defined:
tasks:
  - name: Run only if you got a car
    debug:
      msg: "Let's go on a road trip ..."
    when: car is defined
The following task uses the fail module to fail if the keys variable is undefined:
tasks:
  - name: Fail if you got no keys
    fail:
      msg: "This play requires some keys"
    when: keys is undefined
Handling Exceptions with Blocks
Now let’s talk about handling exceptions in Ansible.
The playbook runs on the webservers group hosts and has one block with the
name Install and start Apache that includes two tasks:
1. Install httpd
2. Start and enable httpd
The first task Install httpd uses the yum module to install the httpd Apache
package. The second task Start and enable httpd uses the service module to
start httpd and enable it to start on boot.
Notice that the playbook has a third task that doesn’t belong to the Install and
start Apache block.
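Based on this description, install-apache.yml would look something like this (the third task shown is illustrative, as the book's exact task was elided):

```yaml
---
- name: Install and start Apache
  hosts: webservers
  tasks:
    - name: Install and start Apache
      block:
        - name: Install httpd
          yum:
            name: httpd
            state: present

        - name: Start and enable httpd
          service:
            name: httpd
            state: started
            enabled: yes

    - name: A task outside the block   # illustrative third task
      debug:
        msg: "I don't belong to the block."
```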
Now go ahead and run the playbook to install and start httpd on the webservers
nodes:
[elliot@control plays]$ ansible-playbook install-apache.yml
You can also follow up with an ad-hoc command to verify that httpd is indeed up
and running:
Nov 03 19:35:13 node3 systemd[1]: Starting The Apache HTTP Server...
Nov 03 19:35:13 node3 systemd[1]: Started The Apache HTTP Server.
Nov 03 19:35:13 node3 httpd[47122]: Server configured, listening on: port 80
node2 | CHANGED | rc=0 >>
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service;
enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-11-03 19:35:13 UTC; 1min 37s ago
Docs: man:httpd.service(8)
Main PID: 43695 (httpd)
Status: "Running, listening on: port 80"
Tasks: 213 (limit: 11935)
Memory: 25.1M
CGroup: /system.slice/httpd.service
├─43695 /usr/sbin/httpd -DFOREGROUND
├─43696 /usr/sbin/httpd -DFOREGROUND
├─43697 /usr/sbin/httpd -DFOREGROUND
├─43698 /usr/sbin/httpd -DFOREGROUND
└─43699 /usr/sbin/httpd -DFOREGROUND
You can use the rescue section to include all the tasks that you want to run in case
one or more tasks in the block has failed.
tasks:
  - name: Handling error example
    block:
      - name: run a command
        command: uptime
      - name: run a bad command
        command: blabla   # illustrative failing command; the book's exact command was elided
      - name: run another command
        command: free -h  # illustrative; the original third task was elided
    rescue:
      - name: Runs when the block failed
        debug:
          msg: "Block failed; let's try to fix it here ..."
Notice how the second task in the block, run a bad command, generates an error,
and in turn the third task in the block never gets a chance to run. The tasks inside
the rescue section will run because the second task in the block has failed.
You can also use ignore_errors: yes to ensure that Ansible continues executing
the tasks in the playbook even if a task has failed:
tasks:
  - name: Handling error example
    block:
      - name: run a command
        command: uptime
      - name: run a bad command
        command: blabla   # illustrative failing command; elided in the original
        ignore_errors: yes
      - name: run another command
        command: free -h  # illustrative third task; elided in the original
    rescue:
      - name: This will not run
        debug:
          msg: "Errors were ignored! ... not going to run."
Notice that in this example, you ignored the errors in the second task, run a bad
command, in the block, and that's why the third task was able to run. Also, the
rescue section does not run because you ignored the error in the second task in the block.
You can also add an always section to a block. Tasks in the always section will
always run, regardless of whether the block has failed or not.
tasks:
  - name: Handling Errors Example
    block:
      - name: run a command
        command: uptime
      - name: run a bad command
        command: blabla   # illustrative failing command; elided in the original
    rescue:
      - name: Runs when the block fails
        debug:
          msg: "Block failed! let's try to fix it here ..."
    always:
      - name: This will always run
        debug:
          msg: "Whether the block has failed or not ... I will always run!"
PLAY RECAP *********************************************************************
node1: ok=4 changed=1 unreachable=0 failed=0 skipped=0
As you can see; the rescue section did run as the second task in the block has failed
and you didn’t ignore the errors. Also, the always section did (and will always)
run.
handlers:
  - name: add elliot
    user:
      name: elliot
      groups: engineers
      append: yes
The first task Create engineers group creates the engineers group and also no-
tifies the add elliot handler.
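The task that notifies the handler would look like this:

```yaml
tasks:
  - name: Create engineers group
    group:
      name: engineers
    notify: add elliot
```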
Notice that creating the engineers group caused a change on node1 and, as a result,
triggered the add elliot handler.
You can also run a quick ad-hoc command to verify that user elliot is indeed a
member of the engineers group:
Ansible playbooks and modules are idempotent, which means that if the desired
configuration is already in place on the managed nodes, Ansible will not apply it again.
ok: [node1] => {
"msg": "I am just another task."
}
As you can see, the Create engineers group task didn't cause or report a
change this time because the engineers group already exists on node1; as a
result, the add elliot handler did not run.
handlers:
  - name: handler1
    debug:
      msg: "I can handle dates"
Notice how the first task Run the date command triggers handler1. Now go
ahead and run the playbook:
TASK [Run the uptime command] *******************
changed: [node1]
Both tasks Run the date command and Run the uptime command reported
changes, and handler1 was triggered. You can argue that running the date and
uptime commands doesn't really change anything on the managed node, and you are
totally right!
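A sketch of the tasks driving this example (per the earlier description, the first task notifies handler1):

```yaml
tasks:
  - name: Run the date command
    command: date
    notify: handler1

  - name: Run the uptime command
    command: uptime
```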
Now let’s edit the playbook to stop the Run the date command task from
reporting changes:
handlers:
  - name: handler1
    debug:
      msg: "I can handle dates"
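One standard way to stop a task from reporting changes (likely what the book used here, though the edited playbook was elided) is changed_when: false:

```yaml
tasks:
  - name: Run the date command
    command: date
    changed_when: false   # never report this task as "changed"
    notify: handler1
```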
TASK [Run the date command] *********************
ok: [node1]
As you can see, the Run the date command task didn’t report a change this
time and as a result, handler1 was not triggered.
handlers:
  - name: restart ssh
    service:
      name: sshd
      state: restarted
Notice I used the blockinfile module to insert multiple lines of text into the
/etc/ssh/sshd_config configuration file. The Edit SSH Configuration task
also triggers the restart ssh handler upon change.
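The task would look roughly like this (the inserted configuration lines are illustrative):

```yaml
tasks:
  - name: Edit SSH Configuration
    blockinfile:
      path: /etc/ssh/sshd_config
      block: |
        MaxAuthTries 4
        Banner /etc/motd
    notify: restart ssh
```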
[elliot@control plays]$ ansible-playbook configure-ssh.yml
Everything looks good! Now let's quickly take a look at the last few lines of the
/etc/ssh/sshd_config file:
Amazing! Exactly as you expected it to be. Keep in mind that if you rerun the
configure-ssh.yml playbook, Ansible will not edit (or append) the /etc/ssh/sshd_config
file. You can try it for yourself!
I also recommend you take a look at the blockinfile and lineinfile documentation
pages to understand the differences and the use of each module:
[elliot@control plays]$ ansible-doc blockinfile
[elliot@control plays]$ ansible-doc lineinfile
Alright! This takes us to the end of the sixth chapter. In the next chapter, you are
going to learn how to use Jinja2 Templates to deploy files and configure services
dynamically in Ansible.
Knowledge Check
Create a playbook named lab6.yml that will accomplish the following tasks:
Chapter 7: Jinja2 Templates
In the previous chapter, you learned how to do simple file modifications by using
the blockinfile or lineinfile Ansible modules.
In this chapter, you will learn how to use the Jinja2 templating engine to carry out
more involved and dynamic file modifications.
You will learn how to access variables and facts in Jinja2 templates. Furthermore,
you will learn how to use conditional statements and loop structures in Jinja2.
Let's create a templates directory to keep things cleaner and more organized:
Now create your first Jinja2 template file with the name index.j2:
Notice that Jinja2 template filenames must end with the .j2 extension.
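A minimal index.j2 consistent with the variables referenced later in this section (the exact wording is an assumption):

```jinja
Welcome to {{ inventory_hostname }}!
{{ webserver_message }}
```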
Now go one step back to your project directory and create the following check-apache.yml:
      service:
        name: httpd
        state: started
Note that the httpd package was already installed in a previous chapter.
In this playbook, you first make sure Apache is running in the first task Start
httpd. Then you use the template module in the second task Create index.html
using Jinja2 to process and transfer the index.j2 Jinja2 template file you created
to the destination /var/www/html/index.html.
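Putting the two tasks together, check-apache.yml would look something like this (the webserver_message value is illustrative):

```yaml
---
- name: Check Apache
  hosts: webservers
  vars:
    webserver_message: "I am serving this page with Ansible!"
  tasks:
    - name: Start httpd
      service:
        name: httpd
        state: started

    - name: Create index.html using Jinja2
      template:
        src: templates/index.j2
        dest: /var/www/html/index.html
```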
Everything looks good so far; let’s run a quick ad-hoc Ansible command to check
the contents of index.html on the webservers nodes:
Amazing! Notice how Jinja2 was able to pick up the values of the inventory_hostname
built-in variable and the webserver_message variable in your playbook.
You can also use the curl command to see if you get a response from both web-
servers:
Accessing facts in Jinja2
You can access facts in Jinja2 templates the same way you access facts from your
playbook.
To demonstrate, change to your templates directory and create the info.j2 Jinja2
file with the following contents:
hostname={{ ansible_facts['hostname'] }}
fqdn={{ ansible_facts['fqdn'] }}
ipaddr={{ ansible_facts['default_ipv4']['address'] }}
distro={{ ansible_facts['distribution'] }}
distro_version={{ ansible_facts['distribution_version'] }}
nameservers={{ ansible_facts['dns']['nameservers'] }}
totalmem={{ ansible_facts['memtotal_mb'] }}
freemem={{ ansible_facts['memfree_mb'] }}
Notice that info.j2 accesses eight different facts. Now go back to your project
directory and create the following server-info.yml playbook:
Notice that you are creating /tmp/server-info.txt on all hosts based on the
info.j2 template file. Go ahead and run the playbook:
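A sketch of server-info.yml:

```yaml
---
- name: Collect server information
  hosts: all
  tasks:
    - name: Create server-info.txt using Jinja2
      template:
        src: templates/info.j2
        dest: /tmp/server-info.txt
```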
TASK [Create server-info.txt using Jinja2] ********
changed: [node4]
changed: [node1]
changed: [node3]
changed: [node2]
Everything looks good! Now let’s run a quick ad-hoc command to inspect the
contents of the /tmp/server-info.txt file on one of the nodes:
hostname=node1
fqdn=node1.linuxhandbook.local
ipaddr=10.0.0.5
distro=CentOS
distro_version=8.2
nameservers=['168.63.129.16']
totalmem=1896
freemem=1087
As you can see, Jinja2 was able to access and process all the facts.
Conditional statements in Jinja2
You can use the if conditional statement in Jinja2 for testing various conditions
and comparing variables. This allows you to determine your file template execution
flow according to your test conditions.
{% set selinux_status = ansible_facts['selinux']['status'] %}
{% if selinux_status == "enabled" %}
"SELINUX IS ENABLED"
{% elif selinux_status == "disabled" %}
"SELINUX IS DISABLED"
{% else %}
"SELINUX IS NOT AVAILABLE"
{% endif %}
The first statement in the template creates a new variable, selinux_status, and sets
its value to ansible_facts['selinux']['status'].
Notice how the if statement in Jinja2 mimics Python's if statement; just don't
forget to use {% endif %}.
Now go back to your project directory and create the following selinux-status.yml
playbook:
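A sketch of selinux-status.yml (the template filename is an assumption):

```yaml
---
- name: Check SELinux status
  hosts: all
  tasks:
    - name: Create selinux.out using Jinja2
      template:
        src: templates/selinux.j2
        dest: /tmp/selinux.out
```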
Go ahead and run the playbook:
From the playbook output; you can see that SELinux is enabled on both node1
and node3. I disabled SELinux on node2 before running the playbook and
node4 doesn’t have SELinux installed because Ubuntu uses AppArmor instead
of SELinux.
Finally, you can run the following ad-hoc command to inspect the contents of
selinux.out on all the managed hosts:
[elliot@control plays]$ ansible all -m command -a "cat /tmp/selinux.out"
node4 | CHANGED | rc=0 >>
"SELINUX IS DISABLED"
"SELINUX IS ENABLED"
"SELINUX IS ENABLED"
Looping in Jinja2
You can use the for statement in Jinja2 to loop over items in a list, range, etc. For
example, the following for loop will iterate over the numbers in the range(1,11)
and will hence display the numbers from 1 →10:
{% for i in range(1,11) %}
Number {{ i }}
{% endfor %}
Notice how the for loop in Jinja2 mimics the syntax of Python’s for loop; again
don’t forget to end the loop with {% endfor %}.
Now let’s create a full example that shows off the power of for loops in Jinja2.
Change to your templates directory and create the following hosts.j2 template file:
Notice that here you used a new built-in special (magic) variable, hostvars, which is
basically a dictionary that contains all the hosts in the inventory and the variables
assigned to them.
You iterated over all the hosts in your inventory and then, for each host, you
displayed the value of three variables:
1. hostvars[host].ansible_facts.default_ipv4.address
2. hostvars[host].ansible_facts.fqdn
3. hostvars[host].ansible_facts.hostname
Notice also that you must include those three variables on the same line side by
side to match the format of the /etc/hosts file.
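Based on that description, hosts.j2 would look like this:

```jinja
{% for host in groups['all'] %}
{{ hostvars[host].ansible_facts.default_ipv4.address }} {{ hostvars[host].ansible_facts.fqdn }} {{ hostvars[host].ansible_facts.hostname }}
{% endfor %}
```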
Now go back to your projects directory and create the following local-dns.yml
playbook:
    - name: Update /etc/hosts using Jinja2
      template:
        src: hosts.j2
        dest: /etc/hosts
Everything looks good so far; now run the following ad-hoc command to verify that
/etc/hosts file is properly updated on node1:
I hope you now realize the power of Jinja2 templates in Ansible. In the next
chapter, you are going to learn how to protect sensitive information and files using
Ansible Vault.
Knowledge Check
Create a Jinja2 template file named motd.j2 that has the following contents:
Then create a playbook named lab7.yml that will edit the /etc/motd file on all
managed hosts based on the motd.j2 Jinja2 template file.
Chapter 8: Ansible Vault
There are many situations where you would want to use sensitive information in
Ansible. For instance, you may want to set a user's password, transfer certificates or
keys, etc.
Furthermore, you will learn how to use encrypted variables and files in your play-
books.
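The chapter's first example starts by creating an encrypted file named secret.txt with the ansible-vault create command:

```shell
[elliot@control plays]$ ansible-vault create secret.txt
New Vault password:
Confirm New Vault password:
```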
It will first prompt you for a vault password that you can use whenever you want
to open the file later. After you enter the password, it will open the file
with your default file editor, so you can go ahead and insert the following line:
Now save and exit then try to view the contents of the secret.txt file:
As you can see, you can’t really view the line you just inserted because the file is
now encrypted with Ansible vault.
If you want to view the original content of a vault encrypted file; you can use the
ansible-vault view command as follows:
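For example:

```shell
[elliot@control plays]$ ansible-vault view secret.txt
Vault password:
```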
You can also store your vault password in a separate file. For example, if your vault
password for the encrypted file secret.txt was L!n*Xh#N%b,Ook, you can store
it in a separate file, secret-vault.txt, as follows:
Now you can use the --vault-password-file option followed by the path to your
vault password file along with the ansible-vault view command to view the con-
tents of secret.txt as follows:
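The two commands would look like this:

```shell
[elliot@control plays]$ echo 'L!n*Xh#N%b,Ook' > secret-vault.txt
[elliot@control plays]$ ansible-vault view --vault-password-file secret-vault.txt secret.txt
```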
As you can see; I didn’t get prompted for a vault password this time around.
To modify the contents of a vault encrypted file; you can use the ansible-vault
edit command as follows:
Decrypting encrypted files
Let’s create another encrypted file named secret2.txt:
You may decide later that the information in secret2.txt is no longer sensitive. If
you want to decrypt a vault encrypted file; you can use the ansible-vault decrypt
command as follows:
As you can see, the contents of the file secret2.txt are no longer encrypted.
Changing an encrypted file's password
You can also encrypt existing files using the ansible-vault encrypt command;
for instance, you can encrypt the unencrypted secret2.txt file again as follows:
You may later find that the vault password of secret2.txt has been compromised. In this
case, you can use the ansible-vault rekey command to change the vault password
as follows:
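The two commands being described:

```shell
[elliot@control plays]$ ansible-vault encrypt secret2.txt
[elliot@control plays]$ ansible-vault rekey secret2.txt
```

For rekey, you are prompted for the old vault password first and then for the new one.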
Notice that you needed to enter the old vault password before entering the new one.
Decrypting content at run time in playbooks
You can use vault encrypted files in your Ansible playbooks. To demonstrate, let’s
first create a new encrypted file named web-secrets.yml:
Now create a new Ansible playbook named vault-playbook.yml with the following
contents:
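Consistent with the extended example later in this section, vault-playbook.yml would look something like:

```yaml
---
- name: Accessing Vaults in Playbooks
  hosts: node2
  vars_files:
    - web-secrets.yml
  tasks:
    - name: Show secret1 value
      debug:
        msg: "{{ secret1 }}"
```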
Now run the playbook again but pass the --ask-vault-pass option this time around:
[elliot@control plays]$ ansible-playbook --ask-vault-pass vault-playbook.yml
Vault password:
As you can see, the playbook prompted you for the vault password and then ran
successfully.
You could also have used the --vault-password-file option if you had stored your vault
password in a file:
You can also use the --vault-id option to access multiple encrypted files in your
playbook. To demonstrate, let’s create another encrypted file named db-secrets.yml:
- name: Accessing Vaults in Playbooks
hosts: node2
vars_files:
- web-secrets.yml
- db-secrets.yml
tasks:
- name: Show secret1 value
debug:
msg: "{{ secret1 }}"
To run the playbook; you have to provide the vault password for each encrypted
file using the --vault-id option as follows:
The playbook prompts you to enter vault password for both files: web-secrets.yml
and db-secrets.yml and then runs successfully. You could have also used files to
store your vault passwords; in this case, this is the command you need to run your
playbook:
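The two invocations would look roughly like this (the vault-id labels and password filenames are assumptions):

```shell
# prompt for each file's password:
[elliot@control plays]$ ansible-playbook --vault-id web-secrets.yml@prompt \
    --vault-id db-secrets.yml@prompt vault-playbook.yml

# or read each password from a file:
[elliot@control plays]$ ansible-playbook --vault-id web-secrets.yml@web-pass.txt \
    --vault-id db-secrets.yml@db-pass.txt vault-playbook.yml
```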
I hope you have now learned how to protect sensitive information and how to use
encrypted files in your playbooks by using Ansible vault. In the next chapter, you
will learn how to create and use Ansible roles.
Knowledge Check
Using the vault password: 7uZAcMBVz
Create a vault encrypted file named mysecret.txt that has the following contents:
Chapter 9: Ansible Roles
So far, you have been creating Ansible playbooks to automate certain tasks on your
managed nodes. There is a huge chance that someone else has already designed an
Ansible solution to the problem/task you are trying to solve, and that's exactly what
Ansible roles are all about.
In this chapter, you will understand how roles are structured in Ansible. You will
also learn to use ready-made roles from Ansible Galaxy.
Furthermore, you will learn to create your own custom Ansible roles.
1. defaults → contains default variables for the role that are meant to be easily
overwritten.
2. vars → contains standard variables for the role that are not meant to be
overwritten in your playbook.
3. tasks → contains the main list of tasks that the role executes.
4. handlers → contains handlers, which may be used within or outside the role.
5. templates → contains Jinja2 templates that are accessed in the role tasks.
6. files → contains static files which are accessed in the role tasks.
7. tests → may contain an optional inventory file, as well as test.yml playbook
that can be used to test the role.
8. meta → contains role metadata such as author information, license, depen-
dencies, etc.
Keep in mind that a role may have all the aforementioned directories or just a subset
of them. In fact, you can define an empty role that has zero directories, although it
wouldn't be very useful!
Storing and locating roles
By default, Ansible will look for roles in two locations:
1. In a roles directory, relative to the playbook file.
2. In the /etc/ansible/roles directory.
You can, however, choose to store your roles in a different location; if you do so,
you have to set the roles_path configuration option in your Ansible configuration
file (ansible.cfg).
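For example (the path is illustrative):

```ini
# ansible.cfg
[defaults]
roles_path = /home/elliot/roles
```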
For instance, to statically import roles in a playbook, you can use the roles keyword
in the playbook header as follows:
---
- name: Including roles statically
  hosts: all
  roles:
    - role1
    - role2
To dynamically import roles; you can use the include_role module as follows:
---
- name: Including roles dynamically
  hosts: all
  tasks:
    - name: import dbserver role
      include_role:
        name: db_role
      when: inventory_hostname in groups['dbservers']
Using Ansible Galaxy for Ready-Made Roles
Imagine a place where all the Ansible roles you need are already provided for free;
that place is called Ansible Galaxy, and it's real!
Ansible Galaxy is a public website where community-provided roles are offered.
It is basically a public repository that hosts a massive number of Ansible roles. I
recommend you pay a visit to Ansible Galaxy's website, galaxy.ansible.com, and
check out its amazing content.
For example, you can search for roles in the Galaxy repository by using the ansible-galaxy search command as follows:
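The search command looks like this:

```shell
[elliot@control plays]$ ansible-galaxy search mariadb
```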
As you can see; it listed all the Galaxy’s roles related to my search term mariadb.
For example, based on the search results that you got from ansible-galaxy search
mariadb; you may decide to get more information on the geerlingguy.mysql role:
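You can do that with the ansible-galaxy info command:

```shell
[elliot@control plays]$ ansible-galaxy info geerlingguy.mysql
```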
Role: geerlingguy.mysql
description: MySQL server for RHEL/CentOS and Debian/Ubuntu.
active: True
commit: 0a354d6ad1e4f466aad5f789ba414f31b97296fd
commit_message: Switch to travis-ci.com.
commit_url: https://api.github.com/repos/geerlingguy/ansible-role-mysql>
company: Midwestern Mac, LLC
created: 2014-03-01T03:32:33.675832Z
download_count: 1104067
forks_count: 665
github_branch: master
github_repo: ansible-role-mysql
github_user: geerlingguy
id: 435
imported: 2020-10-29T19:36:43.709197-04:00
is_valid: True
issue_tracker_url: https://github.com/geerlingguy/ansible-role-mysql/is>
license: license (BSD, MIT)
min_ansible_version: 2.4
modified: 2020-10-29T23:36:43.716291Z
open_issues_count: 24
You can now use the ansible-galaxy install command to install the geerlingguy.mysql role as follows:
Notice that I used the -p option to specify the path where I want the role to be
installed. By default, Ansible Galaxy would install the role in the ~/.ansible/roles
directory.
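The install command would look like this (the roles path is an assumption):

```shell
[elliot@control plays]$ ansible-galaxy install geerlingguy.mysql -p ./roles
```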
Now go ahead and change to your roles directory and list the contents of the
geerlingguy.mysql role directory:
8 directories, 26 files
As you can see, the geerlingguy.mysql role follows the standard directory structure that I showed you earlier.
Now let's go back to the plays directory and create a new playbook named mysql-role.yml that applies the geerlingguy.mysql role on the dbservers group hosts:
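A minimal mysql-role.yml:

```yaml
---
- name: Apply the geerlingguy.mysql role
  hosts: dbservers
  roles:
    - geerlingguy.mysql
```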
After the playbook is done running; mysql should be up and running on node4:
Active: active (running) since Tue 2020-11-10 23:31:51 UTC; 7min ago
Process: 26544 ExecStart=/usr/sbin/mysqld --daemonize
--pid-file=/run/mysqld/mysqld.pid (code=exited, status=0/SUCCESS)
Process: 26524 ExecStartPre=/usr/share/mysql/mysql-systemd-start
pre (code=exited, status=0/SUCCESS)
Main PID: 26546 (mysqld)
Tasks: 28 (limit: 2265)
CGroup: /system.slice/mysql.service
└─26546 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid
If you no longer need a role; you can delete it using the ansible-galaxy remove
command as follows:
Each role you define in your requirements file will have one or more of the following
attributes:
# from GitHub
- src: https://github.com/bennojoy/nginx
  name: nginx_role
  version: master
Now you can use the ansible-galaxy install command along with the -r option
to install the three roles in your requirements.yml file:
As you can see; the three roles defined in the requirements.yml file are successfully
installed. You can also use the ansible-galaxy list command to list the installed
roles along with their versions:
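The two commands would be (the roles path is an assumption):

```shell
[elliot@control plays]$ ansible-galaxy install -r requirements.yml -p ./roles
[elliot@control plays]$ ansible-galaxy list -p ./roles
```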
Creating Custom Roles
You can also define your own roles. To do that, you can use the ansible-galaxy
init command to define your role structure.
To demonstrate, let’s create a new role named httpd-role. First, change to your
roles directory and then run the ansible-galaxy init command followed by the
new role name as follows:
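The init command looks like this:

```shell
[elliot@control roles]$ ansible-galaxy init httpd-role
```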
Notice that a new role was created with the name httpd-role and it has all the
typical role directories.
8 directories, 8 files
Now start defining tasks, templates, and default variables for the new httpd-role.
First, you can start by defining the tasks by editing tasks/main.yml so that it
contains the following contents:
[elliot@control httpd-role]$ cat tasks/main.yml
---
# tasks file for httpd-role
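The tasks themselves are elided in this extract; based on the role's purpose, tasks/main.yml plausibly contained something like:

```yaml
---
# tasks file for httpd-role
- name: Install httpd
  yum:
    name: httpd
    state: present

- name: Start and enable httpd
  service:
    name: httpd
    state: started
    enabled: yes

- name: Create index.html using Jinja2
  template:
    src: index.j2
    dest: /var/www/html/index.html
```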
Then you can create the Jinja2 template file index.j2 inside the role’s templates
directory:
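The template's contents aren't preserved at this point in the extract, but Task 12 of the sample exam later in this book shows the same role's template, so index.j2 plausibly looks like:

```
Welcome to {{ inventory_hostname }}
This is an Apache Web Server.
Please contact {{ sysadmin }} for any questions or concerns.
```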
Finally, you can define and set the sysadmin variable in defaults/main.yml:
sysadmin: [email protected]
Alright! Now you are done creating the role. Let’s create a playbook that uses the
httpd-role.
Go back to your project directory and create the apache-role.yml playbook with
the following contents:
[elliot@control plays]$ cat apache-role.yml
---
- name: Using httpd-role
  hosts: webservers
  roles:
    - role: httpd-role
      sysadmin: [email protected]
Notice that you overrode the sysadmin variable in the playbook. Go ahead
and run the playbook:
Everything looks good. Let’s verify by checking the response you get on both
webservers (node2 and node3):
Welcome to node3
If you ever create an awesome Ansible role that you think a lot of people can
benefit from, don't forget to publish your role to Ansible Galaxy to share it
with the world!
Managing Order of Task Execution
You need to be well aware of the order of task execution in an Ansible playbook.
If you use the roles keyword to import a role statically, then all the tasks in the
role will run before all other tasks (those included under the tasks section) in your play.
You can use the pre_tasks keyword to include any tasks you want to run before
statically imported roles. You can also use the post_tasks keyword to include any
tasks you want to run after all the tasks under the tasks section.
  post_tasks:
    - name: Runs last
      debug:
        msg: "I will run last (post_tasks)."

  pre_tasks:
    - name: Runs first
      debug:
        msg: "I will run first (pre_task)."

  roles:
    - role: myrole
The playbook uses a role named myrole that I have created that just has two
simple tasks:
1 directory, 1 file
[elliot@control plays]$ cat roles/myrole/tasks/main.yml
- name:
  debug:
    msg: "I am the first task in myrole."

- name:
  debug:
    msg: "I am the second task in myrole."
TASK [Runs last] ******************************
ok: [node1] => {
"msg": "I will run last (post_tasks)."
}
As you can see, pre_tasks run first, followed by the two tasks in myrole, then
the tasks section, and finally post_tasks run last.
I hope you enjoyed learning how to use and create Ansible roles. In the next chapter,
you are going to learn how to install and use roles that are specific to RHEL systems
using RHEL System Roles.
Knowledge Check
Download the geerlingguy.haproxy role via Ansible Galaxy.
Then create a playbook named lab9.yml that will run on the proxy host group
and will accomplish the following tasks:
1. Load balance http requests between the hosts in the webservers host group
by using the geerlingguy.haproxy role.
2. Use the Round Robin load balancing method.
3. HAProxy backend servers will only handle HTTP requests (port 80).
curl http://node1
and you should get two different responses (one from node2 and the other from
node3).
Chapter 10: RHEL System Roles
In the previous chapter, you learned to use Ansible Galaxy's roles and create your
own custom roles. Let's continue the discussion on Ansible roles, but this time we
will focus on RHEL System Roles.
Red Hat has created a collection of Ansible roles that primarily targets RHEL
systems; this collection of roles is referred to as Red Hat Enterprise Linux
(RHEL) System Roles.
In this chapter, you will learn how to install and use RHEL System Roles to manage
and automate standard RHEL operations.
Installed:
rhel-system-roles-1.0-10.el8_1.noarch
Complete!
drwxr-xr-x. 6 root root 114 Nov 14 22:44 rhel-system-roles.postfix
drwxr-xr-x. 8 root root 138 Nov 14 22:44 rhel-system-roles.selinux
drwxr-xr-x. 10 root root 215 Nov 14 22:44 rhel-system-roles.storage
drwxr-xr-x. 11 root root 187 Nov 14 22:44 rhel-system-roles.timesync
As you can see from listing the contents of /usr/share/ansible/roles, the following
RHEL system roles are currently provided:
It’s highly likely that additional RHEL system roles will be introduced in the future.
You can also find documentation and example playbooks for RHEL system roles in
the /usr/share/doc/rhel-system-roles directory:
I will now show you how to use two of the most popular RHEL system roles:
• rhel-system-roles.selinux
• rhel-system-roles.timesync
Using RHEL SELinux System Role
You can use the RHEL SELinux system role (rhel-system-roles.selinux) to do
any of the following:
Keep in mind that a reboot may be required to apply an SELinux change; for
example, switching between enabled and disabled SELinux modes requires a reboot
for the change to take effect.
You are then free to reboot the host and reapply the role to finish
the changes; this can be done using a block-rescue construct, as you can see in the
example playbook in the SELinux role's documentation directory:
  # prepare prerequisites which are used in this playbook
  tasks:
    - name: Creates directory
      file:
        path: /tmp/test_dir
        state: directory

    - name: Add a Linux System Roles SELinux User
      user:
        comment: Linux System Roles SELinux User
        name: sar-user

    - name: execute the role and catch errors
      block:
        - include_role:
            name: rhel-system-roles.selinux
      rescue:
        # Fail if failed for a different reason than selinux_reboot_required.
        - name: handle errors
          fail:
            msg: "role failed"
          when: not selinux_reboot_required
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 31
Now create a playbook named disable-selinux.yml that has the following contents:
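The playbook's listing didn't survive extraction; a sketch based on the role documentation's block-rescue example shown above (the host and the reboot/reapply rescue steps are assumptions inferred from the surrounding text):

```yaml
---
- hosts: node1
  vars:
    selinux_state: disabled
  tasks:
    - name: Execute the role and catch errors
      block:
        - include_role:
            name: rhel-system-roles.selinux
      rescue:
        # Fail if the role failed for a reason other than a required reboot
        - name: Handle errors
          fail:
            msg: "role failed"
          when: not selinux_reboot_required

        - name: Reboot the managed host
          reboot:

        - name: Reapply the role
          include_role:
            name: rhel-system-roles.selinux
```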
[elliot@control plays]$ ansible-playbook disable-selinux.yml
Notice how the first time you applied the role, it failed because a reboot was required.
The failure was then handled in the rescue section of the playbook, where you
reboot the host and then reapply the role to make sure it succeeds.
Now let’s run an ad-hoc command to verify that SELinux is now disabled on node1:
Success! Now that you have learned how to use the SELinux RHEL system role,
don't forget to check the role's documentation thoroughly to learn how to apply the
role in different scenarios. There are many variables that you can override, as you
can see in the role's defaults directory:
Using RHEL TimeSync System Role
You can use the RHEL TimeSync system role (rhel-system-roles.timesync) to
configure NTP on your managed hosts.
There are various variables that you can override, as you can see in the role's
defaults directory:
The most important variable is the timesync_ntp_servers list. Each item in the
timesync_ntp_servers will consist of different attributes, of which the following
two are the most common:
To demonstrate how to use the timesync RHEL system role, go ahead and copy the
example-timesync-playbook.yml playbook from /usr/share/doc/rhel-system-roles/timesync
to your project directory:
Now go ahead and edit the playbook so that it has the following contents:
- hosts: node1
  vars:
    timesync_ntp_servers:
      # Only the last server entry survived extraction; any preceding
      # pool entries from the original listing are not shown here.
      - hostname: 3.north-america.pool.ntp.org
        iburst: yes
    tzone: America/Chicago
  roles:
    - rhel-system-roles.timesync
  tasks:
    - name: Set Timezone
      timezone:
        name: "{{ tzone }}"
Notice that I also defined a new variable named tzone and created a task that sets
the host time zone using the timezone module.
Now run the following two ad-hoc Ansible commands to verify that the NTP servers
and time zone are updated on node1:
Please note: I didn't explain how SELinux or NTP works here, as this book
only focuses on the Ansible side of things. It is assumed that you are already
at an RHCSA skill level, as Red Hat also recommends.
I hope you enjoyed learning about RHEL System roles. In the next chapter, you are
going to learn the most common Ansible modules used for automating day-to-day
administrative tasks and operations.
Knowledge Check
Create a playbook named lab10.yml that will run on node3 and will accomplish
the following tasks:
Hint: Use the selinux RHEL System role and check /usr/share/doc/rhel-system-roles/selinux/
to see the example playbook.
Chapter 11: Managing Systems with Ansible
So far, you have learned about all the core components of Ansible. Now it’s time to
learn about the most common Ansible modules that are used for performing daily
administrative tasks.
In this chapter, you will learn how to manage users, groups, software and processes
with Ansible. You will also learn how to configure networking and local storage on
your Ansible managed systems.
• user → manages user accounts and user attributes. For Windows targets,
use the win_user module instead.
• group → manages presence of groups on a host. For Windows targets, use
the win_group module instead.
• pamd → edits a PAM service's type, control, module path, and module arguments.
• authorized_key → copies an SSH public key from the Ansible control node to the
target user's .ssh/authorized_keys file on the managed node.
• acl → sets and retrieves file ACL information.
You need to be aware that the authorized_key module doesn't generate SSH
keys. To generate SSH keys, you can use the generate_ssh_key option with the
user module.
Also, keep in mind that there is no sudo module in Ansible. You can use Jinja2
and other modules like lineinfile, blockinfile, replace, or copy to edit sudo
configurations.
Now let’s create a playbook that uses some of the aforementioned modules to show
you how you can manage users and groups in Ansible. But first, let’s create three
new users (angela, tyrell, and darlene) on our control node.
Go ahead and create a bash script named adhoc.sh that contains the following
three ad-hoc commands:
ansible localhost -m user -a "name=tyrell uid=888
password={{ 'L!n*X' | password_hash('sha512') }} generate_ssh_key=yes"
Notice that you used localhost in the ad-hoc commands to refer to the control
node. The generate_ssh_key option is used to generate SSH key pairs for the
three users.
You could have also used the ssh_key_passphrase option to set a non-empty SSH
key passphrase. In this case, the SSH keys are created with empty passphrases,
since you didn't use the ssh_key_passphrase option.
Now make your bash script executable and then run it:
"name": "tyrell",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"ssh_fingerprint": "3072 SHA256:10OzT7IGVdrjCk9ONgAwnvTTiuV+wp38nCju9g2eMHM
ansible-generated on control (RSA)",
"ssh_key_file": "/home/tyrell/.ssh/id_rsa",
"ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXLWgkQ6uBPxI6IUKA
zmshD0/C5zoBMG52q/BukV7l58F/wa5xyEPvLPtoVEst8kQ00Zmqs51inamLO45KHjoV1sG2mjb
f4REbo6hgojAssGMvjIdPpdZtTALkLNXG+WLU5PbUnYdx5cAmnSETt1ul16GZ3Rox6grQto8sWj
CO4d8gnqqxQevKvfv8pNqQoOMImZ5+bSNcvqBWpwfp3CsPMGC3qYIQG/wi0GAjfu+oXWp2dtA7W
mBRHBhQ3oCcMjSjE9GujAMp0+IeMnkFZUZIf/fZrDyDFZzW740+Vxa8f5aEtgxwk2Obzkyrg4ib
zWbmGlfmxs3OpnDDvblDRjcWmFzGLp5L0NlqPStGbMWYDZi04GKMthw6/XzIkThRBVqIwNQD1N2
W/cT4aAvcaIUc18ba8KzDQAMgso9mE1QFtLX5C+Z/3PBGp5rbQ3pTBjXKFuHh9UTrTh/A0bCi8t
W00jURInR/gAAupZG7FmUcHhCz31pWfpmr1r1YoepoHuc= ansible-generated on control",
"state": "present",
"system": false,
"uid": 888
}
localhost | CHANGED => {
"changed": true,
"comment": "",
"create_home": true,
"group": 1004,
"home": "/home/darlene",
"name": "darlene",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"ssh_fingerprint": "3072 SHA256:c9W+8AjuCsvNSMcokHzhJ91k27hd8HI1YhE6OPbkzgE
ansible-generated on control (RSA)",
"ssh_key_file": "/home/darlene/.ssh/id_rsa",
"ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNEj6cS/xgNVfO+I05
rq1Y8s9SSORmDrUpKmjIp6jWEq29vl9O1a1IAgmvPWAPfCYZPdtomqUbrL2R+7qdUq3AiWsLtds
IBiXhCaY3twQlGVKW40u7CnohFEuFrRT1DHQN6IjTbSu/z1fT7QGfQKc4OZ6tiBaHmaFFDByzIc
BWW5LAvGvqHF4cDFJOEyjEJ9Ih4CAYjO2smGP8tX4hFbJbKvhyI4G4C2KMCCtVlhcBnFENkB42G
2gJEf+4hFAPdV2YR+O1vAfV7FSPu3WI6qly2BlqiYrEq17B+UodxdEg9EgWi7PivzsKdiHm5TY8
jscp005lIOBKpeIXpbabTuhbAFwrFbhG722H1C5QlADpPQIHsFASkehESqnSYEgvr9rMzXUfMnm
8EJG/RvNLG4iuiJBoVom5ST3Xkg9auytLeHDciLKpCIGeWNl1MIhzeDopS2CBeh6xJPUAiShhtQ
ZzubBAqsC3SgNCqGhs0R6BM0qtF42pncVfKOtpLF1Zyqc= ansible-generated on control",
"state": "present",
"system": false,
"uid": 889
}
Awesome! The three users are now created. You can verify it by listing the users:
drwx------. 5 elliot elliot 170 Nov 16 23:11 elliot
drwx------. 3 tyrell tyrell 74 Nov 16 23:11 tyrell
And you can also verify that the ssh keys are generated:
Now go ahead and create a directory named keys in your project directory that
will store the public ssh key of the newly created users:
Alright! Now let’s create an Ansible playbook that will accomplish the following
tasks on node4:
    grps:
      - devops
      - secops
    usrs:
      - username: angela
        id: 887
        pass: 'L!n*X'
      - username: tyrell
        id: 888
        pass: 'L!n*X'
      - username: darlene
        id: 889
        pass: 'L!n*X'
  tasks:
    - name: Create groups
      group:
        name: "{{ item }}"
      loop: "{{ grps }}"
Notice the use of the lookup plugin in the last task (Copy users SSH public key)
to read the users' public SSH key files. You can also check the authorized_key
documentation for more information and examples on how to use the lookup plugin:
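The remaining tasks of the playbook didn't survive extraction; a sketch of the user-creation and key-copying tasks the text describes (the exact key file naming under the keys directory is an assumption):

```yaml
    - name: Create users
      user:
        name: "{{ item.username }}"
        uid: "{{ item.id }}"
        password: "{{ item.pass | password_hash('sha512') }}"
      loop: "{{ usrs }}"

    - name: Copy users SSH public key
      authorized_key:
        user: "{{ item.username }}"
        key: "{{ lookup('file', 'keys/' + item.username + '.pub') }}"
      loop: "{{ usrs }}"
```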
PLAY [Manage Users & Groups] *******************
You should now be able to ssh from the control node to node4 with any of the
three users (angela, tyrell, and darlene).
Managing software
You can use the following modules to manage software in Ansible:
Now let’s create a playbook that uses some of the aforementioned modules to show
you how you can manage software in Ansible. The playbook will accomplish the
following things on node1:
Go ahead and create a playbook named manage-software.yml that has the following
contents:
[elliot@control plays]$ cat manage-software.yml
---
- name: Manage Software
  hosts: node1
  vars:
    pkg_name: zabbix-agent
  tasks:
    - name: Create a new repo
      yum_repository:
        file: zabbix
        name: zabbix-monitoring
        baseurl: https://repo.zabbix.com/zabbix/5.2/rhel/8/x86_64/
        description: "Zabbix 5.2 Repo"
        enabled: yes
        gpgcheck: no
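The rest of the playbook's tasks didn't survive extraction; a sketch of what they plausibly look like, inferred from the pkg_name variable above and the package_facts output and discussion below:

```yaml
    - name: Install zabbix-agent
      yum:
        name: "{{ pkg_name }}"
        state: present

    - name: Gather package facts
      package_facts:
        manager: auto

    - name: Show zabbix-agent package facts
      debug:
        var: ansible_facts.packages[pkg_name]
```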
ok: [node1] => {
"ansible_facts.packages[pkg_name]": [
{
"arch": "x86_64",
"epoch": null,
"name": "zabbix-agent",
"release": "1.el8",
"source": "rpm",
"version": "5.2.1"
}
]
}
Now let’s quickly verify the contents of the newly created zabbix repository:
As you can see, the repository is configured exactly how you specified in the
playbook.
By default, Ansible doesn’t gather package facts. Thus, I used the package_facts
module in the playbook to gather package information as facts. You can check the
package_facts documentation for more information:
Managing processes and tasks
You can use the following modules to manage processes and tasks in Ansible:
Keep in mind that the cron module has a name option, which is used to uniquely
identify entries in the crontab. The name option has no meaning for cron itself, but it
helps Ansible manage crontab entries.
For example, Ansible uses the name option so that it can remove specific entries
in crontab. You will see how it works in the next playbook.
Now let’s create a playbook that schedules a new cron job on node2. The cron job
will run as user elliot and will append the message “Two minutes have passed!”
to the system log and will run every two minutes. You should let the cron job run
only run twice!
Go ahead and create a playbook named cronjob.yml that has the following
contents:
user: elliot
state: absent
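Only the tail of the removal task survived extraction; a sketch of the full playbook the text describes (task names and the logger invocation are assumptions consistent with the surrounding description):

```yaml
---
- name: Schedule a cron job
  hosts: node2
  tasks:
    - name: Create the two-mins cron job
      cron:
        name: two-mins
        user: elliot
        job: logger "Two minutes have passed!"
        minute: "*/2"

    - name: Wait for five minutes
      pause:
        minutes: 5

    - name: Remove the two-mins cron job
      cron:
        name: two-mins
        user: elliot
        state: absent
```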
Notice that I also used the pause module to wait for five minutes (let the job run
twice) before removing the “two-mins” cron job.
As you can see, the playbook created the "two-mins" cron job, then waited for five
minutes, and then removed the "two-mins" cron job.
You can check the contents of the /var/log/messages file to see if the logger has
successfully appended the messages to the system log (syslog):
Indeed! The “two-mins” cron job ran exactly twice as you wanted.
You should also know that there is no dedicated module to interact with Linux
processes in Ansible. If you want to send signals to a process, you can just use
regular Linux commands like kill, pkill, and killall with the command, shell, or raw
Ansible modules.
Configuring local storage
You can use the following modules to configure local storage in Ansible:
• parted → This module allows configuring block device partition using the
‘parted’ command line tool.
• lvg → This module creates, removes or resizes volume groups.
• lvol → This module creates, removes or resizes logical volumes.
• filesystem → This module creates a filesystem.
• mount → This module controls active and configured mount points in ‘/etc/fstab’.
• vdo → This module controls the VDO dedupe and compression device.
Now let’s create a new playbook that will accomplish the following tasks on node4:
1. Create two new 2GB partitions on any available secondary disk (/dev/sdc
in my case).
2. Create a new volume group named dbvg that is composed of the two partitions
you created in task 1 (/dev/sdc1 and /dev/sdc2 in my case).
3. Inside the dbvg volume group, create a logical volume named dblv of size
3GB.
4. Create an ext4 filesystem on the dblv logical volume.
5. Create a new directory /database with permissions set to 0755.
You need to create (or attach) a new secondary disk (size at least 4 GB) to your
node4 virtual machine to be able to follow along with the playbook.
Go ahead and create a new playbook named manage-storage.yml that has the
following contents:
dev: "/dev/{{ item.vgrp }}/{{ item.name }}"
fstype: ext4
loop: "{{ lvs }}"
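Only a fragment of the filesystem task survived extraction; a sketch of the full manage-storage.yml playbook the task list describes (the partition boundaries and the lvs variable structure are assumptions, with the variable names matching the fragment above):

```yaml
---
- name: Manage Storage
  hosts: node4
  vars:
    lvs:
      - name: dblv
        size: 3g
        vgrp: dbvg
  tasks:
    - name: Create two 2GB partitions
      parted:
        device: /dev/sdc
        number: "{{ item.num }}"
        part_start: "{{ item.start }}"
        part_end: "{{ item.end }}"
        state: present
      loop:
        - { num: 1, start: "0%", end: "2GiB" }
        - { num: 2, start: "2GiB", end: "4GiB" }

    - name: Create dbvg volume group
      lvg:
        vg: dbvg
        pvs: /dev/sdc1,/dev/sdc2

    - name: Create dblv logical volume
      lvol:
        vg: "{{ item.vgrp }}"
        lv: "{{ item.name }}"
        size: "{{ item.size }}"
      loop: "{{ lvs }}"

    - name: Create an ext4 filesystem
      filesystem:
        dev: "/dev/{{ item.vgrp }}/{{ item.name }}"
        fstype: ext4
      loop: "{{ lvs }}"

    - name: Create /database directory
      file:
        path: /database
        state: directory
        mode: "0755"
```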
PLAY RECAP *********************
node4: ok=7 changed=6 unreachable=0 failed=0 skipped=0
Looks good so far! Now let’s run an ad-hoc Ansible command to verify everything
is correct on node4:
Everything is correct; by now you should have realized how powerful Ansible is!
Imagine trying to write this Ansible playbook as a bash script ... good luck!
Again, a friendly reminder that you don't need to memorize any of the modules'
options. Just consult the documentation pages:
Configuring network interfaces
You can use the following modules to configure network interfaces in Ansible:
• nmcli → manages network devices. Creates, modifies, and manages various
connection and device types, e.g., ethernet, teams, bonds, VLANs, etc.
• hostname → sets the system's hostname; supports most OSs/distributions,
including those using systemd.
• firewalld → This module allows for addition or deletion of services and ports
(either TCP or UDP) in either running or permanent firewalld rules.
Now let’s create a new playbook that will accomplish the following tasks on node1:
1. Configure the secondary network interface (eth1 in my case) with the following
settings:
• Connection Name → ether-two
• Type → ethernet
• IPv4 Address → 192.168.177.3
• Subnet Mask → 255.255.0.0
• DNS Servers → 8.8.8.8 and 8.8.4.4
2. Change the hostname of node1 to node1.linuxhandbook.local
3. Allow HTTP and HTTPS traffic in the firewall public zone.
You need to create (or attach) a new secondary network interface to follow along
with the playbook.
You can run the following ad-hoc command to list the available network interfaces
on node1:
As you can see, I have two network interfaces: eth0 and eth1.
[elliot@node1 ~]$ nmcli connection show
NAME UUID TYPE DEVICE
System eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 ethernet eth0
Wired connection 1 f536da88-7978-33dc-8afc-632274ba5661 ethernet eth1
Go ahead and create a new playbook named manage-network.yml that has the
following contents:
  handlers:
    - name: restart firewalld
      service:
        name: firewalld
        state: restarted
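The tasks section of manage-network.yml didn't survive extraction; combined with the handlers shown above, a sketch of the playbook based on the task list (exact task names are assumptions):

```yaml
---
- name: Configure a NIC
  hosts: node1
  tasks:
    - name: Configure the ether-two connection
      nmcli:
        conn_name: ether-two
        ifname: eth1
        type: ethernet
        ip4: 192.168.177.3/16
        dns4:
          - 8.8.8.8
          - 8.8.4.4
        state: present

    - name: Change hostname
      hostname:
        name: node1.linuxhandbook.local

    - name: Allow HTTP and HTTPS traffic in the public zone
      firewalld:
        service: "{{ item }}"
        zone: public
        permanent: yes
        state: enabled
      loop:
        - http
        - https
      notify: restart firewalld
```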
PLAY [Configure a NIC] **************
TX packets 32 bytes 2214 (2.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Keep in mind that you also have the option to use the network RHEL System
Role when you are dealing with RHEL managed hosts.
To summarize, Figure 10 helps you categorize and identify the most commonly used
Ansible modules for managing Linux systems.
This brings us to the end of this chapter! In the next chapter, you are going to
learn various troubleshooting techniques in Ansible.
Knowledge Check
Create a playbook named lab11.yml that will accomplish the following tasks:
1. Create a cron job as user root named clean-tmp on all managed nodes.
Chapter 12: Ansible Troubleshooting
We all dream of a world where we never make mistakes or errors while running our
Ansible playbooks.
In reality, such a world doesn't exist, so you need troubleshooting skills
to be ready to deal with errors in your playbooks.
In this chapter, you will learn how to enable logging in your playbooks. You will
also learn a few other Ansible modules that can help you with troubleshooting.
Finally, you will learn how to troubleshoot connectivity issues in Ansible.
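Logging is enabled through the log_path setting in ansible.cfg; a minimal sketch (the absolute path is an assumption based on the project layout used throughout this book):

```ini
[defaults]
log_path = /home/elliot/plays/playbooks.log
```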
This enables Ansible playbooks and ad-hoc commands to log their output to a
file named playbooks.log in your project directory.
  tasks:
    - name: Print favorite Linux Blog
      debug:
        msg: "{{ blog }}"
---
- name: Playbook logging enabled
^ here
You can see there is an error ’var’ is not a valid attribute for a Play. Now check
the contents of the playbooks.log file:
---
- name: Playbook logging enabled
^ here
As you can see, it captured and timestamped the output of the playbook. You
should also consider using log rotation if you are going to enable logging in Ansible.
You can fix the error in the faulty-playbook.yml playbook by changing var to
vars.
Notice that it’s a good practice to use the --syntax-check option to check for errors
before running any playbook:
[elliot@control plays]$ ansible-playbook --syntax-check faulty-playbook.yml
ERROR! 'var' is not a valid attribute for a Play
---
- name: Playbook logging enabled
^ here
You can specify the verbosity option to control when a debug task should execute.
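The playbook listing for this demonstration didn't survive extraction; a sketch of what it plausibly looks like, inferred from the task name [show ip address] and the verbosity: 2 setting mentioned below (the first task's name and messages are assumptions):

```yaml
---
- name: Debug verbosity demo
  hosts: node1
  tasks:
    - name: show hostname
      debug:
        msg: "{{ ansible_hostname }}"
        verbosity: 1

    - name: show ip address
      debug:
        msg: "{{ ansible_default_ipv4.address }}"
        verbosity: 2
```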
Now go ahead and run the playbook with one level of verbosity by using the -v
option:
TASK [Gathering Facts] *************************
ok: [node1]
Notice how the second task [show ip address] was skipped! That's because you
specified verbosity: 2. To trigger the second task, you need to specify at least two levels
of verbosity (-vv) when running the playbook, as follows:
META: ran handlers
META: ran handlers
Task 2 [show ip address] ran this time as we have specified two levels of verbosity.
You can check the ansible-playbook man page to read more about the different
verbosity options that you can use when running Ansible playbooks. Also, make
sure you check the debug module documentation page:
Using the assert module
The assert module can also come in handy while troubleshooting as you can use it
to test whether a specific condition is met.
To demonstrate, go ahead and create a playbook named assert.yml that has the
following contents:
This playbook contains three tasks. The first task uses the assert module to check
if node1 has more than 500 MB of free RAM; if it doesn't, the task fails and displays
the message "Low on memory!".
The second task uses the stat module to retrieve /etc/motd file facts (similar to
the stat command in Linux) and then registers it to the motd variable.
Finally, the third task uses the assert module to check if /etc/motd is a directory,
which will indeed fail with the message "/etc/motd is not a directory!"
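The assert.yml listing didn't survive extraction; a sketch matching the three tasks described above (task names are assumptions):

```yaml
---
- name: Using the assert module
  hosts: node1
  tasks:
    - name: Check if node1 has more than 500 MB of free RAM
      assert:
        that:
          - ansible_memfree_mb > 500
        fail_msg: "Low on memory!"

    - name: Retrieve /etc/motd facts
      stat:
        path: /etc/motd
      register: motd

    - name: Check if /etc/motd is a directory
      assert:
        that:
          - motd.stat.isdir
        fail_msg: "/etc/motd is not a directory!"
```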
Now go ahead and run the playbook to see all this in action:
ok: [node1] => {
"changed": false,
"msg": "All assertions passed"
}
You can check the assert and stat documentation pages to see more examples of
how you can use both modules:
Running playbooks in check mode
You can use the --check (-C) option with the ansible-playbook command to do
a dry run of a playbook. This will basically just predict and show you what will
happen when running the playbook without actually changing anything.
You can also set the boolean check_mode: yes within a task to always run
that specific task in check mode. On the other hand, you can set the boolean
check_mode: no so that a task will never run in check mode.
Notice that the second task (create a second file) will always run in check
mode, while the third task (create a third file) will never run in check mode.
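The playbook for this demonstration didn't survive extraction; a sketch matching the task names above (the /tmp/file1 and /tmp/file2 paths are assumptions; /tmp/file3 comes from the text below):

```yaml
---
- name: Check mode demo
  hosts: node1
  tasks:
    - name: create a file
      file:
        path: /tmp/file1
        state: touch

    - name: create a second file
      file:
        path: /tmp/file2
        state: touch
      check_mode: yes   # always runs in check mode

    - name: create a third file
      file:
        path: /tmp/file3
        state: touch
      check_mode: no    # never runs in check mode
```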
TASK [create a file] *****************
ok: [node1]
Notice that the first two tasks ran in check mode while the third task did actually
run on node1 and created the file /tmp/file3:
You can also use the --diff option along with the --check option to see the
differences that would be made by template files on managed hosts.
Troubleshooting connectivity problems
You may encounter connectivity issues in Ansible; connectivity problems usually
fall into one of the following two categories:
If you are facing network issues, then you may need to check the following settings:
1. If a managed host has more than one IP address or hostname, you can use
ansible_host in your Ansible inventory file to specify the IP address or
hostname you want to connect with.
node1.linuxhandbook.local ansible_host=192.169.177.122
2. If SSH is configured to listen on a port other than the default (22),
you can use ansible_port in your Ansible inventory file to specify the
SSH port to connect on.
node2.linuxhandbook.local ansible_port=5555 ansible_host=192.169.177.122
If you are facing authentication issues, then you need to check the following settings:
}
node1 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": false,
"ping": "pong"
}
node2 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": false,
"ping": "pong"
}
node3 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": false,
"ping": "pong"
}
You can also use the --become option along with ping module to test if the Ansible
remote user is able to escalate privileges:
For example, the following ad-hoc command will test whether node2 can connect
to its local server webpage:
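The command itself didn't survive extraction; a sketch inferred from the output below (which shows the uri module returning status 200 for http://localhost):

```shell
ansible node2 -m uri -a "url=http://localhost"
```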
"accept_ranges": "bytes",
"ansible_facts": {
"discovered_interpreter_python": "/usr/libexec/platform-python"
},
"changed": false,
"connection": "close",
"content_length": "124",
"content_type": "text/html; charset=UTF-8",
"cookies": {},
"cookies_string": "",
"date": "Sun, 22 Nov 2020 01:48:21 GMT",
"elapsed": 0,
"etag": "\"7c-5b3f366f0ce84\"",
"last_modified": "Fri, 13 Nov 2020 02:01:09 GMT",
"msg": "OK (124 bytes)",
"redirected": false,
"server": "Apache/2.4.37 (centos)",
"status": 200,
"url": "http://localhost"
}
As you can see, the connection was successful: a status: 200 response was returned,
which indicates a successful request.
This brings us to the end of our book. You have now mastered one of the most
popular DevOps tools, and you are also very well equipped to pass the RHCE EX294
exam and become a Red Hat Certified Engineer.
Knowledge Check
Create a playbook named lab12.yml that has the following contents:
---
- name: Fix the error
  hosts: localhost
  vars:
    grps: ["devops","secops","netops"]
  tasks:
    - name: create groups
      group:
        name: "{{ item }}"
      loop: {{ grps }}
1. Enable Ansible logging and save all Ansible logs to the file lab12.out.
Chapter 13: Final Sample Exam
Before you attempt the RHCE exam, I have a few words of advice that I
believe will help you a lot when you write your exam. So without further ado, here
is my advice:
• When you get stuck on one task, skip it and move on to solve other tasks,
then if time permits, go back to try and solve the tasks that you skipped.
To make the most out of this sample exam, treat it as if it's the real RHCE exam,
and try to finish all the sample exam tasks in less than four hours. Also, take a
10-minute break after two hours to help you relax and regain focus.
As with the real exam, no answers to the sample exam questions will be provided.
I will only give you a few hints for some of the tasks.
Exam Requirements
There are 15 questions in total.
You will need five RHEL 8 (or CentOS 8) virtual machines to be able to successfully
complete all questions.
One VM will be configured as an Ansible control node. The other four VMs will be
used to apply playbooks to solve the sample exam questions.
There are a few other requirements that should be met before starting the sample
exam:
• The control node has passwordless SSH access to all managed servers (using
the root user).
• There are no regular users created on any of the servers.
Task 1: Installing and configuring Ansible
Install ansible on the control node and configure the following:
• On the control node, create a regular user named automation with the
password G*Auto1!. Use this user for all sample exam tasks and playbooks,
unless you are working on Task 2, which requires creating the automation
user on the managed hosts.
• All playbooks and other Ansible configuration that you create for this sample
exam should be stored in /home/automation/plays.
Task 2: Running AdHoc Commands
Generate an SSH key pair on the control node for the automation user. You can
perform this step manually.
Write a bash script adhoc.sh that uses Ansible ad-hoc commands to achieve the
following:
After running the adhoc.sh bash script, you should be able to SSH into all
managed hosts using the automation user without a password, as well as run all
privileged commands.
Hint: Specify the root user when running the ad-hoc commands.
Here is an example!
• The playbook should overwrite the contents of the /etc/motd file with the
contents of the message variable. The value of the message variable varies
according to the inventory host group.
• For hosts in the dev group, /etc/motd should have the following message:
"This is a dev server."
• For hosts in the test group, /etc/motd should have the following message:
"This is a test server."
• For hosts in the prod group, /etc/motd should have the following message:
"This is a prod server."
Task 4: Configuring SSH Server
Create a playbook sshd.yml that runs on all managed hosts and configures the
SSH daemon as follows:
Make sure that the SSH daemon is restarted after any change in its configuration.
user_pass: T*mP1#$
Task 6: Users and Groups
Create a variables file users-list.yml that has the following content:
---
users:
- username: brad
uid: 774
- username: david
uid: 775
- username: robert
uid: 776
- username: jason
uid: 777
Then, create a playbook users.yml that runs on all managed nodes and uses the
vault file secret.yml to achieve the following:
1. geerlingguy.docker
2. geerlingguy.kubernetes
3. geerlingguy.rabbitmq
Then, create a playbook docker.yml that runs on hosts in the dev group and will
use the geerlingguy.docker role to install docker.
Task 8: Using RHEL System Roles
Create a playbook selinux.yml that runs on hosts in the test group and does the
following:
• The cron job appends the message "One hour has passed!" to the system log.
Task 11: Creating Custom Facts
Create a playbook facts.yml that runs on hosts in the prod group and does the
following:
Hint: Do not forget to copy your custom.fact file from your control node to the
/etc/ansible/facts.d directory on the managed node(s).
Welcome to {{ inventory_hostname }}
This is an Apache Web Server.
Please contact {{ sysadmin }} for any questions or concerns.
Create a playbook httpd.yml that uses the httpd-role and runs on hosts in the
test group. The sysadmin variable should contain your email; you can set it in the
playbook.
Task 13: Software Repositories
Create a playbook repo.yml that runs on hosts in the prod group and does
the following:
• If a server has more than 2048 MB of RAM, then the kernel parameter
vm.swappiness is set to 10.
• If a server has less than 2048 MB of RAM, then the error message "Available
RAM is less than 2048 MB" is displayed.
Task 15: Installing Software
Create a playbook packages.yml that runs on all managed hosts and does the
following:
• Installs the nmap and wireshark packages on hosts in the dev group.
• Installs the tmux and tcpdump packages on hosts in the test group.
Chapter 14: Knowledge Check Solutions
Exercise 1 Solution
First, make sure your system is registered. Then list all the available Ansible
repositories:
Choose the repository that provides the latest Ansible version and enable it:
Exercise 2 Solution
First, create the bash script adhoc-cmds.sh:
Exercise 3 Solution
Solutions may vary but the following is a possible solution:
Exercise 4 Solution
Solutions may vary but the following is a possible solution:
Exercise 5 Solution
Solutions may vary but the following is a possible solution:
Exercise 6 Solution
Solutions may vary but the following is a possible solution:
Exercise 7 Solution
Solutions may vary but the following is a possible solution:
Exercise 8 Solution
Solutions may vary but the following is the "quickest" solution:
Exercise 9 Solution
Solutions may vary but the following is a possible solution.
Then you can use the following playbook to apply the role:
TASK [geerlingguy.haproxy : Copy HAProxy configuration in place.] ***
changed: [node1]
Finally, test to see if load balancing works properly by running the "curl http://node1"
command twice:
Please Note: Round Robin is the default load balancing method! That’s why you
don’t need to explicitly specify it.
Exercise 10 Solution
Solutions may vary but the following is a possible solution:
After running the playbook, you can run a quick ad-hoc command to check the
SELinux file context of /web:
As you can see, the /web directory's SELinux file context is set to httpd_sys_content_t.
Please note: every time you change the SELinux file context of a file, you then
have to restore (apply) the file's default SELinux security contexts.
Exercise 11 Solution
Solutions may vary but the following is a possible solution:
Exercise 12 Solution
Solutions may vary but the following is a possible solution.
# wrong (the variable is not quoted):
with_items:
  - {{ foo }}

# correct:
with_items:
  - "{{ foo }}"
  tasks:
    - name: create groups
      group:
        name: "{{ item }}"
      loop: "{{ grps }}"