Ansible Automation For Sysadmins v2
Ansible Automation
for SysAdmins
Introduction
by Chris Short
Author
Red Hat Ansible | CNCF Ambassador | DevOps | opensource.com Community Moderator | Writes devopsish.com | Partially Disabled USAF Veteran | He/Him
• Plays: A play is made up of tasks. For example, the play "Prepare a database to be used by a web server" is made up of tasks: 1) Install the database package; 2) Set a password for the database administrator; 3) Create a database; and 4) Set access to the database.
• Playbook: A playbook is made up of plays. A playbook could be: "Prepare my website with a database backend," and the plays would be 1) Set up the database server; and 2) Set up the web server.
• Roles: Roles are used to save and organize playbooks and allow sharing and reuse of playbooks. Following the previous examples, if you need to fully configure a web server, you can use a role that others have written and shared to do just that. Since roles are highly configurable (if written correctly), they can be easily reused to suit any given deployment requirements.
• Ansible Galaxy: Ansible Galaxy [1] is an online repository where roles are uploaded so they can be shared with others. It is integrated with GitHub, so roles can be organized into Git repositories and then shared via Ansible Galaxy.

These definitions and their relationships are depicted here. Please note this is just one way to organize the tasks that need to be executed. We could have split up the installation of the database and the web server into separate playbooks and into different roles. Most roles in Ansible Galaxy install and configure individual applications. You can see examples for installing mysql [2] and installing httpd [3].

Tips for writing playbooks
The best source for learning Ansible is the official documentation [4] site. And, as usual, online search is your friend. I recommend starting with simple tasks, like installing applications or creating users. Once you are ready, follow these guidelines:
• When testing, use a small subset of servers so that your plays execute faster. If they are successful on one server, they will be successful on others.
• Always do a dry run to make sure all commands are working (run with the --check flag).
• Test as often as you need to without fear of breaking things. Tasks describe a desired state, so if a desired state is already achieved, it will simply be ignored.
• Be sure all host names defined in /etc/ansible/hosts are resolvable.
• Because communication to remote hosts is done using SSH, keys have to be accepted by the control machine, so either 1) exchange keys with remote hosts prior to starting; or 2) be ready to type in "Yes" to accept SSH key exchange requests for each remote host you want to manage.
• Although you can combine tasks for different Linux distributions in one playbook, it's cleaner to write a separate playbook for each distro.

In the final analysis
Ansible is a great choice for implementing automation in your data center:
• It's agentless, so it is simpler to install than other automation tools.
• Instructions are in YAML (though JSON is also supported), so it's easier than writing shell scripts.
• It's open source software, so contribute back to it and make it even better!

Links
[1] https://galaxy.ansible.com/
[2] https://galaxy.ansible.com/bennojoy/mysql/
[3] https://galaxy.ansible.com/xcezx/httpd/
[4] http://docs.ansible.com/

Author
Jose is a Linux engineer at Dell EMC. He spends most days learning new things, keeping stuff from breaking, and keeping customers happy.

Adapted from "Tips for success when getting started with Ansible" on Opensource.com, published under a Creative Commons Attribution Share-Alike 4.0 International License at https://opensource.com/article/18/2/tips-success-when-getting-started-ansible.
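The following paragraph explains a system-update task whose code was cut off in this excerpt. Based on that description, a reconstruction would look roughly like this sketch (the task name here is an assumption):

```yaml
# Update every installed package to the latest version
- name: update the system
  yum:
    name: "*"
    state: latest
```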
In the first line, we give the task a meaningful name so we know what Ansible is doing. In the next line, the yum module updates the CentOS virtual machine (VM), then name: "*" tells yum to update everything, and, finally, state: latest updates to the latest RPM.

The same module can install packages; for example, epel-release:

- name: install epel-release
  yum:
    name: epel-release
    state: latest

After updating the system, we need to restart and reconnect:

- name: restart system to reboot to newest kernel
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0

The shell module puts the system to sleep for 5 seconds then reboots. We use sleep to prevent the connection from breaking, async to avoid timeout, and poll to fire and forget. We pause for 10 seconds to wait for the VM to come back and use wait_for_connection to connect back to the VM.
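The pause and wait_for_connection tasks described above are not shown in this excerpt. A minimal sketch, assuming typical timeout values (the numbers here are assumptions, not from the original):

```yaml
# Give the VM time to begin rebooting before probing it
- name: wait for 10 seconds
  pause:
    seconds: 10

# Re-establish the SSH connection once the VM is back up
- name: wait for the system to come back
  wait_for_connection:
    connect_timeout: 20
    sleep: 5
    delay: 5
    timeout: 300
```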
A sysadmin's guide to Ansible: How to simplify tasks
by Jonathan Lozada De La Matta
There are many ways to automate common sysadmin tasks with Ansible. Here are several of them.
Managing users
If you need to create a large list of users and groups with the users spread among the different groups, you can use loops. Let's start by creating the groups:

- name: create user groups
  group:
    name: "{{ item }}"
  loop:
    - postgresql
    - nginx-test
    - admin
    - dbadmin
    - hadoop

You can create users with specific parameters like this:

- name: all users in the department
  user:
    name: "{{ item.name }}"
    group: "{{ item.group }}"
    groups: "{{ item.groups }}"
    uid: "{{ item.uid }}"
    state: "{{ item.state }}"
  loop:
    - { name: 'admin1', group: 'admin', groups: 'nginx', uid: '1234', state: 'present' }
    - { name: 'dbadmin1', group: 'dbadmin', groups: 'postgres', uid: '4321', state: 'present' }
    - { name: 'user1', group: 'hadoop', groups: 'wheel', uid: '1067', state: 'present' }
    - { name: 'jose', group: 'admin', groups: 'wheel', uid: '9000', state: 'absent' }

Looking at the user jose, you may recognize that state: 'absent' deletes this user account, and you may be wondering why you need to include all the other parameters when you're just removing him. It's because this is a good place to keep documentation of important changes for audits or security compliance. By storing the roles in Git as your source of truth, you can go back and look at the old versions in Git if you later need to answer questions about why changes were made.

To deploy SSH keys for some of the users, you can use the same type of looping as in the last example:

- name: copy admin1 and dbadmin ssh keys
  authorized_key:
    user: "{{ item.user }}"
    key: "{{ item.key }}"
    state: "{{ item.state }}"
    comment: "{{ item.comment }}"
  loop:
    - { user: 'admin1', key: "{{ lookup('file', '/data/test_temp_key.pub') }}", state: 'present', comment: 'admin1 key' }
    - { user: 'dbadmin', key: "{{ lookup('file', '/data/vm_temp_key.pub') }}", state: 'absent', comment: 'dbadmin key' }

Here, we specify the user, how to find the key by using lookup, the state, and a comment describing the purpose of the key.

Installing packages
Package installation can vary depending on the packaging system you are using. You can use Ansible facts [8] to determine which module to use. Ansible does offer a generic module called package [9] that uses ansible_pkg_mgr and calls the proper package manager for the system. For example, if you're using Fedora, the package module will call the DNF package manager.

The package module will work if you're doing a simple installation of packages. If you're doing more complex work, you will have to use the correct module for your system. For example, if you want to ignore GPG keys and install all the security packages on a RHEL-based system, you need to use the yum module. You will have different options depending on your packaging module [10], but they usually offer more parameters than Ansible's generic package module. Here is an example using the package module:

- name: install a package
  package:
    name: nginx
    state: installed

The following uses the yum module to install NGINX, disable gpg_check from the repo, ignore the repository's certificates, and skip any broken packages that might show up:

- name: install a package
  yum:
    name: nginx
    state: installed
    disable_gpg_check: yes
    validate_certs: no
    skip_broken: yes

Here is an example using Apt [11]. The Apt module tells Ansible to uninstall NGINX and not update the cache:

- name: install a package
  apt:
    name: nginx
    state: absent
    update_cache: no

You can use loop when installing packages, but the items are processed individually if you pass a list.
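A loop-based install along the lines just described might look like this sketch (the package names are illustrative):

```yaml
# Each item is handled as a separate module invocation
- name: install a list of packages one at a time
  package:
    name: "{{ item }}"
    state: present
  loop:
    - nginx
    - postgresql-server
```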
Infrastructure as Code (IaC).

Developers are always testing, and constant feedback is necessary to drive development. If it takes too long to get feedback on a change, your steps might be too large, making errors hard to spot. Baby steps and fast feedback are the essence of TDD (test-driven development). But how do you apply this approach to the development of ad hoc playbooks or roles?

When you're developing an automation, a typical workflow would start with a new virtual machine. I will use Vagrant [1] to illustrate this idea, but you could use libvirt [2], Docker [3], VirtualBox [4], or VMware [5], an instance in a private or public cloud, or a virtual machine provisioned in your data center hypervisor (oVirt [6], Xen [7], or VMware, for example). When deciding which virtual machine to use, balance feedback speed and similarity with your real target environment.

The minimal start point with Vagrant would be:

vagrant init centos/7 # or any other box

Then add Ansible provisioning to your Vagrantfile:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end

The development workflow then becomes:

1. vagrant up
2. Edit playbook.
3. vagrant provision
4. vagrant ssh to verify VM state.
5. Repeat steps 2 to 4.

Occasionally, the VM should be destroyed and brought up again (vagrant destroy -f; vagrant up) to increase the reliability of your playbook (i.e., to test if your automation is working end-to-end).

Although this is a good workflow, you're still doing all the hard work of connecting to the VM and verifying that everything is working as expected. When tests are not automated, you'll face issues similar to those when you do not automate your infrastructure. Luckily, tools like Testinfra [8] and Goss [9] can help automate these verifications.

I will focus on Testinfra, as it is written in Python and is the default verifier for Molecule. The idea is pretty simple: Automate your verifications using Python:

def test_nginx_is_installed(host):
    nginx = host.package("nginx")
    assert nginx.is_installed
    assert nginx.version.startswith("1.2")

def test_nginx_running_and_enabled(host):
    nginx = host.service("nginx")
    assert nginx.is_running
    assert nginx.is_enabled
Tests can then be executed against a server over SSH:

py.test --connection=ssh --hosts=server

In short, during infrastructure automation development, the challenge is to provision new infrastructure, execute playbooks against them, and verify that your changes reflect the state you declared in your playbooks.

What can Testinfra verify?
• Infrastructure is up and running from the user's point of view (e.g., HTTPD or Nginx is answering requests, and MariaDB or PostgreSQL is handling SQL queries).
• OS service is started and enabled
• A process is listening on a specific port
• A process is answering requests
• Configuration files were correctly copied or generated from templates
• Virtually anything you do to ensure that your server state is correct

What safeties do these automated tests provide?
• Perform complex changes or introduce new features without breaking existing behavior (e.g., it still works in RHEL-based distributions after adding support for Debian-based systems).
• Refactor/improve the codebase when new versions of Ansible are released and new best practices are introduced.

What we've done with Vagrant, Ansible, and Testinfra so far is easily mapped to the steps described in the Four-Phase Test [10] pattern, a way to structure tests that makes the test objective clear. It is composed of the following phases: Setup, Exercise, Verify, and Teardown:

• Setup: Prepares the environment for the test execution (e.g., spins up new virtual machines):
vagrant up
• Exercise: Effectively executes the code against the system under test (i.e., the Ansible playbook):
vagrant provision
• Verify: Verifies the previous step's output:
py.test (with Testinfra)
• Teardown: Returns to the state prior to Setup:
vagrant destroy

The same idea we used for an ad hoc playbook could be applied to role development and testing, but do you need to do all these steps every time you develop something new? What if you want to use containers, or an OpenStack, instead of Vagrant? What if you'd rather use Goss than Testinfra? How do you run this continuously for every change in your code? Is there a simpler, faster way to develop our playbooks and roles with automated tests?

Molecule
Molecule [11] helps develop roles using tests. The tool can even initialize a new role with test cases: molecule init role --role-name foo

Molecule is flexible enough to allow you to use different drivers for infrastructure provisioning, including Docker, Vagrant, OpenStack, GCE, EC2, and Azure. It also allows the use of different server verification tools, including Testinfra and Goss.

Its commands ease the execution of tasks commonly used during the development workflow:
• lint - Executes yaml-lint, ansible-lint, and flake8, reporting failure if there are issues
• syntax - Verifies the role for syntax errors
• create - Creates an instance with the configured driver
• prepare - Configures instances with preparation playbooks
• converge - Executes playbooks targeting hosts
• idempotence - Executes a playbook twice and fails in case of changes in the second run (non-idempotent)
• verify - Executes server state verification tools (testinfra or goss)
• destroy - Destroys instances
• test - Executes all the previous steps

The login command can be used to connect to provisioned servers for troubleshooting purposes.

Step by step
How do you go from no tests at all to a decent codebase being executed for every change/commit?

1. virtualenv (optional)

The virtualenv tool creates isolated environments, while virtualenvwrapper is a collection of extensions that facilitate the use of virtualenv. These tools prevent dependencies and conflicts between Molecule and other Python packages on your machine.

sudo pip install virtualenvwrapper
export WORKON_HOME=~/envs
source /usr/local/bin/virtualenvwrapper.sh
mkvirtualenv molecule

2. Molecule

Install Molecule with the Docker driver:

pip install molecule ansible docker

Generate a new role with test scenarios:

molecule init role -r role_name
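For context, a scenario generated this way is driven by a molecule.yml file. A minimal sketch for the Docker driver (assuming Molecule v2-era defaults; the file Molecule generates for you may differ):

```yaml
---
dependency:
  name: galaxy        # resolve role dependencies from Ansible Galaxy
driver:
  name: docker        # provision test instances as Docker containers
platforms:
  - name: instance
    image: centos:7
provisioner:
  name: ansible       # the converge step runs this provisioner
verifier:
  name: testinfra     # the verify step runs Testinfra tests
```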
Serverless is another step in the direction of managed services and plays nice with Ansible's agentless architecture.

- name: Pull all new resources back in as a variable
  cloudformation_facts:
    stack_name: prod-vpc
  register: network_stack

For serverless applications, you'll definitely need a complement of Lambda functions in addition to any other DynamoDB tables, S3 buckets, and whatever else. Fortunately, by using the lambda modules, Lambda functions can be created in the same way as the stack from the last tasks:

- lambda:
    name: sendReportMail
    zip_file: "{{ deployment_package }}"
    runtime: python3.6
    handler: report.send
    memory_size: 1024
    role: "{{ iam_exec_role }}"
  register: new_function

If you have another tool that you prefer for shipping the serverless parts of your application, that works as well. The open source Serverless Framework [4] has its own Ansible module that will work just as well:

- serverless:
    service_path: '{{ project_dir }}'
    stage: dev
  register: sls

- name: Serverless uses CloudFormation under the hood, so you can easily pull info back into Ansible
  cloudformation_facts:
    stack_name: "{{ sls.service_name }}"
  register: sls_facts

That's not quite everything you need, since the serverless project also must exist, and that's where you'll do the heavy lifting of defining your functions and event sources. For this example, we'll make a single function that responds to HTTP requests. The Serverless Framework uses YAML as its config language (as does Ansible), so this should look familiar.

# serverless.yml
service: fakeservice

provider:
  name: aws
  runtime: python3.6

functions:
  main:
    handler: test_function.handler
    events:
      - http:
          path: /
          method: get

Links
[1] https://www.ansible.com/
[2] https://en.wikipedia.org/wiki/Serverless_computing
[3] https://aws.amazon.com/lambda/
[4] https://serverless.com/

Author
Ryan is a Senior Software Engineer and spends most of his time on cloud-adjacent Open Source tooling, including Ansible and the Serverless Framework.

Adapted from "Using Ansible for deploying serverless applications" on Opensource.com, published under a Creative Commons Attribution Share-Alike 4.0 International License at https://opensource.com/article/17/8/ansible-serverless-applications.
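The facts registered this way can feed later tasks. A sketch of reading a stack output (the ServiceEndpoint output name is a hypothetical Serverless Framework output, not from the original):

```yaml
# cloudformation_facts exposes each stack's data under the top-level
# "cloudformation" fact, keyed by stack name, including stack_outputs
- name: show an output from the deployed stack
  debug:
    msg: "Endpoint: {{ cloudformation[sls.service_name].stack_outputs['ServiceEndpoint'] }}"
```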
4 Ansible playbooks
you should try
by Daniel Oh
roles:
  - wtanaka.unzip
  - zanini.sonar

istio:
  # Whether to delete resources that might exist from previous Istio installations
  delete_resources: false

  # Install istio with or without istio-auth module
  auth: false

Author
Daniel Oh: DevOps Evangelist, CNCF Ambassador, Developer, Public Speaker, Writer, Opensource.com Author
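These fragments appear to come from two separate example playbooks: one applying the wtanaka.unzip and zanini.sonar roles, and one configuring an Istio install via the istio variables. As a sketch, variables like these are typically supplied when applying a role (the hosts group, role name, and layout here are assumptions, not from the original):

```yaml
---
- hosts: istio-hosts
  vars:
    istio:
      delete_resources: false   # clean up leftovers from previous Istio installs
      auth: false               # install without the istio-auth module
  roles:
    - istio
```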
Get Involved
If you find these articles useful, get involved! Your feedback helps improve the status
quo for all things DevOps.
Contribute to the Opensource.com DevOps resource collection, and join the team of
DevOps practitioners and enthusiasts who want to share the open source stories
happening in the world of IT.
The Open Source DevOps team is looking for writers, curators, and others who can help
us explore the intersection of open source and DevOps. We’re especially interested in
stories on the following topics:
• DevOps practical how-tos
• DevOps and open source
• DevOps and talent
• DevOps and culture
• DevSecOps/rugged software
Additional Resources
Write for Us
Would you like to write for Opensource.com? Our editorial calendar includes upcoming themes,
community columns, and topic suggestions: https://opensource.com/calendar
Learn more about writing for Opensource.com at: https://opensource.com/writers
We're always looking for open source-related articles on the following topics:
Big data: Open source big data tools, stories, communities, and news.
Command-line tips: Tricks and tips for the Linux command-line.
Containers and Kubernetes: Getting started with containers, best practices,
security, news, projects, and case studies.
Education: Open source projects, tools, solutions, and resources for educators,
students, and the classroom.
Geek culture: Open source-related geek culture stories.
Hardware: Open source hardware projects, maker culture, new products, howtos,
and tutorials.
Machine learning and AI: Open source tools, programs, projects and howtos for
machine learning and artificial intelligence.
Programming: Share your favorite scripts, tips for getting started, tricks for
developers, tutorials, and tell us about your favorite programming languages and
communities.
Security: Tips and tricks for securing your systems, best practices, checklists,
tutorials and tools, case studies, and security-related project updates.
Keep in touch!
Sign up to receive roundups of our best articles,
giveaway alerts, and community announcements.
Visit opensource.com/email-newsletter to subscribe.