The Book of Kubernetes: A Comprehensive Guide to Container Orchestration

Ebook, 684 pages, 6 hours

About this ebook

This hands-on guidebook to the inner workings of containers peels back the layers to provide a deep understanding of what a container is, how containerization changes the way programs run, and how Kubernetes provides computing, networking, and storage.

Containers ensure that software runs reliably no matter where it’s deployed, and Kubernetes lets you manage all of your containers from a single control plane. In this comprehensive tour of the open-source platform, each chapter includes a set of example scripts with just enough automation to start your container exploration with ease. Beginning with an overview of modern architecture and the benefits of orchestration, you'll quickly learn how to create containers; how to deploy, administer and debug Kubernetes clusters all the way down to the OS; and how container networking works at the packet level across multiple nodes in a cluster.
Language: English
Release date: Sep 6, 2022
ISBN: 9781718502659

    Book preview

    The Book of Kubernetes - Alan Hohn

    INTRODUCTION

    Containers and Kubernetes together are changing the way that applications are architected, developed, and deployed. Containers ensure that software runs reliably no matter where it’s deployed, and Kubernetes lets you manage all of your containers from a single control plane.

    This book is designed to help you take full advantage of these essential new technologies, using hands-on examples not only to try out the major features but also to explore how each feature works. In this way, beyond simply being ready to deploy an application to Kubernetes, you’ll gain the skills to architect applications to be performant and reliable in a Kubernetes cluster, and to quickly diagnose problems when they arise.

    The Approach

    The biggest advantage of a Kubernetes cluster is that it hides the work of running containers across multiple hosts behind an abstraction layer. A Kubernetes cluster is a black box that runs what we tell it to run, with automatic scaling, failover, and upgrades to new versions of our application.

    Even though this abstraction makes it easier to deploy and manage applications, it also makes it difficult to understand what a cluster is doing. For this reason, this book presents each feature of container runtimes and Kubernetes clusters from a debugging perspective. Every good debugging session starts by treating the application as a black box and observing its behavior, but it doesn’t end there. Skilled problem solvers know how to open the black box, diving below the current abstraction layer to see how the program runs, how data is stored, and how traffic flows across the network. Skilled architects use this deep knowledge of a system to avoid performance and reliability issues. This book provides the detailed understanding of containers and Kubernetes that only comes from exploring not only what these technologies do but also how they work.

    In Part I, we’ll begin by running a container, but then we’ll dive into the container runtime to understand what a container is and how we can simulate a container using normal operating system commands. In Part II, we’ll install a Kubernetes cluster and deploy containers to it. We’ll also see how the cluster works, including how it interacts with the container runtime and how packets flow from container to container across the host network. The purpose is not to duplicate the reference documentation to show every option offered by every feature but to demonstrate how each feature is implemented so that all that documentation will make sense and be useful.

    A Kubernetes cluster is complicated, so this book includes extensive hands-on examples, with enough automation to allow you to explore each chapter independently. This automation, which is available at https://github.com/book-of-kubernetes/examples, is published under a permissive open source license, so you can explore, experiment, and use it in your own projects.

    Running Examples

    In many of this book’s example exercises, you’ll be combining multiple hosts together to make a cluster, or working with low-level features of the Linux kernel. For this reason, and to help you feel more comfortable with experimentation, you’ll be running examples entirely on temporary virtual machines. That way, if you make a mistake, you can quickly delete the virtual machine and start over.

    The example repository for this book is available at https://github.com/book-of-kubernetes/examples. All of the instructions for setting up to run examples are provided in a README.md file within the setup folder of the example repository.
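
    If you want to get a head start, here is a quick sketch of cloning the repository and finding those instructions:

    git clone https://github.com/book-of-kubernetes/examples.git
    cd examples/setup
    less README.md    # platform-specific setup instructions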

    What You Will Need

    Even though you’ll be working in virtual machines, you’ll need a control machine to start from that can run Windows, macOS, or Linux. It can even be a Chromebook that supports Linux. If you are running Windows, you’ll need to use the Windows Subsystem for Linux (WSL) in order to get Ansible working. See the README.md in the setup folder for instructions.

    Run in the Cloud or Local

    To make these examples as accessible as possible, I’ve provided automation to run them either using Vagrant or Amazon Web Services (AWS). If you have access to a Windows, macOS, or Linux computer with at least eight cores and 8GB of memory, try installing VirtualBox and Vagrant and work with local virtual machines. If not, you can set yourself up to work with AWS.
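
    If you go the local route, the day-to-day Vagrant workflow looks roughly like this (a sketch only; the actual Vagrantfile layout and virtual machine names come from the example repository, so check each chapter’s README.md):

    # From a directory containing a Vagrantfile:
    vagrant up             # create and provision the temporary virtual machines
    vagrant ssh host01     # open a shell on one of them
    vagrant destroy -f     # throw everything away for a fresh start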

    We use Ansible to perform AWS setup and automate some of the tedious steps. Each chapter includes a separate Ansible playbook that makes use of common roles and collections. This means that you can work examples from chapter to chapter, starting with a fresh installation each time. In some cases, I’ve also provided an extra provisioning playbook that you can optionally use to skip some of the detailed installation steps and get straight to the learning. See the README.md in each chapter’s directory for more information.
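
    As a hypothetical illustration of that pattern (the real playbook and inventory names are in each chapter’s README.md, so treat these filenames as placeholders):

    cd <chapter-directory>
    ansible-playbook playbook.yaml      # placeholder name: set up this chapter's hosts
    ansible-playbook provision.yaml     # placeholder name: optional extra provisioning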

    Terminal Windows

    After you’ve used Ansible to provision your virtual machines, you’ll need to get at least one terminal window connected to run commands. The README.md file in each chapter will tell you how to do that. Before running any examples, you’ll first need to become the root user, as follows:

    sudo su -

    This will give you a root shell and set up your environment and home directory to match.

    RUNNING AS ROOT

    If you’ve worked with Linux before, you probably have a healthy aversion to working as root on a regular basis, so it might surprise you that all of the examples in this book are run as the root user. This is a big advantage of using temporary virtual machines and containers; when we act as the root user, we are doing so in a temporary, confined space that can’t reach out and affect anything else.

    As you move from learning about containers and Kubernetes to running applications in production, you’ll be applying security controls to your cluster that will limit administrative access and will ensure that containers cannot break out of their isolated environment. This often includes configuring your containers so that they run as a non-root user.

    In some examples, you’ll need to open multiple terminal windows in order to leave one process running while you inspect it from another terminal. How you do that is up to you; most terminal applications support multiple tabs or multiple windows. If you need a way to open multiple terminals within a single tab, try exploring a terminal multiplexer application. All of the temporary virtual machines used in the examples come with both screen and tmux installed and ready to use.
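
    For example, a minimal tmux session looks like this (these are the default key bindings):

    tmux            # start a new session with one window
    # Inside tmux: Ctrl-b c opens a new window, Ctrl-b n switches to the next
    # window, and Ctrl-b d detaches while leaving your processes running.
    tmux attach     # reattach to the session you detached from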

    PART I

    MAKING AND USING CONTAINERS

    Containers are essential to modern application architecture. They simplify packaging, deploying, and scaling application components. They enable building reliable and resilient applications that handle failure gracefully. However, containers can also be confusing. They look like completely different systems, with separate hostnames, networking, and storage, but they do not have many of the features of a separate system, such as a separate console or system services. To understand how containers look like separate systems without really being separate, let’s explore containers, container engines, and Linux kernel features.

    1

    WHY CONTAINERS MATTER

    It’s a great time to be a software developer. Creating a brand-new application and making it available to millions of people has never been easier. Modern programming languages, open source libraries, and application platforms make it possible to write a small amount of code and end up with lots of functionality. However, although it’s easy to get started and create a new application quickly, the best application developers are those who move beyond treating the application platform as a black box and really understand how it works. Creating a reliable, resilient, and scalable application requires more than just knowing how to create a Deployment in the browser or on the command line.

    In this chapter, we’ll look at application architecture in a scalable, cloud native world. We will show why containers are the preferred way to package and deploy application components, and how container orchestration addresses key needs for containerized applications. We’ll finish with an example application deployed to Kubernetes to give you an introductory glimpse into the power of these technologies.

    Modern Application Architecture

    The main theme of modern software applications is scale. We live in a world of applications with millions of simultaneous users. What is remarkable is the ability of these applications to achieve not only this scale but also a level of stability such that an outage makes headlines and serves as fodder for weeks or months of technical analysis.

    With so many modern applications running at large scale, it can be easy to forget that a lot of hard work goes into architecting, building, deploying, and maintaining applications of this caliber, whether the scale they’re designed for is thousands, millions, or billions of users. Our job in this chapter is to identify what we need from our application platform to run a scalable, reliable application, and to see how containerization and Kubernetes meet those requirements. We’ll start by looking at three key attributes of modern application architecture. Then we’ll move on to looking at three key benefits these attributes bring.

    Attribute: Cloud Native

    There are lots of ways to define cloud native technologies (and a good place to start is the Cloud Native Computing Foundation at https://cncf.io). I like to start with an idea of what the cloud is and what it enables so that we can understand what kind of architecture can make best use of it.

    At its heart, the cloud is an abstraction. We talked about abstractions in the introduction, so you know that abstractions are essential to computing, but we also need a deep understanding of our abstractions to use them properly. In the case of the cloud, the provider is abstracting away the real physical processors, memory, storage, and networking, allowing cloud users to simply declare a need for these resources and have them provisioned on demand. To have a cloud native application, then, we need an application that can take advantage of that abstraction. As much as possible, the application shouldn’t be tied to a specific host or a specific network layout, because we don’t want to constrain our flexibility in how application components are divided among hosts.

    Attribute: Modular

    Modularity is nothing new to application architecture. The goal has always been high cohesion, where everything within a module relates to a single purpose, and low coupling, where modules are organized to minimize intermodule communication. However, even though modularity remains a key design goal, the definition of what makes a module is different. Rather than just treat modularity as a way of organizing the code, modern application architecture today prefers to carry modularity into the runtime, providing each module with a separate operating system process and discouraging the use of a shared filesystem or shared memory for communication. Because modules are separate processes, communication between modules is standard network (socket) communication.

    This approach seems wasteful of hardware resources. It is more compact and faster to share memory than it is to copy data over a socket. But there are two good reasons to prefer separate processes. First, modern hardware is fast and getting faster, and it would be a form of premature optimization to imagine that sockets are not fast enough for our application. Second, no matter how large a server we have, there is going to be a limit to how many processes we can fit on it, so a shared memory model ultimately limits our ability to grow.

    Attribute: Microservice-Based

    Modern application architecture is based on modules in the form of separate processes—and these individual modules tend to be very small. In theory, a cloud can provide us with virtual servers that are as powerful as we need; however, in practice, using a few powerful servers is more expensive and less flexible than many small servers. If our modules are small enough, they can be deployed to cheap commodity servers, which means that we can leverage our cloud provider’s hardware to best advantage. Although there is no single answer as to how small a module needs to be in order to be a microservice, small enough that we can be flexible regarding where it is deployed is a good first rule.

    A microservice architecture also has practical advantages for organizing teams. Ever since Fred Brooks wrote The Mythical Man-Month, architects have understood that organizing people is one of the biggest challenges to developing large, complex systems. Building a system from many small pieces reduces the complexity of testing but also makes it possible to organize a large team of people without everyone getting in everyone else’s way.

    WHAT ABOUT APPLICATION SERVERS?

    The idea of modular services has a long history, and one popular way to implement it was building modules to run in an application server, such as a Java Enterprise environment. Why not then just continue to follow that pattern for applications?

    Although application servers were successful for many uses, they don’t have the same degree of isolation that a microservice architecture has. As a result, there are more issues with interdependency, leading to more complex testing and reduced team independence. Additionally, the typical model of having a single application server per host, with many applications deployed to it and sharing the same process space, is much less flexible than the containerized approaches you will see in this book.

    This is not to say that you should immediately throw away your application server architecture to use containers. There are lots of benefits to containerization for any architecture. But as you adopt a containerized architecture, over time it will make sense for you to move your code toward a true microservice architecture to take best advantage of what containers and Kubernetes offer.

    We’ve looked at three key attributes of modern architecture. Now, let’s look at three key benefits that result.

    Benefit: Scalability

    Let’s begin by envisioning the simplest application possible. We create a single executable that runs on a single machine and interacts with only a single user at a time. Now, suppose that we want to grow this application so that it can interact with thousands or millions of users at once. Obviously, no matter how powerful a server we use, eventually some computing resource will become a bottleneck. It doesn’t matter whether the bottleneck is processing, or memory, or storage, or network bandwidth; the moment we hit that bottleneck, our application cannot handle any additional users without hurting performance for others.

    The only possible way to solve this issue is to stop sharing the resource that caused the bottleneck. This means that we need to find a way to distribute our application across multiple servers. But if we’re really scaling up, we can’t stop there. We need to distribute across multiple networks as well, or we’ll hit the limit of what one network switch can do. And eventually, we will even need to distribute geographically, or we’ll saturate the broader network.

    To build applications with no limit to scalability, we need an architecture that can run additional application instances at will. And because an application is only as slow as its slowest component, we need to find a way to scale everything, including our data stores. It’s obvious that the only way to do this effectively is to create our application from many independent pieces that are not tied to specific hardware. In other words, cloud native microservices.

    Benefit: Reliability

    Let’s go back to our simplest possible application. In addition to scalability limits, it has another flaw. It runs on one server, and if that server fails, the entire application fails. Our application is lacking reliability. As before, the only possible way to solve this issue is to stop sharing the resource that could potentially fail. Fortunately, when we start distributing our application across many servers, we have the opportunity to avoid a single point of failure in the hardware that would bring down our application. And as an application is only as reliable as its least reliable component, we need to find a way to distribute everything, including storage and networks. Again, we need cloud native microservices that are flexible about where they are run and about how many instances are running at once.

    Benefit: Resilience

    There is a third, subtler advantage to cloud native microservice architecture. This time, imagine an application that runs on a single server, but it can easily be installed as a single package on as many servers as we like. Each instance can serve a new user. In theory, this application would have good scalability, given that we can always install it on another server. And overall, the application could be said to be reliable because a failure of a single server is going to affect only that one user, whereas the others can keep running as normal.

    What is missing from this approach is the concept of resilience, or the ability of an application to respond meaningfully to failure. A truly resilient application can handle a hardware or software failure somewhere in the application without an end user noticing at all. And although separate, unrelated instances of this application keep running when one instance fails, we can’t really say that the application exhibits resilience, at least not from the perspective of the unlucky user with the failed system.

    On the other hand, if we construct our application out of separate microservices, each of which has the ability to communicate over a network with other microservices on any server, the loss of a single server might cost us several microservice instances, but end users can be moved to other instances on other servers transparently, such that they don’t even notice the failure.

    Why Containers

    I’ve made modern application architecture with its fancy cloud native microservices sound pretty appealing. Engineering is full of trade-offs, however, so experienced engineers will suspect that there must be some pretty significant trade-offs, and, of course, there are.

    It’s very difficult to build an application from lots of small pieces. Organizing teams around microservices so that they can work independently from one another might be great, but when it comes time to put those together into a working application, the sheer number of pieces means worrying about how to package them up, how to deliver them to the runtime environment, how to configure them, how to provide them with (potentially conflicting) dependencies, how to update them, and how to monitor them to make sure they are working.

    This problem only grows worse when we consider the need to run multiple instances of each microservice. Now, we need a microservice to be able to find a working instance of another microservice, balancing the load across all of the working instances. We need that load balancing to reconfigure itself immediately if we have a hardware or software failure. We need to fail over seamlessly and retry failed work in order to hide that failure from the end user. And we need to monitor not just each individual service, but how all of them are working together to get the job done. After all, our users don’t care if 99 percent of our microservices are working correctly if the 1 percent failure prevents them from using our application.

    We have lots of problems to solve if we want to build an application out of many individual microservices, and we do not want each of our microservice teams working those problems, or they would never have time to write code! We need a common way to manage the packaging, deployment, configuration, and maintenance of our microservices. Let’s look at two categories of required attributes: those that apply to a single microservice, and those that apply to multiple microservices working together.

    Requirements for Containers

    For a single microservice, we need the following:

    Packaging Bundle the application for delivery, which needs to include dependencies so that the package is portable and we avoid conflicts between microservices.

    Versioning Uniquely identify a version. We need to update microservices over time, and we need to know what version is running.

    Isolation Keep microservices from interfering with one another. This allows us to be flexible about what microservices are deployed together.

    Fast startup Start new instances rapidly. We need this to scale and respond to failures.

    Low overhead Minimize required resources to run a microservice in order to avoid limits on how small a microservice can be.

    Containers are designed to address exactly these needs. Containers provide isolation together with low overhead and fast startup. And, as we’ll see in Chapter 5, a container runs from a container image, which provides a way to package an application with its dependencies and to uniquely identify the version of that package.
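
    As a preview of what that looks like in practice (container images are covered properly in Chapter 5, and the Docker commands later in this chapter), here is a sketch of packaging a made-up microservice, with the version recorded in the image tag; the application, filenames, and registry are hypothetical:

    # A (hypothetical) Dockerfile for the microservice might contain:
    #   FROM alpine:3
    #   RUN apk add --no-cache python3
    #   COPY app.py /app/app.py
    #   CMD ["python3", "/app/app.py"]
    # Build and publish the package under a unique name and version:
    docker build -t registry.example.com/todo-api:1.2.3 .
    docker push registry.example.com/todo-api:1.2.3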

    Requirements for Orchestration

    For multiple microservices working together, we need:

    Clustering Provide processing, memory, and storage for containers across multiple servers.

    Discovery Provide a way for one microservice to find another. Our microservices might run anywhere on the cluster, and they might move around.

    Configuration Separate configuration from runtime, allowing us to reconfigure our application without rebuilding and redeploying our microservices.

    Access control Manage authorization to create containers. This ensures that the right containers run, and the wrong ones don’t.

    Load balancing Spread requests among working instances in order to avoid the need for end users or other microservices to track all microservice instances and balance the load themselves.

    Monitoring Identify failed microservice instances. Load balancing won’t work well if traffic is going to failed instances.

    Resilience Automatically recover from failures. If we don’t have this ability, a chain of failures could kill our application.

    These requirements come into play only when we are running containers on multiple servers. It’s a different problem from just packaging up and running a single container. To address these needs, we require a container orchestration environment. A container orchestration environment such as Kubernetes allows us to treat multiple servers as a single set of resources to run containers, dynamically allocating containers to available servers and providing distributed communication and storage.
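
    To make that concrete, here is a brief sketch, jumping ahead to the kubectl client you’ll see later in this chapter, of asking a cluster for several instances of a web server and letting it decide where each one runs:

    kubectl create deployment hello-web --image=nginx
    kubectl scale deployment hello-web --replicas=3
    kubectl get pods -o wide     # shows which node each instance landed on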

    Running Containers

    By now, hopefully you’re excited by the possibilities of building an application using containerized microservices and Kubernetes. Let’s walk through the basics so that you can see what these ideas look like in practice, providing a foundation for the deeper dive into container technology that you’ll find in the rest of this book.

    What Containers Look Like

    In Chapter 2, we’ll look at the difference between a container platform and a container runtime, and we’ll run containers using multiple container runtimes. For now, let’s begin with a simple example running in the most popular container platform, Docker. Our goal is to learn the basic Docker commands, which align to universal container concepts.

    Running a Container

    The first command is run, which creates a container and runs a command inside it. We will tell Docker the name of the container image to use. We discuss container images more in Chapter 5; for now, it’s enough to know that a container image provides a unique name and version so that Docker knows exactly what to run. Let’s get started using the example for this chapter.

    NOTE

    The example repository for this book is at https://github.com/book-of-kubernetes/examples. See Running Examples on page xx for details on getting set up.

    A key idea for this section is that containers look like a completely separate system. To illustrate this, before we run a container, let’s look at the host system:

    root@host01:~# cat /etc/os-release

     

    NAME=Ubuntu

    ...

    root@host01:~# ps -ef

     

    UID          PID    PPID  C STIME TTY          TIME CMD

    root          1      0  0 12:59 ?        00:00:07 /sbin/init

    ...

    root@host01:~# uname -v

     

    #...-Ubuntu SMP ...

    root@host01:~# ip addr

     

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ...

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

        inet 127.0.0.1/8 scope host lo

          valid_lft forever preferred_lft forever

        inet6 ::1/128 scope host

          valid_lft forever preferred_lft forever

    ...

    3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel ...

        link/ether 08:00:27:bf:63:1f brd ff:ff:ff:ff:ff:ff

        inet 192.168.61.11/24 brd 192.168.61.255 scope global enp0s8

          valid_lft forever preferred_lft forever

        inet6 fe80::a00:27ff:febf:631f/64 scope link

          valid_lft forever preferred_lft forever

    ...

    The first command looks at a file called /etc/os-release, which has information about the installed Linux distribution. In this case, our example virtual machine is running Ubuntu. The ps listing shows a full set of system processes, beginning with /sbin/init as process ID (PID) 1, and the uname -v output matches the distribution, showing an Ubuntu-based Linux kernel. Finally, we list network interfaces and see an IP address of 192.168.61.11.

    The example setup steps automatically installed Docker, so we have it ready to go. First, let’s download and start a Rocky Linux container with a single command:

    root@host01:~# docker run -ti rockylinux:8

     

    Unable to find image 'rockylinux:8' locally

    8: Pulling from library/rockylinux

    ...

    Status: Downloaded newer image for rockylinux:8

    We use -ti in our docker run command to tell Docker that we need an interactive terminal to run commands. The only other parameter to docker run is the container image, rockylinux:8, which specifies the name rockylinux and the version 8. Because we don’t provide a command to run, the default bash command for that container image is used.

    Now that we have a shell prompt inside the container, we can run a few commands and then use exit to leave the shell and stop the container:

    ➊ [root@18f20e2d7e49 /]# cat /etc/os-release

    ➋ NAME=Rocky Linux

      ...

    ➌ [root@18f20e2d7e49 /]# yum install -y procps iproute

     

      ...

      [root@18f20e2d7e49 /]# ps -ef

     

      UID          PID    PPID  C STIME TTY          TIME CMD

    ➍ root          1      0  0 13:30 pts/0    00:00:00 /bin/bash

      root          19      1  0 13:46 pts/0    00:00:00 ps -ef

      [root@18f20e2d7e49 /]# ip addr

     

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ...

      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

      inet 127.0.0.1/8 scope host lo

        valid_lft forever preferred_lft forever

    ➎ 18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...

      link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

      inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0

        valid_lft forever preferred_lft forever

      [root@18f20e2d7e49 /]# uname -v

    ➏ #...-Ubuntu SMP ...

      [root@18f20e2d7e49 /]# exit

    When we run commands within our container, it looks like we are running in a Rocky Linux system. Compared to the host system, there are multiple differences:

    A different hostname in the shell prompt ➊ (18f20e2d7e49 for mine, though yours will be different)

    Different filesystem contents ➋, including basic files like /etc/os-release

    The use of yum ➌ to install packages, and the need to install packages even for basic commands

    A limited set of running processes, with no base system services and our bash shell ➍ as process ID (PID) 1

    Different network devices ➎, including a different MAC address and IP address

    Strangely, however, when we run uname -v, we see the exact same Ubuntu Linux kernel ➏ as when we were on the host. Clearly, a container is not a wholly separate system as we might otherwise believe.

    Images and Volume Mounts

    At first glance, a container looks like a mix between a regular process and a virtual machine. And the way we interact with Docker only deepens that impression. Let’s illustrate that by running an Alpine Linux container. We’ll start by pulling the container image, which feels a lot like downloading a virtual machine image:

    root@host01:~# docker pull alpine:3

     

    3: Pulling from library/alpine

    ...

    docker.io/library/alpine:3

    Next, we’ll run a container from the image. We’ll use a volume mount to see files from the host, a common task with a virtual machine. However, we’ll also tell Docker to specify an environment variable, which is the kind of thing we would do when running a regular process:

    root@host01:~# docker run -ti -v /:/host -e hello=world alpine:3

     

    / # hostname

     

    75b51510ab61

    We can print the contents of /etc/os-release inside the container, as before with Rocky Linux:

    / # cat /etc/os-release

    NAME=Alpine Linux

    ID=alpine

    ...

    However, this time we can also print the host’s /etc/os-release file because the host filesystem is mounted at /host:

    / # cat /host/etc/os-release

    NAME=Ubuntu

    ...

    And finally, within the container we also have access to the environment variable we passed in:

    / # echo $hello

     

    world

    / # exit

    This mix of ideas from virtual machines and regular processes sometimes leads new container users to ask questions like, “Why can’t I SSH into my container?” A major goal of the next few chapters is to make clear what containers really are.

    What Containers Really Are

    Despite what a container looks like, with its own hostname, filesystem, process space, and networking, a container is not a virtual machine. It does not have a separate kernel, so it cannot have separate kernel modules or device drivers. A container can have multiple processes, but they must be started explicitly by the first process (PID 1). So a container will not have an SSH server in it by default, and most containers do not have any system services running.
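
    As a small aside, the container-native replacement for SSH is simply asking the container engine for a shell inside a running container; with Docker, for example, a minimal sketch looks like this:

    docker ps                                     # find the container's ID or name
    docker exec -ti <container-id-or-name> bash   # interactive shell inside it (or sh, if the image has no bash)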

    In the next several chapters, we’ll look at how a container manages to look like a separate system while being a group of processes. For now, let’s try one more Docker example to see what a container looks like from the host system.

    First, we’ll download and run NGINX with a single command:

    root@host01:~# docker run -d -p 8080:80 nginx

     

    Unable to find image 'nginx:latest' locally

    latest: Pulling from library/nginx

    ...

    Status: Downloaded newer image for nginx:latest

    e9c5e87020372a23ce31ad10bd87011ed29882f65f97f3af8d32438a8340f936

    This example illustrates a couple of additional useful Docker commands. And again, we are mixing ideas from virtual machines and regular processes. By using the -d flag, we tell Docker to run this container in daemon mode (in the background), which is the kind of thing we would do for a regular process. Using -p 8080:80, however, brings in another concept from virtual machines, as it instructs Docker to forward port 8080 on the host to port 80 in the container, letting us connect to NGINX from the host even though the container has its own network interfaces.

    NGINX is now running in the background in a Docker container. To see it, run the following:

    root@host01:~# docker ps

     

    CONTAINER ID IMAGE ... PORTS                  NAMES

    e9c5e8702037 nginx ... 0.0.0.0:8080->80/tcp  funny_montalcini

    Because of the port forwarding, we can connect to it from our host system using curl:

    root@host01:~# curl http://localhost:8080/

     

    Welcome to nginx!

    ...

    With this example, we’re starting to see how containerization meets some of the needs we identified earlier in this chapter. Because NGINX is packaged into a container image, we can download and run it with a single command, with no concern for any conflict with anything else that might be installed on our host.

    Let’s run one more command to explore our NGINX server:

    root@host01:~# ps -fC nginx

     

    UID          PID    PPID  C STIME TTY          TIME CMD

    root      22812  22777  1 15:01 ?        00:00:00 nginx: master ...

    If NGINX were running in a virtual machine, we would not see it in a ps listing on the host system. Clearly, NGINX in a container is running as a regular process. At the same time, we didn’t need to install NGINX onto our host system to get it working. In other words, we are getting the benefits of a virtual machine approach without the overhead of a virtual machine.
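
    And because it is just a container, cleaning up takes a single command as well; using the container ID from the docker ps output above:

    docker rm -f e9c5e8702037    # stop and remove the background NGINX container in one step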

    Deploying Containers to Kubernetes

    To have load balancing and resilience in our containerized applications, we need a container orchestration framework like Kubernetes. Our example system also has a Kubernetes cluster automatically installed, with a web application and database deployed to it. As a preparation for our deep dive into Kubernetes in Part II, let’s look at that application.

    There are many different options for installing and configuring a Kubernetes cluster, with distributions available from many companies. We discuss multiple options for Kubernetes distributions in Chapter 6. For this chapter, we’ll use a lightweight distribution called K3s from a company called Rancher.

    To use a container orchestration environment like Kubernetes, we have to give up some control over our containers. Rather than executing commands directly to run containers, we’ll tell Kubernetes what containers we want it to run, and it will decide where to run each container. Kubernetes will then monitor our containers for us and handle automatic restart, failover, updates to new versions, and even autoscaling based on load. This style of configuration is called declarative.
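
    As a brief sketch of the declarative style (using the kubectl client introduced in the next section, and a hypothetical image name), we describe the desired state in a file and then hand that file to the cluster:

    # Generate a manifest describing the desired state: a Deployment that
    # should always have three replicas of a (hypothetical) image...
    k3s kubectl create deployment todo \
      --image=registry.example.com/todo-api:1.2.3 --replicas=3 \
      --dry-run=client -o yaml > todo.yaml
    # ...then apply that desired state and let Kubernetes do the work.
    k3s kubectl apply -f todo.yaml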

    Talking to the Kubernetes Cluster

    A Kubernetes cluster has an API server that we can use to get status and change the cluster configuration. We interact with the API server using the kubectl client application. K3s comes with its own embedded kubectl command that we’ll use. Let’s begin by getting some basic information about the Kubernetes cluster:

    root@host01:~# k3s kubectl version

     

    Client Version: version.Info{Major:1, ...

    Server Version: version.Info{Major:1, ...

    root@host01:~# k3s kubectl get nodes

     

    NAME    STATUS  ROLES            AGE  VERSION

    host01  Ready    control-plane...  2d    v1...

    As you can see, we’re working with a single-node Kubernetes cluster. Of course, this would not meet our needs for high availability. Most Kubernetes distributions, including K3s, support a multinode, highly available cluster, and we will look at how that works in detail in Part II.

    Application Overview

    Our example application provides a to-do list with a web interface, persistent storage, and tracking of item state. It will take several minutes for this to be running in Kubernetes, even after the automated scripts are finished. After it’s running, we can access it in a browser and should see something like Figure 1-1.

    Figure 1-1: An example application in Kubernetes

    This application is divided into two types of containers, one for each application component. A Node.js application serves files to the browser and provides a REST API. The Node.js application communicates with a PostgreSQL database. The Node.js component is stateless, so it is easy to scale up to as many instances as we need based on the number of users. In this case, our application’s Deployment asked Kubernetes for three Node.js containers:

    root@host01:~# k3s kubectl get pods
