Ebook: Kubernetes Essentials

1. Introduction to Kubernetes
2. Key Definitions and Concepts
   1. What is Kubernetes?
   2. What is Containerization?
   3. What is a Docker Container?
   4. How Kubernetes Differs from the Docker Project
   5. What is Orchestration?
   6. Features of Orchestration
   7. Key Features of Kubernetes
   8. Work Units of Kubernetes
   9. Components of Kubernetes
3. Kubernetes Concepts
   1. Pods
   2. Controllers
4. Deploying Kubernetes Manually
   1. Install Docker Engine on Ubuntu
   2. Installing Kubernetes on Ubuntu
   3. Installing etcd 2.0 on Ubuntu
   4. Installing Addons
5. Downloading Kubernetes Docker Images
   1. Setting up a Kubernetes Cluster
   2. Dockerizing the App
   3. Writing Kubernetes Manifest Files for the Sample App
   4. Understanding the Kubectl Utility
   5. Launching and Running Container Pods with Kubernetes
   6. Kubernetes - App Deployment Flow
   7. Kubernetes - Auto Scaling
   8. Destroying the Kubernetes Cluster and Pods
6. Deploying Kubernetes with Ansible
7. Provisioning Storage in Kubernetes
   1. Kubernetes Persistent Volumes
   2. Requesting Storage
   3. Using a Claim as a Volume
   4. Kubernetes and NFS
   5. Kubernetes and iSCSI
8. Troubleshooting Kubernetes and Systemd Services
   1. Kubernetes Troubleshooting Commands
   2. Networking Constraints
   3. Inspecting and Debugging Kubernetes
   4. Querying the State of Kubernetes
   5. Checking Kubernetes YAML or JSON Files
   6. Deleting Kubernetes Components
9. Kubernetes Maintenance
   1. Monitoring the Kubernetes Cluster
   2. Managing Kubernetes with Dashboard
   3. Logging the Kubernetes Cluster
   4. Upgrading Kubernetes
1. INTRODUCTION TO KUBERNETES
This chapter will give a brief overview of containers and Kubernetes
and how the two technologies play a key role in shifting to DevOps
methodologies and CI/CD (continuous integration/continuous
deployment) strategies to accelerate innovation and service delivery.
The same idea is applied in computing as containerization. Creating a container basically means putting everything your application needs in order to work, be it libraries, operating-system components or any other technology, into one unit that can run alongside others without interfering with them. What has been created can be replicated and will work in any environment, which saves time and lets other processes continue without re-installing the same components every time you spin up a virtual machine. Containerization is a virtualization strategy that arose as an alternative to the original hypervisor-based virtualization. It creates separate containers at the operating-system level, which makes it possible to share libraries, file systems and other important components, saving a lot of space compared to native virtualization, where each virtual machine keeps its own components in isolation. Several containerization technologies offer containerization tools and an API, such as Docker Engine, rkt, LXC, OpenVZ, runC and LXD; they share the same core idea but differ in tooling and scope. With these key concepts understood, Kubernetes itself can be defined easily.
Kubernetes is an active open source project founded by Google to help system developers and administrators orchestrate and manage containers in different kinds of environments, such as virtual, physical and cloud infrastructure. Currently, the Kubernetes project is hosted by the Cloud Native Computing Foundation (CNCF).
HOW KUBERNETES DIFFERS FROM THE DOCKER PROJECT
WHAT IS ORCHESTRATION?
FEATURES OF ORCHESTRATION
After building the container image you want with Docker, you can use Kubernetes or another orchestrator to automate deployment on one or more compute nodes in the cluster. In Kubernetes, interconnections between sets of containers are managed by defining Kubernetes Services. As demand for individual containers increases or decreases, Kubernetes can start or stop container pods as needed using its replication controller feature.
Containerization has become a popular framework because it is highly portable and provides a smooth migration path for legacy applications. Although containers will never be, and are not designed to be, the single solution to all enterprise workloads, they are a smart way to accelerate development, deployment and scaling of cloud-native workloads with the help of tools like Kubernetes.
• Extensibility
This is the ability of a tool to allow its capacity and capabilities to be extended without serious infrastructure changes. Users can freely extend and add services, which means they can easily add their own features such as security updates and server hardening, or other custom behaviour.
• Portability
In its broadest sense, this means the ability of an application to be moved from one machine to another: the package can run anywhere. You could be running your application on Google Cloud today, later become interested in using IBM Watson services, or run it on a cluster of Raspberry Pis in your backyard. The application-centric nature of Kubernetes allows you to package your app once and enjoy seamless migration from one platform to another.
• Self-healing
Kubernetes offers application resilience through operations it initiates itself, such as restarting an application when it crashes, auto-replicating containers and scaling automatically depending on traffic. Through service discovery, Kubernetes can learn the health of an application by evaluating its main process and exit codes, among other signals, and its healing behaviour allows it to respond effectively.
• Load balancing
Kubernetes optimizes tasks on demand by keeping them available and avoiding undue strain on resources. In the context of Kubernetes, there are two types of load balancer: internal and external. The creation of a load balancer is an asynchronous process; information about the provisioned load balancer is published in the Service's status.loadBalancer field. Traffic coming from the external load balancer is directed at the backend pods. In most cases, an external load balancer is created with a user-specified load balancer IP address; if no IP address is specified, an ephemeral IP is assigned to the load balancer.
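As a sketch of how this looks in a manifest (the service name, ports and selector here are illustrative, not taken from a later example in this book), a Service of type LoadBalancer can be declared like this:

apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  # Optional: request a specific external IP; otherwise an ephemeral one is assigned
  loadBalancerIP: 203.0.113.10
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

Once the cloud provider has provisioned the balancer, its address shows up under status.loadBalancer when you describe the Service.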
• Cluster
This is the collection of nodes (virtual machines or bare-metal servers) that provide the resources Kubernetes uses to run applications.
• Pods
Pods are the smallest units of Kubernetes. A pod can be a single container or a group of containers that work together. Generally, the containers in a pod are relatively tightly coupled. A canonical example is a pod that pulls some files and serves them, as shown in the picture below: it doesn't make sense to pull the files if you're not serving them, and it doesn't make sense to serve them if you haven't pulled them.
Application containers in a pod are in an isolated environment with
resource constraints. They all share network space, volumes, cgroups and
Linux namespaces. All containers within a pod share an IP address and
port space, hence they can find each other via 127.0.0.1 (localhost). They
can as well communicate with each other via standard inter-process
communications, e.g. SystemV semaphores/POSIX shared memory. Since
they are co-located, they are always scheduled together.
When pods are being created, they are assigned a unique ID (UID), and
scheduled to run on nodes until they are terminated or deleted. If a node
dies, pods that were scheduled to that node are deleted after a timeout
period.
• Labels
These are key/value pairs attached to objects such as pods. When containers need to be managed as a group, they are given these tags, called labels. This allows them, for example, to be exposed to the outside world to offer services. A replication controller (defined next) gives the same label to all containers created from its template. Labels make administration and management of services easy.
A client or user identifies a set of objects using a label selector. The label selector is the core grouping primitive in Kubernetes.
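For example, assuming pods carrying hypothetical app and environment labels, a client can select them with kubectl like this:

$ kubectl get pods -l app=myapp
$ kubectl get pods -l 'environment in (production, qa)'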
• Services
A Service can be defined as an abstraction on top of a number of pods. It gives clients a stable way to reach those pods and keeps access to the application optimal and fast.
• Replication Controller
A ReplicationController ensures that a specified number of pod replicas are running at any one time. It defines pods that are to be scaled horizontally: a fully defined pod is provided as a template, to which the desired replica count is added. It is the responsibility of the ReplicationController to make sure that a pod, or a homogeneous set of pods, is always up and available.
COMPONENTS OF KUBERNETES
Each of these Kubernetes components and how they work is covered in the next table. Note that it is split into two parts: the Kubernetes Master and the Kubernetes Node. For a Kubernetes Node to function in coordination with the master services, the control plane runs on the Master Node.
kube-apiserver
• This service provides an API for orchestrating the Kubernetes cluster.
• It provides the frontend to the shared state of the cluster and services all REST operations against the cluster.
Table 1: Kubernetes Master Services
Kubernetes Node Component / Role
Kube-proxy
• This service acts as a network proxy and does load balancing of services running on a single worker node.
• Kube-proxy usually runs on each node in the
cluster.
• It watches the master for Service and
Endpoints addition/removal and does load
balancing through simple UDP, TCP stream
forwarding and round-robin across a set of
backend services without the clients knowing
anything about Kubernetes or Services or
Pods.
• Kube-proxy is also responsible for
implementing a form of virtual IP for services
of types other than ExternalName.
Kubernetes Concepts
• Pods
• Controllers
To fully understand Kubernetes operations, you'll need a good foundation in the basics of Pods and Controllers. We'll refer to the diagram below while explaining these concepts.
Pods
In Kubernetes, a Pod is the smallest deployable object. It is the smallest building unit representing a running process on your cluster. A Pod can run a single container or multiple containers that need to run together.
• One container per pod - This is the most common model used in Kubernetes. In this case, a pod is a wrapper around a single container, and Kubernetes manages pods instead of interacting directly with individual containers.
Sidecar containers:
In this diagram, the sidecar container pulls updates from a git repository, and the application container then serves those files from the web server.
Ambassador containers:
As shown above, the ambassador container runs a Redis proxy service. It connects to the application container via localhost, while the proxy makes the application reachable from outside the pod.
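A minimal sketch of such a multi-container pod follows; the image names and the application itself are illustrative assumptions, not taken from this book:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: app
    image: myorg/webapp:1.0          # talks to Redis via localhost:6379
  - name: redis-ambassador
    image: myorg/redis-proxy:1.0     # proxies localhost:6379 to the real Redis backend
    ports:
    - containerPort: 6379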
Adapter containers:
How Pods manage multiple Containers
Storage
Networking
Pods creation
A controller can make your task easier by creating and managing multiple Pods for you, using a Pod template that you provide. Kubernetes controllers also manage replication, rollout and self-healing. If a Node fails, the controller can schedule the creation of an identical Pod on a different Node.
Pods Lifetime
Pods are mostly managed by a Controller and are ephemeral entities. When a Pod is created, it is scheduled to run on a Node in the cluster. The Pod keeps running on that Node until its parent process terminates (the end of its lifecycle), and the Pod object is then deleted.
Controllers
The major controller component is the ReplicationController, which works to ensure that a specified number of pod replicas are running at any one time. It makes sure that a pod, or a homogeneous set of pods, is always up and available.
When there are too many pods, the ReplicationController terminates the extra pods; if the number of pods is too low, it starts more. Scaling decisions can also be automated using application-provided metrics such as CPU utilization (Horizontal Pod Autoscaling); note that this does not apply to objects that can't be scaled, for example a DaemonSet. The advantage of having pods maintained by a ReplicationController is that if a pod fails for any reason, another pod is automatically created to replace the failing one. For this reason, it is recommended to use a ReplicationController even if you have only one pod.
$ cat replication.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: caddy
spec:
  replicas: 4
  selector:
    app: caddy
  template:
    metadata:
      name: caddy
      labels:
        app: caddy
    spec:
      containers:
      - name: caddy
        image: caddy
        ports:
        - containerPort: 80
From the above code snippet, you can see that we have specified that four copies of the Caddy web server be created. The container image to be used is caddy, and the port exposed on the container is 80.
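To create the ReplicationController from this file (assuming it is saved as replication.yml as shown above), kubectl is used; the output should look roughly like this:

$ kubectl create -f replication.yml
replicationcontroller "caddy" created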
Give it a few seconds to pull the image and create the containers, then check the status:
$ kubectl describe replicationcontrollers/caddy
---
Pods Status: 4 Running / 0 Waiting / 0 Succeeded / 0 Failed
If you would like to list all the pods that belong to the ReplicationController in a machine-readable form, you can capture their names with a label selector and JSONPath output:

$ pods=$(kubectl get pods --selector=app=caddy --output=jsonpath={.items..metadata.name})
$ echo $pods
Rescheduling
Scaling
You can easily scale the number of replicas up and down, either through an auto-scaling control agent or manually. The only change required is the number of replicas. Please note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example a DaemonSet.
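For example, the caddy ReplicationController defined earlier can be scaled manually with kubectl scale:

$ kubectl scale rc caddy --replicas=6
$ kubectl get rc caddy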
Rolling updates
In our Lab, we'll set up one Kubernetes master and two Kubernetes nodes. This Lab is done on VirtualBox, and the three virtual machines will be created using Vagrant. Vagrant is a software application available on Windows, Linux and Mac which allows you to easily build and maintain portable virtual software development environments.
Prerequisites:
1. Install VirtualBox
2. Install Vagrant
3. Spin up three VMs
Install VirtualBox:
# echo deb http://download.virtualbox.org/virtualbox/debian yakkety contrib > /etc/apt/sources.list.d/virtualbox.list
# wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
# wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install virtualbox
Install Vagrant
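Vagrant can be installed from the Ubuntu repositories (or from the package on vagrantup.com); a minimal route, assuming the distribution package is recent enough, is:

$ sudo apt-get install vagrant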
$ mkdir kubernetes_lab
$ cd kubernetes_lab
$ vim Vagrantfile
If you don't have an SSH key, you can generate one using the command below:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jmutai/.ssh/id_rsa): id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_rsa.
Your public key has been saved in id_rsa.pub.
The key fingerprint is:
SHA256:8F2ObfrwvIa4/rn3oCHjnx5FgEsxVH/MJP1pf17mgt4 jmutai@
dev.jmtai.com
The key's randomart image is:
+---[RSA 2048]----+
| .++o ... |
| o. o =. |
| .. . + +..|
| o.. * . o.|
| S o = . .|
| + =|
| o.=... +o|
| ..o.@+o. o|
| .+=O=*oE. |
+----[SHA256]-----+
Now that you have the SSH keys we'll use to ssh into the VMs, it is time to write the Vagrantfile used to automatically bring the three VMs up. A Vagrantfile uses Ruby syntax to define its parameters. Below are the sample Vagrantfile contents used for this Lab.
# Configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  config.vm.define "kubernetes-master" do |web|
    web.vm.network "public_network", ip: "192.168.60.2"
    web.vm.hostname = "kubernetes-master"
  end

  config.vm.define "kubernetes-node-01" do |web|
    web.vm.network "public_network", ip: "192.168.60.3"
    web.vm.hostname = "kubernetes-node-01"
  end

  config.vm.define "kubernetes-node-02" do |web|
    web.vm.network "public_network", ip: "192.168.60.4"
    web.vm.hostname = "kubernetes-node-02"
  end
end
Once you have the file saved as Vagrantfile, create the virtual machines from it. Note that you need to be in the same directory as the Vagrantfile before running the command shown below:
$ vagrant up
Bringing machine 'kubernetes-master' up with 'virtualbox' provider...
Bringing machine 'kubernetes-node-01' up with 'virtualbox'
provider...
Bringing machine 'kubernetes-node-02' up with 'virtualbox'
provider...
==> kubernetes-master: Importing base box 'ubuntu/xenial64'...
==> kubernetes-master: Matching MAC address for NAT networking...
==> kubernetes-master: Checking if box 'ubuntu/xenial64' is up to
date...
==> kubernetes-master: Setting the name of the VM: kubernetes_lab_
kubernetes-master_1509819157682_272
==> kubernetes-master: Clearing any previously set network
interfaces...
==> kubernetes-master: Preparing network interfaces based on
configuration...
kubernetes-master: Adapter 1: nat
kubernetes-master: Adapter 2: bridged
==> kubernetes-master: Forwarding ports...
kubernetes-master: 22 (guest) => 2222 (host) (adapter 1)
==> kubernetes-master: Running 'pre-boot' VM customizations...
==> kubernetes-master: Booting VM...
==> kubernetes-master: Waiting for machine to boot. This may take a
few minutes...
kubernetes-master: SSH address: 127.0.0.1:2222
kubernetes-master: SSH username: ubuntu
kubernetes-master: SSH auth method: password
kubernetes-master:
kubernetes-master: Inserting generated public key within guest...
kubernetes-master: Removing insecure key from the guest if it's
present...
kubernetes-master: Key inserted! Disconnecting and reconnecting
using new SSH key...
==> kubernetes-master: Machine booted and ready!
==> kubernetes-master: Checking for guest additions in VM...
kubernetes-master: The guest additions on this VM do not match
the installed version of
kubernetes-master: VirtualBox! In most cases this is fine, but in
rare cases it can
kubernetes-master: prevent things such as shared folders from
working properly. If you see
kubernetes-master: shared folder errors, please make sure the
guest additions within the
kubernetes-master: virtual machine match the version of
VirtualBox you have installed on
kubernetes-master: your host and reload your VM.
kubernetes-master:
kubernetes-master: Guest Additions Version: 5.0.40
kubernetes-master: VirtualBox Version: 5.2
==> kubernetes-master: Setting hostname...
==> kubernetes-master: Configuring and enabling network
interfaces...
==> kubernetes-master: Mounting shared folders...
kubernetes-master: /vagrant => /home/jmutai/kubernetes_lab
==> kubernetes-node-01: Importing base box 'ubuntu/xenial64'...
==> kubernetes-node-01: Matching MAC address for NAT
networking...
==> kubernetes-node-01: Checking if box 'ubuntu/xenial64' is up to
date...
==> kubernetes-node-01: Setting the name of the VM: kubernetes_
lab_kubernetes-node-01_1509819210676_63689
==> kubernetes-node-01: Fixed port collision for 22 => 2222. Now on
port 2200.
==> kubernetes-node-01: Clearing any previously set network
interfaces...
==> kubernetes-node-01: Preparing network interfaces based on
configuration...
kubernetes-node-01: Adapter 1: nat
kubernetes-node-01: Adapter 2: bridged
==> kubernetes-node-01: Forwarding ports...
kubernetes-node-01: 22 (guest) => 2200 (host) (adapter 1)
==> kubernetes-node-01: Running 'pre-boot' VM customizations...
==> kubernetes-node-01: Booting VM...
==> kubernetes-node-01: Waiting for machine to boot. This may take
a few minutes...
kubernetes-node-01: SSH address: 127.0.0.1:2200
kubernetes-node-01: SSH username: ubuntu
kubernetes-node-01: SSH auth method: password
kubernetes-node-01: Warning: Connection reset. Retrying...
kubernetes-node-01: Warning: Authentication failure. Retrying...
kubernetes-node-01:
kubernetes-node-01: Inserting generated public key within guest...
kubernetes-node-01: Removing insecure key from the guest if it's
present...
kubernetes-node-01: Key inserted! Disconnecting and reconnecting
using new SSH key...
==> kubernetes-node-01: Machine booted and ready!
==> kubernetes-node-01: Checking for guest additions in VM...
kubernetes-node-01: The guest additions on this VM do not match
the installed version of
kubernetes-node-01: VirtualBox! In most cases this is fine, but in
rare cases it can
kubernetes-node-01: prevent things such as shared folders from
working properly. If you see
kubernetes-node-01: shared folder errors, please make sure the
guest additions within the
kubernetes-node-01: virtual machine match the version of
VirtualBox you have installed on
kubernetes-node-01: your host and reload your VM.
kubernetes-node-01:
kubernetes-node-01: Guest Additions Version: 5.0.40
kubernetes-node-01: VirtualBox Version: 5.2
==> kubernetes-node-01: Setting hostname...
==> kubernetes-node-01: Configuring and enabling network
interfaces...
==> kubernetes-node-01: Mounting shared folders...
kubernetes-node-01: /vagrant => /home/jmutai/kubernetes_lab
==> kubernetes-node-02: Importing base box 'ubuntu/xenial64'...
==> kubernetes-node-02: Matching MAC address for NAT
networking...
==> kubernetes-node-02: Checking if box 'ubuntu/xenial64' is up to
date...
==> kubernetes-node-02: Setting the name of the VM: kubernetes_
lab_kubernetes-node-02_1509819267475_56994
==> kubernetes-node-02: Fixed port collision for 22 => 2222. Now on
port 2201.
==> kubernetes-node-02: Clearing any previously set network
interfaces...
==> kubernetes-node-02: Preparing network interfaces based on
configuration...
kubernetes-node-02: Adapter 1: nat
kubernetes-node-02: Adapter 2: bridged
==> kubernetes-node-02: Forwarding ports...
kubernetes-node-02: 22 (guest) => 2201 (host) (adapter 1)
==> kubernetes-node-02: Running 'pre-boot' VM customizations...
==> kubernetes-node-02: Booting VM...
==> kubernetes-node-02: Waiting for machine to boot. This may take
a few minutes...
kubernetes-node-02: SSH address: 127.0.0.1:2201
kubernetes-node-02: SSH username: ubuntu
kubernetes-node-02: SSH auth method: password
kubernetes-node-02:
kubernetes-node-02: Inserting generated public key within guest...
kubernetes-node-02: Removing insecure key from the guest if it's
present...
kubernetes-node-02: Key inserted! Disconnecting and reconnecting
using new SSH key...
==> kubernetes-node-02: Machine booted and ready!
==> kubernetes-node-02: Checking for guest additions in VM...
kubernetes-node-02: The guest additions on this VM do not match
the installed version of
kubernetes-node-02: VirtualBox! In most cases this is fine, but in
rare cases it can
kubernetes-node-02: prevent things such as shared folders from
working properly. If you see
kubernetes-node-02: shared folder errors, please make sure the
guest additions within the
kubernetes-node-02: virtual machine match the version of
VirtualBox you have installed on
kubernetes-node-02: your host and reload your VM.
kubernetes-node-02:
kubernetes-node-02: Guest Additions Version: 5.0.40
kubernetes-node-02: VirtualBox Version: 5.2
==> kubernetes-node-02: Setting hostname...
==> kubernetes-node-02: Configuring and enabling network
interfaces...
==> kubernetes-node-02: Mounting shared folders...
kubernetes-node-02: /vagrant => /home/jmutai/kubernetes_lab
The command above will download the Ubuntu Xenial Vagrant image and create three virtual machines with the specified names - kubernetes-master, kubernetes-node-01 and kubernetes-node-02. All these VMs will be on the same subnet, 192.168.60.0/24.
$ vagrant status
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
Now ssh to the Kubernetes master node and update apt cache, then do
system upgrade:
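For example, on the master (the VM name follows the Vagrantfile above):

$ vagrant ssh kubernetes-master
$ sudo apt-get update
$ sudo apt-get upgrade -y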
Do the same on both Kubernetes nodes - perform a system update and upgrade.
Now that the Kubernetes master node is ready, let's proceed to install Docker Engine (Community Edition) on this VM.
2. Install packages to allow apt to use a repository over HTTPS:
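A typical set of packages for this step, as in the standard Docker CE installation procedure, is:

$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common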
If you would like to use Docker as a non-root user, you should now consider adding your user to the "docker" group with something like:

$ sudo usermod -aG docker <your-user>
$ docker version
Client:
Version: 17.10.0-ce
API version: 1.33
Go version: go1.8.3
Git commit: f4ffd25
Built: Tue Oct 17 19:04:16 2017
OS/Arch: linux/amd64
Server:
Version: 17.10.0-ce
API version: 1.33 (minimum version 1.12)
Go version: go1.8.3
Git commit: f4ffd25
Built: Tue Oct 17 19:02:56 2017
OS/Arch: linux/amd64
Experimental: false
Installing Kubernetes on Ubuntu
Install dependencies:
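At the time this guide targets, the packages described below (kubelet, kubeadm, kubectl and kubernetes-cni) were typically installed from the Google apt repository along these lines; treat the repository details as the era's defaults rather than current instructions:

$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni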
A short description of the installed packages:
kubernetes-cni: This enables CNI networking on your machine. CNI stands for Container Network Interface, a spec that defines how network drivers should interact with Kubernetes.
Once you have all the master components installed, the next step is to initialize the cluster on the master node. The master is the machine where the "control plane" components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with). All of these components run in pods started by the kubelet.
kubeadm init
This will download and install the cluster database and “control plane”
components. This may take several minutes depending on your internet
connection speed. The output from this command will give you the exact
command you need to join the nodes to the master, take note of this
command:
...
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following
on each node as root:

  kubeadm join --token <token> <master-ip>:<master-port>
On the two Kubernetes nodes, run the following command to join them to
the cluster:
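The exact command, including the generated token, is printed by kubeadm init. Assuming the master's bridged IP from the Vagrantfile and the default API server port, it has this general shape (the token stays a placeholder here):

$ sudo kubeadm join --token <token> 192.168.60.2:6443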
Installing etcd 2.0 on Ubuntu
etcd 2.0 is a distributed, reliable key-value store for the most critical data of a distributed system. It focuses mainly on being secure, simple, fast and reliable. It is written in the Go programming language and uses the Raft consensus algorithm to manage a highly-available replicated log.
There are two ways to install etcd on Ubuntu: building it from source, or using a pre-built binary available for download. If you're interested in getting the latest release, consider building it from source.
To build it from source, you'll need git and golang installed, then run:
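A sketch of the source build, assuming the upstream repository layout of that time (which shipped a build script):

$ git clone https://github.com/coreos/etcd.git
$ cd etcd
$ ./build
$ ./bin/etcd --version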
Installing Addons
Here I'll show a number of plugins that you can install to extend Kubernetes functionality. This list is not exhaustive; feel free to add whatever you feel might help.
Deploy CoreDNS
CoreDNS is a flexible and extensible DNS server written in Go which you can use in a Kubernetes setup to manage Pod DNS records. It chains plugins which handle DNS functions such as Kubernetes service discovery, Prometheus metrics or query rewriting. CoreDNS can integrate with Kubernetes via the kubernetes plugin, or directly with etcd via the etcd plugin; in most setups the cluster zone is served through one of these.
2. Verify Installation
# go version
go version go1.6.2 linux/amd64
mkdir ~/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$PATH
go get github.com/coredns/coredns
Another way to install CoreDNS is from a binary of the latest release. To check the latest release, visit https://github.com/coredns/coredns/releases
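A sketch of the binary install, assuming the 0.9.10 Linux amd64 tarball name used on the releases page (check the actual asset name on the page above):

$ wget https://github.com/coredns/coredns/releases/download/v0.9.10/coredns_0.9.10_linux_amd64.tgz
$ tar -xzvf coredns_0.9.10_linux_amd64.tgz
$ sudo cp coredns /usr/local/bin/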
Test that the binary is copied and working by checking coredns version:
# /usr/local/bin/coredns -version
CoreDNS-0.9.10
linux/amd64, go1.9.1, d272e525
For more information about CoreDNS, visit the official project page on GitHub: https://github.com/coredns/coredns
Before you can start using the Kubernetes Dashboard, you'll need to start the proxy server using the kubectl command. This sets up a local URL which you can use to access the dashboard. Run the command below to get the proxy running:
$ kubectl proxy
Now that you have a ready environment, let's look at how to download Docker images for use with Kubernetes. A Docker image is an inert, immutable file that is essentially a snapshot of a container; it contains OS utilities and the basic tools required to run an application. In this example, I'll show you how to download the busybox Docker image:
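The image can be pulled directly with the Docker CLI:

# docker pull busybox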
Confirm that the image has been downloaded:
# docker images
REPOSITORY                 TAG      IMAGE ID       CREATED        SIZE
ubuntu                     xenial   dd6f76d9cc90   12 days ago    122MB
busybox                    latest   6ad733544a63   12 days ago    1.13MB
passenger-ruby24           latest   c3f873600e95   5 months ago   640MB
phusion/passenger-ruby24   latest   c3f873600e95   5 months ago   640MB
As you can see above, the image has been downloaded successfully. We’ll
use this image in the next section.
Testing:
Now that we have ready Kubernetes cluster, let’s create a simple pod on
this cluster. As an example, consider below simple Pod template manifest
for a Pod with a container to print a message. Pod configuration file is
defined using YAML syntax:
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testapp-pod
  labels:
    app: testapp
spec:
  containers:
  - name: testapp-container
    image: busybox
    command: ['sh', '-c', 'echo This is Kubernetes!! && sleep 3600']
Options used: apiVersion and kind identify the object type, metadata sets the Pod name and labels, and spec.containers defines the container name, the image and the command to run.
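To create the Pod from this manifest and check that it is running (the file and pod names match the example above):

$ kubectl create -f pod.yaml
$ kubectl get pods
$ kubectl logs testapp-pod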
Setting up Kubernetes Cluster:
There are many solutions available for setting up a Kubernetes cluster in different environments. To get started with Kubernetes, Minikube is one of the most preferred options. On Linux, Minikube requires a virtual machine driver. The following are the steps to install Minikube on Linux:
Install a Hypervisor:
Download VirtualBox:
$ sudo dpkg -i virtualbox-5.2_5.2.0-118431-Ubuntu-xenial_amd64.deb
Install Kubectl:
Kubectl is the command line utility which interacts with the API server of Kubernetes.
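At the time this guide targets, the documented way to fetch the latest stable kubectl binary on Linux was roughly:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl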
Make binary executable:
$ chmod +x ./kubectl
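Then place it somewhere on your PATH (assuming /usr/local/bin):

$ sudo mv ./kubectl /usr/local/bin/kubectl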
$ kubectl
kubectl controls the Kubernetes cluster manager.
Install Minikube:
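One common way to install Minikube at the time was to download the static binary; the URL below is the upstream default of that era and may need adjusting:

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/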
Verify Installation:
$ minikube
Minikube is a CLI tool that provisions and manages single-node
Kubernetes clusters optimized for development workflows.

Usage:
  minikube [command]

Available Commands:
  addons            Modify minikube's kubernetes addons
  completion        Outputs minikube shell completion for the given shell (bash)
  config            Modify minikube config
  dashboard         Opens/displays the kubernetes dashboard URL for your local cluster
  delete            Deletes a local kubernetes cluster
  docker-env        Sets up docker env variables; similar to '$(docker-machine env)'
  get-k8s-versions  Gets the list of available kubernetes versions available for minikube
  ip                Retrieves the IP address of the running cluster
$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ kubectl config get-clusters
It will list the clusters. You should get a result like:
$ kubectl config get-clusters
NAME
minikube
Dockerizing the App:
Here we will use a sample Node.js application. The structure of the basic app will be:
app/
-- server.js
-- package.json
server.js:
var express = require('express');
var app = express();
app.get('/', function (req, res) { res.send('Hello World!'); });
app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
Package.json:
{
"name": "hello-world",
"version": "1.0.0",
"description": "Hello world app taken from: https://expressjs.com/en/
starter/hello-world.html",
"main": "server.js",
"scripts": {
"test": "",
"start": ""
},
"repository": {
"type": "git",
"url": "git+https://github.com/borderguru/hello-world.git"
},
"author": "",
"license": "ISC",
"bugs": {
"url": "https://github.com/borderguru/hello-world/issues"
},
"homepage": "https://github.com/borderguru/hello-world#readme",
"dependencies": {
"chai": "^4.1.2",
"express": "^4.15.3",
"mocha": "^4.0.1",
"request": "^2.83.0"
}
}
To Dockerize the app, we need to create the Dockerfile:
Dockerfile:
RUN npm install
The Dockerfile consists of a set of commands, which include creating a work directory into which the necessary files are copied and installing the dependencies using npm. Note that the base image node:boron is an official image from Node.js and ships a stable npm version. After copying all the files and installing the dependencies, port 3000 of the container is exposed and the start command runs npm start.
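Reconstructing from that description, a minimal Dockerfile for this app could look like the following; this is a sketch, not necessarily the exact file used by the author:

FROM node:boron
# Create the working directory
WORKDIR /usr/src/app
# Install dependencies first to take advantage of layer caching
COPY package.json .
RUN npm install
# Copy the application source
COPY . .
# The app listens on port 3000
EXPOSE 3000
# Per the description above, start the app with npm
CMD ["npm", "start"]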
This is a sample Dockerfile, but it is not limited to just the given commands. For more in-depth information, please check the official Docker guide for writing a Dockerfile.
To create the image from the Dockerfile, the docker CLI is used. Now the app structure is:
app/
--- server.js
--- package.json
--- Dockerfile
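From the app/ directory, the image is then built and tagged:

$ sudo docker build -t helloworld:1.0 .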
This builds the image and tags it as helloworld:1.0, where 1.0 is the version of the image; if nothing is specified, the latest tag is used. The build downloads all the dependencies needed to run the app. After a successful build, check the images.
$ sudo docker images
REPOSITORY   TAG     IMAGE ID       CREATED              SIZE
helloworld   1.0     c812ebca7a95   About a minute ago   678 MB
node         boron   c0cea7b613ca   11 days ago          661 MB
To run the container, the docker run command is used. It's necessary to bind a host port to the container port.
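For example, mapping host port 3000 to container port 3000 (the image tag matches the build above):

$ sudo docker run -d -p 3000:3000 helloworld:1.0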
$ curl localhost:3000
Hello World!
$ sudo docker images
REPOSITORY   TAG     IMAGE ID       CREATED       SIZE
helloworld   1.0     c812ebca7a95   3 hours ago   678 MB
node         boron   c0cea7b613ca   11 days ago   661 MB
$ sudo docker tag helloworld:1.0 helloworld:latest
$ sudo docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
helloworld   1.0      c812ebca7a95   3 hours ago   678 MB
helloworld   latest   c812ebca7a95   3 hours ago   678 MB
node         boron    c0cea7b613ca   11 days ago   661 MB
Tagging the image as latest indicates that this is the most recent version of the image. Docker Hub is the central registry for storing Docker images; however, there are many other registries available, such as JFrog Artifactory, Quay and Amazon ECR.
Log in to Docker Hub:
(Notice: if you don't know Docker Hub, please visit https://hub.docker.com and create an account.)

$ sudo docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username (kubejack): kubejack
Password:
Login Succeeded
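After logging in, the image can be tagged with your Docker Hub namespace and pushed; the kubejack namespace below matches the login above, so substitute your own:

$ sudo docker tag helloworld:latest kubejack/helloworld:latest
$ sudo docker push kubejack/helloworld:latest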
For now, the sample app is containerized, but to run it on Kubernetes we need Kubernetes objects.
DaemonSets: DaemonSets are used for running a pod on each Kubernetes node. They are usually used for monitoring and logging.
mkdir kubernetes
Create deployment.yml and service.yml files
cd kubernetes
touch deployment.yml
touch service.yml
app/
--- server.js
--- package.json
--- Dockerfile
kubernetes/
--- deployment.yml
--- service.yml
Kubernetes manifests are plain yaml files which will define the desired
state of the cluster.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
    ver: v1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
    spec:
      containers:
      - name: hello-world
        image: kubejack/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
Let's understand the deployments.
Deployment Object:
deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
    ver: v1
Labels are important for service discovery. They are attached to Kubernetes objects, and operations against the API server can select objects by their labels.
Replica Set Object Specification:
deployment.yml
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
Pod Object Specification:
deployment.yml
spec:
  containers:
  - name: hello-world
    image: kubejack/helloworld:latest
    imagePullPolicy: Always
    ports:
    - containerPort: 3000
The Pod specification consists of the container specification: it gives the image name, the container name, the container ports to be exposed and the pull policy for the image. imagePullPolicy indicates when the image should be pulled and can be set to Always to make sure an updated image is pulled each time.
service.yml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
  labels:
    name: hello-world-svc
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: hello-world
    ver: v1
In the spec it is possible to expose a port for the service; the exposed port is mapped to the container port. Here targetPort is the port of the container, which is mapped to service port 80. TCP and UDP protocols are supported. This service is not exposed publicly by default; to expose it through a cloud provider, type: LoadBalancer is used.
The most important part is the selector. It selects the group of Pods whose labels match all of the labels listed in the selector. A Pod may carry more labels than the selector specifies and still match, but if a Pod is missing any label listed in the selector, the Service simply ignores it.
Labels and selectors are a good way to maintain versions and to roll updates out and back.
Understanding the Kubectl Utility
Everything can be done through the API server alone. There are different options for calling it: the REST APIs directly, the Kubernetes user interface (the Dashboard), and kubectl.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/ut/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/ut/.minikube/client.crt
    client-key: /home/ut/.minikube/client.key
Kubectl is authenticated to interact with the Kubernetes cluster and can manage everything in it. For example, kubectl can:
Create a Kubernetes object:
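For example, the manifests written earlier can be created from the kubernetes/ directory:

$ kubectl create -f kubernetes/deployment.yml
$ kubectl create -f kubernetes/service.yml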
$ kubectl get rs
Delete the Kubernetes Object
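For example, using the same manifest files:

$ kubectl delete -f kubernetes/service.yml
$ kubectl delete -f kubernetes/deployment.yml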
These are just a few examples of the kubectl utility; there are more advanced use cases, such as scaling with kubectl. All of these are imperative methods.
Launching and Running Container pods with Kubernetes
To create Kubernetes objects, the kubectl CLI is used to communicate with the API server.
Get the replica set created by the deployment:

$ kubectl get rs
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-world-493621601   10        10        10           10          1h

493621601 is a hash value appended to the deployment name for its replica set.
Get pods:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
hello-world-493621601 1/1 Running 0 1h
hello-world-493621601-2dq3c 1/1 Running 0 1h
hello-world-493621601-8jnjw 1/1 Running 0 1h
hello-world-493621601-g5sfl 1/1 Running 0 1h
hello-world-493621601-j3jjt 1/1 Running 0 1h
hello-world-493621601-rb7gr 1/1 Running 0 1h
hello-world-493621601-s5qp9 1/1 Running 0 1h
hello-world-493621601-v086d 1/1 Running 0 1h
hello-world-493621601-xkhhs 1/1 Running 0 1h
hello-world-493621601-ztp32 1/1 Running 0 1h
Again, a further hash value is appended for each pod of the replica set. The status shows that all pods are in the Running state, which means the desired state of the cluster is met.
Get the service object and describe it:
Describe service:
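The commands below do this for the service created earlier (the name matches service.yml); the describe output follows:

$ kubectl get svc hello-world-svc
$ kubectl describe svc hello-world-svc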
configuration={"apiVersion":"v1","kind":"Service","metadata":{"anno
tations":{},"labels":{"name":"hello-world-svc"},"name":"hello-world-
svc","namespace":"default"},"s...
Selector: app=hello-world,ver=v1
Type: LoadBalancer
IP: 10.0.0.131
Port: <unset> 80/TCP
NodePort: <unset> 30912/TCP
Endpoints: 172.17.0.10:3000,172.17.0.12:3000,172.17.0.2:300
0+7
more...
Session Affinity: None
Events: <none>
Endpoints are the unique IPs of the Pods; the Service groups all the Pods behind these IPs and plays the role of a proxy. To access the app, it is necessary to get the IP and port from Minikube.
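With Minikube this can be done with the service sub-command, which prints a reachable URL for the NodePort; the IP and port below reuse the Minikube address and NodePort seen above, and the output will differ on your machine:

$ minikube service hello-world-svc --url
http://192.168.99.100:30912
$ curl http://192.168.99.100:30912
Hello World!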
Understanding the Kubernetes App Flow:
The API server communicates with the kubelet on each node. If the connection is broken, that node is treated as unhealthy. This helps maintain the desired state of the cluster through load balancing and autoscaling.
Kubernetes Auto Scaling:
It is possible to scale the replicas of a Kubernetes Deployment object up and down. To scale up to 20 replicas, change the replica count in the manifest.
Scaling up:
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
    ver: v1
spec:
  replicas: 20
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
    spec:
      containers:
      - name: hello-world
        image: kubejack/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
Update the deployment with:
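$ kubectl apply -f kubernetes/deployment.yml
$ kubectl get po

(The file path assumes the kubernetes/ directory used earlier.) The additional pods then appear alongside the existing ones: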
hello-world-493621601-pjbdn   1/1   Running   0   38s
hello-world-493621601-q6xlg   1/1   Running   0   38s
hello-world-493621601-rb7gr   1/1   Running   0   2h
hello-world-493621601-s5qp9   1/1   Running   0   2h
hello-world-493621601-shl7v   1/1   Running   0   38s
hello-world-493621601-v086d   1/1   Running   0   2h
hello-world-493621601-xkhhs   1/1   Running   0   2h
hello-world-493621601-ztp32   1/1   Running   0   2h
To scale down, reduce the replica count; here it is set back to 5:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
    ver: v1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
    spec:
      containers:
      - name: hello-world
        image: kubejack/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
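Applying the reduced replica count causes the extra pods to be terminated:

$ kubectl apply -f kubernetes/deployment.yml
$ kubectl get po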
hello-world-493621601-g5sfl   1/1   Terminating   0   2h
hello-world-493621601-j3rrj   1/1   Terminating   0   2h
hello-world-493621601-kgj7z   1/1   Terminating   0   2m
hello-world-493621601-lqvk4   1/1   Terminating   0   2m
hello-world-493621601-mrktj   1/1   Terminating   0   2m
hello-world-493621601-nfd5d   1/1   Terminating   0   2m
hello-world-493621601-pjbdn   1/1   Terminating   0   2m
hello-world-493621601-q6xlg   1/1   Terminating   0   2m
hello-world-493621601-rb7gr   1/1   Running       0   2h
hello-world-493621601-s5qp9   1/1   Running       0   2h
hello-world-493621601-shl7v   1/1   Terminating   0   2m
hello-world-493621601-v086d   1/1   Terminating   0   2h
hello-world-493621601-xkhhs   1/1   Terminating   0   2h
hello-world-493621601-ztp32   1/1   Running       0   2h
Destroying Kubernetes Cluster and Pods:
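Before tearing down the VM, you can optionally remove the objects created earlier using the same manifests:

$ kubectl delete -f kubernetes/service.yml
$ kubectl delete -f kubernetes/deployment.yml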
$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.

$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
6. Deploying Kubernetes with Ansible
Ansible is a configuration management tool used for deploying different types of infrastructure, and it is possible to deploy Kubernetes with it. Kubespray is a tool that deploys a Kubernetes cluster using Ansible.
Execute the commands below to install the latest Ansible and its dependencies on Debian-based distributions.
$ easy_install pip
$ pip2 install jinja2 --upgrade
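Ansible itself can then be installed via pip as well (one common route at the time; the distribution package also works):

$ pip2 install ansible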
Your machine's SSH key must be copied to all the servers that are part of your inventory. The firewalls should not be managed by Kubespray, and the target servers must have access to the Internet. To set up SSH authentication between machines, on the source machine:
Step 1: Generate Keys
$ ssh-keygen -t rsa
Note that your key pair is the id_rsa and id_rsa.pub files in the directories shown. id_rsa is the private key, which resides on the source machine; id_rsa.pub is the public key, which resides on the destination machine. When an SSH attempt is made from source to destination, the protocol checks both keys; if they match, the connection is established without asking for a password.
Step 2: Copy Keys
If you are copying from the source machine over ssh, then use the command below:
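For example, using ssh-copy-id (the user and address below match the test connection shown later; substitute your own node):

$ ssh-copy-id [email protected]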
Repeat the copying steps for each Node of the Kubernetes Cluster.
Step 3: Test the Connection
$ ssh [email protected]
Last login: Tue Oct 6 21:59:00 2015 from 10.10.4.11
[user4@server2 ~]$
You can also use Kubespray without its CLI by directly cloning its git repository; here we will use the CLI. Execute the step below to install kubespray.
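The kubespray CLI used below was distributed as a Python package at the time; installing it via pip looked roughly like this (an assumption based on the commands shown; current Kubespray is used by cloning the git repository instead):

$ sudo pip2 install kubespray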
$ kubespray -v
$ vi ~/.kubespray/inventory/inventory.cfg
[kube-master]
machine-01
machine-02
[etcd]
machine-01
machine-02
machine-03
[kube-node]
machine-02
machine-03
[k8s-cluster:children]
kube-node
kube-master
Here the cluster has 3 nodes spread across the kube-master, etcd and kube-node groups. Let's start the cluster deployment.
Inventory is a term specific to Ansible. Ansible works with the different systems in your infrastructure and communicates with them over SSH using a configuration file called an inventory. The inventory is a configuration file (here with a .cfg extension) which lists information about the systems; the information about the Kubernetes nodes shown above is stored inside it.
Kubernetes Cluster Deployment Using Kubespray
Before starting the actual deployment, let's see what goes on behind the scenes and how the painful manual installation is executed smoothly. Kubespray installs kube-apiserver, etcd (the key-value store), the controller manager and the scheduler on the master machines, and kubelet, kube-proxy and Docker (or rkt) on the node machines. All these components are installed and configured by Ansible roles in Kubespray; all we need to do is execute one command.
$ kubespray deploy
Based on the number of masters and minions, it will take some time to deploy the complete cluster. At the end of the run you will get output something like that shown below; if there are no failed tasks, your deployment was successful. To check that everything went well, you can log in to the master node and list the system pods running across the nodes.
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   dnsmasq-7yk3n                        1/1     Running   0          5m
kube-system   dnsmasq-5vfh0j                       1/1     Running   0          5m
kube-system   flannel-machine-02                   2/2     Running   0          4m
kube-system   flannel-machine-03                   2/2     Running   0          4m
kube-system   kube-apiserver-machine-01            1/1     Running   0          5m
kube-system   kube-controller-manager-machine-01   1/1     Running   0          5m
kube-system   kube-proxy-machine-02                1/1     Running   0          4m
kube-system   kube-proxy-machine-03                1/1     Running   0          4m
kube-system   kube-scheduler-machine-02            1/1     Running   0          5m
kube-system   kubedns-p8mk7                        3/3     Running   0          4m
kube-system   nginx-proxy-machine-02               1/1     Running   0          2m
kube-system   nginx-proxy-machine-03               1/1     Running   0          2m
7. Provisioning Storage in Kubernetes
Storage plays the important role of persisting data. Kubernetes pods are ephemeral, and the main point of having persistent storage is to maintain the state of the application. The most common method is to attach persistent storage to the containers: the state of a database, for example, is maintained on the persistent storage. Kubernetes provides the PersistentVolume object for this.
Kubernetes Persistent Volumes
1. PersistentVolume (PV)
2. PersistentVolumeClaim (PVC)
1. Provisioning:
There are two types of PersistentVolume provisioning: static and dynamic. In the static method, the administrator creates the PVs up front. If no existing PV matches a user's PersistentVolumeClaim (PVC), the dynamic method is used: the cluster tries to generate a PV dynamically for the PVC.
2. Binding:
A control loop on the master watches for PVCs and binds a matching PV to each PVC. If no PV matching the PVC is found, the PVC remains unbound.
3. Using:
Pods use the claim as a volume. Once a PV has been bound to the required PVC, the cluster inspects the claim to find the bound volume and mounts that volume into the pod.
4. Reclaiming:
The reclaim policy decides what happens to a PersistentVolume once it has been released. Currently, volumes can be Retained, Recycled or Deleted.
With Recycle, the volume becomes available again for a new claim; recycling performs a basic scrub on the volume if the volume plugin supports it.
With Delete, the PersistentVolume object is removed from the Kubernetes cluster, and the associated external infrastructure volume (such as an AWS EBS or GCE volume) is deleted as well.
Requesting Storage
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}
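Saving this as pvc.yml (a hypothetical file name), the claim is created and inspected with kubectl:

$ kubectl create -f pvc.yml
$ kubectl get pvc myclaim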
The selector can consist of two fields: matchLabels and matchExpressions.
A PersistentVolume can be mounted on a host. Access modes play an important role for PVs: it is possible to handle a PersistentVolume with different access modes.
The access modes are:
• ReadWriteOnce – the volume can be mounted as read-write by a single
node
• ReadOnlyMany – the volume can be mounted read-only by many
nodes
• ReadWriteMany – the volume can be mounted as read-write by many
nodes
In the CLI, the access modes are abbreviated to:
• RWO - ReadWriteOnce
• ROX - ReadOnlyMany
• RWX - ReadWriteMany
Pods are ephemeral and require storage, so a Pod uses a claim as a volume. The claim must be in the same namespace as the Pod. The PersistentVolume backing the claim is used for the Pod: the volume is mounted on the host and then into the Pod.
kind: Pod
apiVersion: v1
metadata:
  name: production-pv
spec:
  containers:
  - name: frontend
    image: dockerfile/nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: pv
  volumes:
  - name: pv
    persistentVolumeClaim:
      claimName: storage-pv
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
  - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.244.1.4
    path: "/"
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
Kubernetes and iSCSI
An existing iSCSI (SCSI over IP) volume can be mounted into a pod using an iscsi volume. When the pod dies, the data is preserved and the volume is merely unmounted. A feature of iSCSI is that the volume can be mounted read-only by multiple consumers simultaneously; iSCSI does not allow simultaneous writers.
---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-rw
  volumes:
  - name: iscsipd-rw
    iscsi:
      targetPortal: 10.0.2.15:3260
      portals: ['10.0.2.16:3260', '10.0.2.17:3260']
      iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
      lun: 0
      fsType: ext4
      readOnly: true
8. Troubleshooting Kubernetes and Systemd
Services
The kubectl get pods command gives the status of each pod. Mentioning the namespace is always best practice; if you don't mention a namespace, the pods from the default namespace are listed.
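For example:

$ kubectl get pods --namespace kube-system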
$ kubectl describe pods kube-dns-910330662-dp7xt --namespace
kube-system
Name: kube-dns-910330662-dp7xt
Namespace: kube-system
Node: minikube/192.168.99.100
Start Time: Sun, 12 Nov 2017 17:15:19 +0530
Labels: k8s-app=kube-dns
pod-template-hash=910330662
Annotations: kubernetes.io/created-by={“kind”:”SerializedReferenc
e”,”apiVersion”:”v1”,”reference”:{“kind”:”ReplicaSet”,”namespace”:”ku
be-system”,”name”:”kube-dns-910330662”,”uid”:”f5509e26-c79e-11e7-
a6a9-080027646...
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 172.17.0.3
Created By: ReplicaSet/kube-dns-910330662
Controlled By: ReplicaSet/kube-dns-910330662
Containers
kubedns:
Container ID:
docker://76de2b5156ec2a4e2a7f385d700eb96c91874137495b9d715ddf-
5cd4c819452b
Image: gcr.io/google_containers/k8s-dns-kube-dns-
amd64:1.14.4
Image ID: docker://sha256:a8e00546bcf3fc9ae1f33302c16a6d4c-
717d0a47a444581b5bcabc4757bcd79c
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-map=kube-dns
--v=2
State: Running
Started: Sun, 12 Nov 2017 17:15:20 +0530
Ready: True
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:10054/healthcheck/kubedns delay=60s
timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s
period=10s #success=1 #failure=3
Environment:
PROMETHEUS_PORT: 10055
Mounts:
/kube-dns-config from kube-dns-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-
vdlgs (ro)
dnsmasq:
Container ID:
docker://8dbf3c31a8074e566875e04b66055b1a96dcb4f192acb-
c1a8a083e789bf39a79
Image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-
amd64:1.14.4
Image ID:
docker://sha256:f7f45b9cb733af946532240cf7e6cde1278b687cd7094cf-
043b768c800cfdafd
Ports: 53/UDP, 53/TCP
Args:
-v=2
-logtostderr
-configDir=/etc/k8s/dns/dnsmasq-nanny
-restartDnsmasq=true
--
-k
--cache-size=1000
--log-facility=-
--server=/cluster.local/127.0.0.1#10053
--server=/in-addr.arpa/127.0.0.1#10053
--server=/ip6.arpa/127.0.0.1#10053
State: Running
Started: Sun, 12 Nov 2017 17:15:20 +0530
Ready: True
Restart Count: 0
Requests:
cpu: 150m
memory: 20Mi
Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s
timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-
vdlgs (ro)
sidecar:
Container ID:
docker:// sha256:38bac66034a6217abfd44b4a8a763b1a4c-
973045cae2763f2cc857baa5c9a872
Image: gcr.io/google_containers/k8s-dns-sidecar-
amd64:1.14.4
Image ID:
docker://sha256:38bac66034a6217abfd44b4a8a763b1a4c-
973045cae2763f2cc857baa5c9a872
Port: 10054/TCP
Args:
--v=2
--logtostderr
--
probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.
local.,5,A
--
probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
State: Running
Started: Sun, 12 Nov 2017 17:15:19 +0530
Ready: True
Restart Count: 0
Requests:
cpu: 10m
memory: 20Mi
Liveness: http-get http://:10054/metrics delay=60s
timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-
vdlgs (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
kube-dns-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-dns
Optional: true
default-token-vdlgs:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vdlgs
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
Events:
FirstSeen LastSeen Count From SubObjectPath
Type Reason Message
--------- -------- ----- ---- -------------
-------- ------ -------
40m 40m 1 default-scheduler
Normal Scheduled Successfully assigned kube-dns-
910330662-dp7xt to minikube
40m 40m 1 kubelet, minikube
Normal SuccessfulMountVolume MountVolume.SetUp
succeeded for volume “kube-dns-config”
40m 40m 1 kubelet, minikube
Normal SuccessfulMountVolume MountVolume.SetUp
succeeded for volume “default-token-vdlgs”
40m 40m 1 kubelet, minikube spec.
containers{sidecar}
Normal Pulled Container image “gcr.io/
google_containers/k8s-dns-sidecar-amd64:1.14.4” already present on
machine
40m 40m 1 kubelet, minikube spec.
containers{sidecar}
Normal Created Created container
40m 40m 1 kubelet, minikube spec.
containers{sidecar}
Normal Started Started container
40m 40m 1 kubelet, minikube spec.
containers{kubedns}
Normal Pulled Container image “gcr.io/google_
containers/k8s-dns-kube-dns-amd64:1.14.4” already present on
machine
40m 40m 1 kubelet, minikube spec.
containers{kubedns}
Normal Created Created container
40m 40m 1 kubelet, minikube spec.
containers{kubedns}
Normal Started Started container
If a node fails, its STATUS will show as not ready.
It is possible to get information about Kubernetes objects using kubectl get.
$ kubectl get
You must specify the type of resource to get. Valid resource types
include:
* all
* certificatesigningrequests (aka 'csr')
* clusterrolebindings
* clusterroles
* clusters (valid only for federation apiservers)
* componentstatuses (aka 'cs')
* configmaps (aka 'cm')
* controllerrevisions
* cronjobs
* daemonsets (aka 'ds')
* deployments (aka 'deploy')
* endpoints (aka 'ep')
* events (aka 'ev')
* horizontalpodautoscalers (aka 'hpa')
* ingresses (aka 'ing')
* jobs
* limitranges (aka 'limits')
* namespaces (aka 'ns')
* networkpolicies (aka 'netpol')
* nodes (aka 'no')
* persistentvolumeclaims (aka 'pvc')
* persistentvolumes (aka 'pv')
* poddisruptionbudgets (aka 'pdb')
* podpreset
* pods (aka 'po')
* podsecuritypolicies (aka 'psp')
* podtemplates
* replicasets (aka 'rs')
* replicationcontrollers (aka 'rc')
* resourcequotas (aka 'quota')
* rolebindings
* roles
* secrets
* serviceaccounts (aka 'sa')
* services (aka 'svc')
* statefulsets
* storageclasses
* thirdpartyresources
Additionally, you can specify a namespace to get the Kubernetes objects from that
specific namespace. The text in parentheses (aka 'shortform') shows the short name
for each resource type, so kubectl get pods and kubectl get po return the same result.
For example:
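(The commands below are an illustration; they assume the kube-system namespace that exists in a default cluster, and any other namespace name can be substituted.)

# long form with an explicit namespace
$ kubectl get pods --namespace kube-system
# short resource name and the -n shorthand give the same result
$ kubectl get po -n kube-system
# several resource types can be queried at once
$ kubectl get svc,ep -n kube-system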
Check whether the cluster processes are running with docker ps:
$ docker ps
CONTAINER ID   IMAGE                                                        COMMAND                  CREATED          STATUS          PORTS   NAMES
57ce7b913026   gcr.io/google_containers/heapster_grafana:v2.6.0-2          "/bin/sh -c /run.sh"     28 seconds ago   Up 27 seconds           k8s_grafana.eb42e400_influxdb-grafana-qkps6_kube-system_f421d2cf-c7a8-11e7-909a-0242ac11005e_130df7e5
278222b93e3c   kubernetes/heapster_influxdb:v0.6                           "influxd --config ..."   29 seconds ago   Up 27 seconds           k8s_influxdb.78c83de7_influxdb-grafana-qkps6_kube-system_f421d2cf-c7a8-11e7-909a-0242ac11005e_9a71b614
5b430ca4827c   gcr.io/google_containers/pause-amd64:3.0                    "/pause"                 29 seconds ago   Up 28 seconds           k8s_POD.d8dbe16c_influxdb-grafana-qkps6_kube-system_f421d2cf-c7a8-11e7-909a-0242ac11005e_6364783d
72155910afc7   gcr.io/google_containers/pause-amd64:3.0                    "/pause"                 29 seconds ago   Up 28 seconds           k8s_POD.d8dbe16c_heapster-gfrzl_kube-system_f3f8a7da-c7a8-11e7-909a-0242ac11005e_f63e1e28
e18ecf4006e0   gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1  "/dashboard --port..."   29 seconds ago   Up 28 seconds           k8s_kubernetes-dashboard.5374e7ba_kubernetes-dashboard-znmh3_kube-system_f3cd40bc-c7a8-11e7-909a-0242ac11005e_61857f68
d08cad36b80f   gcr.io/google_containers/pause-amd64:3.0                    "/pause"                 29 seconds ago   Up 28 seconds           k8s_POD.d8dbe16c_kubernetes-dashboard-znmh3_kube-system_f3cd40bc-c7a8-11e7-909a-0242ac11005e_01475695
d02d0db563c7   gcr.io/google-containers/kube-addon-manager-amd64:v2        "/opt/kube-addons.sh"    31 seconds ago   Up 30 seconds           k8s_kube-addon-manager.67196b82_kube-addon-manager-host01_kube-system_389bfd83002ddc85e82b52309ae3a3c2_af0dcba7
d1db6bebbefb   gcr.io/google_containers/pause-amd64:3.0                    "/pause"                 31 seconds ago   Up 30 seconds           k8s_POD.d8dbe16c_kube-addon-manager-host01_kube-system_389bfd83002ddc85e82b52309ae3a3c2_a9942f3c
107da6b3c71a   gcr.io/google_containers/localkube-amd64:v1.5.2             "/localkube start ..."   37 seconds ago   Up 36 seconds           minikube
This command returns all of the containers that are running on the nodes and the master.
Networking Constraints
Kubernetes uses the Service object to expose an application to the outside world.
A Service acts as a proxy for a set of Kubernetes pods. To kubernize an application,
keep in mind that the application must support Network Address Translation (NAT)
and must not require forward or reverse proxies.
Inspecting and Debugging Kubernetes
To inspect and debug Kubernetes, it is recommended to have a good understanding of
its high-level architecture. A Kubernetes cluster consists of masters and nodes;
each master runs the master components, and each node runs the node components.
$ kubectl get nodes
NAME STATUS AGE VERSION
minikube Ready 1h v1.7.5
If the status is Ready for all nodes, it means all the necessary system processes
are running on the Kubernetes nodes.
Next, check the necessary pods running on the Kubernetes master and the Kubernetes nodes.
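One way to do this is to list the system pods (a minimal sketch; the exact pod names and their layout in the kube-system namespace depend on your installation):

$ kubectl get pods --namespace kube-system -o wide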
The result of the above command will vary depending on the type of installation;
in this case the cluster is running through Minikube. Use:

$ docker ps
If any of the services fails to launch on Kubernetes, check the setup (YAML) file of
the service (usually under /etc/kubernetes) and try to restart the service.
Example:
# journalctl -l -u kubelet
# journalctl -l -u kube-apiserver
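When the components run as systemd services, checking and restarting them typically looks like the following (shown for the kubelet as an illustration; the unit names depend on how Kubernetes was installed):

# systemctl status kubelet
# systemctl restart kubelet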
Querying the State of Kubernetes
The kubectl command line tool is used for getting the state of Kubernetes. Kubectl
talks to the API server, which is the key component of the master. The kubectl
utility can retrieve information about all Kubernetes objects, including Pods,
Deployments, ReplicationControllers, persistent storage and so on.
Following are examples of a few objects retrieved through the kubectl command line.
$ kubectl get rc --namespace kube-system
NAME                   DESIRED   CURRENT   READY   AGE
heapster               1         1         0       44s
influxdb-grafana       1         1         1       44s
kubernetes-dashboard   1         1         1       44s

$ kubectl get deployment --namespace kube-system
No resources found.
This only queries for basic information about Kubernetes objects. To describe a
specific object, the kubectl describe command is used. Let's describe a pod.
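A command along the following lines produces output like the one below (the pod name is taken from the docker ps listing earlier and will carry a different random suffix in your cluster):

$ kubectl describe pod influxdb-grafana-qkps6 --namespace kube-system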
Status:         Running
IP:             172.17.0.101
Created By:     ReplicationController/influxdb-grafana
Controlled By:  ReplicationController/influxdb-grafana
Containers:
  influxdb:
    Container ID:   docker://d43465cc2b130d736b83f465666c652a71d05a2a169eb72f2369c3d96723726c
    Image:          kubernetes/heapster_influxdb:v0.6
    Image ID:       docker-pullable://kubernetes/heapster_influxdb@sha256:70b34b65def36fd0f54af570d5e72ac41c3c82e086dace9e6a977bab7750c147
    Port:           <none>
    State:          Running
      Started:      Mon, 13 Nov 2017 14:11:52 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from influxdb-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-3rdpl (ro)
  grafana:
    Container ID:   docker://9808d5b4815bd87e55de203df5ce2b7503d7a4a9b6499b466ad70f86f5123047
    Image:          gcr.io/google_containers/heapster_grafana:v2.6.0-2
    Image ID:       docker-pullable://gcr.io/google_containers/heapster_grafana@sha256:208c98b77d4e18ad7759c0958bf87d467a3243bf75b76f1240a577002e9de277
    Port:           <none>
    State:          Running
      Started:      Mon, 13 Nov 2017 14:11:52 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      INFLUXDB_SERVICE_URL:        http://localhost:8086
      GF_AUTH_BASIC_ENABLED:       false
      GF_AUTH_ANONYMOUS_ENABLED:   true
      GF_AUTH_ANONYMOUS_ORG_ROLE:  Admin
      GF_SERVER_ROOT_URL:          /
    Mounts:
      /var from grafana-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-3rdpl (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  influxdb-storage:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  grafana-storage:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-3rdpl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-3rdpl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
Checking Kubernetes YAML or JSON Files
The first thing to do with the declarative approach is to validate the YAML or JSON
file that contains the specifications. Online validators are available, such as
http://www.yamllint.com/ and https://jsonlint.com/. This removes syntax and parsing
errors from the specification file.
deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
    ver: v1
spec:
  replicas: 20
  selector:
    matchLabels:
      app: hello-world
      ver: v1
  template:
    metadata:
      labels:
        app: hello-world
        ver: v1
    spec:
      containers:
      - name: hello-world
        image: kubejack/helloworld:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
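Besides online validators, kubectl itself can check a manifest against the API schema without creating anything; a quick sketch (these flags are available in kubectl of this generation):

$ kubectl create -f deployment.yaml --dry-run --validate=true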
Deleting Kubernetes Components
The kubectl delete command is used for deleting Kubernetes components. With the
imperative method there are different ways of deleting components, as the examples
below show.
# Delete pods and services with same names "baz" and "foo"
kubectl delete pod,service baz foo
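A few more illustrative variants (the resource names and label values here are placeholders):

# Delete a deployment by name
kubectl delete deployment hello-world
# Delete whatever is defined in a manifest file
kubectl delete -f deployment.yaml
# Delete all pods matching a label selector
kubectl delete pods -l app=hello-world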
9. Kubernetes Maintenance
• Monitoring Kubernetes Cluster
• Managing Kubernetes with Dashboard
• Logging Kubernetes Cluster
• Upgrading Kubernetes
Monitoring Kubernetes Cluster
The Heapster aggregator is used for monitoring and event logging. Heapster stores
the information in a storage backend; currently it supports Google Cloud Monitoring
and InfluxDB as storage backends. Heapster runs as a pod in the Kubernetes cluster
and communicates with each node of the cluster. The Kubelet agent is responsible for
providing the monitoring information to Heapster, and the Kubelet itself collects
the data from cAdvisor.
cAdvisor:
cAdvisor is an open source container usage and performance analysis agent. In
Kubernetes, cAdvisor is included in the Kubelet binary. cAdvisor auto-discovers all
containers and collects CPU, memory, network and file system usage statistics.
Kubelet:
Kubelet bridges the gap between the Kubernetes master and the Kubernetes nodes. It
manages the pods and containers on each machine.
InfluxDB and Grafana are used for storing the data and visualizing it. Google Cloud
Monitoring provides a hosted solution for monitoring the Kubernetes cluster; Heapster
can be set up to send its metrics to Google Cloud Monitoring.
$ minikube addons list
- addon-manager: enabled
- dashboard: enabled
- kube-dns: enabled
- default-storageclass: enabled
- heapster: disabled
- ingress: disabled
- registry: disabled
- registry-creds: disabled
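On Minikube, the disabled Heapster addon can simply be switched on (shown here as an illustration of the addon mechanism; the manual manifests below achieve the same result on any cluster):

$ minikube addons enable heapster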
In Minikube the addons help with monitoring, but it is also possible to add Heapster
as a regular Kubernetes deployment. The following is the manual installation of
Heapster, Grafana and InfluxDB.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.4.0
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
You can get the latest version of Heapster at https://github.com/kubernetes/heapster/ .
Using Kubectl:
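Assuming the manifest above has been saved as heapster.yaml (the file name is only illustrative), it can be applied with:

$ kubectl create -f heapster.yaml

The same approach applies to the Grafana and InfluxDB manifests that follow.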
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the
        # grafana service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through
  # an external Loadbalancer or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
If InfluxDB is the storage backend, then use the following YAML for deploying
InfluxDB in the Kubernetes cluster:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
To access the Grafana dashboard with the manual setup, describe the Grafana service
and check the endpoint of the service. The relevant part of the output looks like this:

minikube-addons...
Selector:          addonmanager.kubernetes.io/mode=Reconcile,name=influxGrafana
Type:              NodePort
IP:                10.0.0.62
Port:              <unset>  80/TCP
NodePort:          <unset>  30943/TCP
Endpoints:         172.17.0.9:3000
Session Affinity:  None
Events:            <none>
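Output like the above comes from describing the service; the service name depends on how Grafana was deployed (with the manifest above it would be monitoring-grafana, while the Minikube addon uses its own naming):

$ kubectl describe service monitoring-grafana --namespace kube-system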
Prometheus and Datadog are also good tools for monitoring the Kubernetes cluster.

Managing Kubernetes with Dashboard

With Minikube, the dashboard opens directly in the browser. The dashboard presents
the cluster information, namespace information and information about the objects
associated with the cluster, such as workloads. It consists of three informative
sections: workloads, discovery and load balancing, and config and storage.
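On Minikube the dashboard is typically opened with the bundled command below (the dashboard addon must be enabled, as shown in the addon list earlier):

$ minikube dashboard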
In the Cluster tab, the Kubernetes dashboard shows information about Namespaces,
Nodes, Persistent Volumes and Storage Classes.
It gives brief information about the cluster. Under Namespaces, it shows all the
namespaces available in the Kubernetes cluster, and it is possible to select all
namespaces or a specific one.
Depending on the namespace selected previously, further tabs show in-depth
information about the Kubernetes objects within the selected namespace. This covers
the Kubernetes workloads, which include Deployments, Replica Sets, Replication
Controllers, Daemon Sets, Jobs, Pods and Stateful Sets. Each of these workloads is
important for running an application on the Kubernetes cluster.
But the dashboard is not limited to information: it is even possible to exec into a
pod, get the logs of a pod, edit a pod and delete a pod.

This is not just for pods. For other objects such as Deployments, it is possible to
scale, edit and delete the deployment. The Discovery and Load Balancing section
provides detailed information about Ingresses and Services, and it is possible to
edit and delete an ingress or a service through the dashboard.
Config and Storage covers the information about Config Maps, Persistent Volume
Claims and Secrets.
To reach the dashboard without Minikube, the API server proxy can be used:

kubectl proxy

If a username and password are configured but unknown to you, then use:
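One common way to look up the configured credentials is to inspect the kubeconfig (a general approach rather than the only option):

$ kubectl config view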
Logging Kubernetes Cluster
Application and system level logs are useful for understanding problems with a
system. They help with troubleshooting and finding the root cause of a problem.
Like ordinary applications and systems, containerized applications also need their
logs to be recorded and stored somewhere. The most standard method of logging is to
write to the standard output and standard error streams. When the logs are
additionally recorded in separate storage, the mechanism is called Cluster Level
Logging.
In the most basic setup it is possible to write logs to standard output using just
the Pod specification.
For Example:
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
$ kubectl create -f log-example-pod.yaml
pod "counter" created

$ kubectl get po
NAME                          READY     STATUS    RESTARTS   AGE
counter                       1/1       Running   0          8s
hello-world-493621601-1ffrp   1/1       Running   3          19d
hello-world-493621601-mmnzw   1/1       Running   3          19d
hello-world-493621601-nqd67   1/1       Running   3          19d
hello-world-493621601-qkfcx   1/1       Running   3          19d
hello-world-493621601-xbf6s   1/1       Running   3          19d
$ kubectl logs counter
0: Fri Dec 1 16:37:36 UTC 2017
1: Fri Dec 1 16:37:37 UTC 2017
2: Fri Dec 1 16:37:38 UTC 2017
3: Fri Dec 1 16:37:39 UTC 2017
4: Fri Dec 1 16:37:40 UTC 2017
5: Fri Dec 1 16:37:41 UTC 2017
6: Fri Dec 1 16:37:42 UTC 2017
7: Fri Dec 1 16:37:43 UTC 2017
8: Fri Dec 1 16:37:44 UTC 2017
The most important part of node-level logging with Kubernetes is log rotation. Log
rotation ensures that the logs will not consume all the storage space of the nodes.
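One common way to cap log growth on each node (an illustration only; it assumes Docker's json-file logging driver and is not the only rotation mechanism) is to set log options in /etc/docker/daemon.json and restart Docker:

# cat >/etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "5" }
}
EOF
# systemctl restart docker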
Cluster Level Logging with Kubernetes:
Kubernetes does not provide native cluster-level logging, but cluster-level logging
is possible with several approaches. The most used and recommended method is to run
a node-level logging agent that collects the logs and ships them to a log storage
backend.
kubectl apply \
  --filename https://raw.githubusercontent.com/giantswarm/kubernetes-elastic-stack/master/manifests-all.yaml
Logging will then be enabled and you can check it through the Kibana dashboard. If
you are using Google Kubernetes Engine, Stackdriver is the default logging option
for GKE.
Upgrading Kubernetes
For clusters created on Google Compute Engine with the bundled cluster scripts, the
cluster/gce/upgrade.sh script performs the upgrade: the -M flag upgrades only the
master to the given version, while passing a release label (for example
release/stable) upgrades the whole cluster.

cluster/gce/upgrade.sh -M v1.0.2
cluster/gce/upgrade.sh release/stable

For clusters deployed with Kubespray, the upgrade is driven by the
upgrade-cluster.yml playbook shown below:
---
- hosts: localhost
gather_facts: False
roles:
- { role: kubespray-defaults}
- { role: bastion-ssh-config, tags: ["localhost", "bastion"]}
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
gather_facts: false
vars:
# Need to disable pipelining for bootstrap-os as some systems have requiretty
# in sudoers set, which makes pipelining fail. bootstrap-os fixes this on these
# systems, so in later plays it can be enabled.
ansible_ssh_pipelining: false
roles:
- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
vars:
ansible_ssh_pipelining: true
gather_facts: true
- hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: "{{ serial | default('20%') }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/preinstall, tags: preinstall }
- { role: docker, tags: docker }
- role: rkt
  tags: rkt
  when: "'rkt' in [etcd_deployment_type, kubelet_deployment_type, vault_deployment_type]"
- { role: download, tags: download, skip_downloads: false }
- hosts: etcd:k8s-cluster:vault
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults, when: "cert_management == 'vault'" }
- { role: vault, tags: vault, vault_bootstrap: true, when: "cert_management == 'vault'" }
- hosts: etcd
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: true }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: false }
- hosts: etcd:k8s-cluster:vault
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults, when: "cert_management == 'vault'"}
- { role: vault, tags: vault, when: "cert_management == 'vault'"}
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
serial: 1
roles:
- { role: kubespray-defaults}
- { role: upgrade/pre-upgrade, tags: pre-upgrade }
- { role: kubernetes/node, tags: node }
- { role: kubernetes/master, tags: master }
- { role: kubernetes/client, tags: client }
- { role: kubernetes-apps/cluster_roles, tags: cluster-roles }
- { role: network_plugin, tags: network }
- { role: upgrade/post-upgrade, tags: post-upgrade }
- hosts: kube-master[0]
any_errors_fatal: true
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/rotate_tokens, tags: rotate_tokens, when: "secret_changed|default(false)" }
- hosts: kube-master
any_errors_fatal: true
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps/network_plugin, tags: network }
- { role: kubernetes-apps/policy_controller, tags: policy-controller }
- { role: kubernetes/client, tags: client }
- hosts: calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: network_plugin/calico/rr, tags: network }
- hosts: k8s-cluster
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: dnsmasq, when: "dns_mode == 'dnsmasq_kubedns'", tags: dnsmasq }
- { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf }
- hosts: kube-master[0]
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes-apps, tags: apps }
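The playbook above is typically run with ansible-playbook, passing the target Kubernetes version; a rough sketch (the inventory path and version are placeholders to adapt to your environment):

$ ansible-playbook -b -i inventory/inventory.cfg upgrade-cluster.yml -e kube_version=v1.8.0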