Kubernetes
If you are searching for Kubernetes interview questions and answers for experienced candidates or freshers, you are at the right place. There are plenty of opportunities at many reputed companies around the world. According to industry estimates, the Kubernetes market is expected to grow to more than $5 billion by 2020, from just $180 million. So you still have the chance to move ahead in your career in Kubernetes development. GangBoard offers advanced Kubernetes interview questions and answers that help you crack your Kubernetes interview and secure your dream career as a Kubernetes developer.
If you believe you have the skills to be part of the future development of Kubernetes, GangBoard is here to guide you and nurture your career. Various Fortune 1000 companies around the world use Kubernetes to meet the needs of their customers, and Kubernetes is used across many industries. To help you grow in a Kubernetes role, this page provides detailed Kubernetes interview questions and answers, prepared by industry experts with more than 10 years of experience. These questions and answers are useful to both freshers and experienced candidates looking for a new, challenging job at a reputed company. Our Kubernetes questions and answers are kept simple and include plenty of examples for better understanding.
Kubernetes is a booming open-source platform in the tech world. It is used for automating application deployment, scaling, and management. The Kubernetes platform attracts many professionals who are interested in building their career in this niche. Here, in this blog, we provide Kubernetes interview questions for both experienced candidates and freshers. Many organizations ask frequently repeated Kubernetes interview questions that every candidate should know. Now we are going to look at the top 30 Kubernetes interview questions that are most important for becoming a skilled Kubernetes professional.
On this page, we provide Kubernetes interview questions covering various concepts such as Docker Swarm, Kubernetes commands, nodes, pods, and many more. If you want to gain expertise in Kubernetes or get placed in a top MNC, you are at the right place: here you will find the Kubernetes questions most frequently asked in interviews. So have a look at these questions to gain in-depth knowledge of the various concepts related to Kubernetes.
Answer: Kubernetes is an extensible, portable, open-source platform for managing containerized workloads and services. It has a large and fast-growing ecosystem, and its services, tools, and support are widely available.
Answer: The word Kubernetes originates from Greek, meaning pilot or helmsman. It was originally designed by Google and released in 2014, and it grew out of Google's experience running production workloads at scale. It is now maintained by the Cloud Native Computing Foundation.
Q3) What are the major differences between Kubernetes and Docker
Swarm?
Answer:
Traditional Deployment
Virtualized Deployment
Container Deployment
Going back in time, these are the three most important deployment eras to understand:
Traditional Deployment:
Earlier, in this era, organizations ran applications directly on physical servers. This caused resource allocation issues, which could only be solved by running each application on a different server.
Virtualized Deployment:
Virtualization was introduced so that many virtual machines can run on a single physical server's CPU.
Container Deployment:
Containers have flexible isolation properties and share an operating system among applications.
Answer: Containers are a good way to bundle and run your applications, but in a production environment those containers need to be managed effectively so the applications keep running. Kubernetes provides that management, along with a framework for running distributed systems resiliently. Some of its capabilities are:
Storage orchestration
Self-healing
Configuration management
Answer: There are mainly three components to deliver a functioning Kubernetes cluster. They
are:
Addons
Node components
Master components
Answer: Kubernetes is a container orchestration platform that is more comprehensive than Docker Swarm; it is designed to coordinate clusters of nodes at scale in a well-defined manner. Docker, by contrast, is the platform and tool for building and running Docker containers.
Answer: The scheduler is an important component of the master node. It watches for newly created pods that have no node assigned and selects a node for them to run on.
Q10) What are the benefits of Kubernetes?
Answer: Namespaces are used in environments where multiple users are spread across teams or projects. They are mainly designed to provide a scope for names: names must be unique within a namespace. Namespaces also provide a way to divide cluster resources between multiple users.
Answer: Initially, Kubernetes starts with three namespaces:
default: the default namespace for objects that have no other namespace specified.
kube-public: created automatically, readable by all users, and mostly reserved for cluster usage.
kube-system: the namespace for objects created by the Kubernetes system.
Answer: Pods are groups of one or more containers that are scheduled together on the same host. Containers within a pod also have access to shared volumes.
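As a minimal sketch (the pod, label, and image names here are purely illustrative), a single pod can be described in YAML like this and created with kubectl apply -f pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  labels:
    app: example
spec:
  containers:
  - name: web              # container name is illustrative
    image: nginx:1.21      # any image available in a registry
    ports:
    - containerPort: 80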
Answer: The kubelet is the node agent that runs on each node. It works in terms of a PodSpec, a YAML or JSON object that describes a pod. The kubelet takes a set of PodSpecs, provided through various mechanisms, and ensures that the containers described in those PodSpecs are running and healthy.
Answer: It is defined as a CLI (command-line interface) for performing and running commands
against Kubernetes clusters.
Answer:
The master is responsible for managing the cluster. Kubernetes automates the scheduling and distribution of application containers across the cluster in an effective manner. Minikube can be used to create a cluster locally. A Kubernetes cluster consists of two main kinds of resources: the master, which coordinates the cluster, and the nodes (workers), which run the applications.
Answer: This is a network proxy that runs on each node and reflects the services as defined in the Kubernetes API. The proxy can perform simple stream forwarding or round-robin forwarding across a set of backends. (Cluster DNS, which provides DNS records for the cluster's services, is a separate optional add-on.)
kube-proxy [flags]
Answer: The Kubernetes master is designed to control the nodes, and the nodes host the containers. The containers are grouped into pods according to the requirements and configuration, and every container we use runs inside a pod. Once a pod has been defined, it can be deployed using the CLI (command-line interface). Pods are scheduled onto nodes based on the nodes' resources and the relevant requirements. The nodes communicate with the master components through the kube-apiserver.
Answer: The Kubernetes API server is used to configure and validate API objects, which include replication controllers, services, pods, and many more. Kube-apiserver serves the REST operations and provides the frontend to the cluster's shared state, through which all the other components interact.
kube-apiserver [flags]
kube-scheduler [flags]
Answer: Kube-controller-manager is a daemon that embeds the core control loops shipped with Kubernetes. In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver. Examples of controllers that ship with Kubernetes today are the namespace controller, the replication controller, and many more.
Answer: Kubernetes uses etcd to store all of its data. Because Kubernetes is a distributed system, it needs a distributed data store, and etcd provides that: etcd is a distributed, reliable key-value store for the cluster's most critical data.
Answer: Minikube is a tool that makes it easy to learn and run Kubernetes locally. It runs a single-node Kubernetes cluster inside a virtual machine.
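For illustration, a minimal local workflow with Minikube might look like the following (assuming Minikube and kubectl are already installed):
# start a single-node cluster inside a VM (or container, depending on the driver)
minikube start
# confirm the node is up
kubectl get nodes
# stop or delete the local cluster when finished
minikube stop
minikube delete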
Answer: Load balancing is the process of exposing services to traffic. There are two types of load balancing in Kubernetes:
Internal load balancing: this balances the loads automatically and allocates them to the pods with the required configuration.
External load balancing: this directs traffic coming from external load balancers to the backend pods.
Q25) List out the components that interact with the node interface of
Kubernetes?
Answer: The following are the components that interact with the node interface of Kubernetes,
and they are:
Node Controller
Kubelet
Kubectl
Answer: The process that runs on Kubernetes Master Node is called the Kube-apiserver process.
Answer: A node, previously called a minion, is a worker machine in Kubernetes. Each node in Kubernetes contains the services necessary to run pods.
Answer: Heapster is a metrics collection and performance monitoring system for Kubernetes clusters. It enables the collection of metrics for workloads, pods, and more.
Q30) What is the future scope for Kubernetes?
Answer: Kubernetes will become one of the most widely used operating systems (OS) for the cloud in the future. The future of Kubernetes may lie more in virtual machines (VMs) than in containers.
ETCD: etcd is used to store the configuration data of every node in the cluster. It can store a large number of key-value pairs that can be shared with the nodes in the cluster. Because the data is sensitive, only the Kubernetes API server accesses etcd directly, even though the key-value store itself is designed to be shared across the cluster.
API Server: Kubernetes controls and manages the operations in the cluster through the API server. This server provides an interface through which various system libraries and tools can communicate with it.
Process Planner (Scheduler): scheduling is a major component of the Kubernetes master machine. The scheduler distributes the workload: it monitors how much of the workload is being used across the cluster nodes and places new workloads on nodes that have resources available to receive them.
Control Manager: this component is responsible for administering the current state of the cluster. It behaves like a daemon process that runs continuously in a non-terminating loop, collecting data and sending it to the API server. It handles and coordinates the various controllers.
The following are the major components of a worker node that exchange information with the Kubernetes master.
Docker: every node contains Docker to run the containers smoothly and efficiently. Docker is a basic component of every node in a cluster.
Proxy service of Kubernetes: the proxy service is responsible for establishing communication with the host; every node communicates with the host through the proxy. The proxy service helps a node forward requests to the right containers and is also responsible for primitive load balancing. It is also involved with the pods running on the node, data volumes, the creation of new containers, secrets, and so on.
Service of Kubelet: the kubelet service helps every node exchange information with the control plane and vice versa. The kubelet reads the node configuration details and writes values that are kept in the etcd store. This service also administers port forwarding, network protocols, and so on.
Namespaces give users and teams an identity that differentiates them from other users. A name assigned within a namespace must be unique. Through namespaces, cluster resources can be separated and shared within the assigned namespace itself.
They can be considered virtual clusters that are backed by the same physical cluster.
Namespaces are used to deliver logical segregation of teams and their corresponding environments.
If you want to delete a namespace from the list, use the following command:
kubectl delete namespace <xyz>
Note: xyz is given for example. You can give any name in the namespace region.
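For completeness, a brief sketch of the related commands (the namespace name xyz is again only an example):
# create a namespace
kubectl create namespace xyz
# list all namespaces in the cluster
kubectl get namespaces
# run a command against a specific namespace
kubectl get pods --namespace=xyz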
Q40) Explain how will you setup Kubernetes.
Setting up a virtual data center is the basic step before installing Kubernetes. A virtual data center is essentially a set of machines that can interact with each other over a network. If the user does not have any existing cloud infrastructure, a virtual data center can be set up on PROFITBRICKS. Once this setup is complete, the user has to set up and configure the master and the nodes. As an example, we can consider the setup on Ubuntu Linux; the same steps can be followed on other Linux machines.
Installing Docker is the basic prerequisite for running Kubernetes, and there are a few other prerequisites before installing Kubernetes itself. We shall install Docker first; the following steps should be followed to install it.
Install the apt package index and update it if necessary, using the standard apt update commands.
Q41) Once the update is installed, how do you add a new GPG key?
Add the GPG key for the Docker repository, then update the apt package index again.
Install Docker Engine, and check that the kernel version you are using is a supported one.
After installing Docker Engine, install etcd, and then install Kubernetes on the machines. A hedged sketch of these steps follows.
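A minimal sketch of these steps on a recent Ubuntu release (package names and repositories vary across versions, so treat this as illustrative rather than definitive):
# update the apt package index
sudo apt-get update
# install Docker from the Ubuntu repositories (one simple option)
sudo apt-get install -y docker.io
# verify the installation
sudo docker --version
# (alternatively, add Docker's official repository and its GPG key, then install docker-ce)
# after Docker is in place, etcd and the Kubernetes components
# (e.g. kubeadm, kubelet, kubectl) are installed from their own repositories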
Pods contain a group of containers that are installed and run on the same host. Because the containers live inside pods, configuring the pods according to their specifications is important. Pods are scheduled onto the nodes of a cluster according to the nodes' capacity and the pods' requirements.
Q44) What are the types of Kubernetes pods? How do you create
them?
Single container pod: to create a single-container pod, the user runs the kubectl run command, specifying an image that is available in a Docker registry. A command of the form sketched below is used to create a single-container pod.
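A minimal sketch (the pod name and image are illustrative):
kubectl run my-pod --image=nginx --port=80
# verify that the pod is running
kubectl get pods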
Multicontainer pods: to create a multi-container pod, we need to create a YAML file including the details of the containers. The user has to define the full specification of each container, such as its name, image, port details, image pull policy, and so on, as in the example sketched below.
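A hedged sketch of such a YAML file, with purely illustrative names and images; it is applied with kubectl create -f <file>.yaml (or kubectl apply -f <file>.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod      # hypothetical name
spec:
  containers:
  - name: web                    # first container
    image: nginx:1.21
    ports:
    - containerPort: 80
  - name: log-sidecar            # second container in the same pod
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]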
The API server provides the front end to the cluster's shared state; through this interface, the master and the nodes communicate with one another. The primary function of the API server is to validate and configure API objects, which include pods, associated services, controllers, and so on.
Q46) What do you mean by Kubernetes images?
There is no separate Kubernetes image format as of today; Kubernetes actually works with Docker images. Docker images are the primary building blocks of the container infrastructure: every container inside a pod runs a Docker image.
The important function of a Kubernetes Job is to create one or more pods and to monitor and log how well they run. A Job tracks the pods it runs and ensures that the specified number of pods completes successfully. A Job is considered complete once that specified number of pods has successfully run to completion.
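As an illustration (the Job name, image, and command are hypothetical; this mirrors the common pi-calculation example), a simple Job might look like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job
spec:
  completions: 3          # the Job is complete once 3 pods finish successfully
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never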
Labels are pairs of keys and values attached to objects such as pods, associated services, and replication controllers. Labels are generally added to an object when it is created, and they can be modified at run time.
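A brief sketch of both ways of working with labels (object names and label values are examples):
# labels defined at creation time, inside the object's metadata
metadata:
  name: example-pod
  labels:
    environment: production
    tier: frontend
# labels added or changed at run time
kubectl label pod example-pod release=stable
kubectl label pod example-pod release=canary --overwrite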
Q49) What do you know about Selectors and what are the types of
selectors in Kubernetes API?
Since multiple objects can carry the same labels, selectors are used in Kubernetes. Label selectors let users choose a set of objects. To date, the Kubernetes API supports two kinds of label selectors. They are:
Selectors based on Set: this kind of selector filters keys according to a set of values.
Selectors based on Equality: this kind of selector filters by key and by value; a matching object must satisfy all of the specified label constraints.
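For illustration, both kinds of selectors as used with kubectl (the label names and values are examples):
# equality-based: match a key and an exact value
kubectl get pods -l environment=production,tier=frontend
# set-based: filter keys against a set of values
kubectl get pods -l 'environment in (production, qa)'
kubectl get pods -l 'tier notin (backend)'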
A minion is simply a node in the Kubernetes cluster: a worker machine. A minion can be a virtual machine, a physical machine, or a cloud instance. Every node in a cluster must meet the required configuration to run pods. Two prime services, the kubelet and the proxy service, along with Docker, are needed so the node can communicate with the master and run the Docker containers of the pods created on it. Minions are not actually created by Kubernetes; they are created by a cluster manager on virtual or physical machines, or by a cloud service provider.
The node controller is a group of services running in the Kubernetes master. It is responsible for observing the activity of the nodes in a cluster, identified by the metadata name assigned to each node. The node controller checks whether a node is valid; if it is, newly created pods are assigned to that node. If the node is invalid, the node controller waits until the node becomes valid before assigning pods to it.
Google Container Engine (now Google Kubernetes Engine) is a Kubernetes-based engine that supports clusters running within Google's public cloud services. It serves as a management platform for Docker containers and clusters.
An Ingress provides a set of rules for traffic entering the Kubernetes cluster. It allows inbound connections and can be configured according to the required specifications to expose services through externally available URLs, to load-balance traffic, or to provide name-based virtual hosting. An Ingress can therefore be defined as an API object that controls and administers external access to the services in a cluster, typically over HTTP.
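A hedged sketch of an Ingress object (the host, service name, and port are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # externally available URL
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service       # hypothetical backing service
            port:
              number: 80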
A Kubernetes service is defined as a logical set of pods. It sits as an abstraction on top of the pods and has a DNS name and an IP address through which the pods can be accessed. A Kubernetes service is very useful for regulating and administering load balancing according to specific requirements, and it also makes it easy to scale pods.
Q55) What are the types of Kubernetes services?
Node port: a NodePort service exposes the service on a static port of each node where it is deployed. A ClusterIP service, to which the NodePort routes, is created automatically. The service can be reached from outside the cluster at <NodeIP>:<NodePort>. The other service types are ClusterIP (the default, reachable only from inside the cluster), LoadBalancer (which exposes the service through an external load balancer), and ExternalName (which maps the service to a DNS name). A NodePort service is sketched below.
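A minimal sketch of a NodePort service (the names, ports, and selector are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web                 # forwards traffic to pods carrying this label
  ports:
  - port: 80                 # the service (cluster) port
    targetPort: 8080         # the container port
    nodePort: 30080          # static port opened on every node (30000-32767)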
It is responsible for monitoring and verifying that the desired number of pod replicas is running. The replication controller also lets the user scale a particular set of pods, moving the replica count up or down.
A replica set is considered a substitute for the replication controller. Its prime function is to ensure that the desired number of pod replicas is running. There are two types of label selectors supported by the Kubernetes API: equality-based selectors and set-based selectors. The primary difference between the replication controller and the replica set is that the replication controller supports only equality-based selectors, whereas the replica set supports both types.
Q58) How do you update, delete and rollback in a Deployment
strategy?
Update: with this feature, the user can update an existing deployment at runtime, before it completes. The ongoing deployment ends and a fresh deployment is created.
Delete: with this feature, the user can cancel or pause an ongoing deployment by deleting it before it completes. Creating the same deployment again resumes it.
Rollback: the user can restore a database or program to a previously defined state; this process is called a rollback. Through this feature, the user can roll back the ongoing deployment.
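A sketch of the corresponding kubectl commands (the deployment and container names are examples):
# update: change the image of an existing deployment
kubectl set image deployment/my-app web=nginx:1.22
# watch the rollout progress
kubectl rollout status deployment/my-app
# rollback: return to the previous revision
kubectl rollout undo deployment/my-app
# delete: remove the deployment entirely
kubectl delete deployment my-app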
With the aid of deployment strategies, the user can replace an existing replication controller with a new one. Recreate kills all the running (existing) replicas and then creates new ones. Recreate gives the user a faster switch-over, but it increases downtime if the new pods have not yet replaced the old pods that were taken down.
Rolling update also replaces the existing replicas with new ones, but it does so gradually. The deployment takes longer, yet there is effectively no downtime: at any time, a mix of some old pods and some new pods is available to serve requests. A fragment of a Deployment spec choosing between these strategies is sketched below.
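A fragment of a Deployment spec showing how the strategy is chosen (the values are illustrative, not required defaults):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # or: Recreate
    rollingUpdate:
      maxUnavailable: 1        # at most one old pod taken down at a time
      maxSurge: 1              # at most one extra new pod created at a time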
Volumes can be thought of as directories that are accessible to the containers in a pod. The main differences between Kubernetes volumes and Docker volumes are that a Docker volume is simply a directory on disk or inside another container with no managed lifetime, whereas a Kubernetes volume has an explicit lifetime tied to the pod that encloses it: it outlives the individual containers in the pod, and its data survives container restarts. A pod can also use many different volume types at the same time.
The following are some of the widely used Kubernetes volume types:
NFS: a Network File System volume lets an existing NFS share be mounted into your pod. If you remove the pod from the node, the NFS volume is not erased; it is only unmounted.
Flocker: Flocker is an open-source manager for data volumes in clustered containers. Through a Flocker volume, the user can mount a Flocker dataset into a pod. If no such dataset exists in Flocker yet, the user has to create it first through the Flocker API.
EmptyDir: an emptyDir volume is created when a pod is assigned to a node, and it stays active as long as the pod is running on that particular node. The volume is initially empty, and the containers in the pod can read and write files in it. The data in the volume is erased once the pod is removed from that node.
AWS Elastic Block Store: This volume mounts Amazon Web Services Elastic Block Store on to
your pod. Though you remove the pod from the node, data in the volume remains.
GCE Persistent Disk: This volume mounts Google Compute Engine Persistent Disk on to your
pod. Similar to AWS Elastic Block store, the data in the volume remains even after removing the
pod from the node.
hostPath: a hostPath volume mounts a directory or file from the host's file system into your pod.
RBD: an RBD (Rados Block Device) volume lets a Rados block device be mounted into your pod. As with AWS Elastic Block Store and GCE Persistent Disk volumes, the data in the volume remains even after the pod is removed from the node.
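As a small sketch, a pod using an emptyDir volume (the names, image, and mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /cache        # where the volume appears inside the container
  volumes:
  - name: scratch
    emptyDir: {}               # created when the pod is assigned to a node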
A Persistent Volume (PV) is a unit of network storage in the cluster that has been provisioned and is controlled by the administrator; its lifecycle is independent of any individual pod that uses it.
A Persistent Volume Claim (PVC) is a request for storage that is then provided to pods in Kubernetes. The user is not expected to know the details of the underlying provisioning; the claim simply has to be created in the same namespace as the pod that uses it.
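A hedged sketch of a PersistentVolumeClaim (the name, size, and access mode are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # amount of storage requested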
As the name implies, secrets hold sensitive information, in this context the login credentials of a user. Secrets are Kubernetes objects that store sensitive data such as usernames and passwords; the values are stored base64-encoded and can additionally be encrypted at rest if the cluster is configured for it.
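For illustration (the secret name and values are examples), a secret can be created and inspected like this:
# create a secret from literal key-value pairs
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password='S3cret!'
# inspect it (values are shown base64-encoded)
kubectl get secret db-credentials -o yaml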
A network policy is a set of rules defining how the pods in the same namespace communicate with one another and with other network endpoints. The NetworkPolicy API has to be enabled in the API server when it is configured. Using the resources available in a network policy, you select pods by their labels and set rules that allow specific traffic to reach them.
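A minimal sketch of a NetworkPolicy that allows traffic only from pods labelled app: frontend to pods labelled app: backend (the labels and names are illustrative):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend            # the pods this policy applies to
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend       # only these pods may send traffic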
If you add a fresh API to Kubernetes, it provides extra features, so adding a new API improves the functionality of Kubernetes. However, it also increases the cost and complexity of the whole system, so that cost and complexity need to be kept under control. This is achieved by defining clear requirements that every new API must meet. Changes to the API server have to be made by Kubernetes team members, who are responsible for adding a new API without affecting the functioning of the existing system.
Kubernetes supports several API versions in order to support multiple structures. Versioning is applied at the alpha, beta, and stable levels, and each level follows different standards.
Alpha-level versions carry "alpha" in their name (for example, v1alpha1). This level may contain errors, and support for a feature can be dropped at any time, so it is intended only for short-lived testing.
Beta-level versions carry "beta" in their name (for example, v2beta3). The code at this level is considered firm because it has been well tested, and the user can look for support at any time; even so, this level is not recommended for business-critical applications.
Stable-level versions are included in released software for many subsequent versions; users should use the most recent stable release. The version name is simply vX, where "v" stands for version and X is an integer (for example, v1).
Kubectl commands provide an interface for communicating with pods and are used to control and manage the pods in a Kubernetes cluster. To communicate with a Kubernetes cluster, the user needs kubectl installed and configured locally. These commands are also used to inspect and manage the cluster and other Kubernetes objects.
Q73) What are the kubectl commands you are aware of?
kubectl apply
kubectl annotate
kubectl attach
kubectl api-versions
kubectl autoscale
kubectl config
kubectl cluster-info
kubectl config set-credentials
Q74) Using create command along with kubectl, what are the things
possible?
User can create several things using the create command with kubectl. They are:
Creating deployment
Creating secrets
Creating quota
Creating Cluster IP
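A few illustrative examples of the create command (all names and values are hypothetical):
kubectl create deployment web --image=nginx
kubectl create secret generic api-key --from-literal=key=12345
kubectl create quota team-quota --hard=pods=10,cpu=4
kubectl create service clusterip backend --tcp=80:8080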
The kubectl drain command is used to drain a specific node during maintenance. Once this command is given, the node is marked unschedulable and its pods are evicted, so no new containers are assigned to it. The node is made available again once it completes maintenance.
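A sketch of the typical commands (the node name is an example):
# evict the pods and mark the node unschedulable
kubectl drain worker-node-1 --ignore-daemonsets
# after maintenance, allow scheduling on the node again
kubectl uncordon worker-node-1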
To create a new application, the user first writes a Dockerfile and builds an image from it. Once the image has been built and fully tested, it can be pushed to a registry and run in containers. Deployment is the process of pulling images into containers and assigning those containers to pods in the Kubernetes cluster. Application deployment automatically sets up the application cluster, creating the pod, replication controller, replica set, and the service for the deployment. The cluster is organized so that the pods can communicate with one another properly, and a load balancer is set up to distribute traffic between the pods. Pods exchange information with one another through Kubernetes objects.
Autoscaling is one of the important features of Kubernetes. Autoscaling means scaling the nodes up or down according to the demand for service responses. With this feature, the cluster increases the number of nodes when demand for the services grows and decreases the number of nodes when demand falls. This feature is currently supported in Google Container Engine and Google Compute Engine, and AWS is expected to provide it soon.
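Closely related is pod-level autoscaling with the Horizontal Pod Autoscaler, which scales pods rather than nodes. As a hedged example (the deployment name and thresholds are illustrative):
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
# inspect the autoscaler
kubectl get hpa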
Monitoring is needed to manage larger clusters and is another important aspect of Kubernetes. Several tools are available for monitoring; Prometheus is a famous and widely used one. It not only monitors but also comes with an alerting system, and it is available as open source. Prometheus was originally developed at SoundCloud. It can handle multi-dimensional data more accurately than many other methods. Prometheus needs some additional components to do its monitoring. They are:
Grafana
Ranch-eye
InfluxDB
Kubernetes container logs are very similar to Docker container logs, but Kubernetes also lets users view the logs of deployed (running) pods. Using the kubectl log commands sketched below, we can retrieve quite specific information.
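A few commonly used forms of the command (the pod and container names are examples):
# logs of a running pod
kubectl logs my-pod
# logs of one container inside a multi-container pod
kubectl logs my-pod -c web
# stream logs continuously
kubectl logs -f my-pod
# logs of the previous (crashed) instance of a container
kubectl logs my-pod --previous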
The Sematext Docker agent is popular among present-day developers. It is a log collection agent that also gathers metrics and events. It runs as a small container on every Docker host and collects metrics, events, and logs for all containers and cluster nodes. If the core services are deployed in Docker containers, it observes every container, including the containers running the Kubernetes core services.