Kubernetes Interview Questions and Answers

There are various important Kubernetes Interview Questions and Answers asked by different companies during their interview process. The Kubernetes Interview Questions discussed in this post cover almost all Kubernetes topics. Some important Kubernetes Interview Questions and Answers, organized by type, are as follows:

Basic Kubernetes Interview Questions and Answers

This section consists of basic Kubernetes Interview Questions and Answers that are knowledge-based and frequently asked in interviews:

Q.1. How is Kubernetes different from Docker Swarm?
Ans:

Differences between Kubernetes and Docker Swarm, by feature:

  • Installation & cluster configuration: Kubernetes setup is complicated, but once installed the cluster is robust. Docker Swarm installation is very simple, but the cluster is not as robust.
  • GUI: Kubernetes provides the Kubernetes Dashboard. Docker Swarm has no GUI.
  • Scalability: Kubernetes is highly scalable and scales fast. Docker Swarm is also highly scalable and scales about five times faster than Kubernetes.
  • Auto-scaling: In Kubernetes, auto-scaling is possible. In Docker Swarm, auto-scaling is not possible.
  • Load balancing: Kubernetes needs manual intervention for load balancing traffic between different pods and containers. Docker Swarm performs automatic load balancing of traffic between the containers in the cluster.
  • Rolling updates & rollbacks: Kubernetes can deploy rolling updates and performs automatic rollbacks. Docker Swarm can deploy rolling updates, but does not perform automatic rollbacks.
  • Data volumes: Kubernetes can share storage volumes only with other containers in the same pod. Docker Swarm can share storage volumes with any other container, whether or not they reside in the same pod.
  • Logging & monitoring: Kubernetes has in-built tools for logging and monitoring. Docker Swarm relies on third-party tools (for example, the ELK stack) for logging and monitoring.


Q.2. What is Kubernetes?
Ans: Definition of Kubernetes:

Kubernetes is an open-source tool used for container management, which holds all the responsibilities of containers, from deployment through scaling and descaling to load balancing. As Google’s brainchild, Kubernetes offers an excellent community and works brilliantly with all the major cloud providers. So we can conclude that Kubernetes is not just a containerization platform but a multi-container management solution.

Q.3. How is Kubernetes related to Docker?
Ans: It’s a well-known fact that Docker provides lifecycle management of containers and that a Docker image builds the runtime containers. However, these individual containers have to communicate, and that is where Kubernetes comes in. So Docker builds the containers, and those containers communicate with each other via Kubernetes: containers running on multiple hosts can be linked and orchestrated using Kubernetes.

Q.4. What is the difference between deploying applications on containers and hosts?
Ans: The difference between deploying applications on containers and on hosts is explained with the help of the diagram below:

[Diagram: applications deployed on hosts (left) vs. in containers (right)]

Refer to the figure above. The architecture on the left represents the deployment of applications on hosts. This kind of architecture has an operating system with a kernel, and the various libraries the applications need are installed on that operating system. In such a framework you can have any number of applications, and all of them share the libraries present in the operating system. Deploying applications in containers changes this architecture slightly.

In the containerized architecture, the kernel is the only thing common to all the applications. If a particular application needs Java, then only that application gets access to Java, and if another application needs Python, then only that application has access to Python.

The individual blocks on the right side of the diagram are containerized, and each is isolated from all other applications. Each application has its necessary libraries and binaries isolated from the rest of the system, and they cannot be encroached upon by any other application.

Q.5. What is Container Orchestration?

Ans: Consider a scenario where you have five or six microservices for an application. These microservices are put in individual containers, but they won’t be able to communicate without container orchestration. Just as orchestration in music means all the instruments playing together in harmony, container orchestration means all the services in individual containers working together to fulfill the needs of a single application.

Q.6. What is the need for Container Orchestration?

Ans: Consider that you have five or six microservices for a single application, each performing a different task and each kept inside a container. To make sure these containers communicate with each other, we need container orchestration.
Many challenges arose without this process, and container orchestration came into the picture to overcome them.

Q.7. What are the features of Kubernetes?

Ans: Features of Kubernetes:

  • Automation of manual processes: Kubernetes automates various manual processes; for example, it controls which server will host a container and how it will be launched.
  • Interacts with several groups of containers: Kubernetes enables the management of multiple clusters at the same time.
  • Provides additional services: Alongside the management of containers, Kubernetes offers security, networking, and storage services.
  • Self-monitoring: Kubernetes constantly monitors the health of nodes and containers.
  • Horizontal scaling: Kubernetes allows you to scale resources not only vertically but also horizontally, easily and quickly.
  • Storage orchestration: Kubernetes mounts and adds a storage system of your choice to run applications.
  • Automated rollouts and rollbacks: If something goes wrong after a particular change to your application, Kubernetes will roll it back for you.
  • Container balancing: Kubernetes always knows where to place containers, by calculating the best location for them.
  • Runs everywhere: Kubernetes is an open-source tool that gives you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you move workloads anywhere you want.

Q.8. How does Kubernetes simplify containerized Deployment?
Ans: A typical application has a cluster of containers running across multiple hosts, and all of these containers need to communicate with each other. To perform this task, you need something that can balance the load, scale, and monitor the containers. Since Kubernetes is cloud-agnostic and can run on any public or private cloud provider, it should be your choice to simplify containerized deployment.

Q.9. What do you know about clusters in Kubernetes?

Ans:

The fundamental idea behind Kubernetes is desired (or required) state management: we feed the cluster services a specific configuration, and it is up to the cluster services to go out and run that configuration on the infrastructure.

So the deployment file holds all the configuration that needs to be fed into the cluster services. The deployment file is fed to the API, and it is then entirely up to the cluster services to figure out how to schedule the pods in the environment and to ensure that the right number of pods are running.

The API that sits in front of the services, the worker nodes, and the kubelet process that the nodes run all together make up the Kubernetes cluster.
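As a minimal sketch, such a deployment file might look like the following (the name my-app and the nginx image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # placeholder name
spec:
  replicas: 3                # desired state: keep three pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25    # example image
        ports:
        - containerPort: 80
```

Feeding this file to the API (for example with kubectl apply -f deployment.yaml) leaves it to the cluster services to schedule the pods and keep three of them running.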

Q.10. What is Google Container Engine?

Ans: Google Kubernetes Engine (GKE) is an orchestration and management system for Docker containers and container clusters that run within Google's public cloud services. GKE is based on Kubernetes, Google's open-source container management system.
Organizations typically use Google Kubernetes Engine to:

  • Create or resize Docker container clusters
  • Create container pods, replication controllers, jobs, services, or load balancers
  • Resize application controllers
  • Update and upgrade container clusters
  • Debug container clusters

Users interact with Google Kubernetes Engine using the gcloud command-line interface or the Google Cloud Platform Console.

Google Kubernetes Engine (GKE) is frequently used by software developers for creating and testing new enterprise applications. Administrators also use containers to better meet the scalability and performance demands of enterprise applications, such as web servers.

Google Kubernetes Engine (GKE) comprises a group of Google Compute Engine instances that run Kubernetes. A master node manages a cluster of Docker containers; it also runs the Kubernetes API server to interact with the cluster and to perform tasks such as serving API requests and scheduling containers. Beyond the master node, a cluster includes one or more nodes, each running a Docker runtime and a kubelet agent, which are needed to manage the Docker containers.

Q.11. What is Heapster?

Ans: Heapster is a cluster-wide aggregator of the data provided by the kubelet running on each node. (Note that Heapster has since been deprecated in favor of tools such as metrics-server, but it still comes up in interviews.) This container management tool is supported natively on a Kubernetes cluster and runs as a pod, just like any other pod in the cluster. Heapster discovers all the nodes in the cluster and queries usage information from them via the on-machine Kubernetes agent, the kubelet.

Q.12.  What is Minikube?

Ans: Minikube is a tool which makes it easy to locally run Kubernetes. Minikube runs a single-node Kubernetes cluster within a virtual machine.
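For example, assuming Minikube and kubectl are already installed locally, a typical session is:

```shell
minikube start        # boot a single-node Kubernetes cluster in a local VM
kubectl get nodes     # the cluster reports exactly one node
minikube stop         # shut the cluster down when finished
```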

Q.13.  What is Kubectl?

Ans: kubectl is the Kubernetes command-line tool. You use kubectl to pass commands to the cluster: it provides the command-line interface (CLI) for running commands against a Kubernetes cluster, with various ways to create and manage Kubernetes components.
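A few common kubectl invocations illustrate this (the pod and file names are placeholders):

```shell
kubectl get pods -n kube-system     # list pods in a namespace
kubectl describe pod <pod-name>     # inspect one pod in detail
kubectl apply -f deployment.yaml    # create or update resources from a manifest
kubectl logs <pod-name>             # fetch a container's logs
kubectl delete -f deployment.yaml   # remove the resources again
```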

Q.14.  What is Kubelet?

Ans: Kubelet is an agent service that runs on each node and enables the worker node to communicate with the master. Kubelet works from the description of containers provided to it in PodSpecs and makes sure that the containers described in those PodSpecs are up and running.

Q.15. What do you understand by a node in Kubernetes?
Ans:
A node in a Kubernetes cluster can be defined as follows:

  • A node is the main worker machine in a Kubernetes cluster
  • Nodes are also known as minions
  • A node can run on a physical machine or a VM
  • Nodes provide all the necessary services to the pods
  • Nodes in a Kubernetes system are managed by the master

In the above section, we discussed Basic Kubernetes Interview Questions and Answers. In the next section, we will discuss a different category: Architecture-Based Kubernetes Interview Questions and Answers.

Architecture-Based Kubernetes Interview Questions and Answers

This section will consist of Architecture based Kubernetes Interview Questions and Answers that are frequently asked in the interviews:

Q1. What are the different components of Kubernetes Architecture?
Q2. What do you understand by Kube-proxy?
Q3.  Can you brief on the working of the master node in Kubernetes?
Q4.  What is the role of kube-scheduler and kube-apiserver?
Q5.  Can you brief about the Kubernetes controller manager?
Q6.  What is ETCD?
Q7. What are the different types of services in Kubernetes?
Q8. What do you understand by load balancer in Kubernetes?
Q9. What is Ingress network, and how does it work?
Q10.  What do you understand by Cloud controller manager?
Q11. What is Container resource monitoring?
Q12. What is the difference between a replica set and replication controller?
Q13. What is a Headless Service?
Q14. What are the best security measures that you can take while using Kubernetes?
Q15. What are federated clusters?

In the above section, we discussed Architecture-Based Kubernetes Interview Questions and Answers. In the next section, we will discuss a different category: Scenario-Based Kubernetes Interview Questions and Answers.

Scenario-Based Kubernetes Interview Questions and Answers

This section of questions will consist of various Scenario Based Kubernetes Interview Questions and Answers that you may face during your interviews.

Scenario 1: Suppose a company built on a monolithic architecture handles numerous products. As the company expands in today's scaling industry, its monolithic architecture starts causing problems.

  • How do you think the company shifted from monolithic to microservices and deployed its services in containers?

Scenario 2: Consider a multinational company with a highly distributed system, a large number of data centers, many virtual machines, and many employees working on various tasks.

  • How do you think such a company can manage all its tasks in a consistent way with Kubernetes?

Scenario 3: Consider a situation, where a company wants to increase its efficiency and the speed of its technical operations by maintaining minimal costs. 

  • How do you think the company will try to achieve this?

Scenario 4: Suppose a company wants to revise its deployment methods and build a platform that is much more scalable and responsive.

  • How do you think this company can achieve this to satisfy their customers?

Scenario 5: Consider a multinational company with a highly distributed system that is looking to solve its monolithic code base problem.

  • How do you think the company can solve their problem?

Scenario 6: All of us know that the shift from monolithic to microservices solves the problem from the development side, but increases the problem at the deployment side.

  • How can the company solve the problems that arise on the deployment side?

Scenario 7:  Suppose a company wants to optimize the distribution of its workloads, by adopting new technologies.

  • How can the company achieve this distribution of resources efficiently?

Scenario 8: Consider a carpooling company that wants to increase its number of servers while simultaneously scaling its platform.

  • How do you think the company will deal with the servers and their installation?

Scenario 9: Consider a scenario where a company wants to deliver all of its required offerings to customers who have various environments.

  • How do you think they can achieve this critical target in a dynamic manner?

Scenario 10: Suppose a company wants to run various workloads on different cloud infrastructure from bare metal to a public cloud.

  • How will the company achieve this in the presence of different interfaces?

In the above section, we discussed Scenario-Based Kubernetes Interview Questions and Answers. In the next section, we will discuss a different category: Multiple Choice Kubernetes Interview Questions and Answers.

Multiple Choice Kubernetes Interview Questions and Answers

This section of questions will consist of Multiple Choice Kubernetes Interview Questions and Answers that are frequently asked in interviews.

Q.1. What are minions in Kubernetes cluster? 

  1. They are components of the master node.
  2. They are the work-horse / worker node of the kubernetes cluster.
  3. They are monitoring engine used widely in kubernetes.
  4. They are docker container service.

Ans: They are the work-horse / worker node of the kubernetes cluster.

Q.2. Kubernetes cluster data is stored in which of the following?

  1. Kube-apiserver
  2. Kubelet
  3. Etcd
  4. None of the above

Ans: Etcd.

Q.3. Which of them is a Kubernetes Controller?

  1. ReplicaSet
  2. Deployment
  3. Rolling Updates
  4. Both ReplicaSet and Deployment

Ans: Both ReplicaSet and Deployment.

Q.4. Which of the following are core Kubernetes objects?

  1. Pods
  2. Services
  3. Volumes
  4. All of the above

Ans: All of the above.

Q.5. The Kubernetes Network proxy runs on which node?

  1. Master Node
  2. Worker Node
  3. All the nodes
  4. None of the above

Ans: All the nodes.

Q.6. What are the responsibilities of a node controller?

  1. To assign a CIDR block to the nodes
  2. To maintain the list of nodes
  3. To monitor the health of the nodes
  4. All of the above

Ans: All of the above.

Q.7. What are the responsibilities of Replication Controller?

  1. Update or delete multiple pods with a single command
  2. Helps to achieve the desired state
  3. If the existing pod crashes, creates a new pod
  4. All of the above

Ans: All of the above.

Q.8. How to define a service without a selector?

  1. Specify the external name
  2. Specify an endpoint with IP Address and port
  3. Just by specifying the IP address
  4. Specifying the label and api-version

Ans: Specify the external name.

Q.9. What did the 1.8 version of Kubernetes introduce?

  1. Taints and Tolerations
  2. Cluster level Logging
  3. Secrets
  4. Federated Clusters

Ans: Taints and Tolerations.

Q.10. Which handler is invoked by the kubelet to check whether a port at a container’s IP address is open?

  1. HTTPGetAction
  2. ExecAction
  3. TCPSocketAction
  4. None of the above

Ans: TCPSocketAction.

In the above section, we discussed Multiple Choice Kubernetes Interview Questions and Answers. In the next section, we will discuss a different category: Kubernetes Interview Questions and Answers Based on Different Sections.

Kubernetes Interview Questions and Answers based on Different Sections

This section of questions will consist of Section based Kubernetes Interview Questions and Answers that are frequently asked in interviews.

We have divided the Kubernetes interview questions and answers into the following sections. Note that these sections overlap, so it is difficult to draw a sharp line between them. The sections are:

  • Administration
  • Compute
  • Storage
  • Network
  • Security
  • Monitoring
  • Logging

And, Kubernetes Interview Questions and Answers based on these are as follows:

Administration Based Kubernetes Interview Questions and Answers

This section of questions will consist of Administration Based Kubernetes Interview Questions and Answers that are frequently asked in interviews.

Que.1. How to do maintenance activity on a K8s node?
Ans: Maintenance is an inevitable part of administration: you may need to patch a K8s node or apply security fixes to it. First mark the node as unschedulable, then drain the pods running on it:

  • kubectl cordon <node-name>
  • kubectl drain <node-name> --ignore-daemonsets

It is very important to include --ignore-daemonsets for any daemonsets running on this node. If a statefulset is running on this node and no other node is available to maintain the statefulset's replica count, its pods will remain in Pending status. Once the maintenance is done, run kubectl uncordon <node-name> to make the node schedulable again.

Que.2. What is the role of a pause container?
Ans: The main role of the pause container is to serve as the parent container for all the containers in your pod.

  • It serves as the basis of Linux namespace sharing in the pod.
  • It holds PID 1 (process ID 1) for each pod in order to reap zombie processes.

Que.3. Why do we need a service mesh?
Ans: A service mesh makes sure that communication among containerized, and often ephemeral, application infrastructure services is fast, secure, and reliable. A service mesh provides critical capabilities including service discovery, load balancing, observability, traceability, encryption, authorization and authentication, and support for the circuit-breaker pattern.

Que.4. How to control the resource usage of a pod?
Ans: The resource usage of a pod is controlled with requests and limits:

Request: the amount of resources requested for a container. If a container exceeds its request, it may be throttled back down to its request.

Limit: an upper cap on the resources a container is able to use. If a container tries to exceed this limit, it may be terminated if Kubernetes decides another container needs the resources. If you are sensitive to pod restarts, it makes sense to keep the sum of all container resource limits at or below the total resource capacity of your Kubernetes cluster.
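As a sketch, requests and limits are set per container in the pod spec (the names and numbers below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.25        # example image
    resources:
      requests:
        cpu: 250m            # 0.25 core reserved for scheduling decisions
        memory: 128Mi
      limits:
        cpu: 500m            # CPU usage is throttled above this
        memory: 256Mi        # exceeding this gets the container OOM-killed
```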

Que.5. What are the units of CPU and memory in a pod definition?
Ans: CPU is specified in cores, commonly expressed in millicores (for example, 500m is half a core), and memory is specified in bytes, usually with suffixes such as Mi or Gi. CPU is a compressible resource, so it can be throttled; memory is not, and a container that exceeds its memory limit is killed.

Que.6. Where else can we set a resource limit?
Ans: You can also set resource limits on a namespace, which is helpful in scenarios where people have a habit of not defining resource limits in the pod definition.
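One way to do this is with a LimitRange object in the namespace, which fills in defaults for containers that declare nothing themselves (the names and values below are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: dev             # placeholder namespace
spec:
  limits:
  - type: Container
    default:                 # applied when a container sets no limits
      cpu: 500m
      memory: 256Mi
    defaultRequest:          # applied when a container sets no requests
      cpu: 100m
      memory: 128Mi
```

A ResourceQuota object can additionally cap the total resource consumption of the whole namespace.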

Que.7. How will you update the version of K8s?
Ans: Before updating the K8s version, it is crucial to read the release notes to properly understand the changes introduced in the newer version, and to check whether the version update will also update etcd.

Que.8. What is the difference between Helm and a K8s operator?
Ans: A K8s operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. An operator builds upon the basic Kubernetes resource and controller concepts, but also includes domain-specific knowledge about its application to automate common tasks that are better managed by software. Helm, on the other hand, is a package manager, like apt-get or yum.

Que.9. Explain the role of a Custom Resource Definition (CRD) in K8s.
Ans: A custom resource is an extension of the Kubernetes API that is not necessarily available in a default Kubernetes installation; it represents a customization of a particular installation. That said, many core Kubernetes functions are now built using custom resources, which makes Kubernetes more modular.
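As a sketch, a minimal CRD registering a hypothetical CronTab resource could look like this (the group, names, and field are all placeholders):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com       # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string       # hypothetical field of the custom resource
```

Once applied, kubectl get crontabs works like it does for any built-in resource.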

Que.10. What are the various K8s-related services running on nodes, and what is the role of each?
Ans: A K8s cluster mainly consists of two types of node services, master and executor services, which are explained as follows:

Master services: 

  • kube-apiserver: the master API service, which acts as the front door to the K8s cluster.
  • kube-scheduler: schedules pods according to the resources available on the executor nodes.
  • kube-controller-manager: a control loop that watches the shared state of the cluster through the kube-apiserver and makes changes attempting to move the current state towards the desired state.

Executor (worker) node services (these services also run on the master node):

  • kube-proxy: the Kubernetes network proxy, which runs on each node. It reflects the services defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding, or round-robin forwarding across a set of backends.
  • kubelet: takes a set of PodSpecs provided through various mechanisms (primarily through the apiserver) and ensures that the containers described in those PodSpecs are up and running.

Que.11. What is the recommended way of managing access to multiple clusters?
Ans: kubectl looks for a kubeconfig file, in which access information for multiple clusters can be specified. The kubectl config set of commands is used to manage access to these multiple clusters.

Que.12. What is a Pod Disruption Budget (PDB)?
Ans: A Pod Disruption Budget specifies the number of replicas that an application can tolerate losing, relative to how many it is intended to have. For example, a Deployment with .spec.replicas: 6 is supposed to have 6 pods at any given time. If its Pod Disruption Budget allows for 5 at a time, then the Eviction API will allow voluntary disruption of one pod at a time, but not two. A Pod Disruption Budget applies only to voluntary disruptions.
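A PDB matching the example above can be sketched as follows (the app label is a placeholder):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 5          # with 6 replicas, only 1 voluntary disruption at a time
  selector:
    matchLabels:
      app: my-app          # placeholder label of the protected pods
```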

Que.13. In which situations are daemonsets normally used?
Ans: Daemonsets are used to start pods on every node in a cluster. They are generally used to run monitoring or logging agents that should run on every executor node in the cluster.
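A minimal DaemonSet for such an agent might be sketched like this (the agent image is just an example):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluent-bit:2.2   # example logging-agent image
```

The controller ensures that one copy of this pod runs on every schedulable node, including nodes added later.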

Que.14. When are stateful sets preferred?
Ans: Stateful sets are preferred when you run applications that require quorum, that is, applications that are not truly stateless; for such applications, stateful sets are required.

Que.15. What is an init container and when can it be used?
Ans: Init containers are containers that set the stage before the actual pod containers run. Typical uses are:

  • Waiting for some time before starting the application container, with a command like sleep 60.
  • Cloning a git repository into a volume.
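A sketch of the first use case, with placeholder images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait
    image: busybox:1.36
    command: ["sh", "-c", "sleep 60"]   # must finish before the app container starts
  containers:
  - name: app
    image: nginx:1.25                   # example application image
```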

Que.16. What are the application deployment strategies?
Ans: In this agile world there is a continuous demand for upgrading applications, and we have multiple strategies for deploying a new version of an application:

  • Recreate: The old style; the existing application version is destroyed and the new version is deployed, resulting in a significant amount of downtime.
  • Rolling update: Gradually brings down the existing deployment while introducing the new version. You can decide how many instances are upgraded at any single point in time.
  • Shadow: Traffic going to the existing version is replicated to the new version to check whether it works. Istio provides this pattern.
  • A/B testing using Istio: Runs multiple variants of the application together and determines the best one based on user traffic. It is used more for management decisions.
  • Blue/green: Switches traffic from one version of the application to another all at once.
  • Canary deployment: A certain percentage of traffic is shifted from one version of the application to another; if things go well, the traffic shift keeps increasing. It differs from a rolling update, in which the count of the existing version is reduced gradually.
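For example, a rolling update is tuned in the Deployment spec (the names and numbers below are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one pod below the desired count during the update
      maxSurge: 1          # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx:1.25  # bumping this tag triggers a rolling update
```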

In the above section, we discussed Administration Based Kubernetes Interview Questions and Answers. In the next section, we will discuss a different category: Compute Based Kubernetes Interview Questions and Answers.

Compute Based Kubernetes Interview Questions and Answers

This section of questions will consist of Compute based Kubernetes Interview Questions and Answers that are frequently asked in interviews.

Que.17. How to troubleshoot if a pod is not getting scheduled?
Ans: Many factors can leave a pod unschedulable; the most common is that the cluster is running out of resources. Use a command like kubectl describe pod <pod-name> -n <namespace> to check the reason why the pod has not started, and keep watching kubectl get events to see all events coming from the Kubernetes cluster.

Que.18. How to run a pod on a particular node?
Ans: Various methods are available to achieve this:

  • nodeName: Specify the node name in the pod spec; the pod will then try to run on that specific node.
  • nodeSelector: Assign a specific label to nodes that have special resources, and use the same label in the pod spec so that the pod runs only on those nodes.
  • node affinity: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution are the hard and soft requirements for running a pod on specific nodes. Node affinity is intended to replace nodeSelector in the future and depends on node labels.
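The nodeSelector approach can be sketched in two steps (the node and label names are placeholders):

```yaml
# First label the node (CLI):  kubectl label nodes <node-name> disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd          # pod schedules only onto nodes carrying this label
  containers:
  - name: app
    image: nginx:1.25      # example image
```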

Que.19. How to make sure that pods are collocated to get performance benefits?
Ans: podAffinity and podAntiAffinity are the two affinity concepts for keeping pods on the same node (podAffinity) and for keeping them apart (podAntiAffinity). The key point to note is that both match on pod labels.

Que.20. What are taints and tolerations?
Ans: A taint allows a node to repel a set of pods. You set taints on a node, and only pods with a toleration matching the taint can run on that node. This is useful when you have dedicated a node to one user and do not want pods from other users running on it.
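As a sketch, with a placeholder taint key and value:

```yaml
# Taint the node first (CLI):  kubectl taint nodes <node-name> team=alpha:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "team"
    operator: "Equal"
    value: "alpha"
    effect: "NoSchedule"   # matches the node's taint, so scheduling is allowed
  containers:
  - name: app
    image: nginx:1.25      # example image
```

Pods without this toleration are repelled from the tainted node.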

In the above section, we discussed Compute Based Kubernetes Interview Questions and Answers. In the next section, we will discuss a different category: Storage Based Kubernetes Interview Questions and Answers.

Storage Based Kubernetes Interview Questions and Answers

This section of questions will consist of Storage based Kubernetes Interview Questions and Answers that are frequently asked in interviews.

Que.21. How to provide persistent storage for a pod?
Ans: Persistent volumes are used for persistent pod storage. They can be provisioned statically or dynamically:
Static: A cluster administrator creates a number of PVs (persistent volumes), which carry the details of the real storage available for use by cluster users.
Dynamic: A user creates a PersistentVolumeClaim (PVC) specifying an existing storage class, and a volume is created dynamically based on that PVC.
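A dynamic claim can be sketched as follows (the storage class name is an assumption about the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard   # assumed existing StorageClass
  resources:
    requests:
      storage: 1Gi
```

The pod then references the claim under spec.volumes with persistentVolumeClaim.claimName: data-claim.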

In the above section, we discussed Storage Based Kubernetes Interview Questions and Answers. In the next section, we will discuss a different category: Logging Based Kubernetes Interview Questions and Answers.

Logging Based Kubernetes Interview Questions and Answers

This section of questions will consist of Logging based Kubernetes Interview Questions and Answers that are frequently asked in interviews.

Que.22. How to get central logs from pods?
Ans: The architecture depends on the application and many other factors. The common logging patterns are as follows:

  • Node-level logging agent
  • Streaming sidecar container
  • Sidecar container with a logging agent
  • Exporting logs directly from the application

In our setup, journalbeat and filebeat run as daemonsets. The logs they collect are dumped to a Kafka topic and eventually shipped to an ELK (Elasticsearch, Logstash, and Kibana) stack.
The same can be achieved with an EFK (Elasticsearch, Fluentd, and Kibana) stack and fluent-bit.

In the above section, we discussed Logging Based Kubernetes Interview Questions and Answers. In the next section, we will discuss a different category: Monitoring Based Kubernetes Interview Questions and Answers.

Monitoring Based Kubernetes Interview Questions and Answers

This section of questions will consist of Monitoring based Kubernetes Interview Questions and Answers that are frequently asked in interviews.

Que.23. How to monitor K8 cluster?
Ans: Prometheus is used for K8 monitoring. Prometheus ecosystem consists of multiple components which are explained as follows:

  • the main Prometheus server, which scrapes and stores time-series data
  • client libraries for instrumenting application code
  • a push gateway for supporting short-lived jobs
  • special-purpose exporters for services such as Graphite, HAProxy, StatsD, etc.
  • an Alertmanager that handles alerts
  • various support tools
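The components above come together in the Prometheus server's configuration. A minimal sketch of a `prometheus.yml`, in which the exporter and Alertmanager addresses are assumptions:

```yaml
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: prometheus           # the server scraping itself
    static_configs:
      - targets: ['localhost:9090']
  - job_name: node-exporter        # assumed exporter endpoint
    static_configs:
      - targets: ['node-exporter:9100']
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
```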

Que.24. How do you make Prometheus HA?

Ans: You can run multiple instances of Prometheus, but Grafana can use only one of them as a datasource. You may put a load balancer in front of the multiple Prometheus instances, use sticky sessions, and fail over if one of the instances dies, but this makes things complicated. Thanos is another project that solves these challenges.

Que.25. What are the other challenges with Prometheus?

Ans: Despite being very good at K8s monitoring, Prometheus still has some issues:

  • No native HA support.
  • No downsampling of the collected metrics over a period of time.
  • No support for object storage for long-term metric retention.

All of the above challenges are again overcome by Thanos.

Que.26. What is the Prometheus Operator?
Ans: The mission of the Prometheus Operator is to make running Prometheus on top of Kubernetes as easy as possible, while preserving configurability as well as making the configuration Kubernetes native.
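With the Prometheus Operator, scrape targets are declared as Kubernetes-native custom resources instead of hand-edited configuration files. A hypothetical ServiceMonitor, in which all names and labels are assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    team: backend            # label the Prometheus custom resource selects on
spec:
  selector:
    matchLabels:
      app: my-app            # matches the labels on the target Service
  endpoints:
  - port: metrics            # named port on the Service to scrape
    interval: 30s
```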

In the above section, we discussed Monitoring Based Kubernetes Interview Questions and Answers. In the next section, we will cover a different category: Network Based Kubernetes Interview Questions and Answers.

Network Based Kubernetes Interview Questions and Answers

This section of questions will consist of Network based Kubernetes Interview Questions and Answers that are frequently asked in interviews.

Que.27. How do two containers running in a single POD share a single IP address?
Ans: Kubernetes implements this by creating a special container for each POD whose only purpose is to provide a network interface for all the other containers. This is the pause container, which is responsible for namespace sharing in the POD. People generally ignore the existence of the pause container, but it is actually the heart of the networking and other functionalities of the POD. The pause container provides a single virtual interface that is used by all containers running in the POD.

Que.28. What are the various ways to provide external-world connectivity to K8s?
Ans: By default a POD can reach the external world, but for the reverse case we need to do some work. To connect to a POD from the outside world, the following options are available:

  • NodePort (exposes the same port on every node of the cluster)
  • Load balancer (operates at L4, the transport layer of TCP/IP)
  • Ingress (operates at L7, the application layer of TCP/IP)

Another method is kubectl proxy, which can be used to expose a service that has only a cluster IP on a local port:

$ kubectl proxy --port=8080
$ curl http://localhost:8080/api/v1/proxy/namespaces/<namespace>/services/<service-name>:<port-name>/

When should Kubernetes NodePort, load balancer, and ingress be used?

Que.29. What is the difference between NodePort and load balancer?
Ans: A NodePort relies on the IP address of the node, and node ports can only be allocated from the range 30000–32767. A load balancer, on the other hand, has its own IP address. All the major cloud providers will create the LB for you if you specify the LoadBalancer type while creating the service. On bare-metal clusters, MetalLB is promising.
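As a sketch, the same backend exposed both ways. The names, ports, and selector labels below are illustrative assumptions:

```yaml
# NodePort: reachable on <any-node-ip>:30080
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080        # must fall in the 30000-32767 range
---
# LoadBalancer: the cloud provider allocates an external IP
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```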

Que.30. When do we need an ingress instead of a LB?
Ans: You need one LB for each service, whereas a single ingress can front multiple services. This allows you to do both path-based and subdomain-based routing to backend services, and you can perform SSL termination at the ingress.
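A minimal sketch of an ingress doing path-based routing to two backend services, with TLS terminated at the ingress. The host, paths, service names, and secret name are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com        # subdomain-based routing works per host rule
    http:
      paths:
      - path: /api           # path-based routing
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
  tls:                       # SSL termination at the ingress
  - hosts: [example.com]
    secretName: example-tls  # assumed TLS secret
```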

Que.31. How does POD to POD communication work?
Ans: For POD to POD communication, it is always recommended to use the K8s service DNS instead of the POD IP, because PODs are ephemeral and their IPs can change after a redeployment.

If the two PODs are running on the same host, the physical interface does not come into the picture:

  • The packet leaves POD1's virtual network interface and goes to the docker bridge (cbr0).
  • The docker bridge forwards the packet to POD2, which is running on the same host.

If the two PODs are running on different hosts, the physical interfaces of both host machines come into the picture. Let us consider a scenario in which no CNI is used.

POD1 = 192.168.2.10/24 (node1, cbr0 192.168.2.1)
POD2 = 192.168.3.10/24 (node2, cbr1 192.168.3.1)

  • POD1 sends the traffic destined for POD2 to its GW (cbr0) because the two PODs are on different subnets.
  • The GW does not know about the 192.168.3.0/24 network, so it forwards the traffic to the physical interface of node1.
  • node1 forwards the traffic to its own physical router/gateway.
  • That physical router/GW must have a route for the 192.168.3.0/24 network so it can route the traffic to node2.
  • Once the traffic reaches node2, node2 passes it to POD2 through cbr1.

With a CNI such as Calico, the CNI is responsible for adding the routes for the cbr (docker bridge) subnets on all nodes.

Que.32. How does POD to service communication work?
Ans: PODs are ephemeral and their IP addresses can change, so to communicate with PODs in a reliable way a service is used as a proxy or load balancer. A service is a Kubernetes resource that causes a proxy to be configured to forward requests to a set of PODs. The set of PODs that receives the traffic is determined by the selector, which matches labels assigned to the PODs when they were created. K8s provides an internal cluster DNS that resolves the service name.

A service uses a different internal network than the POD network. The netfilter rules injected by kube-proxy redirect a request that is actually destined for the service IP to the right POD.
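A minimal sketch of the selector mechanism described above; the service name, label, and ports are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # resolvable via cluster DNS, e.g. backend.default.svc
spec:
  selector:
    app: backend         # traffic goes to every POD carrying this label
  ports:
  - port: 80             # the service (cluster IP) port
    targetPort: 8080     # the container port on the selected PODs
```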

Que.33. How does a service know about healthy endpoints?

Ans: The kubelet running on the worker node is responsible for detecting unhealthy endpoints. It passes that information to the API server, which eventually passes it on to kube-proxy, and kube-proxy adjusts the netfilter rules accordingly.
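Endpoint health is typically driven by a readiness probe defined on the POD: while the probe fails, the POD is removed from the service's endpoints. A hedged sketch, in which the image and the /healthz path are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web                 # matched by the service's selector
spec:
  containers:
  - name: web
    image: nginx:1.25        # assumed image
    readinessProbe:          # failing PODs stop receiving service traffic
      httpGet:
        path: /healthz       # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```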

In the above section, we discussed Network Based Kubernetes Interview Questions and Answers. Please write in the comments if something else needs to be added or if you have any doubts regarding this.

All the above Kubernetes interview questions and answers are really helpful. Please write in the comments with any suggestions, or ask anything else on this topic: Kubernetes Interview Questions and Answers. We will surely come up with solutions to all your queries regarding this topic.

To know more, read AWS Cloud and Cloud Computing.
