Now, why is Kubernetes such a good fit for DevOps teams? Here’s the connection: Kubernetes shines as a container orchestration platform, managing the deployment, scaling, and networking of containerized applications. Containers are lightweight packages that bundle an application with its dependencies, allowing faster and more reliable deployments across different environments. Users leverage Kubernetes for several reasons, which we explore below.
You can imagine that launching containers on your local machine or a development environment is not going to require the same level of planning as launching these same containers on remote machines, which could face millions of users. Problems specific to production will arise, and Kubernetes is a great way to address these problems when using containers in production:
High availability is the central principle of production. This means that your application should always remain accessible and should never be down. Of course, this is an ideal; even the biggest companies experience service outages. However, you should always bear in mind that this is the goal. Kubernetes includes a whole battery of features to make your containers highly available by replicating them across several host machines and monitoring their health on a regular and frequent basis.
When you deploy containers, the accessibility of your application will directly depend on the health of your containers. Let’s imagine that, for some reason, a container running one of your microservices becomes inaccessible; with Docker alone, you cannot guarantee that the container is automatically terminated and recreated to restore the service. With Kubernetes, this becomes possible: Kubernetes helps you design applications that repair themselves by performing automated tasks such as health checking and container replacement.
If one machine in your cluster were to fail, all the containers running on it would disappear. Kubernetes would immediately notice that and reschedule all the containers on another machine. In this way, your applications will become highly available and fault tolerant as well.
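To make this concrete, here is a minimal sketch of a liveness probe, the health-checking mechanism Kubernetes uses to decide when a container should be replaced. The image name, port, and /healthz endpoint are placeholders, not values from this book’s example application:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:1.0.0
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz        # hypothetical health endpoint exposed by the application
        port: 8080
      initialDelaySeconds: 10 # give the application time to start before probing
      periodSeconds: 5        # probe every five seconds

If the probe fails repeatedly, the kubelet kills the container and starts a new one, which is exactly the self-healing behavior described above.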
Deployment management is another of these production-specific problems that Kubernetes solves. The process of deployment consists of updating your application in production to replace an old version of a given microservice with a new version.
Deployments in production are always complex because you have to update containers that are actively responding to requests from end users. If you get this wrong, the consequences can be severe: your application could become unstable or inaccessible, which is why you should always be able to quickly revert to the previous version of your application by running a rollback. The challenge of deployment is that it needs to be performed in the way that is least visible to the end user, with as little friction as possible.
Whenever you release a new version of the application, there are multiple processes involved, as follows:
Updating the Dockerfile or Containerfile with the latest application info (if any).
Building a new container image and pushing it to your container registry.
Deploying new containers based on the newly built image.
Refer to the following image to understand the high-level flow in a typical scenario (please note that this is an ideal scenario because, in an actual environment, you might be using different and isolated container registries for development, staging, and production environments).
Figure 1.9: High-level workflow of container management
IMPORTANT NOTE
The container build process has absolutely nothing to do with Kubernetes: it is purely a matter of container image management. Kubernetes comes into play later, when you have to deploy new containers based on the newly built image.
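As a quick illustration of that image-management part, the build-and-push step might look like the following; the registry hostname and image tag here are hypothetical:

$ docker build -t registry.example.com/myapp:1.0.0 .
$ docker push registry.example.com/myapp:1.0.0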
Without Kubernetes, you’ll have to run all these operations, including docker pull, docker stop, docker rm, and docker run, on the machine where you want to deploy a new version of the container. Then, you will have to repeat this operation on each server that runs a copy of the container. It should work, but it is extremely tedious because it is not automated. And guess what? Kubernetes can automate this for you.
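For the sake of illustration, here is a sketch of the manual sequence you would have to repeat on every server; the image name and container name are placeholders:

$ docker pull registry.example.com/myapp:1.0.0
$ docker stop myapp && docker rm myapp
$ docker run -d --name myapp registry.example.com/myapp:1.0.0

Multiply this by dozens of servers and frequent releases, and the need for automation becomes obvious.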
Kubernetes has features that allow it to manage deployments and rollbacks of Docker containers, and this will make your life a lot easier when responding to this problem. With a single command, you can ask Kubernetes to update your containers on all of your machines as follows:
$ kubectl set image deploy/myapp myapp_container=myapp:1.0.0
On a real Kubernetes cluster, this command will update the container called myapp_container, which is running as part of the application deployment called myapp, to the 1.0.0 tag on every single machine where myapp_container runs.
Whether it must update one container running on one machine or millions of containers spread across multiple datacenters, this command works the same. Even better, it does so while preserving high availability.
Remember that the goal is always to meet the requirement of high availability; a deployment should not cause your application to crash or cause a service disruption. Kubernetes is natively capable of managing deployment strategies such as rolling updates, which aim to prevent service interruptions.
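As an illustrative sketch, a rolling update strategy can be tuned directly in a Deployment manifest; the replica count and thresholds below are arbitrary example values, not recommendations from this book:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate     # replace pods gradually instead of all at once
    rollingUpdate:
      maxUnavailable: 1     # at most one replica may be down during the update
      maxSurge: 1           # at most one extra replica may run temporarily
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0.0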
Additionally, Kubernetes keeps a history of the revisions of each deployment and allows you to revert to a previous revision with a single command. It’s an incredibly powerful tool: an entire cluster of Docker containers can be updated or rolled back with one command.
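Assuming the same myapp deployment as in the earlier example, inspecting the revision history and rolling back look like this:

$ kubectl rollout history deploy/myapp
$ kubectl rollout undo deploy/myapp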
Scaling is another production-specific problem, one that public clouds such as Amazon Web Services (AWS) and Google Cloud Platform (GCP) have made widely accessible. Scaling is the ability to adapt your computing power to the load you are facing, again to meet the requirements of high availability and load balancing. Never forget that the goal is to prevent outages and downtime.
When your production machines are facing a traffic spike and one of your containers is no longer able to cope with the load, you need to find a way to scale the container workloads efficiently. There are two scaling methods: vertical scaling, which gives an existing container more computing power (CPU and memory), and horizontal scaling, which creates additional replicas of the container to share the load.
Docker is not able to solve this problem alone; however, when you manage Docker with Kubernetes, it becomes possible.
Figure 1.10: Vertical scaling versus horizontal scaling for pods
Kubernetes can manage both vertical and horizontal scaling automatically. It does this by letting your containers consume more computing power from the host or by creating additional containers that can be deployed on the same or another node in the cluster. And if your Kubernetes cluster is not capable of handling more containers because all your nodes are full, Kubernetes will even be able to launch new virtual machines by interfacing with your cloud provider in a fully automated and transparent manner by using a component called a cluster autoscaler.
IMPORTANT NOTE
The cluster autoscaler only works if the Kubernetes cluster is deployed on a supported cloud provider (a private or public cloud).
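To sketch horizontal scaling in practice, here is a minimal HorizontalPodAutoscaler; the target names and thresholds are illustrative assumptions, not values mandated by the book:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp             # the deployment to scale
  minReplicas: 2            # never scale below two replicas
  maxReplicas: 10           # never scale above ten replicas
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80  # add replicas when average CPU usage exceeds 80%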
These goals cannot be achieved without a container orchestrator. The reason is simple: you cannot afford to perform these tasks by hand. You need to embrace DevOps culture and agility and automate these tasks so that your applications can repair themselves, be fault tolerant, and be highly available.
Just as you must be able to scale out your containers or cluster, you must also be able to decrease the number of containers if the load starts to fall, adapting your resources to the load whether it is rising or falling. Again, Kubernetes can do this, too.
In a world of millions of users, ensuring secure communication between containers is paramount. Traditional approaches can involve complex manual configuration. This is where Kubernetes shines: it lets you declare network policies that control which containers are allowed to talk to each other, as in the sketch below.
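As an illustrative sketch (the app labels and port are hypothetical), a NetworkPolicy restricting traffic so that only frontend pods may reach a backend could look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend          # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # only pods labeled app=frontend may connect
    ports:
    - protocol: TCP
      port: 8080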
Managing access to container resources in a production environment with multiple users is crucial. Kubernetes empowers secure access control through Role-Based Access Control (RBAC): you define roles that grant fine-grained permissions on cluster resources, then bind those roles to users or service accounts, as sketched below.
This multi-layered approach of using user roles and service accounts strengthens security and governance in production deployments.
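Here is a minimal sketch of that mechanism; the role name, namespace, and service account are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader          # hypothetical role allowed to read pods
rules:
- apiGroups: [""]           # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: myapp               # hypothetical service account used by the application
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io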
While containers are typically stateless (their data doesn’t persist after they stop), some applications require persistent storage. Kubernetes solves this with Persistent Volumes (PVs) and Persistent Volume Claims (PVCs). A PV is a persistent storage resource provisioned by an administrator (e.g., a host directory or cloud storage), and applications request storage from it using PVCs. This abstraction decouples storage management from the application, allowing containers to leverage persistent storage without worrying about the underlying details.
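For illustration, a minimal claim might look like the following; the claim name and size are arbitrary placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
  - ReadWriteOnce           # the volume is mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi          # the amount of storage the application asks for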
Efficiently allocating resources to containers becomes critical in production to optimize performance and avoid resource bottlenecks. Kubernetes provides functionality for managing resources, most notably per-container resource requests and limits.
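A minimal sketch follows, assuming arbitrary example values rather than recommendations:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:1.0.0
    resources:
      requests:             # the minimum the scheduler reserves for the container
        cpu: 250m
        memory: 256Mi
      limits:               # a hard cap the container cannot exceed
        cpu: 500m
        memory: 512Mi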
We will learn about all of these features in the upcoming chapters.
Should we use Kubernetes everywhere? Let’s discuss that in the next section.
Kubernetes has undeniable benefits; however, it is not always advisable to use it as a solution. Here, we have listed several cases where another solution might be more appropriate: