
A quick introduction to the concept

Kubernetes is an open source platform that orchestrates and manages containers. It was originally developed by Google, released in 2014, and donated to the Cloud Native Computing Foundation (CNCF) in 2015, which has maintained it since. The project has grown at a rapid pace and continues to gain interest from developers.

It is a common misconception that developers must choose between Docker and Kubernetes. In practice, the two are complementary: Docker builds and runs individual containers, Kubernetes orchestrates them, and they are most effective when used in tandem.

While containers are extremely useful on their own, they become challenging to manage and scale as their numbers grow. This is where Kubernetes comes in.

Kubernetes automatically deploys, monitors, and scales containers according to user-supplied specifications. First, here is some common Kubernetes terminology:

  • Node: The host that a container runs on. It is a worker machine that can be either a virtual machine (VM) or a physical machine.
  • Pod: The basic management unit in Kubernetes. A pod can contain one or more containers, and its attributes are declared in a YAML manifest.
  • Cluster: A group of nodes managed by the control plane (historically called the master node).
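As a concrete illustration, a pod's attributes are declared in a YAML manifest. This is a minimal sketch; the pod name, labels, and image are placeholders, not values the article prescribes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod          # hypothetical pod name
  labels:
    app: my-app
spec:
  containers:
    - name: web
      image: nginx:1.25   # any container image works here
      ports:
        - containerPort: 80
```

Saving this as `pod.yaml` and running `kubectl apply -f pod.yaml` asks the cluster to schedule the pod onto one of its nodes.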

Kubernetes has become the de facto standard for deploying containerized applications at scale in private, public, and hybrid cloud environments. The largest public cloud platforms (AWS, Google Cloud, Azure, IBM Cloud, and Oracle Cloud) all provide managed Kubernetes services.

Nodes

A node is the smallest unit of computing hardware in Kubernetes. It is a representation of a single machine in your cluster. In most production systems, a node will be either a physical machine in a datacenter or a virtual machine hosted on a cloud provider.

Thinking of a machine as a “node” allows us to insert a layer of abstraction. Now, instead of worrying about the unique characteristics of any individual machine, we can instead simply view each machine as a set of CPU and RAM resources that can be utilized. In this way, any machine can substitute any other machine in a Kubernetes cluster.
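Because Kubernetes treats each node as a pool of CPU and RAM, a container can simply declare how much of those resources it needs, and the scheduler places it on any node with room. A minimal sketch of such a declaration (the specific values are illustrative, not recommendations):

```yaml
# Fragment of a container spec: the scheduler will place this
# container on any node with at least 250m CPU and 64Mi of
# memory free, and will cap it at the stated limits.
resources:
  requests:
    cpu: "250m"      # a quarter of one CPU core
    memory: "64Mi"
  limits:
    cpu: "500m"
    memory: "128Mi"
```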

Pods

Unlike other systems you may have used in the past, Kubernetes doesn’t run containers directly; instead it wraps one or more containers into a higher-level structure called a pod. Any containers in the same pod will share the same resources and local network. Containers can easily communicate with other containers in the same pod as though they were on the same machine while maintaining a degree of isolation from others.

Pods are used as the unit of replication in Kubernetes. If your application becomes too popular and a single pod instance can’t carry the load, Kubernetes can be configured to deploy new replicas of your pod to the cluster as necessary. Even when not under heavy load, it is standard to have multiple copies of a pod running at any time in a production system to allow load balancing and failure resistance.

Pods can hold multiple containers, but you should keep them small where possible. Because a pod is scaled up and down as a unit, all of its containers must scale together, regardless of their individual needs, which can waste resources and inflate your bill. Pods should therefore remain as small as possible, typically holding a main process and its tightly coupled helper containers (these helper containers are commonly called "side-cars").
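A two-container pod with a main process and a log-shipping side-car might be sketched as follows; the container names, images, and log path here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: app                # the main process
      image: my-app:1.0        # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper        # tightly coupled side-car
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}             # scratch volume shared by both containers
```

The shared `emptyDir` volume is what lets the side-car read the main container's log files, illustrating the "same machine" resource sharing described above.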

Cluster

In Kubernetes, nodes pool together their resources to form a more powerful machine. When you deploy programs onto the cluster, it intelligently handles distributing work to the individual nodes for you. If any nodes are added or removed, the cluster will shift around work as necessary. It shouldn’t matter to the program, or the programmer, which individual machines are actually running the code.

Setup

Setting up a Kubernetes cluster is relatively simple and straightforward, assuming the application has already been containerized using a tool such as Docker. The combination of containers and Kubernetes makes for a reliable, scalable application.

Steps:

  1. Build a container image with Docker and create a cluster
  2. Set deployment parameters (in a YAML file) to deploy the application
  3. Monitor or scale the application as needed
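Step 2 above is typically expressed as a Deployment manifest. This hedged sketch assumes an image named `my-app:1.0` from step 1 and asks Kubernetes to keep three replicas running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # Kubernetes keeps three pod copies running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0    # hypothetical image built in step 1
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, this covers step 2; the manual scaling in step 3 is then a one-liner such as `kubectl scale deployment my-app --replicas=5`.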

Overall, Kubernetes is an extremely useful tool for applications that need to scale quickly and be monitored closely. It takes the containers that isolate a program and uses them to reliably deploy, monitor, and scale an application for a large audience.

Experimenting

To experiment with Kubernetes locally, Minikube will create a small virtual cluster on your personal hardware. If you're ready to try out a cloud service, Google Kubernetes Engine has a collection of tutorials to get you started.


Further reading