Kubernetes helps your team and business make changes to large-scale applications with little or no downtime. You can try new ideas, optimisations, and experiments quickly, staying ahead of your competition.

To understand why Kubernetes matters to modern developers and the businesses they work for, let's travel back in time.

A brief history of development

Not so long ago, developers built applications that ran on physical servers they generally had access to and maintained. If multiple applications competed for resources, the machine could underperform or, potentially, crash completely. One solution was to run each application on a separate server, but this was a time when machines were expensive, and underutilised machines were a waste of money and resources.

To solve this problem came virtualisation, which allowed developers to run multiple "virtual machines" (VMs) on one physical machine. Virtualisation provides isolation and security between each machine and the applications running in it, and presents itself as a cluster of machines that you can create and recreate relatively easily.

Many still use virtualisation today, with tools like VMware and Vagrant. But its main drawbacks are the speed of recreation, and that every VM runs an entire operating system, wasting machine resources on duplicated components.

Containers are similar to VMs, but are far more lightweight, as they share standard components, such as the operating system they run on, between them. Containers allow you to package applications into self-contained units with just what they need to run. You can then distribute, recreate, and scale a container more easily. Containers can still have their own virtualised hardware resources if needed, but their decoupled nature makes them portable and great for development workflows.

Conceptually speaking, containers first emerged in the late 1970s and solidified with FreeBSD Jails and Linux VServer in the early 2000s. When Docker emerged in 2013, containers were familiar, but thanks to clever packaging and marketing, the project pushed containers rapidly into the mainstream.

Due to their design, containers brought other benefits, which suited the growing trend towards microservice-based architectures that began around the same time Docker arrived. Containers bring flexibility to developers and the companies they work for. This flexibility allows for easier collaboration between teams who work on different application components, or team members who use different operating systems. It allows teams to take advantage of a hybrid-cloud approach to deployment, so they are not locked into the pricing or policies of one vendor, and can deploy where they want, when they want.

A missing piece amid the container enthusiasm was managing and automating (often called "orchestrating") them in development workflows. Orchestrating and managing containers can require a mixture of services, such as:

  • Service discovery: Helps each microservice in your application know what other services are running, and how to connect to them.
  • Load balancing: Manages demand on services and distributes traffic to keep the application stable.
  • Storage: Maintaining database or file state in a microservices-based application has always been challenging. Kubernetes handles a lot of the hard work for you by managing mounting, lifetime, and more.
  • Self-healing: Handles restarting containers that fail, or stopping containers that are not performing.
  • Secrets management: Handles sensitive information, such as passwords and configuration, that is essential for your application to run and that you need to manage securely.
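As a small taste of that last point, Kubernetes lets you declare a secret in a short manifest. The sketch below is hypothetical (the name and value are made up), and note that the value is only base64-encoded, not encrypted, so you should never commit a manifest with real credentials to source control:

```yaml
# A minimal, hypothetical Secret manifest.
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials   # hypothetical name
type: Opaque
data:
  db-password: czNjcjN0   # "s3cr3t", base64-encoded
```

Containers can then consume the value as an environment variable or a mounted file, without it ever being baked into the application's image.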

Kubernetes (often abbreviated to K8s) is an open-source system for automating the deployment, scaling, and management of containerised applications. It provides all these functionalities and more, replacing (or complementing) a myriad of alternative tools. Kubernetes blasted past other options for container orchestration and strengthened its hold on modern development practices. For anyone involved in keeping applications running, this means Kubernetes can help you keep a complicated application available, no matter the demand on it as a whole or on any of its services. Google originally developed Kubernetes based on what it learned building Borg (the system that runs its internal infrastructure, serving everything from Google Search to Maps to Gmail), so it has a reliable history of running applications at serious scale.

When not to use Kubernetes

Kubernetes is primarily designed for large-scale applications comprising many services. If you have a mostly monolithic application, or the demands on your application are predictable or low, then you probably don't need Kubernetes (yet).

Kubernetes is open by nature, and while many companies build SaaS products on top of it, by default it does not limit the application types you can run (if it runs in a container, Kubernetes runs it). Nor does it dictate (or provide) anything in the form of middleware (message buses, etc.) or monitoring solutions. It does not build or deploy your source code for you (by default). Finally, and most importantly, it operates at the container level, not the hardware level, so it does not provide any features for managing the actual machines.

Disclaimers aside, Kubernetes is still a tool worth learning, as there will come a time when your application or another application you're working on will need the power and flexibility Kubernetes offers.

Key Kubernetes concepts

In the world of containers and their orchestration, you will hear a lot of new terms, and a review of the terminology helps you understand the basic building blocks of Kubernetes.

A Kubernetes cluster consists of one or more nodes. In production deployments this is likely three or more; for local development, an entire cluster may run as one instance. Within each cluster is a master that runs three processes (kube-apiserver, kube-scheduler, and kube-controller-manager) handling the API, scheduling, and overall management. Every other node in the cluster runs two processes: the kubelet, which communicates with the master, and kube-proxy, which handles networking services.

Abstracted on top of a cluster are four key Kubernetes resources:

  • A Pod encapsulates an application container and the storage and networking resources associated with that container. A Pod may be one container or more than one tightly coupled container.
  • A Service exposes a set of Pods as a network service.
  • A Volume is a storage location that the containers in a Pod can share.
  • A Namespace provides a means for multiple users to work with virtual clusters on the same physical cluster.
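To give these resources a concrete feel, here is a minimal Pod manifest; it is a sketch with hypothetical names, running a single nginx container:

```yaml
# A minimal, hypothetical Pod manifest.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # hypothetical name
  namespace: default
  labels:
    app: hello           # labels let Services select this Pod
spec:
  containers:
    - name: web
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```

You could create this with `kubectl apply -f pod.yaml`, though in practice Pods are rarely created directly; they are usually managed by higher-level resources.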

On top of these core concepts are others that turn them into usable and functional services. These are typically defined in configuration files and managed by Kubernetes:

  • A ReplicaSet maintains a stable set of Pods needed by your Service.
  • A Deployment creates Pods and ReplicaSets, and manages rolling updates to them.
  • A DaemonSet is useful for Pods you need to run on all nodes, typically monitoring-type services that run alongside your core services.
  • A StatefulSet defines Pods that are part of a service requiring state, and gives you guarantees about their ordering and stable identity.
  • A Job defines one or more Pods that run to completion.
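Tying the two lists together, a Deployment manifest (again a sketch with hypothetical names) declares a desired number of Pod replicas and lets Kubernetes create and maintain the underlying ReplicaSet:

```yaml
# A minimal, hypothetical Deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment   # hypothetical name
spec:
  replicas: 3              # Kubernetes keeps three Pods running (self-healing)
  selector:
    matchLabels:
      app: hello           # must match the Pod template's labels
  template:                # the Pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Changing the image tag and re-applying the manifest triggers a rolling update, with Kubernetes replacing old Pods with new ones gradually.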

Next steps

In our next post, we'll look at how to assemble the concepts outlined above into a real-world microservices-based application that uses some of the features Kubernetes provides. If you're looking for a managed solution that takes Kubernetes, and adds even more, all behind one friendly interface, then take a look at what Humanitec has to offer.