In a previous post, we took a broad look at what Kubernetes (K8s) is, and why it might be useful to your development teams and business. In this post, we take some of those concepts and learn how to architect an application with K8s.
K8s has a dizzying array of configuration options, and the combinations of possibilities are nearly endless. You can define how every component of your application runs, and how components interact with each other and with external applications. You can scale services depending on your needs, or the needs of your application. You can upgrade services with no downtime, and switch between versions with no interruption.
You configure K8s with a series of YAML or JSON configuration files that you apply to your cluster with the kubectl tool. You can create a separate file for each purpose, or create longer files that combine multiple resources, with --- between each resource.
To understand some of the possibilities, let's look at an abstracted and simplified example application configuration and architecture. We won't explain every configuration element, but instead give you an idea of how everything fits together, along with the key points. For a full reference to the configuration API, read this guide.
This article doesn't go into how the application works or connects but focuses on the K8s integration. The application is based on the one used by the Humanitec getting started guide, and you can find the original code on GitHub.
This e-commerce example application has the following components:

- A frontend that customers interact with
- A backend service that serves the product data
- A database that stores the product data
There are three different resources for creating and scaling Pods with K8s:

- Deployments
- StatefulSets
- DaemonSets
You need to create one of these resources for each of the components in our application, and then create Services that tie them together. We use the first two in this article.
Within the declarations for these resources, you use ReplicaSets to define how many instances of a Pod you want running in a cluster. These three resources and ReplicaSets are tightly connected, and typically you create a ReplicaSet indirectly from one of these resources.
At the heart of many K8s resources is a container. Typically you pull the container image from a public or private registry such as Docker Hub. If you want to use the Dockerfiles from the Humanitec examples, you first need to build and push them to a registry.
The containers key defines the container used, and the replicas key defines the number of instances of the container K8s should run.
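For example, in a Deployment spec these two keys look something like the following (the container name and image are illustrative, not taken from the original code):

```yaml
spec:
  replicas: 2            # run two instances of the Pod
  template:
    spec:
      containers:
        - name: product-be
          image: registry.example.com/product-be:latest  # illustrative image name
```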
We pre-built and pushed images to Docker Hub for you to use:
Another useful concept is labels, which are key/value pairs attached to resources. Labels don't mean anything implicitly to K8s, but you use them to identify and relate resources.
There are no predefined labels. In the configuration examples in this article, we use labels to define the app every resource belongs to, and a tier to define the part of the application.
In other parts of the configuration, we use the selector key to filter other objects.
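As a sketch, labels and selectors fit together like this (the resource name is illustrative):

```yaml
# Labels attached to a resource:
metadata:
  name: product-be
  labels:
    app: product   # which application this resource belongs to
    tier: backend  # which part of the application it implements
---
# A Service then uses a selector to find the matching Pods:
spec:
  selector:
    app: product
    tier: backend
```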
The Postgres database needs to be part of a StatefulSet that looks something like the below:
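(The image tag, port, and storage size below are illustrative values, not taken from the original code.)

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  labels:
    app: product
    tier: database
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: product
      tier: database
  template:
    metadata:
      labels:
        app: product
        tier: database
    spec:
      containers:
        - name: postgres
          image: postgres:13          # illustrative version tag
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi              # illustrative storage size
```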
It defines many things, but most important among them: a name, postgres, that belongs to a Service; a volumeMount to store the data; and volumeClaimTemplates that provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner. The PersistentVolume Provisioner is another resource, but you can let K8s manage it for you.
As databases often need default configuration to run, the above also uses a configMapRef resource to define details such as exposed ports, user accounts, and initialization options. Here's the ConfigMap:
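(The keys below are the standard environment variables of the official Postgres image; the values are placeholders. In a real deployment you would put the password in a Secret instead.)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: product
    tier: database
data:
  POSTGRES_DB: products         # database created on first start
  POSTGRES_USER: postgres       # placeholder credentials for the example
  POSTGRES_PASSWORD: postgres
```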
The StatefulSet needs to belong to a Service that allows for multiple instances that other services can connect to:
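A sketch of such a Service, matching the labels used above (the port is the standard Postgres port; the NodePort type is an assumption for local testing):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: product
    tier: database
spec:
  type: NodePort
  selector:
    app: product
    tier: database
  ports:
    - port: 5432
      targetPort: 5432
```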
The NodePort type exposes the Service on each Node's IP at a static port, and K8s automatically creates a ClusterIP Service to which it routes the NodePort Service's traffic. You connect to the NodePort Service from outside the cluster by requesting `<NodeIP>:<NodePort>`. A ClusterIP exposes the Service on a cluster-internal IP, making the Service only reachable from within the cluster; this is the default ServiceType. K8s allows for complex internal and external proxying of traffic in a cluster; find more details in the documentation.
K8s provides a handful of methods for publishing services and service discovery. When running on cloud providers, you typically use LoadBalancer, but for local testing, you generally use NodePort.
Here's the backend Deployment, which requests two replicas from K8s. It also passes the environment variables that the backend service needs to run. Note that for the database we used envFrom to pass the environment variables, but here we add them inline with env. There is no functional difference between the two methods; we use both to show you the different options:
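(The image name, container port, and environment variable names below are illustrative, not taken from the original code.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-be
  labels:
    app: product
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product
      tier: backend
  template:
    metadata:
      labels:
        app: product
        tier: backend
    spec:
      containers:
        - name: product-be
          image: registry.example.com/product-be:latest  # illustrative image name
          ports:
            - containerPort: 8080                        # illustrative port
          env:
            - name: DATABASE_HOST                        # illustrative variable names
              value: postgres
            - name: DATABASE_PORT
              value: "5432"
```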
And the backend Service, which exposes the container's containerPort to the outside world via the targetPort key:
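(The port number and NodePort type below are assumptions, kept consistent with the Deployment sketch above.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: product-be
  labels:
    app: product
    tier: backend
spec:
  type: NodePort
  selector:
    app: product
    tier: backend
  ports:
    - port: 8080
      targetPort: 8080
```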
And the frontend Deployment, where we connect the front end to the backend via a PRODUCT_BE_SERVER_URL environment variable:
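(The image name, container port, and backend URL below are illustrative; PRODUCT_BE_SERVER_URL points at the backend Service by its name.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-fe
  labels:
    app: product
    tier: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product
      tier: frontend
  template:
    metadata:
      labels:
        app: product
        tier: frontend
    spec:
      containers:
        - name: product-fe
          image: registry.example.com/product-fe:latest  # illustrative image name
          ports:
            - containerPort: 3000                        # illustrative port
          env:
            - name: PRODUCT_BE_SERVER_URL
              value: http://product-be:8080              # illustrative backend URL
```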
And the frontend Service, which again exposes an external port to the container port:
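(Again, the port number and NodePort type are assumptions, kept consistent with the frontend Deployment sketch.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: product-fe
  labels:
    app: product
    tier: frontend
spec:
  type: NodePort
  selector:
    app: product
    tier: frontend
  ports:
    - port: 3000
      targetPort: 3000
```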
With all the files in place, you can now use the kubectl command-line tool to apply them to the cluster and get your application running.
We will cover the details of kubectl in a future article, but generally, to apply configuration files to a cluster, you use kubectl apply -f to apply an individual configuration file, and kubectl apply -k to apply a directory of configuration files.
You then use the kubectl get command followed by a resource type to see the status of your cluster, for example, kubectl get pods or kubectl get services.
How you open the frontend for the e-commerce application depends on how you are running K8s. For example, if you are using Minikube, you can start the service with minikube service product-fe.
Another option is to use Humanitec, and let us create your Kubernetes cluster for you with a series of simple (visual) steps. Get started by following our guide.
In this article you learned how to deploy a simple app with a frontend, a backend service, and a database to Kubernetes using kubectl. As you can see from the YAML files, Kubernetes introduces a certain complexity even with a simple setup. In contrast, Humanitec provides a simple and elegant way to master Continuous Delivery for Kubernetes-native applications. It helps teams increase development velocity by allowing engineers to easily spin up the tech they need on their own. Using Humanitec's abstractions and integrations, teams manage environments, simplify maintenance, and prevent cloud vendor lock-in.
Just try it out and deploy your first application within 10 minutes.
No credit card required