Kubernetes is rapidly becoming the backbone of modern development infrastructure, giving development teams a cost-efficient way to quickly get the resources they need. As of its fifth anniversary, it has over 2,200 active contributors and is very popular in the open-source community.
Kubernetes is a container orchestration platform that was open-sourced by Google and is used to automate the deployment, scaling, and management of containerized applications. It has quickly become the de facto standard and is backed by companies like IBM, Amazon, Cisco, Red Hat, Microsoft, and of course Google. Are you making the most of your Kubernetes environment?
What is an environment?
Your environment is essentially everything except your software.
That covers a much bigger list than you might expect. The operating system and hardware generally don’t belong to you. Databases are often managed by other people, and you need to abide by their rules. The network usually doesn’t belong to you either.
In addition to the database, the data also doesn’t belong to you, and it changes depending on what environment you’re in. If you’re running a database on your machine, you’ll have a small amount of test data. If you’re running in production, you might have terabytes of data. So, the data differs from one environment to the next.
Security policies are another area that people often forget. On your laptop, you’re the admin with full control over the machine. But as soon as the software is running in production, you might have no more rights than a normal user of the system.
Why you need more than one environment
There are several potential environments you may use:
Local development: This is your laptop or workstation. It’s a machine you own and you can do whatever you want with it. No one else is working on this, so it’s a private, non-shared environment.
Shared development: Most companies have a shared development environment. If you are building in a team, you often want an environment where you can all collaborate and have a common version of the truth in terms of what your software does.
Staging: A staging environment will make things look like production as much as possible so that when you deploy to production you don’t get any surprises.
Production: This is your production environment where your users are.
Local dev, shared dev, staging and production are your standard environments.
Some other environments to consider include:
Feature development: Use this in a situation where developers might break releases. You still want to collaborate, but you probably don’t want to shift code around and try to build everything on each other’s machines. In this case, you use a feature environment where you can work in isolation, but collaboratively. This is still a shared environment, but you’re not affecting everyone else on your team.
Testing: You also might have a testing environment, especially for quality assurance. When doing quality assurance, you need a place to test, but staging is too busy and the shared development environment is probably too unstable for proper testing. In this case, you might want a separate testing environment.
User acceptance testing (UAT): You also may need a UAT environment. This is popular in delivering internal business systems and in regulated industries such as banks. It allows your business users to say, “yes, this software does what I need it to do, so we will allow you to put it into production.”
Disaster recovery: This is an environment that is often forgotten. If your main production environment goes down, you need a backup environment to recover into. For example, if you run your system in your own data center and a disaster occurs in or around the data center, you want to be able to switch to another data center somewhere else.
What are common cluster challenges?
Managing multiple environments is challenging. Here are a few things that can prove difficult when using separate Kubernetes clusters:
Database accessibility: If you have an external database, it needs to be accessible from both environments. The challenge is that you potentially need to configure your network twice, once for each cluster.
Availability issues: If you want to spin up a feature cluster, you have to wait for it. Even if the process is fully scripted and automated, the cluster isn’t available immediately; it can take several minutes to come up. Then you potentially have to configure your networks so that the resources of that cluster can access your database and third-party APIs.
Wasted resources: Most development environments have pretty low traffic, so you probably won’t make full use of your compute resources. You also have a lot of overhead, and you have to pay for a master node for each cluster.
Strict isolation requirements: Some organizations require strict isolation between environments. Separate clusters work well in this case, but they require a lot of extra work.
Namespaces
How can we work through these challenges when using Kubernetes clusters? Namespaces are the answer. We set up a cluster and define some namespaces. In this scenario, you have nodes that run workloads in one or multiple namespaces.
You have a cluster with two namespaces: Shared Development and Staging.
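As a minimal sketch, those two environments could be declared as namespaces in a single cluster. The names and labels below are illustrative assumptions, not taken from any specific setup:

```yaml
# Two environments sharing one cluster, each as a namespace.
# Names and labels are illustrative examples.
apiVersion: v1
kind: Namespace
metadata:
  name: shared-development
  labels:
    environment: shared-development
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
```

Applying this single file with kubectl apply -f gives you both environments at once, running on the same nodes.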
In this example, a team pushes Service A, currently at version 2, to the Shared Development environment. From there, we promote it to the Staging environment and push the new version 3 of Service A to the Shared Development environment.
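As a hedged illustration of that flow, the same Deployment manifest can be applied to either namespace, with only the namespace and image tag changing. The service name, registry, and tags here are hypothetical:

```yaml
# Service A deployed to the Shared Development namespace at version 3,
# while Staging still runs version 2 from the earlier promotion.
# "service-a" and the registry/image names are made up for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
  namespace: shared-development   # change to "staging" when promoting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      containers:
        - name: service-a
          image: registry.example.com/service-a:3   # staging would pin :2
```

Promoting a version is then just applying the same manifest to the Staging namespace with the image tag you have already verified in Shared Development.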
In practice, you may have some other processes running in your cluster (e.g. Cert-manager and Prometheus). These components run on the same nodes and serve both environments, which means you get better utilization of your resources. In this scenario, you don’t have to repeat Tiller for every single environment.
For the external resources, you have a database and third-party APIs, but you don’t need to configure the network multiple times; you configure it once for the single cluster. You can add more environments simply by creating namespaces, and you don’t have to configure anything external to your system.
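For example, spinning up an additional feature environment could be as simple as one more namespace; nothing outside the cluster needs to change. The name below is a made-up example:

```yaml
# A new feature environment: just another namespace in the same cluster.
# Existing network routes to the database and third-party APIs are reused.
apiVersion: v1
kind: Namespace
metadata:
  name: feature-login-flow   # hypothetical feature branch name
  labels:
    environment: feature
```

Compare this with waiting several minutes for a new cluster to come up and then re-configuring network access for it.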
Basics of managing a Kubernetes environment
Environments are things that are not your software. They’re an important part of software delivery, even if they’re fully automated. In the next blog post, How to manage environments for more than one service or app in a Kubernetes cluster, we’ll show you how to better manage the environments and introduce you to the Reverse Canary concept.
Learn more about how to manage your environments with Humanitec, your Internal Developer Platform.
Connect with our experts in a webinar or start a free trial to test it on your own.