The promise of the public cloud
Public cloud infrastructure comes with a big promise: stop worrying about hardware and rely on the near-infinite, on-demand scalability of shared public resources. Gone are the days when developers had to wait for sysadmins to purchase and configure new servers before they could run their applications. Public cloud providers like AWS, GCP, and Azure remove this constraint and offer almost unlimited scalability whenever it is needed.
This not only allows teams to cope with events like Black Friday but also enables them to spin up new environments whenever needed, e.g., to validate an urgent hotfix on its way to production or to test the functionality in a feature branch. Today, this can all be done without blocking shared environments like dev and staging for the rest of the team.
Current situation: Many teams are stuck with dev and staging
Are teams prepared to leverage the potential of flexible scaling when it comes to the ad-hoc creation of environments? We talked to about 500 developer teams in Europe and the US over the last 12 months and found that a significant number of them are not. The main reason is environment-specific config that is built into the code. We saw two different kinds of implementations.
The most problematic approach we observed is building a different container for each environment, with that environment's config hardcoded into the container. First of all, this approach violates the integrity of any testing setup: tests performed on the container for the staging environment do not prove that the container for production also works as expected. The tests would need to be repeated for the production version - a step that is too often skipped because it adds complexity and takes time. Secondly, any additional environment requires a new container with that environment's config built into it. None of the teams we talked to went through this effort. Instead, they limited themselves to two or three environments and struggled with blocked or broken environments in many situations.
Another approach we saw is to hardwire the environment-specific configuration in one large config file and switch between configurations with an environment variable. While this approach preserves the integrity of tests along the entire process, it also limits the number of environments to the ones listed in the configuration file. Spinning up a new environment requires a change to that config file, which - in practice - leads to the same result as the approach above: teams limit themselves to a fixed number of environments.
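A minimal sketch of this second pattern (the file layout, environment names, and variable name APP_ENV are assumptions for illustration): one config module lists every known environment up front, and an environment variable selects the entry. Any environment not in the list cannot be spun up without a code change.

```python
import os

# Hypothetical config module: every environment must be listed here in advance.
CONFIGS = {
    "dev":     {"db_host": "dev-db.internal",   "log_level": "DEBUG"},
    "staging": {"db_host": "stage-db.internal", "log_level": "INFO"},
    "prod":    {"db_host": "prod-db.internal",  "log_level": "WARNING"},
}

def load_config():
    env = os.environ.get("APP_ENV", "dev")
    try:
        return CONFIGS[env]
    except KeyError:
        # A brand-new environment (e.g., for a feature branch) fails here
        # until someone edits this file and redeploys.
        raise RuntimeError(f"Unknown environment: {env}")

os.environ["APP_ENV"] = "staging"
print(load_config()["db_host"])  # stage-db.internal
```

The tests stay valid because the same artifact runs everywhere, but the hardcoded dictionary is exactly what caps the number of environments.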
From a theoretical point of view, this problem has been known for a while. The Twelve-Factor App manifesto (https://12factor.net/) - which many developers have at least heard of - states it in factor III, Config.
Environment variables: A simple solution
The solution to unlock the full potential of ad-hoc environments is straightforward: all environment-specific config needs to be externalized, e.g., through environment variables. This externalized configuration can easily be changed whenever a new environment is needed.
Wikipedia provides a good overview article on environment variables. Environment variables are part of the environment a process runs in: they are defined at the environment level and referenced in the code. A typical example is a logging level that controls how much information is written to standard output; it is usually set per environment, with more logging in development and staging and less in production. Environment variables can also point to environment-specific resources (e.g., databases, DNS).
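The logging-level example can be sketched in a few lines of Python (the variable name LOG_LEVEL is a common convention, not mandated by any standard):

```python
import logging
import os

# Read the desired log level from the environment; fall back to INFO
# so the application also starts when the variable is not set at all.
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)

logging.basicConfig(level=level)
logging.getLogger(__name__).debug("only visible when LOG_LEVEL=DEBUG")
```

The same image or binary now logs verbosely in dev and quietly in production, purely depending on what the environment sets.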
When all environment-specific config is externalized, creating a new environment ad hoc is very simple: just define the required environment variables and spin everything up.
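Under this model, a "new environment" is nothing more than a new set of variables handed to the process. A sketch (the host names, variable names, and branch name are made up):

```python
import os
import subprocess
import sys

# An ad-hoc environment for a feature branch is just a dict of variables;
# no config-file edit and no image rebuild are required.
feature_env = {
    **os.environ,                        # inherit PATH etc. from the parent
    "APP_ENV": "feature-login-fix",      # hypothetical branch environment
    "DB_HOST": "feature-login-fix-db.internal",
    "LOG_LEVEL": "DEBUG",
}

# Launch the app (a one-liner stands in for the real entry point)
# inside that environment.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_HOST'])"],
    env=feature_env, capture_output=True, text=True,
)
print(result.stdout.strip())  # feature-login-fix-db.internal
```

In Kubernetes the same idea maps to setting `env` entries in a Deployment or injecting a ConfigMap, so each namespace can become its own environment.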
We have been following this approach for some time and enjoy the flexibility it brings (e.g., when working with feature branches). We also see how much more efficiently teams work once they have onboarded to Humanitec with externalized environment-specific config. Our Continuous Delivery-as-a-Service solution enables these teams to create and spin up new environments in Kubernetes with the click of a button. These teams leverage the full potential of scalable public cloud infrastructure for the one thing that really matters: a better developer experience.
Summary: Why environment variables matter
Environment variables help you keep important configuration external to your application code. This is an important step if your team uses, or wants to use, container-based applications or continuous delivery as effectively as possible. With continuous delivery and containers, environments for development, testing, staging, and production are in constant flux, and this separation keeps your team and application flexible.
If you’re interested in finding out how to use environment variables with Kubernetes, read our hands-on guide, or find out how Humanitec can simplify managing environment variables for you.
Do you have more questions about environment variables? Humanitec's DevOps experts are happy to answer your questions during a free webinar!