Introduction
Dynamic Configuration Management (DCM) is the hot new thing in town. It was originally developed by platform engineers to drive standardization by design and to build golden paths that developers can consume with low cognitive load. Here is the original definition of Dynamic Configuration Management by its creator, Chris Stephenson:
“Dynamic Configuration Management (DCM) is a methodology used to structure the configuration of compute workloads. Developers create workload specifications, describing everything their workloads need to run successfully. The specification is then used to dynamically create the configuration, to deploy the workload in a specific environment. With DCM, developers do not need to define or maintain any environment-specific configuration for their workloads.”
This is in contrast to a static approach, where the developer needs to know and maintain the environment-specific configuration in advance of any deployment. For example, the developer would need to define and maintain a separate database connection string for each of the development, staging, and production environments before the workload is ever deployed.
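To make that concrete, a static setup typically carries one config file per environment that differs only in a few values. A minimal sketch (file names and values below are invented):

```yaml
# config.development.yaml
database:
  connection_string: "postgres://dev-db.internal:5432/orders"
---
# config.staging.yaml
database:
  connection_string: "postgres://staging-db.internal:5432/orders"
---
# config.production.yaml
database:
  connection_string: "postgres://prod-db.example.com:5432/orders"
```

Every new environment, service, or dependency multiplies these files, and each one has to be kept in sync by hand.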
Why Dynamic Configuration Management
DCM has a wide range of advantages over its static equivalent. With DCM, developers only need to operate a single file per service, which heavily reduces cognitive load while still allowing them to choose their level of abstraction. They simply describe what their workload requires to run, and the optimal resource is matched or created. Because the final configs are generated with every deployment, the number of files a team needs to maintain and operate drops by up to 95% (for example, an app with ten services and their dependencies across four environments requires 300+ config files in the static approach versus ten files in the dynamic approach). Configs generated with each deployment are also highly standardized, which enables a wide range of new capabilities such as auditability, spinning up new environments on demand, and adding services or resources seamlessly into the architecture. Finally, it heavily reduces change failure rate and security incidents.
But let’s go beyond buzzwords and explain how to enable DCM using Score (7k+ stars on GitHub) and Humanitec’s Platform Orchestrator.
Understanding the basics
DCM simply means that developers describe how their workload relates to resources in an abstract, environment-agnostic way. The format in which they describe this relationship is called a workload specification, such as Score. The specification is generalized and works across environments, which means it doesn't contain enough information to configure the workload and its resources by itself. To get to executable configurations, we need to apply the workload specification to configuration baselines (for app and infra configurations) and generate the final configs depending on the context of the deployment (for instance, service A into environment B). This can be done by using a Platform Orchestrator.
In order to grasp this implementation of DCM, we need to understand the following things:
- What a workload specification is and how it works
- The different types of configuration baselines
- What “deployment context” means
- What a Platform Orchestrator is, and how it dynamically generates and executes the final configurations
What is a workload specification and how do I use it?
The Score Specification is a developer-centric definition that describes how to run a workload. As a platform-agnostic declaration file, score.yaml presents the single source of truth on a workload's runtime requirements and can be utilized with any container orchestration platform or tooling. In a dynamic setup that uses Score as the workload specification, the repository of every workload contains the following files:
- Service code
- Dockerfile
- pipeline.yaml
- score.yaml
If we have a look at a very simple Score file, it looks as follows:
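A minimal sketch, assuming the current Score spec version; the service name, image, and variable names are placeholders:

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: hello-world                   # placeholder service name
containers:
  hello:
    image: registry.example.com/hello-world:latest   # placeholder image
    variables:
      # Resolved per environment at deployment time from the matched resource.
      CONNECTION_STRING: "postgresql://${resources.db.username}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}"
resources:
  db:
    type: postgres                    # abstract dependency: "a Postgres database"
```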
You might observe that while this file describes the relationship of the service to a database, the relationship is abstract. It doesn't point to, say, a specific RDS database, but states the general fact that the service depends on a database of a certain type, in this case Postgres. The advantage is that the file stays the same, regardless of the environment it's deployed into.
What types of configuration baselines are there?
In addition to the service repositories, an organization running a dynamic setup maintains certain default configurations. These are mostly centralized and maintained by platform teams or senior developers, and contain:
- Workload profiles: Application configuration baselines that contain things like CPU minimum allocations, labels, and annotations. They carry the information necessary to create the final application configuration. Think of them like empty Helm charts.
- Resource definitions: Baselines that define how to wire up existing resources, or create new ones, using Infrastructure as Code tooling such as Terraform, Pulumi, Crossplane, or Humanitec Drivers.
- A list of available resources and their matching criteria: This determines what resource to create or match in which context. Such matching criteria may look like this (see the sketch below):
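A simplified sketch, following the general shape of a Humanitec resource definition; the id is illustrative:

```yaml
id: postgres-cloudsql                      # illustrative definition name
type: postgres                             # the resource type requested in score.yaml
driver_type: humanitec/postgres-cloudsql   # the driver that wires up the resource
criteria:
  - env_type: production
  - env_type: development
```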
The Platform Orchestrator interprets this matching as follows: if the workload specification requires a DB of type Postgres and the context matches the criteria “Environment Type = production or development”, then use the driver humanitec/postgres-cloudsql to wire the workload to an existing DB in a Cloud SQL instance.
What is a Platform Orchestrator?
The Platform Orchestrator is responsible for interpreting the workload specification in a specific context. This happens at deployment time. The context is used to determine where the workload should be deployed, how the application configurations are created, and how to resolve/create its dependent resources.
Humanitec’s Platform Orchestrator sits post CI and is integrated with the image registry, secrets managers, and cloud or on-premise accounts. Apart from generating config files and resolving/creating infrastructure, the Platform Orchestrator can also deploy and act as a CD system. It can alternatively just be used to create/resolve and match, and hand over the executable files to a dedicated CD provider such as ArgoCD.
What does “deployment context” mean?
Humanitec’s Platform Orchestrator dynamically creates configuration files based on the deployment context. The context can be as simple as the name of the environment (e.g., "development" or "production") or include other attributes such as application name, region, or organization name. The Platform Orchestrator can derive the context from API calls or from tags passed on by any CI system.
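As an illustration, the context resolved for a single deployment might boil down to a handful of attributes like these (a hypothetical sketch; all values are invented):

```yaml
# Hypothetical deployment context
org_id: my-org
app_id: order-service
env_id: development
env_type: development
region: eu-west-1
```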
How does the Platform Orchestrator create and execute config files?
Humanitec’s Platform Orchestrator creates the executable config files by following an “RMCD” execution pattern:
- Read: Interpret the workload specification and the deployment context.
- Match: Identify the correct configuration baselines to create application configurations, and identify which resources to resolve or create based on the matching context.
- Create: Create the application configurations; if necessary, create (infrastructure) resources, fetch credentials, and inject them as secrets.
- Deploy: Deploy the workload into the target environment, wired up to its dependencies.
In simple terms, we’re enabling an “asynchronous” contract between the individual developer and the platform team: the developer describes what the workload needs, and the platform team defines, once and centrally, how those needs are fulfilled in each context.
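A rough sketch of who owns what in that contract (purely illustrative, expressed as YAML for readability):

```yaml
# Purely illustrative summary of the DCM contract
developer_owns:
  - score.yaml            # "my workload needs a Postgres"
platform_team_owns:
  - workload_profiles     # application configuration baselines
  - resource_definitions  # how to wire or create each resource type
  - matching_criteria     # which definition applies in which context
platform_orchestrator:
  at_deployment_time: combines both sides into executable configs
```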
Example requests
If we want to understand what happens in more detail, let’s dissect how such a request works step by step. In a first example, we deploy to an environment of type development. The Platform Orchestrator reads the context and looks up which resources are matched against it. It checks whether they already exist (in this case there is an RDS DB we can wire the workload up to) and determines how the final app configs for this service should look. It then creates the files, injects the dependencies, and at this point either deploys directly or hands over to a separate CD system.
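Conceptually, this turns the abstract score.yaml from earlier into a concrete config for this one environment. A hypothetical resolved snippet (all values are invented; the real output depends on your workload profile):

```yaml
# Hypothetical output for env_type: development (values invented)
containers:
  hello:
    variables:
      # Resolved from the matched RDS database; the credential itself is
      # injected via the configured secrets manager, not stored in the file.
      CONNECTION_STRING: "postgresql://svc_user:<secret-ref>@dev-db.abc123.eu-west-1.rds.amazonaws.com:5432/orders"
```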
Let’s repeat the same procedure but spin up a new environment. With the context set to “ephemeral”, the Platform Orchestrator will again interpret the workload specification. It will realize that the Postgres doesn’t exist yet and that it should create one using a specific Driver. The Platform Orchestrator will then create the configs, inject the dependencies, and deploy.
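One way to model this is a second resource definition whose criteria match the ephemeral environment type. A sketch, under the assumption that the matched driver provisions a fresh database per environment:

```yaml
id: postgres-ephemeral                     # illustrative definition name
type: postgres
driver_type: humanitec/postgres-cloudsql   # assumption: provisions a new DB for the environment
criteria:
  - env_type: ephemeral
```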
An interesting case is when the developer sends a request the system doesn’t yet know how to fulfill. This is where many approaches fail, but not DCM. Because everything is repository-based, the approach allows developers to extend the set of available resources or customize them to their liking. Let’s play through the scenario where a developer needs an ArangoDB, but this isn’t known to the setup so far. By adding a resource definition to the general baselines of the organization, the developer can easily extend the setup in a way that can be reused by the next developer.
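For example, the developer could contribute a definition along these lines, backed by Humanitec’s generic Terraform driver. This is only a sketch: the resource type name, repository URL, module path, and inputs are all assumptions:

```yaml
id: arangodb-dev                   # illustrative definition name
type: arangodb                     # new resource type, reusable by the next developer
driver_type: humanitec/terraform   # generic IaC driver
driver_inputs:
  values:
    # Hypothetical Terraform module that provisions an ArangoDB instance
    source:
      url: https://github.com/my-org/iac-modules.git   # hypothetical shared repo
      path: modules/arangodb
criteria:
  - env_type: development
```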
Summary
Dynamic Configuration Management drastically reduces the complexity of modern cloud-native setups. It drives standardization by design and makes working with these setups easy for operations and developers alike. Because DCM is a fully Git-based approach, it is relatively easy to transform a static setup into a dynamic one. If you’re already on Helm charts, the transition simply requires you to separate out the environment-specific variables, set up the config baselines, and define the matching criteria.
Humanitec offers Score and Platform Orchestrator as the ideal pairing to power your dynamic setup, and enable you to build a dynamic Internal Developer Platform.
To see this in action, start a free trial and check out our tutorial on how to deploy a workload with Score and Humanitec.