In software delivery, creativity and velocity suffer when developers spend their most productive hours dealing with tedious infrastructure and configuration management tooling. For example, two of the biggest developer banes are:
- Configuration management: where configuration (including secrets) should be stored, and how configuration files are maintained per environment.
- Resource management: how and where to provision the resources a workload needs.
While the DevOps movement helped break the wall of responsibilities between developers and operations, who does what has become somewhat muddled. If you ask some people, they may tell you that a developer should be able to provision and configure everything with complete control - and that’s fine. But most developers will not know or want to know everything there is to know about AWS and Kubernetes.
The aim was never to create a generation of infrastructure gurus; ultimately, developers and businesses want to deliver software of value quicker. This is hard to do when your developers are overwhelmed by infrastructure and configuration management tasks.
This article will discuss a workload-centric approach to deployment, which can help solve these challenges by enabling developers to deploy apps with minimal configurational complexity and maximum standardization. I’ll also cover the components that make it possible: a workload specification and a Platform Orchestrator.
Why take a workload-centric approach?
A workload-centric approach to deployment is a methodology used for developing and deploying apps. It uses a tightly scoped workload specification for use with a Platform Orchestrator, which has the following advantages:
- Shields developers from the configurational complexity of container orchestrators, infrastructure, and configuration management.
- Declares dependencies explicitly without worrying about how those dependencies are resolved.
- Tightly scopes the application to a workload, ensuring configuration never becomes unwieldy.
- Keeps configuration environment agnostic. The workload specification doesn’t include environment-specific configuration.
- Focuses on self-service, developer experience, and standardization by design.
- Treats every deployment like day zero. All configuration and infrastructure changes can propagate to all environments.
The above is achieved with a Platform Orchestrator, which enables developers to deploy an application and all its resource dependencies to all environments with a single workload specification.
A workload-centric approach to deployment benefits two broad personas:
Developer: When coding applications and deploying workloads with a workload specification and a Platform Orchestrator, a developer only needs to specify the resources they need and the basic workload configuration. This means:
- With workload-centric deployments, infrastructure engineers are encouraged to cater to developer needs. Any developer who wishes to get involved in the infrastructure domain can also do so. Complexity is optional, not imposed.
- Developers with a low level of infrastructure skills can be very productive.
Platform engineer: Configures the Platform Orchestrator to drive standardization by design and provide a vending experience for the developers and their workloads. A platform engineer configures where workloads are deployed, how resources are assigned or created, and in what context. This means:
- Infrastructure engineers deliver environment-aware ways to provision resources through dynamic Resource Definitions rather than single static resources. This is fundamentally a self-service mechanism.
- Seasoned engineers can focus exclusively on coding and deploying. If they do need to think about infrastructure, it only has to be done once rather than every time a new environment is created.
- It’s easier to ensure infrastructure is compliant, or to replace non-compliant setups, when you work with Resource Definitions. This is because every resource is provisioned in the same compliant way you initially defined. This is what we refer to as standardization by design.
The two main components of a workload-centric approach to deployment
When it comes to taking a workload-centric approach to deployment, there are two main components that should be understood:
Component 1: Workload
In the Kubernetes world, a workload is broadly described as “an application running on Kubernetes.” Kubernetes provides several built-in workload resources, such as Deployments, Jobs, and CronJobs, that manage how your applications run. We see workloads similarly, with one notable exception: we call a workload any application with a workload specification, whether running or not.
What is a workload specification?
A workload specification defines the configuration of a workload and all the resource dependencies it needs without needing to provide environment context. This helps mitigate configuration sprawl while reducing the cognitive load associated with provisioning infrastructure.
A workload repository contains the following:
Code + CI/CD Spec + Dockerfile + workload spec = workload repository
To make your repository a workload repository, it needs to contain the code of the service you want to deploy (preferably a 12-factor service), the Dockerfile that will build your container image, the continuous integration specification for your CI tool of choice, and the workload specification.
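A typical workload repository might therefore look like this (an illustrative layout; file and directory names will vary by project and CI tool):

```
orders-service/
├── src/                         # application code
├── Dockerfile                   # builds the container image
├── .github/workflows/ci.yaml    # CI specification (GitHub Actions shown as an example)
└── workload.yaml                # the workload specification
```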
The workload specification typically contains the following information:
- Container definition (required): This maps broadly to a container definition in Kubernetes; it includes variables, container image, health checks, etc. A workload can run more than one container but typically runs just one. When doing dynamic configuration management, the values and secrets of the environment variables defined here are inferred by the Platform Orchestrator via placeholders.
- Services (optional): analogous to Kubernetes services.
- Resource dependencies (optional): what the container needs, i.e., a database, another workload, etc.
The resource dependencies are an optional part of the specification but are the crux of workload-centric deployments, as we will soon see.
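To make this concrete, here is what a workload specification might look like, in the spirit of open specs like Score. All field names and values are illustrative assumptions, not a real schema:

```yaml
# Hypothetical workload specification (illustrative field names, not a real schema)
name: orders-service
containers:
  main:
    image: registry.example.com/orders-service:latest
    variables:
      # Placeholders resolved per environment by the Platform Orchestrator
      DB_HOST: ${resources.db.host}
      DB_PASSWORD: ${resources.db.password}
    liveness_probe:
      path: /healthz
service:
  ports:
    - port: 8080
resources:
  db:
    type: postgres   # how and where this is provisioned is decided elsewhere
```

Note what is absent: no environment names, no connection strings, no cloud-specific settings. The spec declares what the workload needs, never how it is fulfilled.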
Component 2: Platform Orchestrator
A Platform Orchestrator works like this:
- A platform engineer configures how resources are created and where the workloads are deployed based on context.
- A developer writes a workload specification defining the workload and the resources it needs.
- The Platform Orchestrator interprets the workload specification, provisions all the required resources, and deploys the workload based on the criteria defined by the platform engineer.
A Platform Orchestrator enables Dynamic Configuration Management (DCM). DCM lets development teams deploy workloads with their resources and configuration to all environments using a single workload specification.
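The resolution step at the heart of this flow can be sketched in a few lines of code. This is a minimal illustration under assumed, simplified data structures; it is not Humanitec’s actual API or algorithm:

```python
# Minimal sketch of how a Platform Orchestrator might match a workload's
# resource dependencies to Resource Definitions. All names and structures
# are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ResourceDefinition:
    res_type: str                                   # e.g. "postgres"
    driver: str                                     # how to provision or connect
    criteria: dict = field(default_factory=dict)    # e.g. {"env_type": "production"}


def match_definition(defs, res_type, context):
    """Pick the Resource Definition whose matching criteria fit the deployment context.

    More specific criteria (more matching keys) win over generic fallbacks.
    """
    candidates = [
        d for d in defs
        if d.res_type == res_type
        and all(context.get(k) == v for k, v in d.criteria.items())
    ]
    if not candidates:
        raise LookupError(f"no Resource Definition matches {res_type} in {context}")
    return max(candidates, key=lambda d: len(d.criteria))


defs = [
    ResourceDefinition("postgres", "terraform/aws-rds", {"env_type": "production"}),
    ResourceDefinition("postgres", "container/postgres"),  # generic fallback
]

# In production the RDS-backed definition wins; elsewhere the fallback applies.
prod = match_definition(defs, "postgres", {"env_type": "production"})
dev = match_definition(defs, "postgres", {"env_type": "development"})
```

The point of the sketch: the developer’s spec only says `type: postgres`; which driver runs, and with what inputs, is entirely the platform engineer’s decision, keyed off context.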
How does the Platform Orchestrator infer environment context?
The developer doesn’t need to include any environment-specific configuration in the workload specification. Instead, the environment-specific information is inferred from the following:
Resource Definitions: These are configured by the platform engineer and tell the Orchestrator how to provision resources. By that, we mean connecting to or creating a new resource. When creating a Resource Definition, a platform engineer configures the following:
- Resource drivers: The platform engineer provides configuration to either create a new resource or connect to an existing resource.
- Matching criteria: The platform engineer specifies where this resource definition provisions resources. For example, the name or type of environment.
When a developer creates a workload, the Platform Orchestrator looks at the matching criteria and uses the appropriate Resource Definition to provision the resource. The provisioned resource exposes configuration outputs that can be used to configure the workload.
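As an illustration, a Resource Definition might be modeled like this. The field names, module URL, and driver name are invented for the sketch, not an actual format:

```yaml
# Hypothetical Resource Definition (invented field names and values)
id: aws-rds-postgres
type: postgres
driver: terraform                # provisions a new instance via a Terraform module
driver_inputs:
  module: git::https://example.com/modules/rds-postgres
  instance_class: db.t3.medium
criteria:
  - env_type: production         # only matches production environments
```

A second definition for the same `type` with looser criteria (say, a containerized Postgres for development) would cover every other environment, which is how one workload spec deploys everywhere.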
Shared secrets/values: These are configured by the developer. Most of a workload’s configuration varies only in how it connects to resources; when additional values or secrets are required, shared secrets/values can be used. They can be configured globally or overridden per environment. Developer teams configure them in the Platform Orchestrator, but they are stored safely in a secret store like Vault.
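Shared values and secrets might be structured like this (an illustrative layout with invented keys):

```yaml
# Hypothetical shared values/secrets with a per-environment override
shared:
  values:
    LOG_LEVEL: info
  secrets:
    API_TOKEN: vault://secret/orders/api-token   # stored in Vault, referenced here
environments:
  production:
    values:
      LOG_LEVEL: warn   # overrides the global default in production only
```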
Conclusion
The workload-centric approach to deployment is a methodology that shields developers from infrastructure and configuration management complexities. By enabling self-service and standardization with an Internal Developer Platform, platform engineers free developers to focus on delivering software of greater business value, without being bogged down by tedious infrastructure tasks.
Since I’ve talked about clearly delimiting the responsibilities of developers and infrastructure teams, you might still ask, “Wouldn’t that create silos? And isn’t this the opposite of what DevOps has been trying to achieve all this time?” On the contrary, we are not drawing a hard line on what developers and infrastructure teams should be doing. We are simply defining clear responsibility boundaries to make the developer experience (DevEx) a priority, especially as we strongly believe that DevEx is key to delivering great software.
Humanitec’s Platform Orchestrator can help developers and platform engineers implement this methodology and realize these benefits. If you want to streamline your software delivery process and empower your developers, consider adopting the workload-centric approach to deployment today. Get started with Humanitec by signing up for our free trial.