The problem
While Kubernetes has become the de facto standard, rollouts can still be very tricky and even fail. Kubernetes is a technology for building cloud native platforms; it is not optimized for ease of use. Throwing Kubernetes at developers without the right guardrails and abstractions slows teams down significantly. Helm-based approaches to configuration management quickly blow up the surface area that operations teams have to maintain.
Long lead times, high rollout failure rates, ticket ops, and frustrated developers are often the consequence.
"Before using Score and the Orchestrator our delivery speed on Kubernetes was frankly a disaster. Now it’s smooth and 3X faster. "
Markus Schünemann - CTO Lano
How Humanitec products help you nail Kubernetes migration and usage
Shield complexity
What most teams get wrong is that they expose developers to the fully fledged complexity of Kubernetes right out of the gate. While it’s vital for developers to understand the context in which their workloads run, it’s unnecessary to expose all of that complexity upfront. This is where the idea of “layered abstractions” comes into play. Humanitec’s approach is to let developers choose how deep into the details they want to go.
The workload specification Score provides a unified interface that allows developers to describe their workload and its dependencies in an environment-agnostic way. Score feels like docker-compose and is an approach individual contributors master in 30 minutes. Rather than having to deal with dozens of config files per workload, developers use Score as the single config format for all workloads, in any environment.
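As a rough illustration (a minimal sketch; the exact keys, resource types, and placeholder outputs depend on your Score version and on the resource definitions your platform team has configured), a Score file could look like this:

```yaml
# score.yaml - minimal illustrative example
apiVersion: score.dev/v1b1
metadata:
  name: backend

containers:
  backend:
    image: registry.example.com/backend:latest
    variables:
      # Placeholders are resolved per environment when the workload is deployed
      CONNECTION_STRING: postgres://${resources.db.username}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}

resources:
  db:
    type: postgres
```

The same file is deployed unchanged to development, staging, or production; only the resolved values differ per environment.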

Reduce the # of config files required by 95% and drive standardization
The Score file is a single file that sits next to the workload source code in the repo. The developer describes their workload and its dependencies in an abstract way. With every deployment and git push, the Score file finds its way through the CI pipeline to the Platform Orchestrator. The Orchestrator interprets the Score file and identifies the context (“I’m deploying to an environment of type staging”). It then fetches the baseline configurations for the respective workload (think of them as empty baseline Helm charts) and creates “fresh” manifests for the target environment.
Below is an example of what such a “baseline Helm chart” could look like. It allows the organization to enforce specific labels and annotations, sidecars, minimum CPU allocations, or even specific variables. This level of config standardization keeps maintenance effort and error rates to a minimum.
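A minimal sketch of what such a baseline template could contain, assuming a plain Deployment chart; all names, labels, and values below are illustrative:

```yaml
# templates/deployment.yaml - illustrative baseline template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/name: {{ .Release.Name }}
    # Organization-wide label enforced on every workload
    example.com/cost-center: platform
  annotations:
    example.com/owner: platform-team
spec:
  replicas: {{ .Values.replicas | default 2 }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Release.Name }}
    spec:
      containers:
        - name: workload
          image: {{ .Values.image }}
          resources:
            requests:
              # Enforced minimum CPU and memory allocation
              cpu: 100m
              memory: 128Mi
        # Mandatory sidecar injected into every workload (illustrative)
        - name: log-shipper
          image: registry.example.com/log-shipper:stable
```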

The bottom line: this methodology lets developers deploy to Kubernetes by providing just the simple Score file. This simplicity makes adoption fast and easy. At the same time, it doesn’t take context away from the developer. It’s still absolutely clear how the final Kubernetes manifests are created, and developers can consume them at any point in time, either by downloading them or by using Humanitec’s GitOps approach, which places the manifests in a repository with every deployment.

Kubernetes is only the start
Humanitec’s rules-driven approach to platform orchestration encompasses much more than Kubernetes and compute. Using the same configuration structure, developers are able to request and configure resources inside their cluster as well as outside of it, e.g. a managed AWS RDS instance, Cloudflare DNS entries, or blob storage.
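As a sketch of how this looks from the developer’s side (the available resource types and how they are fulfilled depend on how the platform team has configured the Orchestrator), the resources section of a Score file could request in-cluster and out-of-cluster dependencies in exactly the same way:

```yaml
# score.yaml (excerpt) - illustrative resource requests
resources:
  db:
    type: postgres   # could be fulfilled by a managed AWS RDS instance
  dns:
    type: dns        # e.g. a Cloudflare DNS entry
  uploads:
    type: s3         # e.g. a blob storage bucket
```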

Enable Continuous Delivery
In addition to creating configurations dynamically, Humanitec offers close integration with your CI tools such as GitLab or Jenkins. This allows you to automate your deployment process, from building your container images to deploying them to your Kubernetes clusters.
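As a rough sketch of what such a pipeline could look like in GitLab CI (the deploy step and its script are illustrative placeholders, not the actual Humanitec integration):

```yaml
# .gitlab-ci.yml - illustrative sketch
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: alpine:3.19
  script:
    # Hypothetical step: hand the Score file and the new image tag to the
    # Platform Orchestrator, which generates and applies the manifests.
    - ./notify-orchestrator.sh score.yaml "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```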
Alternative interfaces
Humanitec’s products leave the interface choice to the developer on a workload-by-workload basis. Besides the code-based approach using Score, developers are free to use the CLI, UI, or API to manage Kubernetes and adjacent resources.
