The Inner Workings of an Internal Developer Platform

Internal Developer Platforms (IDPs) are everywhere. Puppet made them the #1 topic in its latest DevOps report. The self-service capabilities that IDPs give developers at elite engineering organizations, which have now been operating them for years, are what set those organizations apart from everyone else in development and deployment performance. These platforms have such an impact on overall productivity that the likes of GitHub, Zalando and Sport1 would never go back to a world without one. Ever asked a Google engineer whether they liked Borg? The answer tells you everything. Now these platforms are becoming mainstream. Time to dive deeper.

This article provides insights into the inner workings of an Internal Developer Platform, using the example of IDPs built with Humanitec. To make this more tangible, we will look at the case of a developer spinning up an environment and explain in detail what happens.

The core components 

Although every IDP is slightly different from the next, there are strong similarities in how they are structured and operated. They usually cover six key areas of functionality. The first three we’ll look at are primarily used by the ops or platform team to configure the platform itself and define golden paths for developers.

Infrastructure Orchestration

  • What it does: Lets the ops team wire up resources such as databases, clusters, file storage and DNS to the platform. It also lets them define rules governing which combinations of these resources are spun up in response to which requests from application developers. 
  • Why it matters: Enables true “you build it, you run it”, allowing developers to self-serve the tech they need. Frees ops from transactional tasks. 

Application Configuration Management

  • What it does: Allows ops to define golden paths and standards for how changes to application configurations are applied. Creates new manifests for every deployment, making the setup auditable and easy to maintain. 
  • Why it matters: Frees developers from fiddling with scripts and makes the setup scalable and reliable. It ensures consistency between environments and enables features such as spinning up fully provisioned environments dynamically.

Role-Based Access Control

  • What it does: Allows ops to define who has permission to ship which code or apply which changes to which service in any given environment.
  • Why it matters: For obvious security reasons. But it also allows teams to work with external contributors and grant them short-term access to selected parts of the delivery setup. 
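As an illustration, access rules like these can be modeled as a mapping from role and environment type to a set of permitted actions, with temporary grants layered on top for external contributors. This is a hypothetical sketch, not Humanitec’s actual permission model; all names, roles and dates here are invented:

```python
# Hypothetical RBAC sketch: rules map (role, environment type) to the set of
# actions that role may perform. These structures are illustrative only.
from datetime import date

RULES = {
    ("developer", "development"): {"deploy", "rollback", "create_env"},
    ("developer", "production"): set(),          # devs cannot touch prod directly
    ("release_manager", "production"): {"deploy", "rollback"},
}

# Short-term access for an external contributor, with an expiry date
TEMPORARY_GRANTS = [
    {"user": "contractor-1", "env_type": "development",
     "actions": {"deploy"}, "expires": date(2021, 12, 31)},
]

def is_allowed(user, role, action, env_type, today=None):
    """Return True if the role (or a still-valid temporary grant) permits the action."""
    if action in RULES.get((role, env_type), set()):
        return True
    today = today or date.today()
    return any(
        g["user"] == user and g["env_type"] == env_type
        and action in g["actions"] and today <= g["expires"]
        for g in TEMPORARY_GRANTS
    )
```

The useful property is that permissions live in one auditable place rather than being scattered across CI scripts and cloud consoles.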

The remaining three components are primarily used by the application developers to operate applications:

Deployment Management 

  • What it does: Creates manifests at deployment time, fires up the infrastructure orchestration and makes sure your code gets delivered. It lets you roll back and diff deployments and apply all sorts of automations. 
  • Why it matters: Introduces a reliable, repeatable, self-service deployment setup that leads to consistent environments.

Environment Management 

  • What it does: Allows developers to easily spin up fully provisioned environments on demand. It lets them manage these environments at scale, while making sure everything is consistent and easy to maintain. 
  • Why it matters: Eliminates waiting times due to blocked environments and allows developers to take code from idea to production with a high degree of ownership. 

Observability

  • What it does: Surfaces container logs and deployment errors. It connects applications and environments to Application Performance Monitoring (APM) services automatically. 
  • Why it matters: Drives Developer self-service and ownership by increasing visibility and surfacing issues. It reduces setup monitoring costs. 

If you look at these features you can already spot the “division of labour” that is introduced. The Ops team sets the “golden paths” and deals with the implementation details. Developers request resources, ship code and self-serve the resources they need in a standardized, scalable manner. In a way, IDPs are what makes it possible to truly follow the “you build it, you run it” paradigm. 

How they fit together 

We’ve understood the pieces; let’s see how they fit together. Below is a graphical representation of where such a platform fits into the general toolchain:

This example shows an IDP built with Humanitec, accepting containerized workloads as input and using Kubernetes as the underlying orchestrator. You can see that the IDP pulls the built artefacts in from the CI pipeline and deploys them into an environment using deployment automation, which in the simplest case means it swaps an image and updates the environment accordingly. That’s not a big deal; most setups can do that today. The key difference is the end-to-end self-service for developers. 

Developer Self-service

In the platform world, the developer writes code in an IDE, merges it via Git and runs it on an IDP. If you simply push a new version of your code, the IDP handles delivering it in the background. For the developer it’s really git-push-done. The IDP makes sure the target environment is updated, according to the rules that have been set. 

The fascinating parts of the developer experience kick in when anything beyond a simple new version of the code has to be updated. This includes evaluating which changes were introduced between deployments, rolling back, debugging deployments with all workloads in one place, spinning up a new environment, adding a database and much more. All of this is now possible through self-service, and whatever changes are applied can be easily maintained.

Spinning up an environment 

We’ve covered the core elements of IDPs and how developers use them on a daily basis. To understand how everything works together, let’s explore in detail what happens when a developer spins up a new environment and hits deploy. We will use Humanitec as an example. We also have a series of video tutorials that explain everything visually, if that helps. 

Our sample app 

We’ll first need an app to deploy into a new environment. The (very) simple app for our example looks like this:

Two services, one acting as a Frontend, the other as a Backend. The Frontend is exposed to the public internet through DNS provided by Route 53; the Backend serves an API consumed by the Frontend and stores data in a Postgres database. The application runs on a Kubernetes cluster in GKE. 

Configuring the application 

The app requires configuration so that the Frontend can talk to the Backend and the Backend to the database. It also requires container configurations. In short: everything except the code and the infrastructure. As discussed above, the goal of an IDP is to keep configurations easy to maintain and dynamic. Dynamic means it must be possible to take an app and start it up in a fully provisioned environment “dynamically”.

To keep the setup easy to maintain, the IDP introduces a “separation of concerns”: ops set baseline configurations that developers can apply changes to in a logged manner. At deployment, the IDP creates a set of fresh manifests.

To ensure the setup is dynamic, Humanitec’s configuration management strictly separates the environment-specific from the environment-agnostic elements of configuration.
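A minimal sketch of what this separation could look like, assuming a placeholder-substitution scheme: the workload definition is environment agnostic and references resources symbolically, while the concrete endpoints live in a per-environment table. The structure and names below are illustrative, not Humanitec’s actual format:

```python
# Illustrative split: the workload spec never mentions a concrete database or
# host; ${...} placeholders are resolved per environment at deploy time.
import copy

ENV_AGNOSTIC = {
    "workloads": {
        "frontend": {"image": "frontend", "env": {"BACKEND_URL": "${backend.url}"}},
        "backend":  {"image": "backend",  "env": {"DB_DSN": "${postgres.dsn}"}},
    }
}

ENV_SPECIFIC = {
    "qa-light": {
        "placeholders": {
            "backend.url": "http://backend.qa-light.svc",
            "postgres.dsn": "postgres://qa-light-db:5432/app",
        }
    }
}

def render(env_name):
    """Fuse the agnostic and specific parts by resolving ${...} placeholders."""
    resolved = copy.deepcopy(ENV_AGNOSTIC)
    values = ENV_SPECIFIC[env_name]["placeholders"]
    for workload in resolved["workloads"].values():
        for key, val in workload["env"].items():
            if val.startswith("${") and val.endswith("}"):
                workload["env"][key] = values[val[2:-1]]
    return resolved
```

Because the agnostic part never changes, the same application can be rendered into any number of environments just by adding a new entry of environment-specific values.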

With the application configured in this way, we can simply select the environment-agnostic elements of configuration and deploy them into a new environment. In the Humanitec UI this would look like this: 

What infrastructure in what state?

By specifying the “type of environment” you tell the platform what infrastructure should be provisioned or wired up for that particular environment.

Let’s say in our example we chose an environment of type “QA-light”. When wiring up the setup, the ops team defined a set of rules for how to provision these resources when developers request them. They did that using the infrastructure orchestration functionality. For example, for the environment type “QA-light” the ops team set the following rules:

  • Use a specific (existing) cluster, create a namespace and deploy to this namespace.
  • Create a new database of type “cloudsql”.
  • Create a sub-domain in Route 53.
  • If GitHub Actions signals that a new image has been built, update the image.

This information is obviously environment-specific! It is set in the background by the ops team and is the same for every environment of type “QA-light”. 
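In code, such rules could be expressed as plain data that the platform walks through at provisioning time. The schema, drivers and environment types below are purely illustrative, not Humanitec’s actual rule format:

```python
# Hypothetical encoding of per-environment-type provisioning rules, as an
# ops team might define them; resource and driver names are invented.
RULES = {
    "qa-light": [
        {"resource": "k8s-namespace", "params": {"cluster": "shared-qa-cluster"}},
        {"resource": "postgres", "driver": "cloudsql"},
        {"resource": "dns", "driver": "route53", "params": {"subdomain": True}},
    ],
    "production": [
        {"resource": "k8s-namespace", "params": {"cluster": "prod-cluster"}},
        {"resource": "postgres", "driver": "cloudsql", "params": {"tier": "ha"}},
        {"resource": "dns", "driver": "route53"},
    ],
}

def plan(env_type):
    """Return the ordered list of resources to provision for an environment type."""
    return [step["resource"] for step in RULES[env_type]]
```

The point is that a developer only ever names the environment type; everything the type implies is resolved from rules the ops team maintains in one place.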

Deploying into our new environment 

So we’ve spun up a new environment, specified its type and chosen which environment-agnostic configuration, in which version, should be used. If we hit “deploy”, this is what happens: 

  • At deployment time the platform fuses the environment-specific configurations for the desired environment (for instance, which resources are provisioned in the background) with the environment-agnostic information. 
  • From this information it creates a fresh set of manifests representing the application configurations. 
  • It follows the “recipe” configured by the ops team using the infrastructure orchestration functionality to put the right resources into the desired state and wires them up to the application.
  • Once everything is in place, it serves the application in the environment to the developer and streams observability information such as container logs and the details required for debugging. 
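The four steps above can be sketched as a pipeline of small functions. Each stage here is a stub standing in for real manifest rendering and infrastructure provisioning, and all the names are invented for illustration:

```python
# Hypothetical sketch of the deployment flow; every function is a stub.
def fuse(specific, agnostic):
    """Step 1: merge environment-specific values into the agnostic config."""
    return {name: {**cfg, **specific.get(name, {})} for name, cfg in agnostic.items()}

def create_manifests(config):
    """Step 2: render a fresh set of manifests from the fused config."""
    return [{"kind": "Deployment", "workload": name, "config": cfg}
            for name, cfg in config.items()]

def provision(recipe, manifests):
    """Step 3: follow the ops team's recipe to bring resources to the desired state."""
    return [{"resource": r, "state": "ready"} for r in recipe]

def serve(manifests, resources):
    """Step 4: serve the application and expose observability data."""
    return {"manifests": manifests, "resources": resources, "status": "running"}

def deploy(env, env_specific, env_agnostic, rules):
    config = fuse(env_specific[env], env_agnostic)
    manifests = create_manifests(config)
    resources = provision(rules[env], manifests)
    return serve(manifests, resources)
```

Nothing in this flow is specific to the first deployment; the same pipeline runs on every deploy, which is what the next section builds on.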

And voilà, your app is up and running. 

In a graphical representation it looks like this:

What happened the first time we deployed the application is exactly what happens with every single deployment: a new set of manifests is created, the infrastructure is updated according to the rules ops specified and the deployment is served. 

This introduces a wide range of new capabilities. Because we have a freshly created set of manifests for each deployment, we can now diff deployments to understand which changes were introduced between versions. We can roll back between deployments, export them, share them and collaborate on them. We have a clear, end-to-end audit trail of every single change, in every single environment. 
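Because each deployment produces a complete, fresh manifest set, comparing two deployments reduces to a plain text diff of the rendered manifests. A small sketch using Python’s standard difflib (the deployment names and manifest shape are invented):

```python
# Diff two deployments by serializing their rendered manifests to stable,
# sorted JSON and running a unified text diff over the result.
import difflib
import json

def manifest_diff(old, new):
    """Return unified-diff lines between two deployments' rendered manifests."""
    old_text = json.dumps(old, indent=2, sort_keys=True).splitlines()
    new_text = json.dumps(new, indent=2, sort_keys=True).splitlines()
    return list(difflib.unified_diff(old_text, new_text,
                                     fromfile="deploy-41", tofile="deploy-42",
                                     lineterm=""))
```

Sorting the keys before serializing keeps the diff stable, so only genuine configuration changes show up between versions.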

Transformative Impact

Internal Developer Platforms change the way teams work. They allow you to reach a true “you build it, you run it” mindset with an entirely different ownership culture. An IDP reduces transactional conversations between ops and developers through automation and fosters focused conversations about what everybody can contribute to improve the overall workflow. 

Internal Developer Platforms have a significant impact on the productivity of both ops and development teams. If you have more than 15 developers on your team and you want to reach true DevOps, evaluating this category is probably a must.