Internal Developer Platforms (IDPs) built with Humanitec's products are a way to enable self-service for any software engineer. They allow engineers to operate their applications end to end, well beyond simply updating an image. IDPs form transparent abstractions that ensure easy maintenance and consistent documentation. This transparent abstraction means developers can choose whether to stay on the paved road or go off-path, down to the level of cluster networking and editing individual Helm charts.
In this article we will zoom in on specific workflows and look at how a senior engineer uses an IDP on a daily basis. Throughout the article you can find video interviews with Eugene, a DevOps veteran and senior backend engineer who has been using Humanitec for some time now. He’ll show how senior engineers deal with things like infrastructure management, application configuration, deployments and rollbacks, with and without an IDP.
Backend developers and IDPs
How involved the average backend developer is in the operations side of things varies by company type and size. While in a large corporation the backend role is focused on coding business logic and interfacing with a more established platform team, the situation is totally different in a smaller team. After analyzing 1,856 organizations, we found that in 34.2% of cases senior developers end up as the nannies of less experienced developers, basically taking over the operations role and deploying for them: the worst of DevOps for everyone involved. This is exactly what senior engineer Eugene experienced before working with an IDP built with Humanitec.
What becomes apparent is that backend developers play a complex role, with high expectations in terms of deliverables, while juggling tasks they pick up because others on their team can’t. So when we talk about self-service for backend developers, it’s a multi-faceted topic. For some, it’s simply the ability to self-serve infrastructure they previously had to request from other teams. For many, it’s finally being able to focus because the mid-levels, frontend developers and juniors are no longer interrupting them. They still change baseline YAML files and dive deep into Terraform at times, but they can actually get some code out the door.
Into the darkness: Jenkins scripts and roll-back weekends
It’s a situation so common, one almost doesn’t need to describe it. A senior backend engineer with DevOps knowledge gets hired into team X. The “DevOps guy” left without proper documentation. It takes the new backend engineer weeks to understand and untangle hundreds of undocumented Jenkins scripts. The setup is so broken that he gives up and scripts everything from scratch. Once he is done, the nightmare starts. Hired as a backend engineer who would occasionally update an image, he now spends entire weeks managing releases and has to be on call on weekends.
In our recent survey we found a striking difference between top performers and low performers when it comes to deployment frequency.
“Over 80% of top performers deploy at least several times per day and less than 10% of them deploy less than weekly. A stunning 22% of low performers say they deploy only “a few times per year” and less than 10% of them can deploy on-demand.”
Source: DevOps Setups. A Benchmarking Study
Spoiler alert: Eugene's team wasn’t a low performer; there are many teams with worse setups.
Into the light
Today Eugene works with Humanitec. All tools and infrastructure are wired up to Humanitec’s platform API. Jenkins is now restricted to building images. When a new image is built, Humanitec is notified, and automations deploy it into the right environment or spin up fresh PR environments.
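As a rough sketch, that notification step boils down to a single API call at the end of the CI pipeline. The endpoint path, payload and names below (`payment-service`, the environment variables) are illustrative assumptions, not verbatim Humanitec API documentation:

```bash
# Final step of the Jenkins pipeline: tell the platform that a new image exists.
# Endpoint path and payload are assumptions for illustration; check the
# Humanitec API reference for the exact contract.
curl -sS -X POST "https://api.humanitec.io/orgs/${HUMANITEC_ORG}/images/payment-service/builds" \
  -H "Authorization: Bearer ${HUMANITEC_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"image\": \"registry.example.com/payment-service:${GIT_COMMIT}\", \"branch\": \"${GIT_BRANCH}\"}"
```

From there, automation rules in the platform decide whether the build goes to an existing environment or is used to spin up a fresh PR environment.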
Instead of the unstructured script mess his team was running previously, today Eugene defines baseline YAML files and lets the platform API dynamically create the manifests at deployment time.
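To make that concrete, here is a minimal sketch of what such a baseline file might look like. The file name, workload name and field names are hypothetical, not taken from Eugene’s setup; the point is that the developer declares intent (container, variables, dependencies) and the platform generates the environment-specific manifests from it:

```bash
# Hypothetical baseline file for one workload, kept in the service repo.
# Field names are illustrative; the platform turns this intent into full
# Kubernetes manifests per environment at deployment time.
cat <<'EOF' > payment-service.baseline.yaml
name: payment-service
containers:
  main:
    image: registry.example.com/payment-service:latest
    variables:
      LOG_LEVEL: info
      DB_CONNECTION: ${resources.payments-db.connection_string}
resources:
  payments-db:      # resolved per environment, e.g. managed Postgres in prod, in-cluster in PR envs
    type: postgres
  dns:
    type: dns
EOF
```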
So, what happens at deployment time? Before you deploy, Humanitec provides a diff that shows exactly what changed compared to the previous deployment.
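In CLI terms, that check can look something like the sketch below; command names and flags are illustrative rather than exact Humanitec CLI syntax:

```bash
# Compare what is about to be deployed against what currently runs in staging.
# Command and flags are illustrative, not verbatim CLI syntax.
humctl diff deployment \
  --app payment-platform \
  --env staging
```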
Once approved, Humanitec first checks the resource dependencies you defined, to make sure they are available in the environment you want to deploy to. Those can be databases and DNS entries, but also secrets and certificates in your cluster. Eugene describes how painful it was in his previous setup to configure the certificates for all the servers. Now Humanitec takes care of all of this in the background.
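On the platform side, the team describes once how each resource type should be satisfied per environment. The snippet below is a hypothetical sketch of such a mapping, not an exact Humanitec resource definition format:

```bash
# Hypothetical mapping of abstract dependencies to concrete infrastructure
# per environment; in practice this lives in resource definitions maintained
# by the platform team and is resolved automatically at deployment time.
cat <<'EOF' > resource-defaults.yaml
postgres:
  production: { driver: terraform, module: rds-postgres }
  staging:    { driver: terraform, module: rds-postgres }
  pr-envs:    { driver: in-cluster, chart: bitnami/postgresql }
dns:
  all:        { driver: terraform, module: route53-record }
EOF
```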
Once the resources are available, Humanitec deploys the app into the fresh environment. All of this happens in seconds, without you having to touch your infrastructure directly.
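Put together, the loop for Eugene is roughly: review the diff, approve, trigger the deployment. A sketch of that last step, again with illustrative command names and flags:

```bash
# Deploy the current changes into a freshly created environment for a pull request.
# Command names, flags and the environment name are illustrative.
humctl deploy \
  --app payment-platform \
  --env pr-42 \
  --message "PR 42: new settlement endpoint"
```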
Humanitec also provides a detailed deployment history, and you can always roll back within seconds.
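A rollback is then just picking a previous deployment from that history and re-deploying it, for example (illustrative syntax, not exact CLI commands):

```bash
# List past deployments of the environment and roll back to a known-good one.
# Command names and flags are illustrative.
humctl get deployments --app payment-platform --env production
humctl rollback --app payment-platform --env production --to <deployment-id>
```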
Compared to his previous setup, for Eugene it is like gluing things together in a way that ensures the next colleague who joins can hit the ground running. This ease of maintenance and documentation by default, paired with the ability to simplify the setup for the whole team, is a huge relief. No more extra shifts for single deployments, even for more complex releases.
Every developer is able to update configs in a sustainable and consistent manner. If something goes wrong, they can simply roll back on their own, without Eugene’s help. If they need to debug their code, look at logs, get context on the infrastructure or spin up a new environment, it is now a simple CLI command.
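For day-to-day work this might look like the following, again as an illustrative sketch rather than exact CLI syntax:

```bash
# Illustrative examples of the self-service commands developers reach for.
humctl logs --app payment-platform --env staging --workload payment-service   # tail workload logs
humctl get resources --app payment-platform --env staging                     # which infra backs this environment?
humctl create environment pr-42 --app payment-platform --from staging         # spin up a fresh environment
```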
Eugene particularly enjoys the control and flexibility he has now gained, which were previously impossible to achieve with a self-scripted setup, as he demonstrates in the video below. One example is partial releases. By forming a meta-model of the application that encompasses all of its workloads and dependencies and can be dynamically composed per environment, Humanitec allows even the most complex roll-outs in seconds.
You can select a sub-set of microservices, target the environment you want to roll out to and deploy. The platform will update the infrastructure and configs of the affected services, create a fresh set of manifests for the entire application and inject environment-specific resource dependencies as secrets into the containers at run-time.
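A partial release can then be expressed as little more than "these workloads, that environment". The following is a hypothetical sketch of such an invocation; workload names and flags are illustrative:

```bash
# Roll out only two of the application's workloads to production; the platform
# regenerates manifests for the whole app and wires in the right dependencies.
humctl deploy \
  --app payment-platform \
  --env production \
  --workloads payment-service,settlement-worker
```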
Another highlight for him is infrastructure management. Adding an in-cluster Redis or a RabbitMQ instance takes one command. Adding any other cloud service or even on-prem resources is equally a single command: the default boilerplate defined by the team is executed, all through one API.
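Under the same assumptions about command names, that "one command" might look like this:

```bash
# Add an in-cluster Redis dependency to a workload; the team's default
# boilerplate (chart, sizing, credentials wiring) is applied behind the API.
# Command, flags and resource name are illustrative.
humctl create resource redis-cache \
  --app payment-platform \
  --workload payment-service \
  --type redis
```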
Does the Platform restrict backend engineers?
The question alone makes Eugene smile. “There is nothing you cannot change in Humanitec. I’m basically choosing what level of abstraction my team members prefer.” Be it the UI, CLI or API, low-level changes to baseline YAMLs or Terraform. Because it’s API-based, you can kubectl into the cluster or ssh into the DB at any time and change the resource directly. The platform can deal with any underlying shift on the fly.
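That escape hatch is plain Kubernetes and SSH, nothing platform-specific. For example (namespace and host names are hypothetical):

```bash
# Nothing stops a developer from inspecting or changing resources directly.
kubectl get pods -n payment-platform-staging
kubectl edit deployment payment-service -n payment-platform-staging
ssh admin@db.staging.internal   # hypothetical host: direct access to the database VM
```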
This blog post is part of a whole series on how software engineering teams work with an Internal Developer Platform. Check out how Eugene's colleagues in frontend and QA are utilizing Humanitec to operate their apps in self-service.