At Humanitec, we have the luxury of sitting on tons of data from engineering teams all over the world. Last year, we started systematically tracking performance metrics and teams’ setup characteristics prior to adopting an Internal Developer Platform.
In another recent analysis, we looked at teams' setups before and after implementing an Internal Developer Platform (IDP) to estimate the impact of IDPs on their performance. But not every team is using an IDP (yet), so we decided to look at all the teams that are currently evaluating one.
It’s a great (though biased) snapshot of the health and state (and maturity) of DevOps in 453 engineering organizations in the US and EU. It gives us a unique perspective on the technologies they are using, their average performance metrics, and their DevOps focus points.
About the data
Trying to be fair statisticians, we always ask about significance and bias. And of course, this data is not a random set of engineering teams selected without any bias. These are teams that signed up on Humanitec.com with an interest in building an Internal Developer Platform. That intention alone indicates a certain sophistication and openness to innovation.
This data set excludes any team with fewer than 20 developers. Rather than approximating results by imputing missing (N/A) values, we removed records missing vital data, such as the number of developers. We correlated other attributes of the removed records to ensure that dropping these data points wouldn't skew the final analysis. We ended up with a total of 453 engineering organizations.
So yes, the data is biased to a certain extent, but it’s still a decent representation of the roughly 20,000 teams that match the above-mentioned criteria. We are happy to share the (anonymized) data set upon request.
Before we dive into the results, let’s look at the distribution of team sizes. You can see that the majority of teams fall into the 30-250 developer range (we’re counting application developers). 31.2% have 20-30 developers and 17.6% have more than 250.
This data reflects the usual distribution of teams that start thinking about Internal Developer Platforms. Jason Warner explains when to look at Internal Developer Platforms in his interview with us.
The results
Let’s turn to the results. Before actually looking at the KPIs and DevOps setup of the engineering teams in the data set, let’s look at the tools they chose to operate, starting with the configuration of their cloud environment.
Public cloud and Kubernetes beat them all
As one would expect, the largest group (48.4%) is now entirely on public cloud. 16.8% have chosen a multi-cloud approach, a considerable number given that many people believe this to be an edge case. 9.4% of the data set indicate they are “currently migrating,” while 25.4% remain on-premises without even planning a migration.
When it comes to cloud providers, pretty much everything is as expected. AWS remains the #1, with GCP in second place. Azure is third and OpenShift a close fourth. Only 4% of teams use none of the hyperscalers, a decent indication of just how strong the hyperscalers’ dominance already is.
When looking at Continuous Integration, Jenkins is still dominant, but AWS CodePipeline, GitHub Actions, and CircleCI are coming up strong too. The large “other” group tells a tale of just how fragmented this market is (just think of GitLab, Codefresh, Semaphore, Drone, etc.).
If we turn to orchestration, what was an open battlefield a few years ago now has a clear, dominant winner: Kubernetes, with over 58% share. From all we observe, its share is continually growing. To be frank, we were surprised to see how many workloads are still orchestrated with Docker Swarm.
Programming Languages and Technology Choices
The distribution of programming languages provides no “special news.” This is the distribution one would expect from a data set of enterprise engineering teams. JavaScript is still the clear number one.
When teams onboard with us, we also routinely ask which other tools they use today and plan to integrate into their Internal Developer Platform. What’s interesting is that while the market feels overcrowded with monitoring, IaC, database, and messaging offerings, the names that actually pop up in production are the same few over and over. We will provide a more thorough look at the usage and distribution of these tools as we gather more data.
Architectural Setup and Configuration Management
Now that we’ve looked at the tools and tech, let’s dive into the architectural setup of DevOps Land, starting with application architecture. As expected, a majority of teams choose a loosely coupled architecture, while 34.6% run monolithic applications.
Without judging which is the right approach (you can find tons of arguments for both), it would have been great to understand how many teams plan to migrate, or are already migrating, towards the other architecture. We plan to gather this information going forward.
The world runs on containers (both physical and virtual). 81.1% of all the engineering organizations surveyed are either on this technology or actively migrating to it. This tracks well with general market trends.
Now it’s getting juicy. These are data points we’ve long been looking for, and it’s super interesting to see them surfaced. A massive 80.9% of all teams surveyed store their application configurations in a version control system. We expected a much lower number.
The picture looks different for infrastructure configs, yet the figures are still higher than we thought. 35.8% of companies use the infrastructure as code methodology to keep a record of infrastructure state, whether for disaster recovery or other purposes.
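The core idea behind infrastructure as code is that the desired state of the infrastructure lives in version control and can be diffed against what is actually running. Here is a minimal, tool-agnostic sketch in Python of that reconcile loop; the resource names and the `current_state` snapshot are hypothetical, and a real tool would of course query live infrastructure instead:

```python
import json

# Desired infrastructure state, kept under version control (hypothetical resources).
desired_state = {
    "db-primary": {"type": "postgres", "size": "small"},
    "cache": {"type": "redis", "size": "small"},
}

# Snapshot of what is actually running (a real tool would query this live).
current_state = {
    "db-primary": {"type": "postgres", "size": "small"},
}

def plan(desired, current):
    """Compute the changes needed to reconcile current state with desired state."""
    to_create = sorted(set(desired) - set(current))
    to_delete = sorted(set(current) - set(desired))
    to_update = sorted(
        name for name in set(desired) & set(current) if desired[name] != current[name]
    )
    return {"create": to_create, "update": to_update, "delete": to_delete}

print(json.dumps(plan(desired_state, current_state)))
# The "plan" step is what makes version-controlled state useful for disaster
# recovery: the repository alone is enough to rebuild everything from scratch.
```

This is why keeping infrastructure configs in version control matters for recovery: the repository, not any running system, becomes the source of truth.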
Approach to DevOps
One of the most interesting things to look at when judging a team’s performance is the “degree of developer self-service.” Can every developer create a resource like a database or expose a service with a URL? Spin up a new environment? These questions are a good indicator of a setup’s health because they shed light on both the state of internal tooling and the level of standardization.
Teams that excel at this show much higher deployment frequencies, a higher degree of ownership from application developers, and perform better on innovation and general velocity. A stunning 30.5% of teams say they have self-service in place.
While the self-service numbers look positive at first glance, one cannot say the same for the way Ops tasks are managed. 19.6% follow the approach of “you build it, you run it” (although we have distinct opinions on what this should and shouldn’t mean). 44.0% report that some developers, usually the more senior ones, handle DevOps tasks.
They also say that these senior devs are entirely overwhelmed by requests from more junior teammates and are blocked in their day-to-day deliverables. Not cool at all. And 36.4% (!) of teams say they have a dedicated DevOps team to which app devs throw code “over the fence.”
DevOps metrics
We’ve reviewed the tools and tech, the architectural setup, and the DevOps setup. So how do these teams perform? To measure that, we’ll use some of the most common DevOps metrics. If you have specific questions about what these KPIs mean or need some more background, we’ve written extensively about the four key metrics.
Let’s first look at deployment frequency. It’s looking good! A solid 35.1% of teams can deploy on demand, and 14.6% deploy several times per day. 29.7% are still on weekly deployments, which implies that the remaining 20.6% deploy only monthly or “a few times a year.”
Lead time is next. How long does it take for code to go from commit to “running in production?”
It’s a great indication of organizational discipline and strength.
- Are tests well written?
- Is the test-coverage decent?
- Is deployment automation in place, and is the setup well tuned?
And again, we see a picture in which around half the group is faster than a day (some even minutes), while others have lead times longer than one month. A super interesting distribution.
Now the code is running in production. A product or system failure hits. How long does it take you to get everything back to normal operations?
We define this as “Mean Time To Recovery,” or MTTR. 34.7% of teams turn this around in less than an hour, and 48.1% in less than one day. 13.8% need between one day and a week. Those must be very, very long and draining weeks.
And finally, let’s look at the change failure rate: the percentage of deployments that fail and require a rollback. Again, this looks pretty healthy. For 78% of teams, this happens in less than 15% of deployments. We’re trying to imagine how frustrated our colleagues in the 4.5% of teams must be where 31-45% of deployments fail. And for 0.9% of teams, more than 45% of deployments fail.
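For reference, all four metrics discussed above can be derived from a simple deployment log. Here is a minimal sketch in Python, assuming a hypothetical record format with commit and deploy timestamps, a failure flag, and a recovery duration for failed deployments:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: commit time, deploy time, whether the
# deployment failed, and (if it failed) how long recovery took.
deployments = [
    {"committed": datetime(2021, 3, 1, 9), "deployed": datetime(2021, 3, 1, 17), "failed": False},
    {"committed": datetime(2021, 3, 2, 10), "deployed": datetime(2021, 3, 3, 12), "failed": True,
     "recovery": timedelta(hours=2)},
    {"committed": datetime(2021, 3, 4, 8), "deployed": datetime(2021, 3, 4, 9), "failed": False},
    {"committed": datetime(2021, 3, 8, 9), "deployed": datetime(2021, 3, 10, 9), "failed": False},
]

days_observed = 7  # length of the observation window, in days

# Deployment frequency: deployments per day over the observation window.
deployment_frequency = len(deployments) / days_observed

# Lead time for changes: average time from commit to running in production.
lead_time = sum((d["deployed"] - d["committed"] for d in deployments), timedelta()) / len(deployments)

# Change failure rate: share of deployments that failed and needed a rollback.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# Mean time to recovery: average recovery time across failed deployments.
mttr = sum((d["recovery"] for d in failures), timedelta()) / len(failures)

print(deployment_frequency, lead_time, change_failure_rate, mttr)
```

The record schema here is invented for illustration; in practice these timestamps come from the version control system and the CI/CD pipeline, which is one more reason keeping configs and deployments in tooling that logs them pays off.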
Summary
The 453 engineering teams from our analysis give us a super interesting glimpse into the state of their infrastructure, architecture, and KPIs. In our interpretation, it paints the picture of a “DevOps world” that is indeed strongly converging on a set of “gold standards.”
Containerization, Kubernetes, Config as Code, and Infrastructure as Code are going mainstream. And these things are more than buzzwords; they’ve proven effective. They are the new standard.
The data also reveals what we believe are huge gaps in the “cultural setup.” 14 years after Werner Vogels’ (AWS) mantra “you build it, you run it,” real continuous delivery and end-to-end ownership still haven’t found their way into many engineering organizations.
What all these teams do have in common: They see an Internal Developer Platform as a possible solution to streamline their operations and standardize their internal tooling.
The impact that introducing an IDP actually has is the subject of another study we conducted last year: The impact of Internal Developer Platforms.