
Goodbye stone age: Technical breakthroughs in enterprise software development (2/3)

This is the second article in our series on the future of enterprise software development. It focuses on the four technology trends underpinning the new revolution in enterprise software development.

In the first article of our series on the future of enterprise software development, we at Humanitec traced the “monolith-to-microservice” shift that the industry has witnessed over the last couple of decades. Drawing analogies with ancient Rome and claiming that we are still in the midst of the Victorian Age of enterprise software development, we argued that with APIs, we now have the digital equivalent of aqueducts and railways at our disposal.

Against this background, our second article sheds some light on the very “fabric” - i.e. the underlying technology - that these aqueducts and railways are made of. Every fundamental shift evolves from a series of changes in technology. For example, the rise of Salesforce was only possible due to the modernization of, and significantly lower costs of, cloud infrastructure. With more and more capital and human resources flowing into this sector, we are already seeing a significant uptick in the speed of enterprise software development. We predict that this will accelerate to a dynamic not thought possible just 12 months ago. But let’s rewind and first take a look at the trends that enable not merely a mind-shift, but a true technological revolution.

The four technology trends underpinning the new revolution in enterprise software development

There are four technology trends that make up the fabric of the rapid evolution we’re now witnessing in enterprise software development. These are cloud-infrastructures, microservices architectures, modern web-frameworks and state-of-the-art ways of documenting and versioning code and services. Let’s look at each of the four individually.

Cloud-infrastructures

As described in our first article, cloud infrastructure has increasingly made inroads into the enterprise market since the 2000s, when firms like Salesforce started providing on-demand, cloud-native enterprise software. This has prompted nothing less than a paradigm shift in an industry that had long been ripe for disruption.

Yet more recent developments in hosting infrastructure - from just the last two years - point to an even bigger shift yet to come. Packaging and running services in Docker containers is much more lightweight, faster, cheaper, and thus more convenient and secure. Performance increases as workload and size decrease, because distributed containers can now tap into a shared operating system and only contain the libraries necessary to run the code at hand. Containers also increase the interoperability between services of different stacks and business logic while remaining simple to maintain. A fundamental plus is that we can now “move” containers between different infrastructure types without having to change much in configuration.
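As a small illustration of how little a container needs to carry, here is a minimal Dockerfile for a hypothetical Python microservice (the service name and entrypoint are assumptions for the example, not a specific product):

```dockerfile
# Slim base image: the container ships only what the service needs
FROM python:3.11-slim
WORKDIR /app
# Install only this service's own dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The same image runs unchanged on a laptop, a VM, or any cloud
CMD ["python", "service.py"]
```

Because the image bundles its dependencies and leans on the host’s shared kernel, the same artifact can be moved between infrastructure types with essentially no configuration changes.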

Moreover, the biggest breakthrough of the last years is most probably Kubernetes, which helps us load balance, orchestrate and “govern” services packed into containers. It enables us to run and operate containers not only across multiple clouds (i.e. different infrastructures), but even on-premises. It allows each container to be scaled on its own, thereby adapting to increased traffic and CPU demand automatically. In a masterful counterattack on AWS, Google released Kubernetes as open source, thereby fuelling adoption and making sure GCP would be the first provider to offer Kubernetes natively at scale. We are now seeing all other providers follow this example, with AWS already shifting in the US. DigitalOcean and Azure are behind, but steadily building up capabilities. Kubernetes-native instances now allow us to allocate pre-configured containers to a hosting provider of choice, thus streamlining the DevOps process tremendously.
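To sketch what “scaling each container on its own” looks like in practice, here is a hedged example of a Kubernetes Deployment plus a HorizontalPodAutoscaler (all names, image references and limits are invented for illustration):

```yaml
# Hypothetical Deployment: runs three replicas of a containerized service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: customer-service
  template:
    metadata:
      labels:
        app: customer-service
    spec:
      containers:
      - name: customer-service
        image: registry.example.com/customer-service:1.0.0
        resources:
          requests:
            cpu: 100m
---
# Autoscaler: adds replicas automatically as CPU demand rises
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: customer-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: customer-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

The same manifest can be applied to any Kubernetes-conformant cluster - GCP, AWS, Azure or on-premises - which is exactly the portability the paragraph above describes.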

As a result, what used to take a team of developers weeks can now be done by a single developer - in minutes. This acceleration will not only create more time for innovation, but also bring down development and infrastructure costs significantly. One-click deployment will be the new normal, and we will see a significant number of startups entering this space offering “deployment automation”. Almost certainly, we will see them go mainstream as Kubernetes-based practices of continuous integration reach the mass of developers. Kubernetes is basically standardizing the deployment protocol, making it possible to relocate services at very low cost. For the moment, the cost of database migration still poses a threshold, but there is little reason to believe that this increased ease of transfer won’t eventually result in lower prices. This, in turn, will fuel the adoption of cloud-based systems further. It is yet to be seen how much that effect really kicks in, as there is a tendency towards consolidation in the market, with Google and IBM shifting to a multi-cloud strategy (see Anthos and other developments or acquisitions) - which one might read as “giving up” on establishing a dominant cloud themselves. This, together with lock-in strategies such as serverless setups like Lambda, might still lead to a situation with two dominant players called AWS and Microsoft - and stable prices.

Microservice architectures

Similar to the developments in cloud infrastructure, there has been far-reaching change in microservice architectures, which may be the most important ingredient of the next shift in enterprise software. Think of microservices as little isolated programs that run on their own and form an application in a swarm of other microservices. A field force management application, for instance, might be composed of microservices such as “time tracking”, “customer management”, “project management” and “calendar”.

Microservices offer a variety of advantages. A diverse set of services written in different programming languages can be combined into one app, and each can be tested and scaled individually. They are faster to understand and digest, as well as much more interchangeable. While microservices have been around for quite a while now, a couple of developments are currently helping them enter the mainstream. We have touched on Docker and Kubernetes, which are the backbone of inter-service communication, but a variety of additional developments now make microservices faster to develop, more lightweight to maintain, more convenient to version and more secure in general.

Another core ingredient of efficient microservice architectures is, naturally, APIs and efficient API routing. Great progress has been made in optimally calling several endpoints simultaneously and reducing latency. Further vital elements are service meshes, microservice orchestration, gateway layers and messaging brokers, which allow for both horizontal and vertical communication in an efficient manner. They basically ensure that when an application contains multiple microservices and the frontend client needs to make several requests to serve one customer request (to stay with our field force example: the frontend needs to show all customers that an installer serves in a given time frame, so we need to call both the customer and the calendar service across several endpoints), these calls can be parallelized and optimized, and regularly performed calls can be cached and filtered in order to decrease latency.
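The fan-out pattern above can be sketched in a few lines. This is a minimal illustration, not a real client: the customer and calendar services are stubbed with local coroutines, where a real system would make HTTP calls through a gateway:

```python
import asyncio

# Stub for the customer service: in reality an HTTP call to its API
async def fetch_customers(installer_id):
    await asyncio.sleep(0.1)  # simulated network latency
    return [{"id": 1, "name": "Acme Corp"}, {"id": 2, "name": "Globex"}]

# Stub for the calendar service
async def fetch_appointments(installer_id):
    await asyncio.sleep(0.1)  # simulated network latency
    return [{"customer_id": 1, "slot": "2019-07-01T09:00"}]

async def installer_schedule(installer_id):
    # Both backend calls run concurrently instead of sequentially, so
    # total latency is roughly the slowest call, not the sum of both.
    customers, appointments = await asyncio.gather(
        fetch_customers(installer_id),
        fetch_appointments(installer_id),
    )
    # Join the two responses into one view for the frontend client
    by_id = {c["id"]: c["name"] for c in customers}
    return [
        {"customer": by_id[a["customer_id"]], "slot": a["slot"]}
        for a in appointments
    ]

schedule = asyncio.run(installer_schedule(installer_id=42))
print(schedule)  # → [{'customer': 'Acme Corp', 'slot': '2019-07-01T09:00'}]
```

In a real architecture this joining would typically live in a gateway layer or backend-for-frontend, and the results of frequently repeated calls would additionally be cached.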

Modern web-frameworks

Web frameworks now run more efficiently and in a much more modular manner than ever before, allowing us to model and reuse frontends much faster. There have been seminal advances in JavaScript infrastructure, which not only facilitate fusing and isolating modules, but also infusing them into other frameworks. We are in the very midst of this shift, and I just want to mention Angular Elements as a popular example. Frontend development today goes way beyond mere CSS/HTML manipulation, allowing us to keep business logic on the client side and to make more use of the availability and speed of the local CPU. We have made great advances in caching, helping us build in-browser technology that still allows the application to work offline and sync the data back once it can reach the server. All of these developments allow for a much more modular approach to the way we retrieve data, which allows us to move seamlessly from desktop to mobile. Moreover, the strict separation of frontend and backend, with everything design-first, helps us reuse frontends and exchange the underlying backend at any time. We are now able to build user-centered systems that appear seamless and integrated to the user, but in reality derive data and logic from several underlying services and even systems. This is the perfect ingredient for driving modularity and disrupting an industry dominated by legacy enterprise software players.

State-of-the-art ways of documenting and versioning code and services

It might appear counterintuitive to name documentation and versioning as one of the key drivers of change, but I do believe it is. I am not just referring to the usual straightforward docs section inside an application, but specifically to versioning, version documentation and endpoint documentation. A big question in reusing other developers’ code is the ability to understand it, and to do so fast. Maybe the biggest flaw of monolithic and plugin-driven architectures - and, conversely, the big advantage of microservices - lies exactly here: understanding and digesting a small, isolated service is much more intuitive and faster.

It remains a challenge to dive into the code and framework usage of another developer, and helping teams do so faster and more accurately is a core task. For example, Swagger UI enables developers to describe the structure of their APIs in a machine-readable way that can then be displayed and digested in a human-readable format. This works both interactively and automatically, as Swagger generates client libraries in a number of languages and enables advanced services such as automated testing. Sufficient API documentation can be of great help when trying to understand services in more complex distributed systems. Another highly important, yet complex matter is versioning and version management, which we can now control in a much better and more transparent way through documentation automation and versioning through addition. For me, documentation also encompasses security and anomaly detection, as well as general monitoring and logging. Several players like Elasticsearch or Instana are currently moving into this space, doing a great job of analyzing messaging patterns between services and detecting anomalies when IP ranges or volumes go out of the ordinary. This is vital, as microservice architectures can pose a security vulnerability if not structured well.
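To make the idea of machine-readable endpoint documentation concrete, here is a hedged sketch of how an OpenAPI (Swagger) description of a hypothetical customer service might begin - paths, versions and schemas are invented for the field force example:

```yaml
openapi: "3.0.0"
info:
  title: Customer Service API   # hypothetical microservice
  version: "1.2.0"              # versioned alongside the code
paths:
  /customers/{customerId}:
    get:
      summary: Fetch a single customer
      parameters:
        - name: customerId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The customer record
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  name:
                    type: string
```

From a spec like this, Swagger UI renders interactive, human-readable docs, and client libraries can be generated in a number of languages - which is precisely what makes a foreign service fast to understand and reuse.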

Elementary change

As mentioned above, the trends we outlined enable two fundamental shifts by design. First, we need to make reusability simpler, not only within one stack or framework (as in plugin-driven architectures), but in microservice architectures of any stack. As outlined, this is made possible by microservice architecture and isolation, by documentation and by the evolution of container infrastructure.

Second, we need to increase automation, e.g. deployment automation and the automatic fusing of different microservice architectures into one application. We have touched on infrastructure such as Kubernetes that hosting providers now offer natively. As a result of these technical breakthroughs, we will enter a world where assembling an enterprise software solution individually - tailored to the actual needs of the client - is not only getting cheaper, but will also see developers sell and buy microservices on a huge scale. We thus believe that prices for custom software will drop significantly and that smart automation systems will generate the majority of the structure of these applications.

Not least, this will have far-reaching implications for the composition of development teams. DevOps as we know it will basically go out of the window. The same might well hold true for the majority of backend work, while the role of the frontend developer will change altogether. You can find some background on these developments in the article on testing in microservice architectures by Nils Balkow-Tychsen. Even UX will be touched significantly, as my colleague Joy Mwhinia outlines in her recent article. In our next article, and in light of what we have pointed out so far, we will show where we think enterprise software is heading, how the discussed technological developments fuel this change, and explain our 5-5-5 projection for the industry until 2023.