The Berlin-based payroll software provider experienced hypergrowth during the pandemic and could only keep up by streamlining their GKE setup with Humanitec. They now deploy 4x faster.

About Lano

Lano is on a mission to help companies everywhere grow their global teams. They provide all the tools required to compliantly hire and pay a global workforce, in one smart platform. With Lano, companies can hire full-time employees in 150+ countries, manage a network of freelancers or contractors worldwide, and consolidate payroll for all their international offices.


Lano’s infrastructure and tooling setup

Lano’s main applications are hosted on GCP with GKE. They use Bitbucket for CI, CloudSQL instances for most of their databases, and the Docker Container Registry. Redis, Elastic, and MariaDB run in the cluster, and Route53 manages DNS. Before adopting Humanitec, Lano managed their application configuration as code with Helm.

Lano’s key challenges

As Covid hit last year and many teams shifted to partially or fully remote setups, Lano’s customer base exploded. Though already an established business at the time, they were suddenly growing their team by double-digit percentages month over month. Their infrastructure and technical setup was not prepared for this growth, which led to scaling issues, a poor developer experience, and a slowdown of the whole development process.

  • Complex deployment setup: at Lano’s scale, the GitOps-Helm combination was hard for developers to debug, maintain, and operate.
  • Ops bottleneck: key person dependencies slowed down overall delivery. Waiting times for databases, environments and other resources blocked development.
  • Prod outages: the deployment failure rate increased due to faulty dependencies on the test infrastructure.
  • Slow onboarding: getting new developers up to speed was extremely time-consuming and inefficient. At their growth rate, this tied up a lot of resources.
"As we rapidly scaled, Ops came under massive pressure. Since we enabled developer self-service with Humanitec, Ops aren't a bottleneck anymore and we are shipping features 4x faster!"
Markus Schünemann, CTO

Key improvements

By introducing Humanitec’s Internal Developer Platform, Lano’s Ops team was able to connect their whole infrastructure and get a clear overview of what was deployed where and by whom. They can now set clear baseline configurations and build golden paths for the rest of the engineering team. Developers autonomously self-serve the tech and tools they need in fully provisioned, dynamic environments.

  • Simplified and streamlined the deployment process across apps and environments leading to a 4X increase in deployment frequency.
  • Streamlined application configuration so developers don’t need to touch Helm charts anymore.
  • Enabled developers to spin up fully provisioned environments or other resources such as databases, file storage, and DNS without fiddling around in the GCP console.
“The speed at which we are deploying today would have frankly not been possible without Humanitec.”
Markus Schünemann, CTO

Humanitec eliminated bottlenecks and key-person dependencies, reduced pressure on operations, simplified maintenance, and cut waiting times. Deployment frequency skyrocketed and the change failure rate dropped.

  • 4x higher deploy frequency by enabling developer self-service.
  • Reduced waiting times by providing what developers need in real time.
  • Reduced MTTR by enabling selective rollback.
  • Lower change failure rate by testing against preview environments or rolling back.

Technical deep dive

Infrastructure orchestration before and with Humanitec

Before building their Internal Developer Platform with Humanitec, Lano’s setup was static. If a developer required a new infrastructure component, they had to either navigate the GCP console directly, use Terraform, or request the component from the Ops team. Deployments were done against static environments. With Humanitec, the operations team codified which infrastructure is provisioned in response to which developer request. For example: if a developer requires a Google Cloud SQL instance for a fresh environment, the Platform API calls an open source driver that delivers the resource (including the necessary side-car proxy) and wires it up by injecting the dependency variables into the application configuration.
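The flow described above can be sketched as follows. This is a minimal illustration of the pattern, not Humanitec’s actual API: the `ProvisionedResource` and `CloudSqlDriver` names, the `DRIVERS` registry, and the variable-naming convention are all assumptions made for the example.

```python
# Sketch of dynamic resource provisioning: a driver registry maps resource
# types to provisioning code, and the resulting connection details are
# injected into the application configuration as variables.
from dataclasses import dataclass


@dataclass
class ProvisionedResource:
    """Connection details a driver returns after creating a resource."""
    resource_type: str
    values: dict  # e.g. host, port, database name


class CloudSqlDriver:
    """Stand-in driver; a real one would call the GCP API and also set up
    the side-car proxy. Here we just return stub connection values."""

    def provision(self, env: str) -> ProvisionedResource:
        return ProvisionedResource(
            resource_type="postgres",
            values={"host": "127.0.0.1", "port": 5432, "name": f"app-{env}"},
        )


DRIVERS = {"postgres": CloudSqlDriver()}


def deploy(app_config: dict, env: str) -> dict:
    """Resolve each declared resource dependency for this environment and
    inject its connection values as prefixed variables."""
    resolved = dict(app_config.get("variables", {}))
    for dep in app_config.get("resources", []):
        resource = DRIVERS[dep].provision(env)
        for key, value in resource.values.items():
            resolved[f"{dep.upper()}_{key.upper()}"] = value
    return {**app_config, "variables": resolved}


config = {"image": "lano/api:1.0", "resources": ["postgres"], "variables": {}}
print(deploy(config, "staging")["variables"]["POSTGRES_NAME"])  # app-staging
```

The point of the pattern is that developers declare *what* they need (a Postgres database) while the Ops team owns *how* it is delivered (the driver), so the two sides no longer block each other.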

App config management

Before Humanitec, Lano used Helm and ArgoCD to sync changes to the cluster. This was initially stable but stopped meeting their needs once rolled out to the full fleet of microservices. Debugging deployments and understanding where faulty dependencies were introduced proved difficult and increased the change failure rate.

With Humanitec, the Ops team sets baseline templates that contain any default Lano wants to enforce. Developers can apply changes to these templates through the CLI or UI. At deployment time, the Platform API creates a fresh set of manifests including the environment-specific elements (databases, DNS, etc.), saves them to Lano’s repo on GitHub, and executes them against the GKE API. Manifests are versioned, increasing visibility and allowing for easy rollbacks and diffs. Thanks to automated variable injection and strict enforcement of parameterization, the faulty dependencies that previously led to failed deployments are almost eradicated.
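Strict parameterization is what makes the difference here: if an environment-specific value is missing, the deployment fails at render time rather than producing a broken manifest. A minimal sketch, assuming a simple `${VAR}` placeholder syntax (the template format and `render_manifest` helper are illustrative, not Humanitec’s actual template language):

```python
# Render a baseline template into an environment-specific manifest,
# failing fast on any unresolved dependency.
import re

BASELINE = """
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
spec:
  template:
    spec:
      containers:
      - image: ${IMAGE}
        env:
        - name: DATABASE_URL
          value: ${DATABASE_URL}
"""

PLACEHOLDER = re.compile(r"\$\{([A-Z_]+)\}")


def render_manifest(template: str, values: dict) -> str:
    """Substitute every placeholder; raise instead of emitting a manifest
    with a dangling dependency."""
    def substitute(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"unresolved dependency: {key}")
        return str(values[key])

    return PLACEHOLDER.sub(substitute, template)


manifest = render_manifest(
    BASELINE,
    {
        "APP_NAME": "lano-api",
        "IMAGE": "lano/api:1.0",
        "DATABASE_URL": "postgres://db.staging:5432/app",
    },
)
```

Because the rendered manifests are also versioned in the repo, a bad change can be diagnosed with a diff and reverted with a rollback instead of being debugged live in the cluster.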

“Our platform helps developers to self-serve, yet never restricts them. They can use the CLI or script everything in plain YAML. Devs are unblocked, Ops can focus. Game-changer.”
Markus Schünemann, CTO

Final setup

Lano leveraged the Platform API to enforce application configurations, add a RBAC layer and execute the correct drivers at the request of application developers. They used open source drivers to provision resources dynamically. Their developers can now self-serve the tech they need through the developer self-service UI and CLI.
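The RBAC layer in front of the drivers can be pictured like this. The roles, permission sets, and `self_serve` function below are assumptions made for the sketch, not Lano’s actual policy or Humanitec’s API:

```python
# Illustrative RBAC gate: check the requester's role, then execute the
# matching driver on their behalf.

ROLE_PERMISSIONS = {
    "developer": {"environment", "database", "dns", "file-storage"},
    "viewer": set(),  # read-only role: may not provision anything
}

# Hypothetical provisioner registry: resource type -> provisioning callable.
PROVISIONERS = {
    "database": lambda env: f"cloudsql://{env}-db",
    "dns": lambda env: f"{env}.example.internal",
}


def self_serve(role: str, resource_type: str, env: str) -> str:
    """Authorize the request against the role's permissions, then run the
    correct driver for the requested resource type."""
    if resource_type not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not provision {resource_type!r}")
    return PROVISIONERS[resource_type](env)


print(self_serve("developer", "database", "staging"))  # cloudsql://staging-db
```

This keeps self-service safe: developers get resources on demand, while the Ops team retains control over who may provision what.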

Timeline and evaluation

  • POC: 7 working days
  • Evaluated against a self-built setup with ArgoCD, estimated to take 6+ months to build at comparable scope and require an investment of approximately 500k+
  • Migration: 3.5 weeks
  • Onboarding per new developer: 30 minutes