For the last few months, we have been working alongside Red Hat to create a new reference architecture integrated with their OpenShift products and focused on delivering “Everything as Code” in GitOps mode. You can read the full whitepaper, which explores how it works and breaks down its key paradigms.
These paradigms can be summarized as follows:
- Golden paths should be automated end to end, with no requirement for tickets or manual work.
- The platform must oversee resource management throughout the entire lifecycle, ensuring a high level of standardization.
- The frontend is responsible for presentation, while the backend handles logic.
- Developers work against a sensible abstraction, provided by Score.
- Infrastructure and operations teams enforce strict standardization through well-defined resource definitions.
- Security is built in by design, with no push traffic allowed into the network from the delivery plane.
And most of all: embrace a philosophy of everything as code, where GitOps is prioritized, all operations are fully codified, and industry standards are followed.
This is what that reference architecture looks like.
Journey through the reference architecture
Let’s walk through how a user request moves through this architecture to see how everything integrates. Imagine an application developer requesting, “I need a Redis for my current workload.” This example might seem simple, but it actually demands an awful lot of backend logic. You’d need to generate new workload configurations, set up a properly configured Redis for the correct environment, pull the credentials, inject secrets, perform policy checks and sign-offs, and assemble everything before delivering it. So, let’s see how a user would initiate this request and what processes happen in the architecture.
Most likely, the user would prefer to remain "in code" and "in the editor" since that’s where they’re already working, and developers usually aren’t keen on using external interfaces. (For example, over 99% of API requests from users of the Humanitec Platform Orchestrator are code-based.)
Score - your platform-agnostic OSS workload spec
So, in this case, they’d probably open the Score file (which provides an abstract representation of the relationships between services and their dependent resources) and add just two lines of code under its resources section:

```yaml
cache:
  type: redis
```

Here’s roughly how the Score file looks afterwards.
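For illustration, the resulting file could look something like the sketch below; the workload name, container image, and environment variable wiring are placeholders for this example rather than part of the reference implementation.

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: my-workload                 # hypothetical workload name
containers:
  main:
    image: ghcr.io/example-org/my-workload:latest   # placeholder image
    variables:
      # Placeholders resolved by the platform at deploy time from the
      # outputs of the "cache" resource (exact output keys depend on the
      # resource definition used to provision it).
      REDIS_HOST: ${resources.cache.host}
      REDIS_PORT: ${resources.cache.port}
resources:
  cache:
    type: redis                     # the two new lines from above
```

The placeholder syntax is what keeps the abstraction clean: the workload references outputs of the cache resource without encoding how or where the Redis is actually provisioned.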
Humanitec Platform Orchestrator - the graph-based backend of the platform
All that’s left to do is commit this change, and here’s what happens next:
- GitHub Actions will start to run and forward the changes to the Platform Orchestrator.
- The Orchestrator will then read the Score file, build a diff of the changes, and analyze the metadata to understand the target of the deployment (let’s say we’re deploying to an environment of type staging).
- The combination of resource type = redis and context = staging is sufficient to identify the correct resource definition. The resource definition is set by the platform or infrastructure and operations team and defines how a Redis in staging should be configured. Here’s what this resource definition could look like if we used a Terraform module to create and update the state (sketched below). It’s worth mentioning that you don’t have to use Terraform; it could also be Crossplane, Pulumi, a direct API call, etc.
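As a rough, illustrative sketch only (the field names, driver identifier, and module location below are assumptions rather than the exact Orchestrator schema or the reference implementation), a resource definition of this kind captures roughly the following information:

```yaml
# Illustrative sketch of a resource definition wiring the "redis" type in
# staging to a Terraform module. Field names and values are assumptions.
id: redis-staging
type: redis                          # the resource type requested in the Score file
driver_type: humanitec/terraform     # delegate provisioning to a Terraform module
driver_inputs:
  values:
    source:
      url: https://github.com/example-org/platform-iac.git   # hypothetical repo
      path: modules/redis                                     # hypothetical module path
    variables:
      plan: small
criteria:
  - env_type: staging                # only match environments of type "staging"
```

The important part is the matching criteria: because this definition is scoped to staging, the same two lines in the Score file can resolve to an entirely different (for example, highly available) Redis setup in production.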
Once the Orchestrator has determined how to create or update all resources, it analyzes how these resources fit together and whether there are any dependencies, such as the need for a role or service account to be created first. It then constructs an acyclic resource graph (which is why Platform Orchestrators are often referred to as graph-based backends) and updates or creates all the resources in the right order.
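Purely as a simplified illustration (the Orchestrator’s internal representation is not shown here), the graph for this example might order things like this:

```yaml
# Hypothetical, simplified view of the dependency ordering in the resource graph.
my-workload:
  depends_on: [cache, service-account]   # workload config needs both resolved first
cache:
  type: redis
  depends_on: [iam-role]                 # e.g. a role required before provisioning
service-account:
  type: k8s-service-account
  depends_on: []
iam-role:
  type: iam-role
  depends_on: []
```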
Next, the system regenerates the workload configurations. If configured, a policy check can be triggered to allow a third-party tool to verify that no policies are violated. For production deployments, this might also include a human sign-off.
If everything is successful, the Orchestrator stores the workload configurations, infrastructure configurations, and the acyclic resource graph in a Target State repository. As the name suggests, this repository holds the desired target state of the resource plane once the deployment is complete.
ArgoCD - your GitOps tool for CD
When the repository is updated, ArgoCD detects the changes and pulls them into the OpenShift cluster to continue execution in the network. ArgoCD then hands off control to the Humanitec Operator within the cluster, which reads the resource graph and begins updating or creating the resources in the right sequence. It will then also gather credentials and inject them into the containers as secrets at runtime before deploying everything.
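As a rough illustration of this handoff, the GitOps side could be wired up with an ArgoCD Application that watches the Target State repository; the repository URL, path, and namespaces below are placeholders, not the reference setup.

```yaml
# Illustrative ArgoCD Application tracking the Target State repository.
# Repo URL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: target-state
  namespace: openshift-gitops        # default ArgoCD namespace with OpenShift GitOps
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/target-state.git
    targetRevision: main
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app-staging
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Because ArgoCD pulls from the repository rather than anything pushing into the cluster, this is also where the “no push traffic into the network from the delivery plane” paradigm is enforced.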
Finally, the Orchestrator might push a notification or message to Microsoft Teams about the success or failure of the deployment, move the related Jira ticket, and update the portal.
This is where we get the connection to the frontend, which is the display layer for all of this. In this example, we are using Red Hat’s new Developer Hub. It consumes the API of the Platform Orchestrator as the central source of truth and is thus always kept up to date. The relationships between all components across all environments are neatly documented and cataloged.
Where to start
This was a ton of information, but if you would like to go deeper, you can download the full whitepaper. You can also find the code for this reference architecture on GitHub and try it yourself, or get started right away by speaking to one of our platform architects or joining the Humanitec Minimum Viable Platform (MVP) program.