What? Three? Lines?
Have you heard of what3words? It is a location service that lets you identify every spot on planet Earth at a 3 m × 3 m resolution by combining just three words. E.g. jams.known.bind maps to the Paternoster Square Column in London, UK.
Sounds incredible? Well, it is really just a smart setup and a great simplification.
What if you had an internal developer platform that lets your developers self-serve a complex array of storage buckets, databases, or other resources, by using just three lines of code?
And get the complete IAM based access control created automatically, with all the cloud roles, permissions, and Kubernetes workload identity wired up, ready to go?
And they will not need to know a thing about how IAM and workload identity work in your cloud.
You can build this capability using the combination of Score and the Humanitec Platform Orchestrator. This blog post explores the underlying mechanisms and guides you through the process of engineering the solution.
We will be using AWS as the sample cloud provider, Amazon S3 buckets as the sample resource type, and EKS with Pod Identity as the sample Kubernetes service. Our public guide shows the full implementation for AWS as well as the equivalent setup for Google Cloud (GCP). The mechanisms at play can be adapted to any other cloud.
The objective
At the end of the day, we want developers to add these three lines to their Score file:

```yaml
resources:
  my-bucket:
    type: s3
```
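For context, here is a sketch of a minimal but complete Score file with those three lines in place. The workload name, container name, and image are illustrative placeholders:

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: my-workload        # illustrative workload name
containers:
  main:                    # illustrative container name
    image: nginx:alpine    # illustrative image
resources:
  my-bucket:               # the three lines from above
    type: s3
```

The `resources` section is all the developer has to touch; everything else is their ordinary workload description.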
Score is a workload specification for containerized workloads. It aims to provide useful abstractions for things that are in reality far more complex. An S3 bucket complete with an IAM access setup and workload identity is a prime example, because it involves all of these things:
- The S3 bucket itself
- A choice of either identity-based policies or bucket policies (we will be showing the latter)
- IAM roles
- Kubernetes service accounts
- Pod identity associations on EKS
- … and wiring it all up in the proper way 🤯
Of all these resources, only the S3 bucket appears in the Score file though. All others are modeled into the Orchestrator by the platform team, hiding behind those three lines of code. If you are a platform engineer, then that modeling is your task. We will now look at how to approach it systematically and how the mechanisms of the Orchestrator will naturally assist you.
Divide and conquer
As with most complex problems, we can break the task down into smaller pieces and tackle them one by one.
- For every S3 bucket, we also need a bucket policy
- EKS Pod Identity is anchored on Kubernetes service accounts. So for every workload, we also need a service account
- For every service account, we also need an IAM principal and a Pod Identity association linking the two
- The bucket policy refers to the bucket and to the IAM principal
These statements give us some structure to work with. In particular, they express resources and their dependencies, and we can model those into the Platform Orchestrator. The Orchestrator is made to create a dependency graph out of all the resources, and we can use its mechanisms to transform the natural language statements we just wrote down into technical statements.
A policy for every bucket
We derived this requirement:
For every S3 bucket, we also need a bucket policy
Creating any resource with the Orchestrator requires a Resource Definition that acts as the recipe for that type of resource. The one for the S3 bucket is available here. It contains some Terraform code to provision an actual S3 instance. Likewise, there is another Resource Definition for the bucket policy.
The interesting part is the "Whenever…also ..." requirement. We can model it using a so-called "co-provisioning" statement in the S3 Resource Definition:
```yaml
provision:
  aws-policy.s3-bucket-policy:
    is_dependent: true
```
Every S3 bucket created will now automatically co-provision another resource of type aws-policy. The .s3-bucket-policy part assigns it a specific class to distinguish it from other policy objects in the Graph that are not related to S3.
Finally, the is_dependent statement lets the new policy resource depend on the S3 resource in the graph. The dependency enables the policy resource to read properties from the bucket such as the bucket name, which it needs for its own configuration.
The Resource Definition code for the bucket policy reads properties from the bucket like this:
```yaml
s3_bucket_name: ${resources['s3.default'].outputs.bucket}
```
Co-provisioning is one core mechanism for handling "whenever making this… also make this". There is another one we'll look at next.
Add the service account
We derived this requirement:
For every workload, we also need a service account
This is another case of "whenever making this… also make this", but there is one difference: the workload will need to read a property from the service account. Technically, with the workload being deployed as a Kubernetes Pod, it will have to add the service account name to its own specification as the serviceAccountName property.
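To make this concrete, the rendered Pod (or Pod template) ends up carrying the injected serviceAccountName. This is an illustrative Kubernetes sketch; the names and image are placeholders, not output of the actual setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-workload                    # illustrative name
spec:
  serviceAccountName: my-workload-sa   # injected from the service account resource's name output
  containers:
    - name: main
      image: nginx:alpine              # illustrative image
```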
So graph-wise, the "for every…" Resource (the workload) depends on the "we also need a ..." resource (the service account), not the other way around.
You model this requirement in the Resource Definition using a so-called Resource Reference. It does two things:
- Creates a new resource of a certain type if it does not already exist
- Lets the referencing resource read properties from that new resource
The workload's Resource Definition contains such a Resource Reference to create a service account and read its name:
```yaml
value: ${resources.k8s-service-account.outputs.name}
```
Doing so will add the service account resource to the Graph:
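Under the hood, that resource ultimately materializes as a plain Kubernetes ServiceAccount object, conceptually like this (names are illustrative; the real values are generated by the Resource Definition):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-workload-sa   # illustrative; read by the workload via the Resource Reference
  namespace: my-app      # illustrative namespace
```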
Add the principal
On to the next requirement:
For every service account, we also need an IAM principal and a Pod Identity association linking the two
The automatic creation of the IAM principal as an IAM Role plus a Pod Identity association (Resource Definition here) is again handled by a co-provisioning statement, this time in the service account's Resource Definition, because the Role needs to read the name of the service account:
```yaml
provision:
  aws-role:
    is_dependent: true
    match_dependents: true
```
The match_dependents: true statement creates a dependency from the workload to the IAM Role. It enables the bucket policy to locate the IAM Role resource by following the dependencies in the Graph via the S3 bucket and its workload. That helps fulfill the last requirement:
The bucket policy refers to the bucket and to the IAM principal

The Pod identity association is created as part of the IAM Role and not as its own resource to keep the Graph simpler.
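Putting the pieces together, the resulting dependency graph can be summarized like this. Note this is illustrative pseudo-YAML to visualize the relationships, not actual Orchestrator syntax:

```yaml
# Conceptual sketch of the final Resource Graph (not Orchestrator syntax).
# "depends_on" reads as "has a dependency on" in the Graph.
workload:
  depends_on:
    - s3                     # from the three lines in the Score file
    - k8s-service-account    # via the Resource Reference
    - aws-role               # via match_dependents on the co-provision
aws-policy.s3-bucket-policy:
  depends_on:
    - s3                     # co-provisioned with is_dependent: true
s3: {}
aws-role:
  depends_on:
    - k8s-service-account    # co-provisioned with is_dependent: true
k8s-service-account: {}
```

The bucket policy can then locate the IAM Role by following the Graph from the S3 bucket through the workload, as described above.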
What? Three? Lines?
We will not dive deeper into every detail of the workload identity sample setup in this post. Visit our full workload identity guide to learn more. Taken together, the mechanisms we have shown automatically create the complete Resource Graph for an S3 bucket.
And all it takes is adding these three lines to a Score manifest:
```yaml
resources:
  my-bucket:
    type: s3
```
and deploying the manifest using a single command out of your CI/CD pipeline or local compute environment:
```shell
humctl score deploy
```
Having molded your specialized knowledge into the Orchestrator, you can now empower your developers to self-serve a standardized, complex cloud setup at any scale, in minutes, and without a human in the loop.
Next steps
- Explore and replay the public workload identity guide
- See all the concepts of Resource handling in the Platform Orchestrator in our Resources 101