Workload Identity for GKE made easy with open source tools

Google Cloud offers a clever way of allowing Google Kubernetes Engine (GKE) workloads to safely and securely authenticate to Google APIs with minimal credentials exposure. I will illustrate this method using a tool called kaniko.

What is kaniko?

kaniko is an open source tool that allows you to build and push container images from Kubernetes pods when a Docker daemon is not easily accessible and you have no root access to the underlying machine. kaniko executes the build commands entirely in the userspace and has no dependency on the Docker daemon. This makes it a popular tool in continuous integration (CI) pipeline toolkits.

The dilemma

Suppose you want to access some Google Cloud services from your GKE workload, such as a secret from Secret Manager, or in our case, building and pushing a container image to Google's Container Registry (GCR). Doing so requires the authorization of a Google service account (GSA) governed by Cloud IAM. This is different from a Kubernetes service account (KSA), which provides an identity for pods and is dictated by Kubernetes Role-Based Access Control (RBAC). So how would you go about giving your GKE workloads access to said Google Cloud services in a secure manner?

1: Use the Compute Engine service account

The first option is to leverage the IAM service account used by the node pool(s). By default, this is the Compute Engine default service account. The downside of this method is that the service account's permissions are shared by all workloads on the nodes, violating the principle of least privilege. Because of this, it is recommended that you use a custom service account with the least-privileged role and opt for a more granular approach when granting access to your workloads.
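If you do take this route, a minimal sketch of the more granular approach might look like the following (the service account name, node pool name, and cluster name here are assumptions for illustration):

```shell
# Create a dedicated, minimally privileged service account for the nodes
gcloud iam service-accounts create gke-node-sa \
  --display-name "GKE node service account"

# Create a node pool that uses it instead of the Compute Engine default
gcloud container node-pools create custom-sa-pool \
  --cluster my-cluster \
  --service-account "gke-node-sa@${PROJECT_ID}.iam.gserviceaccount.com"
```

Any roles you then grant to gke-node-sa apply only to workloads scheduled on that node pool, rather than to everything running in the cluster.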

2: Use service account keys as Kubernetes secrets

The second, more secure option is the tried, tested, and true method of generating service account keys for a GSA with the permissions that you need and mounting them in your pod as a Kubernetes secret. The pod manifest to build and push an image to GCR would look something like the following:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-k8s-secret
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.9.1
    args: ["--dockerfile=Dockerfile",
           "--context=gs://${GCS_BUCKET}/path/to/context.tar.gz",
           "--destination=gcr.io/${PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}",
           "--cache=true"]
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret.json
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-secret

The GOOGLE_APPLICATION_CREDENTIALS environment variable contains the path to a Google Cloud credentials JSON file, mounted at /secret inside the pod. It is through this service account key that the Kubernetes pod is able to access the build context files and push the image to GCR.
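For completeness, the key and secret referenced by that manifest could be created along these lines (the key file name kaniko-secret.json and secret name kaniko-secret match the manifest; the GSA name kaniko-gsa is an assumption for illustration):

```shell
# Generate a JSON key for the GSA (assumes a GSA named kaniko-gsa already exists)
gcloud iam service-accounts keys create kaniko-secret.json \
  --iam-account "kaniko-gsa@${PROJECT_ID}.iam.gserviceaccount.com"

# Store it as the Kubernetes secret mounted by the pod
kubectl create secret generic kaniko-secret \
  --from-file=kaniko-secret.json
```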

The downside of this method is that you have live, non-expiring keys floating around, with a constant risk of their being leaked, stolen, or accidentally committed to a public code repository.

3: Use Workload Identity

The third option uses Workload Identity to provide the link between a Google SA and a Kubernetes SA. This grants the KSA the ability to act as the GSA when interacting with Google Cloud-native services and resources. This method still provides the granular access from IAM without requiring any service account keys to be generated, thus closing the gap.

Setup

You will need to enable Workload Identity on your GKE cluster as well as configure the metadata server for your node pool(s). You will also need a GSA (I called mine kaniko-wi-gsa) and to assign it the proper roles it needs:

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --role roles/storage.admin \
  --member "serviceAccount:kaniko-wi-gsa@${PROJECT_ID}.iam.gserviceaccount.com"

On the Kubernetes side, create a KSA (I called mine kaniko-wi-ksa) and add the following IAM binding, which allows it to impersonate the GSA that has the permissions to access the Google Cloud services you need:

gcloud iam service-accounts add-iam-policy-binding kaniko-wi-gsa@${PROJECT_ID}.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/kaniko-wi-ksa]"
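If you have not created the KSA itself yet, that step is a one-liner (assuming the default namespace, matching the member string in the binding above):

```shell
# Create the Kubernetes service account that will impersonate the GSA
kubectl create serviceaccount kaniko-wi-ksa --namespace default
```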

The last thing you need to do is annotate your KSA with the full email of your GSA:

kubectl annotate serviceaccount kaniko-wi-ksa \
  iam.gke.io/gcp-service-account=kaniko-wi-gsa@${PROJECT_ID}.iam.gserviceaccount.com

Here is the pod manifest for the same image build job, but using Workload Identity instead:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-wi
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.9.1
    args: ["--dockerfile=Dockerfile",
           "--context=gs://${GCS_BUCKET}/path/to/context.tar.gz",
           "--destination=gcr.io/${PROJECT_ID}/${IMAGE_NAME}:${IMAGE_TAG}",
           "--cache=true"]
  restartPolicy: Never
  serviceAccountName: kaniko-wi-ksa
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: "true"

Although using Workload Identity requires a little more initial setup, you no longer need to generate or rotate any service account keys.

What if you want to access services in another Google Cloud project?

Sometimes you may want to push your images to a central container registry located in a Google Cloud project that is different from the one your GKE cluster is in. Can you still use Workload Identity in this case?

Absolutely! Your GSA and the necessary IAM bindings are created in the external Google Cloud project, but you still reference the Workload Identity pool and KSA of the project your GKE workload is running in.
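As a sketch, the cross-project bindings would look like the following, where REGISTRY_PROJECT_ID and CLUSTER_PROJECT_ID are placeholder names for the registry project and the cluster's project:

```shell
# Grant the GSA, which lives in the registry project, permission to push to GCR there
gcloud projects add-iam-policy-binding ${REGISTRY_PROJECT_ID} \
  --role roles/storage.admin \
  --member "serviceAccount:kaniko-wi-gsa@${REGISTRY_PROJECT_ID}.iam.gserviceaccount.com"

# The Workload Identity member still references the cluster project's identity pool
gcloud iam service-accounts add-iam-policy-binding \
  kaniko-wi-gsa@${REGISTRY_PROJECT_ID}.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${CLUSTER_PROJECT_ID}.svc.id.goog[default/kaniko-wi-ksa]"
```

Note that only the member string keeps pointing at the cluster's project; everything else moves to the registry project.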

Now what

By using kaniko, we illustrated Workload Identity and how it allows more secure access when authenticating to Google APIs. Use recommended security practices to harden your GKE cluster, and stop using node service accounts or exporting service account keys as Kubernetes secrets.
