Overcoming Okta “thin tokens” with Kubeapps and Tanzu Kubernetes Grid 1.3

Dodd Pfeffer
May 16, 2021


Do you know what a “thin token” is? It’s ok, I didn’t either. I was attempting to deploy Kubeapps configured for OIDC auth with Okta on Tanzu Kubernetes Grid (TKG) 1.3. Michael Nelson’s Kubeapps on Tanzu Kubernetes Grid 1.3 blog walked me through the process of configuring Kubeapps with OpenID Connect (OIDC) and Pinniped.

As a frame of reference, I previously had everything working in my lab environment, which was running Kubeapps on Tanzu Kubernetes Grid (TKG) 1.2 with Okta as the OIDC provider. TKG 1.2 uses Dex as a federated OIDC provider. Kubeapps and the cluster it was running on were configured with Dex as the OIDC issuer, while Dex was configured with Okta as the upstream provider.

However, TKG 1.3 introduced Pinniped as “the easy, secure way to log in to your Kubernetes clusters”. A byproduct of this was the removal of Dex and a new method of auth negotiation for the workload clusters.

Michael’s blog explains how to perform the new integrations. However, when I followed the steps, I faced an issue with Kubeapps. I only had the permissions directly granted to my user account; I was unable to access (and thus deploy apps to) namespaces for which I had permission through a group membership. Troubleshooting this issue, and the lessons that came out of it, are the inspiration for this blog post.

Background and Context

Let’s start by introducing Kubeapps. Kubeapps describes itself as an application dashboard for Kubernetes. But what does that mean? As a developer or application operator, I want access to popular open-source software for use in my applications. Kubeapps provides the user a catalog of available applications to deploy. Once an app is selected, it provides detailed information and configuration options, allowing the user to customize and deploy the application into a selected namespace. Tremendous!

But how does Kubeapps know if the user has permission to deploy the application? A more traditional method of deploying an application may be to use Helm within a terminal session. Helm, in turn, would leverage the local kubeconfig context to interact with the Kubernetes API server. The Kubernetes API server would validate the user’s identity and use role bindings to determine if the user has the requisite permissions.
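
For example, the kind of binding the API server consults might look like the following (the group and namespace names here are purely illustrative):

kubectl create rolebinding app-team-edit \
  --clusterrole=edit \
  --group=app-team \
  --namespace=team-a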

As a browser-based UI application, Kubeapps does not have access to the user’s kubeconfig. It must use other means to retrieve user context in order for the Kubernetes API server to authorize the user’s request to deploy the application. For demos and other quick environments where security is not an issue, Kubeapps allows logging in with a service account token. Although functional, this is not very user friendly. Single sign-on is an improved user experience, and Kubeapps offers OIDC integration. In this workflow, the user is redirected to the identity provider (IdP) for authentication, then redirected back to Kubeapps with an auth code, which Kubeapps can use to retrieve the ID token required to communicate with the Kubernetes API server.
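
For completeness, the service account route looks roughly like this. It is a demo-only sketch: the account name is my own, and binding cluster-admin like this is far too permissive for anything beyond a throwaway environment.

kubectl create serviceaccount kubeapps-demo --namespace default
kubectl create clusterrolebinding kubeapps-demo \
  --clusterrole=cluster-admin \
  --serviceaccount=default:kubeapps-demo
# On clusters that still auto-create token secrets, the token to paste into
# the Kubeapps login form can be read back like this:
kubectl get secret \
  $(kubectl get serviceaccount kubeapps-demo --namespace default -o jsonpath='{.secrets[0].name}') \
  --namespace default -o jsonpath='{.data.token}' | base64 --decode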

Where was this breaking down in my configuration? Why were my group memberships not being honored?

Findings

  1. Group information was missing from the ID token provided to Kubeapps. This was identified by following the troubleshooting steps in the Kubeapps docs (a sketch of the checks follows this list).
  2. Okta provides a “thin token” even when the groups scope is requested. Groups must be retrieved by calling the userinfo endpoint with the access token. This “thin token” is passed by Kubeapps to Pinniped for auth, and since it doesn’t contain group information, the Kubernetes API server cannot authorize the user based upon group membership.
  3. The OIDC protocol includes an optional step to retrieve additional user information (including groups). Okta’s KB article explains that a resolution to the missing groups is to orchestrate this workflow, calling the userinfo endpoint and retrieving the groups.
  4. The Kubeapps integration with TKG 1.2 and Okta worked because of indirection. TKG 1.2 uses Dex as an OIDC federated identity provider. Kubeapps was configured with Dex as the OIDC identity provider. Dex performed the orchestration with Okta to retrieve an ID token and groups, and then generated a new “fat token” to provide to Kubeapps.
  5. Kubeapps uses oauth2-proxy as a client. It can be configured to call the OIDC issuer’s userinfo endpoint, but it cannot enrich the token without invalidating its digital signature. It is not an OIDC federated provider like Dex or the Pinniped Supervisor (used in the TKG 1.3 management cluster).
  6. The Pinniped Supervisor does not currently allow additional static clients to be configured, which would have made it a simple drop-in replacement for the role Dex played in TKG 1.2.
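
To make findings 1 and 2 concrete, here is roughly how the tokens can be inspected from a terminal. ID_TOKEN, ACCESS_TOKEN, and USERINFO_ENDPOINT are placeholders for values taken from your own OIDC session, and jq is assumed to be installed:

# Decode the payload of the ID token to see which claims it actually carries
# (the payload may need base64 padding added before it decodes cleanly):
echo "$ID_TOKEN" | cut -d '.' -f2 | base64 --decode | jq .

# The groups are only returned by the userinfo endpoint, called with the
# access token. The endpoint URL is listed as "userinfo_endpoint" in the
# issuer's /.well-known/openid-configuration document.
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" "$USERINFO_ENDPOINT" | jq .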

Solution

I needed a way to provide Kubeapps a “fat token” containing the Okta group information. Dex does this for TKG 1.2, so why not use Dex for this in TKG 1.3? We can. However, Dex is not included in TKG 1.3, so we will have to add it ourselves. The Dex project provides a Helm chart for this. The following is an adaptation of the steps outlined in Michael Nelson’s Kubeapps on Tanzu Kubernetes Grid 1.3.

1. Create an OAuth2 client-id for Dex (instead of Kubeapps) to use
2. Configure and install Dex (new step)
3. Create a Pinniped JWT Authenticator referencing Dex (instead of the source OIDC provider, Okta)
4. Create the configuration values for Kubeapps and install (include groups in scope configuration)
5. Last steps to enable your user access (via group membership)

Note: My TKG cluster is configured using TKG Extensions for Contour as an ingress controller, cert-manager for TLS certificate life-cycle management, and external-dns for DNS life-cycle management.

1. Create an OAuth2 client-id for Dex to use

Same steps as in Michael’s blog, except the client is for Dex instead of Kubeapps.

2. Create the configuration values for Dex and install it with Helm

The following is my Dex Helm chart values file. You will have to replace the host names to match your environment. As an OIDC federated provider, the configuration contains the Okta upstream info along with Kubeapps as a downstream client.

ingress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-contour-cluster-issuer # Your issuer may be different
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/ingress.class: contour
    kubernetes.io/tls-acme: "true"
  hosts:
    - host: dex.whiteharbor.tkg-vsphere-lab.winterfell.life
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: dex-cert
      hosts:
        - dex.whiteharbor.tkg-vsphere-lab.winterfell.life
config:
  issuer: https://dex.whiteharbor.tkg-vsphere-lab.winterfell.life
  staticClients:
    - redirectURIs:
        - https://kubeapps.whiteharbor.tkg-vsphere-lab.winterfell.life/oauth2/callback
      id: kubeapps
      name: kubeapps
      secret: # YOUR RANDOMLY GENERATED SECRET. Must match what is configured in Kubeapps.
  connectors:
    - type: oidc
      id: oidc
      name: oidc
      config:
        issuer: # YOUR OKTA ISSUER URL e.g. https://dev-######.okta.com
        clientID: # YOUR OKTA CLIENT ID
        clientSecret: # YOUR OKTA CLIENT SECRET
        redirectURI: https://dex.whiteharbor.tkg-vsphere-lab.winterfell.life/callback
        scopes:
          - openid
          - profile
          - email
          - groups
          - offline_access
        insecureEnableGroups: true
        getUserInfo: true
        userNameKey: email
        claimMapping:
          email: ""
          email_verified: email_verified
          groups: groups
          preferred_username: ""
        insecureSkipVerify: false
  oauth2:
    skipApprovalScreen: true
    responseTypes:
      - code
  storage:
    type: kubernetes
    config:
      inCluster: true
  enablePasswordDB: false

Then deploy Dex:

helm repo add dex https://charts.dexidp.io

helm upgrade dex dex/dex \
  --install \
  --namespace dex \
  --values dex-values.yaml
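
A couple of optional sanity checks before moving on; the host name is from my environment, so adjust it for yours:

kubectl get pods --namespace dex
curl -s https://dex.whiteharbor.tkg-vsphere-lab.winterfell.life/.well-known/openid-configuration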

3. Create a Pinniped JWT Authenticator referencing Dex

This is almost exactly the same as Michael’s, except it references Dex.

apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: kubeapps-jwt-authenticator
  namespace: pinniped-concierge
spec:
  audience: kubeapps
  claims:
    groups: "groups"
    username: "email"
  issuer: https://dex.whiteharbor.tkg-vsphere-lab.winterfell.life
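
Apply it in the usual way; the file name here is simply what I chose locally:

kubectl apply -f kubeapps-jwt-authenticator.yaml
kubectl get jwtauthenticators --namespace pinniped-concierge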

4. Create the configuration values for Kubeapps and install

Again, this configuration file is nearly the same as Michael’s, except I’m using an ingress and configuring the authProxy to target Dex. I’ve removed the additional scope flag because the default is openid email groups, which is what I want.

useHelm3: true
allowNamespaceDiscovery: true
ingress:
  enabled: true
  certManager: true
  hostname: kubeapps.whiteharbor.tkg-vsphere-lab.winterfell.life
  tls: true
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/proxy-body-size: "0"
    kubernetes.io/ingress.class: "contour"
    cert-manager.io/cluster-issuer: "letsencrypt-contour-cluster-issuer"
    kubernetes.io/tls-acme: "true"
authProxy:
  enabled: true
  provider: oidc
  cookieSecret: # base64-encoded random secret value
  clientID: kubeapps
  clientSecret: # YOUR RANDOMLY GENERATED SECRET. Must match what is configured in Dex.
  additionalFlags:
    - --oidc-issuer-url=https://dex.whiteharbor.tkg-vsphere-lab.winterfell.life
# Pinniped Support
# https://liveandletlearn.net/post/kubeapps-on-tanzu-kubernetes-grid-13-part-2/
pinnipedProxy:
  enabled: true
  defaultAuthenticatorName: kubeapps-jwt-authenticator
  image:
    repository: bitnami/kubeapps-pinniped-proxy
    # Explicitly request the version of pinniped-proxy which supports the pre-0.6.0 version of Pinniped.
    tag: 2.2.1-debian-10-r22 # TODO: Remove this when TKG bumps Pinniped to 0.6.0+
clusters:
  - name: default
    pinnipedConfig:
      enable: true
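
Then install Kubeapps from the Bitnami chart, assuming the values above are saved as kubeapps-values.yaml:

helm repo add bitnami https://charts.bitnami.com/bitnami

helm upgrade kubeapps bitnami/kubeapps \
  --install \
  --namespace kubeapps \
  --create-namespace \
  --values kubeapps-values.yaml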

5. Last steps to enable your user access

In this case, I’m assigning the cluster role binding to a group for which my user has active membership.

kubectl create clusterrolebinding id-workload-test-rb \
  --clusterrole cluster-admin \
  --group your-group-name
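
A quick way to confirm the binding behaves as expected is to impersonate the group (the user and namespace here are illustrative, and impersonation itself requires admin rights):

kubectl auth can-i create deployments \
  --namespace default \
  --as you@example.com \
  --as-group your-group-name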

Final Thoughts — Dynamic Authenticators

Aside from an interesting system integration challenge and solution, I want to highlight the role that Pinniped plays. By having the Pinniped Concierge deployed within my cluster, I had the ability to dynamically add additional authenticators for the Kubernetes API server to trust. Prior to adding the JWTAuthenticator resource, my Kubernetes API server would only authenticate tokens signed by the Pinniped Supervisor. Once I added the JWTAuthenticator resource, it also trusted tokens signed by Dex. This is the key to the whole integration. It goes without saying that permissions on this resource must be restricted. But for a cluster administrator, it is a flexible and powerful tool.
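
As an illustration of locking that down, a cluster administrator could verify that an ordinary user has no such access (the user name is hypothetical):

kubectl auth can-i create jwtauthenticators.authentication.concierge.pinniped.dev \
  --namespace pinniped-concierge \
  --as dev-user@example.com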


Dodd Pfeffer

Solution Engineer on the VMware Tanzu team, helping customers achieve success with Kubernetes