BYO Certs for TKG 1.4.1+ Auth

Dodd Pfeffer
Jan 24, 2022 · 5 min read

The process described in this blog has changed slightly with TKG 1.5.3. If you are using TKG 1.5.3+, please go to this version of the blog.

Tanzu Kubernetes Grid (TKG) provides the means to deploy consistent, upstream-aligned Kubernetes clusters leveraging the Kubernetes subproject Cluster API. TKG 1.4.0 introduced the use of the Carvel API Resources format for core and user-managed packages. TKG Auth is delivered by the core package Pinniped.

This process was originally written up for TKG 1.4.0; however, in TKG 1.4.1 a small change was made for deployments to vSphere such that the TKG Auth components were each given their own service of type LoadBalancer. This version of the post has been updated to accommodate that change.

You configure TKG Auth to meet your specific implementation requirements using the IDENTITY_MANAGEMENT_TYPE and LDAP_ or OIDC_ config parameters. However, certain custom configuration patterns can only be applied after management cluster creation because of dependencies on post-cluster-creation resources. The result of this base configuration is the use of self-signed certs and auto-generated URIs (the exact format is IaaS dependent) for Pinniped and Dex.
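For reference, the relevant entries in a cluster configuration file for LDAP auth might look like this sketch (all values are placeholders for your environment):

```yaml
# Hypothetical cluster-config excerpt for LDAP-backed TKG Auth; values are placeholders
IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: ldap.example.com:636
LDAP_BIND_DN: "cn=bind-user,ou=service-accounts,dc=example,dc=com"
LDAP_USER_SEARCH_BASE_DN: "ou=users,dc=example,dc=com"
LDAP_GROUP_SEARCH_BASE_DN: "ou=groups,dc=example,dc=com"
```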

Pinniped URL


Dex URL after agreeing to proceed

We want to present the browser-based authentication flow to the user using a friendly FQDN, the standard HTTPS port 443, and valid SSL certificates. We will follow these steps.

  1. Setup DNS
  2. Generate Certificates and Add to Cluster
  3. Update Pinniped Core Package
  4. “Post” Post Pinniped Deploy Configuration

Note: For the walkthrough below, we are assuming you have enabled LDAP and thus Dex is deployed with your TKG Management cluster. If you are using OIDC for auth, you can still follow the steps below; simply skip the references to Dex.

Initial Setup

I’ve deployed the management cluster with IDENTITY_MANAGEMENT_TYPE set to LDAP. My management cluster is named mgmt, I want to use my own FQDNs for the Pinniped and Dex endpoints, and I have an offline process to generate certificates for my organization.

Setup DNS

First we set up the DNS entries for the custom FQDNs for the Pinniped Supervisor and Dex services.

You can retrieve the External IPs assigned to the services. Use the values from the query below to create DNS entries for your Pinniped and Dex FQDNs, then proceed to the next section.

$ kubectl get service pinniped-supervisor -n pinniped-supervisor
$ kubectl get service dexsvc -n tanzu-system-auth
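With the EXTERNAL-IP values from those two services in hand, the DNS entries are simple A records. A zone-file sketch, with hypothetical names and addresses:

```
; Hypothetical A records; substitute your FQDNs and the EXTERNAL-IPs from kubectl
pinniped.example.com.   300  IN  A  192.0.2.10
dex.example.com.        300  IN  A  192.0.2.11
```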

Generate Certificates and Add to Cluster

Next, follow your organization’s process to generate certificates for the desired FQDNs. Create a secret named custom-auth-cert-tls in each of the pinniped-supervisor and tanzu-system-auth namespaces, containing the tls.key and the tls.crt.

$ kubectl create secret tls custom-auth-cert-tls \
--namespace pinniped-supervisor \
--cert=path/to/cert/file \
--key=path/to/key/file
$ kubectl create secret tls custom-auth-cert-tls \
--namespace tanzu-system-auth \
--cert=path/to/cert/file \
--key=path/to/key/file
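If you prefer to manage these resources declaratively, the same secret can be expressed as a manifest. A sketch for the pinniped-supervisor copy, assuming the PEM files have already been base64-encoded (repeat with namespace tanzu-system-auth for the Dex copy):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: custom-auth-cert-tls
  namespace: pinniped-supervisor
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert PEM>
  tls.key: <base64-encoded key PEM>
```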

Update Pinniped Core Package

Now that the certificates are in place, we need to update the Pinniped package to tell it about the secrets.

$ MGMT_CLUSTER_NAME=mgmt # update accordingly
$ kubectl get secret $MGMT_CLUSTER_NAME-pinniped-addon -n tkg-system -ojsonpath="{.data.values\.yaml}" | base64 --decode > /tmp/pinniped-addon-values.yaml

Edit /tmp/pinniped-addon-values.yaml, setting custom_tls_secret to "custom-auth-cert-tls".

custom_tls_secret: "custom-auth-cert-tls"

Patch the addon secret values.yaml key.

$ NEW_VALUES_YAML=`cat /tmp/pinniped-addon-values.yaml | base64`
$ kubectl patch secret $MGMT_CLUSTER_NAME-pinniped-addon -n tkg-system -p '{"data": {"values.yaml": "'$NEW_VALUES_YAML'"}}'
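One pitfall worth hedging against: GNU base64 wraps its output at 76 characters, and a wrapped value breaks the JSON patch above. A small portable sketch of the encoding step (the demo file path is illustrative; in practice you would encode /tmp/pinniped-addon-values.yaml):

```shell
# Demo file standing in for the real pinniped-addon values file
printf 'custom_tls_secret: "custom-auth-cert-tls"\n' > /tmp/demo-values.yaml

# tr -d '\n' strips line wrapping on both GNU and BSD base64
NEW_VALUES_YAML=$(base64 < /tmp/demo-values.yaml | tr -d '\n')

# Round-trip to confirm the encoding is intact
echo "$NEW_VALUES_YAML" | base64 --decode
```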

By patching the $MGMT_CLUSTER_NAME-pinniped-addon secret, the Pinniped Supervisor and Dex pods will be re-created and will terminate their TLS connections with the custom certificates. The pinniped-post-deploy-job will also be re-run.

$ kubectl get jobs -n pinniped-supervisor
NAME                       COMPLETIONS   DURATION   AGE
pinniped-post-deploy-job   1/1           6m44s      6m44s

“Post” Post Pinniped Deploy Configuration

The pinniped-post-deploy-job updates resources based upon the configuration. However, it does not know about our FQDNs and custom CA. So we need to manually update the Dex config map and the custom Pinniped resources with references to our FQDNs and the CA used for our custom TLS certificates.

$ PINNIPED_URL=<pinniped_url_from_above>
$ DEX_URL=<dex_url_from_above>
$ CA_BUNDLE=`cat /path/to/ca/file | base64`
$ kubectl edit cm dex -n tanzu-system-auth
# edit and save configmap ...
issuer = <dex_url_from_above>
- redirectURIs:
- <pinniped_url_from_above>/callback
...
# And bounce dex
$ kubectl rollout restart deployment dex --namespace tanzu-system-auth
$ kubectl patch federationdomain pinniped-federation-domain \
-n pinniped-supervisor \
--type json \
-p="[{'op': 'replace', 'path': '/spec/issuer', 'value': '$PINNIPED_URL'}]"
$ kubectl patch jwtauthenticator tkg-jwt-authenticator \
-n pinniped-concierge \
--type json \
-p="[{'op': 'replace', 'path': '/spec/issuer', 'value': '$PINNIPED_URL'},{'op': 'replace', 'path': '/spec/audience', 'value': '$PINNIPED_URL'},{'op': 'replace', 'path': '/spec/tls/certificateAuthorityData', 'value': '$CA_BUNDLE'}]"
$ kubectl patch oidcidentityprovider upstream-oidc-identity-provider \
-n pinniped-supervisor \
--type json \
-p="[{'op': 'replace', 'path': '/spec/issuer', 'value': '$DEX_URL'},{'op': 'replace', 'path': '/spec/tls/certificateAuthorityData', 'value': '$CA_BUNDLE'}]"
$ kubectl patch cm pinniped-info \
-n kube-public \
--type json \
-p="[{'op': 'replace', 'path': '/data/issuer', 'value': '$PINNIPED_URL'},{'op': 'replace', 'path': '/data/issuer_ca_bundle_data', 'value': '$CA_BUNDLE'}]"
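Quoting inside those -p arguments is fragile: after shell expansion, each value must still sit inside JSON string quotes. One way to hedge against quoting mistakes is to build the patch body separately and inspect it before applying. A sketch, using a hypothetical URL:

```shell
# Hypothetical FQDN; substitute your real Pinniped URL
PINNIPED_URL="https://pinniped.example.com"

# Build the JSON patch body with the value safely double-quoted
PATCH=$(printf '[{"op": "replace", "path": "/spec/issuer", "value": "%s"}]' "$PINNIPED_URL")
echo "$PATCH"
```

The result can then be passed along as, for example, `kubectl patch federationdomain ... --type json -p="$PATCH"`.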

That’s it. You should now be able to retrieve your kubeconfig and access your clusters.

$ MGMT_CLUSTER_NAME=mgmt # update accordingly
$ tanzu management-cluster kubeconfig get
$ kubectl config use-context \
tanzu-cli-$MGMT_CLUSTER_NAME@$MGMT_CLUSTER_NAME
$ kubectl get all

Dex URL with Trusted Cert



Dodd Pfeffer

Solution Engineer on the VMware Tanzu team, helping customers achieve success with Kubernetes