BYO Certs for TKG 1.4.0 Auth

Dodd Pfeffer
5 min read · Nov 3, 2021

The process described in this blog has changed slightly with TKG 1.4.1. If you are using TKG 1.4.1 through TKG 1.5.2, please go to this version of the blog. If you are using TKG 1.5.3+, please go to this version of the blog.

Tanzu Kubernetes Grid (TKG) provides the means to deploy consistent, upstream-aligned Kubernetes clusters leveraging the Kubernetes subproject Cluster API. TKG 1.4 introduced the use of the Carvel API resources format for core and user-managed packages. TKG Auth is delivered by the Pinniped core package.

You configure TKG Auth to meet your specific implementation requirements using IDENTITY_MANAGEMENT_TYPE and the LDAP_ or OIDC_ configuration parameters. However, certain custom configuration patterns can only be applied after management cluster creation because they depend on resources that exist only after the cluster is created. The result of this base configuration is the use of self-signed certs and auto-generated URIs (the exact format is IaaS dependent) for Pinniped and Dex.

(Screenshots: Pinniped URL, Dex URL, and Dex URL after agreeing to proceed)

We want to present the browser-based authentication flow to the user with a friendly FQDN, the standard HTTPS port 443, and valid SSL certificates. We will follow these steps:

  1. Setup DNS
  2. Generate Certificates and Add to Cluster
  3. Update Pinniped Core Package
  4. “Post” Post Pinniped Deploy Configuration

Note: For the walkthrough below, we assume you have enabled LDAP and thus Dex is deployed with your TKG management cluster. If you are using OIDC for auth, you can still follow the steps below; simply skip the references to Dex.

Initial Setup

I’ve deployed the management cluster with IDENTITY_MANAGEMENT_TYPE set to LDAP. My management cluster is named mgmt, and I want my FQDNs to be pinniped.mgmt.tanzu-lab.winterfell.life and dex.mgmt.tanzu-lab.winterfell.life. I have an offline process to generate certificates for my organization.

Setup DNS

First, we set up DNS entries for the custom FQDNs of the Pinniped Supervisor and Dex services.

If you have deployed TKG to Azure or AWS, you can retrieve the External IPs assigned to the services. Use the values from the query below to create DNS entries for pinniped.mgmt.tanzu-lab.winterfell.life and dex.mgmt.tanzu-lab.winterfell.life. Then skip to the next section.

$ kubectl get service pinniped-supervisor -n pinniped-supervisor
$ kubectl get service dexsvc -n tanzu-system-auth
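
If you only want the address itself, a jsonpath query such as the sketch below pulls the external IP or hostname straight from the service status (which field is populated depends on your IaaS; AWS typically returns a hostname, Azure an IP).

$ kubectl get service pinniped-supervisor -n pinniped-supervisor \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'
$ kubectl get service dexsvc -n tanzu-system-auth \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'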

If you have deployed TKG to vSphere, the base configuration exposes the Pinniped and Dex services as type NodePort, because a vSphere deployment may not have a cloud provider that can fulfill services of type LoadBalancer. For our desired user experience, however, a LoadBalancer provider is required, and we assume you enabled the NSX ALB integration for this purpose. So we must update the service types. The service type is not an exposed configuration property, so we must apply a ytt overlay. The TKG docs describe this process in detail. For quick configuration, you can edit the core package addon secret.

$ MGMT_CLUSTER_NAME=mgmt # update accordingly
$ kubectl edit secret $MGMT_CLUSTER_NAME-pinniped-addon -n tkg-system

And then add the following stringData.

...
stringData:
  overlays.yaml: |
    #@ load("@ytt:overlay", "overlay")

    #@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "pinniped-supervisor", "namespace": "pinniped-supervisor"}})
    ---
    #@overlay/replace
    spec:
      type: LoadBalancer
      selector:
        app: pinniped-supervisor
      ports:
        - name: https
          protocol: TCP
          port: 443
          targetPort: 8443

    #@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "dexsvc", "namespace": "tanzu-system-auth"}}), expects="0+"
    ---
    #@overlay/replace
    spec:
      type: LoadBalancer
      selector:
        app: dex
      ports:
        - name: https
          protocol: TCP
          port: 443
          targetPort: 5556
...
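
As an alternative to kubectl edit, you can patch the overlays.yaml key directly. This is a minimal sketch that assumes you have saved the overlay content above (everything under the overlays.yaml key) to a local file, /tmp/pinniped-overlays.yaml.

$ OVERLAYS_B64=$(base64 < /tmp/pinniped-overlays.yaml | tr -d '\n')
$ kubectl patch secret $MGMT_CLUSTER_NAME-pinniped-addon -n tkg-system \
    -p '{"data": {"overlays.yaml": "'$OVERLAYS_B64'"}}'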

TKG will reconcile the new configuration overlay and update the services. Retrieve the External IP assigned using the query below.

$ kubectl get service pinniped-supervisor -n pinniped-supervisor
$ kubectl get service dexsvc -n tanzu-system-auth
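
If the services still show type NodePort, kapp-controller may not have reconciled the change yet; it can take a few minutes. You can check the status of the core add-on App while you wait (on the management cluster this App should be named pinniped in the tkg-system namespace).

$ kubectl get app pinniped -n tkg-system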

Generate Certificates and Add to Cluster

Next, follow your organization's process to generate certificates for the desired FQDNs. Create a secret named custom-auth-cert-tls in each of the pinniped-supervisor and tanzu-system-auth namespaces, containing the tls.key and tls.crt.

$ kubectl create secret tls custom-auth-cert-tls \
--namespace pinniped-supervisor \
--cert=path/to/cert/file \
--key=path/to/key/file
$ kubectl create secret tls custom-auth-cert-tls \
--namespace tanzu-system-auth \
--cert=path/to/cert/file \
--key=path/to/key/file
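
To sanity-check what was stored, you can decode the certificate from one of the secrets and confirm its subject and expiry match your FQDN (this assumes openssl is available locally).

$ kubectl get secret custom-auth-cert-tls -n pinniped-supervisor \
    -o jsonpath="{.data.tls\.crt}" | base64 --decode \
    | openssl x509 -noout -subject -enddate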

Update Pinniped Core Package

Now that the certificates are in place, we need to update the Pinniped package to tell it about the secrets.

$ MGMT_CLUSTER_NAME=mgmt # update accordingly
$ kubectl get secret $MGMT_CLUSTER_NAME-pinniped-addon -n tkg-system \
    -o jsonpath="{.data.values\.yaml}" | base64 --decode > /tmp/pinniped-addon-values.yaml

Edit /tmp/pinniped-addon-values.yaml, setting custom_tls_secret to "custom-auth-cert-tls".

...
custom_tls_secret: "custom-auth-cert-tls"
...

Patch the addon secret values.yaml key.

$ NEW_VALUES_YAML=`cat /tmp/pinniped-addon-values.yaml | base64`
$ kubectl patch secret $MGMT_CLUSTER_NAME-pinniped-addon -n tkg-system \
    -p '{"data": {"values.yaml": "'$NEW_VALUES_YAML'"}}'
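
One caveat: on Linux, base64 wraps its output at 76 characters by default, and the embedded newlines will break the single-line JSON patch payload. If you hit that, strip the newlines when capturing the value (the same applies to the CA_BUNDLE variable used later).

$ NEW_VALUES_YAML=$(base64 < /tmp/pinniped-addon-values.yaml | tr -d '\n')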

By patching the $MGMT_CLUSTER_NAME-pinniped-addon secret, the Pinniped Supervisor and Dex pods will be re-created and will terminate their TLS connections with the custom certificates. The pinniped-post-deploy-job will also be re-run.

$ kubectl get jobs -n pinniped-supervisor
NAME                       COMPLETIONS   DURATION   AGE
pinniped-post-deploy-job   1/1           6m44s      6m44s
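
You can also confirm from outside the cluster that the Supervisor endpoint is now serving your certificate (the FQDN below is the example from this walkthrough; substitute your own).

$ openssl s_client -connect pinniped.mgmt.tanzu-lab.winterfell.life:443 \
    -servername pinniped.mgmt.tanzu-lab.winterfell.life </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -enddate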

“Post” Post Pinniped Deploy Configuration

The pinniped-post-deploy-job updates resources based upon the configuration. However, it does not know about our FQDNs and custom CA. So we need to manually update the Dex config map and the custom Pinniped resources with references to our FQDNs and the CA used for our custom TLS certificates.

$ CA_BUNDLE=`cat /path/to/ca/file | base64`
$ PINNIPED_URL=https://pinniped.mgmt.tanzu-lab.winterfell.life
$ DEX_URL=https://dex.mgmt.tanzu-lab.winterfell.life
$ kubectl edit cm dex -n tanzu-system-auth
# edit and save configmap ...
...
data:
  config.yaml: |
    issuer: <dex_url_from_above>
    ...
    staticClients:
    - redirectURIs:
      - <pinniped_url_from_above>/callback
    ...
# And bounce dex
$ kubectl rollout restart deployment dex --namespace tanzu-system-auth
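# Optionally wait for the restarted dex deployment to finish rolling out
# before patching the Pinniped resources below.
$ kubectl rollout status deployment dex --namespace tanzu-system-auth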
$ kubectl patch federationdomain pinniped-federation-domain \
    -n pinniped-supervisor \
    --type json \
    -p '[{"op": "replace", "path": "/spec/issuer", "value": "'$PINNIPED_URL'"}]'
$ kubectl patch jwtauthenticator tkg-jwt-authenticator \
    -n pinniped-concierge \
    --type json \
    -p '[{"op": "replace", "path": "/spec/issuer", "value": "'$PINNIPED_URL'"}, {"op": "replace", "path": "/spec/audience", "value": "'$PINNIPED_URL'"}, {"op": "replace", "path": "/spec/tls/certificateAuthorityData", "value": "'$CA_BUNDLE'"}]'
$ kubectl patch oidcidentityprovider upstream-oidc-identity-provider \
    -n pinniped-supervisor \
    --type json \
    -p '[{"op": "replace", "path": "/spec/issuer", "value": "'$DEX_URL'"}, {"op": "replace", "path": "/spec/tls/certificateAuthorityData", "value": "'$CA_BUNDLE'"}]'
$ kubectl patch cm pinniped-info \
    -n kube-public \
    --type json \
    -p '[{"op": "replace", "path": "/data/issuer", "value": "'$PINNIPED_URL'"}, {"op": "replace", "path": "/data/issuer_ca_bundle_data", "value": "'$CA_BUNDLE'"}]'

That's it! You should now be able to retrieve your kubeconfig and access your clusters.

$ MGMT_CLUSTER_NAME=mgmt # update accordingly
$ tanzu management-cluster kubeconfig get
$ kubectl config use-context \
    tanzu-cli-$MGMT_CLUSTER_NAME@$MGMT_CLUSTER_NAME
$ kubectl get all
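
To hand access to other users, you can also export the Pinniped-enabled kubeconfig to a file of your choosing (the path below is just an example) and distribute it.

$ tanzu management-cluster kubeconfig get --export-file /tmp/mgmt-kubeconfig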

(Screenshot: Dex URL with trusted cert)


Dodd Pfeffer

Solution Engineer on the VMware Tanzu team, helping customers achieve success with Kubernetes