Creating Kubernetes Service Accounts for Automation
--
CI/CD tools need access to Kubernetes in order to perform actions against a cluster. Just like the kubectl CLI, they require a kubeconfig. We will walk through the steps to create a service account and a kubeconfig for it. At the end, we will use Tanzu Mission Control to grant the service account the required permissions on the cluster.
Service accounts are namespaced resources. Create a namespace for the service account, and then create the service account itself. I’ve chosen automation as the name of the namespace and ci for the service account.
$ kubectl create ns automation
namespace/automation created
$ kubectl create serviceaccount ci -n automation
serviceaccount/ci created
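If you would rather manage these resources declaratively, the same two objects can be applied from a manifest. This is just a sketch equivalent to the commands above, using the same automation and ci names:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: automation
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci
  namespace: automation
EOF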
On clusters running Kubernetes versions prior to 1.24, Kubernetes automatically creates an auth token for the service account and places it in a secret within the same namespace as the service account. We want to retrieve the value of that token.
$ TOKENNAME=`kubectl -n automation get serviceaccount/ci -o jsonpath='{.secrets[0].name}'`
$ TOKEN=`kubectl -n automation get secret $TOKENNAME -o jsonpath='{.data.token}' | base64 --decode`
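On Kubernetes 1.24 and newer the token secret is no longer created automatically, so the jsonpath lookup above will come back empty. In that case you can request a token for the service account directly. Tokens issued this way expire, so for CI/CD use you may want to request a long duration (the API server may cap whatever you ask for):
$ TOKEN=`kubectl -n automation create token ci --duration=8760h`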
Now let’s create a context within our kubeconfig for the service account. First add the user, then create the context by associating the user with the current cluster. My cluster nickname is whiteharbor. Both my user nickname and my context name will be ci@whiteharbor.
$ kubectl config set-credentials ci@whiteharbor --token=$TOKEN
User "ci@whiteharbor" set.$ kubectl config set-context ci@whiteharbor --cluster whiteharbor --user=ci@whiteharbor
Context "ci@whiteharbor" created.
Now you should see the newly created context in the list of contexts.
$ kubectl config get-contexts
CURRENT   NAME                            CLUSTER       AUTHINFO
          ci@whiteharbor                  whiteharbor   ci@whiteharbor
          foo-admin@foo                   foo           foo-admin
*         whiteharbor-admin@whiteharbor   whiteharbor   whiteharbor-admin
Commonly you will need a standalone kubeconfig to use in a CI/CD tool. Use the following commands to switch to the new context and export it to a new file.
$ kubectl config use-context ci@whiteharbor
Switched to context "ci@whiteharbor".
$ kubectl config view --flatten --minify > /tmp/ci@whiteharbor-kubeconfig.yaml
The kubeconfig will look something like this…
$ cat /tmp/ci@whiteharbor-kubeconfig.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5ekNDQWJPZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdOREl5TWpFeE4xb1hEVE14TURVd01qSXlNall4TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlBuCm1CSzRRcVhlUG9uYVhYcDZJK21teHIvdm5KS0xsdE5SdFFNVnd4cTN2ZGRVK0pCeGNXVXNzdXAySUcwejhweVAKdU9tQVhUYlYrVFZRSFhITUNjT2kwRTBFY0IxN1MxQkJzVGxINXVCZkpCV2tQZGt0OEVzVXI5YmtLMzUrWGdQegpoUHlBbElCZlhmMmtZOGU1b3lmd3IwSWd0ak9kdW5UYjFTcUdweGpGYXE4RzFPRXV0RmxIQUJxSTNXL2JDVE55CmJ3Uisvem1JU3ZUWm9CZ091Y3VYeTRtQnRkOFdzamh5YUJsY3NUc092cnQyenBPZC85K2ZZTVJEbk05NkxJQWYKU2RYSXJxaElNN3dVVmYxZnFmUnQwR3VBQzZLZ1pQdGlxYTJxYWJUeHdwdU8rRTZFOFJaUzdFeEdHMFZnV09aMwpzMEF6ZEh5Sml6RVpwY01KNkdzQ0F3RUFBYU1tTUNRd0RnWURWUjBQQVFIL0JBUURBZ0trTUJJR0ExVWRFd0VCCi93UUlNQVlCQWY4Q0FRQXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTHgzU2s1ZXRaRzlOdHR1c2FiUXdCVFQKU0EvT3RhWHVhNE84ZEdocXB4MjdyM0owQ2hVQkxDbDlScUs2Qm5CNjJPQ1d3NmJUb05TVG5mSGFUdDlsZ05acgpuc1NZcWFOZVRCdjRjVndLTFNpUEo3Sk4vM2dUVFllbnJrL3diaURXaVBScWtsUXM5bXp2akMwdDRUSTdRUFNNClRWb0tDSkZlSnVRTGFCeSt6VjJYYjFLbUE1U1RpaW81TG9aVzNxTlhtWDg5WVorYWh1REt3NEZYbXBhUzVVbm0KM0M4SE5hUEZjTndFbW1BczBndVBOUE5uaDE4NkovRmVreWtFS2tRZkVXWk9EVzdUZFc3R1U1Ry9kcFpkTkJmVgpPV0xwZnNkMkpZQ3ArWnVDdlVEdGJma3pUUmFORmRxTDV5Wkh1M1M4djIya21RZGgvMmJ3RWllM05IWDRYcjQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.7.17:6443
  name: bearisland
contexts:
- context:
    cluster: bearisland
    namespace: elasticsearch-kibana
    user: ci
  name: bearisland-admin@bearisland
current-context: bearisland-admin@bearisland
kind: Config
preferences: {}
users:
- name: ci
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlVCRnZyRThEU2pHanVCMU9PSlNFVnR4VGZabUw3bXBTZ3hpVjduTnZEbHMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjb25jb3Vyc2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiY2ktdG9rZW4tN3hzbXYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2kiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1ZGI5ZmI0MC04ZWVkLTQzNTMtOWRkMC05NDQ2N2I0YWYyNWQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y29uY291cnNlOmNpIn0.gw78S2cjksotzNxcCSi-jpAU_pM-T-hvSaPQOj7Knmcl_0BDLBP-LUCncFNkPjmcJnlBS8IOuC90QF8t9hjT8DFFqu55i55_xC0d1mVBebrH1MfPwwKQTxExaISGgvOPJBv1KAyskKnxPI9M6X2oGcU89szhpLmsUf2seak59M2zsfNUDT8pbf4U1ZHpjdTsj3YUnO0FQzM5fMFlCutN60ZTNnpsALq9q0KEdiGLx8G0ebu5s63liQCW4yfI6f8dvsws7D2FOaGt_o7bUAJgMQJth-trxv2hJJiEy7Igbqwwc0l2IfHWoIRAspREAeV2yzVyN4SgxXd5zZt3_ODYtQ
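Before wiring the file into a CI/CD tool, it’s worth a quick sanity check that the token authenticates. Until permissions are granted (next section), an RBAC self-check should come back with no rather than an authentication error, which tells you the kubeconfig itself is working. A minimal sketch:
$ kubectl --kubeconfig /tmp/ci@whiteharbor-kubeconfig.yaml auth can-i list pods -n automation
no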
Permissions on the Cluster
The above handles authentication. However, the service account still has no permissions on the cluster. For this, you can create role bindings directly, or do as I do and use Tanzu Mission Control Access Policies.
Create a new role binding for the user system:serviceaccount:automation:ci.
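If you are not using Tanzu Mission Control, a plain RBAC binding accomplishes the same thing. This is only a sketch: the edit ClusterRole and the cluster-wide binding are assumptions, and you should scope the role and namespaces to whatever your pipeline actually needs.
$ kubectl create clusterrolebinding ci-automation --clusterrole=edit --serviceaccount=automation:ci
clusterrolebinding.rbac.authorization.k8s.io/ci-automation created
Once a binding like this is in place, the auth can-i check from earlier should flip from no to yes.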