A key value delivered by Tanzu Application Platform (TAP) is coordinating the work of developers and operators along an application’s path to production. TAP achieves this through Supply Chain Choreography: a platform operator describes the path to production with a Cartographer SupplyChain, and a developer submits Workloads to be processed by that supply chain. TAP ships with out-of-the-box (OOTB) supply chains that a platform operator can use right from the start, or use as a guide for constructing a customized supply chain. In my companion blog post, I did a deep dive into the OOTB Supply Chain for Testing and Scanning configured on the “Build cluster” within a multi-cluster TAP topology. In this post, I will review the key developer and operator activities that influence supply chain execution.
It is through Kubernetes resources that the developer and operator influence the processing of workloads by the supply chain.
The developer is responsible for: the Workload and the application source code.
The operator is responsible for: the SupplyChain, test Pipelines, ScanPolicies, the Build Service ClusterBuilder, Conventions, and application manifest templates.
Day 0: Operator Deploys Supply Chain
The platform operator is responsible for deploying TAP on the “Build cluster”. This can be done using the build profile. The platform operator configures the Build Service, supply chain, scanner, target Metadata Store, and GitOps parameters in the TAP data values file.
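For illustration, a build-profile data values file might include entries like the following. This is an abbreviated sketch: exact key names and structure vary across TAP versions, and the registry, repository, and URL values are placeholders.

```yaml
# Abbreviated build-profile values sketch; exact keys depend on the TAP version.
profile: build

buildservice:
  kp_default_repository: "registry.example.com/tap/build-service"   # placeholder registry

supply_chain: testing_scanning

ootb_supply_chain_testing_scanning:
  registry:
    server: "registry.example.com"   # placeholder
    repository: "tap/workloads"      # placeholder
  gitops:
    ssh_secret: "git-ssh"            # secret created later in the developer namespace

grype:
  namespace: "dev"                                  # developer namespace
  targetImagePullSecret: "registry-credentials"

metadata_store:
  url: "https://metadata-store.example.com"   # placeholder target Metadata Store URL
```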
Day 1: Operator Prepares Namespace for Developer
With the supply chain configured, when a new request for workloads is received, the platform operator must prepare a namespace on the Build cluster for the developer’s workloads.
- Create the namespace, RBAC, and registry image secrets for the Build Service.
- Create a secret for GitOps and associate it with the namespace’s default service account.
- Create a ScanPolicy within the namespace. The ScanPolicy is used by the Supply Chain Security Tools scanning system to evaluate CVEs identified in source and image scans for compliance.
- Create a Pipeline in the namespace. The Pipeline resource is part of the Tekton API. It defines the set of steps to execute to test the application.
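As a sketch, the ScanPolicy and Pipeline created in the developer namespace follow the shapes below, adapted from the patterns in the TAP documentation. The rego is trimmed to the essential idea, and the test step assumes a hypothetical Maven project; both would be tailored to the organization and application.

```yaml
apiVersion: scanning.apps.tanzu.vmware.com/v1beta1
kind: ScanPolicy
metadata:
  name: scan-policy
spec:
  regoFile: |
    package main

    # CVEs rated at these severities fail the scan unless listed in ignoreCves
    notAllowedSeverities := ["Critical", "High"]
    ignoreCves := []

    contains(array, elem) = true {
      array[_] = elem
    }

    isSafe(match) {
      fails := contains(notAllowedSeverities, match.ratings.rating[_].severity)
      not fails
    }

    isSafe(match) {
      contains(ignoreCves, match.id)
    }

    deny[msg] {
      comp := input.bom.components.component[_]
      vuln := comp.vulnerabilities.vulnerability[_]
      not isSafe(vuln)
      msg = sprintf("CVE %s %s", [comp.name, vuln.id])
    }
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: developer-defined-tekton-pipeline
  labels:
    apps.tanzu.vmware.com/pipeline: test   # label the supply chain uses to discover this Pipeline
spec:
  params:
    - name: source-url        # supplied by the supply chain at run time
    - name: source-revision
  tasks:
    - name: test
      params:
        - name: source-url
          value: $(params.source-url)
      taskSpec:
        params:
          - name: source-url
        steps:
          - name: test
            image: gradle     # any image carrying the project's test toolchain
            script: |-
              cd `mktemp -d`
              wget -qO- $(params.source-url) | tar xvz -m
              ./mvnw test     # hypothetical Maven project
```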
Day 1: Developer Submits Workload
The developer builds the application and commits the code to the main branch of a remote Git repository, then describes the workload in a manifest and applies it to the developer namespace provided by the operator.
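A minimal Workload manifest for this flow might look like the following. The name, namespace, and Git URL are placeholders; the `has-tests` label is what routes the Workload to the Testing and Scanning supply chain.

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: tanzu-java-web-app            # placeholder application name
  namespace: dev                      # namespace prepared by the operator
  labels:
    apps.tanzu.vmware.com/workload-type: web
    apps.tanzu.vmware.com/has-tests: "true"   # selects the Testing and Scanning supply chain
    app.kubernetes.io/part-of: tanzu-java-web-app
spec:
  source:
    git:
      url: https://github.com/example-org/tanzu-java-web-app   # placeholder repository
      ref:
        branch: main
```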
If all goes well, you can observe a healthy status for every participant in the supply chain for the workload. The supply chain also produces a Deliverable resource in the namespace that can be exported and used to deploy the application to a TAP Run cluster.
Supply Chain Choreography
This is not the end of processing for the workload. Because all components involved in the supply chain, including the SupplyChain itself, are implemented as Kubernetes resources, the controllers that respond to these resources are continually reconciling actual and desired state. Let’s review the day 2 use cases that trigger actions within the supply chain.
Day 2: Developer Actions
Developer actions that trigger supply chain activity:
- Developer pushes new commits to the source code repository. The Flux Source Controller monitors the source Git repository and retrieves the latest commits, updating the tarball in its artifact cache and updating the GitRepository status. This in turn triggers a re-execution of each of the other participants in the supply chain, just as with the initial execution.
- Developer enriches the application’s desired runtime configuration. The developer updates the Workload spec.env or spec.resources configuration. As this configuration is directly referenced in the Config-Writer and App-Config stages, those stages are reconciled. The end result is that the application’s Kubernetes manifests are updated to reflect these changes. The Source-Test and Image stages are not impacted.
- Developer explicitly declares intent for App Live View. The developer updates the Workload spec.params configuration. If any ClusterTemplate references the param, the templated resource is re-stamped, triggering successive reconciliation of the downstream stages.
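As a sketch, the developer-side updates described above are all edits to the Workload spec, for example as below. The values are illustrative, and the `annotations` param only has an effect if a convention or template actually consumes it.

```yaml
spec:
  env:
    - name: SPRING_PROFILES_ACTIVE   # illustrative environment variable
      value: production
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
  params:
    - name: annotations              # only consumed if a ClusterTemplate or convention references it
      value:
        autoscaling.knative.dev/minScale: "1"
```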
Day 2: Operator Actions
Operator actions that trigger supply chain activity:
- Organization’s scan policy is updated to whitelist a given CVE. The operator updates the ScanPolicy. The Source-Scanner and Image-Scanner stages re-run, and the results are stored in the Supply Chain Security Tools Store. No other supply chain stages execute.
- VMware publishes patched versions of the Build Service base image stack and/or Cloud Native Buildpacks. The operator updates the Build Service ClusterBuilder (manually or through automation). The Image-Builder stage reconciles and, if successful, triggers each of the remaining stages in the pipeline: Image-Scanner, Config-Provider, App-Config, and Config-Writer. The result is that the application container image is rebuilt, the application’s Kubernetes manifests are updated to refer to the new image, and new image scan results are generated and published.
- Organization updates its desired pod spec for NodeJS applications. The platform operations team develops and deploys a convention server to implement the new convention, and applies a convention resource describing it. All new Config-Provider stages take this convention into account; Workloads that have already been processed by this stage are re-evaluated within the default 10-hour Convention Controller reconciliation loop. At that point, the NodeJS applications have the convention applied, and the App-Config and Config-Writer stages reconcile with these changes. This results in updated application Kubernetes manifests written to the GitOps repo.
- Organization decides to customize the target runtime for the application to lower-level Kubernetes resources. This is governed by the App-Config stage. The ClusterConfigTemplate for this stage is managed by the OOTB Templates Carvel package and is explicitly referenced by the OOTB Supply Chain for Testing and Scanning. As such, the platform operations team needs to create a new ClusterConfigTemplate containing the desired resources (for example, an Ingress, Service, and Deployment); the OOTB template can be used as a reference. The team then clones and modifies the OOTB supply chain, replacing the template reference with the new ClusterConfigTemplate. Depending on whether they want to keep the OOTB supply chain in place, they may have to define a distinct “apps.tanzu.vmware.com/workload-type” label for the new supply chain; if not, they would exclude the OOTB supply chain from the TAP data values file and update the TAP deployment. The platform operations team now has complete ownership of the customized supply chain, while still leveraging many of the OOTB templates. Assuming no other changes, once complete, all workloads handled by the supply chain have the App-Config stage reconciled, followed by Config-Writer. The application’s Kubernetes manifests are updated in the GitOps repository; however, the application container image they reference does not change.
- Organization decides to add third-party image scanning. Similar to the previous example, the platform operations team needs to create a custom SupplyChain. The OOTB supply chain can be used as a reference, and the two can live side by side (assuming different workload types), or the OOTB supply chain can be replaced. The OOTB templates can still be used by the customized supply chain. A simple integration might use a new ClusterImageTemplate that stamps out a Runnable wrapping a Tekton PipelineRun or TaskRun resource, similar to the Source-Scanner and Config-Writer templates. With this, a pod can run the required scanning processes. Any Workloads governed by this custom supply chain reconcile the new stage as soon as it is introduced.
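A rough sketch of that last integration is below, using Cartographer’s templating syntax. The template and ClusterRunTemplate names are hypothetical, as is the output path; a real integration would depend on how the scanning tool exposes its results.

```yaml
apiVersion: carto.run/v1alpha1
kind: ClusterImageTemplate
metadata:
  name: third-party-image-scan            # hypothetical template name
spec:
  # Path in the stamped Runnable where the scanned image reference is surfaced
  imagePath: .status.outputs.image
  template:
    apiVersion: carto.run/v1alpha1
    kind: Runnable
    metadata:
      name: $(workload.metadata.name)$-third-party-scan
    spec:
      runTemplateRef:
        name: third-party-scan-run-template   # hypothetical ClusterRunTemplate wrapping a Tekton TaskRun
      inputs:
        image: $(image)$                      # image produced by the preceding Image-Builder stage
```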
Note: The platform operator may choose to modify the Pipeline resource referenced by the PipelineRun in the Runnable that the Source-Tester ClusterSourceTemplate stamps out. However, since the Runnable is a wrapper around the immutable PipelineRun resource, there is no immediate impact from the Pipeline change. The next time the GitRepository is updated, a new Runnable will be stamped whose PipelineRun will be based on the updated Pipeline.