This project documents a sample Tekton Pipeline that builds new versions of an application and performs a canary rollout using iter8. It is a superset of what is described in the blog post and assumes a basic understanding of iter8 and Tekton.
Istio: https://istio.io/docs/setup/
istioctl manifest apply \
--set profile=demo \
--set values.kiali.enabled=false \
--set values.grafana.enabled=false
Or, for Istio 1.7 or greater:
istioctl manifest install \
--set profile=demo \
--set values.kiali.enabled=false \
--set values.grafana.enabled=false \
--set values.prometheus.enabled=true
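With either invocation, you can confirm the install succeeded by checking that the Istio control-plane pods are running:

```shell
# The Istio control-plane pods should reach the Running state.
kubectl get pods --namespace istio-system
```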
iter8: https://iter8.tools/installation/kubernetes/
curl -L -s https://raw.githubusercontent.com/iter8-tools/iter8/v1.0.0-rc3/install/install.sh \
| /bin/bash -
Tekton: https://github.com/tektoncd/pipeline/blob/master/docs/install.md
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
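Tekton installs its controller and webhook into the tekton-pipelines namespace, so a quick check that the installation succeeded is:

```shell
# The Tekton controller and webhook pods should reach the Running state.
kubectl get pods --namespace tekton-pipelines
```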
For simplicity, we describe execution in a minikube cluster and assume that the pipeline is defined in the default namespace and will be executed using the default service account:
export PIPELINE_NAMESPACE=default
export SERVICE_ACCOUNT=default
We will demonstrate a canary rollout using the bookinfo application, a sample application used to demonstrate features of Istio. It comprises four microservices and will be deployed to the bookinfo-iter8 namespace:
export APPLICATION_NAMESPACE=bookinfo-iter8
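The target namespace must exist before the application is deployed; when following the iter8 tutorial it is also labeled for automatic Istio sidecar injection. A minimal sketch:

```shell
# Create the application namespace and enable automatic Istio sidecar injection.
kubectl create namespace ${APPLICATION_NAMESPACE}
kubectl label namespace ${APPLICATION_NAMESPACE} istio-injection=enabled
```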
The application can be deployed using the instructions provided by the iter8 tutorial. Alternatively, a Tekton Task can be used to deploy the application:
kubectl --namespace ${PIPELINE_NAMESPACE} apply \
--filename <rbac rules> \
--filename <tekton task>
tkn task start \
--param NAMESPACE=${APPLICATION_NAMESPACE} \
--serviceaccount ${SERVICE_ACCOUNT}
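Whichever method you use, you can confirm the four bookinfo microservices are up before proceeding:

```shell
# Each bookinfo pod should show 2/2 ready containers
# (the application container plus the injected Istio sidecar).
kubectl get pods --namespace ${APPLICATION_NAMESPACE}
```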
We will demonstrate the rollout using the reviews microservice. The sample pipeline builds code from a GitHub repository and deploys it using a canary rollout; if the new version satisfies the canary criteria, it is promoted. To modify and build new versions, fork the sample project: https://github.com/iter8-tools/bookinfoapp-reviews.
For reference, define some environment variables that refer to your fork and the docker repo you will use:
export GITHUB_REPO=<github repo>
export DOCKER_REPO=<dockerhub repo>
The build task reads the source code from a GitHub repository, builds a Docker image, and pushes it to a Docker registry. At execution time, the pods need permission to read the GitHub repository and write to the Docker registry. This can be accomplished by defining secrets and associating them with the service account that is used to run the pipeline.
We used a public repository on https://github.com, so no GitHub secret is needed. A secret providing access to DockerHub can be created from your local Docker configuration file:
kubectl create secret generic dockerhub \
--from-file=.dockerconfigjson=${DOCKER_CONFIG_FILE} \
--type=kubernetes.io/dockerconfigjson
By default, the Docker configuration is in ~/.docker/config.json. However, if this file contains only a reference to a credential store (for example, on a Mac), you will need to extract the credential details into a standalone file. For example, by the method described here.
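As a hypothetical sketch of building such a standalone file directly from credentials (the username, password, and output path below are placeholders you must substitute):

```shell
# Hypothetical credentials -- substitute your own DockerHub username and password.
DOCKER_USERNAME=alice
DOCKER_PASSWORD=s3cr3t

# DockerHub expects a base64-encoded "username:password" auth entry.
AUTH=$(printf '%s' "${DOCKER_USERNAME}:${DOCKER_PASSWORD}" | base64)

# Write a standalone Docker configuration file.
cat > /tmp/docker-config.json <<EOF
{"auths": {"https://index.docker.io/v1/": {"auth": "${AUTH}"}}}
EOF

export DOCKER_CONFIG_FILE=/tmp/docker-config.json
```

The resulting file can then be passed to the kubectl create secret command above via ${DOCKER_CONFIG_FILE}.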
For alternatives and additional details about authentication, see the Tekton Documentation.
When executing a Tekton pipeline, each task can be executed using a different service account. In the subsequent discussion, we assume that all tasks use the default service account, default.
The service account needs to be aware of the secret(s) providing access to GitHub and DockerHub:
kubectl patch --namespace ${PIPELINE_NAMESPACE} \
serviceaccount ${SERVICE_ACCOUNT} \
--patch '{"secrets": [{"name": "dockerhub"}]}'
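To confirm the secret is now attached to the service account:

```shell
# List the secrets associated with the service account; "dockerhub" should appear.
kubectl get serviceaccount ${SERVICE_ACCOUNT} \
  --namespace ${PIPELINE_NAMESPACE} \
  --output jsonpath='{.secrets[*].name}'
```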
Furthermore, the tasks need access to a number of cluster resources. The pipeline tasks create iter8 experiments, read Istio virtual services and destination rules, and create Kubernetes services and deployments. The service account that runs these tasks must have permission to take these actions.
A ClusterRole and ClusterRoleBinding can be used to define the necessary permissions and to assign them to the service account:
kubectl apply --filename - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tekton-iter8-role
rules:
- apiGroups: [""]
  resources: ["services", "nodes"]
  verbs: ["get", "list"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["iter8.tools"]
  resources: ["experiments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["networking.istio.io"]
  resources: ["destinationrules", "virtualservices"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tekton-iter8-binding-${PIPELINE_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-iter8-role
subjects:
- kind: ServiceAccount
  name: ${SERVICE_ACCOUNT}
  namespace: ${PIPELINE_NAMESPACE}
EOF
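Whether the binding grants the intended permissions can be spot-checked with kubectl auth can-i, impersonating the service account:

```shell
# Both commands should print "yes" once the ClusterRoleBinding is applied.
kubectl auth can-i create experiments.iter8.tools \
  --as system:serviceaccount:${PIPELINE_NAMESPACE}:${SERVICE_ACCOUNT}
kubectl auth can-i list virtualservices.networking.istio.io \
  --as system:serviceaccount:${PIPELINE_NAMESPACE}:${SERVICE_ACCOUNT}
```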
The sample pipeline uses two workspaces, experiment_storage and source_storage, to share files between tasks. These are backed by persistent volumes. On minikube, these can be defined as follows:
kubectl --namespace ${PIPELINE_NAMESPACE} \
apply --filename https://raw.githubusercontent.com/kalantar/iter8-tekton/master/volumes.yaml
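For reference only, a claim backing one of these workspaces might look like the following sketch (the actual definitions are in volumes.yaml; the storage size here is an assumption):

```shell
kubectl --namespace ${PIPELINE_NAMESPACE} apply --filename - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```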
The pipeline we've defined can be visualized as a graph of tasks.
It can be defined as follows:
kubectl --namespace ${PIPELINE_NAMESPACE} apply \
--filename https://raw.githubusercontent.com/kalantar/iter8-tekton/master/tasks.yaml \
--filename https://raw.githubusercontent.com/kalantar/iter8-tekton/master/canary-pipeline.yaml
Briefly, the behavior of each task is:
- clone-source -- clones the GitHub repository containing the application to build and deploy
- identify-baseline -- identifies the baseline version using heuristic rules
- define-experiment -- defines an iter8 experiment from a template in the source repository
- create-experiment -- applies the experiment defined by define-experiment
- build-and-push -- builds the microservice from source and pushes the image
- define-canary -- defines the deployment YAML for the new version using a kustomize template in the source repository
- deploy-canary -- deploys the canary version
- wait-completion -- waits for the experiment to complete
Tasks related to load generation for demonstration purposes:
- identify-endpoint -- identifies the application endpoint
- generate-load -- generates load against the endpoint
- stop-load-generation -- terminates the load generation
Tasks related to cleanup:
- generate-uid -- generates a unique UUID used to link tasks together
- cleanup-scratch-workspace -- deletes any intermediate files
- cleanup-source-workspace -- deletes the source code and any intermediate files
Finally, to execute the pipeline, we must create a PipelineRun resource. The PipelineRun resource sets any run-specific parameters. A PipelineRun can be created manually as follows:
export HOST='bookinfo.example.com'
kubectl apply --filename - <<EOF
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: canary-rollout
spec:
  pipelineRef:
    name: canary-rollout-iter8
  workspaces:
  - name: source
    persistentVolumeClaim:
      claimName: source-storage
  - name: experiment-dir
    persistentVolumeClaim:
      claimName: experiment-storage
  params:
  - name: application-source
    value: ${GITHUB_REPO}
  - name: application-namespace
    value: ${APPLICATION_NAMESPACE}
  - name: application-image
    value: ${DOCKER_REPO}
  - name: application-query
    value: productpage
  - name: HOST
    value: ${HOST}
  - name: experiment-template
    value: iter8/experiment.yaml
EOF
We can follow the execution of the pipeline:
watch tkn taskrun list --namespace ${PIPELINE_NAMESPACE}
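Logs from all tasks in the run can also be streamed with the tkn CLI:

```shell
# Stream logs from every task of the run as it executes.
tkn pipelinerun logs canary-rollout --follow --namespace ${PIPELINE_NAMESPACE}
```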
We can follow the execution of iter8 by observing the creation and progress of the experiment:
watch kubectl --namespace ${APPLICATION_NAMESPACE} get experiments.iter8.tools
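Details of an individual experiment, such as its criteria assessments and current traffic split, can be inspected with:

```shell
# Show the full status of the experiment(s) created by the pipeline.
kubectl --namespace ${APPLICATION_NAMESPACE} describe experiments.iter8.tools
```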