
continuous-deployment-on-kubernetes's Introduction

Lab: Build a Continuous Deployment Pipeline with Jenkins and Kubernetes

For a more in-depth best-practices guide, see the solution posted here.

Introduction

This guide will take you through the steps necessary to continuously deliver your software to end users by leveraging Google Kubernetes Engine and Jenkins to orchestrate the software delivery pipeline. If you are not familiar with basic Kubernetes concepts, have a look at Kubernetes 101.

In order to accomplish this goal you will use the following Jenkins plugins:

  • Jenkins Kubernetes Plugin - starts Jenkins build executor containers in the Kubernetes cluster when builds are requested, and terminates those containers when builds complete, freeing up resources for the rest of the cluster
  • Jenkins Pipelines - define the build pipeline declaratively and keep it checked into source code management alongside the application code
  • Google OAuth Plugin - allows you to add your Google OAuth credentials to Jenkins

In order to deploy the application with Kubernetes you will use the following resources:

  • Deployments - replicate the application across the Kubernetes nodes and allow a controlled rolling update of the software across the fleet of application instances
  • Services - load balancing and service discovery for the internal services
  • Ingress - external load balancing and SSL termination for the external service
  • Secrets - secure storage of non-public configuration information, specifically SSL certificates in our case (see the example after this list)
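
If you want a quick way to see all of these resources once they have been created later in the lab, a single kubectl query (not one of the lab's steps) lists the four types together; depending on which manifests your copy of the repo actually applies, some types may come back empty:

    # List the lab's resource types in the production namespace (optional check).
    kubectl --namespace=production get deployments,services,ingress,secrets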

Prerequisites

  1. A Google Cloud Platform Account
  2. Enable the Compute Engine, Kubernetes Engine, and Cloud Build APIs

Do this first

In this section you will start your Google Cloud Shell and clone the lab code repository to it.

  1. Create a new Google Cloud Platform project: https://console.developers.google.com/project

  2. Click the Activate Cloud Shell icon in the top-right and wait for your shell to open.

    If you are prompted with a Learn more message, click Continue to finish opening the Cloud Shell.

  3. When the shell is open, use the gcloud command line interface tool to set your default compute zone:

    gcloud config set compute/zone us-east1-d

    Output (do not copy):

    Updated property [compute/zone].
    
  4. Set an environment variable with your project:

    export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)

    Output (do not copy):

    Your active configuration is: [cloudshell-...]
    
  5. Clone the lab repository in your Cloud Shell, then cd into that directory:

    git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git

    Output (do not copy):

    Cloning into 'continuous-deployment-on-kubernetes'...
    ...
    
    cd continuous-deployment-on-kubernetes

Create a Service Account with permissions

  1. Create a service account, on Google Cloud Platform (GCP).

    Creating a dedicated service account is the recommended way to avoid granting Jenkins and the cluster more permissions than they need.

    gcloud iam service-accounts create jenkins-sa \
        --display-name "jenkins-sa"

    Output (do not copy):

    Created service account [jenkins-sa].
    
  2. Add required permissions, to the service account, using predefined roles.

    Most of these permissions relate to Jenkins' use of Cloud Build and to storing and retrieving build artifacts in Cloud Storage. The service account also needs to allow the Jenkins agent to read from the repository you will create in Cloud Source Repositories (CSR).

    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/viewer"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/source.reader"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/storage.admin"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/storage.objectAdmin"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/cloudbuild.builds.editor"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/container.developer"

    You can check the permissions added using IAM & admin in Cloud Console.
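
    As an optional shortcut (equivalent to the six commands above, not an extra lab step), the same bindings can be applied in a shell loop:

    # Apply the same six predefined roles to the jenkins-sa service account.
    for role in roles/viewer roles/source.reader roles/storage.admin \
                roles/storage.objectAdmin roles/cloudbuild.builds.editor \
                roles/container.developer; do
      gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
          --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
          --role "$role"
    done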

  3. Export the service account credentials to a JSON key file in Cloud Shell:

    gcloud iam service-accounts keys create ~/jenkins-sa-key.json \
        --iam-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"

    Output (do not copy):

    created key [...] of type [json] as [/home/.../jenkins-sa-key.json] for [jenkins-sa@...iam.gserviceaccount.com]
    
  4. Download the JSON key file to your local machine.

    Click Download File from the More menu on the Cloud Shell toolbar.

  5. Enter the File path as jenkins-sa-key.json and click Download.

    The file will be downloaded to your local machine, for use later.

Create a Kubernetes Cluster

  1. Provision the cluster with gcloud:

    Use Google Kubernetes Engine (GKE) to create and manage your Kubernetes cluster, named jenkins-cd. Use the service account created earlier.

    gcloud container clusters create jenkins-cd \
      --num-nodes 2 \
      --machine-type n1-standard-2 \
      --cluster-version 1.15 \
      --service-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"

    Output (do not copy):

    NAME        LOCATION    MASTER_VERSION  MASTER_IP     MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
    jenkins-cd  us-east1-d  1.15.11-gke.15   35.229.29.69  n1-standard-2 1.15.11-gke.15  2          RUNNING
    
  2. Once that operation completes, retrieve the credentials for your cluster.

    gcloud container clusters get-credentials jenkins-cd

    Output (do not copy):

    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for jenkins-cd.
    
  3. Confirm that the cluster is running and kubectl is working by listing pods:

    kubectl get pods

    Output (do not copy):

    No resources found.
    

    You would see an error if the cluster was not created, or you did not have permissions.

  4. Add yourself as a cluster administrator in the cluster's RBAC so that you can give Jenkins permissions in the cluster:

    kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)

    Output (do not copy):

    Your active configuration is: [cloudshell-...]
    clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
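
    If you want to confirm the binding took effect (an optional check, not part of the lab), ask the API server whether your account can now manage RBAC objects:

    # Should print "yes" once the cluster-admin binding above is active.
    kubectl auth can-i create clusterrolebindings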
    

Install Helm

In this lab, you will use Helm to install Jenkins from a stable chart. Helm is a package manager that makes it easy to configure and deploy Kubernetes applications. Once you have Jenkins installed, you'll be able to set up your CI/CD pipeline.

  1. Download the Helm binary archive:

    wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
  2. Unpack the archive and copy the helm binary to your working directory:

    tar zxfv helm-v3.2.1-linux-amd64.tar.gz
    cp linux-amd64/helm .
  3. Add the official stable repository.

    ./helm repo add stable https://kubernetes-charts.storage.googleapis.com
  4. Ensure Helm is properly installed by running the following command. You should see version v3.2.1 appear:

    ./helm version

    Output (do not copy):

    version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
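
    Optionally, if the stable chart repository added above is still being served, you can also confirm that the Jenkins chart is visible to this Helm binary:

    # List a few available versions of the stable Jenkins chart (optional check).
    ./helm search repo stable/jenkins --versions | head -n 5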
    

Configure and Install Jenkins

You will use a custom values file to add the GCP specific plugin necessary to use service account credentials to reach your Cloud Source Repository.

  1. Use the Helm CLI to deploy the chart with your configuration set.

    ./helm install cd-jenkins -f jenkins/values.yaml stable/jenkins --version 1.7.3 --wait

    Output (do not copy):

    ...
    For more information on running Jenkins on Kubernetes, visit:
    https://cloud.google.com/solutions/jenkins-on-container-engine
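
    You can optionally verify that Helm recorded the release:

    # Shows the cd-jenkins release with its chart and app versions.
    ./helm list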
    
  2. The Jenkins pod STATUS should change to Running when it's ready:

    kubectl get pods

    Output (do not copy):

    NAME                          READY     STATUS    RESTARTS   AGE
    cd-jenkins-7c786475dd-vbhg4   1/1       Running   0          1m
    
  3. Configure the Jenkins service account to be able to deploy to the cluster.

    kubectl create clusterrolebinding jenkins-deploy --clusterrole=cluster-admin --serviceaccount=default:cd-jenkins

    Output (do not copy):

    clusterrolebinding.rbac.authorization.k8s.io/jenkins-deploy created
    
  4. Set up port forwarding to the Jenkins UI, from Cloud Shell:

    export JENKINS_POD_NAME=$(kubectl get pods -l "app.kubernetes.io/component=jenkins-master" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $JENKINS_POD_NAME 8080:8080 >> /dev/null &
  5. Now, check that the Jenkins Service was created properly:

    kubectl get svc

    Output (do not copy):

    NAME               CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
    cd-jenkins         10.35.249.67   <none>        8080/TCP    3h
    cd-jenkins-agent   10.35.248.1    <none>        50000/TCP   3h
    kubernetes         10.35.240.1    <none>        443/TCP     9h
    

    This Jenkins configuration is using the Kubernetes Plugin, so that builder nodes will be automatically launched as necessary when the Jenkins master requests them. Upon completion of the work, the builder nodes will be automatically turned down, and their resources added back to the cluster's resource pool.

    Notice that this service exposes ports 8080 and 50000 for any pods that match the selector. This exposes the Jenkins web UI and builder/agent registration ports within the Kubernetes cluster. Additionally, the Jenkins UI service (cd-jenkins) is exposed using a ClusterIP, so it is not accessible from outside the cluster.
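
    A quick way to see the service type and port directly (an optional check) is a jsonpath query against the service:

    # Prints the service type (ClusterIP) and the port it exposes (8080).
    kubectl get svc cd-jenkins \
        -o jsonpath='{.spec.type} {.spec.ports[*].port}{"\n"}'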

Connect to Jenkins

  1. The Jenkins chart will automatically create an admin password for you. To retrieve it, run:

    printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
  2. To get to the Jenkins user interface, click the Web Preview button in Cloud Shell, then click Preview on port 8080.

You should now be able to log in with the username admin and your auto-generated password.

Your progress, and what's next

You've got a Kubernetes cluster managed by GKE. You've deployed:

  • a Jenkins Deployment
  • a (non-public) service that exposes Jenkins to its agent containers

You have the tools to build a continuous deployment pipeline. Now you need a sample app to deploy continuously.

The sample app

You'll use a very simple sample application - gceme - as the basis for your CD pipeline. gceme is written in Go and is located in the sample-app directory in this repo. When you run the gceme binary on a GCE instance, it displays the instance's metadata in a pretty card.

The binary supports two modes of operation, designed to mimic a microservice. In backend mode, gceme listens on a port (8080 by default) and returns GCE instance metadata as JSON, with Content-Type: application/json. In frontend mode, gceme queries a backend gceme service and renders that JSON in a simple UI. It looks roughly like this:

-----------      ------------      ~~~~~~~~~~~~        -----------
|         |      |          |      |          |        |         |
|  user   | ---> |   gceme  | ---> | lb/proxy | -----> |  gceme  |
|(browser)|      |(frontend)|      |(optional)|   |    |(backend)|
|         |      |          |      |          |   |    |         |
-----------      ------------      ~~~~~~~~~~~~   |    -----------
                                                  |    -----------
                                                  |    |         |
                                                  |--> |  gceme  |
                                                       |(backend)|
                                                       |         |
                                                       -----------

Both the frontend and backend modes of the application support two additional URLs:

  1. /version prints the version of the binary (declared as a const in main.go)
  2. /healthz reports the health of the application. In frontend mode, health will be OK if the backend is reachable.
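
For example, once the frontend Service has an external IP (the FRONTEND_SERVICE_IP variable set later in this lab), you can exercise both endpoints with curl; this is an illustrative check, not a lab step:

    # Both endpoints are served by the frontend and backend modes alike.
    curl http://$FRONTEND_SERVICE_IP/version   # prints the binary's version, e.g. 1.0.0
    curl http://$FRONTEND_SERVICE_IP/healthz   # reports application health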

Deploy the sample app to Kubernetes

In this section you will deploy the gceme frontend and backend to Kubernetes using Kubernetes manifest files (included in this repo) that describe the environment that the gceme binary/Docker image will be deployed to. They use a default gceme Docker image that you will be updating with your own in a later section.

You'll have two primary environments - canary and production - and use Kubernetes to manage them.

Note: The manifest files for this section of the tutorial are in sample-app/k8s. You are encouraged to open and read each one before creating it per the instructions.

  1. Back in Cloud Shell, first change directories to sample-app:

    cd sample-app
  2. Create the namespace for production:

    kubectl create ns production

    Output (do not copy):

    namespace/production created
    
  3. Create the production Deployments for frontend and backend:

    kubectl --namespace=production apply -f k8s/production

    Output (do not copy):

    deployment.extensions/gceme-backend-production created
    deployment.extensions/gceme-frontend-production created
    
  4. Create the canary Deployments for frontend and backend:

    kubectl --namespace=production apply -f k8s/canary

    Output (do not copy):

    deployment.extensions/gceme-backend-canary created
    deployment.extensions/gceme-frontend-canary created
    
  5. Create the Services for frontend and backend:

    kubectl --namespace=production apply -f k8s/services

    Output (do not copy):

    service/gceme-backend created
    service/gceme-frontend created
    
  6. Scale the production frontend Deployment:

    kubectl --namespace=production scale deployment gceme-frontend-production --replicas=4

    Output (do not copy):

    deployment.extensions/gceme-frontend-production scaled
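
    If you want to wait for the new replicas to become available before moving on (optional), you can watch the rollout:

    # Blocks until all 4 replicas of the production frontend are available.
    kubectl --namespace=production rollout status deployment/gceme-frontend-production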
    
  7. Retrieve the External IP for the production frontend service:

    This field may take a few minutes to appear while the load balancer is being provisioned.

    kubectl --namespace=production get service gceme-frontend

    Output (do not copy):

    NAME             TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    gceme-frontend   LoadBalancer   10.35.254.91   35.196.48.78   80:31088/TCP   1m
    
  8. Confirm that both services are working by opening the frontend EXTERNAL-IP in your browser

  9. Poll the production endpoint's /version URL.

    Open a new Cloud Shell terminal by clicking the + button to the right of the current terminal's tab.

    export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}"  --namespace=production services gceme-frontend)
    while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 3;  done

    Output (do not copy):

    1.0.0
    1.0.0
    1.0.0
    

    You should see that all requests are serviced by v1.0.0 of the application.

    Leave this running in the second terminal so you can easily observe rolling updates in the next section.

  10. Return to the first terminal/tab in Cloud Shell.

Create a repository for the sample app source

Here you'll create your own copy of the gceme sample app in Cloud Source Repositories.

  1. Initialize the git repository.

    Make sure to work from the sample-app directory of the repo you cloned previously.

    git init
    git config credential.helper gcloud.sh
    gcloud source repos create gceme
  2. Add a git remote for the new repo in Cloud Source Repositories.

    git remote add origin https://source.developers.google.com/p/$GOOGLE_CLOUD_PROJECT/r/gceme
  3. Ensure git is able to identify you:

    git config --global user.email "YOUR-EMAIL-ADDRESS"
    git config --global user.name "YOUR-NAME"
  4. Add, commit, and push all the files:

    git add .
    git commit -m "Initial commit"
    git push origin master

    Output (do not copy):

    To https://source.developers.google.com/p/myproject/r/gceme
     * [new branch]      master -> master
    

Create a pipeline

You'll now use Jenkins to define and run a pipeline that will test, build, and deploy your copy of gceme to your Kubernetes cluster. You'll approach this in phases. Let's get started with the first.

Phase 1: Add your service account credentials

First, you will need to configure GCP credentials in order for Jenkins to be able to access the code repository:

  1. In the Jenkins UI, Click Credentials on the left

  2. Click the (global) link

  3. Click Add Credentials on the left

  4. From the Kind dropdown, select Google Service Account from private key

  5. Enter the Project Name from your project

  6. Leave JSON key selected, and click Choose File.

  7. Select the jenkins-sa-key.json file downloaded earlier, then click Open.

  8. Click OK

You should now see 1 global credential. Make a note of the name of the credential, as you will reference this in Phase 2.

Phase 2: Create a job

This lab uses Jenkins Pipeline to define builds as Groovy scripts.

Navigate to your Jenkins UI (using the Cloud Shell Web Preview you set up earlier) and follow these steps to configure a Pipeline job:

  1. Click the Jenkins link in the top-left toolbar of the UI

  2. Click the New Item link in the left nav

  3. For item name use sample-app, choose the Multibranch Pipeline option, then click OK

  4. Click Add source and choose Git

  5. Paste the HTTPS clone URL of your gceme repo on Cloud Source Repositories into the Project Repository field. It will look like: https://source.developers.google.com/p/[REPLACE_WITH_YOUR_PROJECT_ID]/r/gceme

  6. From the Credentials dropdown, select the name of the credential from Phase 1. It should have the format PROJECT_ID service account.

  7. Under the Scan Multibranch Pipeline Triggers section, check the Periodically if not otherwise run box, then set the Interval value to 1 minute.

  8. Click Save, leaving all other options with default values.

    A Branch indexing job was kicked off to identify any branches in your repository.

  9. Click Jenkins > sample-app, in the top menu.

    You should see the master branch now has a job created for it.

    The first run of the job will fail until the project name is set properly in the Jenkinsfile in the next step.

Phase 3: Modify Jenkinsfile, then build and test the app

  1. Create a branch for the canary environment called canary

    git checkout -b canary

    Output (do not copy):

    Switched to a new branch 'canary'
    

    The Jenkinsfile is written using the Jenkins Workflow DSL, which is Groovy-based. It allows an entire build pipeline to be expressed in a single script that lives alongside your source code and supports powerful features like parallelization, stages, and user input.

  2. Update your Jenkinsfile script with the correct PROJECT environment value.

    Be sure to replace REPLACE_WITH_YOUR_PROJECT_ID with your project name.

    Save your changes, but don't commit the new Jenkinsfile change just yet. You'll make one more change in the next section, then commit and push them together.
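
    If you prefer to make that substitution from the shell rather than an editor, a one-line sed works, assuming the placeholder in the Jenkinsfile is the literal string REPLACE_WITH_YOUR_PROJECT_ID:

    # Replace the placeholder project ID in the Jenkinsfile in place.
    sed -i "s/REPLACE_WITH_YOUR_PROJECT_ID/$GOOGLE_CLOUD_PROJECT/g" Jenkinsfile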

Phase 4: Deploy a canary release to canary

Now that your pipeline is working, it's time to make a change to the gceme app and let your pipeline test, package, and deploy it.

The canary environment is rolled out as a percentage of the pods behind the production load balancer. In this case we have 1 out of 5 of our frontends running the canary code and the other 4 running the production code. This allows you to ensure that the canary code is not negatively affecting users before rolling out to your full fleet. You can use the labels env: production and env: canary in Google Cloud Monitoring in order to monitor the performance of each version individually.
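
You can check this split yourself (an optional sanity check, assuming the manifests label the frontend pods with app=gceme and role=frontend plus an env label, as the dev deployment later in this lab does):

    # Count frontend pods per env label; expect 4 production and 1 canary.
    kubectl --namespace=production get pods -l app=gceme,role=frontend \
        -o jsonpath='{range .items[*]}{.metadata.labels.env}{"\n"}{end}' | sort | uniq -c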

  1. In the sample-app repository on your workstation open html.go and replace the word blue with orange (there should be exactly two occurrences):
//snip
<div class="card orange">
<div class="card-content white-text">
<div class="card-title">Backend that serviced this request</div>
//snip
  2. In the same repository, open main.go and change the version number from 1.0.0 to 2.0.0:

    //snip
    const version string = "2.0.0"
    //snip
  3. Push the version 2 changes to the repo:

    git add Jenkinsfile html.go main.go
    git commit -m "Version 2"
    git push origin canary
  4. Revisit your sample-app in the Jenkins UI.

    Navigate back to your Jenkins sample-app job. Notice a canary pipeline job has been created.

  5. Follow the canary build output.

    • Click the Canary link.
    • Click the #1 link in the Build History box on the lower left.
    • Click Console Output from the left-side menu.
    • Scroll down to follow.
  6. Track the output for a few minutes.

    When you see Finished: SUCCESS, open the Cloud Shell terminal that you left polling the /version URL. Observe that some requests are now handled by the canary 2.0.0 version.

    1.0.0
    1.0.0
    1.0.0
    1.0.0
    2.0.0
    2.0.0
    1.0.0
    1.0.0
    1.0.0
    1.0.0
    

    You have now rolled out that change, version 2.0.0, to a subset of users.

  7. Continue the rollout to the rest of your users.

    Back in the other Cloud Shell terminal, merge the canary branch into master and push it to the Git server.

     git checkout master
     git merge canary
     git push origin master
  8. Watch the pipelines in the Jenkins UI handle the change.

    Within a minute or so, you should see a new job in the Build Queue and Build Executor.

  9. Click the master link to see the stages of your pipeline as well as pass/fail status and timing characteristics.

    You can see the failed master job #1, and the successful master job #2.

  10. Check the Cloud Shell terminal responses again.

    In Cloud Shell, open the terminal polling the /version URL and observe that the new version, 2.0.0, has been rolled out and is serving all requests.

    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    

If you want to understand the pipeline stages in greater detail, you can look through the Jenkinsfile in the sample-app project directory.

Phase 5: Deploy a development branch

Oftentimes changes will not be so trivial that they can be pushed directly to the canary environment. To create a development environment from a long-lived feature branch, all you need to do is push it up to the Git server, and Jenkins will automatically deploy your development environment.

In this case you will not use a load balancer, so you'll have to access your application using kubectl proxy. This proxy authenticates itself with the Kubernetes API and proxies requests from your local machine to the service in the cluster without exposing your service to the internet.
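
A sketch of that access path, mirroring the instructions the pipeline prints at the end of the build (shown in the console output below); the namespace matches the branch name you push (new-feature in this lab), and on newer clusters the proxy URL format may differ:

    # Start the API proxy in the background, then reach the dev frontend through it.
    kubectl proxy &
    curl http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/

The steps below use kubectl port-forward from Cloud Shell as an equivalent alternative.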

Deploy the development branch

  1. Create another branch and push it up to the Git server

    git checkout -b new-feature
    git push origin new-feature
  2. Open Jenkins in your web browser and navigate back to sample-app.

    You should see that a new job called new-feature has been created, and this job is creating your new environment.

  3. Navigate to the console output of the first build of this new job by:

    • Click the new-feature link in the job list.
    • Click the #1 link in the Build History list on the left of the page.
    • Finally click the Console Output link in the left menu.
  4. Scroll to the bottom of the console output of the job to see instructions for accessing your environment:

    Successfully verified extensions/v1beta1/Deployment: gceme-frontend-dev
    AvailableReplicas = 1, MinimumReplicas = 1
    
    [Pipeline] echo
    To access your environment run `kubectl proxy`
    [Pipeline] echo
    Then access your service via
    http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/
    [Pipeline] }
    

Access the development branch

  1. Set up port forwarding to the dev frontend, from Cloud Shell:

    export DEV_POD_NAME=$(kubectl get pods -n new-feature -l "app=gceme,env=dev,role=frontend" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward -n new-feature $DEV_POD_NAME 8001:80 >> /dev/null &
  2. Access your application via localhost:

    curl http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/

    Output (do not copy):

    <!doctype html>
    <html>
    ...
    </div>
    <div class="col s2">&nbsp;</div>
    </div>
    </div>
    </html>
    

    Look through the response output for "card orange" that was changed earlier.

  3. You can now push code changes to the new-feature branch in order to update your development environment.

  4. Once you are done, merge your new-feature branch back into the canary branch to deploy that code to the canary environment:

    git checkout canary
    git merge new-feature
    git push origin canary
  5. When you are confident that your code won't wreak havoc in production, merge from the canary branch to the master branch. Your code will be automatically rolled out in the production environment:

    git checkout master
    git merge canary
    git push origin master
  6. When you are done with your development branch, delete it from Cloud Source Repositories, then delete the environment in Kubernetes:

    git push origin :new-feature
    kubectl delete ns new-feature

Extra credit: deploy a breaking change, then roll back

Make a breaking change to the gceme source, push it, and deploy it through the pipeline to production. Then pretend latency spiked after the deployment and you want to roll back. Do it! Faster!

Things to consider:

  • What is the Docker image you want to deploy for roll back?
  • How can you interact directly with Kubernetes to trigger the deployment?
  • Is SRE really what you want to do with your life?

Clean up

Clean up is really easy, but also super important: if you don't follow these instructions, you will continue to be billed for the GKE cluster you created.

To clean up, navigate to the Google Developers Console Project List, choose the project you created for this lab, and delete it. That's it.

continuous-deployment-on-kubernetes's People

Contributors

ajay-bangar, arturshpak, blackdark, bodepd, cgrant, chekote, craigdbarber, davidstanke, dazwilkin, dren79, elibixby, evandbrown, gushob21, henrybell, hwanjin-jeong, jacobfederer, johnlabarge, joshwand, keisei803, luxifer, miketruty, nfollett89, nielm, restless-et, rohitshah-tudip, samizuh, shubhambansal1997, tedb, torstenwalter, viglesiasce


continuous-deployment-on-kubernetes's Issues

Deploying canary release fails with validation error

Following the README to deploy a canary release, the new build job fails with the following output:

[Pipeline] sh
[staging] Running shell script
+ kubectl --namespace=production apply -f k8s/services/
error validating "k8s/services/backend.yaml": error validating data: unexpected type: object; if you choose to ignore these errors, turn validation off with --validate=false
error validating "k8s/services/frontend.yaml": error validating data: unexpected type: object; if you choose to ignore these errors, turn validation off with --validate=false

Tutorial broken

Hi,

I think the tutorial needs to be fixed.

  1. the builds fail due to a docker error:
    (screenshot of the Docker build error)

  2. Swap Space note:
    (screenshot of the Swap Space note)

  3. I don't know if it is just me, but the step with the git repo also seems wrong: the /r/default repo doesn't work. I had to manually create a /gceme repo and use that.

Hope you guys can help to fix it fast, so that I can play around with the working tutorial. Thx!

Update Jenkins to fix security vulnerabilities

It looks like this currently deploys Jenkins 2.7.2 which was released in May 2016:
https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/jenkins/k8s/jenkins.yaml#L29

Since then there have been a number of very serious security vulnerabilities, including remote code execution:
https://jenkins.io/security/advisories/
https://jenkins.io/security/advisory/2016-11-16/
https://jenkins.io/security/advisory/2017-02-01/

Can we please get it updated to the latest release? Latest on dockerhub is 2.60.1

Only one slave executor is created.

When I have multiple jobs in the build queue only one slave executor is created and only one job is processed. In the settings I have the cap set to 10 but only one is ever created. Any idea what I am doing wrong or a way to get more slave executors to run?

Jenkins slave docker image out of date

I'm experiencing the following error since my cluster update.

error: group map[componentconfig:0xc820376e70 extensions:0xc820376ee0 policy:0xc820376f50 authentication.k8s.io:0xc8203770a0 :0xc820376c40 authorization.k8s.io:0xc820376d20 autoscaling:0xc820376d90 batch:0xc820376e00 federation:0xc820376bd0 apps:0xc820376cb0 rbac.authorization.k8s.io:0xc820376fc0] is already registered

I've been able to trace it to a most likely problematic kubectl version. The docker slave image must be cached on the "latest" version. Because the latest version is installing a "then latest version" of kubectl, it's now out of date.

What's the proper way across all the machines to update or remove this container?

Git Submodules

I am having problems with that script and my own repository.

My repo is private containing submodules (same account, credentials).
I already added the credentials (SSH key) to Jenkins. The "main" repository gets fetched as expected.
In the Jenkinsfile I have added git submodule update --init --recursive to fetch the submodules, but it does not use the same credentials. As a result I get a failure.

Using SSH

Host key verification failed.
fatal: Could not read from remote repository. 

Using HTTPS

 > git submodule update --init --recursive
FATAL: Command "git submodule update --init --recursive" returned status code 1:
stdout: 
stderr: Cloning into 'src/myproject'...
remote: Invalid username or password. If you log in via a third party service you must ensure you have an account password set in your account profile.

I am using

Jenkins: ver. 2.7.2
Git Client Plugin: 1.21.0
Git Plugin: 2.5.3

Any ideas how to fix that?

Provide Dockerfiles for Images Referenced in kubernetes configs

Would it be possible to share the Dockerfiles and their supporting files and scripts used in this lab? I've had success completing and extending the lab, but am curious to learn more about the details behind the jenkins leader and builder images in particular. Thanks!

image: gcr.io/cloud-solutions-images/jenkins-packer-agent:master-1f6b3f6
image: gcr.io/cloud-solutions-images/jenkins-gcp-leader:master-aa479b4
image: gcr.io/cloud-solutions-images/nginx-ssl-proxy:master-cc00da0

Slave Image

I can't find the reference to the slave image.
I would like to extend it.

How can I do this?

Provide image build instructions

Thanks for publishing this!

One note though, is it possible to publish the image build instructions or, at least, the build script/rules for the image? At least when I tried it last, there was a considerable amount of prep work involved in getting a good Jenkins image running in a container: latest public Docker images are really buggy, building a container yourself requires much wrangling with java images, there are jenkins_home issues if one initialises it at the top of the filesystem, etc.

Otherwise this step has quite a bit of magic to it:

gcloud compute images create jenkins-home-image --source-uri https://storage.googleapis.com/solutions-public-assets/jenkins-cd/jenkins-home-v2.tar.gz

I think providing some idea in how to prepare an image yourself will actually make it more of a solution, than a demo.

P.S. I do admit I may be missing something - so if there is a link for the image prep - just let me know and I'll patch the docs.

Does the # of Jenkins Pods ever scale up?

I'm curious whether the number of Jenkins pods (masters) is ever envisioned as needing to be scaled up. We currently run a Kubernetes cluster that spans multiple AZ zones in AWS, so we would need a Persistent Disk that also spans multiple AZ zones.

Question: How to use Github for source but GCR for images?

Thanks so much for a great tutorial, it was extremely helpful in getting an understanding of k8s on GCE. Everything worked great when following the tutorial exactly, but I'm running into some issues when making variations to fit my own particular project environment.

I'm currently working with only the very beginning of the project, in that I'm grabbing the source code from SCM, building an image, and trying to push it to GCR. I'm using Github for SCM and not Google, and I've got that working great:

(screenshot of the successful GitHub checkout)

But the problem arises when I try to push my image to GCR. I have the credentials configured in the Jenkins Credentials area:

(screenshot of the Jenkins Credentials configuration)

But because I am not using Google for SCM, the Google API credentials are not referenced in the project's configuration. I am assuming that the gcloud docker push commands work in this tutorial only because the prior checkout scm command happened to authenticate with the credentials, which allows gcloud docker push to piggyback on the pre-existing authentication.

My question is: if I'm not using Google SCM, and therefore not authenticating with the Google API credentials at the point that I pull the source from SCM, how do I get Jenkins to authenticate before trying to do gcloud docker push?

Mount volume to docker image in pipeline won't work

Hi, I'm having an issue when I try to add a host dir as a volume to a docker image.

When I try to run the same code locally on OS X it all works fine.

Here is the test script that I'm running on Jenkins (on GCE, Kubernetes):

node {
    checkout scm

   stage 'test'
    sh('cd /var && mkdir reactcompiler')
    sh('docker run -v /var/reactcompiler:/app/src/public -t reactcompiler:1.0.0 "cd /app/src/public && touch b.txt"')
    sh("cd /var/reactcompiler && ls")

}

Quota blocks scaling to 5 build agents

Just fixed some syntax errors in quota.yaml and after I enacted the quota the build agent can't scale past 1.

Should either increase the quota or reduce the suggested number of replicas (I'm not sure the quota is actually necessary at all, though you could mention it as a possibility)

Support for 1.2 deployment objects

rollingupdate doesn't work with multi-container replication controllers. How could this work with the new K8S 1.2 deployment objects? kubectl edit deployment/... doesn't lend itself very well to automation.

Ingress: Instance may belong to at most one load-balanced instance group

Going through this on a GKE cluster which has an existing HTTPS load balancer (manually setup), I find that Ingress cannot create a new instance group over the cluster instances.

kubectl apply -f jenkins/k8s/lb
kubectl describe ingress -n=jenkins
Name:                   jenkins
Namespace:              jenkins
Address:
Default backend:        jenkins-ui:8080 (10.12.1.26:8080)
TLS:
  tls terminates 
Rules:
  Host  Path    Backends
  ----  ----    --------
  *     *       jenkins-ui:8080 (10.12.1.26:8080)
Annotations:
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason  Message
  ---------     --------        -----   ----                            -------------   --------        ------  -------
  32m           32m             1       {loadbalancer-controller }                      Normal          ADD     jenkins/jenkins
  32m           27s             36      {loadbalancer-controller }                      Warning         GCE     [googleapi: Error 400: Validation failed for instance 'projects/myproejct...': instance may belong to at most one load-balanced instance group., instanceInMultipleLoadBalancedIgs]

Typically, I've setup up the Load Balancers on Google Cloud manually and used a single load balancer targeting the instance group, with a backend service for each Kubernetes service. The Google LoadBalancer Controller for Ingress seems to make a fresh instance group for each ingress resource, which causes a validation error for Google Cloud?

Anyone else seeing this? Thanks folks.

Private Repo - how to add alternative keys

I am using a private repo on GitHub and can't figure out how to add my SSH keys. I assumed they would be picked up via the .ssh directory, but this does not seem to be the case. I have seen some feedback regarding this, and it appears the jenkins user needs to be updated to include the private keys?

Does anyone know how to achieve this, with this tutorial?

[question] could I run this locally on minikube?

Hi,
I've been trying to get jenkins setup on a local machine with no success. I came across minikube and thought it might solve my problem.

If anyone has an idea on how I could go about doing this, I would really appreciate it!

update Google Cloud API client import paths and more

The Google Cloud API client libraries for Go are making some breaking changes:

  • The import paths are changing from google.golang.org/cloud/... to
    cloud.google.com/go/.... For example, if your code imports the BigQuery client
    it currently reads
    import "google.golang.org/cloud/bigquery"
    It should be changed to
    import "cloud.google.com/go/bigquery"
  • Client options are also moving, from google.golang.org/cloud to
    google.golang.org/api/option. Two have also been renamed:
    • WithBaseGRPC is now WithGRPCConn
    • WithBaseHTTP is now WithHTTPClient
  • The cloud.WithContext and cloud.NewContext methods are gone, as are the
    deprecated pubsub and container functions that required them. Use the Client
    methods of these packages instead.

You should make these changes before September 12, 2016, when the packages at
google.golang.org/cloud will go away.

Building docker image

As part of a Jenkins task we are building a docker image with docker build ....

This causes failed to create endpoint focused_wozniak on network bridge: adding interface veth501fb14 to bridge docker0 failed: could not find bridge docker0: route ip+net: no such network interface error to happen on every second build

Wondering if you have ever experienced such an issue; I'm not sure if the problem is with the docker-slave image, the Jenkins plugin, Kubernetes, or Docker :) Any advice appreciated

Multiple namespaces

In the readme :

You'll have two primary environments - staging and production - and use Kubernetes namespaces to isolate them.

But there is no reference to a staging namespace (only production).
I see the labels used are different, but shouldn't the readme be worded differently then?

Jenkins Slave Connecting to Cloud SQL

Hello,

I would like to connect to my Cloud SQL instance in order to perform database migrations as part of my build process.

At the moment I get a timeout, so I tried to allow access to the Cloud SQL instance for the Jenkins master IP address. This does not work. Then I tried allowing access from any IP address ("0.0.0.0/0"), and this works.

I obviously don't want to allow access from any IP.

Any ideas on how to solve my problem?

Environments and branches confusions

Hello. Great guide ! However there are some points not clear to me :

  • "roll it out to the rest of your users by creating a branch called production and pushing it to the Git server:". The Git 3 commands after that don't show anything about a branch called production.
  • what is the exact purpose of the master branch ? On the first hand, it seems to be the way to deploy 4/5 pods in production, and on the other hand it seems to be more related to staging : "Once you are done, merge your branch back into master to deploy that code to the staging environment:...git push orign master"
  • in the picture, there is a stage called "Deploy Application to production namespace". However, I can't see any stage in the Jenkinsfile with that name (even created dynamically).

Provide information on utilizing restore.tgz / GCS_RESTORE_URL

Looking in the replication controller definition I find reference to the variable GCS_RESTORE_URL. Digging further into the start.sh script reveals that the script will attempt to pull down a "backup" from GCS and prime jenkins with this information.

Looking back through the RC I don't see a pod with the capability of performing the backup operation nor any documentation about how to perform the initial setup of the container. I would assume this means a user should stand up jenkins on their own and tar up JENKINS_HOME and place it on GCS noting the URL.

Finally, when looking through the container further I find that there is a rather old version of Jenkins:

root@45885ae19873:/usr/share/jenkins# java -jar jenkins.war --version
Running from: /usr/share/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
1.609.2

I understand that this is the LTS release of Jenkins, but even so it's (as of this issue being filed) 234 days old, with 7 LTS releases since then and 3 batches of security advisories:

https://wiki.jenkins-ci.org/display/SECURITY/Jenkins+Security+Advisory+2016-02-24
https://wiki.jenkins-ci.org/display/SECURITY/Jenkins+Security+Advisory+2015-12-09
https://wiki.jenkins-ci.org/display/SECURITY/Jenkins+Security+Advisory+2015-11-11

Failed Sync

I am trying to follow the steps in the instructions and am stuck when trying to create the service and pods. Specifically, when I apply the contents of the k8s directory, the pod never starts. When I do a describe on the pod, I see "FailedSync" and a message of Error syncing pod, skipping: failed to "StartContainer" for "master" with CrashLoopBackOff: "Back-off 20s restarting failed container=master pod=jenkins-3366133267-243lz_jenkins(a4b07e8b-4c5d-11e7-836a-42010a8000a5)".

My Kubernetes version is 1.6.4. I saw that after 1.6.0 the apiVersion changed to "apps/v1beta1" with respect to deployments. I tried both the new string and the old one but still got the same error. Any insight on other things to try, or logs that might help determine the issue?

Is it possible to update Google Storage credentials without cloning the cluster?

I forgot to enable the permission to write to storage for the instances in my Kubernetes cluster on GKE. Because of that I can't push docker images to my Google Registry after building the image.

Is there any way to fix this, or do I need to do everything again with a new cluster that has this feature enabled?

Pod crashing on startup

While following the steps outlined in the readme (and included in https://cloud.google.com/solutions/jenkins-on-container-engine-tutorial ) I'm getting the following error starting up Jenkins, which points to a possible issue with a missing shebang and starting up with the dash shell instead of bash.

nedkoh@jenkins-160516:~/continuous-deployment-on-kubernetes$ kubectl get pods --namespace jenkins
NAME                      READY     STATUS             RESTARTS   AGE
jenkins-167554897-lf1lc   0/1       CrashLoopBackOff   13         47m
nedkoh@jenkins-160516:~/continuous-deployment-on-kubernetes$ kubectl logs jenkins-167554897-lf1lc --namespace jenkins
/usr/local/bin/jenkins.sh: eval: line 10: syntax error near unexpected token `('

Problem with defining another podTemplate

Hi again, coming back to it after a while away.

Following the issue #65 with your last suggestion regarding the pod template definition:
https://github.com/jenkinsci/kubernetes-plugin#container-configuration

I tried to define a minimal template, but no success:

podTemplate(label: 'mypod', cloud: 'local cluster') {
    node('mypod') {
        stage('Run shell') {
            sh 'echo hello world'
            sh 'kubectl version'
        }
    }
}

error:
/home/jenkins/workspace/jenkins-test_master-X7X5Z54VVKXUA35MJKFUWRV7TO6FM63NQS24FG3FNTBT6FE67DVQ@tmp/durable-9034bb9b/script.sh: line 1: kubectl: not found

complete log:

Fetching origin...
Fetching upstream changes from origin
 > git --version # timeout=10
using GIT_ASKPASS to set credentials Accès organisation Vetup Github
 > git fetch --tags --progress origin +refs/heads/*:refs/remotes/origin/*
Seen branch in repository origin/master
Seen 1 remote branch
Obtained Jenkinsfile from a64abd7b72f4e514a4a0260d302876d93968bdb4
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Running on kubernetes-273858a1488445f58b023362c92619a1-e01fc8e6dc9c in /home/jenkins/workspace/jenkins-test_master-X7X5Z54VVKXUA35MJKFUWRV7TO6FM63NQS24FG3FNTBT6FE67DVQ
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Run shell)
[Pipeline] sh
[jenkins-test_master-X7X5Z54VVKXUA35MJKFUWRV7TO6FM63NQS24FG3FNTBT6FE67DVQ] Running shell script
+ echo hello world
hello world
[Pipeline] sh
[jenkins-test_master-X7X5Z54VVKXUA35MJKFUWRV7TO6FM63NQS24FG3FNTBT6FE67DVQ] Running shell script
+ kubectl version
/home/jenkins/workspace/jenkins-test_master-X7X5Z54VVKXUA35MJKFUWRV7TO6FM63NQS24FG3FNTBT6FE67DVQ@tmp/durable-9034bb9b/script.sh: line 1: kubectl: not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE

Could you provide an example of a podTemplate working with your Jenkins installation, please?
(I also tried to define the template through the Kubernetes plugin UI, using the /root working directory, with no success either.)

Thanks a lot in advance.
Philippe

jenkins docker not found

Hi,

for some reason Jenkins doesn't want to run the docker commands

node {
  def project = 'REPLACE_WITH_YOUR_PROJECT_ID'
  def appName = 'gceme'
  def feSvcName = "${appName}-frontend"
  def imageTag = "gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"

  checkout scm

  stage 'Build image'
  sh("docker build -t ${imageTag} .")

  stage 'Run Go tests'
  sh("docker run ${imageTag} go test")

  stage 'Push image to registry'
  sh("gcloud docker push ${imageTag}")

  stage "Deploy Application"
  switch (env.BRANCH_NAME) {
    // Roll out to canary environment
    case "canary":
        // Change deployed image in canary to the one we just built
        sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/canary/*.yaml")
        sh("kubectl --namespace=production apply -f k8s/services/")
        sh("kubectl --namespace=production apply -f k8s/canary/")
        sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
        break

    // Roll out to production
    case "master":
        // Change deployed image in canary to the one we just built
        sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
        sh("kubectl --namespace=production apply -f k8s/services/")
        sh("kubectl --namespace=production apply -f k8s/production/")
        sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
        break

    // Roll out a dev environment
    default:
        // Create namespace if it doesn't exist
        sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
        // Don't use public load balancing for development branches
        sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
        sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
        sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
        sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
        echo 'To access your environment run `kubectl proxy`'
        echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
  }
}

Output Jenkins docker not found:

[apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ] Running shell script
+ docker build -t eu.gcr.io/xxxxx/apiservice:master.1 .
/var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ@tmp/durable-b4503ecc/script.sh: 2: /var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ@tmp/durable-b4503ecc/script.sh: docker: not found

What am I doing wrong? When I ssh into the instance docker is available.

Errors creating slave

Hi,
I am seeing the errors after following the tutorial:

Provisioned slave Kubernetes Pod Template failed to launch
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default.svc.cluster.local/api/v1/namespaces/jenkins/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:310)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:261)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:232)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:207)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:547)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:243)
    at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:426)
    at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:406)
    at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Any insight into what my issue could be? Or has anyone else run into this?

Thanks,
Braden

Google Container permissions are randomly revoked.

I've been using this in my dev environment and the permissions from the node pool are randomly being removed. This ends up causing a 503 error. How do I go about reporting or identifying why they're being revoked/reset/removed?

I can't actually change the permissions once it's live, so I know it's on Google's end. It was working.

The Jenkins Docker Plugin does not seem to work

I followed this great tutorial to install Jenkins on my k8s cluster (I'm new to Jenkins). I could run jobs without any problem until I tried to use the Docker plugin to start a docker container inside a job, for example:

stage "Prepare environment"
	docker.image('node:4.1.2').inside {
	    print "inside a node server"
	    sh("echo test");  
	    //sh("npm install");      
	  }

I get an error:

java.io.IOException: Failed to run image 'node:4.1.2'. Error: docker: Error response from daemon: mkdir /root/workspace: read-only file system.

here is the detailed log:

[Pipeline] stage (Prepare environment)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Prepare environment
Proceeding
[Pipeline] sh
[play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA] Running shell script
+ docker inspect -f . node:4.1.2
.
[Pipeline] withDockerContainer
$ docker run -t -d -u 0:0 -w /root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA -v /root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA:/root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA:rw -v /root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA@tmp:/root/workspace/play_PLR-437-jenkins-config-WQ5IB66PEGACJNE6UHFF54RVEEBWEEDWRSVCZM3YSVATI3UYUBXA@tmp:rw -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat node:4.1.2
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Failed to run image 'node:4.1.2'. Error: docker: Error response from daemon: mkdir /root/workspace: read-only file system.
	at org.jenkinsci.plugins.docker.workflow.client.DockerClient.run(DockerClient.java:125)
	at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:175)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:184)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:126)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:18)
	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:122)
	at org.jenkinsci.plugins.docker.workflow.Docker.node(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:63)
	at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(jar:file:/var/jenkins_home/plugins/docker-workflow/WEB-INF/lib/docker-workflow.jar!/org/jenkinsci/plugins/docker/workflow/Docker.groovy:116)
	at WorkflowScript.run(WorkflowScript:12)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
	at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
	at com.cloudbees.groovy.cps.Next.step(Next.java:58)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:163)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:328)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:80)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:240)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:228)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:63)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Finished: FAILURE

For reference, the Kubernetes plugin is configured like this by the tutorial:

[screenshot: Kubernetes plugin configuration]

What can I do to be able to use the Jenkins Docker plugin? I have tried many things without success, since I don't really know what I'm doing...
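
For what it's worth, the error suggests the `docker run` is executed by the node's Docker daemon (presumably via the node's Docker socket mounted into the agent), and on GKE nodes the root filesystem is read-only, so the daemon cannot create `/root/workspace` as the bind-mount source. A rough way to confirm that theory, with a hypothetical node name and zone:

    # If this also fails with "read-only file system", the problem is the node's
    # filesystem, not Jenkins: the bind-mount source cannot be created there.
    gcloud compute ssh <gke-node-name> --zone <zone> \
        --command="sudo mkdir -p /root/workspace"

If that is the case, a common workaround is to point the Kubernetes plugin's pod template at a working directory that already exists and is writable on the node (for example a path backed by an emptyDir or hostPath volume), instead of `/root/workspace`.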

Thanks a lot

Delete feature branch namespace

Thanks for the good work. I had a few issues with #32, but I managed to find my way, and it helped a lot to bootstrap.

This question is more related to Jenkins, but perhaps somebody here has an answer.

With this setup, each new feature branch is deployed automatically, which is great :) But how can we detect that a branch has been deleted, and delete the associated namespace?
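
Not something the pipeline here does out of the box as far as I can tell, but one rough cleanup sketch is to periodically compare namespaces against remote branches and delete the orphans. This assumes one namespace per branch, named after the branch, and that the script runs from a clone of the repository:

    # Delete namespaces whose corresponding remote branch no longer exists.
    for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
      case "$ns" in
        default|kube-system|kube-public|production|jenkins) continue ;;  # keep fixed namespaces
      esac
      if ! git ls-remote --heads origin "$ns" | grep -q .; then
        echo "branch $ns no longer exists; deleting its namespace"
        kubectl delete namespace "$ns"
      fi
    done

If branch names are sanitized before being used as namespace names (slashes are not valid in namespace names), the same mapping would have to be applied here.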

Thanks!

Add more language features (Node, Go, etc).

Making the image heavier is not a bad trade-off for faster builds and less bandwidth usage, assuming your Kubernetes nodes keep the image cached while Jenkins workers are not in use (they should).

https://github.com/FuseRobotics/continuous-deployment-on-kubernetes/commits/master

I've iterated a bit on this repo and added support for Node, Go, and a few other languages. It might be worth merging a couple of small pieces of that back here.

Login credentials not added to the container

I'm dealing with an issue where I can't log in to the admin account on Jenkins after setting up the environment with the options secret file as described in the README.md.

I tried to update the login details again in the cloud, but again no luck.
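
In case it helps narrow this down, one hedged check is whether the options secret actually made it into the cluster with the credentials you expect. The secret name, key, and namespace below are assumptions based on the README:

    # Decode the options secret that Jenkins reads its startup arguments from.
    kubectl get secret jenkins --namespace=jenkins \
        -o jsonpath='{.data.options}' | base64 --decode; echo

Note that the options are passed to Jenkins at container start, so if the secret was updated after the pod came up, the pod has to be recreated to pick up the change.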

Default Jenkins resources too low

When I created the Deployment and tried to get to the Jenkins settings page, the instance always crashed. After a bit of investigation I found the resource definition in the Deployment not to be enough:

          limits:
            cpu: 500m
            memory: 1000Mi
          requests:
            cpu: 500m
            memory: 1000Mi

I changed it to the following, which works fine:

          requests:
            cpu: 500m
            memory: 2000Mi
          limits:
            cpu: 2000m
            memory: 6000Mi

Does the original 1 GB RAM limit work for anybody?
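
For anyone who wants to try larger values on a running cluster without re-applying the manifest, something like this should work (the deployment and namespace names are assumptions):

    # Bump the Jenkins master's requests and limits in place; the Deployment
    # rolls the pod with the new values.
    kubectl set resources deployment jenkins --namespace=jenkins \
        --requests=cpu=500m,memory=2000Mi \
        --limits=cpu=2000m,memory=6000Mi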

Jenkins Deployment

Having trouble running Docker images in Jenkins slaves

Hi again!

So I'm making progress on my own build. The pipeline is working great and deploying my application images just fine. But now I'm trying to compile certain project assets for the images and I'm running into issues.

For example, the project uses npm. In our environment we don't use native installs of tooling; instead we run everything in Docker images, and npm is no exception. The Docker images work great in our dev environment and our current CI environment (CircleCI), but whenever I try to run the following command in a stage of the Jenkinsfile:

sh('npm install');

I get the following error in the build logs:

[nds_google_container_engine-NLXIZZR7ZURR4JJEWIOWTD36XE5QABL5DQH7Z73ZQIDCJJIU2IDQ] Running shell script
+ npm install
docker: Error response from daemon: mkdir /root/workspace: read-only file system.

I've run some tests and it seems that the home folder (/root) in the Jenkins slaves is read-only (the whole slave may be read-only). I'm assuming the Docker daemon is trying to create this workspace folder, but I'm honestly unfamiliar with it. I use Docker on Mac, and some of the inner workings are obscured by the way the hypervisor is used on that platform.

Maybe this is related to the Jenkins Kubernetes Plugin?

Do you have any suggestions on what the problem or resolution might be? I'm continuing my research regardless, but I thought I'd ping you just in case you had some wisdom to share.
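
In case it helps, a couple of hedged checks (the pod name is a placeholder) to separate "the agent's filesystem is read-only" from "the node the Docker daemon runs on is read-only":

    # Is /root writable inside the agent container itself?
    kubectl exec <jenkins-agent-pod> -- sh -c 'touch /root/.write-test && echo "agent /root is writable"'

    # Which daemon does the agent's docker CLI talk to? If it is the node's
    # daemon (via a mounted /var/run/docker.sock), the mkdir in the error
    # happens on the node, so a writable /root inside the pod will not help.
    kubectl exec <jenkins-agent-pod> -- docker info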

Thanks again for all your help!

Document Jenkins configuration

The rather complex setup for using the Kubernetes plugin in Jenkins is hidden away in the preconfigured raw Jenkins disk image. If I'm using this project as a template/tutorial for how to configure my own Jenkins instance, I have to build this entire project, deploy it to GKE, and manually examine the Jenkins config to see how it's done.

It would be more useful to document the steps required to configure the Kubernetes and Google OAuth plugins on Jenkins to work in the GKE environment.
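
As a stopgap until such documentation exists, the preconfigured home archive can be inspected directly, which at least exposes the relevant plugin configuration files without a full deploy (the archive URL is the one referenced elsewhere in these issues; the exact file names inside it may differ):

    # List Kubernetes/OAuth-related configuration files in the prebuilt
    # jenkins_home archive without extracting the whole thing.
    curl -sL https://storage.googleapis.com/solutions-public-assets/jenkins-cd/jenkins-home-v2.tar.gz \
        | tar -tzf - | grep -Ei 'config\.xml|credentials|kubernetes|oauth'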

gceme ImagePullBackOff

Hello,
When I did this from GCP, I got the following ImagePullBackOff issue.
How can I recover from this?
Thanks

Every 2.0s: kubectl get pod -o wide --namespace=production                          Tue Jun 13 15:35:27 2017
NAME                                         READY     STATUS             RESTARTS   AGE       IP           NODE
gceme-backend-canary-1780429140-xx2jt        0/1       ImagePullBackOff   0          1h        10.32.0.6    gke-jenkins-cd-default-pool-4b0e015e-2zdm
gceme-backend-production-4047385373-rn372    0/1       ErrImagePull       0          1h        10.32.1.7    gke-jenkins-cd-default-pool-4b0e015e-gwlx
gceme-frontend-canary-5261038-fz6r5          0/1       ImagePullBackOff   0          1h        10.32.2.6    gke-jenkins-cd-default-pool-4b0e015e-jbgb
gceme-frontend-production-3959310519-zg9xv   0/1       ImagePullBackOff   0          7m        10.32.1.12   gke-jenkins-cd-default-pool-4b0e015e-gwlx
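
The pod events usually contain the actual pull error (wrong image path, missing tag, or registry permissions). Something like the following should surface it; the pod name is copied from the output above and the deployment name is an assumption:

    # Show the events for one of the failing pods, including the exact image
    # pull error returned by the registry.
    kubectl describe pod gceme-backend-canary-1780429140-xx2jt --namespace=production

    # Check which image the Deployment is actually asking for.
    kubectl get deployment gceme-backend-canary --namespace=production \
        -o jsonpath='{.spec.template.spec.containers[0].image}'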

Health check fails with custom build

Is there something in https://storage.googleapis.com/solutions-public-assets/jenkins-cd/jenkins-home-v2.tar.gz that makes the Kubernetes health checks work?

I've tried switching to my own customized Jenkins Docker image, along with backing jenkins_home with a persistent volume claim. Health checks fail, but if I port-forward to the pod, Jenkins appears to be running correctly.

At this point, the only theory I have is that health checks are failing for some unknown reason, resulting in 502s when I visit the ingress.
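
One way to test that theory is to compare what the probes are configured to request with what Jenkins actually returns on that path. The deployment, namespace, and pod names below are placeholders:

    # What path and port do the probes hit?
    kubectl get deployment jenkins --namespace=jenkins \
        -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'

    # Ask Jenkins for that path directly through a port-forward. A custom image
    # that enables security or changes the context path can turn a previously
    # anonymous 200 into a 403/404, which the probe counts as a failure.
    kubectl port-forward <jenkins-pod> 8080:8080 --namespace=jenkins &
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/login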

Restarting pods throws an error

Deleting and recreating the Deployment doesn't work; it throws this error:

Failed to attach volume "jenkins-home" on node "gke-demo-default-pool-f7ab4024-6ttg" with: googleapi: Error 400: The disk resource 'projects/dev-lm/zones/us-central1-b/disks/jenkins-home' is already being used by gke-demo-default-pool-f7ab4024-npdg
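
For context, a GCE persistent disk can only be attached read-write to one node at a time, so the replacement pod cannot start until the old node releases the disk. The disk and zone below are taken from the error message:

    # List the instances the disk is currently attached to.
    gcloud compute disks describe jenkins-home \
        --zone us-central1-b --format='value(users)'

Once the old pod is gone and the disk detaches, the new pod should be able to mount it; a `Recreate` strategy on the Deployment avoids the two pods overlapping.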

Errors creating Pods

Hi,

I've been following both the Google guide (from GCE) and this one, and my pod never starts. It just throws an error on the deployment stating this:
Unable to mount volumes for pod "jenkins-2539707196-u95xz_jenkins(2bed9074-693e-11e6-9c2d-42010af00181)": timeout expired waiting for volumes to attach/mount for pod "jenkins-2539707196-u95xz"/"jenkins". list of unattached/unmounted volumes=[jenkins-home] Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jenkins-2539707196-u95xz"/"jenkins". list of unattached/unmounted volumes=[jenkins-home]

I've been stuck with this error for quite some time and I'm absolutely positive that the disk is there and is not attached to any VM upon execution of the deployment file.

I've even tried attaching the disk to a VM and formatting it first (then unmounting and detaching it), but I keep hitting the same error.

Are there any corrections that need to be made to the deployment file?
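
Two checks that cover the usual causes of this timeout, in case they help (the namespace is a guess based on the pod name):

    # The pod events usually say whether the attach failed and why.
    kubectl describe pod jenkins-2539707196-u95xz --namespace=jenkins

    # The disk has to exist in the same zone as the node mounting it, and its
    # name must match the pdName in the volume spec exactly.
    gcloud compute disks list --filter="name=jenkins-home"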
