
jenkins's Introduction

OpenShift Jenkins Images

Introduction

This repository contains Dockerfiles for building Jenkins Master and Agent images intended for use with OKD 4 and Red Hat OpenShift 4.

Hosted Images

All OpenShift 4 images (including the ones from this repository) are based on the Red Hat Universal Base Image 8.

NOTE: Only the 64-bit JVM is available in all images.

Community

These images are available via quay.io and are community supported.

NOTE: The jenkins-agent-maven and jenkins-agent-nodejs images are no longer maintained as of version 4.11 and no longer published as of version 4.14.

Red Hat OpenShift

These images are available via the Red Hat Catalog for customers with subscriptions.

4.10 and lower

These images are intended for OpenShift 4.10 and lower.

4.11 and higher

These images are intended for OpenShift 4.11 and higher.

NOTE: The jenkins-agent-maven and jenkins-agent-nodejs images are no longer maintained or published as of version 4.11.

Building

Please see BUILDING.md.

Basic Usage

Please see BASIC_USAGE.md.

Advanced Usage

Please see ADVANCED_USAGE.md.

Plugins

Please see PLUGINS.md.

Security

Please see SECURITY.md.

Testing

Please see TESTING.md.

Contributing

Please see CONTRIBUTING.md.

jenkins's People

Contributors

adambkaplan, akram, apoorvajagtap, arnaud-deprez, bparees, coreydaley, csrwng, divyansh42, gabemontero, ggareth, grdryn, jerboaa, jimmidyson, jitendar-singh, jkhelil, jtescher, jupierce, liangxia, mfojtik, openshift-bot, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, otaviof, ramessesii2, sayan-biswas, scoheb, stevekuznetsov, waveywaves, yselkowitz


jenkins's Issues

Git is not working properly on slaves

docker run -it --rm openshift/jenkins-slave-nodejs-centos7 bash
$ git clone https://github.com/tnozicka/nodejs-ex.git /home/jenkins/nodejs-ex

ends up with error:

Cloning into '/home/jenkins/nodejs-ex'...
remote: Counting objects: 393, done.
remote: Total 393 (delta 0), reused 0 (delta 0), pack-reused 393
Receiving objects: 100% (393/393), 119.53 KiB | 0 bytes/s, done.
Resolving deltas: 100% (161/161), done.
fatal: unable to look up current user in the passwd file: no such user
Unexpected end of command stream

The same happens with the maven image:

docker run -it --rm openshift/jenkins-slave-maven-centos7 bash
$ git clone https://github.com/tnozicka/nodejs-ex.git /home/jenkins/nodejs-ex
Cloning into '/home/jenkins/nodejs-ex'...
remote: Counting objects: 393, done.
remote: Total 393 (delta 0), reused 0 (delta 0), pack-reused 393
Receiving objects: 100% (393/393), 119.53 KiB | 0 bytes/s, done.
Resolving deltas: 100% (161/161), done.
fatal: unable to look up current user in the passwd file: no such user
bash-4.2$ Unexpected end of command stream

Surprisingly, this is not a problem when running Jenkins on oc cluster up. When using ADB/CDK, all attempts to check out repositories on slaves fail with these errors.

(There is probably a different policy for assigning UIDs or something like that.)

I am not sure about the proper solution here, but we had a hack in our images before summit to fix it: https://github.com/tnozicka/dockerfiles/blob/master/openshift-nodejs-builder/Dockerfile#L3

Running commands with an explicit identity, like

GIT_COMMITTER_NAME='<unknown>' GIT_COMMITTER_EMAIL='<unknown>' git clone https://github.com/tnozicka/nodejs-ex.git /home/jenkins/nodejs-ex

helps.
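The reported workaround can be exercised end to end with a throwaway local repository (the temp paths and the `<unknown>` identity below are purely illustrative):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# A small local repository stands in for the real remote.
git init -q "$tmp/src"
(cd "$tmp/src" && git -c user.name=test -c user.email=test@example.com \
    commit -q --allow-empty -m init)

# The workaround: supply the identity via the environment so git does not
# need to resolve the current uid in /etc/passwd.
GIT_COMMITTER_NAME='<unknown>' GIT_COMMITTER_EMAIL='<unknown>' \
  git clone -q "$tmp/src" "$tmp/clone"

echo "clone succeeded"
```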

@rupalibehera @bparees

[sync-plugin] - Pipeline stage duration are not synced after deleting the pipeline and creating it again

As the subject says: after deleting the pipeline and creating it again, the stage view shows the old durations, which does not make sense.

Steps to reproduce:

  1. $ oc cluster up --version=v1.3.0-alpha.3
  2. $ oc process -f https://github.com/tnozicka/nodejs-ex/raw/kontinu8-next/.openshift-pipeline/pipeline-template.yaml -v 'GIT_URL=https://github.com/tnozicka/nodejs-ex.git,GIT_REF=kontinu8-next' | oc apply -f -
  3. $ oc start-build nodejs-ex-pipeline
  4. wait for it to complete

  5. oc delete bc/nodejs-ex bc/nodejs-ex-pipeline dc/mongodb dc/nodejs-ex routes/nodejs-ex svc/mongodb svc/nodejs-ex # this intentionally does not delete the imagestream, which makes it faster next time
  6. oc process -f https://github.com/tnozicka/nodejs-ex/raw/kontinu8-next/.openshift-pipeline/pipeline-template.yaml -v 'GIT_URL=https://github.com/tnozicka/nodejs-ex.git,GIT_REF=kontinu8-next' | oc apply -f -
  7. Let the pipeline finish

After the second-generation pipeline finishes, you still see the durations from the first generation, which does not make sense: the whole pipeline now took 59 seconds, yet e.g. the "Create image" stage supposedly took 1 m 31 s.

Screenshots:

Generation 1, Run 1
g1-r1
Generation 1, Run 2
g1-r2
Generation 2, Run 1
g2-r1

@bparees @jimmidyson ideas?

Can't access jenkins API with OPENSHIFT_ENABLE_OAUTH=true

Hi

Authentication through the OpenShift Login Plugin (with OPENSHIFT_ENABLE_OAUTH=true) works fine in a browser, but we can't access the Jenkins API directly. Here is an example:

curl -X GET -H "Authorization: Bearer $(our token)" https://${our_jenkins_url}/api/json?pretty=true -k -v

With this request, we always get this message:

Authentication required

You are authenticated as: anonymous
Groups that you are in:

Permission you need to have (but didn't): hudson.model.Hudson.Read
... which is implied by: hudson.security.Permission.GenericRead
... which is implied by: hudson.model.Hudson.Administer

We have used the namespace admin token and the jenkins service account secret.
What are we missing here?
Thanks for your help.

Jenkins image on registry.access.redhat.com doesn't have git command/package

  • issue

I know that it is supposed to be installed (see https://github.com/openshift/jenkins/blob/master/1/Dockerfile.rhel7#L34).
However, git doesn't exist in the jenkins-1-rhel7 container. Please see the output below:

$ oc new-app registry.access.redhat.com/openshift3/jenkins-1-rhel7

$ oc rsh jenkins-1-rhel7-1-5hixu
bash-4.2$ git
bash: git: command not found

bash-4.2$ rpm -qa |grep git
openshift-3.0.1.0-1.git.527.f8d5fed.el7ose.x86_64
  • env
$ docker images |grep jenkins
docker.io/openshift/jenkins-1-centos7                   <none>              19c8281e9fac        3 weeks ago         551 MB
registry.access.redhat.com/openshift3/jenkins-1-rhel7   latest              225d177d917d        9 weeks ago         481.1 MB
  • other info

openshift/jenkins-1-centos7 (dockerhub) has git package/command.

Changing the admin PW via UI does not survive the restart of the container

Hello

I'm using openshift/jenkins-1-rhel7.
After changing the admin password via the UI, I can log in with the new password. But after restarting the container, the password configured in the environment is set again.

In fact, the following code does not work:
https://github.com/openshift/jenkins/blob/master/1/contrib/s2i/run

[...]
90: if [ $old_password!=$new_password_hash ]; then
[...]

the comparison always evaluates to true (without spaces around !=, the shell passes [ a single non-empty word, which tests as true), even if the password is the same => the password is always reset to the one configured in the env.
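This can be reproduced in any POSIX shell; a minimal demo of the always-true test (the hash values are made up):

```shell
#!/bin/sh
# The bug: without spaces, the expression collapses into one non-empty
# word, and `[ word ]` is true regardless of the values.
old_password_hash=abc123
new_password_hash=abc123

if [ $old_password_hash!=$new_password_hash ]; then
  echo "buggy test fired even though the hashes are equal"
fi

# Correct comparison: spaces around != and quoted operands.
if [ "$old_password_hash" != "$new_password_hash" ]; then
  echo "hashes differ"
else
  echo "hashes match"
fi
```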

Provide tagged version of openshift/jenkins images

Currently the openshift/jenkins images just use the latest tag.
For the obvious reasons, this can be problematic.

For example, in the Syndesis project we are using s2i with the openshift/jenkins-2-centos7:latest image. Every now and then a new latest tag comes out, using different plugin versions and completely breaking our CI.

Please provide actual tags, for us to use :-)

Slaves offline

I'm trying to run a Jenkins pipeline in enterprise OCP and can't get the slave images to run. I created a simple slave image similar to the nodejs example:

FROM openshift/jenkins-slave-base-rhel7

MAINTAINER Patrick Williams <...>

ENV BASH_ENV=/usr/local/bin/scl_enable \
    ENV=/usr/local/bin/scl_enable \
    PROMPT_COMMAND=". /usr/local/bin/scl_enable"

COPY contrib/bin/scl_enable /usr/local/bin/scl_enable

RUN yum repolist > /dev/null && \
    yum-config-manager --enable rhel-server-rhscl-7-rpms && \
    yum-config-manager --enable rhel-7-server-optional-rpms && \
    yum-config-manager --enable rhel-7-server-ose-3.2-rpms && \
    yum-config-manager --disable epel >/dev/null || : && \
    INSTALL_PKGS="rh-python35 rh-python35-python-devel rh-python35-python-setuptools rh-python35-python-pip rh-python35-python-psycopg2" && \
    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
    rpm -V $INSTALL_PKGS && \
    yum clean all -y

RUN chown -R 1001:0 $HOME && \
    chmod -R g+rw $HOME

USER 1001

After building the image and pushing it to an image stream, I configured the Jenkins Kubernetes plugin to use this image, called 'python35'.

I've got a pipeline buildconfig that uses this jenkinsfile:

node('python35') {
    stage('test') {
        sh 'python --version'
    }
}

When I run the pipeline, I can see that pods are stood up in kubernetes but they never fully start. The jenkins build outputs something like:

[Pipeline] node
Still waiting to schedule task
python35-f448d26c2f442 is offline

while waiting for the pod to start.

It seems like the slave client isn't starting on the pods. There is no output when I do oc logs -f python35-..... Any advice on getting the client to start? Am I missing something in the docs?

Use mounted namespace secret for PROJECT_NAME

Recent versions of Kubernetes (and OpenShift) mount the namespace name into /var/run/secrets/kubernetes.io/serviceaccount/namespace. We should check whether that file exists and use it as the PROJECT_NAME by default. We need to preserve PROJECT_NAME for backward compatibility and to allow people to override it.
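A sketch of that lookup order, using a temporary stand-in file for the mounted secret (the `myproject` value is for illustration only):

```shell
#!/bin/sh
# In a real pod the file would be
# /var/run/secrets/kubernetes.io/serviceaccount/namespace.
namespace_file=$(mktemp)
printf 'myproject' > "$namespace_file"

if [ -n "$PROJECT_NAME" ]; then
  project="$PROJECT_NAME"             # explicit override wins, for compatibility
elif [ -s "$namespace_file" ]; then
  project=$(cat "$namespace_file")    # default: the mounted namespace secret
fi
echo "PROJECT_NAME resolved to: $project"
```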

Bare uid in USER instruction leaves root supplementary group

USER 1001

is not a good idea, because:

bash-4.2$ id
uid=1000030000 gid=0(root)

We should use useradd and groupadd to add an entry to /etc/passwd and /etc/group so that there's a supplementary group to use. E.g.:

RUN groupadd -g 1000 jenkins && useradd -u 1000 -g 1000 jenkins
USER jenkins

No user exists for uid XXXX when trying to run scp command

When I try to run an scp command from my jenkins pod hosted in openshift, or any ssh-related command I got errors like these:

$ ssh
No user exists for uid 1000060000
$ id
uid=1000060000 gid=0(root) groups=0(root),1000060000
$

Doing some research, the root cause is that the jenkins-openshift image is using a numeric user ID and not a fully featured named user. This post provides a great level of detail: http://blog.dscpl.com.au/2015/12/random-user-ids-when-running-docker.html.

A possible solution is to consider using the nss_wrapper library, details explained here: http://blog.dscpl.com.au/2015/12/unknown-user-when-running-docker.html
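A hedged sketch of what the nss_wrapper approach looks like (the library paths and the fields of the synthesized `jenkins` entry are assumptions and vary by image):

```shell
#!/bin/sh
# Synthesize a passwd entry for the current (possibly random) uid, then
# point nss_wrapper at it so name lookups succeed.
passwd_file=$(mktemp)
group_file=$(mktemp)
echo "jenkins:x:$(id -u):$(id -g):Jenkins:/var/lib/jenkins:/bin/bash" > "$passwd_file"
echo "root:x:0:" > "$group_file"

export NSS_WRAPPER_PASSWD="$passwd_file" NSS_WRAPPER_GROUP="$group_file"

# Preload the library only if it is actually installed (path varies).
for lib in /usr/lib64/libnss_wrapper.so /usr/lib/libnss_wrapper.so; do
  if [ -f "$lib" ]; then export LD_PRELOAD="$lib"; break; fi
done

# Show the synthesized entry for the current uid.
grep ":$(id -u):" "$passwd_file"
```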

[sync-plugin] - 404 on Jenkins logs and job in Jenkins

Steps to reproduce.

  1. $ oc cluster up --version=v1.3.0-alpha.3
  2. $ oc process -f https://github.com/tnozicka/nodejs-ex/raw/kontinu8-next/.openshift-pipeline/pipeline-template.yaml -v 'GIT_URL=https://github.com/tnozicka/nodejs-ex.git,GIT_REF=kontinu8-next' | oc apply -f -
  3. $ oc start-build nodejs-ex-pipeline
  4. $ oc process -f https://github.com/tnozicka/nodejs-ex/raw/sync-WIP/.openshift-pipeline/pipeline-template.yaml -v 'GIT_URL=https://github.com/tnozicka/nodejs-ex.git,GIT_REF=sync-WIP' | oc apply -f -

Now try looking at the job in Jenkins or viewing its logs. You will get a 404. You also won't see the job in Jenkins's overview, but you will see a running build executor for it.

@jimmidyson

yum install "returned a non-zero code:47"

I try to do a build using the Dockerfile.rhel7 and it fails at the yum install. It starts out not being able to find a bunch of packages:

No package jenkins-plugin-kubernetes available.
No package jenkins-plugin-openshift-pipeline available.
No package jenkins-plugin-openshift-login available.
No package jenkins-plugin-credentials available.
No package jenkins-plugin-ace-editor available.
No package jenkins-plugin-branch-api available.
No package jenkins-plugin-cloudbees-folder available.
No package jenkins-plugin-durable-task available.
No package jenkins-plugin-git available.
No package jenkins-plugin-git-client available.
No package jenkins-plugin-git-server available.
No package jenkins-plugin-handlebars available.
No package jenkins-plugin-jquery-detached available.
No package jenkins-plugin-mapdb-api available.
No package jenkins-plugin-matrix-project available.
No package jenkins-plugin-mercurial available.
No package jenkins-plugin-momentjs available.

...
then it fails with:
F1014 18:34:30.453952 1 builder.go:204] Error: build error: The command '/bin/sh -c yum-config-manager --disable epel >/dev/null || : && INSTALL_PKGS="rsync gettext git tar zip unzip nss_wrapper java-1.8.0-openjdk java-1.8.0-openjdk-devel atomic-openshift-clients jenkins-1.651.2 jenkins-plugin-kubernetes jenkins-plugin-openshift-pipeline jenkins-plugin-openshift-login jenkins-plugin-credentials jenkins-plugin-ace-editor jenkins-plugin-branch-api jenkins-plugin-cloudbees-folder jenkins-plugin-durable-task jenkins-plugin-git jenkins-plugin-git-client jenkins-plugin-git-server jenkins-plugin-handlebars jenkins-plugin-jquery-detached jenkins-plugin-mapdb-api jenkins-plugin-matrix-project jenkins-plugin-mercurial jenkins-plugin-momentjs jenkins-plugin-multiple-scms jenkins-plugin-pipeline-build-step jenkins-plugin-pipeline-input-step jenkins-plugin-pipeline-rest-api jenkins-plugin-pipeline-stage-step jenkins-plugin-pipeline-stage-view jenkins-plugin-pipeline-utility-steps jenkins-plugin-plain-credentials jenkins-plugin-scm-api jenkins-plugin-script-security jenkins-plugin-ssh-credentials jenkins-plugin-structs jenkins-plugin-subversion jenkins-plugin-workflow-aggregator jenkins-plugin-workflow-api jenkins-plugin-workflow-basic-steps jenkins-plugin-workflow-cps jenkins-plugin-workflow-cps-global-lib jenkins-plugin-workflow-durable-task-step jenkins-plugin-workflow-job jenkins-plugin-workflow-multibranch jenkins-plugin-workflow-remote-loader jenkins-plugin-workflow-scm-step jenkins-plugin-workflow-step-api jenkins-plugin-workflow-step-api jenkins-plugin-workflow-support jenkins-plugin-openshift-sync" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && localedef -f UTF-8 -i en_US en_US.UTF-8' returned a non-zero code: 47

Is there an easy way to upgrade the Maven version for the maven slave?

In the Dockerfile for the maven slave image it has

ENV MAVEN_VERSION=3.0.5

I think the Maven version required by the fabric8 goals needs to be higher than this:

fabric8io/fabric8-maven-plugin#556

It seems a bit onerous to go the s2i route just to create a new Jenkins Maven slave image with an updated ENV variable. Is there an easier way?

I tried setting an environment variable on the jenkins deployment config, but it isn't picked up by the slave (which didn't surprise me!).

Peter.

Groovy Sandbox might still need some tuning

I don't suppose there is a reason for this sandbox to forbid me from using basic string operations, right?

'''  xxxx  '''.stripMargin()
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods stripMargin java.lang.String
    at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectStaticMethod(StaticWhitelist.java:174)
    at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:95)
    at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
    at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
    at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:15)
    at WorkflowScript.run(WorkflowScript:30)
    at ___cps.transform___(Native Method)
    at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:55)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixName(FunctionCallBlock.java:74)
    at sun.reflect.GeneratedMethodAccessor282.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
    at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
    at com.cloudbees.groovy.cps.Next.step(Next.java:58)
    at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
    at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
    at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
    at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:29)
    at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
    at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:29)
    at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:164)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:276)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$000(CpsThreadGroup.java:78)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:185)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:183)
    at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:47)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

S2I Assemble Script Does Not Support Hidden Folder In The Configuration Directory

This is problematic if you want to version control .m2/settings.xml. This can be done with some additional configuration, but it would be nice to support this scenario out of the box.

The issue is that https://github.com/openshift/jenkins/blob/master/1/contrib/s2i/assemble#L34 uses mv with a glob, which does not match hidden files. Does it make sense to look for .m2 specifically? Or is there a more general-purpose solution here?
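The dotfile behaviour is easy to demonstrate; the directories below are throwaway stand-ins for the configuration directory:

```shell
#!/bin/sh
# `cp src/* dst/` (like mv with a glob) does not match dotfiles, so
# .m2/settings.xml is left behind; copying `src/.` includes hidden entries.
src=$(mktemp -d); dst1=$(mktemp -d); dst2=$(mktemp -d)
mkdir -p "$src/.m2"
touch "$src/.m2/settings.xml" "$src/visible.txt"

cp -r "$src"/* "$dst1"/    # glob expansion misses .m2
cp -r "$src"/. "$dst2"/    # trailing /. copies hidden entries too

[ -e "$dst1/.m2" ] || echo "glob copy skipped .m2"
[ -e "$dst2/.m2/settings.xml" ] && echo "dot copy kept .m2/settings.xml"
```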

Happy to submit a patch to v1 and v2 once direction is decided.

change .metadata.annotations to .metadata.tags

Had a small issue with autodiscovery of jenkins slaves from image streams in the project.
In the template, an annotation is used for the optional slave-label.
Maybe it should be changed to tags, since tags are always present in the metadata of the image stream.

Centos Permission denied error

I even tried adding USER root in the Dockerfile and I still get:

/bin/sh: /usr/local/bin/plugins.sh: Permission denied
F1014 20:18:14.636754 1 builder.go:204] Error: build error: The command '/bin/sh -c /usr/local/bin/plugins.sh /opt/openshift/base-plugins.txt && touch /opt/openshift/plugins/credentials.jpi.pinned && touch /opt/openshift/plugins/subversion.jpi.pinned && touch /opt/openshift/plugins/ssh-credentials.jpi.pinned && touch /opt/openshift/plugins/script-security.jpi.pinned && chown -R 1001:0 /opt/openshift && /usr/local/bin/fix-permissions /opt/openshift && /usr/local/bin/fix-permissions /var/lib/jenkins' returned a non-zero code: 126

With S2I, manage dynamic part in job config

I've created a template that uses a S2I build to copy job definitions.

When I create a new-app from this template, I need a way to set the token in the configuration of the job.

Is there a way to modify the source to replace a placeholder?

Or another way to achieve that use case?
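One possible approach, sketched with an invented `${JOB_TOKEN}` placeholder (the XML shape below is illustrative, not an actual Jenkins job schema): keep the placeholder in the committed job config and render it at deploy time with sed (or envsubst):

```shell
#!/bin/sh
# Render a committed template into the final job config, substituting
# the token from the environment.
tpl=$(mktemp); out=$(mktemp)
cat > "$tpl" <<'EOF'
<project>
  <authToken>${JOB_TOKEN}</authToken>
</project>
EOF

JOB_TOKEN=s3cret
sed "s|\${JOB_TOKEN}|$JOB_TOKEN|" "$tpl" > "$out"
grep '<authToken>' "$out"
```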

jenkins successful test logs are not clean

Jenkins "successful" logs contain some scary statements:

Dumping logs for cb827cbbd0dadd8bf4c9fe54b357460e4793fd271a1ee2fb986ccb63ea2e1ad3
+ [[ 143 != \0 ]]
+ docker logs cb827cbbd0dadd8bf4c9fe54b357460e4793fd271a1ee2fb986ccb63ea2e1ad3
Detected password change, updating Jenkins configuration ...
Processing Jenkins Kubernetes configuration (/var/lib/jenkins/config.xml.tpl) ...
Running from: /usr/lib/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
Jenkins home directory: /var/lib/jenkins found at: EnvVars.masterEnvVars.get("JENKINS_HOME")
cat: /run/secrets/kubernetes.io/serviceaccount/token: No such file or directory
/usr/local/bin/jenkins-common.sh: line 51: /opt/openshift/passwd: Permission denied
NWRAP_ERROR(18) - nwrap_files_cache_reload: Unable to open '/opt/openshift/passwd' readonly -1:No such file or directory
NWRAP_ERROR(18) - nwrap_files_cache_reload: Unable to open '/opt/openshift/passwd' readonly -1:No such file or directory
NWRAP_ERROR(1) - nwrap_files_cache_reload: Unable to open '/opt/openshift/passwd' readonly -1:No such file or directory
NWRAP_ERROR(1) - nwrap_files_cache_reload: Unable to open '/opt/openshift/passwd' readonly -1:No such file or directory

This one in particular is worrisome:
/usr/local/bin/jenkins-common.sh: line 51: /opt/openshift/passwd: Permission denied

As seen here:
https://ci.openshift.redhat.com/jenkins/job/jenkins/120/console

kube-slave-common.sh not working with a new jenkins-slave imagestream

With my OpenShift Origin 1.3 setup I get problems when I try to make a new jenkins-slave image.
I've made a proper new imagestream with the label "role=jenkins-slave" for my jenkins-slave.

But then Jenkins gets a corrupted config.xml because the go template is not working on my system:

oc v1.3.0
kubernetes v1.3.0+52492b4
features: Basic-Auth

Server https://10.2.2.2:8443
openshift v1.3.0
kubernetes v1.3.0+52492b4

The {{index ...}} is not working for me.

https://github.com/openshift/jenkins/blob/master/2/contrib/jenkins/kube-slave-common.sh#L89

example:
oc get is/jenkins-slave-maven3-ibmjdk8 --template={{if index .metadata.annotations "slave-directory"}}{{index .metadata.annotations "slave-directory"}}{{else}}${DEFAULT_SLAVE_DIRECTORY}{{end}}
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'oc get resource/<resource_name>' instead of 'oc get resource resource/<resource_name>'

When I process the whole template:

oc get is/jenkins-slave-maven3-ibmjdk8 -o templatefile --template template.txt
error: error parsing template <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>

    <name>{{.metadata.name}}</name>
    <image>{{.status.dockerImageRepository}}</image>
    <privileged>false</privileged>
    <command></command>
    <args></args>
    <instanceCap>5</instanceCap>
    <volumes/>
    <envVars/>
    <nodeSelector/>
    <serviceAccount>${oc_serviceaccount_name}</serviceAccount>
    <remoteFs>{{if index .metadata.annotations \"slave-directory\"}}{{index .metadata.annotations \"slave-directory\"}}{{else}}${DEFAULT_SLAVE_DIRECTORY}{{end}}</remoteFs>
    <label>{{if index .metadata.annotations \"slave-label\"}}{{index .metadata.annotations \"slave-label\"}}{{else}}${name}{{end}}</label>
  </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>

, template: output:12: unexpected "\" in operand

Sample does not work

When I follow the sample, this step fails:

cat job.xml | curl -X POST -H "Content-Type: application/xml" -H "Expect: " --data-binary @- http://$JENKINS_ENDPOINT/createItem?name=rubyJob

I looked into the container and saw this:

# docker logs -f k8s_jenkins-container.1af77831_jenkins-1-p9e0z_test_e9eee92c-1c21-11e5-a188-005056be126f_f02d302e
Running from: /usr/lib/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
Jun 26, 2015 4:47:16 PM winstone.Logger logInternal
INFO: Beginning extraction from war file
Jun 26, 2015 4:47:16 PM winstone.Logger logInternal
INFO: Winstone shutdown successfully
Jun 26, 2015 4:47:16 PM winstone.Logger logInternal
SEVERE: Container startup failed
java.io.FileNotFoundException: /var/jenkins_home/war/META-INF/MANIFEST.MF (No such file or directory)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
        at winstone.HostConfiguration.getWebRoot(HostConfiguration.java:280)
        at winstone.HostConfiguration.<init>(HostConfiguration.java:83)
        at winstone.HostGroup.initHost(HostGroup.java:66)
        at winstone.HostGroup.<init>(HostGroup.java:45)
        at winstone.Launcher.<init>(Launcher.java:143)
        at winstone.Launcher.main(Launcher.java:354)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at Main._main(Main.java:293)
        at Main.main(Main.java:98)

Keep slave build cache

Is it possible to keep dependent libraries and intermediate compiled files between multiple runs of a build job?

I'm working on an sbt slave to compile a Scala project. Sbt is really slow on first dependency resolution, so I need to keep the sbt data cache between runs.

My first idea is to enable persistent storage on slave pods. A second approach is to zip the cache and store it on the master with some jenkins plugin/feature.

Is this feature already implemented?

Thanks

s2i customization makes image unstartable

Hello

I'm trying to customize the jenkins image openshift/jenkins-1-rhel7. The original image seems to work fine.

As a base I use: https://github.com/siamaksade/jenkins-s2i-example

In order to customize the image, I have the following buildconfig:

[...]
            "source": {
                "contextDir": "jenkins-s2i/s2i/master",
                "git": {
                    "uri": "https://git.repo.intern/scm/team/openshift.git",
                    "ref": "develop"
                },
                "sourceSecret": {
                    "name": "builder"
                },
                "type": "Git"
            },
            "strategy": {
                "sourceStrategy": {
                    "from": {
                        "kind": "ImageStreamTag",
                        "name": "jenkins-1-rhel7:latest",
                        "namespace": "openshift"
                    }
                },
                "type": "Source"
[...]

The build itself works fine (here only the source part):

[...]
I0809 05:35:07.885356       1 docker.go:622] Attaching to container "74ac9f156f6837cfb60aef16974367e4f88a5f84bf50030d6f67caf190c85068" ...
I0809 05:35:07.885832       1 docker.go:631] Starting container "74ac9f156f6837cfb60aef16974367e4f88a5f84bf50030d6f67caf190c85068" ...
---> Copying repository files ...
---> Installing Jenkins 4 plugins using /opt/openshift/plugins.txt ...
Downloading git-2.4.4 ...
Downloading scm-api-1.0 ...
Downloading git-client-1.19.6 ...
Downloading greenballs-1.15 ...
---> Removing sample Jenkins job ...
---> Installing new Jenkins configuration ...
I0809 05:35:13.792601       1 docker.go:689] Invoking postExecution function
I0809 05:35:13.792678       1 sti.go:289] No user environment provided (no environment file found in application sources)
E0809 05:35:13.792828       1 sti.go:571] Error reading docker stdout, EOF
I0809 05:35:13.838642       1 docker.go:734] Committing container with dockerOpts: {Container:74ac9f156f ....
[...]

But when starting the built image i get the following error:

touch: cannot touch ‘/var/jenkins_home/copy_reference_file.log’: Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?

It SEEMS that the original /usr/local/bin/jenkins startup file is being used and not the customized one? Just a guess.

Any idea?

What is the recommended way to securely manage jenkins credentials

I'd like to avoid manual operations, in this case creating Jenkins credentials via the Jenkins UI after deployment.

I read in the documentation that we can put credentials.xml into source control. But it doesn't seem very secure …

What is the recommended way to manage Jenkins credentials in a secure way?

Is there a way to synchronize an OpenShift secret with Jenkins credentials?

When upgrading plugins in a Jenkins image plugins are not overwritten

If we are using persistent storage and mounting a PV on $JENKINS_HOME, then $JENKINS_HOME/plugins/* survives pod restarts. If, however, we update the Jenkins image to one that includes a newer plugin version, the plugin on the PV is used when Jenkins starts, not the updated one.

This comment suggests the behaviour is intentional: https://github.com/openshift/jenkins/blob/master/2/contrib/jenkins/install-plugins.sh#L54. I wonder if we could have an env var or something to say override instead of skip; would that make sense?
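A sketch of what such a flag could look like (`OVERRIDE_PV_PLUGINS` is an invented name, not an existing variable in install-plugins.sh; the temp directories stand in for the image and the PV):

```shell
#!/bin/sh
# Skip plugins already present on the PV unless the override flag is set.
image_dir=$(mktemp -d); pv_dir=$(mktemp -d)
echo "v2" > "$image_dir/git.jpi"   # newer copy shipped in the image
echo "v1" > "$pv_dir/git.jpi"      # older copy persisted on the PV

for plugin in "$image_dir"/*.jpi; do
  name=$(basename "$plugin")
  if [ -e "$pv_dir/$name" ] && [ -z "$OVERRIDE_PV_PLUGINS" ]; then
    echo "skipping $name (already present on PV)"
  else
    cp "$plugin" "$pv_dir/$name"
  fi
done

cat "$pv_dir/git.jpi"   # still v1 unless OVERRIDE_PV_PLUGINS was set
```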

flake on publishing test success

You'll see this in the merge output:

+ test_pull_requests --mark_test_success --repo jenkins --config /var/lib/jenkins/.test_pull_requests_jenkins.json
  Marking SUCCESS for pull request #--repo in repo ''
/usr/share/ruby/net/http/response.rb:119:in `error!': 404 "Not Found" (Net::HTTPServerException)
	from /bin/test_pull_requests:543:in `block in get_comments'
	from /bin/test_pull_requests:535:in `each'
	from /bin/test_pull_requests:535:in `get_comments'
	from /bin/test_pull_requests:1074:in `get_comment_matching_regex'
	from /bin/test_pull_requests:1090:in `get_comment_with_prefix'
	from /bin/test_pull_requests:779:in `mark_test_success'
	from /bin/test_pull_requests:2358:in `<main>'
Build step 'Execute shell' marked build as failure

About the user for uid 1001

When configuring the SSH key for the Jenkins container, ssh-keygen reports "No user exists for uid 1001".

Some info FYI:

$ echo $USER


$ echo $UID

1001

$ id jenkins

uid=997(jenkins) gid=995(jenkins) groups=995(jenkins)

$ cat /opt/openshift/passwd

...
jenkins:x:1001:0:Jenkins Continuous Integration Server:/var/lib/jenkins:/bin/false

The problem seems to be that no user exists for uid 1001. The image appears to try to create a jenkins user with uid 1001 (see /opt/openshift/passwd above), right? But `id jenkins` shows the jenkins user's uid is actually 997.
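For reference, a sketch of the usual fix for this class of problem: generate a passwd entry for the effective UID at container startup, then point user lookups at it with nss_wrapper. The paths and the nss_wrapper lines are illustrative, not the image's exact mechanism:

```shell
#!/bin/bash
set -euo pipefail

# In the real container this would be: uid=$(id -u)
uid=1001
gid=0

# Build a passwd file that contains an entry for the arbitrary UID.
passwd_file=$(mktemp)
grep -v '^jenkins:' /etc/passwd > "$passwd_file" || true
echo "jenkins:x:${uid}:${gid}:Jenkins Continuous Integration Server:/var/lib/jenkins:/bin/false" >> "$passwd_file"

# With nss_wrapper installed, user lookups (ssh-keygen, whoami, ...) can be
# pointed at the generated file:
#   export LD_PRELOAD=libnss_wrapper.so
#   export NSS_WRAPPER_PASSWD="$passwd_file" NSS_WRAPPER_GROUP=/etc/group

grep '^jenkins:x:1001:' "$passwd_file"
```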

Add podTemplate volume to allow persisting .m2/repository for Maven builds

The existing template, which is used to generate the podTemplate from an imagestream:

# convert_is_to_slave converts the OpenShift imagestream to a Jenkins Kubernetes
# Plugin slave configuration.
function convert_is_to_slave() {
  [ -z "$oc_cmd" ] && return
  local name=$1
  local template_file=$(mktemp)
  local template="
  <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
    <inheritFrom></inheritFrom>
    <name>{{.metadata.name}}</name>
    <instanceCap>5</instanceCap>
    <idleMinutes>0</idleMinutes>
    <label>{{if not .metadata.annotations}}${name}{{else}}{{if index .metadata.annotations \"slave-label\"}}{{index .metadata.annotations \"slave-label\"}}{{else}}${name}{{end}}{{end}}</label>
    <serviceAccount>${oc_serviceaccount_name}</serviceAccount>
    <nodeSelector></nodeSelector>
    <volumes/>
    <containers>
      <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        <name>jnlp</name>
        <image>{{.status.dockerImageRepository}}</image>
        <privileged>false</privileged>
        <alwaysPullImage>false</alwaysPullImage>
        <workingDir>/tmp</workingDir>
        <command></command>
        <args>\${computer.jnlpmac} \${computer.name}</args>
        <ttyEnabled>false</ttyEnabled>
        <resourceRequestCpu></resourceRequestCpu>
        <resourceRequestMemory></resourceRequestMemory>
        <resourceLimitCpu></resourceLimitCpu>
        <resourceLimitMemory></resourceLimitMemory>
        <envVars/>
      </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
    </containers>
    <envVars/>
    <annotations/>
    <imagePullSecrets/>
    <nodeProperties/>
  </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
  "
  echo "${template}" > ${template_file}
  $oc_cmd get -n "${PROJECT_NAME}" is/${name} -o templatefile --template ${template_file}
  rm -f ${template_file} &>/dev/null
}

does not allow mounting a volume to, for example, persist the .m2/repository used by the pod when building a Maven project.

The proposal is to add a PersistentVolumeClaim volume, parameterized by claimName and mountPath:

 <volumes>
   <org.csanchez.jenkins.plugins.kubernetes.volumes.PersistentVolumeClaim>
     <mountPath>/home/jenkins/.m2</mountPath>
     <claimName>jenkins-maven</claimName>
     <readOnly>false</readOnly>
   </org.csanchez.jenkins.plugins.kubernetes.volumes.PersistentVolumeClaim>
 </volumes>

Jenkins service account is not propagated to slaves

Jenkins slaves are missing the jenkins service account and only have the default one. They should have the same one as the master.

My current workaround is:

#!/usr/bin/groovy
def k8s_token = '<unknown>'

node ('master') {
    k8s_token = readFile '/var/run/secrets/kubernetes.io/serviceaccount/token'
}

node ('nodejs') {
    // use ${k8s_token}
}

I would like to get rid of it.
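A cleaner sketch, using the Kubernetes plugin's podTemplate step (serviceAccount is a parameter of that step; the label and image name below are illustrative):

```groovy
// Declare the agent pod inline and give it the jenkins service account, so
// the token is mounted in the slave pod itself instead of being copied over
// from the master.
podTemplate(label: 'nodejs-sa', serviceAccount: 'jenkins', containers: [
    containerTemplate(name: 'jnlp',
                      image: 'openshift/jenkins-agent-nodejs-8-centos7',
                      args: '${computer.jnlpmac} ${computer.name}')
]) {
    node('nodejs-sa') {
        // the jenkins service account token is now available at the standard path
        sh 'ls /var/run/secrets/kubernetes.io/serviceaccount/token'
    }
}
```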

Git plugin

Hi,

Since Git is the preferred SCM to be used with OpenShift, is it desired behavior that the Git plugin is not included in the image?

Regards

Frequently hitting OOM

After updating, my Jenkins server is frequently hitting OOM and crashing. It seems to be Java reporting the OOM rather than Docker enforcing the quota. Metrics report that it only maxes out at 1.3GB, even though it's allocated 2GB.

Perhaps this change is the culprit: 8afc4f8

With an earlier version, before importing the updated image, it would run fine with 1GB used and 2GB available.
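A tuning sketch, assuming the heap-related env vars the openshift/jenkins image has documented (JAVA_MAX_HEAP_PARAM, CONTAINER_HEAP_PERCENT); verify the names against your image version:

```shell
# Cap the heap explicitly instead of relying on the image's computed default:
oc set env dc/jenkins JAVA_MAX_HEAP_PARAM=-Xmx1536m

# ...or lower the fraction of the container memory limit given to the heap,
# leaving more headroom for metaspace, threads, and native allocations:
oc set env dc/jenkins CONTAINER_HEAP_PERCENT=0.5
```

Heap is not the JVM's only memory consumer, so an -Xmx equal to the container limit will still OOM.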

duplicated openshift-pipeline in /var/lib/jenkins/plugin folder

I have dumped my desired Jenkins dependencies in a plugin.txt file.

Following the instructions for the s2i custom build, I ended up with two copies of openshift-pipeline in the /var/lib/jenkins/plugin folder:

41 May 19 03:21 openshift-pipeline.hpi -> /usr/lib64/jenkins/openshift-pipeline.hpi
4124260 May 19 03:21 openshift-pipeline.jpi

This also happened with:

  • credentials.hpi
  • durable-task.hpi
  • kubernetes.hpi

Pass ENV Var to the podTemplate

Since OCP 3.6 we can pass env vars to the BuildConfig/Pipeline, but I don't see how the Maven podTemplate in the Jenkins 2 Docker image (used to configure the pod where the build takes place, as described in a Groovy pipeline script) can reuse such env vars.

Is there a trick to pass them, or do we have to update this file to support it?

Error: build error: contrib/openshift: no such file or directory

I was having trouble building the rhel7 version so I switched to the centos version. Now I see this error:

Step 6 : COPY ./contrib/openshift /opt/openshift
928 F1014 19:21:03.004407 1 builder.go:204] Error: build error: contrib/openshift: no such file or directory

Add the master url config to global Jenkins configs

The OpenShift master url should be configurable in the global Jenkins configuration, with one chosen as the default. Build steps should default to the chosen one and show a dropdown list for picking the master url. A text field in each build step means a change in the url requires every build step to be manually updated by the user.

viewing jenkins log files

It should be possible to "docker exec" into the Jenkins container and view the Jenkins logs under /var/log/jenkins.

extending image with s2i: proxy env variables for build only?

Hello

I have another problem when extending the Jenkins image via s2i.
I added the following settings to the build config:

[...]
        "strategy": {
            "sourceStrategy": {
                "env": [
                    {
                        "name": "https_proxy",
                        "value": "proxy:3128"
                    },
                    {
                        "name": "http_proxy",
                        "value": "proxy:3128"
                    },
                    {
                        "name": "no_proxy",
                        "value": "openshift.default.svc.cluster.local,localhost"
                    }
[...]

Why? I need to use a proxy in order to download the additional plugins from jenkins-ci.
What's the problem?
The problem is that these env variables also exist in the running image, not only in the builder. This forces me to have an excessively long no_proxy host list (partial wildcards like *.intern don't work), so I have to list EVERY possible internal host.

So my question:
Is there a way to have these proxy settings only while assembling the image, or to unset them after that?
I don't want to copy the complete run script just to extend it with the line `unset https_proxy`.

Any suggestions?
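One workaround sketch, assuming Jenkins runs via a DeploymentConfig named jenkins: leave the proxy vars on the BuildConfig (so plugin downloads still work) and explicitly unset them on the deployment:

```shell
# The trailing "-" tells `oc set env` to remove each variable from the
# running deployment; the BuildConfig's sourceStrategy env is untouched.
oc set env dc/jenkins https_proxy- http_proxy- no_proxy-
```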

Thanks
Dakky

can not perform oc operation in RHEL/Jenkins image

Hi,

I have created a pod in OpenShift Enterprise using "oc new-app registry.access.redhat.com/openshift3/jenkins-1-rhel7", and I can access the Jenkins console fine.

But whenever I try to run "oc" commands from a Jenkins job, it gives me "default cluster has no server defined".

There is no such issue with the jenkins/centos image.

Any thoughts?
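A workaround sketch, assuming the job runs in a pod with the service account token mounted at the standard path:

```shell
# Log in with the pod's service account so oc has both a server and credentials.
oc login "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}" \
  --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```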

[Pipeline-sync] Pipeline build, started before Jenkins is deployed, gets deleted

If I create a pipeline BC and trigger it for the first time, it triggers a Jenkins deployment into that namespace and deletes the build. Starting it a second time, after Jenkins is deployed, works.

Steps to reproduce:

  1. $ oc cluster up --version=v1.3.0-alpha.3 && oc process -f https://github.com/tnozicka/nodejs-ex/raw/kontinu8-next/.openshift-pipeline/pipeline-template.yaml -v 'GIT_URL=https://github.com/tnozicka/nodejs-ex.git,GIT_REF=kontinu8-next' | oc apply -f - && oc start-build nodejs-ex-pipeline && oc get build
  2. you can see build nodejs-ex-pipeline-1

  3. wait until Jenkins is deployed and ready

  4. $ oc get build # shows no build at all
  5. $ oc start-build nodejs-ex-pipeline
  6. $ oc get build # will show only the second one (nodejs-ex-pipeline-2) (+the source build)

@bparees

Enabling Systemd service in jenkins-1-centos7 image

I have a situation where I need to start some services within this Jenkins container to make it work in our project, so I need systemd enabled in order to do that...

As of now I get the below error when I try to run the "systemctl" command within this container:

Failed to get D-Bus connection: Operation not permitted

which is expected.

In my research I found that if we use the below Dockerfile to create an image and then run a container, we should be able to run systemctl commands:

FROM centos:7
MAINTAINER "you" [email protected]
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*; \
rm -f /etc/systemd/system/*.wants/*; \
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*; \
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]

Now I believe this should work fine combined with the Dockerfile provided for the given image, so I updated the Dockerfile in the slave-base folder by adding the above commands.

After building the image successfully and running it with the below command:

docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 8080:8080 image-name

the systemctl command still fails...

My question is: what else do I need to do to enable systemctl commands?
There are currently 3 slave folders in this repository; do I need to update all three?

Fail the build when a plugin fails to install

Currently, when a plugin fails to install (99% of the time a download error), the installer finishes but the build continues, which results in an inconsistent outcome: Jenkins gets deployed, but without the plugins that failed to install.
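A sketch of the fail-fast behavior requested. The download here is simulated; the real installer would track curl's exit status per plugin the same way:

```shell
#!/bin/bash
set -uo pipefail

# Simulated download: pretend "bad-plugin" always fails to fetch. The real
# installer would run curl against the update site and return its status.
download() {
  [ "$1" != "bad-plugin" ]
}

failed=()
for plugin in git workflow-aggregator bad-plugin; do
  download "$plugin" || failed+=("$plugin")
done

# Instead of silently continuing, report every failed plugin; the real
# installer would `exit 1` here so the image build is marked as failed.
if [ "${#failed[@]}" -gt 0 ]; then
  echo "Failed to install: ${failed[*]}"
fi
```

Collecting all failures before aborting keeps the error report complete instead of stopping at the first bad plugin.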
