openshift-eng / aos-cd-jobs
License: Apache License 2.0
Jenkins parameters without any default values are considered unset by our stages, so we cannot use them at all (because of set -u).
See #391 (comment)
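A minimal illustration of the failure mode and the usual guard: under set -u, referencing an unset parameter aborts the script, while expanding it with a default does not (the variable handling below is just a sketch, not the actual stage code):

```shell
#!/bin/bash
set -u   # treat unset parameters as errors, as our stages do

# echo "${PULL_REFS}"   # would abort here: "PULL_REFS: unbound variable"

# Expanding with a default makes the parameter safe to reference even when
# Jenkins leaves it unset:
pull_refs="${PULL_REFS:-}"
if [[ -z "${pull_refs}" ]]; then
    echo "PULL_REFS not provided, skipping sync"
else
    echo "syncing ${pull_refs}"
fi
```

This is why a Jenkins parameter with no default is effectively unusable: every reference would need the `${VAR:-}` form.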
I'd really like to run registry- and pruning-related tests for registry code changes, like before:
[testextended][extended:core(ImagePrun|registry)]
Extracted from e-mail discussion @stevekuznetsov
########## STARTING STAGE: SYNC ORIGIN PULL REQUEST 15034 ##########
+ [[ -s /var/lib/jenkins/jobs/kargakis_test/workspace/activate ]]
+ source /var/lib/jenkins/jobs/kargakis_test/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/ffb84452f35d4a28991a0f4e31f3686509895285
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/ffb84452f35d4a28991a0f4e31f3686509895285
++ export PATH=/var/lib/jenkins/origin-ci-tool/ffb84452f35d4a28991a0f4e31f3686509895285/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/ffb84452f35d4a28991a0f4e31f3686509895285/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/kargakis_test/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/kargakis_test/workspace/.config
/tmp/hudson2337358816745890171.sh: line 3: PULL_REFS: unbound variable
++ set +o xtrace
########## FINISHED STAGE: SUCCESS: SYNC ORIGIN PULL REQUEST 15034 [00h 00m 00s] ##########
Build step 'Execute shell' marked build as failure
https://github.com/openshift/aos-cd-jobs/blob/master/jobs/build/ocp/Jenkinsfile#L147 checks out the jenkins repo, but nothing subsequently changes it to the right branch for consumption by oit.
ose_images determines the right branch here:
https://github.com/openshift/aos-cd-jobs/blob/master/build-scripts/ose_images/ose.conf#L158-L165
Some of the branches are already documented, others aren't.
https://github.com/openshift/aos-cd-jobs#jenkins-pipeline-definitions-under-jobs
We should fill in those descriptions. @jupierce @bbguimaraes @tdawson @smunilla
$subject
@jupierce I would assign to me but apparently I don't have rights. Please assign to me.
Opening this as an issue for visibility, to discuss the current process, and perhaps to find actionable improvements.
I would like to understand what the current process is for merging changes to this repo, and what measures are in place to verify changes before they affect people working on other repos that depend on the Jenkins jobs.
In the last couple of weeks I've seen errors in merge jobs of openshift-ansible:
Since bugs here affect a number of developers and PRs, I believe it is worth having a process to reduce the risk of human errors.
Requested in #220 (comment)
@stevekuznetsov This would help in quickly identifying the stages that push a job to high durations.
########## STARTING STAGE: MAKE A TRELLO COMMENT ##########
+ [[ -s /var/lib/jenkins/jobs/ami_build_origin_int_rhel_fork/workspace/activate ]]
+ source /var/lib/jenkins/jobs/ami_build_origin_int_rhel_fork/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/b58d70e4ba8547fb716da04deb467ce17ccf4345
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/b58d70e4ba8547fb716da04deb467ce17ccf4345
++ export PATH=/var/lib/jenkins/origin-ci-tool/b58d70e4ba8547fb716da04deb467ce17ccf4345/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/b58d70e4ba8547fb716da04deb467ce17ccf4345/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/ami_build_origin_int_rhel_fork/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/ami_build_origin_int_rhel_fork/workspace/.config
+ set +o nounset +o errexit
+ [[ -n <none> ]]
+ trello comment 'A fork ami has been created for this card: `_16`' --card-url '<none>'
++ export status=FAILURE
++ status=FAILURE
+ set +o xtrace
########## FINISHED STAGE: FAILURE: MAKE A TRELLO COMMENT ##########
cc: @stevekuznetsov
The OpenShift level of Kubernetes carries multiple patches. Sometimes we need to be able to pull a level of kube that matches our vendor tree. We should automate the process of keeping these patches up to date by copying the UPSTREAM commits to openshift/kubernetes, then auto-creating a bump(k8s.io/kubernetes):<openshift/kubernetes sha> commit in openshift/origin.
hack/move-upstream.sh makes this possible (I think), but it's not set up to be run automatically.
@sttts you are well versed in bash, do you think you could put together a script that does this for us?
@stevekuznetsov once we have a script, can you wire our job?
So we can avoid issues like https://bugzilla.redhat.com/show_bug.cgi?id=1455472 in the future.
Will be easier once we switch to prow.
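A rough sketch of the automation, under the assumption that carried patches are the commits whose subjects start with "UPSTREAM:" (the function name, argument layout, and the use of format-patch/am rather than move-upstream.sh are all illustrative, not the real job):

```shell
# copy_upstream_patches ORIGIN_DIR KUBE_DIR BASE_REF
# Exports every "UPSTREAM:" commit in ORIGIN_DIR since BASE_REF as a patch
# and applies it onto the current branch of KUBE_DIR, then prints the
# subject for the matching bump commit in openshift/origin.
copy_upstream_patches() {
    local origin_dir=$1 kube_dir=$2 base_ref=$3
    git -C "${origin_dir}" log --reverse --format=%H \
        --grep='^UPSTREAM:' "${base_ref}..HEAD" |
    while read -r sha; do
        # format-patch | am avoids needing origin's objects fetched into kube
        git -C "${origin_dir}" format-patch --stdout -1 "${sha}" |
            git -C "${kube_dir}" am -q
    done
    echo "bump(k8s.io/kubernetes):$(git -C "${kube_dir}" rev-parse HEAD)"
}
```

Conflict handling (the interesting part of the real automation) is omitted; `git am` would stop and leave the tree in a conflicted state for a human to resolve.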
It's not necessary anymore
I pushed a commit, tagged the PR to run extended tests, then pushed a new commit. The old extended test job was not canceled (but a new job was started to run the new commit).
old job:
https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended_templates/3/
new job:
https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended_templates/4
The GitHub URL needs "tree" in the path and should start with https://.
OIT has detected a change in the Dockerfile for openshift3/ose-ansible
Source file: github.com/openshift/openshift-ansible/images/installer/Dockerfile.rhel7
This has been automatically reconciled and the new file can be seen here:
http://pkgs.devel.redhat.com/cgit/rpms/aos3-installation-docker/tree/Dockerfile?id=a2b02117af95716e20f383c27ef984effaf41800
During e2e tests. Example:
• Failure [35.369 seconds]
[service-catalog] walkthrough
/data/src/github.com/openshift/origin/cmd/service-catalog/go/src/github.com/kubernetes-incubator/service-catalog/test/e2e/framework/framework.go:89
Run walkthrough-example [It]
/data/src/github.com/openshift/origin/cmd/service-catalog/go/src/github.com/kubernetes-incubator/service-catalog/test/e2e/walkthrough.go:338
failed to wait ClusterServiceBroker to be ready
Expected error:
<*errors.errorString | 0xc420260c30>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
We need to find a way for devs to remove CARRY commits without force-pushing. Creating a revert of a CARRY commit as a SQUASH does not work because git rebase -i doesn't know what to do with the resulting empty commit. We may want to add logic to identify empty commits before actually doing the rebase and drop all those commits automatically.
Found in #258
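One possible pre-rebase pass (a sketch, not existing tooling): walk the range and flag commits whose tree is identical to their parent's tree, which is what a CARRY revert paired with its SQUASH collapses into:

```shell
# List commits in the given range that introduce no tree change; these are
# the ones `git rebase -i` would choke on and that could be dropped up front.
list_empty_commits() {
    local range=$1
    git rev-list --reverse "${range}" |
    while read -r sha; do
        # A commit whose tree matches its first parent's tree is empty.
        if [[ "$(git rev-parse "${sha}^{tree}")" == "$(git rev-parse "${sha}^^{tree}")" ]]; then
            echo "${sha}"
        fi
    done
}
```

The rebase script could then drop these shas automatically before handing the todo list to `git rebase -i`.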
The initial version of the new fork_ami job works only with public repos; changing the flow to git clone locally and update the remote repo with the local copy does not work, for some reason related to git+ssh. We probably want to investigate more.
@jhadvig one of the side-effects of the issues today was that I noticed that sjb/hack/determine_install_upgrade_version.py is returning something other than the input package version for the upgrade target. When does this make sense? I thought we always want to upgrade to the code we just built, right? Maybe I am forgetting something in the history of this script :\
May be related to the recent changes in the integration job
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_install_update/workspace@4/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_install_update/workspace@4/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/9aea3b4f81e266b026e21975a3a6a5a1cfddd890
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/9aea3b4f81e266b026e21975a3a6a5a1cfddd890
++ export PATH=/var/lib/jenkins/origin-ci-tool/9aea3b4f81e266b026e21975a3a6a5a1cfddd890/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/9aea3b4f81e266b026e21975a3a6a5a1cfddd890/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_install_update/workspace@4/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_install_update/workspace@4/.config
++ mktemp
+ script=/tmp/tmp.M96xyH5KQb
+ cat
+ chmod +x /tmp/tmp.M96xyH5KQb
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.M96xyH5KQb openshiftdevel:/tmp/tmp.M96xyH5KQb
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "/tmp/tmp.M96xyH5KQb"'
+ cd /data/src/github.com/openshift/aos-cd-jobs
++ cat ./PKG_NAME
+ pkg_name=origin
+ [[ origin == \o\r\i\g\i\n ]]
+ deployment_type=origin
+ echo origin
++ cat ORIGIN_BUILT_VERSION
+ sudo python sjb/hack/determine_install_upgrade_version.py origin-3.6.0-0.alpha.2.433.d922159.x86_64 --dependency_branch master
Traceback (most recent call last):
File "sjb/hack/determine_install_upgrade_version.py", line 126, in <module>
available_pkgs = sort_pkgs(available_pkgs)
File "sjb/hack/determine_install_upgrade_version.py", line 73, in sort_pkgs
exceptional_pkg["original_pkg"] = copy.deepcopy(pkg)
File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
state = deepcopy(state, memo)
File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/usr/lib64/python2.7/copy.py", line 298, in _deepcopy_inst
state = deepcopy(state, memo)
File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/usr/lib64/python2.7/copy.py", line 329, in _reconstruct
y = callable(*args)
File "/usr/lib64/python2.7/copy_reg.py", line 93, in __newobj__
return cls.__new__(cls, *args)
TypeError: object.__new__(thread.lock) is not safe, use thread.lock.__new__()
Exception AttributeError: AttributeError("'YumRepository' object has no attribute '_sack'",) in <bound method YumRepository.__del__ of <yum.yumRepo.YumRepository object at 0x2cc5590>> ignored
Exception AttributeError: AttributeError("'YumSqlitePackageSack' object has no attribute 'primarydb'",) in <bound method YumSqlitePackageSack.__del__ of <yum.sqlitesack.YumSqlitePackageSack object at 0x2cc54d0>> ignored
++ export status=FAILURE
++ status=FAILURE
+ set +o xtrace
########## FINISHED STAGE: FAILURE: INSTALL ORIGIN [00h 00m 03s] ##########
Similar to the fix in #102 , a fix is needed to compile extended tests using go 1.7.
Slowest test in our queue by far
For two months now.
@stevekuznetsov seems like the docker registry is not built correctly for the integration test?
Today, we have artifact generation and fetching tasks for all the jobs that run off of AWS, as it only requires one hop from the Jenkins master using the origin-ci-tool SSH configuration onto the AWS host. We do not have the same level of artifact gathering for the GCE job, as this job uses the AWS host as an intermediary and connects to GCE from there -- so we grab e.g. the origin-node log from the AWS host, but that host is not running origin.
We need to determine whether we can get the normal artifact gathering logic to apply cleanly to the GCE jobs (maybe by using SSH with a -J jump host), or whether we just need to hard-code a list of artifacts to get us to a better state quickly, if we think the former will be hard to do and we do not expect to have more jobs on GCE in the future.
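For reference, the jump-host approach could look roughly like this; the GCE hostname, artifact directory, and function name are all placeholders, and only the ssh_config path mirrors what the logs above show:

```shell
# Sketch: gather a GCE-side artifact in one hop from the Jenkins master by
# proxying through the AWS host with OpenSSH's ProxyJump (-J). Hostnames,
# the artifact directory, and the function name are hypothetical.
fetch_gce_artifact() {
    local name=$1; shift
    mkdir -p artifacts/gce
    ssh -F ./.config/origin-ci-tool/inventory/.ssh_config \
        -J openshiftdevel gce-node "$@" > "artifacts/gce/${name}"
}
# e.g.: fetch_gce_artifact origin-node.journal sudo journalctl --unit origin-node
```

With this shape, the existing per-artifact gathering calls could stay unchanged and only the ssh invocation would differ for GCE jobs.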
We install docker using the rhel7next* suite of repos, which are synced from the latest RHEL 7 Extras compose in Brew. During this process, we have recently been bringing in skopeo-containers as a dependency:
$ sudo yum --disablerepo=\* --enablerepo=rhel7next\* install docker
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 2:1.12.6-32.git88a4867.el7 will be installed
--> Processing Dependency: docker-client = 2:1.12.6-32.git88a4867.el7 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: docker-common = 2:1.12.6-32.git88a4867.el7 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: docker-rhel-push-plugin = 2:1.12.6-32.git88a4867.el7 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: container-selinux >= 2:2.12-2 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: oci-register-machine >= 1:0-3.10 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: skopeo-containers for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Processing Dependency: libseccomp.so.2()(64bit) for package: 2:docker-1.12.6-32.git88a4867.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.15-1.git583ca40.el7 will be installed
---> Package docker-client.x86_64 2:1.12.6-32.git88a4867.el7 will be installed
---> Package docker-common.x86_64 2:1.12.6-32.git88a4867.el7 will be installed
---> Package docker-rhel-push-plugin.x86_64 2:1.12.6-32.git88a4867.el7 will be installed
---> Package libseccomp.x86_64 0:2.3.1-2.el7 will be installed
---> Package oci-register-machine.x86_64 1:0-3.11.gitdd0daef.el7 will be installed
---> Package oci-systemd-hook.x86_64 1:0.1.7-4.gite533efa.el7 will be installed
--> Processing Dependency: libyajl.so.2()(64bit) for package: 1:oci-systemd-hook-0.1.7-4.gite533efa.el7.x86_64
---> Package skopeo-containers.x86_64 1:0.1.20-2.el7 will be installed
--> Running transaction check
---> Package yajl.x86_64 0:2.0.4-4.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===================================================================================================================================================
Package Arch Version Repository Size
===================================================================================================================================================
Installing:
docker x86_64 2:1.12.6-32.git88a4867.el7 rhel7next-extras 14 M
Installing for dependencies:
container-selinux noarch 2:2.15-1.git583ca40.el7 rhel7next-extras 29 k
docker-client x86_64 2:1.12.6-32.git88a4867.el7 rhel7next-extras 3.3 M
docker-common x86_64 2:1.12.6-32.git88a4867.el7 rhel7next-extras 76 k
docker-rhel-push-plugin x86_64 2:1.12.6-32.git88a4867.el7 rhel7next-extras 1.5 M
libseccomp x86_64 2.3.1-2.el7 rhel7next 56 k
oci-register-machine x86_64 1:0-3.11.gitdd0daef.el7 rhel7next-extras 1.0 M
oci-systemd-hook x86_64 1:0.1.7-4.gite533efa.el7 rhel7next-extras 30 k
skopeo-containers x86_64 1:0.1.20-2.el7 rhel7next-extras 7.9 k
yajl x86_64 2.0.4-4.el7 rhel7next 39 k
Transaction Summary
===================================================================================================================================================
Install 1 Package (+9 Dependent packages)
This means that when we run pre-flight checks in the installer that try to install and use skopeo, they need to grab the bleeding-edge version from the rhel7next* repositories as well; otherwise they fail like so:
Failure summary:
1. Host: localhost
Play: Verify Requirements
Task: openshift_health_check
Message: One or more checks failed
Details: check "docker_image_availability":
Some dependencies are required in order to check Docker image availability.
Error: Package: 1:skopeo-0.1.19-1.el7.x86_64 (oso-rhui-rhel-server-extras)
Requires: skopeo-containers = 1:0.1.19-1.el7
Installed: 1:skopeo-containers-0.1.20-2.el7.x86_64 (@rhel7next-extras)
skopeo-containers = 1:0.1.20-2.el7
Available: 1:skopeo-containers-0.1.17-0.7.git1f655f3.el7.x86_64 (oso-rhui-rhel-server-extras)
skopeo-containers = 1:0.1.17-0.7.git1f655f3.el7
Available: 1:skopeo-containers-0.1.17-1.el7.x86_64 (oso-rhui-rhel-server-extras)
skopeo-containers = 1:0.1.17-1.el7
Available: 1:skopeo-containers-0.1.18-1.el7.x86_64 (oso-rhui-rhel-server-extras)
skopeo-containers = 1:0.1.18-1.el7
Available: 1:skopeo-containers-0.1.19-1.el7.x86_64 (oso-rhui-rhel-server-extras)
skopeo-containers = 1:0.1.19-1.el7
The execution of "/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml"
includes checks designed to fail early if the requirements
of the playbook are not met. One or more of these checks
failed. To disregard these results, you may choose to
disable failing checks by setting an Ansible variable:
openshift_disable_check=docker_image_availability
Failing check names are shown in the failure details above.
Some checks may be configurable by variables if your requirements
are different from the defaults; consult check documentation.
Variables can be set in the inventory or passed on the
command line using the -e flag to ansible-playbook.
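As the failure output itself notes, the check can be skipped from the command line while the repo skew is being sorted out; the invocation is assembled here as a sketch rather than executed (it is a workaround, not a fix for the skopeo version mismatch):

```shell
# Skip only the docker_image_availability pre-flight check; other checks
# still run. Assembled but deliberately not executed here.
disable_check_cmd=(ansible-playbook
    /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
    -e openshift_disable_check=docker_image_availability)
printf '%s ' "${disable_check_cmd[@]}"; echo
```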
We should probably add an oct prepare skopeo command (oct should be well-formed for this, but maybe we want a more generic oct prepare package?). Then, we need to add logic to the package-dockertested job to install skopeo from rhel7next* with the new command, as it does for docker. Furthermore, we need to update the call to the update-dockertested-repo.sh script to add the package containing skopeo to the dockertested repo. Then, we can finally update the job configuration for ami_build_origin_int_rhel_base so that new base AMIs will install the correct version of skopeo.
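Whatever command ends up wrapping it, the repo pinning needed is the same one the docker install above uses; a sketch (the function name is hypothetical, and the yum invocation mirrors the one shown earlier without being executed here):

```shell
# Pull the named package from the same rhel7next* repos that docker came
# from, so skopeo and skopeo-containers stay at matching versions.
# Hypothetical helper; not executed here.
install_bleeding_edge() {
    sudo yum --disablerepo=\* --enablerepo='rhel7next*' install -y "$1"
}
# install_bleeding_edge skopeo
```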
@adammhaile I recently updated push-openshift-online-to-mirrors.sh to improve variable naming and simplify it:
Compare to what it would have looked like originally (before renaming): https://github.com/openshift/aos-cd-jobs/blob/70c3488790dde633c9b8c80b3667650d67872107/build-scripts/rcm-guest/libra-repo-to-mirrors.sh
Can you do this for https://github.com/openshift/aos-cd-jobs/blob/70c3488790dde633c9b8c80b3667650d67872107/build-scripts/rcm-guest/push-to-mirrors.sh ?
Using the EOF trick (https://github.com/openshift/aos-cd-jobs/blob/master/build-scripts/rcm-guest/push-openshift-online-to-mirrors.sh#L56) can really make the script easier to read.
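The "EOF trick" referenced here is streaming the remote body to the shell as a quoted heredoc, so the whole remote script reads as one block. Demonstrated against a local `bash -s` below; in the rcm-guest scripts the left-hand side is an ssh invocation (e.g. `ssh user@host bash -s`) and the body runs remotely:

```shell
# The single-quoted 'EOF' delimiter prevents local variable expansion, so
# everything inside runs verbatim on the receiving shell.
bash -s <<'EOF'
set -euo pipefail
echo "entire remote body in one readable block"
EOF
```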
@adammhaile Please use new rcm script streaming method for:
https://github.com/openshift/aos-cd-jobs/blob/master/jobs/build/stage-to-prod/Jenkinsfile
This is encapsulated in a new Groovy library I'm writing and I'd like to start using everywhere: https://github.com/openshift/aos-cd-jobs/blob/master/jobs/build/ocp/Jenkinsfile#L377
Which is loaded:
https://github.com/openshift/aos-cd-jobs/blob/master/jobs/build/ocp/Jenkinsfile#L55
And defined: https://github.com/openshift/aos-cd-jobs/blob/master/pipeline-scripts/buildlib.groovy
Or update the existing job that pushes releases to Docker Hub to also push origin-gce (if that makes sense).
Using make -j at https://github.com/openshift/aos-cd-jobs/blob/master/sjb/config/test_cases/test_branch_origin_check.yml#L19 makes CI logs harder to understand, as the output of test-unit and test-cmd is interleaved.
Would a PR to remove this "-j" be acceptable?
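If the parallelism is worth keeping, GNU make 4.0+ offers a middle ground: `--output-sync=target` buffers each recipe's output and prints it in one piece, so the two targets' logs would no longer interleave. A self-contained demo (the tiny Makefile here is only for illustration):

```shell
# Two parallel targets; --output-sync=target groups each recipe's output
# (GNU make >= 4.0). .RECIPEPREFIX avoids literal tabs in the heredoc.
tmp=$(mktemp -d)
cat > "${tmp}/Makefile" <<'EOF'
.RECIPEPREFIX = >
a:
>@echo a1; sleep 0.2; echo a2
b:
>@echo b1; sleep 0.2; echo b2
EOF
make -C "${tmp}" -j 2 --output-sync=target a b
```

Without `--output-sync`, a1/b1/a2/b2 can appear in any interleaving; with it, each target's two lines stay together.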
We want to add a new way to get artifacts where we grab the output of a command run so that we can get all sorts of stuff, including but not limited to:
yum list installed
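A sketch of the shape this could take (the function name and artifact directory are hypothetical): run an arbitrary command and stash its combined output under the artifacts directory, preserving the exit status so the gathering step can note failures without aborting:

```shell
# Capture stdout+stderr of any command as a named artifact file.
ARTIFACT_DIR="${ARTIFACT_DIR:-/tmp/artifacts}"
capture_artifact() {
    local name=$1; shift
    mkdir -p "${ARTIFACT_DIR}"
    "$@" > "${ARTIFACT_DIR}/${name}" 2>&1
}

# e.g. the case from the issue; || true so a missing tool doesn't abort
capture_artifact installed-packages.log yum list installed || true
```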
It would be nice to /test fork_ami from a PR and have a fork AMI created without needing to navigate through Jenkins to start it. Eventually we could make the bot post back a link to the AMI, or better yet the oct command to run in order to stand up the newly created AMI (depends on openshift/origin-ci-tool#130).
We should document how the branching logic works off of master.
Tried in https://ci.openshift.redhat.com/jenkins/view/All/job/test_pull_request_origin_kargakis/2/console, got a /tmp/tmp.K0kUIWsnF0: line 9: ORIGIN_TARGET_BRANCH: unbound variable back.
The integration test job today runs hack/test-integration.sh and hack/test-end-to-end{,-docker}.sh. While the latter test normally takes up to 8 min, it may end up hanging for more than an hour (see openshift/origin#15093). We need to investigate whether we can run the integration test in less than an hour* so it won't make any difference billing-wise to have the additional job.
If we want to share Jenkinsfiles between jobs, it would be nice if the updater allowed symlinks.
jobs/starter/upgrade/Jenkinsfile
jobs/dedicated/upgrade --symlink--> ../starter/upgrade
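With symlink support in the updater, the layout above would be as simple as (using a scratch directory here just to demonstrate):

```shell
# dedicated/upgrade reuses the starter Jenkinsfile via a relative symlink
cd "$(mktemp -d)"
mkdir -p jobs/starter/upgrade jobs/dedicated
ln -s ../starter/upgrade jobs/dedicated/upgrade
readlink jobs/dedicated/upgrade
```

The relative target keeps the link valid wherever the repo is checked out.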
test_pull_request_origin_integration has no output from integration tests https://trello.com/c/i580d5R9/129-ci-generate-junit-xml-reports-from-the-origin-integration-test-suite
test_pull_request_origin_integration has no output from e2e tests openshift/origin#13295
test_pull_request_origin_extended_networking_minimal has no output openshift/origin#13320
test_pull_request_origin_extended_conformance_gce doesn't get merged output #78
Caused failures in jobs:
https://ci.openshift.redhat.com/jenkins/job/zz_origin_gce_image/292/consoleText
because registry.access.redhat.com is not pushable. We shouldn't be using registry.access.redhat.com as the default registry for the AMI VM; changing the default registry in docker is a bad idea.
I am wondering if we can store our failure causes inside config in this repo, as opposed to having them buried inside Jenkins.
Today, we run a number of post-build publishers. As only one org.jenkinsci.plugins.postbuildscript.PostBuildScript can be run as a publisher, we end up with a block of buildSteps of type hudson.tasks.Shell. This means that if we have a post-build flow like -- generate artifacts, retrieve artifacts, fetch systemd journals, deprovision cloud resources -- and one of the first hudson.tasks.Shell steps fails, the rest will not run. Although the larger org.jenkinsci.plugins.postbuildscript.PostBuildScript is set to run regardless of whether the job failed or not, the linear flow of hudson.tasks.Shell steps will exit early on any individual failure. We could try to address this by adding || true to our actions in these steps, but in reality we just need a way to parameterize a named_shell_action so we don't always add set -o errexit. That way, failures in an individual step will be ignored and all post-build tasks will run.
/cc @soltysh
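A sketch of the shape such a runner could take (the function and step names are hypothetical, not the actual named_shell_action): every post-build action runs even if an earlier one fails, and only the aggregate status is reported at the end:

```shell
# Run every post-build step even if earlier ones fail; remember the worst
# exit status so the publisher can still mark the build appropriately.
run_post_build() {
    local rc=0 step
    for step in "$@"; do
        echo "---- post-build: ${step}"
        if ! ${step}; then
            echo "post-build step failed (continuing): ${step}" >&2
            rc=1
        fi
    done
    return "${rc}"
}

# run_post_build generate_artifacts retrieve_artifacts fetch_journals deprovision
```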
Build break openshift/origin#16882 was caused by a binary under test built from stage being paired with incompatible, newer docker images from master.
I understand that on GCE, a binary under test may see older docker images, and that this artifact helps somewhat in showing system upgradability without requiring lockstep.
@smarterclayton is the intention to do this the other way around as well? If not, can we have separate sets of images for separate branches in GCE?
The batch merge link on https://origin-sq-status-ci.svc.ci.openshift.org/#/queue points to https://prow.k8s.io/?type=batch
@Kargakis @stevekuznetsov I don't know what repo to open these against.