kubevirt / project-infra
Project infrastructure administrative tools
License: Apache License 2.0
Deleting the workspace is intentionally the signal for vagrant to recreate its VMs. However, since this also means that we lose the .vagrant files, vagrant refuses to properly clean up the old VMs, because it thinks that someone else is using these VMs.
This leads to an immediate test failure. See https://rfenkhuber.fedorapeople.org/jenkins/270/console.log.
We need a docker proxy which can cache frequently used images within the cluster. @danielBelenky Can you look into this?
Regarding the branch protection configuration, @awels and @aglitke should decide which checks they need for merging and then we can enable them here.
See configuration for feasible examples.
/cc @fabiand
We had to limit each job type to run only once on a node at the same time. This limitation no longer applies. Remove it.
Once #168 is merged, the results of the reports will be available in a view that looks like
It would be nice to have those cells with a background color that represents the overall score of that report (i.e. fewer flaky tests -> greener).
Once kubernetes/test-infra#19009 is merged, we should update our release container with the new version. This will avoid in the future situations like resolved in #600.
Move the jobs from templates to kubevirt/project-infra/periodics.yaml so that they can be updated directly via job configs instead of rolling the config out via ansible (i.e. as part of updating prow).
As the jobs rely on a configmap from the namespace, we need to find a way to have that configmap also available there; see the sketch below.
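A minimal sketch of what such an entry in periodics.yaml could look like, with the configmap mounted into the job pod (the job name, image, entrypoint and configmap name below are placeholders, not the actual values):

periodics:
- name: periodic-publish-flakefinder-report          # placeholder name
  cron: "0 1 * * *"
  decorate: true
  spec:
    containers:
    - image: index.docker.io/kubevirtci/flakefinder  # placeholder image
      command:
      - /app/robots/flakefinder/app.binary           # placeholder entrypoint
      volumeMounts:
      - name: flakefinder-config
        mountPath: /etc/flakefinder                  # placeholder mount path
    volumes:
    - name: flakefinder-config
      configMap:
        name: flakefinder-config                     # the configmap mentioned above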
Currently only the master branch gets reported. We need to create jobs for the release-* branches also.
While we're at that we might consider adding a root html that points us to all available reports starting from here: https://storage.googleapis.com/kubevirt-prow/reports/flakefinder/index.html
We should upgrade the label-sync. We currently use this one.
/kind enhancement
Right now, we only run one build of the functional tests in the whole CI cluster. Allow one concurrent build per node.
Would be nice to have some links to go back to previous pages when you browse summaries.
The job syncing the labels for repo https://github.com/nmstate/kubernetes-nmstate fails with an error indicating that the configuration for the label priority/highest seems to have an invalid color value.
Log of latest job run:
{"client":"github","component":"label_sync","file":"prow/github/client.go:574","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"Throttle(300, 100)","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","file":"label_sync/main.go:840","func":"main.syncOrg","level":"info","msg":"Found 1 repos","org":"nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","file":"label_sync/main.go:360","func":"main.loadLabels.func1","level":"info","msg":"Listing labels for repo","org":"nmstate","repo":"kubernetes-nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"client":"github","component":"label_sync","file":"prow/github/client.go:574","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"GetRepoLabels(nmstate, kubernetes-nmstate)","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","file":"label_sync/main.go:846","func":"main.syncOrg","level":"info","msg":"Syncing labels for 1 repos","org":"nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"color":"9.999e+09","component":"label_sync","file":"label_sync/main.go:414","func":"main.change","label":"priority/highest","level":"info","msg":"change","repo":"kubernetes-nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","file":"label_sync/main.go:566","func":"main.RepoUpdates.DoUpdates","level":"info","msg":"Applying 1 changes","org":"nmstate","repo":"kubernetes-nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"client":"github","component":"label_sync","file":"prow/github/client.go:574","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"UpdateRepoLabel(nmstate, kubernetes-nmstate, priority/highest, priority/highest, 9.999e+09)","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","error":"failed to list labels: [status code 422 not one of [200], body: {\"message\":\"Validation Failed\",\"errors\":[{\"resource\":\"Label\",\"code\":\"invalid\",\"field\":\"color\"}],\"documentation_url\":\"https://developer.github.com/v3/issues/labels/#update-a-label\"}]","file":"label_sync/main.go:729","func":"main.main","level":"fatal","msg":"failed to update nmstate","severity":"fatal","time":"2020-06-25T23:19:56Z"}
Steps to reproduce:
As of #209 the branch protection has been set up per repo. This should be enhanced and simplified, as in: enable it for all branches org-wide, i.e.
branch-protection:
  protect: true
  required_status_checks:
    contexts:
    - dco
    - continuous-integration/travis-ci/pr
    - coverage/coveralls
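For reference, a sketch of where that snippet could sit in the prow config to apply org-wide (placing it under orgs/kubevirt is my assumption; the contexts are carried over from above):

branch-protection:
  orgs:
    kubevirt:
      protect: true
      required_status_checks:
        contexts:
        - dco
        - continuous-integration/travis-ci/pr
        - coverage/coveralls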
BUT when testing locally with phaino there were a couple of problems:
Since around 2020-06-09 the label-sync prow job has been failing.
The error log looks like this:
time="2020-06-16T03:16:15Z" level=fatal msg="failed to update nmstate" error="failed to list labels: [status code 404 not one of [201], body: {\"message\":\"Not Found\",\"documentation_url\":\"https://developer.github.com/v3/issues/labels/#create-a-label\ "} status code 404 not one of [201], body:
...
Suspicion is that the last config update didn't work somehow, which may be confirmed by this log message from the last successful job run:
time="2020-06-07T23:17:19Z" level=warning msg="Repo isn't inside orgs" org=nmstate orgs=kubevirt repo="nmstate/kubernetes-nmstate"
Further suspicion is that this is somehow related to the access from prow to the nmstate repositories, which might be indicated by label-sync failing with 404 when trying to list the existing labels.
Jobs for flakefinder report generation are missing configuration for org and repo.
Example for missing configuration:
Example for correct configuration:
Relates to: cluster-network-addons-operator#447
/triage build-officer
Now with conformance tests merged we have, at least for the core providers, a much higher guarantee that they are working. Now is the time to start thinking about building and pushing these clusters as postsubmits and e.g. hardcoding the cluster shasums in gocli.
@jean-edouard @qinqon you two were pretty interested in this important move. Would you two organize on that?
Currently when the tests fail we don't have the prow visualization of the junit xml; we need to update that.
WIP is started in https://github.com/dhiller/project-infra/tree/expose-junit-xml
GitHub projects usually have some badges showing the state of the project (unit test results, static analysis, integration CI, etc.), so it's like a semaphore showing the project's health.
Would be nice to have the "severity" from the flakefinder result as badges so we can link to those on GitHub projects, maybe one badge per time range: 24h, 168h, 672h.
To have fresher data we need to add a job and extend flakefinder so that it produces reports for hourly time windows; see the sketch below.
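A sketch of what an hourly report job could look like, assuming flakefinder grows a flag to restrict the report window (the job name, image, entrypoint and the --merged=1h flag are assumptions, not existing values):

periodics:
- name: periodic-publish-flakefinder-hourly-report   # placeholder name
  cron: "5 * * * *"                                   # once per hour
  decorate: true
  spec:
    containers:
    - image: index.docker.io/kubevirtci/flakefinder   # placeholder image
      command:
      - /app/robots/flakefinder/app.binary            # placeholder entrypoint
      args:
      - --merged=1h                                   # assumed flag for the report time window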
The flakefinder and indexpagecreator tools already have make targets that do everything including updating image shas on jobs and pushing the new images. We now need to create postsubmit jobs that build, push and commit the updated job configurations.
Pods are failing:
1dfb96da-b623-11ea-9363-0a580a820cc5 0/2 Init:0/2 0 33m periodic-project-infra-branch-protector
1e0245fe-b623-11ea-9363-0a580a820cc5 0/2 Init:0/2 0 33m periodic-project-infra-label-sync-kubevirt
1e0646fe-b623-11ea-9363-0a580a820cc5 0/2 Init:0/2 0 33m periodic-project-infra-label-sync-nmstate
periodic-project-infra-branch-protector pod events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 37m kubelet, ovirt-srv20.phx.ovirt.org MountVolume.SetUp succeeded for volume "tools"
Normal SuccessfulMountVolume 37m kubelet, ovirt-srv20.phx.ovirt.org MountVolume.SetUp succeeded for volume "logs"
Normal SuccessfulMountVolume 37m kubelet, ovirt-srv20.phx.ovirt.org MountVolume.SetUp succeeded for volume "oauth"
Normal Scheduled 37m default-scheduler Successfully assigned 1dfb96da-b623-11ea-9363-0a580a820cc5 to ovirt-srv20.phx.ovirt.org
Normal SuccessfulMountVolume 37m kubelet, ovirt-srv20.phx.ovirt.org MountVolume.SetUp succeeded for volume "gcs-credentials"
Warning FailedMount 10m (x21 over 37m) kubelet, ovirt-srv20.phx.ovirt.org MountVolume.SetUp failed for volume "config" : configmaps "config" not found
Warning FailedMount 6m27s (x23 over 37m) kubelet, ovirt-srv20.phx.ovirt.org MountVolume.SetUp failed for volume "job-config" : configmaps "job-config" not found
Warning FailedMount 62s (x16 over 35m) kubelet, ovirt-srv20.phx.ovirt.org Unable to mount volumes for pod "1dfb96da-b623-11ea-9363-0a580a820cc5_kubevirt-prow-jobs(26b74300-b623-11ea-a0c8-001a4a5b7e12)": timeout expired waiting for volumes to attach/mount for pod "kubevirt-prow-jobs"/"1dfb96da-b623-11ea-9363-0a580a820cc5". list of unattached/unmounted volumes=[config job-config]
periodic-project-infra-label-sync-kubevirt pod events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 38m default-scheduler Successfully assigned 1e0245fe-b623-11ea-9363-0a580a820cc5 to shift-n10.phx.ovirt.org
Normal SuccessfulMountVolume 37m kubelet, shift-n10.phx.ovirt.org MountVolume.SetUp succeeded for volume "logs"
Normal SuccessfulMountVolume 37m kubelet, shift-n10.phx.ovirt.org MountVolume.SetUp succeeded for volume "tools"
Normal SuccessfulMountVolume 37m kubelet, shift-n10.phx.ovirt.org MountVolume.SetUp succeeded for volume "oauth"
Normal SuccessfulMountVolume 37m kubelet, shift-n10.phx.ovirt.org MountVolume.SetUp succeeded for volume "gcs-credentials"
Warning FailedMount 7m22s (x23 over 37m) kubelet, shift-n10.phx.ovirt.org MountVolume.SetUp failed for volume "config" : configmaps "label-config" not found
Warning FailedMount 111s (x16 over 35m) kubelet, shift-n10.phx.ovirt.org Unable to mount volumes for pod "1e0245fe-b623-11ea-9363-0a580a820cc5_kubevirt-prow-jobs(26b66bb2-b623-11ea-a0c8-001a4a5b7e12)": timeout expired waiting for volumes to attach/mount for pod "kubevirt-prow-jobs"/"1e0245fe-b623-11ea-9363-0a580a820cc5". list of unattached/unmounted volumes=[config]
periodic-project-infra-label-sync-nmstate pod events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 38m kubelet, shift-n11.phx.ovirt.org MountVolume.SetUp succeeded for volume "logs"
Normal SuccessfulMountVolume 38m kubelet, shift-n11.phx.ovirt.org MountVolume.SetUp succeeded for volume "tools"
Normal Scheduled 38m default-scheduler Successfully assigned 1e0646fe-b623-11ea-9363-0a580a820cc5 to shift-n11.phx.ovirt.org
Normal SuccessfulMountVolume 38m kubelet, shift-n11.phx.ovirt.org MountVolume.SetUp succeeded for volume "oauth"
Normal SuccessfulMountVolume 38m kubelet, shift-n11.phx.ovirt.org MountVolume.SetUp succeeded for volume "gcs-credentials"
Warning FailedMount 8m18s (x23 over 38m) kubelet, shift-n11.phx.ovirt.org MountVolume.SetUp failed for volume "config" : configmaps "label-config" not found
Warning FailedMount 2m49s (x16 over 36m) kubelet, shift-n11.phx.ovirt.org Unable to mount volumes for pod "1e0646fe-b623-11ea-9363-0a580a820cc5_kubevirt-prow-jobs(26b712db-b623-11ea-a0c8-001a4a5b7e12)": timeout expired waiting for volumes to attach/mount for pod "kubevirt-prow-jobs"/"1e0646fe-b623-11ea-9363-0a580a820cc5". list of unattached/unmounted volumes=[config]
/triage build-officer
/kind bug
I see reports like:
[xUnit] [ERROR] - Test reports were found but not all of them are new. Did all the tests run?
* /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/junit.xml is 20 hr old
in our jobs.
Everything we cache is cached in docker. We can simply clean the whole workspace before we start a build.
We sometimes have issues with networking, so we should run the jobs more often. Otherwise the build officer does not have much data to work on.
/triage build-officer
/kind enhancement
Jobs on master should be copied and tied to a release branch when we do a release; see the sketch below. Otherwise we will, for instance, try to run kubevirtci clusters which are not understood on release branches.
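A sketch of how a copied job could be tied to a release branch via the presubmit branches field (job name, branch, image and entrypoint are placeholders):

presubmits:
  kubevirt/kubevirt:
  - name: pull-kubevirt-e2e-k8s-1.17-release-0.30     # placeholder copy of the master lane
    branches:
    - release-0.30
    always_run: true
    decorate: true
    spec:
      containers:
      - image: kubevirtci/bootstrap:v20200713         # placeholder image
        command:
        - "/usr/local/bin/runner.sh"                  # placeholder entrypoint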
In the latest runs of that job it looks like nothing is run and a weird value is returned as the exit code:
Cleaning up binfmt_misc ...
+ cleanup_binfmt_misc
+ '[' '!' -f /proc/sys/fs/binfmt_misc/status ']'
+ mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
+ echo -1
+ ls -al /proc/sys/fs/binfmt_misc
total 0
drwxr-xr-x. 2 root root 0 Jun 26 04:37 .
dr-xr-xr-x. 1 root root 0 Jun 26 12:43 ..
--w-------. 1 root root 0 Jun 26 04:37 register
-rw-r--r--. 1 root root 0 Jun 26 12:43 status
================================================================================
Done setting up docker in docker.
+ /bin/sh -c 'TARGET_COMMIT=$PULL_BASE_SHA automation/repeated_test.sh'
Test lanes: k8s-1.18 k8s-1.17 k8s-1.16
Test files touched: imageupload_test
Number of per lane runs: 3
+ EXIT_VALUE=123
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Since our servers are hidden behind a firewall, external contributors can't access the build logs. Therefore upload the build logs to a public place, and change the status URL to point to the results.
We should also display non-merged PRs which have /lgtm and /approve and have entered the merge pool.
Testing, ignore
Is this a BUG REPORT or FEATURE REQUEST?:
bug
What happened:
The periodic job failed during build with
error loading module requirements
What you expected to happen:
finish build and run the job
How to reproduce it (as minimally and precisely as possible):
This is part of the periodic jobs that run on prow.
Anything else we need to know?:
https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/logs/periodic-kubevirt-bump-vendor-patch/1288805482441478144
Environment:
Regarding the topic of automating maintenance of reviewers and approvers for jobs, a cron job that syncs the OWNERS file from the repository root to the job directory might be helpful.
Instead of writing something from scratch we should investigate whether there's already a mechanism that takes care of this that we can borrow. I remember they do something similar for github.com/openshift/release .
I.e. daily sync from
https://github.com/nmstate/kubernetes-nmstate/blob/master/OWNERS
to
https://github.com/kubevirt/project-infra/blob/master/github/ci/prow/files/jobs/nmstate/OWNERS
@rmohr WDYT?
Part of this issue might be cleaning up the current root OWNERS file.
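If we end up writing something ourselves, a sketch of a daily sync job could look like this (the job name, image and the clone/copy steps in the shell snippet are purely illustrative; opening the PR is left out):

periodics:
- name: periodic-project-infra-owners-sync            # placeholder name
  cron: "0 3 * * *"                                   # daily
  decorate: true
  extra_refs:
  - org: kubevirt
    repo: project-infra
    base_ref: master
  spec:
    containers:
    - image: kubevirtci/bootstrap:v20200713           # placeholder image
      command:
      - /bin/sh
      - -c
      - |
        # illustrative only: fetch the upstream OWNERS file into the jobs dir
        curl -sL https://raw.githubusercontent.com/nmstate/kubernetes-nmstate/master/OWNERS \
          > github/ci/prow/files/jobs/nmstate/OWNERS
        # ...then create a PR from the change, e.g. via a bot similar to autoowners (omitted here)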
Delete the old label-sync job after label-sync-kubevirt and label-sync-nmstate are rolled out.
Prow supports in-repo job config by adding a .prow.yaml at the project root; to do so, the inrepoconfig feature (https://github.com/kubernetes/test-infra/blob/master/prow/inrepoconfig.md) has to be activated.
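For reference, once the feature is enabled for a repo in the central prow config (via something like in_repo_config), a minimal .prow.yaml in that repo could look like this (job name, image and command are placeholders):

in_repo_config:
  enabled:
    nmstate/kubernetes-nmstate: true

presubmits:
- name: pull-kubernetes-nmstate-unit-test             # placeholder name
  always_run: true
  decorate: true
  spec:
    containers:
    - image: golang:1.14                               # placeholder image
      command:
      - make
      args:
      - test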
Currently the tests look like they are sorted alphanumerically ascending by test name. They should instead be sorted by number of failures descending, to have the tests with the most failures on top.
It contains the following tasks:
prow image builder, which contains the initupload, entrypoint, clonerefs and sidecar binaries. This image is used to build the prow utility images. #587
Based on the prow image builder, build the 4 utility images: initupload, entrypoint, clonerefs and sidecar.
Build bootstrap images
Example:
Daily: https://storage.googleapis.com/kubevirt-prow/reports/flakefinder/kubevirt/kubevirt/flakefinder-2020-08-22-024h.html
Weekly: https://storage.googleapis.com/kubevirt-prow/reports/flakefinder/kubevirt/kubevirt/flakefinder-2020-08-22-168h.html
Daily is missing failures from PRs kubevirt/kubevirt#4000, kubevirt/kubevirt#4036, kubevirt/kubevirt#4005
According to the test-infra docs for pod utilities, adding decorate: true should lead to the job running in a clone of the target repository for which the job is configured. This obviously works for presubmit and postsubmit type jobs, but for periodics it doesn't. What I observed was that for periodics the clonerefs container was missing from the pod.
Fixing this would reduce the overhead we have to do when working with periodics that update repositories, i.e. manually cloning repositories and the like could be removed.
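If I read the pod utility docs correctly, periodics only get a clonerefs init container when extra_refs is set, so a sketch of the fix for the autoowners periodic could look like this (cron, image and command are placeholders; the image name is taken from the pod description below):

periodics:
- name: periodic-project-infra-autoowners
  cron: "0 4 * * *"                                   # placeholder schedule
  decorate: true
  extra_refs:                                         # assumed fix: tells clonerefs what to check out
  - org: kubevirt
    repo: project-infra
    base_ref: master
  spec:
    containers:
    - image: docker.io/kubevirtci/autoowners           # see the pod description below
      command:
      - autoowners                                     # placeholder entrypoint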
Example: periodic autoowners update job
❯ oc describe pod 0b1b77d0-912a-11ea-a911-e86a64a85ba2 ↵ INT ⎈ kubevirt-prow-jobs/shift-ovirt-org:8443/[email protected]/kubevirt-prow-jobs
Name: 0b1b77d0-912a-11ea-a911-e86a64a85ba2
Namespace: kubevirt-prow-jobs
Node: ovirt-srv05.phx.ovirt.org/66.187.230.7
Start Time: Fri, 08 May 2020 14:47:23 +0200
Labels: created-by-prow=true prow.k8s.io/build-id= prow.k8s.io/id=0b1b77d0-912a-11ea-a911-e86a64a85ba2 prow.k8s.io/job=periodic-project-infra-autoowners
prow.k8s.io/type=periodic
Annotations: kubernetes.io/limit-ranger:
LimitRanger plugin set: cpu, memory request for container test; cpu, memory request for container sidecar; cpu, memory request for init co...
openshift.io/scc: restricted
prow.k8s.io/job: periodic-project-infra-autoowners
Status: Failed
IP: 10.130.5.151
Init Containers:
initupload:
Container ID: docker://2c01e1f1f7c77d50812b003d2f930072543ef67be79b7d39c0af2858cf6cee35
Image: gcr.io/k8s-prow/initupload:v20200204-7e8cd997a
Image ID: docker-pullable://gcr.io/k8s-prow/initupload@sha256:31d38ccb05c85477321065ffd95486d062049ad37d7645ea8bb0c6dea8a80263
Port: <none>
Host Port: <none>
Command:
/initupload
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 08 May 2020 14:47:26 +0200
Finished: Fri, 08 May 2020 14:47:27 +0200
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 1Gi
Environment:
INITUPLOAD_OPTIONS: {"bucket":"kubevirt-prow","path_strategy":"explicit","gcs_credentials_file":"/secrets/gcs/service-account.json","dry_run":false}
JOB_SPEC: {"type":"periodic","job":"periodic-project-infra-autoowners","buildid":"1258740279360360448","prowjobid":"0b1b77d0-912a-11ea-a911-e86a64a85ba2"}
Mounts:
/secrets/gcs from gcs-credentials (rw)
place-entrypoint:
Container ID: docker://bd284800d2287054d45521f74382db70e18b6be76820dc4be4802c8fa589c06d
Image: gcr.io/k8s-prow/entrypoint:v20200204-7e8cd997a
Image ID: docker-pullable://gcr.io/k8s-prow/entrypoint@sha256:eba25f91a21b311ccccea58e472d684942b50a92f1c9a80de489bbe63b22fa13
Port: <none>
Host Port: <none>
Command:
/bin/cp
Args:
/entrypoint
/tools/entrypoint
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 08 May 2020 14:47:28 +0200
Finished: Fri, 08 May 2020 14:47:28 +0200
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 1Gi
Environment: <none>
Mounts:
/tools from tools (rw)
Containers:
test:
Container ID: docker://7ed23e3bd567451e23ec99fc456cee5f46d67a52ea77757d5b26a42e14ad7654
Image: docker.io/kubevirtci/autoowners@sha256:025f8ba96ffdc6d3adf17a0058898e17a8fe814314ec3c4bd2af9812aeeda7b7
Image ID: docker-pullable://docker.io/kubevirtci/autoowners@sha256:025f8ba96ffdc6d3adf17a0058898e17a8fe814314ec3c4bd2af9812aeeda7b7
Port: <none>
Host Port: <none>
Command:
/tools/entrypoint
State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 08 May 2020 14:47:29 +0200
Finished: Fri, 08 May 2020 14:49:48 +0200
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 1Gi
Environment:
GIT_COMMITTER_NAME: kubevirt-bot
GIT_COMMITTER_EMAIL: [email protected]
GIT_AUTHOR_NAME: kubevirt-bot
GIT_AUTHOR_EMAIL: [email protected]
ARTIFACTS: /logs/artifacts
BUILD_ID: 1258740279360360448
BUILD_NUMBER: 1258740279360360448
CI: true
GOPATH: /home/prow/go
JOB_NAME: periodic-project-infra-autoowners
JOB_SPEC: {"type":"periodic","job":"periodic-project-infra-autoowners","buildid":"1258740279360360448","prowjobid":"0b1b77d0-912a-11ea-a911-e86a64a85ba2"}
JOB_TYPE: periodic
PROW_JOB_ID: 0b1b77d0-912a-11ea-a911-e86a64a85ba2
ENTRYPOINT_OPTIONS: {"timeout":7200000000000,"grace_period":15000000000,"artifact_dir":"/logs/artifacts","args":["/bin/sh","-c","mkdir -p /tmp \u0026\u0026 cd /tmp \u0026\u0026 echo 'cat /etc/github/oauth' \u003e /tmp/git-askpass-helper.sh \u0026
\u0026 export GIT_ASKPASS=/tmp/git-askpass-helper.sh \u0026\u0026 git clone https://github.com/kubevirt/project-infra.git \u0026\u0026 cd project-infra \u0026\u0026 autoowners --dry-run=true --github-login=kubevirt-bot --org=kubevirt --repo=project-infra
--assign=dhiller --target-dir=. --target-subdir=github/ci/prow/files --config-subdir=jobs --github-token-path=/etc/github/oauth\n"],"process_log":"/logs/process-log.txt","marker_file":"/logs/marker-file.txt","metadata_file":"/logs/artifacts/metadata.jso
n"}
Mounts:
/etc/github from token (rw)
/logs from logs (rw)
/tools from tools (rw)
sidecar:
Container ID: docker://0f3b5cfa57042658a7f74228aa11460ee01e44277005c87b6bfce68c06a01278
Image: gcr.io/k8s-prow/sidecar:v20200204-7e8cd997a
Image ID: docker-pullable://gcr.io/k8s-prow/sidecar@sha256:75aedfb96e4f935ee5235c5f8674a8625205bd79c08ab8f415d0fe9c41f123cd
Port: <none>
Host Port: <none>
Command:
/sidecar
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 08 May 2020 14:47:30 +0200
Finished: Fri, 08 May 2020 14:49:49 +0200
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 1Gi
Environment:
JOB_SPEC: {"type":"periodic","job":"periodic-project-infra-autoowners","buildid":"1258740279360360448","prowjobid":"0b1b77d0-912a-11ea-a911-e86a64a85ba2"}
SIDECAR_OPTIONS: {"gcs_options":{"items":["/logs/artifacts"],"bucket":"kubevirt-prow","path_strategy":"explicit","gcs_credentials_file":"/secrets/gcs/service-account.json","dry_run":false},"entries":[{"args":["/bin/sh","-c","mkdir -p /tmp \u0026\u
0026 cd /tmp \u0026\u0026 echo 'cat /etc/github/oauth' \u003e /tmp/git-askpass-helper.sh \u0026\u0026 export GIT_ASKPASS=/tmp/git-askpass-helper.sh \u0026\u0026 git clone https://github.com/kubevirt/project-infra.git \u0026\u0026 cd project-infra \u0026\
u0026 autoowners --dry-run=true --github-login=kubevirt-bot --org=kubevirt --repo=project-infra --assign=dhiller --target-dir=. --target-subdir=github/ci/prow/files --config-subdir=jobs --github-token-path=/etc/github/oauth\n"],"process_log":"/logs/proce
ss-log.txt","marker_file":"/logs/marker-file.txt","metadata_file":"/logs/artifacts/metadata.json"}]}
Mounts:
/logs from logs (rw)
/secrets/gcs from gcs-credentials (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
token:
Type: Secret (a volume populated by a Secret)
SecretName: oauth-token
Optional: false
logs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tools:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
gcs-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: gcs
Optional: false
QoS Class: Burstable
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 50m kubelet, ovirt-srv05.phx.ovirt.org MountVolume.SetUp succeeded for volume "logs"
Normal SuccessfulMountVolume 50m kubelet, ovirt-srv05.phx.ovirt.org MountVolume.SetUp succeeded for volume "tools"
Normal SuccessfulMountVolume 50m kubelet, ovirt-srv05.phx.ovirt.org MountVolume.SetUp succeeded for volume "gcs-credentials"
Normal SuccessfulMountVolume 50m kubelet, ovirt-srv05.phx.ovirt.org MountVolume.SetUp succeeded for volume "token"
Normal Scheduled 50m default-scheduler Successfully assigned 0b1b77d0-912a-11ea-a911-e86a64a85ba2 to ovirt-srv05.phx.ovirt.org
Normal Created 50m kubelet, ovirt-srv05.phx.ovirt.org Created container
Normal Pulled 50m kubelet, ovirt-srv05.phx.ovirt.org Container image "gcr.io/k8s-prow/initupload:v20200204-7e8cd997a" already present on machine
Normal Started 50m kubelet, ovirt-srv05.phx.ovirt.org Started container
Normal Started 50m kubelet, ovirt-srv05.phx.ovirt.org Started container
Normal Created 50m kubelet, ovirt-srv05.phx.ovirt.org Created container
Normal Pulled 50m kubelet, ovirt-srv05.phx.ovirt.org Container image "gcr.io/k8s-prow/entrypoint:v20200204-7e8cd997a" already present on machine
Normal Pulled 50m kubelet, ovirt-srv05.phx.ovirt.org Container image "docker.io/kubevirtci/autoowners@sha256:025f8ba96ffdc6d3adf17a0058898e17a8fe814314ec3c4bd2af9812aeeda7b7" already present on machine
Normal Created 50m kubelet, ovirt-srv05.phx.ovirt.org Created container
Normal Started 50m kubelet, ovirt-srv05.phx.ovirt.org Started container
Normal Pulled 50m kubelet, ovirt-srv05.phx.ovirt.org Container image "gcr.io/k8s-prow/sidecar:v20200204-7e8cd997a" already present on machine
Normal Created 50m kubelet, ovirt-srv05.phx.ovirt.org Created container
Normal Started 50m kubelet, ovirt-srv05.phx.ovirt.org Started container
Add an additional job which does the same thing as the job that tests pull requests, after every merge to master; see the sketch below.
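A sketch of such a mirror job as a postsubmit (job name, image and entrypoint are placeholders; the idea is just to reuse the presubmit's spec under postsubmits, restricted to master):

postsubmits:
  kubevirt/kubevirt:
  - name: push-kubevirt-e2e-k8s-1.17                  # placeholder mirror of the presubmit lane
    branches:
    - master
    decorate: true
    spec:
      containers:
      - image: kubevirtci/bootstrap:v20200713          # placeholder image
        command:
        - "/usr/local/bin/runner.sh"                   # placeholder entrypoint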
We should include repeated test failures from lane pull-kubevirt-check-tests-for-flakes into the flakefinder status reports.
Some random thoughts:
A non-exhaustive list of automation flows which we need to add.
/upload
(or alternatively run a postsubmit job which does the update and creates a PR). There already exists a set of tools which can help us achieve this:
The autoowners job requires an ugly amend of the signed commit to satisfy the DCO check, because autoowners does not support signing the commit itself.
Add a method in kubernetes/test-infra experiment/autobumper/bumper/bumper.go that supports signing the commit and use that in openshift/ci-tools autoowners.
@dhiller we would need another cron for CDI, now that it uses prow.
Periodic job periodic-test-infra-rotten is failing and there are no logs in prow.
The following run
https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/pr-logs/pull/kubevirt_containerized-data-importer/1281/pull-containerized-data-importer-e2e-os-3.11.0-crio/1280485566927867904
failed because of Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
/triage build-officer
flakefinder needs to filter the test results it reports on to the target branch that the PR is merged into. This is to avoid seeing old test failures for other branches on the reported branch.
So the reports are done per branch, per timespan. Therefore it would also make sense to separate them by directory.
As a first step, limit it to master. In the future, we should report for release branches too.
The config updater complains about a missing config name when posting the config map.
Convert the job from a batch job into a periodic cron job that is hosted in kubevirt/project-infra/periodics.yaml
/kind enhancement
Make sure that the clocks of the slaves and the master are not too far apart.