project-infra's People

Contributors

0xfelix, akalenyu, alicefr, arnongilboa, awels, brianmcarey, brybacki, cwilkers, davidvossel, dhiller, enp0s3, fgimenez, gbenhaim, jean-edouard, kubevirt-bot, lyarwood, maya-r, mazzystr, mhenriks, nunnatsa, ormergi, oshoval, phoracek, qinqon, ramlavi, rhrazdil, rmohr, slintes, xpivarc, zhlhahaha


project-infra's Issues

Background color in flakefinder overview report

Once #168 is merged, the results of the reports will be available in an overview view (see the flakefinder report screenshot).

It would be nice to give those cells a background color that represents the overall score of that report (i.e. fewer flaky tests -> greener).

convert template/label-sync.yaml to prow periodic jobs

Move the jobs from the templates to kubevirt/project-infra periodics.yaml so that they can be updated directly via the job configs instead of being rolled out via ansible as part of a prow update.

Since the jobs rely on a configmap from the prow namespace, we need to find a way to make that configmap available to the periodics as well.
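A minimal sketch of what such a periodic could look like in periodics.yaml, assuming the configmap is made available in the job namespace; the name, schedule, image tag and flags below are only illustrative:

periodics:
- name: periodic-project-infra-label-sync-kubevirt
  interval: 24h
  decorate: true
  spec:
    containers:
    - image: gcr.io/k8s-prow/label_sync:latest  # tag is an assumption
      command:
      - label_sync
      args:
      - --config=/etc/config/labels.yaml
      - --confirm=true
      - --token=/etc/github/oauth
      volumeMounts:
      - name: label-config
        mountPath: /etc/config
      # (github token secret mount omitted for brevity)
    volumes:
    - name: label-config
      configMap:
        name: label-config  # must exist in the job namespace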

label-sync-nmstate: job fails with invalid label error

The job syncing the labels for repo https://github.com/nmstate/kubernetes-nmstate fails with an error indicating that the configuration for the label priority/highest seems to have an invalid color value.

Log of latest job run:

{"client":"github","component":"label_sync","file":"prow/github/client.go:574","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"Throttle(300, 100)","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","file":"label_sync/main.go:840","func":"main.syncOrg","level":"info","msg":"Found 1 repos","org":"nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","file":"label_sync/main.go:360","func":"main.loadLabels.func1","level":"info","msg":"Listing labels for repo","org":"nmstate","repo":"kubernetes-nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"client":"github","component":"label_sync","file":"prow/github/client.go:574","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"GetRepoLabels(nmstate, kubernetes-nmstate)","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","file":"label_sync/main.go:846","func":"main.syncOrg","level":"info","msg":"Syncing labels for 1 repos","org":"nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"color":"9.999e+09","component":"label_sync","file":"label_sync/main.go:414","func":"main.change","label":"priority/highest","level":"info","msg":"change","repo":"kubernetes-nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","file":"label_sync/main.go:566","func":"main.RepoUpdates.DoUpdates","level":"info","msg":"Applying 1 changes","org":"nmstate","repo":"kubernetes-nmstate","severity":"info","time":"2020-06-25T23:19:55Z"}
{"client":"github","component":"label_sync","file":"prow/github/client.go:574","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"UpdateRepoLabel(nmstate, kubernetes-nmstate, priority/highest, priority/highest, 9.999e+09)","severity":"info","time":"2020-06-25T23:19:55Z"}
{"component":"label_sync","error":"failed to list labels: [status code 422 not one of [200], body: {\"message\":\"Validation Failed\",\"errors\":[{\"resource\":\"Label\",\"code\":\"invalid\",\"field\":\"color\"}],\"documentation_url\":\"https://developer.github.com/v3/issues/labels/#update-a-label\"}]","file":"label_sync/main.go:729","func":"main.main","level":"fatal","msg":"failed to update nmstate","severity":"fatal","time":"2020-06-25T23:19:56Z"}

Steps to reproduce:

  • log in to kubevirt openshift
  • switch to ns kubevirt-prow
  • select pods
  • see pods label-sync-nmstate-XXXXX failed

/cc @qinqon @phoracek

Enhance branch protection

As of #209, branch protection has been set up per repo. This should be enhanced and simplified, i.e. enabled for all branches org-wide:

branch-protection:
  protect: true
  required_status_checks:
    contexts:
    - dco
    - continuous-integration/travis-ci/pr
    - coverage/coveralls

BUT when testing locally with phaino there were a couple of problems:

  • the requests timed out (possibly due to rate limits or the configured http://ghproxy, which obviously isn't reachable locally)
  • there were 404s (possibly related to missing access rights)
  • some requests seemed unable to find the current (non-existing) branch protection rules and errored with 404; it might be worth opening an issue against test-infra

prow: label-sync cron job is failing

Since around 2020-06-09 the label-sync prow job has been failing.
The error log looks like this:

time="2020-06-16T03:16:15Z" level=fatal msg="failed to update nmstate" error="failed to list labels: [status code 404 not one of [201], body: {\"message\":\"Not Found\",\"documentation_url\":\"https://developer.github.com/v3/issues/labels/#create-a-label\ "} status code 404 not one of [201], body:
...

The suspicion is that the last config update somehow didn't take effect, which may be supported by this log message from the last successful job run:

time="2020-06-07T23:17:19Z" level=warning msg="Repo isn't inside orgs" org=nmstate orgs=kubevirt repo="nmstate/kubernetes-nmstate"

A further suspicion is that this is related to prow's access to the nmstate repositories, which the 404 from label-sync when trying to list the existing labels might indicate.

cnao flakefinder jobs are missing org and repo configs

Jobs for flakefinder report generation are missing configuration for org and repo.

Example for missing configuration:

Example for correct configuration:

Relates to: cluster-network-addons-operator#447

/triage build-officer

flakefinder: badge system

Usually github projects have some badges showing the state of the project (unit test results, static analysis, integration CI, etc.), acting like a traffic light for the project's health.

It would be nice to expose the "severity" from the flakefinder results as badges so we can link to them from the github projects, maybe one badge per time range: 24h, 168h, 672h.
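One low-effort option, assuming the flakefinder job (or a small follow-up job) publishes a per-time-range badge JSON to a public GCS path (the path and numbers below are made up), would be the shields.io endpoint badge. The published JSON would look like

  {"schemaVersion": 1, "label": "flakefinder 24h", "message": "3 flaky tests", "color": "yellow"}

and a README could then embed it as

  ![flakefinder 24h](https://img.shields.io/endpoint?url=https://storage.googleapis.com/kubevirt-prow/flakefinder/badge-024h.json)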

New periodics for label-sync and branch-protector have config issues

Pods are failing:

1dfb96da-b623-11ea-9363-0a580a820cc5   0/2     Init:0/2      0          33m     periodic-project-infra-branch-protector                       
1e0245fe-b623-11ea-9363-0a580a820cc5   0/2     Init:0/2      0          33m     periodic-project-infra-label-sync-kubevirt                    
1e0646fe-b623-11ea-9363-0a580a820cc5   0/2     Init:0/2      0          33m     periodic-project-infra-label-sync-nmstate

periodic-project-infra-branch-protector pod events:

Events:
  Type     Reason                 Age                   From                                Message
  ----     ------                 ----                  ----                                -------
  Normal   SuccessfulMountVolume  37m                   kubelet, ovirt-srv20.phx.ovirt.org  MountVolume.SetUp succeeded for volume "tools"
  Normal   SuccessfulMountVolume  37m                   kubelet, ovirt-srv20.phx.ovirt.org  MountVolume.SetUp succeeded for volume "logs"
  Normal   SuccessfulMountVolume  37m                   kubelet, ovirt-srv20.phx.ovirt.org  MountVolume.SetUp succeeded for volume "oauth"
  Normal   Scheduled              37m                   default-scheduler                   Successfully assigned 1dfb96da-b623-11ea-9363-0a580a820cc5 to ovirt-srv20.phx.ovirt.org
  Normal   SuccessfulMountVolume  37m                   kubelet, ovirt-srv20.phx.ovirt.org  MountVolume.SetUp succeeded for volume "gcs-credentials"
  Warning  FailedMount            10m (x21 over 37m)    kubelet, ovirt-srv20.phx.ovirt.org  MountVolume.SetUp failed for volume "config" : configmaps "config" not found
  Warning  FailedMount            6m27s (x23 over 37m)  kubelet, ovirt-srv20.phx.ovirt.org  MountVolume.SetUp failed for volume "job-config" : configmaps "job-config" not found
  Warning  FailedMount            62s (x16 over 35m)    kubelet, ovirt-srv20.phx.ovirt.org  Unable to mount volumes for pod "1dfb96da-b623-11ea-9363-0a580a820cc5_kubevirt-prow-jobs(26b74300-b623-11ea-a0c8-001a4a5b7e12)": timeout expired waiting for volumes to attach/mount for pod "kubevirt-prow-jobs"/"1dfb96da-b623-11ea-9363-0a580a820cc5". list of unattached/unmounted volumes=[config job-config]

periodic-project-infra-label-sync-kubevirt pod events:

Events:
  Type     Reason                 Age                   From                              Message
  ----     ------                 ----                  ----                              -------
  Normal   Scheduled              38m                   default-scheduler                 Successfully assigned 1e0245fe-b623-11ea-9363-0a580a820cc5 to shift-n10.phx.ovirt.org
  Normal   SuccessfulMountVolume  37m                   kubelet, shift-n10.phx.ovirt.org  MountVolume.SetUp succeeded for volume "logs"
  Normal   SuccessfulMountVolume  37m                   kubelet, shift-n10.phx.ovirt.org  MountVolume.SetUp succeeded for volume "tools"
  Normal   SuccessfulMountVolume  37m                   kubelet, shift-n10.phx.ovirt.org  MountVolume.SetUp succeeded for volume "oauth"
  Normal   SuccessfulMountVolume  37m                   kubelet, shift-n10.phx.ovirt.org  MountVolume.SetUp succeeded for volume "gcs-credentials"
  Warning  FailedMount            7m22s (x23 over 37m)  kubelet, shift-n10.phx.ovirt.org  MountVolume.SetUp failed for volume "config" : configmaps "label-config" not found
  Warning  FailedMount            111s (x16 over 35m)   kubelet, shift-n10.phx.ovirt.org  Unable to mount volumes for pod "1e0245fe-b623-11ea-9363-0a580a820cc5_kubevirt-prow-jobs(26b66bb2-b623-11ea-a0c8-001a4a5b7e12)": timeout expired waiting for volumes to attach/mount for pod "kubevirt-prow-jobs"/"1e0245fe-b623-11ea-9363-0a580a820cc5". list of unattached/unmounted volumes=[config]

periodic-project-infra-label-sync-nmstate pod events:

Events:
  Type     Reason                 Age                   From                              Message
  ----     ------                 ----                  ----                              -------
  Normal   SuccessfulMountVolume  38m                   kubelet, shift-n11.phx.ovirt.org  MountVolume.SetUp succeeded for volume "logs"
  Normal   SuccessfulMountVolume  38m                   kubelet, shift-n11.phx.ovirt.org  MountVolume.SetUp succeeded for volume "tools"
  Normal   Scheduled              38m                   default-scheduler                 Successfully assigned 1e0646fe-b623-11ea-9363-0a580a820cc5 to shift-n11.phx.ovirt.org
  Normal   SuccessfulMountVolume  38m                   kubelet, shift-n11.phx.ovirt.org  MountVolume.SetUp succeeded for volume "oauth"
  Normal   SuccessfulMountVolume  38m                   kubelet, shift-n11.phx.ovirt.org  MountVolume.SetUp succeeded for volume "gcs-credentials"
  Warning  FailedMount            8m18s (x23 over 38m)  kubelet, shift-n11.phx.ovirt.org  MountVolume.SetUp failed for volume "config" : configmaps "label-config" not found
  Warning  FailedMount            2m49s (x16 over 36m)  kubelet, shift-n11.phx.ovirt.org  Unable to mount volumes for pod "1e0646fe-b623-11ea-9363-0a580a820cc5_kubevirt-prow-jobs(26b712db-b623-11ea-a0c8-001a4a5b7e12)": timeout expired waiting for volumes to attach/mount for pod "kubevirt-prow-jobs"/"1e0646fe-b623-11ea-9363-0a580a820cc5". list of unattached/unmounted volumes=[config]
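A quick way to confirm and unblock this, assuming the configmaps just need to be created in the job namespace from the checked-in config files (the configmap keys and file paths below are illustrative):

  # check what is actually present in the job namespace
  oc -n kubevirt-prow-jobs get configmaps

  # create the missing configmaps there, e.g.
  oc -n kubevirt-prow-jobs create configmap label-config \
    --from-file=labels.yaml=github/ci/prow/files/labels.yaml
  oc -n kubevirt-prow-jobs create configmap config \
    --from-file=config.yaml=github/ci/prow/files/config.yaml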

/triage build-officer
/kind bug

Clean environment before building in jenkins

I see reports like:

[xUnit] [ERROR] - Test reports were found but not all of them are new. Did all the tests run?
  * /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/junit.xml is 20 hr old

in our jobs.

Everything we cache is cached in docker. We can simply clean the whole workspace before we start a build.
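A minimal sketch of what the first build step could do, assuming everything worth keeping really lives in docker and the workspace is a git checkout (otherwise a plain rm of the workspace contents would do):

  # start every build from a clean workspace
  cd "$WORKSPACE" && git clean -xdff
  # or, plugin-based: run the Jenkins Workspace Cleanup plugin (cleanWs) before/after the build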

Clone CI jobs on release

/kind enhancement

Jobs on master should be copied and tied to the release branch when we do a release. Otherwise we will, for instance, try to run kubevirtci clusters which are not understood on release branches.

check-tests-for-flakes ends with `EXIT_VALUE=123`

In the latest runs of that job it looks like nothing is actually run and an odd value is returned as the exit code.

https://storage.googleapis.com/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/3585/pull-kubevirt-check-tests-for-flakes/1276496086088814592/build-log.txt

Cleaning up binfmt_misc ...
+ cleanup_binfmt_misc
+ '[' '!' -f /proc/sys/fs/binfmt_misc/status ']'
+ mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
+ echo -1
+ ls -al /proc/sys/fs/binfmt_misc
total 0
drwxr-xr-x. 2 root root 0 Jun 26 04:37 .
dr-xr-xr-x. 1 root root 0 Jun 26 12:43 ..
--w-------. 1 root root 0 Jun 26 04:37 register
-rw-r--r--. 1 root root 0 Jun 26 12:43 status
================================================================================
Done setting up docker in docker.
+ /bin/sh -c 'TARGET_COMMIT=$PULL_BASE_SHA automation/repeated_test.sh'
Test lanes: k8s-1.18 k8s-1.17 k8s-1.16
Test files touched: imageupload_test
Number of per lane runs: 3
+ EXIT_VALUE=123
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
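One guess based purely on the exit code: 123 is what xargs (and GNU parallel) return when at least one of the invoked commands fails, so the real error may be getting swallowed somewhere inside automation/repeated_test.sh. Quick illustration:

  $ echo x | xargs -I{} false; echo $?
  123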

Report Jenkins test results in a public place

Since our servers are hidden behind a firewall, external contributors can't access the build logs. Therefore, upload the build logs to a public place and change the status URL to point to the results.
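A rough sketch of the upload step, assuming the Jenkins workers get a GCS bucket and service account to use (bucket name and artifact path are placeholders):

  gsutil cp -r "$WORKSPACE/exported-artifacts" "gs://kubevirt-jenkins-logs/$JOB_NAME/$BUILD_NUMBER/"
  # then point the github commit status at the public URL, e.g.
  # https://storage.googleapis.com/kubevirt-jenkins-logs/$JOB_NAME/$BUILD_NUMBER/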

periodic-kubevirt-bump-vendor-patch fails to build

Is this a BUG REPORT or FEATURE REQUEST?:

bug

What happened:
The periodic job failed during the build with:
error loading module requirements
What you expected to happen:
The build to finish and the job to run.
How to reproduce it (as minimally and precisely as possible):
This is one of the periodic jobs that run on prow.

Anything else we need to know?:
https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/logs/periodic-kubevirt-bump-vendor-patch/1288805482441478144
Environment:
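"error loading module requirements" is the error Go modules print when go.mod / go.sum can't be resolved (network, proxy or a broken go.sum). Assuming the job essentially runs the vendoring against kubevirt/kubevirt, it should be reproducible locally with something like this (the exact script the periodic invokes may differ):

  git clone https://github.com/kubevirt/kubevirt.git && cd kubevirt
  go mod download   # or whichever vendoring step the job runs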

Sync jobs OWNERS files with OWNERS from source repos

Regarding the automation of maintaining reviewers and approvers for jobs, a cron job that syncs the OWNERS file from the repository root to the job directory might be helpful.

Instead of writing something from scratch we should investigate whether there is already a mechanism that takes care of this which we could borrow. I remember they do something similar for github.com/openshift/release.

I.e. daily sync from
https://github.com/nmstate/kubernetes-nmstate/blob/master/OWNERS
to
https://github.com/kubevirt/project-infra/blob/master/github/ci/prow/files/jobs/nmstate/OWNERS
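If nothing reusable turns up, a small periodic could do the copy itself; a sketch (only the two paths above are real, everything else is illustrative):

  # fetch the upstream OWNERS file and open a PR if it changed
  curl -sSfL https://raw.githubusercontent.com/nmstate/kubernetes-nmstate/master/OWNERS \
    > github/ci/prow/files/jobs/nmstate/OWNERS
  git diff --quiet || {
    git checkout -b sync-nmstate-owners
    git commit -am "Sync nmstate OWNERS"
    # push and open the PR via the bot account / GitHub API
  }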

@rmohr WDYT?

Part of this issue might be cleaning up the current root OWNERS file.

Bootstrap and prow utility image build for ARM

It contains the following tasks:

  • prow image builder, which contains the initupload, entrypoint, clonerefs and sidecar binaries. This image is used to build the prow utility images. #587

  • Based on the prow image builder, build the 4 utility images: initupload, entrypoint, clonerefs and sidecar.

  • Build bootstrap images

periodics don't use clonerefs init container

According to the test-infra docs for pod utilities, adding decorate: true should put the job into a clone of the target repository it is configured for. This works for presubmit and postsubmit jobs, but not for periodics: what I observed was that for periodics the clonerefs init container was missing from the pod.

Fixing this would reduce the overhead of working with periodics that update repositories, i.e. the manual cloning of repositories and the like could be removed.
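For what it's worth, upstream Prow only adds clonerefs to decorated periodics when extra_refs is set, since a periodic has no implicit repository. A sketch of what the autoowners periodic could declare (org/repo/branch just mirror the example below; the rest of the job spec stays as it is):

periodics:
- name: periodic-project-infra-autoowners
  interval: 24h
  decorate: true
  extra_refs:
  - org: kubevirt
    repo: project-infra
    base_ref: master
  # spec/containers unchanged; the decorated job then starts inside the project-infra checkout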

Example: periodic autoowners update job

❯ oc describe pod 0b1b77d0-912a-11ea-a911-e86a64a85ba2
Name:         0b1b77d0-912a-11ea-a911-e86a64a85ba2
Namespace:    kubevirt-prow-jobs
Node:         ovirt-srv05.phx.ovirt.org/66.187.230.7
Start Time:   Fri, 08 May 2020 14:47:23 +0200                                                                                                                                                                                                                 
Labels:       created-by-prow=true
              prow.k8s.io/build-id=
              prow.k8s.io/id=0b1b77d0-912a-11ea-a911-e86a64a85ba2
              prow.k8s.io/job=periodic-project-infra-autoowners
              prow.k8s.io/type=periodic
Annotations:  kubernetes.io/limit-ranger:
                LimitRanger plugin set: cpu, memory request for container test; cpu, memory request for container sidecar; cpu, memory request for init co...                                                                                                 
              openshift.io/scc: restricted
              prow.k8s.io/job: periodic-project-infra-autoowners                                                               
Status:       Failed     
IP:           10.130.5.151
Init Containers:                                   
  initupload:                                      
    Container ID:  docker://2c01e1f1f7c77d50812b003d2f930072543ef67be79b7d39c0af2858cf6cee35                                   
    Image:         gcr.io/k8s-prow/initupload:v20200204-7e8cd997a                                                              
    Image ID:      docker-pullable://gcr.io/k8s-prow/initupload@sha256:31d38ccb05c85477321065ffd95486d062049ad37d7645ea8bb0c6dea8a80263                                                                                                                       
    Port:          <none>
    Host Port:     <none>
    Command:                                                                                                                   
      /initupload                       
    State:          Terminated                      
      Reason:       Completed           
      Exit Code:    0                                                                                                          
      Started:      Fri, 08 May 2020 14:47:26 +0200
      Finished:     Fri, 08 May 2020 14:47:27 +0200
    Ready:          True                       
    Restart Count:  0                                   
    Requests:                            
      cpu:     100m                                          
      memory:  1Gi                                                                                                                                                                                                                                            
    Environment:                                     
      INITUPLOAD_OPTIONS:  {"bucket":"kubevirt-prow","path_strategy":"explicit","gcs_credentials_file":"/secrets/gcs/service-account.json","dry_run":false}                                                                                                   
      JOB_SPEC:            {"type":"periodic","job":"periodic-project-infra-autoowners","buildid":"1258740279360360448","prowjobid":"0b1b77d0-912a-11ea-a911-e86a64a85ba2"}                                                                                   
    Mounts:                                                                                                                                                                                                                                                   
      /secrets/gcs from gcs-credentials (rw)                                                                                                                                                                                                                  
  place-entrypoint:                                                                                                            
    Container ID:  docker://bd284800d2287054d45521f74382db70e18b6be76820dc4be4802c8fa589c06d                                   
    Image:         gcr.io/k8s-prow/entrypoint:v20200204-7e8cd997a                                                                                                                                                                                             
    Image ID:      docker-pullable://gcr.io/k8s-prow/entrypoint@sha256:eba25f91a21b311ccccea58e472d684942b50a92f1c9a80de489bbe63b22fa13                                                                                                                       
    Port:          <none>                                                                                                                                                                                                                                     
    Host Port:     <none>                                                                                                      
    Command:                                                                                                                                                                                                                                                  
      /bin/cp                                                                                                                  
    Args:                                                                                                                                                                                                                                                     
      /entrypoint                                                                                                              
      /tools/entrypoint                                                                                                                                                                                                                                       
    State:          Terminated                                                                                                                                                                                                                                
      Reason:       Completed                                                                                                  
      Exit Code:    0                                                                                                          
      Started:      Fri, 08 May 2020 14:47:28 +0200                                                                                                                                                                                                           
      Finished:     Fri, 08 May 2020 14:47:28 +0200                                                                            
    Ready:          True                                                                                                       
    Restart Count:  0          
    Requests:                                                                                                                                                                                                                                                 
      cpu:        100m                                                                                                                                                                                                                                        
      memory:     1Gi                                                                                                                                                                                                                                         
    Environment:  <none>                                                                                                                                                                                                                                      
    Mounts:                                                                                                                                                                                                                                                   
      /tools from tools (rw)                                                                                                                                                                                                                                  
Containers:                                                                                                                                                                                                                                                   
  test:                                                                                                                                                                                                                                                       
    Container ID:  docker://7ed23e3bd567451e23ec99fc456cee5f46d67a52ea77757d5b26a42e14ad7654                                                                                                                                                                  
    Image:         docker.io/kubevirtci/autoowners@sha256:025f8ba96ffdc6d3adf17a0058898e17a8fe814314ec3c4bd2af9812aeeda7b7                                                                                                                                    
    Image ID:      docker-pullable://docker.io/kubevirtci/autoowners@sha256:025f8ba96ffdc6d3adf17a0058898e17a8fe814314ec3c4bd2af9812aeeda7b7                                                                                                                  
    Port:          <none>                                                                                                                                                                                                                                     
    Host Port:     <none>                                                                                                                                                                                                                                     
    Command:                                                                                                                                                                                                                                                  
      /tools/entrypoint                                                                                                                                                                                                                                       
    State:          Terminated                                                                                                                                                                                                                                
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 08 May 2020 14:47:29 +0200
      Finished:     Fri, 08 May 2020 14:49:48 +0200
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  1Gi
    Environment:
      GIT_COMMITTER_NAME:   kubevirt-bot
      GIT_COMMITTER_EMAIL:  [email protected]
      GIT_AUTHOR_NAME:      kubevirt-bot
      GIT_AUTHOR_EMAIL:     [email protected]
      ARTIFACTS:            /logs/artifacts
      BUILD_ID:             1258740279360360448
      BUILD_NUMBER:         1258740279360360448
      CI:                   true
      GOPATH:               /home/prow/go
      JOB_NAME:             periodic-project-infra-autoowners
      JOB_SPEC:             {"type":"periodic","job":"periodic-project-infra-autoowners","buildid":"1258740279360360448","prowjobid":"0b1b77d0-912a-11ea-a911-e86a64a85ba2"}
      JOB_TYPE:             periodic
      PROW_JOB_ID:          0b1b77d0-912a-11ea-a911-e86a64a85ba2
      ENTRYPOINT_OPTIONS:   {"timeout":7200000000000,"grace_period":15000000000,"artifact_dir":"/logs/artifacts","args":["/bin/sh","-c","mkdir -p /tmp \u0026\u0026 cd /tmp \u0026\u0026 echo 'cat /etc/github/oauth' \u003e /tmp/git-askpass-helper.sh \u0026
\u0026 export GIT_ASKPASS=/tmp/git-askpass-helper.sh \u0026\u0026 git clone https://github.com/kubevirt/project-infra.git \u0026\u0026 cd project-infra \u0026\u0026 autoowners --dry-run=true --github-login=kubevirt-bot --org=kubevirt --repo=project-infra
 --assign=dhiller --target-dir=. --target-subdir=github/ci/prow/files --config-subdir=jobs --github-token-path=/etc/github/oauth\n"],"process_log":"/logs/process-log.txt","marker_file":"/logs/marker-file.txt","metadata_file":"/logs/artifacts/metadata.jso
n"}
    Mounts:
      /etc/github from token (rw)
      /logs from logs (rw)
      /tools from tools (rw)
  sidecar:
    Container ID:  docker://0f3b5cfa57042658a7f74228aa11460ee01e44277005c87b6bfce68c06a01278
    Image:         gcr.io/k8s-prow/sidecar:v20200204-7e8cd997a
    Image ID:      docker-pullable://gcr.io/k8s-prow/sidecar@sha256:75aedfb96e4f935ee5235c5f8674a8625205bd79c08ab8f415d0fe9c41f123cd
    Port:          <none>
    Host Port:     <none>
    Command:
      /sidecar
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 08 May 2020 14:47:30 +0200
      Finished:     Fri, 08 May 2020 14:49:49 +0200
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  1Gi
    Environment:
      JOB_SPEC:         {"type":"periodic","job":"periodic-project-infra-autoowners","buildid":"1258740279360360448","prowjobid":"0b1b77d0-912a-11ea-a911-e86a64a85ba2"}
      SIDECAR_OPTIONS:  {"gcs_options":{"items":["/logs/artifacts"],"bucket":"kubevirt-prow","path_strategy":"explicit","gcs_credentials_file":"/secrets/gcs/service-account.json","dry_run":false},"entries":[{"args":["/bin/sh","-c","mkdir -p /tmp \u0026\u
0026 cd /tmp \u0026\u0026 echo 'cat /etc/github/oauth' \u003e /tmp/git-askpass-helper.sh \u0026\u0026 export GIT_ASKPASS=/tmp/git-askpass-helper.sh \u0026\u0026 git clone https://github.com/kubevirt/project-infra.git \u0026\u0026 cd project-infra \u0026\
u0026 autoowners --dry-run=true --github-login=kubevirt-bot --org=kubevirt --repo=project-infra --assign=dhiller --target-dir=. --target-subdir=github/ci/prow/files --config-subdir=jobs --github-token-path=/etc/github/oauth\n"],"process_log":"/logs/proce
ss-log.txt","marker_file":"/logs/marker-file.txt","metadata_file":"/logs/artifacts/metadata.json"}]}
    Mounts:
      /logs from logs (rw)
      /secrets/gcs from gcs-credentials (rw)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  token:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  oauth-token
    Optional:    false
  logs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  tools:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  gcs-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gcs
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/compute=true
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
Events:
  Type    Reason                 Age   From                                Message
  ----    ------                 ----  ----                                -------
  Normal  SuccessfulMountVolume  50m   kubelet, ovirt-srv05.phx.ovirt.org  MountVolume.SetUp succeeded for volume "logs"
  Normal  SuccessfulMountVolume  50m   kubelet, ovirt-srv05.phx.ovirt.org  MountVolume.SetUp succeeded for volume "tools"
  Normal  SuccessfulMountVolume  50m   kubelet, ovirt-srv05.phx.ovirt.org  MountVolume.SetUp succeeded for volume "gcs-credentials"
  Normal  SuccessfulMountVolume  50m   kubelet, ovirt-srv05.phx.ovirt.org  MountVolume.SetUp succeeded for volume "token"
  Normal  Scheduled              50m   default-scheduler                   Successfully assigned 0b1b77d0-912a-11ea-a911-e86a64a85ba2 to ovirt-srv05.phx.ovirt.org
  Normal  Created                50m   kubelet, ovirt-srv05.phx.ovirt.org  Created container
  Normal  Pulled                 50m   kubelet, ovirt-srv05.phx.ovirt.org  Container image "gcr.io/k8s-prow/initupload:v20200204-7e8cd997a" already present on machine
  Normal  Started                50m   kubelet, ovirt-srv05.phx.ovirt.org  Started container
  Normal  Started                50m   kubelet, ovirt-srv05.phx.ovirt.org  Started container
  Normal  Created                50m   kubelet, ovirt-srv05.phx.ovirt.org  Created container
  Normal  Pulled                 50m   kubelet, ovirt-srv05.phx.ovirt.org  Container image "gcr.io/k8s-prow/entrypoint:v20200204-7e8cd997a" already present on machine
  Normal  Pulled                 50m   kubelet, ovirt-srv05.phx.ovirt.org  Container image "docker.io/kubevirtci/autoowners@sha256:025f8ba96ffdc6d3adf17a0058898e17a8fe814314ec3c4bd2af9812aeeda7b7" already present on machine
  Normal  Created                50m   kubelet, ovirt-srv05.phx.ovirt.org  Created container
  Normal  Started                50m   kubelet, ovirt-srv05.phx.ovirt.org  Started container
  Normal  Pulled                 50m   kubelet, ovirt-srv05.phx.ovirt.org  Container image "gcr.io/k8s-prow/sidecar:v20200204-7e8cd997a" already present on machine
  Normal  Created                50m   kubelet, ovirt-srv05.phx.ovirt.org  Created container
  Normal  Started                50m   kubelet, ovirt-srv05.phx.ovirt.org  Started container

[tracker] Automation tasks to tackle

A non-exhaustive list of automation flows which we need to add.

kubevirt code updates

  • automatic periodic vendor update on master
  • automatic k8s-dependency updates on new k8s releases
  • automatic update of kubevirtci in kubevirt/kubevirt if a new PR is merged in kubevirtci
  • finish the dependency mirror prow plugin, so that maintainers can write /upload (or alternatively run a postsubmit job which does the update and creates a PR).

release engineering

  • release-note collecting from the PRs
  • add them to the pre-release on the github release page
  • do a release-note PR once the release is published with the cleaned release notes
  • job-forking: Fork the CI jobs when we create a new release branch
  • pushing the latest release version to a well-known location on gcs, so that the docs can reference the latest release without a docs update.

ci clusters

  • building kubevirtci clusters in postsubmit jobs when changes are pushed, and pushing the new shasums to gcs
  • automatically copying the provision scripts for new k8s clusters and adding build jobs for them when a new k8s release happens (this also requires untangling the merged provision script, so that we can simply copy/paste)
  • running k8s conformance tests on kubevirtci clusters
  • automatically proposing to enable ci lanes once they are green for a defined sequence of runs

prow improvements

  • switch to crier for github status reporting
  • enable in deck the feature to re-trigger jobs from the UI
  • job dashboards and alertrules with the new prometheus metrics to identify errors quicker

There already exists a set of tools which can help us achieve this:

Flakefinder should differentiate between branches

flakefinder needs to filter the test results it reports on by the target branch that the PR is merged into. This avoids old test failures from other branches showing up in a branch's report.

So the reports are done per branch, per timespan. Therefore it would also make sense to separate them by directory.

As a first step, limit this to master. In the future, we should report for release branches too.
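Just as an illustration of the per-branch separation (this layout is only a suggestion, not what the code currently does):

  reports/flakefinder/kubevirt/kubevirt/master/flakefinder-2020-07-01-024h.html
  reports/flakefinder/kubevirt/kubevirt/release-0.30/flakefinder-2020-07-01-024h.html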
