Comments (9)

perotinus commented on June 24, 2024

Some specific tasks:

  • Run the bazel tests on a per-PR basis (see the sketch after this list)
  • Run standard validation checks on a per-PR basis (ideally from https://github.com/kubernetes/repo-infra/tree/master/verify)
  • Add a CI job that runs the bazel tests. (At this point, every PR passes these checks; CI only becomes useful IMO when we have tests that aren't run on every PR.)
  • Rearrange the sig-multicluster dashboard on testgrid into federation and cluster-registry sections, and expose the cluster-registry CI jobs there. (Not relevant if there's no CI.)
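
For reference, the per-PR and CI bazel jobs above boil down to running the full test suite; a minimal sketch (the flag shown is illustrative, not necessarily what the actual job passes):

```sh
# Run every bazel test target in the repo; --test_output=errors prints logs
# only for failing tests. Illustrative invocation, not the actual job config.
bazel test //... --test_output=errors
```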

pmorie commented on June 24, 2024

@perotinus I believe the "CI job that runs the bazel tests" item in the comment above is complete - is that accurate?

perotinus commented on June 24, 2024

@pmorie I was thinking of CI as a continuously-running job rather than a per-PR job. At this point, I don't think there's a practical need for CI: the tests have no external dependencies, and the tests a CI job would run would be exactly the same as those run by the per-PR job. It may still be worth setting up, though, so that in the future there is a place for tests that should not block PRs but are worth running consistently.

font commented on June 24, 2024

@perotinus @pmorie @madhusudancs Should we add one job that runs all the verify steps, or one job for each verify step? We need to consider the pros and cons of each.

One job per verify step lets testgrid show each step separately and lets one inspect the results of each step individually. This is useful when there are multiple failures.

One job for all of them may run faster due to the reduced time spent scheduling the job. But we miss out on parallelism of having multiple jobs. We'd also have to have a script that will run each one separately and report the combined results at the end so that it doesn't exit on the first failure. This would help triage and fix multiple failures faster; otherwise, one would have to iterate several times, once for each failed step.
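
A rough sketch of such a wrapper, assuming the verify scripts live under a verify/ directory (script names and paths are illustrative, not the actual cluster-registry layout):

```sh
#!/usr/bin/env bash
# Rough sketch: run each verify step, keep going past failures, and report a
# combined result at the end. Script names and the verify/ path are
# illustrative, not the actual cluster-registry layout.
set -o nounset -o pipefail

failed=()
for check in verify-gofmt.sh verify-govet.sh verify-bazel.sh; do
  echo "=== ${check} ==="
  if ! "./verify/${check}"; then
    failed+=("${check}")
  fi
done

if (( ${#failed[@]} > 0 )); then
  echo "Failed verify steps:"
  printf '  %s\n' "${failed[@]}"
  exit 1
fi
echo "All verify steps passed."
```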

The test-infra repo appears to have one job for each verify step while other repos like cri-containerd seem to have one job that runs all their verify steps. Is test-infra the model to follow?

madhusudancs commented on June 24, 2024

> But we miss out on parallelism of having multiple jobs. We'd also have to have a script that will run each one separately and report the combined results at the end so that it doesn't exit on the first failure.

We can configure these the way we want in the single-job case too, right? Run the verify steps in parallel within a single job and aggregate the results at the end, as you propose?
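
Something along these lines, for example (a rough sketch; script names, paths, and log locations are illustrative):

```sh
#!/usr/bin/env bash
# Rough sketch: run the verify steps in parallel inside one job and aggregate
# the results at the end. Script names, paths, and log locations are
# illustrative, not the actual cluster-registry layout.
set -o nounset -o pipefail

declare -A pids=()
for check in verify-bazel.sh verify-gofmt.sh verify-govet.sh; do
  "./verify/${check}" >"/tmp/${check}.log" 2>&1 &
  pids["${check}"]=$!
done

failures=0
for check in "${!pids[@]}"; do
  if ! wait "${pids[${check}]}"; then
    echo "FAILED: ${check} (see /tmp/${check}.log)"
    failures=$((failures + 1))
  fi
done

if (( failures > 0 )); then
  echo "${failures} verify step(s) failed."
  exit 1
fi
echo "All verify steps passed."
```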

I think having too many parallel jobs is brittle, but if we only have a handful of verify jobs that should be OK. test-infra has verify-bazel, verify-gofmt, and verify-govet. What else are we going to have for verification other than tests? Three doesn't sound that bad.

font commented on June 24, 2024

I'm thinking we can probably lump verify-gofmt, verify-gometalinter, and verify-govet from https://github.com/kubernetes/repo-infra/tree/master/verify (which @perotinus links above) into one job. Then verify-bazel, verify-codegen, and verify-openapi-spec could perhaps go into a second job. Or we could just have one job for each script. I don't think I have a particular preference yet.
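
If we go with the two-group split, each prow job's entrypoint could be a small wrapper like the following (purely hypothetical; script names, paths, and group names are illustrative):

```sh
#!/usr/bin/env bash
# Hypothetical wrapper: one prow job runs the "go code" checks, another runs
# the "generated code" checks. Script names, paths, and group names are
# illustrative.
set -o errexit -o nounset -o pipefail

case "${1:-}" in
  go-checks)
    checks=(verify-gofmt.sh verify-gometalinter.sh verify-govet.sh) ;;
  generated)
    checks=(verify-bazel.sh verify-codegen.sh verify-openapi-spec.sh) ;;
  *)
    echo "usage: $0 {go-checks|generated}" >&2
    exit 2 ;;
esac

for check in "${checks[@]}"; do
  "./verify/${check}"
done
```

Each job would then invoke the wrapper with its group name; the fail-fast loop at the end could be swapped for the aggregate-and-report loop sketched earlier if we want all failures surfaced in a single run.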

perotinus commented on June 24, 2024

The split of "go code checks" vs "ensuring generated code is correct" seems reasonable to me. We might want to ask the test-infra people if they have a preference for several small jobs or one large job.

font commented on June 24, 2024

I had asked them a couple of days ago, and they don't have a preference yet. They seemed to be okay with one job per verification step for parallelism, but it was really up to us. The only issue was that if the prow job required a cache, SSD, port, etc., it would take longer to get scheduled, due to a known hack they are trying to fix.

perotinus commented on June 24, 2024

Closing this: we're running linters on each PR, and there's no benefit to CI if every PR is passing all available tests. We can look into CI when we have tests that don't block PR submission.
