thought-machine / please

High-performance extensible build system for reproducible multi-language builds.

Home Page: https://please.build

License: Apache License 2.0

Topics: build, build-tool, build-system, multi-language, monorepo, bazel, pants, buck, please-build

Introduction


Please is a cross-language build system with an emphasis on high performance, extensibility and reproducibility. It supports a number of popular languages and can automate nearly any aspect of your build process.

See please.build for more information.

Currently Linux (tested on Ubuntu), macOS and FreeBSD are actively supported.

If you're a fan of Please, don't forget to add yourself to the adopters file.

Getting Started

The easiest way to install it on your own machine is to run:

curl -s https://get.please.build | bash

Or, if you prefer, grab one of the tarballs off our releases page and extract it yourself; it typically lives in ~/.please.

You can also install using Homebrew:

brew tap thought-machine/please
brew install please

Then you simply run plz init at the root of your project to set up a default config and you're ready to go. The best way to get to grips with Please is through the codelabs! There's also the getting started guide that explains the core concepts.

How is it so fast?

Please has a robust and correct caching model that enables us to aggressively cache artifacts. Caching is based on the hashes of inputs (both files and environment variables) to each rule, rather than on last-modified timestamps. Builds are hermetic, so they don't have access to anything they haven't explicitly declared as inputs. This means that when something changes, we know exactly what might have been affected, so only the minimal set of targets is rebuilt and retested.
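
To make this concrete, here's a minimal sketch of a genrule (the names and command are illustrative): everything the rule can see must be declared, and the hashes of those declared inputs form the cache key.

genrule(
    name = "sorted_words",
    srcs = ["words.txt"],   # declared input; its content hash feeds the cache key
    outs = ["sorted.txt"],  # the only file the rule may produce
    cmd = "sort $SRCS > $OUTS",
)

Editing words.txt (or cmd) changes the key, so exactly this target and its reverse dependencies get rebuilt and retested; nothing else does.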

Because each task is hermetic, they can be run in parallel without any chance of interfering with each other. Combine these two concepts with shared remote caches and you get a blazingly fast build system for any language or technology.

Please is also written in Go, and every effort has been made to make it as fast as possible. There's no startup time spent bringing up VMs, interpreting code, or communicating with remote JVM processes. The code itself takes full advantage of Go's concurrency and asynchronicity. The end result is a snappy command-line tool that gets to work immediately and feels great to use.

Why Please, and not Maven, pip, or go build?

A build system is more than just a mechanism for invoking the compiler. If you're working on just one language, don't have any code generation, and don't need to publish any artifacts anywhere, you might not need Please. Chances are this is not the case.

Building software often involves more than just compiling code. There's deployment config to template, code to generate, and quite often, there's more than one language involved. Please provides a powerful, comprehensive, and understandable framework that you can use to craft a truly holistic build process.

Please does this through a consistent and seamless command line interface; there's no need to learn new build systems and technologies for different languages. Build any target with plz build, test any target with plz test, no matter what's going on under the hood.

The Docker & Kubernetes codelab covers building a Kubernetes based application with Please, including reliably deploying code to a local cluster, and pushing it to a remote registry.

The genrule() codelab covers extending Please with custom build definitions to truly automate any part of your deployment process.
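
To give a flavour of what such a definition looks like, here's a hedged sketch of a markdown_page rule like the one subincluded in the example in the next section. The pandoc tool target is an assumption for illustration, and rule functions conventionally return the label of the target they define.

# In //my_custom_defs:markdown_page (a hypothetical .build_defs file exported from that package):
def markdown_page(name:str, srcs:list, visibility:list=None):
    # Wraps genrule to render one Markdown page to HTML.
    return genrule(
        name = name,
        srcs = srcs,
        outs = [name + ".html"],
        cmd = "$TOOLS $SRCS > $OUTS",
        tools = ["//third_party:pandoc"],  # hypothetical tool target
        visibility = visibility,
    )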

Why Please, and not make?

Make is a great tool for running tasks. It's easy enough to understand because it leaves you very close to the shell. The problem is that it has limited capability for building out complexity. Higher-level tools like CMake attempt to generate Makefiles (and tools like Ninja replace make entirely), but they fall short of what Please sets out to achieve.

The Please build language is a full programming language. There is a high-level set of build rules that make up a declarative DSL for defining build targets, but you can drop into an imperative language resembling Python when necessary:

subinclude("//my_custom_defs:markdown_page")

pages = []
for page in glob(include = ["*.md"]):
    pages += markdown_page(
        name = page.removesuffix(".md"),
        srcs = [page],
        visibility = ["//website/..."],
    )

go_binary(
    name = "webserver",
    srcs = ["main.go"],
    deps = ["//third_party/go:protobuf"],
    data = pages,
    visibility = ["//services/foo/..."],
)

This operates at a distinctly higher level than the equivalent make:

protobuf:
    go install google.golang.org/protobuf

webserver: protobuf
    go tool compile --pack foo.go -o foo.a

pages: ???

Additionally, make builds are not hermetic. The make example above installs protobuf into the host machine's Go path, whereas Please builds only have access to files and environment variables they have explicitly been given access to. You can play around in the environment targets are built in with plz build //some/target --shell. On Linux systems, Please can take this a step further, using Linux namespaces to improve sandboxing, especially of the network.

Please also has built-in task parallelism, so it can take full advantage of multi-core machines, which were not a consideration when make was designed over 40 years ago.

Finally, Please has a robust caching mechanism based on hashing the inputs of each rule. Make's cache invalidation is based on last-modified timestamps, which can change unexpectedly, both forwards and backwards in time. Combine hash-based invalidation with hermetic builds, and Please's caching is never incorrect.

Why Please, not Bazel, Buck or Pants?

These build systems are all very similar, so choosing between them can be hard. Please originally replaced Buck, implementing the subset of its features that we used. We found that Buck (and the competition) worked great when using the supported languages, but fell flat when breaking new ground.

The biggest difference between Please and the competition is that Please is designed from the ground up to be extensible. The built-in languages are all defined in the same build language as you use to define your targets, demonstrating that there's nothing special about them. This puts the build definitions where they should be: in your domain. You have all the same tools Please has to expand your build definitions to satisfy your needs.

Please does all this with a focus on simplicity and transparency. There is a limited number of concepts needed to get started, and once they're grokked, the possibilities are endless. Please relies on these concepts rather than requiring lots of magic and incantations. Configuration is simple and largely optional, so getting going is easy, and there's no single WORKSPACE file that nobody really owns, full of lines of configuration that nobody really understands.

The command line interface is designed with similar considerations in mind. Subcommands can be added to Please through aliases, and they tie into the Please tab-completion. Not only can flags and arguments be completed; aliases can also leverage the build graph to complete build labels, enabling you to truly craft your developer experience the way you want it.
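
For example, here's a hedged sketch of an alias in .plzconfig, based on the documented alias mechanism; the alias name and target are illustrative:

[alias "deploy"]
desc = Deploy a target to the dev cluster
cmd = run //tools:deploy --
positionallabels = true ; complete positional arguments as build labels

After this, plz deploy //services/foo:server behaves like any built-in subcommand, with tab-completion of the label.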

Building Please

If you're looking to get involved, check out the contributor guidance to help you get started.

To build Please yourself, run ./bootstrap.sh in the repo root. This will bootstrap a minimal version of Please using Go and then rebuild it using itself.

You'll need to have Go 1.21+ installed to build Please, although once built it can target any recent version (we think back to about 1.8ish).

Optional dependencies for various tests include Python, Java, clang, gold and Docker; none of these are required to build the components themselves, so their tests will be excluded if they aren't available.

If you'd rather not worry about installing the dependencies, we provide a prebuilt Docker image based on Ubuntu which is capable of building the whole thing for you: docker run -it thoughtmachine/please_ubuntu

You can install a locally built copy of Please using plz install, or if it's the first time, by running ./install.sh after it's built. This will overwrite whatever you currently have in ~/.please with your local version, although you can get back to a released version again by running plz update --force.

To automatically fix linting and code generation issues, run plz autofix.

Documentation

The full documentation is available at please.build.

Status

Please is released & we consider it stable; we follow semver for releases, so major versions indicate potentially breaking changes to the BUILD language, command line or other behaviour. We try to minimise this where possible.

We're very happy to accept pull requests, feature requests, and bugs if it's not working for you. We don't always have time for everything but Please is under active development.


Issues

Remote build defs

Thinking about #13, it's probably not ideal to integrate esoteric languages directly into plz since they become a compile-time dependency. A better system would be if one could subinclude('https://github.com/thought-machine/please-plugins/grm/grm.build_defs') and get all the bits you needed (with optional revision / hash verification, of course).

I actually think most of the bits we need for this are there already (it's really just a combination of remote_file and subinclude), although it would be nice to be able to create 'virtual targets' to fetch any necessary tools, so they didn't get duplicated in every package. That would need more significant changes, so we might encourage people to do it manually for now.
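
A hedged sketch of that manual combination (the URL is illustrative and the hash is a placeholder):

remote_file(
    name = 'grm_defs',
    url = 'https://raw.githubusercontent.com/thought-machine/please-plugins/master/grm/grm.build_defs',
    hashes = ['<sha256 of the file>'],  # the optional hash verification mentioned above
)

Any BUILD file can then subinclude('//build_defs:grm_defs') to pull the definitions in.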

System level dependencies part 2

Extension to #8: add the ability to identify system-level tools and add them as dependencies, so changes in them invalidate other outputs. For example protoc: if someone installs a new version, we'd know it was different and regenerate all generated code correctly.

Support building with Go 1.6

The new version of Go has some stricter checks about what can be passed to cgo (see https://github.com/golang/proposal/blob/master/design/12416-cgo-pointers.md). We fail on this because we pass pointers to Go structures through cgo and back again. AFAIK we are not doing anything conceptually wrong because the pointers are opaque to C, but it's against the new rules so we'll have to find some workaround / alternate approach.

Note that targeting go1.6 seems to work just fine, it's only building plz with it that is problematic.

Clear distinction between tool / output configs

At the moment they're the same, i.e. if you do plz build -c cover //mypackage:mytarget, any tools needed for that rule are compiled with -c cover as well. Ideally they would be compiled with -c opt, since we don't need them to be slowed down by coverage instrumentation.

A generalisation of this: if we supported cross-compiling, we'd need to understand that tools must be built for the local architecture.

cc_embed_binary doesn't work on OSX

The rule works fine on systems with non-losing linkers, but unfortunately Apple's no longer supports -b binary, and their replacement is a bit weak. Need to find some workaround.

Panic when a python_library uses a proto_library in resources

From James. It wasn't actually what he intended (he really wanted to use deps) but of course plz shouldn't panic.

Something about dependencies not being resolved; no doubt require/provide related. Not sure if it will be easy to resolve though.

srcs aren't updated by require/provide

i.e. in the following situation

genrule(
    name = 'a',
    provides = {'go': ':b'},
)

genrule(
    name = 'b',
    outs = ['b.go'],
)

genrule(
    name = 'c',
    srcs = [':a'],
    requires = ['go'],
)

SRCS of c is empty at build time because a has no outs, but it should be b.go.

No doubt it needs to be reassigned somewhere, but it's probably not obvious exactly how to tie it in because this stuff is a bit fiddly.

Rewrite please_maven

I suspect that getting the logic just so for the dependency resolution is going to be a bit of a rabbit hole; currently we know we need to walk back up parent projects to resolve variables, but there might be any number of other things lurking.

It may be an idea to rewrite it in Java so we can reuse existing logic for this stuff. The Braintree guys have written https://github.com/pgr0ss/bazel-deps which might be a good basis for it - looks nice and simple.

I've got this pencilled in for an intern project but that will obviously depend on whether we find anyone appropriate and whether they get assigned to something "more important" (if such a thing could ever exist).

Pexes and jars are non-deterministic

Because we use zip and jar respectively which save timestamps.

We should extend the increasingly misnamed jarcat to perform the functions we use zip and jar for (which are very limited, essentially "zip everything in this directory") and override the modification timestamps.

Add a wrapper script, ala Gradle

Would make it easy to set up a repo for other people to use.

We might also consider storing tools etc. in ~/.please by default instead of in /opt, which would also lower friction.
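
A rough sketch of what such a wrapper might look like, reusing the installer from the Getting Started section (entirely hypothetical, including the binary's path):

#!/bin/sh
# pleasew (hypothetical): bootstrap plz on first use, then delegate to it.
PLZ="$HOME/.please/plz"
[ -x "$PLZ" ] || curl -s https://get.please.build | bash
exec "$PLZ" "$@"

Checking something like this into the repo root would let anyone run ./pleasew build //... without installing anything first.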

Work factor limit for cache

From our high-level design meeting at the Keats the other night: experiment with adding a "work factor" flag so we don't upload cache artifacts that turn out to be easy to build locally.

Better incrementality for go_test

go_test rebuilds all its transitive inputs at build time which can be slow. Would be nice if it reused previous compiles better.

The biggest problem is that we want coverage and don't know whether the test will be invoked with 'plz cover' or not, so we need build config support for this to work.

maven_jars can't handle deps with no sources

Can be worked around by excluding them and adding a maven_jar and a manual dep, but that is somewhat awkward.

Not sure if / how we can detect the presence of sources when please_maven runs.

Build time access to dep input source hashes by label.

With the relatively recent introduction of the ability to access the hash of a rule's input sources, we now have the ability (or at least I hope we do) to produce an output file whose name is derived from its input sources. This is handy for files that we want to cache forever, or which are versioned by their contents.

To use this, however, we will need a way to access some information about all input deps by label (so that we can reference their outputs). The most obvious example would be a rule which templates a JavaScript file into an index.html, with the JavaScript file being cached forever on some CDN. We could stamp the output JS file with its input hash, 12345.js, and then have an HTML template that includes it like this:

<script src="//app/frontend/js:compiled" />

The build rule could then substitute the hash-derived name for that label (as long as it was a declared dependency).

This is obviously useful in other situations too; perhaps the most pressing is the creation of containers and their use in a deployment config file (such as a Kubernetes config file). A potential build rule might look like this:

kube_config(
  name = 'my_app',
  config = 'config_template.yaml',
  container_repo = 'https://secret-containers.company.com:1234',
  deps = ['//my_app/backend:container_rule']
)

This rule would build the deps (which would create versioned containers), then template their container name / version / repo strings into the config file.

Ensure post-build functions only run once

We have a case where we retrieve a build output, run a post-build function, and then fail to retrieve the final artifacts. Then, after building for real, we re-run the post-build function, which is not idempotent, and it dies.

Ideally we would guarantee it only runs once, but it's not clear how to do that in the face of caching, and I think it's too harsh not to cache any rule with a post-build function.
Or maybe we just say they must be idempotent?

python_library outputs are not symlinks any more

They definitely were back in the pre-1.0 days, because it enables a workflow where you set PYTHONPATH to plz-out/gen, edit files locally, run with python, and it should Just Work. Currently that doesn't work when you edit deps, because you've got stale copies of them in plz-out.
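
For reference, the workflow in question looks roughly like this (paths illustrative):

plz build //myapp:lib          # populates plz-out/gen
export PYTHONPATH=plz-out/gen  # with symlinked outputs, local edits are picked up
python myapp/main.py           # run and iterate without rebuilding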

Can't remember exactly why this changed - I suspect it may have had something to do with JS, I remember lots of problems with symlinks around webpack / npm.

Delete downloaded artifacts of incorrect versions

If we upload an artifact of the wrong version, plz downloads it once, then in future always attempts to run it, but fails because it's the wrong one. It would be better if it checked after download (which would need to be pre-fork) and deleted it.

Possible memory leaks in interpreter

I'm not sure we're correctly freeing memory that is returned to PyPy in various places (e.g. glob() returns a char**). Should investigate and fix.

Targets incorrectly receiving tools as transitive deps

Found with Lorenzo before; it looks like when a target with need_transitive_deps = True depends on a target with some tools, it receives those tools as well as the tree of deps in its working directory. It shouldn't do this; tools shouldn't appear in the working directory that way.

"cannot find gc roots" error

This seems to appear rarely and is apparently PyPy-related. I'm pretty sure it happens when something else has already gone wrong, maybe because we panic upwards through a cgo call, which I'm sure is extremely surprising to PyPy.

Not entirely sure how best to solve this. Ideal would be to never panic through cgo and to report all errors back to be reraised, but it's hard to see how to do that in a generic way.

Java compiler flags

java_library and .plzconfig should allow customising the flags passed to javac, in a similar manner to what we do for gcc / clang.

zip_safe autodetection is broken

It's broken on python_binary at least, because the target with the pre-build function doesn't have any dependencies. I think it's still OK on python_test, which is maybe why I didn't notice sooner.

Verify build rule arguments sooner

It's not too hard to pass the wrong type to an argument (e.g. a string instead of a list of strings) and the resulting error messages can be obtuse. Maybe we should verify the types of all arguments earlier to be more explicit.
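
For instance, the kind of easy mistake described (names illustrative):

go_library(
    name = 'lib',
    srcs = 'lib.go',  # wrong: should be ['lib.go']; ideally rejected up front with a clear message
)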

Integrate Javascript build rules

We have internal rules for building JS. It's not so obvious how best to separate them from our private repo, but it would be super nice to have them included.

Maintain directory hierarchy for containerised data deps

When a containerised test specifies dependent binaries that exist at different levels of the directory hierarchy, they are flattened when copied into the container, i.e.

filegroup(
    name = 'services',
    srcs = [
        '//foo:foo_service',
        '//foo/bar:bar_service',
    ],
    link = False
)

python_test(
    name = 'e2e_test',
    srcs = ['e2e_test.py'],
    data = [
        ':services',
    ],
    container = True,
    labels = ['e2e'],
)

puts foo.pex and bar.pex into the working directory. I would expect foo.pex to be there, but bar.pex should be in a bar/ subdirectory.

Strange plz build issue when dependency is incorrectly referenced with double `//`

You can find the relevant test case in core3/ branch model-python-server-test

We have a BUILD file like this (note the double slash in the time_graph dependency).
(core3/experimental/diana/models/frontend/src/js/components/monitor/BUILD)

js_library(
  name = 'monitor',
  srcs = ['monitor.js'],
  deps = [
    '//experimental/diana/models/frontend/src/js/components/header:header',
    '//experimental/diana/models/frontend/src/js/services:monitor_service',
    '//experimental/diana/models/frontend/src/js/services:benchmark_service',
    '//experimental/diana/models/frontend/src/js/components/graphs:size_graph',
    '//experimental/diana/models/frontend/src/js/components/graphs:bench_graph',
    '//experimental/diana/models/frontend/src/js/components//graphs:time_graph',
    '//third_party/js:react',
  ],
  visibility = ['//experimental/diana/models/...'],
)

Building that project results in the error below. Running it a second time succeeds!

~/dev/core3  model-python-server-test ✔                                                                                                                                   45m  ⍉
▶ plz build //experimental/diana/models/frontend/src/...                             
Building [78/166, 0.2s]:
=> [ 0.0s] //third_party/js:babel-preset-react_npm Finished post-build function for //third_party/js:babel-preset-react_npm
=> [ 0.0s] //third_party/js:babel-plugin-syntax-trailing-function-commas_npm Preparing...
=> [ 0.0s] //third_party/js:webpack-dev-server_npm Preparing...
=> [ 0.0s] //third_party/js:css-loader_npm Running post-build function for //third_party/js:css-loader_npm
=> [ 0.0s] //third_party/js:chokidar_npm Finished post-build function for //third_party/js:chokidar_npm
=> [-0.0s] //third_party/js:babel-relay-plugin_npm Finished post-build function for //third_party/js:babel-relay-plugin_npm
=> [-0.0s] //third_party/js:imports-loader_npm Running post-build function for //third_party/js:imports-loader_npm
=> [-0.0s] //third_party/js:chalk_npm Running post-build function for //third_party/js:chalk_npm
=> [-0.0s] //third_party/js:fs-extra_npm Running post-build function for //third_party/js:fs-extra_npm
=> [-0.0s] //third_party/js:extract-text-webpack-plugin_npm Running post-build function for //third_party/js:extract-text-webpack-plugin_npm
10:24:46.702 CRITICAL: //experimental/diana/models/frontend/src/js/components/graphs:time_graph failed:
Error moving outputs for target //experimental/diana/models/frontend/src/js/components/graphs:time_graph: Rule //experimental/diana/models/frontend/src/js/components/graphs:time_graph failed to create output plz-out/tmp/experimental/diana/models/frontend/src/js/components/graphs/time_graph/timeGraph.js

~/dev/core3  model-python-server-test ✔                                                                                                                                   46m  ⍉
▶ plz build //experimental/diana/models/frontend/src/...
Build finished; total time 13.56s, incrementality 92.6%. Outputs:
  plz-out/gen/experimental/diana/models/frontend/src/css/main.scss
  plz-out/gen/experimental/diana/models/frontend/src/js/compiled.js
  plz-out/gen/experimental/diana/models/frontend/src/js/compiled.js.css
  plz-out/gen/experimental/diana/models/frontend/src/js/compiled.js.css.map
  plz-out/gen/experimental/diana/models/frontend/src/js/compiled.js.map
  plz-out/gen/experimental/diana/models/frontend/src/js/components/graphs/benchGraph.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/graphs/sizeGraph.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/graphs/timeGraph.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/header/header.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/header/headerData.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/model/modelBenchmark.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/model/modelName.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/model/modelSize.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/model/modelTime.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/model/modelUncompressedSize.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/model/modelVersion.js
  plz-out/gen/experimental/diana/models/frontend/src/js/components/monitor/monitor.js
  plz-out/gen/experimental/diana/models/frontend/src/js/services/BenchmarkService.js
  plz-out/gen/experimental/diana/models/frontend/src/js/services/MonitorService.js

I think the problem is that it fails the first time (but manages to place the artifact in the cache) and passes the second time because it goes through a different code path, since the artifact is in the cache.

Dependency cycles will silently be skipped

When building multiple targets at once, like some/target/..., targets with dependency cycles will be silently skipped. Furthermore, tests that depend on those targets will be skipped as well, giving the impression that everything is in correct working order.
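
A minimal illustration (targets hypothetical): building //pkg/... with the following should fail loudly rather than silently skipping both targets.

genrule(
    name = 'a',
    outs = ['a.txt'],
    cmd = 'touch $OUTS',
    deps = [':b'],  # a -> b
)

genrule(
    name = 'b',
    outs = ['b.txt'],
    cmd = 'touch $OUTS',
    deps = [':a'],  # b -> a completes the cycle
)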

go_test rule doesn't seem to get exported dependencies from the target under test

From Riccardo:

I had the following BUILD file:

go_library(
    name = 'postprocessing',
    srcs = ['internal_bank_transfers.go'],
    deps = [
        '//common/protos:banking_proto',
        '//common/protos:category_proto',
        '//ml/api/proto:yunus_proto',
        '//third_party/go:logging',
    ],
    visibility = ['//ml/api/...'],
)

go_test(
    name = 'internal_bank_transfers_test',
    srcs = ['internal_bank_transfers_test.go'],
    deps = [
        ':postprocessing',
        '//common/protos:banking_proto',
        '//common/protos:category_proto',
        '//third_party/go:testify',
    ],
)

Turns out that the go_test was failing with the following error:

14:44:19.878 CRITICAL: //ml/api/yunus/postprocessing:internal_bank_transfers_test failed:
Error building target //ml/api/yunus/postprocessing:internal_bank_transfers_test: exit status 1
src/ml/api/yunus/postprocessing/internal_bank_transfers.go:12:2: cannot find package "ml/api/proto/yunus_proto" in any of:
    /usr/local/go/src/ml/api/proto/yunus_proto (from $GOROOT)
    /home/riccardo/workspace/core3/plz-out/tmp/ml/api/yunus/postprocessing/internal_bank_transfers_test/src/ml/api/proto/yunus_proto (from $GOPATH)
    /home/riccardo/workspace/core3/plz-out/tmp/ml/api/yunus/postprocessing/internal_bank_transfers_test/third_party/go/src/ml/api/proto/yunus_proto

until I added the //ml/api/proto:yunus_proto library to the deps of the test.

Allow different test configuration for test_cmd

We have some situations, like testing in JavaScript, where some kind of debug flag would allow a more interactive debugging session. For example, with karma we can launch the test in a browser and set breakpoints, load sourcemaps, etc., but we only want that for local development, so it shouldn't be the default behaviour.
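
One possible shape for this, purely hypothetical since it's exactly what this issue asks for: let test_cmd vary by build config, much as cmd can. All names here are illustrative.

gentest(
    name = 'karma_test',
    data = ['karma.conf.js'],
    test_cmd = {
        # hypothetical: selected by plz test vs. plz test -c dbg
        'opt': 'karma start karma.conf.js --single-run',
        'dbg': 'karma start karma.conf.js --browsers Chrome',
    },
)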

Source stripping for pexes

It would be nice to be able to build a python_binary which didn't contain any source, just .pyc/.pyo bytecode.

I've got a branch (python-precompile) which implements this in a reasonably nice way; each python_library precompiles its relevant bytecode. I'm a little worried that the files contain timestamps, although I suspect this doesn't make things worse at the moment because the zipfiles have mtimes in them anyway.

So far I've not merged it because it turns out to be a pessimisation for the pexes; they are a little larger (expected) but, weirdly, about 10-15% slower to start up. You'd obviously expect them to be faster since the bytecode doesn't need recompilation, and indeed stripped pexes are 30% or so faster. Not sure precisely what's going on; ideally we would get the performance improvement in both cases.

System level dependencies part 1

Ability to "build" things that are not in the source tree. Motivating example: Riccardo wants to use descriptor.proto, which is installed with protobuf. Historically we've vendorised these things (e.g. byte.grm for Thrax), but it'd be nice to be able to include standard things like that without having to vendorise them.

The vague plan is to have a way of identifying 'system' dependencies which plz provides by default. In this case it'd just be a proto_library rule plus whatever's needed to move the files about.
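
As a very rough sketch of how that might read (hypothetical syntax; today srcs can't reference files outside the source tree, which is the point of this issue):

proto_library(
    name = 'descriptor_proto',
    srcs = ['/usr/include/google/protobuf/descriptor.proto'],  # hypothetical system-level source
    visibility = ['PUBLIC'],
)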

max_flakes interpretation can be confusing

Currently plz test --max_flakes=1 runs a test only once. This is possibly not the most obvious interpretation; one could also read it as allowing one flake, and hence running the test twice if it failed the first time.

Let's decide what the correct behaviour / interpretation is, then make it that way.
