
Buck2: fast multi-language build system


Homepage  •  Getting Started  •  Contributing

Buck2 is a fast, hermetic, multi-language build system, and a direct successor to the original Buck build system ("Buck1") — both designed by Meta.

But what do those words actually mean for a build system, and why might they matter to you? And why build Buck2 at all, when so many build systems already exist?

  • Fast. It doesn't matter whether a single build command takes 60 seconds or 0.1 seconds to complete: when you have to build things, Buck2 doesn't waste time — it calculates the critical path and gets out of the way, with minimal overhead. It's not just the core design but also careful attention to detail that makes Buck2 so snappy. Buck2 is up to 2x faster than Buck1 in practice [1]. So you spend more time iterating and less time waiting.
  • Hermetic. When using Remote Execution [2], Buck2 becomes hermetic: a build rule is required to correctly declare all of its inputs; if they aren't specified correctly (e.g. a .c file needs a .h file that isn't listed), the build fails. This enforced correctness helps avoid entire classes of errors that most build systems allow, and helps ensure builds work everywhere for all users. Buck2 also tracks dependencies with far better accuracy than Buck1, in more languages and across more scenarios. That means "it compiles on my machine" can become a thing of the past.
  • Multi-language. Many teams have to deal with multiple programming languages with complex inter-dependencies, and struggle to express that. Most people settle for make and tie together dune, pip, and cargo. But then how do you run test suites, code coverage, or query code databases? Buck2 is designed to support multiple languages from the start, with abstractions for interoperation. And because it's completely scriptable and users can implement language support, it's incredibly flexible. Now your Python library can depend on an OCaml library, and your OCaml library can depend on a Rust crate — with a single build tool, you have a consistent UX to build, test, and integrate all of these components.
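As a concrete sketch of what cross-language dependencies look like in a build file (the target and file names here are hypothetical; rust_binary and cxx_library are rules provided by the Buck2 prelude):

```python
# myapp/BUCK -- hypothetical targets mixing languages in one graph.
cxx_library(
    name = "fastmath",
    srcs = ["fastmath.cpp"],
    exported_headers = ["fastmath.h"],
)

rust_binary(
    name = "app",
    srcs = ["main.rs"],
    # A Rust binary linking against a C++ library, built and tested
    # with the same tool and the same dependency graph.
    deps = [":fastmath"],
)
```

A single `buck2 build //myapp:app` then builds both halves, with shared caching across the whole graph.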

If you're familiar with systems like Buck1, Bazel, or Pants, then Buck2 will feel warm and cozy, and these ideas will be familiar. So why create Buck2 if those already exist? Because that isn't all — the page "Why Buck2?" on our website goes into more detail on several other important design criteria that separate Buck2 from the rest of the pack, including:

  • Support for ultra-large repositories, through filesystem virtualization and watching for changes to the filesystem.
  • Totally language-agnostic core executable, with a small API — even C/C++ support is written as a library. You can write everything from scratch, if you want to.
  • "Buck Extension Language" (BXL) can be used for self-introspection of the build system, allowing automation tools to inspect and run actions in the build graph. This allows you to more cleanly support features that need graph introspection, like LSPs or compilation databases.
  • Support for distributed compilation, using the same Remote Execution API that is supported by Bazel. Existing solutions like BuildBarn, BuildBuddy, EngFlow, and NativeLink all work today.
  • An efficient, robust, and sound design — inspired by modern theory of build systems and incremental computation.
  • And more!
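As a flavor of the BXL point above, a script can query the build graph and print the result; this is a hedged sketch (the script path and target are made up, and the exact BXL API surface may differ across versions):

```python
# tools/deps.bxl -- hypothetical BXL script
def _impl(ctx):
    # Query the dependency graph of a target and print it; tools like
    # LSPs or compilation-database generators build on this kind of
    # introspection.
    ctx.output.print(ctx.uquery().deps("root//app:main"))

print_deps = bxl_main(
    impl = _impl,
    cli_args = {},
)
```

It would be invoked with something like `buck2 bxl //tools/deps.bxl:print_deps`.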

If these headline features make you interested — check out the Getting Started guide!

🚧🚧🚧 Warning 🚧🚧🚧 — rough terrain lies ahead

Buck2 was released recently and does not yet have a stable release tag; pre-release and stable tags/binaries will come at later dates. Despite that, it is used extensively inside Meta on vast amounts of code every day, and buck2-prelude is the same code used internally for all of those builds.

Meta simply uses the latest committed HEAD version of Buck2 at all times. Your mileage may vary, but at the moment, tracking HEAD is ideal for submitting bug reports and catching regressions.

In short: you should consider this project and its code battle-tested and working, but outside consumers will encounter quite a lot of rough edges right now — several features are missing or in progress, some toolchains from Buck1 are absent, and you'll probably have to fiddle with things to get everything working smoothly.

Please provide feedback by submitting issues and questions!

Installing Buck2

You can get started by downloading the latest buck2 binary for your platform. The latest tag is updated on every push to the GitHub repository, so it always refers to a recent commit.

You can also compile Buck2 from source, if a binary isn't immediately available for your use; check out the file for information.
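For example, building from source with Cargo looks roughly like this (a sketch: Buck2 builds with a pinned Rust nightly toolchain, so check the repository documentation for the exact nightly to install):

```
# Sketch: install a Rust nightly toolchain, then build buck2 from git.
rustup install nightly
cargo +nightly install --git https://github.com/facebook/buck2.git buck2
```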

Terminology conventions

Frequently used terms and their definitions can be found on the glossary page.


License

Buck2 is licensed under both the MIT license and the Apache-2.0 license; the exact terms can be found in the LICENSE-MIT and LICENSE-APACHE files, respectively.


  1. This number comes from internal usage of Buck1 versus Buck2 at Meta. Please note that appropriate comparisons with systems like Bazel have yet to be performed; Buck1 is the baseline because it's simply what existed and what had to be replaced. Please benchmark Buck2 against your favorite tools and let us know how it goes!

  2. Buck2 currently does not sandbox local-only build steps; in contrast, Buck2 using Remote Execution is always hermetic by design. The vast majority of build rules are remote compatible as well. We hope to lift this restriction in the (hopefully near) future so that local-only builds are hermetic too.

buck2's Issues

Idea for a local execution and cache

Have you considered extending buckd to act as a simple remote cache + remote worker (maybe backed by a container runtime) and e.g. expose REAPI via a socket?

This would allow the client to use the same API and it only puts extra logic on a daemon.

Define a protobuf toolchain

#28 introduces a custom rule to download a protobuf distribution appropriate for the current platform and ready for use by rules like rust_protobuf_library.
This concept could be extended to promote protobuf to a proper toolchain under prelude//toolchains similar to e.g. prelude//toolchains/cxx/zig.
Before implementing this we should research protobuf toolchains in the Bazel ecosystem and see which lessons can be drawn for Buck2. A useful resource may be this document by the Bazel rule author SIG.
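To make the idea concrete, a toolchain promoted this way might be defined roughly as follows. This is a hypothetical sketch modeled on the shape of prelude//toolchains/cxx/zig; the provider and attribute names are illustrative, not an existing API (though `is_toolchain_rule` is a real Buck2 rule parameter):

```python
# Hypothetical prelude//toolchains/protobuf sketch.
ProtobufToolchainInfo = provider(fields = ["protoc", "include"])

def _protobuf_toolchain_impl(ctx):
    return [
        DefaultInfo(),
        ProtobufToolchainInfo(
            # The downloaded distribution (e.g. from the rule in #28)
            # supplies the protoc binary and its include directory.
            protoc = ctx.attrs.distribution[DefaultInfo].default_outputs[0],
            include = ctx.attrs.include,
        ),
    ]

protobuf_toolchain = rule(
    impl = _protobuf_toolchain_impl,
    attrs = {
        "distribution": attrs.dep(),
        "include": attrs.string(default = ""),
    },
    is_toolchain_rule = True,
)
```

Rules like rust_protobuf_library could then consume ProtobufToolchainInfo instead of downloading protoc themselves.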

Building Buck on Illumos

Hey there! I'm excited about Buck, and want to give it a try generally. I think it could solve some problems that I have at work. But we use illumos for various things, which means sometimes stuff gets weird. I personally use Windows, so I'm not an expert on illumos either, so I am ultimately trying to get a program I don't know running on a platform I don't know well; fun times :)

So, here's where I'm at at the time of putting this together:


Let's talk about how I got there.

So, the first thing that goes wrong is:

error: could not compile `fs2` due to previous error
warning: build failed, waiting for other jobs to finish...
error[E0432]: unresolved import `self::os`
 --> /home/sklabnik/.cargo/registry/src/
2 | pub use self::os::*;
  |               ^^ could not find `os` in `self`

Okay, illumos is a weird unix, but I thought nix supported it. Let's check in on nix:

Oh, the latest release is 0.26, not 0.19. Let's see if we can update it:

$ cargo update -p nix
error: There are multiple `nix` packages in your project, and the specification `nix` is ambiguous.
Please re-run this command with `-p <spec>` where `<spec>` is one of the following:
  [email protected]
  [email protected]
  [email protected]
  [email protected]

Oh my.

Turns out 0.19 is being brought in by the dependency on fs2. I haven't heard of that package before. Last commit in 2018. Uh oh. Here's a comment from 3 weeks ago on a PR: danburkert/fs2-rs#42 (comment)

for what it is worth, I submitted a PR to fs4 and they fixed it. I haven't seen any activity on fs2 for >5 years, but fs4 is a maintained fork that provides all of the same functionality.

Okay! Let's sub in fs4! If we do that...

error[E0425]: cannot find function `statvfs` in module `rustix::fs`
  --> src/
55 |     match rustix::fs::statvfs(path.as_ref()) {
   |                       ^^^^^^^ not found in `rustix::fs`
help: consider importing this function
45 | use crate::statvfs;
help: if you import `statvfs`, refer to it directly
55 -     match rustix::fs::statvfs(path.as_ref()) {
55 +     match statvfs(path.as_ref()) {

error[E0425]: cannot find function `allocate` in module `sys`
  --> src/file_ext/
70 |         sys::allocate(self, len)
   |              ^^^^^^^^ not found in `sys`

Not just that, but we're also now trying to build jemallocator, which says

  Invalid configuration `x86_64-unknown-illumos': OS `illumos' not recognized

I asked one of my co-workers about this, and he said

Were it me, I would probably try and port it with ifdefs to use instead for us

It's also possible jemalloc will work -- it has been around long enough that someone probably ported it to Solaris

I saw things in upstream jemalloc that suggest it might, but the jemallocator crate says three linux targets and one mac target, not even windows! (I hear you've got windows builds though; I haven't tried it on my local computer, that will come soon enough though)

But, it seems like jemalloc's statistics API isn't actually required to get a build off the ground. So maybe we could conditionally include it, excluding illumos. Problem: it doesn't seem like that configuration exists for workspaces, so we have to duplicate it into each Cargo.toml. Gross, but I just want this to build first; we can figure out something cleaner later...
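The per-package workaround looks roughly like this (a sketch; the version number is illustrative). Cargo supports platform-conditional dependencies per package via `[target.'cfg(...)'.dependencies]`, but not at the workspace level, which is why it has to be repeated in each Cargo.toml:

```toml
# In each crate's Cargo.toml: only pull in jemallocator off-illumos.
[target.'cfg(not(target_os = "illumos"))'.dependencies]
jemallocator = "0.5"
```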

... but that also doesn't seem to work; it's still trying to build jemallocator anyway. Either I'm making a big mistake or it just ignores more complex conditionals.

I'm also getting errors about protobufs failing to build. Turns out the pre-built stuff isn't supported upstream.

Well, that should be okay, I should be able to set BUCK2_BUILD_PROTOC_INCLUDE and BUCK2_BUILD_PROTOC to override this. But that doesn't work either. I realized that this is because the code as written unconditionally calls that method above in protoc-bin-vendored, and then unwraps it, so it always panics. The diff above includes a small change which I believe shouldn't make that happen anymore.
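The intended override behavior can be sketched like this. This is not the actual buck2 build script; the function and paths are hypothetical, and the vendored lookup is stood in by a caller-supplied closure, but it shows the "check the environment variable before unwrapping the vendored binary" ordering that the fix amounts to:

```rust
use std::env;
use std::path::PathBuf;

// Hedged sketch: prefer the BUCK2_BUILD_PROTOC override when it is set,
// and only consult the vendored lookup (which may panic on unsupported
// platforms like illumos) when it is not.
fn protoc_path(vendored: impl FnOnce() -> PathBuf) -> PathBuf {
    env::var_os("BUCK2_BUILD_PROTOC")
        .map(PathBuf::from)
        .unwrap_or_else(vendored)
}

fn main() {
    env::set_var("BUCK2_BUILD_PROTOC", "/opt/protoc/bin/protoc");
    // With the override set, the vendored lookup must never run.
    let p = protoc_path(|| unreachable!("vendored lookup should not run"));
    assert_eq!(p, PathBuf::from("/opt/protoc/bin/protoc"));

    env::remove_var("BUCK2_BUILD_PROTOC");
    // Without the override, the fallback is consulted.
    let q = protoc_path(|| PathBuf::from("/vendored/protoc"));
    assert_eq!(q, PathBuf::from("/vendored/protoc"));
}
```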

So, finally, building this gives:

error[E0425]: cannot find function `statvfs` in module `rustix::fs`
  --> /home/sklabnik/.cargo/registry/src/
55 |     match rustix::fs::statvfs(path.as_ref()) {
   |                       ^^^^^^^ not found in `rustix::fs`
help: consider importing this function
45 | use crate::statvfs;
help: if you import `statvfs`, refer to it directly
55 -     match rustix::fs::statvfs(path.as_ref()) {
55 +     match statvfs(path.as_ref()) {

error[E0425]: cannot find function `allocate` in module `sys`
  --> /home/sklabnik/.cargo/registry/src/
70 |         sys::allocate(self, len)
   |              ^^^^^^^^ not found in `sys`

error[E0432]: unresolved import `crate::cpu::cpu_times_percpu`
 --> /home/sklabnik/.cargo/registry/src/
6 | use crate::cpu::{cpu_times, cpu_times_percpu, CpuTimes};
  |                             ^^^^^^^^^^^^^^^^
  |                             |
  |                             no `cpu_times_percpu` in `cpu`
  |                             help: a similar name exists in the module: `cpu_times_percent`

error[E0432]: unresolved import `crate::network::net_io_counters_pernic`
 --> /home/sklabnik/.cargo/registry/src/
8 | use crate::network::net_io_counters_pernic;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `net_io_counters_pernic` in `network`
  error: failed to run custom build command for `jemalloc-sys v0.5.3+5.3.0-patched`

Caused by:
  process didn't exit successfully: `/home/sklabnik/buck2/target/debug/build/jemalloc-sys-0a4333827d733bae/build-script-build` (exit status: 101)
  --- stdout
  OPT_LEVEL = Some("1")
  TARGET = Some("x86_64-unknown-illumos")
  HOST = Some("x86_64-unknown-illumos")
  CC_x86_64-unknown-illumos = None
  CC_x86_64_unknown_illumos = None
  HOST_CC = None
  CC = None
  CFLAGS_x86_64-unknown-illumos = None
  CFLAGS_x86_64_unknown_illumos = None
  CFLAGS = None
  DEBUG = Some("true")
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,llvm14-builtins-abi,sse,sse2")
  CFLAGS="-O1 -ffunction-sections -fdata-sections -fPIC -g -fno-omit-frame-pointer -m64 -Wall"
  running: cd "/home/sklabnik/buck2/target/debug/build/jemalloc-sys-56f23bc0b6815c11/out/build" && CC="gcc" CFLAGS="-O1 -ffunction-sections -fdata-sections -fPIC -g -fno-omit-frame-pointer -m64 -Wall" CPPFLAGS="-O1 -ffunction-sections -fdata-sections -fPIC -g -fno-omit-frame-pointer -m64 -Wall" LDFLAGS="-O1 -ffunction-sections -fdata-sections -fPIC -g -fno-omit-frame-pointer -m64 -Wall" "sh" "/home/sklabnik/buck2/target/debug/build/jemalloc-sys-56f23bc0b6815c11/out/build/configure" "--disable-cxx" "--enable-doc=no" "--enable-shared=no" "--with-jemalloc-prefix=_rjem_" "--with-private-namespace=_rjem_" "--enable-prof" "--host=x86_64-unknown-illumos" "--build=x86_64-unknown-illumos" "--prefix=/home/sklabnik/buck2/target/debug/build/jemalloc-sys-56f23bc0b6815c11/out"
  checking for xsltproc... /usr/bin/xsltproc
  checking for x86_64-unknown-illumos-gcc... gcc
  checking whether the C compiler works... yes
  checking for C compiler default output file name... a.out
  checking for suffix of executables...
  checking whether we are cross compiling... no
  checking for suffix of object files... o
  checking whether we are using the GNU C compiler... yes
  checking whether gcc accepts -g... yes
  checking for gcc option to accept ISO C89... none needed
  checking whether compiler is cray... no
  checking whether compiler supports -std=gnu11... yes
  checking whether compiler supports -Werror=unknown-warning-option... no
  checking whether compiler supports -Wall... yes
  checking whether compiler supports -Wextra... yes
  checking whether compiler supports -Wshorten-64-to-32... no
  checking whether compiler supports -Wsign-compare... yes
  checking whether compiler supports -Wundef... yes
  checking whether compiler supports -Wno-format-zero-length... yes
  checking whether compiler supports -Wpointer-arith... yes
  checking whether compiler supports -Wno-missing-braces... yes
  checking whether compiler supports -Wno-missing-field-initializers... yes
  checking whether compiler supports -Wno-missing-attributes... yes
  checking whether compiler supports -pipe... yes
  checking whether compiler supports -g3... yes
  checking how to run the C preprocessor... gcc -E
  checking for grep that handles long lines and -e... /usr/gnu/bin/grep
  checking for egrep... /usr/gnu/bin/grep -E
  checking for ANSI C header files... yes
  checking for sys/types.h... yes
  checking for sys/stat.h... yes
  checking for stdlib.h... yes
  checking for string.h... yes
  checking for memory.h... yes
  checking for strings.h... yes
  checking for inttypes.h... yes
  checking for stdint.h... yes
  checking for unistd.h... yes
  checking whether byte ordering is bigendian... no
  checking size of void *... 8
  checking size of int... 4
  checking size of long... 8
  checking size of long long... 8
  checking size of intmax_t... 8
  checking build system type... running: "tail" "-n" "100" "/home/sklabnik/buck2/target/debug/build/jemalloc-sys-56f23bc0b6815c11/out/build/config.log"

  ## ----------- ##
  ## confdefs.h. ##
  ## ----------- ##

  /* confdefs.h */
  #define PACKAGE_NAME ""
  #define PACKAGE_TARNAME ""
  #define PACKAGE_VERSION ""
  #define PACKAGE_STRING ""
  #define PACKAGE_URL ""
  #define STDC_HEADERS 1
  #define HAVE_SYS_TYPES_H 1
  #define HAVE_SYS_STAT_H 1
  #define HAVE_STDLIB_H 1
  #define HAVE_STRING_H 1
  #define HAVE_MEMORY_H 1
  #define HAVE_STRINGS_H 1
  #define HAVE_INTTYPES_H 1
  #define HAVE_STDINT_H 1
  #define HAVE_UNISTD_H 1
  #define SIZEOF_VOID_P 8
  #define LG_SIZEOF_PTR 3
  #define SIZEOF_INT 4
  #define LG_SIZEOF_INT 2
  #define SIZEOF_LONG 8
  #define LG_SIZEOF_LONG 3
  #define SIZEOF_LONG_LONG 8
  #define SIZEOF_INTMAX_T 8
  #define LG_SIZEOF_INTMAX_T 3

  configure: exit 1

  --- stderr
  Invalid configuration `x86_64-unknown-illumos': OS `illumos' not recognized
  configure: error: /bin/sh build-aux/config.sub x86_64-unknown-illumos failed
  thread 'main' panicked at 'command did not execute successfully: cd "/home/sklabnik/buck2/target/debug/build/jemalloc-sys-56f23bc0b6815c11/out/build" && CC="gcc" CFLAGS="-O1 -ffunction-sections -fdata-sections -fPIC -g -fno-omit-frame-pointer -m64 -Wall" CPPFLAGS="-O1 -ffunction-sections -fdata-sections -fPIC -g -fno-omit-frame-pointer -m64 -Wall" LDFLAGS="-O1 -ffunction-sections -fdata-sections -fPIC -g -fno-omit-frame-pointer -m64 -Wall" "sh" "/home/sklabnik/buck2/target/debug/build/jemalloc-sys-56f23bc0b6815c11/out/build/configure" "--disable-cxx" "--enable-doc=no" "--enable-shared=no" "--with-jemalloc-prefix=_rjem_" "--with-private-namespace=_rjem_" "--enable-prof" "--host=x86_64-unknown-illumos" "--build=x86_64-unknown-illumos" "--prefix=/home/sklabnik/buck2/target/debug/build/jemalloc-sys-56f23bc0b6815c11/out"
  expected success, got: exit status: 1', /home/sklabnik/.cargo/registry/src/
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I am not sure how much time I'll be able to devote to figuring out this build, but will keep poking at it, and tracking how things seem to go.

no prelude rust example mysteriously fails

If I am in the "no prelude" example directory, building the Rust project works just fine.

buck2\examples\no_prelude〉buck2 build //rust:main
Build ID: 08d04cf1-ec8f-4450-87ec-b5674348fdce
Jobs completed: 3. Time elapsed: 0.0s.

If I copy that entire subdirectory into a new project:

buck-rust-hello〉git status
On branch main
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        modified:   .buckconfig
        new file:   .buckroot
        deleted:    .gitmodules
        modified:   BUCK
        new file:
        new file:   cpp/hello_world/BUCK
        new file:   cpp/hello_world/func.cpp
        new file:   cpp/hello_world/func.hpp
        new file:   cpp/hello_world/main.cpp
        new file:   cpp/library/BUCK
        new file:   cpp/library/library.cpp
        new file:   cpp/library/library.hpp
        new file:   cpp/rules.bzl
        new file:   go/BUCK
        new file:   go/go_binary.bzl
        new file:   go/main.go
        new file:   go/rules.bzl
        deleted:    prelude
        new file:   prelude.bzl
        new file:   prelude/prelude.bzl
        new file:   rust/BUCK
        new file:   rust/
        new file:   rust/rules.bzl
        modified:   toolchains/BUCK
        new file:   toolchains/cpp_toolchain.bzl
        new file:   toolchains/export_file.bzl
        new file:   toolchains/go_toolchain.bzl
        new file:   toolchains/rust_toolchain.bzl
        new file:   toolchains/symlink.bat

I get a failure:

buck2\examples\no_prelude〉buck2 build //rust:main
File changed: root//cpp/hello_world
File changed: root//cpp/library
File changed: root//.git/index.lock
70 additional file change events
Action failed: root//rust:main (compile)
Required outputs are missing: Action failed to produce outputs: `buck-out/v2/gen/root/6dd044292ff31ae1/rust/__main__/main`
Reproduce locally: `rustc "--crate-type=bin" "rust\\" -o "buck-out\\v2\\gen\\root\\6dd044292ff31ae1\\rust\\__main__\\main"`
Build ID: 0efa591f-0382-4df5-b9a5-bd623bf4a3b5
Jobs completed: 3. Time elapsed: 0.7s. Cache hits: 0%. Commands: 1 (cached: 0, remote: 0, local: 1)
Failed to build 'root//rust:main (<unspecified>)'

Now, why this is failing is very interesting: it's looking for a binary named main, but rustc will be producing one called main.exe. What's extra confusing to me, after scouring all of these rules for the past few hours, is that:

main.exe is being produced by the compiler in the new repo:

buck-rust-hello〉ls buck-out\\v2\\gen\\root\\6dd044292ff31ae1\\rust\\__main__\\
│ # │                             name                             │ type │  size   │   modified   │
│ 0 │ buck-out\v2\gen\root\6dd044292ff31ae1\rust\__main__\main.exe │ file │ 5.0 MiB │ a minute ago │
│ 1 │ buck-out\v2\gen\root\6dd044292ff31ae1\rust\__main__\main.pdb │ file │ 1.2 MiB │ 2 hours ago  │

but it is producing main with no extension in the demo sub-repo:

buck2\examples\no_prelude〉ls buck-out\v2\gen\root\6dd044292ff31ae1\rust\__main__\
╭───┬──────────────────────────────────────────────────────────────┬──────┬───────────┬─────────────╮
│ # │                             name                             │ type │   size    │  modified   │
│ 0 │ buck-out\v2\gen\root\6dd044292ff31ae1\rust\__main__\main     │ file │ 159.5 KiB │ 2 hours ago │
│ 1 │ buck-out\v2\gen\root\6dd044292ff31ae1\rust\__main__\main.pdb │ file │   1.2 MiB │ 2 hours ago │

Does this behavior make sense? Is there some other bit of ambient configuration that I'm missing?
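For what it's worth, one way a rule can account for the platform suffix, sketched in Starlark (host_info() is a real Buck2 builtin; the surrounding rule shape is illustrative, not the actual no_prelude code):

```python
def _rust_binary_impl(ctx):
    # Append the platform's executable suffix so the declared output
    # matches what rustc actually produces on Windows.
    suffix = ".exe" if host_info().os.is_windows else ""
    out = ctx.actions.declare_output("main" + suffix)
    # ... then invoke rustc with `-o`, out.as_output() as before ...
```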

Third party dependencies?

What's the story for third party dependencies? As far as I can tell Buck2 supports http_archive() like Bazel, but that has various issues and is very unergonomic (in Bazel at least; I've never used Buck1).

Google developed Bzlmod as an improvement. Is there anything like this for Buck2, or plans to add support?

Probably better to do it sooner rather than later if you want Buck2 to be popular (it looks pretty great tbh so I hope it does become popular!). Bazel has a bit of a slow migration issue with bzlmod - there are loads of packages that still haven't migrated to it so it's not really that useful, which I imagine is causing people not to bother migrating their packages.

Generated `BUCK` file points to a URL that does not exist

When you generate a new project with buck2 init --git, the generated BUCK file contains this line:

# A list of available rules and their signatures can be found here:

However, this URL does not exist.

buck2_re_client server and config?

Hi - I can see a config section for buck2_re_client in the code, and have traced the various components like perform_cache_upload, but it's not clear to me what that cache protocol looks like. Is this code usable in the open source release?

How to use multiple anon targets to build a graph?

First off, congrats on the Buck2 source release. I know it isn't ready yet, but I've been patiently, eagerly awaiting it for a while, and I'm so far pleased with my initial testing of it. I'm very interested in the more dynamic class of build systems that Buck2 is part of, with my previous "world champion" title holder in this category being Shake. (Also: Hi Neil!) I'm naturally already using this in fury for nothing important, which is probably a bad idea, but I hope I can give some feedback.

Here is some background. I have a file called toolchains.bzl that contains a big list of hashes and the dependencies each hash has; effectively this is just a DAG encoded as a dict. I would like to "build" each of these hashes; in practice that means doing some work to download it, then creating a symlink pointing to it (really pointing to /nix/store, but that isn't wholly relevant).

Conceptually, the DAG of hashes is kind of like a series of "source files", where each hash is a source file, and should be "compiled" (downloaded) before any dependent "sources" are compiled. And many different targets can share these "source files". For example, here is the reverse-topologically sorted DAG for the hash 5jfg0xr0nkii0jr7v19ri9zl9fnb8cx8-rust-default-1.65.0, which you can compute yourself from the above toolchains file:


So you can read this list like so: if each line is a source input N (ranging from [0...N]), then you must build every source input [0...(N-1)] before building file N itself. Exactly what you expect.

Problem 1: anon_targets example seems broken

So two different hashes may have a common ancestor/set of dependencies; glibc is a good example because almost every hash has it in its dependency tree. This seemed like a perfect use case for anonymous targets: it simply allows common work to be shared, introducing sharing that would be lost otherwise. In fact the example in that document is in some sense the same as this one; many "source files" are depended upon by multiple targets, but they don't know about the common structure between them. Therefore you can compile a single "source file" once, rather than N times for each target.

But I simply can't get it to work, and because I'm new to Buck, I feel a bit lost on how to structure it. I think the problem is simply that the anon_targets function defined in there doesn't work. I have specialized it here in the below __anon_nix_targets function:

NixStoreOutputInfo = provider(fields = [ "path" ])

# this rule is run anonymously. its only job is to download a file and create a symlink to it as its sole output
def __nix_build_hash_0(ctx):
    out = ctx.actions.declare_output("{}".format(ctx.attrs.hash))
    storepath = "/nix/store/{}".format(ctx.attrs.hash)
    ctx.actions.run(cmd_args(
        ["nix", "build", "--out-link", out.as_output(), storepath]
    ), category = "nix")
    return [ DefaultInfo(default_outputs = [out]), NixStoreOutputInfo(path = out) ]

__nix_build_hash = rule(
    impl = __nix_build_hash_0,
    attrs = { "hash": attrs.string() },
)
# this builds many anonymous targets with the previous __nix_build_hash rule
def __anon_nix_targets(ctx, xs, k=None):
    def f(hs, ps):
        if len(hs) == 0:
            return k(ctx, ps) if k else ps
        else:
            return ctx.actions.anon_target(__nix_build_hash, hs[0]).map(
                lambda p: f(hs[1:], ps+[p])
            )
    return f(xs, [])

# this downloads a file, and symlinks it, but only after all the dependents are done
def __nix_build_toolchain_store_path_impl(ctx: "context"):
    hash = "5jfg0xr0nkii0jr7v19ri9zl9fnb8cx8-rust-default-1.65.0"
    deps = [
        # "sdsqayp3k5w5hqraa3bkp1bys613q7dc-libunistring-1.0",
        # "s0w6dz5ipv87n7fn808pmzgxa4hq4bil-libidn2-2.3.2",
        # "hsk71z8admvgykn7vzjy11dfnar9f4r1-glibc-2.35-163",
        # "x7h8sxz1cf5jrx1ixw5am4w300gbrjr1-cargo-1.65.0-x86_64-unknown-linux-gnu",
        # "n6mpg42fjx73y2kr1vl8ihj1ykmdhrbm-rustfmt-preview-1.65.0-x86_64-unknown-linux-gnu",
        # "nfgpn9av331q7zi1dl6d5qpir60y513s-bash-5.1-p16",
        # "k0wbm2panqbb0divlapqazbwlvcgv6m0-expand-response-params",
        # "2vqp383jfrsjb3yq0szzkirya257h1dp-gcc-11.3.0-lib",
        # "nwl7pzafadvagabksz61rg3b3cs58n9i-gmp-with-cxx-stage4-6.2.1",
        # "vv0xndc0ip83f72n0hz0wlcf3g8jhsjd-attr-2.5.1",
        # "6b882j01cn2s9xjfsxv44im4pm4b3jsr-acl-2.3.1",
        # "h48pjfgsjl75bm7f3nxcdcrqjkqwns7m-coreutils-9.1",
        # "lal84wf8mcz48srgfshj4ns1yadj1acs-zlib-1.2.13",
        # "92h8cksyz9gycda22dgbvvj2ksm01ca4-binutils-2.39",
        # "dj8gbkmgrkwndjghna8530hxavr7b5f4-linux-headers-6.0",
        # "2vbw0ga4hlxchc3hfb6443mv735h5gcp-glibc-2.35-163-bin",
        # "7p2s9z3hy317sdwfn0qc5r8qccgynlx1-glibc-2.35-163-dev",
        # "hz9w5kjpnwia847r4zvnd1dya6viqpz1-binutils-wrapper-2.39",
        # "gc7zr7wh575g1i5zs20lf3g45damwwbs-gcc-11.3.0",
        # "qga0k8h2dk8yszz1p4iz5p1awdq3ng4p-pcre-8.45",
        # "fnzj8zmxrq96vnigd0zc888qyys22jfv-gnugrep-3.7",
        # "k04h29hz6qs45pn0mzaqbyca63lrz2s0-gcc-wrapper-11.3.0",
        # "wrwx0zy8zblcsq8zwhdqbsxr2jv063fk-rustc-1.65.0-x86_64-unknown-linux-gnu",
        # "2s0sp14r5aaxhl0z16b99qcrrpfx7chi-clippy-preview-1.65.0-x86_64-unknown-linux-gnu",
    ]

    def k(ctx, ps):
        deps = [p[NixStoreOutputInfo].path for p in ps]
        out = ctx.actions.declare_output("{}".format(hash))
        storepath = "/nix/store/{}".format(hash)
        ctx.actions.run(cmd_args(
            ["nix", "build", "--out-link", out.as_output(), storepath]
        ).hidden(deps), category = "nix")
        return [ DefaultInfo(default_outputs = deps + [out]), NixStoreOutputInfo(path = out) ]

    return __anon_nix_targets(ctx, [{"hash": d} for d in deps], k)

__build_toolchain_store_path_rule = rule(impl = __nix_build_toolchain_store_path_impl, attrs = {})

If the list deps has any number of entries > 1, then this example fails with any example TARGET:

austin@GANON:~/src/$ buck clean; buck build src/nix-depgraph:
server shutdown
Initialization complete, running the server.
When running analysis for `root//src/nix-depgraph:rust-stable (<unspecified>)`

Caused by:
    expected a list of Provider objects, got promise()
Build ID: 5612be60-9b84-4657-97f6-64e049aedada
Jobs completed: 4. Time elapsed: 0.3s.

However, if len(deps) == 1, i.e. you comment out the next-to-last line, then it works as expected. I think the problem might be that if there is a single element of type promise in a list, then Buck can figure it out, but if the list holds many promises, it simply can't? Or something?

So I've simply been banging my head on this for a day or so, and can't find any reasonable way to make this example work that's intuitive or obvious... I actually had to fix several syntax errors in the anon_targets example documentation when I took the code from it (e.g. it uses the invalid slicing syntax xs[1...] instead of the correct xs[1:]), so I suspect it may have just been a quick example. I can send a PR to fix that, perhaps.

But it always comes back to this exact error: expected list of Providers, but got promise(). Some advice here would be nice; I feel this is close to working, though.
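For what it's worth, some Buck2 revisions expose ctx.actions.anon_targets (plural), which resolves a whole list of anonymous targets through a single promise instead of hand-chaining .map(). A heavily hedged sketch, assuming that API is available in your checkout (`hashes` here is a hypothetical list of hash strings):

```python
def _impl(ctx):
    specs = [(__nix_build_hash, {"hash": h}) for h in hashes]
    # One promise for the whole list, mapped once to a provider list.
    return ctx.actions.anon_targets(specs).map(
        lambda providers: [DefaultInfo(
            default_outputs = [p[NixStoreOutputInfo].path for p in providers],
        )],
    )
```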

Problem 2: dependencies of anonymous targets need to (recursively) form a graph

The other problem in the above example, which could be tackled after the previous one, is that the dependencies don't properly form a graph structure. If we have the reverse-toposorted list:


Then the above example correctly specifies foobarbazqux as having all preceding entries as dependencies. But this isn't recursive: foobarbaz doesn't specify that it needs foo and foobar; foobar doesn't specify it needs foo and so on.

In my above example this isn't strictly required for correctness, because the nix build command can handle it automatically. But it does mean that the graph Buck sees isn't really "complete" or accurate, because it is missing the dependency structure between nodes.

So I don't really know the "best" way to structure this. I basically just need anonymous targets that are dependent on other anonymous targets, I guess. This is really a code structure question, and honestly I believe I'm overthinking it substantially, but I'd like some guidance to help ensure I'm doing things the "buck way" if at all possible.

It is worth noting (or emphasizing) that the graph of dependencies above, the toolchains.bzl file, is always "statically known" up front, and is auto-generated. I could change its shape (the dict structure) as much as I want if it makes it easier, but the list of dependencies really is fully static, so this feels like something that should be very possible with Buck.
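Since the dependency table is fully static, the transitive closure can be computed up front in a few lines. This is an illustrative Python sketch (DIRECT_DEPS and transitive_deps are hypothetical names, not Buck2 API), assuming an acyclic table shaped like the generated toolchains.bzl described above:

```python
# Hypothetical static dependency table: each entry lists only its direct deps.
DIRECT_DEPS = {
    "foo": [],
    "foobar": ["foo"],
    "foobarbaz": ["foobar"],
    "foobarbazqux": ["foobarbaz"],
}

def transitive_deps(name, table):
    """Expand direct deps into the full transitive closure, dependency-first."""
    seen, order = set(), []
    def visit(n):
        for d in table[n]:
            if d not in seen:
                visit(d)
                seen.add(d)
                order.append(d)
    visit(name)
    return order

# transitive_deps("foobarbazqux", DIRECT_DEPS) == ["foo", "foobar", "foobarbaz"]
```

Because the table is generated, a preprocessing pass like this could emit the fully expanded deps for each anonymous target, sidestepping the recursion problem entirely.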

Other than that...

The above two problems are my current issues, and solving them would be amazing, but I'm all ears for alternative solutions. It's possible this is an "X/Y problem" situation, I suppose.

But other than that: I'm pretty excited about Buck2 and can't wait to see it stabilize! Sorry for the long post; I realize you said "you'll probably have a bad time" using the repository currently, but you wanted feedback, and I'm happy to provide it!

Side note 1

I notice that there are no users of anon_target anywhere in the current Buck prelude, and no examples or tests of it; so perhaps something silently broke or isn't ready to be used yet? I don't know how it's used inside Meta, to be fair, so perhaps something is simply out of date.

Side note 2

I am not using the default Buck prelude, but my own prelude designed around Nix, so it would be nice if any solutions were "free-standing" like the code above and didn't require the existing prelude.

Side note 3

The documentation notes that anon_targets could be a builtin with potentially more parallelism. While that's nice, I argue something like anon_targets should really live in the prelude itself, even if it isn't a builtin, exactly so that people aren't left to do the error-prone thing of copy/pasting it everywhere, like I did above, and discovering several problems along the way.

'clang++' is not recognized as an internal or external command, operable program or batch file.

Hey there!

Trying to get a buck "hello world" building on Windows. I have this BUCK file:

    name = "hello_world",
    srcs = [""],
    crate_root = "",

and this toolchains\BUCK:

load("@prelude//toolchains:rust.bzl", "system_rust_toolchain")
load("@prelude//toolchains:genrule.bzl", "system_genrule_toolchain")
load("@prelude//toolchains:cxx.bzl", "system_cxx_toolchain")
load("@prelude//toolchains:python.bzl", "system_python_bootstrap_toolchain")

    name = "genrule",
    visibility = ["PUBLIC"],

    name = "rust",
    default_edition = "2021",
    visibility = ["PUBLIC"],

    name = "cxx",
    visibility = ["PUBLIC"],

    name = "python_bootstrap",
    visibility = ["PUBLIC"],

When I try to buck2 build //:hello_world, I get this:

Local command returned non-zero exit code 1
Reproduce locally: `"buck-out\\v2\\gen\\prelude\\fb50fd37ce946800\\python_bootstrap\\tools\\__win_python_wrapper__\\win_ ...<omitted>... \\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\hello_world-link-diag.args" (run `buck2 log what-failed` to get the full command)`
error: linking with `buck-out\v2\gen\root\fb50fd37ce946800\__hello_world__\__linker_wrapper.bat` failed: exit code: 1
  = note: "cmd" "/c" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\__linker_wrapper.bat" "-fno-use-linker-plugin" "-Wl,--dynamicbase" "-Wl,--disable-auto-image-base" "-m64" "-Wl,--high-entropy-va" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\self-contained\\crt2.o" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\rsbegin.o" "C:\\Users\\steve\\Documents\\GitHub\\buck-rust-hello\\buck-out\\v2\\tmp\\root\\fb50fd37ce946800\\__hello_world__\\rustc\\_buck_309d5f84a6053e4a\\rustcciRgVe\\symbols.o" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\extras\\hello_world\\hello_world.hello_world.aeb6c6e0-cgu.0.rcgu.o" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\extras\\hello_world\\hello_world.hello_world.aeb6c6e0-cgu.1.rcgu.o" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\extras\\hello_world\\hello_world.hello_world.aeb6c6e0-cgu.2.rcgu.o" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\extras\\hello_world\\hello_world.hello_world.aeb6c6e0-cgu.3.rcgu.o" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\extras\\hello_world\\hello_world.hello_world.aeb6c6e0-cgu.4.rcgu.o" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\extras\\hello_world\\hello_world.hello_world.aeb6c6e0-cgu.5.rcgu.o" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\extras\\hello_world\\hello_world.1l3753u24tlzpzc9.rcgu.o" "-L" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link-deps-0" "-L" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib" "-Wl,-Bstatic" 
"C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libstd-e363be82127e72d4.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libpanic_unwind-271c0a4c2400bd0e.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libobject-3b3a88ddf57ad9b8.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libmemchr-c38acbaaa0512e61.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libaddr2line-a777dde688506f47.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libgimli-00e812c5215e2bb4.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\librustc_demangle-9824443ffde90bb7.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libstd_detect-c9cae9f57d72c5d8.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libhashbrown-80b5e088fad27661.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libminiz_oxide-25b744457ec6a6b9.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libadler-b662208514509737.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\librustc_std_workspace_alloc-70e1db2cbff7c5e3.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libunwind-bc622eac43f92150.rlib" 
"C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libcfg_if-da38528f9991ea5d.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\liblibc-0217604e5fc185ea.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\liballoc-094368c19a10127d.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\librustc_std_workspace_core-9310325d5d5607bd.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libcore-5c3fe6fc6388f93c.rlib" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\libcompiler_builtins-d765c9bc514400ee.rlib" "-Wl,-Bdynamic" "-lkernel32" "-ladvapi32" "-luserenv" "-lkernel32" "-lws2_32" "-lbcrypt" "-lgcc_eh" "-l:libpthread.a" "-lmsvcrt" "-lmingwex" "-lmingw32" "-lgcc" "-lmsvcrt" "-luser32" "-lkernel32" "-Wl,--nxcompat" "-nostartfiles" "-L" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib" "-L" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\self-contained" "-o" "buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\static_pic\\hello_world.exe" "-Wl,--gc-sections" "-no-pie" "-nodefaultlibs" "@buck-out\\v2\\gen\\root\\fb50fd37ce946800\\__hello_world__\\bin-pic-static_pic-link\\__hello_world-link_linker_args.txt" "C:\\Users\\steve\\.rustup\\toolchains\\stable-x86_64-pc-windows-gnu\\lib\\rustlib\\x86_64-pc-windows-gnu\\lib\\rsend.o"
  = note: 'clang++' is not recognized as an internal or external command,
          operable program or batch file.

error: aborting due to previous error

Build ID: 767daac4-04b8-4412-8f4b-9953bafed010
Jobs completed: 3. Time elapsed: 0.5s. Cache hits: 0%. Commands: 1 (cached: 0, remote: 0, local: 1)
Failed to build 'root//:hello_world (prelude//platforms:default#fb50fd37ce946800)'

Now, what's weird about this is that clang++ is in my path:

$ clang++
clang++: error: no input files

I am not sure what I am doing wrong here, but any pointers would be greatly appreciated.

jemalloc probably won't work well on aarch64-linux

Leaving this here while I'm using the laptop, so that I don't forget it. Maybe something can be done, maybe not. But this will probably come back to bite someone eventually, I suspect.

jemalloc currently seems to be Buck2's global allocator. While I understand jemalloc is a big part of what makes Facebook tick, and it's excellent, there is a problem: jemalloc compiles the page size of the host operating system into the library, effectively making it part of its ABI. In other words, if you build jemalloc on a host with page size X, and then run it on an OS with page size Y, and X != Y, things get bad: your program just crashes.

Until relatively recently, this wasn't a problem. Why? Because most systems collectively decided that 4096-byte pages are good enough (that's wrong, but not important.) So almost everything uses that, except for the fancy new Apple Silicon M-series, such as my M2 MBA. These systems exclusively use 16k pages, not 4k. This page size is perfectly allowed by the architecture (in fact, 4k, 8k, 16k, 32k, and 64k are all valid on aarch64), and 16k pages are a great choice for many platforms, especially client ones.

So the problem begins to crop up once people start building aarch64-linux binaries for their platforms, e.g. Arch Linux ARM or NixOS, which distribute aarch64 binaries. Until the advent of Apple Silicon, you could reasonably expect everything to use the same page size. But now we have this new, reasonably popular platform using 16k pages. There's a weird split happening here: most of the systems building packages for users are some weird ARM board (or VM) in a lab churning out binaries 24/7. They just need to run Linux and not catch fire. They aren't very fast, they typically use old CPUs, and they often run custom, hacked Linux kernels that barely work. But most developers or end users? They want good performance and lots of features, with a stable kernel. For ARM platforms, the only reasonably supported options they have today are Raspberry Pis, the Nvidia Jetson series, and now Apple Silicon. And Apple Silicon is, without comparison, the best bang for your buck and the highest performer. So users increasingly tend toward one platform, while the systems churning out packages use another, incompatible one.
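The mismatch is easy to observe. Here's a quick check of the page size the running kernel actually uses (standard POSIX, nothing Buck2-specific); jemalloc bakes an equivalent value into the binary at build time, and the crashes happen when the two disagree:

```python
import os

# Ask the kernel for its runtime page size. jemalloc compiles a value like
# this into the library at *build* time; if they differ at run time, you
# get the crashes described above.
page_size = os.sysconf("SC_PAGE_SIZE")
print(page_size)  # 4096 on typical x86_64 Linux; 16384 on Apple Silicon
```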

This isn't a theoretical concern: Asahi Linux users like myself still (somewhat often) run into broken software. jemalloc isn't the only thing that doesn't easily support non-4k pages; it's just one of the more notorious and easy-to-spot culprits, and it turns otherwise-working packages into non-working ones:

Right now, I'm building buck2 manually, so this isn't a concern for me. But it means my binaries aren't usable by non-Apple-Silicon users, and vice versa.

So there are a few reasonable avenues of attack here:

  • Don't use a custom allocator at all, and rely on libc.
    • Probably not good; most libcs notoriously aim for "good" steady-state performance, not peak performance under harsher conditions.
  • Turn off jemalloc only on aarch64
    • Maybe OK, though a weird incongruence.
  • Turn on jemalloc only when the user (e.g. internal FB builds) asks for it.
    • Maybe OK; at least you could argue y'all have enough money to support customized builds like this, while the rest of us need something else.
    • You're already doing your own custom builds, so maybe this isn't a big deal.
  • Switch to another allocator wholesale.
    • Could also make it a configurable toggle.
    • Making it a toggle is potentially a footgun, though; it's the kind of "useless knob" that people only bang on once the other ones don't work and they're desperate. That makes it more likely to bitrot, and to lag in testing and performance evaluation.
    • I've had very good experience with mimalloc; much like jemalloc it also has an excellent design, fun codebase, and respectable author (Daan Leijen fan club.)
      • But I haven't confirmed it avoids this particular quirk of jemalloc's design. Maybe a dead end.
    • It would probably require a bunch of testing on a large codebase to see what kind of impact this change has. I suspect the FB codebase is a good place to try. ;)

I don't know which one of these is the best option.

Access absolute output path

Really liking buck2 so far, great work!

I've been trying to implement a proto_library rule in Starlark, but am stuck on a small thing: when creating the command, I need access to the absolute directory of an artifact in order to pass it to protoc. That is, what do I pass for ??? in the snippet below? If I use e.g. `.`, then protoc just writes into my current git checkout.

cmd = cmd_args(["protoc", "--cpp_out=???"])

I've tried various combinations of

  • $(location) => does not get substituted
  • declaring an output with dir=True => "conflicts with the following output paths"

The closest thing I can find in the code is in genrule.bzl, but it also doesn't appear to work (nor would I expect it to, based on the Rust code):

"GEN_DIR": cmd_args("GEN_DIR_DEPRECATED"),  # ctx.relpath(ctx.output_root_dir(), srcs_path)

Injecting module state inside BUILD files?

While reading the prelude I saw the oncall mechanism. It isn't really relevant here; it's the syntax that I like. Here's something I was wondering whether it could be made to work:

load("@prelude//rust.bzl", "rust_binary")

license("MIT OR Apache-2.0")

    name = "main",
    file = "./",
    out = "hello",

The idea here is that the license field specifies the license of all targets in the current BUILD module. But it could be arbitrary metadata in the module context, and the rust_binary rule (or any rule, really) could look up this data and use it. This would require the ability for license to somehow inject state that rust_binary could read. I would also like this to be queryable via BXL.

Not sure what this would look like, though. But I'm wondering if something like this can be achieved today.

The workaround is fairly simple: just specify a license = "..." attribute on every target. So this is mostly a convenience, but the general mechanism could be used in other ways, perhaps.
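For what it's worth, a macro-level version of the idea can be sketched in plain Python (Starlark-like). Everything here is a hypothetical stand-in, not a real Buck2 or prelude API: license() records module-level metadata, and a wrapper macro injects it into targets declared afterwards:

```python
# Hypothetical sketch: `license()` records module-level metadata, and a
# wrapper macro defaults the `license` attribute from that state.
_module_state = {}

def license(spdx_id):
    """Record the license applying to all targets declared after this call."""
    _module_state["license"] = spdx_id

def rust_binary(name, **kwargs):
    """Wrapper that injects the module-level license into the target's attrs."""
    attrs = dict(kwargs)
    attrs.setdefault("license", _module_state.get("license"))
    attrs["name"] = name
    return attrs

license("MIT OR Apache-2.0")
target = rust_binary(name = "main")
print(target["license"])  # MIT OR Apache-2.0
```

A real implementation would still need per-module (rather than global) state, which is exactly the part Starlark's load() model makes hard today.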

building example fails with "Buck2 panicked and DICE may be responsible"

An attempt at building examples/prelude failed with a panic. Steps that caused it:

  • Checkout e7ab90e
  • $ cd examples/prelude
  • $ buck2 build ...
    WARNING: You are using Buck v2 compiled with `cargo`, not `buck`.
             Some operations may go slower and logging may be impaired.
    File changed: root//buck-out/v2/log/20220923-115200_7b50d74a-2e87-49b4-95f5-9613a98aeee1_events.proto.gz
    File changed: root//buck-out/v2/log
    24 additional file changes
    Buck2 panicked and DICE may be responsible. Please be patient as we try to dump DICE graph to `"/tmp/buck2-dumps/dice-dump-be0d66ae-cad5-498b-a48d-47f4d04d9239"`
    DICE graph dumped to `"/tmp/buck2-dumps/dice-dump-be0d66ae-cad5-498b-a48d-47f4d04d9239"`. DICE dumps can take up a lot of disk space, you should delete the dump after reporting.
    thread 'buck2-rt' panicked at 'a file/dir in the repo must have a parent, but `toolchains//` had none', buck2_common/src/dice/
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    Build ID: 7b50d74a-2e87-49b4-95f5-9613a98aeee1
    Jobs completed: 0. Time elapsed: 0.0s.
    Command failed: Buck daemon event bus encountered an error
    Caused by:
        0: status: Unknown, message: "error reading a body from connection: broken pipe", details: [], metadata: MetadataMap { headers: {} }
        1: error reading a body from connection: broken pipe
        2: broken pipe

The corresponding DICE dump is here.
And the event-log is here.

The panic does not occur after a second run. Before the crash a buck2 build ... was run at the state of 8b4f7e0.

Support fallback of buck2.file_watcher to 'notify' if 'watchman' doesn't work?

As I noted in #57 (comment), I found a way to enable watchman support via the config option buck2.file_watcher. See this corresponding commit, in particular the highlighted lines: thoughtpolice/buck2-nix@e47fc8f#diff-d33e979799a45c7c51752e9c8d96a3e452015d1a40b1e4b6ec6a98e92c4d8430R92-R105 — I also recommend the commit message for a summary.

It's very handy. In short, I use direnv to automatically use systemd-run --user to launch a transient user service for watchman, then export an appropriate WATCHMAN_SOCK variable for the repository. Works great! Except...

I forgot to enable it in CI, which caused this failure:

[2022-12-22T15:45:13.628+00:00] 2022-12-22T15:45:13.626994Z  WARN buck2_server::file_watcher::watchman::core: Connecting to Watchman failed (will re-attempt): Error reconnecting to Watchman: While invoking the watchman CLI to discover the server connection details: reader error while deserializing, stderr=`2022-12-22T15:45:13,625: [watchman] while computing sockname: failed to create /usr/local/var/run/watchman/runner-state: No such file or directory
[2022-12-22T15:45:13.630+00:00] `
[2022-12-22T15:45:13.663+00:00] Build ID: 6d28aa
[2022-12-22T15:45:13.665+00:00] Command failed: 
[2022-12-22T15:45:13.665+00:00] SyncableQueryHandler returned an error
[2022-12-22T15:45:13.665+00:00] Caused by:
[2022-12-22T15:45:13.665+00:00]     0: No Watchman connection
[2022-12-22T15:45:13.665+00:00]     1: Error reconnecting to Watchman
[2022-12-22T15:45:13.665+00:00]     2: Error reconnecting to Watchman
[2022-12-22T15:45:13.665+00:00]     3: While invoking the watchman CLI to discover the server connection details: reader error while deserializing, stderr=`2022-12-22T15:45:13,633: [watchman] while computing sockname: failed to create /usr/local/var/run/watchman/runner-state: No such file or directory
[2022-12-22T15:45:13.665+00:00]        `

So there's two things going on here:

  • I'm using a watchman binary from the repository, the Ubuntu 22.04 one. But I didn't install it with dpkg; I installed it with Nix, since you can't rely on the user having it. A consequence of this is that some directories, such as /usr/local/var/run/watchman/, don't exist, which causes spurious failures, since watchman binaries implicitly have a STATEDIR set to /usr/local at build time. And if there is no WATCHMAN_SOCK variable set, the watchman_client Rust library will query the CLI. Therefore calls like watchman get-sock will fail, because they always probe a non-existent statedir, which is what causes part of the stack trace above. I can work around this in some way with Nix, and will file a possible bug report with upstream watchman about it; I just wanted to clarify this since it appears in the above error.
    • Note that the transient systemd unit sets an explicit path to statefile, logfile, and pidfile, which are the 3 required variables that watchman needs; if these are set, the implicit STATEDIR is not used.
    • This is why I set WATCHMAN_SOCK instead with my setup, because if it is set, the watchman_client library uses it above all else, and doesn't need to query the CLI binary. So it "just works"
  • The build fails outright if the file_watcher can't be configured.

Would it be possible to optionally fall back to buck2.file_watcher=notify if watchman_client detects and captures a failure like the above? This would at least allow users to continue developing in the repository without a strange error if they don't enable watchman, even if notify would pick up file changes that watchman would otherwise ignore. I'm trying to make my repository 'plug and play', and it feels bad if you cd $src; buck build ... and it immediately fails like this without watchman.
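The fallback policy I have in mind is roughly the following (illustrative Python, not Buck2 code; choose_file_watcher is a made-up name). It only falls back when watchman is clearly unavailable, i.e. no socket override and no CLI on PATH:

```python
import os
import shutil

def choose_file_watcher(configured="watchman", env=os.environ):
    """Pick a file watcher, falling back to 'notify' when watchman is
    clearly unavailable (no WATCHMAN_SOCK override and no CLI on PATH)."""
    if configured != "watchman":
        return configured
    if env.get("WATCHMAN_SOCK"):
        # An explicit socket is used above all else; no CLI probing needed.
        return "watchman"
    if shutil.which("watchman"):
        # The CLI exists, so let watchman_client discover the socket.
        return "watchman"
    return "notify"
```

The CI failure above would then degrade to notify with a warning, instead of aborting the whole build.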

Providing a hermetic C/C++ toolchain

A self-contained C/C++ toolchain that doesn't assume pre-installed components on the system ensures consistent build behavior across environments (CI or different developer machines). Traditionally, obtaining a self-contained C/C++ compiler distribution has been quite difficult. The closest commonly used one in the Bazel ecosystem is the LLVM toolchain created by Grail; however, it still assumes a global or separately provided sysroot. An easier-to-use alternative is zig cc, built on top of clang and already used for Bazel in bazel-zig-cc, which also has built-in support for cross-compilation.

Similar to #19

libomnibus changing weak symbol to undefined?

I'm building a cxx_python_extension, but when I try to import it I get an undefined symbol:

ImportError: ...buck-out/v2/gen/root/291b6c3a26d6a3e9/__hello__/hello#link-tree/ undefined symbol: _ITM_registerTMCloneTable

AFAICT this happens because libomnibus changes w to U. I think the nm command should probably exclude weak symbols (with --no-weak), but I don't understand whether this is by design or a bug?

example output from nm:

ls buck-out/v2/gen/root/291b6c3a26d6a3e9/__hello__/hello#link-tree/*.so | xargs -tn1 nm | rg _ITM_registerTMCloneTable
nm buck-out/v2/gen/root/291b6c3a26d6a3e9/__hello__/hello#link-tree/
                 w _ITM_registerTMCloneTable
nm buck-out/v2/gen/root/291b6c3a26d6a3e9/__hello__/hello#link-tree/
                 U _ITM_registerTMCloneTable
nm buck-out/v2/gen/root/291b6c3a26d6a3e9/__hello__/hello#link-tree/
                 w _ITM_registerTMCloneTable
nm buck-out/v2/gen/root/291b6c3a26d6a3e9/__hello__/hello#link-tree/
                 w _ITM_registerTMCloneTable

buck tracks files it shouldn't in a version control directory (sapling)

I'm using Sapling with buck. And if I do

buck build ...
buck build ...

The second buck build normally gives me something like:

File changed: root//src/hello/
File changed: root//.sl/runlog/3513vs2Gf9m4S2DA.lock
File changed: root//.sl/runlog/.tmpeqcGcc
939 additional file changes

because sl needs to talk with its daemon and do other things under the .sl/ directory.

It goes even further than that: if Sapling is in the middle of something like a rebase, it may make copies of .bzl files underneath .sl/, which then get picked up as part of the project's default cell. This is really annoying: I've had several sl rebase commands fail due to conflicts, and then buck build ... picks up temporary files kept as backups under .sl/. So if something like a TARGETS file gets copied, buck build ... will fail in some spectacular and unpredictable fashion.

As a short quick fix, it would be nice if whatever logic exists for .git and buck-out to be ignored could be extended to a few other directories, like .sl/ and .hg/

In the long run, it might be nice to have something like a buckignore file for more "exotic" cases like this.
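As a possible stopgap, Buck2 appears to honor a Buck1-style `[project] ignore` buckconfig listing globs to exclude (treat this as an assumption about current behavior rather than a confirmed fix), which could cover VCS state directories:

```ini
# .buckconfig (hypothetical fragment): exclude VCS state directories
[project]
  ignore = .sl, .hg
```

That still requires every project to opt in manually, which is why built-in defaults for .sl/ and .hg/ would be nicer.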

download_file cannot be used twice in the same rule because it cannot specify an identifier

The following example doesn't seem to work, though I think it should:


def __download_license_data_impl(ctx: "context") -> ["provider"]:
    base_url = lambda name: "{}/json/{}.json".format(ctx.attrs.revision, name)

    def dl_json_file(name, sha1):
        out = ctx.actions.declare_output("{}.json".format(name))
        ctx.actions.download_file(out, base_url(name), sha1 = sha1)
        return out

    if len(ctx.attrs.sha1) != 2:
        fail("sha1 must be a list of two strings")

    if "licenses:" != ctx.attrs.sha1[0][:9]:
        fail("first sha1 hash must start with 'licenses:'")
    if "exceptions:" != ctx.attrs.sha1[1][:11]:
        fail("second sha1 hash must start with 'exceptions:'")

    licenses_sha1 = ctx.attrs.sha1[0][9:]
    exceptions_sha1 = ctx.attrs.sha1[1][11:]

    licenses_out = dl_json_file("licenses", licenses_sha1)
    exceptions_out = dl_json_file("exceptions", exceptions_sha1)

    return [
        DefaultInfo(default_outputs = [ licenses_out, exceptions_out ]),
    ]

download_license_data = rule(
    impl = __download_license_data_impl,
    attrs = {
        "revision": attrs.string(),
        "sha1": attrs.list(attrs.string()),
    },
)

and BUILD:

load(":defs.bzl", "download_license_data")

    name = "spdx_license_data",
    revision = "v3.19",
    sha1 = [


When running analysis for `root//src/larry:spdx_license_data (prelude//platform:default-7b06d4530de034dc)`

Caused by:
    Analysis produced multiple actions with category `download_file` and no identifier. Add an identifier to these actions to disambiguate them

But download_file can't take an identifier!
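For context, the error describes a uniqueness rule: within one analysis, actions appear to be keyed by their (category, identifier) pair. This is a toy model of that constraint, not Buck2's implementation (register_actions is a made-up name):

```python
# Toy model: each action is keyed by (category, identifier); a duplicate
# key within one analysis is rejected, which is the error quoted above.
def register_actions(actions):
    seen = set()
    for category, identifier in actions:
        key = (category, identifier)
        if key in seen:
            if identifier is None:
                raise ValueError(
                    "multiple actions with category `%s` and no identifier" % category
                )
            raise ValueError(
                "duplicate identifier `%s` for category `%s`" % (identifier, category)
            )
        seen.add(key)
    return len(seen)
```

So two download_file actions in one rule need distinct identifiers, but since download_file doesn't accept one, there's currently no way to satisfy the rule.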

Print output path after building a target

I'm playing around with Buck2 (I'm a long-time, advanced Bazel user). One thing I immediately missed from Bazel is the ability to easily click the output path in my terminal and open the generated file in my editor. Buck2 doesn't currently print anything to help you find the built file, and I haven't found any flags to enable that either.

Parameterized load() statements?

I have:

load("@bxl//hello.bxl", _hello_main = "main", _hello_args = "args")
hello = bxl(impl = _hello_main, cli_args = _hello_args)

load("@bxl//licenses.bxl", _licenses_main = "main", _licenses_args = "args")
license_check = bxl(impl = _licenses_main, cli_args = _licenses_args)

# ad infinitum...

I want:

files = {
    "hello": "@bxl//hello.bxl",
    # ...
}

for (name, path) in files.items():
    load(path, _main = "main", _args = "args")
    load_symbols({ name: bxl(impl = _main, cli_args = _args) })

But this currently doesn't work. I suspect there are pretty good reasons for this. But would something like this ever be achievable?

Part of the issue, I think, is just that the API for load is awkward here, because it's supposed to introduce symbols into the module context, not the local scope. But I really just want locally bound names here. So having something like e = load_single_expr(path, name) would be nice. But again, I assume there are reasons for this.

Toolchain for binutils sourcing

We currently have no mechanism for sourcing binutils such as ld, ar, nm, etc., which are commonly used when building cxx projects.

Protoc failed: --experimental_allow_proto3_optional was not set

On Ubuntu 22.04.1 LTS with protobuf-compiler installed via apt, protoc --version is libprotoc 3.12.4 and I get the following errors from cargo check:

error: failed to run custom build command for `buck2_test_proto v0.1.0 (buck2/buck2_test_proto)`
Caused by:
  process didn't exit successfully: `buck2/target/debug/build/buck2_test_proto-d224bbb82df65000/build-script-build` (exit status: 1)
  --- stdout
  --- stderr
  Error: Custom { kind: Other, error: "protoc failed: test.proto: This file contains proto3 optional fields, but --experimental_allow_proto3_optional was not set.\n" }

error: failed to run custom build command for `buck2_data v0.1.0 (buck2/buck2_data)`
Caused by:
  process didn't exit successfully: `buck2/target/debug/build/buck2_data-db49426b304377a7/build-script-build` (exit status: 101)
  --- stdout
  --- stderr
  thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Custom { kind: Other, error: "protoc failed: data.proto: This file contains proto3 optional fields, but --experimental_allow_proto3_optional was not set.\n" }', buck2_data/
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I found that this patch makes the build succeed:

diff --git a/buck2_data/ b/buck2_data/
index 08790b9a..acbc7c4e 100644
--- a/buck2_data/
+++ b/buck2_data/
@@ -16,6 +16,7 @@ fn main() -> io::Result<()> {
+        .protoc_arg("--experimental_allow_proto3_optional")
diff --git a/buck2_test_proto/ b/buck2_test_proto/
index 69f5b6fc..5401c498 100644
--- a/buck2_test_proto/
+++ b/buck2_test_proto/
@@ -6,7 +6,9 @@ fn main() -> io::Result<()> {
     // Tonic build uses PROTOC to determine the protoc path.
-    tonic_build::configure().compile(proto_files, &["."])?;
+    tonic_build::configure()
+        .protoc_arg("--experimental_allow_proto3_optional")
+        .compile(proto_files, &["."])?;
     // Tell Cargo that if the given file changes, to rerun this build script.
     for proto_file in proto_files {

feature request: pre-built binaries

Hey folks! Now that buck2 has been announced, do you plan on making pre-built binaries available for any platforms, or is it still too early for that? I'd really like to use buck2 at work, but asking everyone to install a Rust toolchain just to build the build tool may be a bit much still.

Overriding a rule name (buck.type) to match its public, exported name?

Consider an API like the following for exposing rules to users:

__private_name = rule(...)

public_api = struct(
    rule01 = __private_name,
)

The goal of an API like this is just to be easier to read and write: you don't need to know the public symbols coming out of a module, and when applied consistently it makes things a little easier to find. It's nothing groundbreaking, just me exploring the API design space.

So now when I write BUILD files I use public_api.rule01 to declare targets. Great.

But this falls down when querying target nodes. The buck.type of a rule comes from the name bound to the rule at the global top level, not from what's exported from a module (which would be hard to understand or make sense of, in any case). In other words, the following query fails:

buck cquery 'kind("public_api.rule01", ...)'

This works:

buck cquery 'kind("__private_name", ...)'

Which exposes the internal name from the module. This isn't necessarily specific to struct()-like APIs; a simple case like public_name = __private_name suffers from the same issue too, I think. The internal name leaks through to the public user.

You can verify this with a cquery; in the following example the exported name of the rule for users is rust.binary, not __rust_binary:

austin@GANON:~/src/$ buck cquery src/hello: --output-all-attributes
Build ID: ab0ce60d-bea1-458b-b886-5c02c610306d
Jobs completed: 1. Time elapsed: 0.0s.
  "root//src/hello:main (prelude//platform:default-632fe5438d4aecc1)": {
    "buck.type": "__rust_binary",
    "buck.deps": [
      "prelude//toolchains/rust:rust-stable (prelude//platform:default-632fe5438d4aecc1)"
    "buck.package": "root//src/hello:TARGETS",
    "buck.oncall": null,
    "buck.target_configuration": "prelude//platform:default-632fe5438d4aecc1",
    "buck.execution_platform": "prelude//platform:default",
    "name": "main",
    "default_target_platform": null,
    "target_compatible_with": [],
    "compatible_with": [],
    "exec_compatible_with": [],
    "visibility": [],
    "tests": [],
    "_toolchain": "prelude//toolchains/rust:rust-stable (prelude//platform:default-632fe5438d4aecc1)",
    "file": "root//src/hello/",
    "out": "hello"

So the question is: would it be possible to allow a rule to specify its buck.type name in some manner, so that queries and rule uses can be consistent? Perhaps just another parameter to rule() that must be a constant string, if that could work?

It's worth noting this is mostly a convenience so that a rule user isn't confused by the leaked name. I can keep the nice struct()-like API and have the __private_names follow a similar naming logic so that "translating" between the two isn't hard. Not that big a deal.

I can imagine this might be too deeply woven into the implementation to easily fix.

buck2 Documentation

This is a general tracking issue for patching holes in the buck2 documentation regarding background information, general use, rule writing, etc. Some high-level goals:

  • API docs for the prelude
    • providers
    • rules
  • buck2 for bazel users
  • glossary
  • quick start with and without the prelude
  • developer environment setup w/ extensions
  • toolchains and toolchain rules

Permission denied on Windows

 examples  cd .\hello_world\
 hello_world  buck2 init --git
Initialized empty Git repository in C:/Users/SteveFan/git/
Cloning into 'C:/Users/SteveFan/git/'...
remote: Enumerating objects: 5647, done.
remote: Counting objects: 100% (1679/1679), done.
remote: Compressing objects: 100% (607/607), done.
remote: Total 5647 (delta 1068), reused 1672 (delta 1062), pack-reused 3968
Receiving objects: 100% (5647/5647), 2.17 MiB | 5.74 MiB/s, done.
Resolving deltas: 100% (3699/3699), done.
warning: in the working copy of '.gitmodules', LF will be replaced by CRLF the next time Git touches it
 hello_world  buck2 build
Build ID: 74cc4593-2b17-499a-8c75-20ee1c8fdf9e
Jobs completed: 2. Time elapsed: 0.0s.
 hello_world  buck2 build //...
File changed: root//.git/fsmonitor--daemon/cookies
File changed: root//.git/fsmonitor--daemon/cookies/21900-3
File changed: root//.git/modules/prelude/fsmonitor--daemon/cookies
1 additional file change events
Action failed: root//:print (symlinked_dir buck-headers)
Internal error: symlink(original=../../../../../../../library.hpp, link=C:\Users\SteveFan\git\\facebookincubator\buck2\examples\hello_world\buck-out\v2\gen\root\fb50fd37ce946800\__print__\buck-headers\library.hpp): A required privilege is not held by the client. (os error 1314)
Build ID: 228f67cb-a339-403a-8d31-fd2f8f28233a
Jobs completed: 50. Time elapsed: 0.4s.
Failed to build 'root//:print (prelude//platforms:default#fb50fd37ce946800)'

Refer to external dependencies?

I'm trying to import

    name = "spdlog",
    header_dirs = ["/nix/store/yjqxa9782hpl59cg17h5ma4d4l0zh0ac-spdlog-1.10.0-dev/include/"],
    exported_headers = glob(["**/*.h"]),
    header_only = True,
    visibility = ["PUBLIC"],

I get `Error when treated as a path: expected a relative path but got an absolute path instead`.

I also tried using a relative path like ../../../nix/store/..., which worked in buck1 IIRC, but I get:

  Error when treated as a target: Invalid absolute target pattern `../../../../nix/store/yjqxa9782hpl59cg17h5ma4d4l0zh0ac-spdlog-1.10.0-dev/include/` is not allowed: Expected a `:`, a trailing `/...` or the literal `...`.
  Error when treated as a path: expected a normalized path but got an un-normalized path instead: `../../../../nix/store/yjqxa9782hpl59cg17h5ma4d4l0zh0ac-

Is there a recommended way to include external dependencies?

UI request: a shorthand for 'buck bxl' invocations (or: arbitrary 'buck foobar' commands)

Here's a nifty one-liner I put behind the name bxl:

exec buck2 bxl "bxl//top.bxl:$1" -- "${@:2}"

This requires you to:

  • set bxl = path/to/some/dir in the [repositories] stanza of your root .buckconfig file, and then
  • export a bunch of top level bxl() definitions from bxl//top.bxl which can be invoked

Then you can just run the command bxl hello --some_arg 123 in order to run an exported bxl() definition named hello. Pretty nice! And because the command invocation uses the bxl// cell to locate top.bxl, the bxl command can work in any repository that defines that cell, and the files can be located anywhere.

So my question is: could something like this be supported in a better way? The reason is pretty simple: it's just a lot easier to type and remember!

Perhaps the best prior art here I can think of is git, which allows git foobar to invoke the separate git-foobar binary, assuming it's in $PATH and has an executable bit set. We don't need to copy this exactly 1-to-1, but it's a generalized solution to this problem, and in fact it's arguable that being able to do a buck2 foobar subcommand is useful for the same reasons. So maybe that's a better place to start, and the bxl scripts could be a more specialized case of this.
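To make the git comparison concrete, here is a sketch of what that dispatch could look like (hypothetical; buck2 has no such mechanism today, and `buck2-hello` is an invented executable name):

```python
import shutil

def resolve_subcommand(argv, which=shutil.which):
    """Map 'buck2 foo ARGS...' to an external 'buck2-foo' executable
    found on $PATH, git-style. Returns the argv to exec."""
    exe = which("buck2-" + argv[0])
    if exe is None:
        raise SystemExit("buck2: '{}' is not a buck2 command".format(argv[0]))
    return [exe] + argv[1:]

# With a fake PATH lookup, 'buck2 hello --some_arg 123' resolves to:
fake_which = lambda name: "/usr/local/bin/" + name if name == "buck2-hello" else None
print(resolve_subcommand(["hello", "--some_arg", "123"], which=fake_which))
# -> ['/usr/local/bin/buck2-hello', '--some_arg', '123']
```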

The document containing the list of use cases for dynamic outputs is not readable

There is a document that apparently contains "a full list of worked out use cases" for dynamic dependencies:

linked from this page:

The document seems to be a Google Doc, but it's not readable by me (neither with my corporate Google account, nor my personal one, nor in an incognito window).

Building Rust on Apple Silicon w/ buck2

I was playing around with buck2, trying to build the prelude workspace in the repo's examples folder, and ran into issues. All of the other toolchains seem to be working except the Rust one. For context, I'm running buck2 on an M1 Mac, using Homebrew for package management, if that helps. I was wondering if anyone has ideas on resolving this. Relevant error messages are provided below for reference.

buck2 build ...
File changed: root//.git/objects/maintenance.lock
File changed: root//.git/modules/prelude/FETCH_HEAD
File changed: root//.git/modules/prelude/objects/maintenance.lock
Action failed: root//rust:main (rustc bin-pic-static_pic-link/main-link bin,pic,link [diag])
Local command returned non-zero exit code 1
Reproduce locally: /usr/bin/env "PYTHONPATH=buck-out/v2/gen/prelude/213ed1b7ab869379/rust/tools/__rustc_action__/__rust ...<omitted>... .py @buck-out/v2/gen/root/213ed1b7ab869379/rust/__main__/bin-pic-static_pic-link/main-link-diag.args (run buck2 log what-failed to get the full command)
arch: posix_spawnp: rustc: Bad CPU type in executable

Build ID: dc3e3c7d-a1d0-42cd-8399-392a63445c30
Jobs completed: 9. Time elapsed: 0.3s. Cache hits: 0%. Commands: 1 (cached: 0, remote: 0, local: 1)
Failed to build 'root//rust:main (prelude//platforms:default#213ed1b7ab869379)'

Output from log what-failed

buck2 log what-failed
Showing commands from: buck2 build ...
build root//rust:main (prelude//platforms:default#213ed1b7ab869379) (rustc bin-pic-static_pic-link/main-link bin,pic,link [diag]) local env -- "TMPDIR=~/buck2/examples/prelude/buck-out/v2/tmp/root/213ed1b7ab869379/rust/main/rustc/_buck_130845f0f81d454f" "BUCK2_DAEMON_UUID=2833dbcc-04d4-4384-a79c-6a57c3c7866d" /usr/bin/env "PYTHONPATH=buck-out/v2/gen/prelude/213ed1b7ab869379/rust/tools/rustc_action/rustc_action" python3 buck-out/v2/gen/prelude/213ed1b7ab869379/rust/tools/rustc_action/ @buck-out/v2/gen/root/213ed1b7ab869379/rust/main/bin-pic-static_pic-link/main-link-diag.args

Not sure if I need to change some sort of configuration for rustc in the context of buck2 or what a possible solution would be. Appreciate any ideas, thanks.

Failed to build on Windows

   Compiling buck2 v0.1.0 (C:\Users\steve\scoop\persist\rustup\.cargo\git\checkouts\buck2-881d6af740402932\9178ac5\app\buck2)
warning: error finalizing incremental compilation session directory `\\?\C:\Users\steve\AppData\Local\Temp\cargo-installeWlLuT\release\incremental\buck2_build_api-3bpejjqwe8ssu\s-gjx1f497xi-18v2t4g-working`: The system cannot find the file specified. (os error 2)

error: internal compiler error: no errors encountered even though `delay_span_bug` issued

error: internal compiler error: broken MIR in Item(WithOptConstParam { did: DefId(0:7299 ~ buck2_build_api[daa8]::artifact_groups::calculation::_assert_ensure_artifact_group_future_size::{closure#0}), const_param_did: None }) (after phase change to runtime-optimized) at bb2[1]:
                                Cannot transmute to non-`Sized` type impl futures::Future<Output = std::result::Result<artifact_groups::calculation::EnsureArtifactGroupReady, anyhow::Error>> + '_
   --> app\buck2_build_api\src\artifact_groups\
238 |     static_assertions::assert_eq_size_ptr!(&v, &e);
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    = note: delayed at    0: std::backtrace::Backtrace::disabled
               1: std::backtrace::Backtrace::force_capture
               2: <rustc_errors::HandlerInner>::emit_diagnostic
               3: <rustc_const_eval::transform::check_consts::qualifs::CustomEq as rustc_const_eval::transform::check_consts::qualifs::Qualif>::in_qualifs
               4: rustc_const_eval::transform::promote_consts::is_const_fn_in_array_repeat_expression
               5: <rustc_const_eval::transform::validate::Validator as rustc_middle::mir::MirPass>::run_pass
               6: <rustc_mir_transform::large_enums::EnumSizeOpt as rustc_middle::mir::MirPass>::is_enabled
               7: <rustc_mir_transform::remove_noop_landing_pads::RemoveNoopLandingPads as rustc_middle::mir::MirPass>::run_pass
               8: <&rustc_index::vec::IndexVec<rustc_middle::mir::Promoted, rustc_middle::mir::Body> as rustc_serialize::serialize::Decodable<rustc_query_impl::on_disk_cache::CacheDecoder>>::decode
               9: <rustc_span::def_id::DefId as rustc_serialize::serialize::Encodable<rustc_query_impl::on_disk_cache::CacheEncoder>>::encode
              10: <rustc_query_impl::Queries as rustc_middle::ty::query::QueryEngine>::as_any
              11: <rustc_metadata::creader::CStore>::from_tcx
              12: rustc_metadata::rmeta::encoder::encode_metadata
              13: rustc_metadata::rmeta::decoder::cstore_impl::provide_extern
              14: rustc_metadata::rmeta::encoder::encode_metadata
              15: rustc_metadata::fs::encode_and_write_metadata
              16: rustc_interface::passes::start_codegen
              17: rustc_interface::proc_macro_decls::provide
              18: <rustc_interface::queries::Queries>::ongoing_codegen
              19: <rustc_middle::ty::SymbolName as core::fmt::Display>::fmt
              20: rustc_driver_impl::args::arg_expand_all
              21: rustc_driver_impl::main
              22: rustc_driver_impl::args::arg_expand_all
              23: rustc_driver_impl::args::arg_expand_all
              24: std::sys::windows::thread::Thread::new
              25: BaseThreadInitThunk
              26: RtlUserThreadStart

    = note: this error: internal compiler error originates in the macro `static_assertions::assert_eq_size_ptr` (in Nightly builds, run with -Z macro-backtrace for more info)

note: we would appreciate a bug report:

note: rustc 1.70.0-nightly (23ee2af2f 2023-04-07) running on x86_64-pc-windows-msvc

note: compiler flags: --crate-type lib -C opt-level=3 -C panic=abort -C embed-bitcode=no -C incremental=[REDACTED]

note: some of the compiler flags provided by cargo are hidden

query stack during panic:
end of query stack
warning: `buck2_build_api` (lib) generated 1 warning
error: could not compile `buck2_build_api` (lib); 1 warning emitted
error: failed to compile `buck2 v0.1.0 (`, intermediate artifacts can be found at `C:\Users\steve\AppData\Local\Temp\cargo-installeWlLuT`

Strip release executable?

Hi everyone- I'm starting to play around with buck2 and it looks very interesting!

I noticed that the release build (cargo build -r) clocks in at 75MB on my M1 MBP. But, strip knocks a solid 23MB off of it! I'm guessing that the buck2 executable isn't meant to be dynamically linked against, so maybe it's safe to add a release post-build step that runs strip?
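If stripping does prove safe, newer Cargo can also do it as part of the build itself: since Rust 1.59, the release profile supports a `strip` key, which would avoid a separate post-build step.

```toml
# Cargo.toml (Cargo 1.59+): strip symbols from release binaries
[profile.release]
strip = true        # or "debuginfo" to drop debug info but keep symbols
```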


Might be a free win?

Additionally, though, have you done any size auditing? 52MB is still pretty massive for a natively-compiled executable. I wonder if there are more wins lurking in the map file...

Providing a hermetic python toolchain

For serious use-cases we would benefit from a system-independent python toolchain that can source an interpreter. At the basic level this involves downloading a copy of CPython for the current platform that can run code. An issue with this is that basic cpython depends on dynamic libraries such as libssl and libsqlite, so we need to either provide a mechanism for building those consistently (c / cpp compiler) or use an interpreter with static linking such as

Potential learnings from bazel

  • dependencies are managed in a repo rule, meaning they don't use the same toolchain that bazel itself uses, but the local toolchain instead
  • cross compilation is tough (broken)

It should be possible to override protoc-bin-vendored

As of #60, buck2 no longer depends on an external protoc binary, but uses a binary vendored from @stepancheg's rust-protoc-bin-vendored.

While reducing friction is a noble goal I support fully, this had an unintended side effect: it breaks the ability to build buck using tools like Nix, without tiny workarounds

Problem: dynamically linked binaries require /usr/lib

The basic problem is that when you build something with Nix, paths like /usr are not available during the build process. Every dependency must be declared specifically, and all dependencies are located within /nix/store. This includes, in fact, pthreads, and everything else.

austin@GANON:~/src/rust-protoc-bin-vendored$ ldd protoc-bin-vendored-linux-x86_64/bin/protoc
        (0x00007ffcf97b1000)
        => /lib/x86_64-linux-gnu/ (0x00007f53ce17d000)
        => /lib/x86_64-linux-gnu/ (0x00007f53ce096000)
        => /lib/x86_64-linux-gnu/ (0x00007f53cde6e000)
        /lib64/ (0x00007f53ce18d000)

This means that this ELF binary requires /lib64/ to be available, but it won't be available during the build phase. Therefore, during the buck2 build process, attempting to invoke protoc will fail, because the binary cannot have its dependencies satisfied by the system's dynamic linker. Here's what the Nix-ified output looks like:

a@link:~/src/rust-protoc-bin-vendored/ > ldd protoc-bin-vendored-linux-x86_64/bin/protoc
        (0x00007ffeaf4ab000)
        => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/ (0x00007f04dc75f000)
        => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/ (0x00007f04dc67f000)
        => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/ (0x00007f04dc400000)
        /lib64/ => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib64/ (0x00007f04dc766000)

Can't we fix the binaries?

There is a fun tool called patchelf that could fix this for us; it is designed to handle this exact problem for Nix machines, and the binaries would work if we could just patch their paths. However, it would require us to inject patchelf into the cargo build process for buck, so it's not exactly trivial, and it looked quite painful for me to pull off reliably.

The binaries could also be statically linked, but I don't know how much of a PITA that is.

Quick fix patch

The easiest fix I found was just to set PROTOC and PROTOC_INCLUDE manually in the build environment, pointing them at my working copies of the compiler and its include files, and then comment out the code that uses protoc-bin-vendored.

The following patch was sufficient for me to keep working as if nothing happened:

commit e92d67e4568c5fa6bcfbbd7ee6e16b9f132114a9
Author: Austin Seipp <[email protected]>
Date:   Mon Jan 2 00:46:31 2023 -0600

    hack(protoc): do not use protoc-bin-vendored
    This package doesn't work under the Nix sandbox when using Cargo,
    because the vendored binaries can't easily be patchelf'd to look at the
    correct libc paths.
    Instead, just rely on PROTOC and PROTOC_INCLUDE being set manually.
    Signed-off-by: Austin Seipp <[email protected]>

diff --git a/app/buck2_protoc_dev/src/ b/app/buck2_protoc_dev/src/
index 69980b9a..fecdde46 100644
--- a/app/buck2_protoc_dev/src/
+++ b/app/buck2_protoc_dev/src/
@@ -80,8 +80,8 @@ impl Builder {
     pub fn setup_protoc(self) -> Self {
         // It would be great if there were on the config rather than an env variables...
-        maybe_set_protoc();
-        maybe_set_protoc_include();
+        //maybe_set_protoc();
+        //maybe_set_protoc_include();
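A gentler variant of the same idea, sketched in Python rather than the actual Rust build script (`resolve_protoc` and the paths are invented for illustration): prefer a user-supplied $PROTOC, and fall back to the vendored binary only when it is unset, so vendoring stays the default.

```python
import os

def resolve_protoc(vendored_path):
    """Prefer a user-supplied $PROTOC (e.g. from a Nix shell);
    otherwise fall back to the vendored binary path."""
    return os.environ.get("PROTOC", vendored_path)

os.environ.pop("PROTOC", None)
print(resolve_protoc("/vendored/protoc"))    # -> /vendored/protoc

os.environ["PROTOC"] = "/nix/profile/bin/protoc"
print(resolve_protoc("/vendored/protoc"))    # -> /nix/profile/bin/protoc
```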

This is small enough that I'm happy to carry it for now, but a solution that supports both would be nice, with vendoring as the default.

website doesn't redirect HTTP to HTTPS

The website doesn't redirect HTTP requests to HTTPS even though there is a valid HTTPS endpoint (valid TLS certificate).

$ curl -vvv
*   Trying
* Connected to ( port 80 (#0)
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.86.0
> Accept: */*
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Sun, 12 Feb 2023 12:41:29 GMT
< Content-Type: text/html; charset=utf-8

It can be fixed by enforcing HTTPS; see "Enforcing HTTPS for your GitHub Pages site" in the GitHub Pages documentation.

where can I ask questions / get help?

Hey! I have a bunch of beginner-level questions. What's an appropriate place to ask them? I don't want to clutter this repo with a bunch of small questions whose answer is probably "that works like Buck 1" (which I haven't used 😅)

Allow release version of Rust Analyzer to work on the code

It's not currently possible to use Rust Analyzer at the root of the repo. This holds back open-source collaboration.

This is because Rust Analyzer requires the code to build with a single version of the toolchain, and by default that's the stable version. This repo isn't built with the stable toolchain.

It's possible via settings to get Rust Analyzer to use a particular nightly toolchain version, but even that is not sufficient, as some crates in use themselves specify a different nightly toolchain version from the main one.

Suggested fix: converge the code on the stable version of the toolchain, or, failing that, on a particular nightly version, so that Rust Analyzer can work.

Cargo install's --git flag: "member of the wrong workspace"

Normally cargo install is able to install binaries from a subdirectory of a git repo. It will find all workspace roots, resolve the requested package name in each one, fail if there is an ambiguity, and otherwise install it.

In buck2's case it seems to go wrong because two different workspaces both claim to contain some of the same crates.

$ cargo install cli --git --bin buck2
    Updating git repository ``
error: package `/home/david/.cargo/git/checkouts/buck2-4c0ac0340bde8e6a/c117530/allocative/allocative/Cargo.toml` is a member of the wrong workspace
expected: /home/david/.cargo/git/checkouts/buck2-4c0ac0340bde8e6a/c117530/Cargo.toml
actual:   /home/david/.cargo/git/checkouts/buck2-4c0ac0340bde8e6a/c117530/allocative/Cargo.toml

It's hard for me to tell whether this arrangement is intentional, or just an oversight. In any case, it leads to issues.

Multiple build configurations?

I want to try buck2 in a C++ project, and I want to be able to:

  • use different compilers (gcc/clang)
  • use different build configurations (release/debug/sanitizers)

What's the suggested way to implement this? Multiple toolchains? How does cxx_library or cxx_binary know which toolchain to use?
What would the command line look like? Is something like buck2 test //... --debug --asan and buck2 run //:main --release --gcc possible?

I found this page about configurations, and it seems like one should be able to select() things based on config values. Do I write a single toolchain that is parameterized by config values instead? buck2 build -c cpp.compiler=gcc -c cpp.debug=true //...?

I also found this RFC, but it doesn't specify how to structure the project in order to make buck2 build //foo:bar@release+gcc possible.

Can't build via documented `cargo` commands

With Ubuntu 20.04.5 LTS and rustup, cargo, protoc installed and on path, building via cargo with the supplied command:

cargo build --bin=buck2 --release

eventually gives error text of:

error: failed to run custom build command for `buck2_data v0.1.0 (/home/philip/work/buck2/buck2_data)`

Caused by:
  process didn't exit successfully: `/home/philip/work/buck2/target/release/build/buck2_data-aee8f8c0a5c9a583/build-script-build` (exit status: 1)
  --- stdout

  --- stderr
  INFO: Not setting $PROTOC to "../../../third-party/protobuf/dotslash/protoc", as path does not exist
  INFO: Not setting $PROTOC_INCLUDE to "../../../third-party/protobuf/protobuf/src", as path does not exist
  Error: Custom { kind: Other, error: "protoc failed: Unknown flag: --experimental_allow_proto3_optional\n" }
warning: build failed, waiting for other jobs to finish...

Feature request: buck2 vscode extension

(This is totally not meant as a "do this today" thing, but a placeholder here for future possible work.)

It would be nice for there to be a Buck2 vscode extension. Syntax highlighting for Buck2-flavored Starlark, support for running buck2 commands later on, maybe some nice auto-import, that kind of stuff.

The official Bazel VSCode extension is similar enough to actually cause interesting problems. Namely, right now, popping open a Buck2 Starlark .bzl file triggers the loading of the Bazel extension, which in turn emits errors about missing WORKSPACE files and other things that are irrelevant in Buck2-land.

It'd be nice to get Starlark syntax highlighting, at least, without the noise, even if we don't get all the rest.

Self-hosting open-source buck2

AFAIK the open source version of buck2 can currently only be built with cargo.
Making it self-hosting, i.e. being able to build the open source buck2 with itself, could be a good milestone on the way to making it useful in open source use-cases.
This came up in discussion with @arlyon on how to build the open source version of buck2.

Test Execution docs page links to Meta intranet page

Hi folks, congrats on the launch. I use Buck on an iOS project and was researching how Buck2 could be used in similar ways. While researching how Buck2 handles xctool for test running (and finding that it does not), I noticed that the Test Execution documentation page links to a tool called Tpx, and that link points at a Meta-internal site. Will you be making more information about this tool available soon, or open sourcing it?
