bazeldnf's Introduction

bazeldnf

A Bazel library for managing the whole RPM dependency lifecycle with nothing but pure Go rules and a static Go binary.

Bazel rules

rpm rule

The rpm rule represents a pure RPM dependency. The RPM itself is not processed in any way. Such dependencies can be added to your WORKSPACE file like this:

load("@bazeldnf//:deps.bzl", "rpm")

rpm(
    name = "libvirt-devel-6.1.0-2.fc32.x86_64.rpm",
    sha256 = "2ebb715341b57a74759aff415e0ff53df528c49abaa7ba5b794b4047461fa8d6",
    urls = [
        "https://download-ib01.fedoraproject.org/pub/fedora/linux/releases/32/Everything/x86_64/os/Packages/l/libvirt-devel-6.1.0-2.fc32.x86_64.rpm",
        "https://storage.googleapis.com/builddeps/2ebb715341b57a74759aff415e0ff53df528c49abaa7ba5b794b4047461fa8d6",
    ],
)

rpmtree

rpmtree takes a list of rpm dependencies and merges them into a single tar archive. rpmtree rules can be added to your BUILD files like this:

load("@bazeldnf//:deps.bzl", "rpmtree")

rpmtree(
    name = "rpmarchive",
    rpms = [
        "@libvirt-libs-6.1.0-2.fc32.x86_64.rpm//rpm",
        "@libvirt-devel-6.1.0-2.fc32.x86_64.rpm//rpm",
    ],
)

Since rpmarchive is just a tar archive, it can be put into a container immediately:

container_layer(
    name = "gcloud-layer",
    tars = [
        ":rpmarchive",
    ],
)

rpmtree also allows injecting relative symlinks (pkg_tar can only inject absolute symlinks) and xattr capabilities. The following example adds a relative symlink and grants one binary the cap_net_bind_service capability so it can bind to privileged ports:

rpmtree(
    name = "rpmarchive",
    rpms = [
        "@libvirt-libs-6.1.0-2.fc32.x86_64.rpm//rpm",
        "@libvirt-devel-6.1.0-2.fc32.x86_64.rpm//rpm",
    ],
    symlinks = {
        "/var/run": "../run",
    },
    capabilities = {
        "/usr/libexec/qemu-kvm": [
            "cap_net_bind_service",
        ],
    },
)

Running bazeldnf with Bazel

The bazeldnf repository needs to be added to your WORKSPACE:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "bazeldnf",
    sha256 = "fb24d80ad9edad0f7bd3000e8cffcfbba89cc07e495c47a7d3b1f803bd527a40",
    urls = [
        "https://github.com/rmohr/bazeldnf/releases/download/v0.5.9/bazeldnf-v0.5.9.tar.gz",
    ],
)

load("@bazeldnf//:deps.bzl", "bazeldnf_dependencies")

bazeldnf_dependencies()

Define the bazeldnf executable rule in your BUILD.bazel file:

load("@bazeldnf//:def.bzl", "bazeldnf")

bazeldnf(name = "bazeldnf")

After adding this code, you can run bazeldnf with Bazel:

bazel run //:bazeldnf -- --help

Libraries and Headers

One important use case is exposing headers and libraries inside the RPMs to build targets in Bazel. If we blindly exposed all libraries to build targets, Bazel would try to link every one of them into our binary, which would obviously not work. Therefore we need a mediator between cc_library and rpmtree: the tar2files target. It extracts a subset of libraries and headers and provides them to cc_library targets.

An example:

load("@bazeldnf//:deps.bzl", "rpm", "rpmtree", "tar2files")

tar2files(
    name = "libvirt-libs",
    files = {
        "/usr/include/libvirt": [
            "libvirt-admin.h",
            "libvirt-common.h",
            "libvirt-domain-checkpoint.h",
            "libvirt-domain-snapshot.h",
            "libvirt-domain.h",
            "libvirt-event.h",
        ],
        "/usr/lib64": [
            "libacl.so.1",
            "libacl.so.1.1.2253",
            "libattr.so.1",
        ],
    },
    tar = ":libvirt-devel",
    visibility = ["//visibility:public"],
)

The tar attribute accepts any target that produces a tar archive. Conveniently, that is what rpmtree creates as its default output, so any rpmtree can be used here. The files attribute then lists, per directory, the files one wants to expose to cc_library:

cc_library(
    name = "rpmlibs",
    srcs = [
        ":libvirt-libs/usr/lib64",
    ],
    hdrs = [
        ":libvirt-libs/usr/include/libvirt",
    ],
    strip_include_prefix = "/libvirt-libs/",
    include_prefix = "libvirt",
)
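
A consumer can then depend on this mediator like on any other cc_library; here is a minimal sketch with a hypothetical cc_binary (the target name and main.c are purely illustrative):

cc_binary(
    name = "virt-example",  # hypothetical target
    srcs = ["main.c"],
    deps = [":rpmlibs"],  # provides the extracted libvirt headers and libraries
)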

At this point, source code linking against these libraries can be compiled, but unit tests would only work if we manually listed every transitive library, which would be tedious and error prone. However, bazeldnf can introspect the shared libraries for you and generate tar2files rules based on a provided set of libraries.

First define a target like this:

load("@bazeldnf//:deps.bzl", "rpm", "rpmtree", "tar2files")
load("@bazeldnf//:def.bzl", "bazeldnf")

bazeldnf(
    name = "ldd",
    command = "ldd",
    libs = [
        "/usr/lib64/libvirt-lxc.so.0",
        "/usr/lib64/libvirt-qemu.so.0",
        "/usr/lib64/libvirt.so.0",
    ],
    rpmtree = ":libvirt-devel",
    rulename = "libvirt-libs",
)

rulename contains the name of the tar2files target, rpmtree references a given rpmtree, and libs lists the libraries one wants to link against. When the target is then executed like this:

bazel run //:ldd

the tar2files target is updated with all transitive library dependencies of the specified libraries. In addition, all header directories are updated as well for convenience.

Dependency resolution

One key part of managing RPM dependencies and RPM repository updates via Bazel is the ability to resolve RPM dependencies from repositories without external tools like dnf or yum, and to write the resolved dependencies to your WORKSPACE.

Here is an example of how to add libvirt and bash to your WORKSPACE and BUILD files.

First, write the repo.yaml file, which contains some basic RPM repositories to query:

bazeldnf init --fc 32 # write a repo.yaml file containing the usual release and update repos for fc32

Then write an rpmtree rule called libvirttree to your BUILD file, along with all corresponding RPM dependencies for libvirt in your WORKSPACE:

bazeldnf rpmtree --workspace /my/WORKSPACE --buildfile /my/BUILD.bazel --name libvirttree libvirt

Do the same for bash with a bashtree target:

bazeldnf rpmtree --workspace /my/WORKSPACE --buildfile /my/BUILD.bazel --name bashtree bash

Finally prune all unreferenced old RPM files:

bazeldnf prune --workspace /my/WORKSPACE --buildfile /my/BUILD.bazel

By default, bazeldnf rpmtree tries to find a solution that contains only the newest packages from all involved repositories. The only exception is pinned versions themselves. If pinned versions require other outdated packages, the --nobest option can be supplied. With this option all package versions are considered: the newest packages get the highest weight, but the solver may not always be able to choose them, and older packages may be pulled in instead.
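
For example, the libvirttree rule from above could be re-resolved with --nobest like this (same file locations as before):

bazeldnf rpmtree --workspace /my/WORKSPACE --buildfile /my/BUILD.bazel --nobest --name libvirttree libvirt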

Dependency resolution limitations

Missing features
  • Resolving requires entries that contain boolean logic, such as (gcc if something)
Deliberately not supported

The goal is to build minimal containers with RPMs based on scratch containers. Therefore the following RPM repository hints will be ignored:

  • recommends
  • supplements
  • suggests
  • enhances

bazeldnf's People

Contributors

andreabolognani, boleynsu, jean-edouard, jschintag, leoluk, malt3, manuelnaranjo, maya-r, pomodorox, riptl, rmohr, wade-arista


bazeldnf's Issues

Support local GPG keys

The gpgkey field in repo.yaml currently does not support pointing to a local file.

It would be more secure and, in the case of Fedora, convenient to be able to point it to a file stored in the repository.
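
A hypothetical repo.yaml entry for this feature (the local-path form is purely illustrative and not supported today) might look like:

repositories:
- arch: x86_64
  baseurl: https://download-ib01.fedoraproject.org/pub/fedora/linux/releases/32/Everything/x86_64/os/
  name: fedora-32-release
  gpgkey: keys/RPM-GPG-KEY-fedora-32-primary  # hypothetical: a path relative to the workspace instead of a URL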

Checksums are assumed to be in sha256 format

A colleague has reported hitting the following error:

INFO[0004] Resolving repomd.xml from http://<foo>/repodata/repomd.xml 
INFO[0004] Loading primary file from http://<foo>/repodata/<foo>-primary.xml.gz 
Error: failed to fetch primary.xml for <foo>: failed to get sha256sum of file: no sha256 found

On further inspection, this is caused by the fact that repomd.xml contains a bunch of

<checksum type='sha'>

elements, while bazeldnf expects all checksums to be in sha256 format.

Adding sha1 (and sha512?) support doesn't look too difficult from a quick look, but I can't allocate time to the task right now.
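
A rough sketch of what that dispatch might look like (an assumption about the shape of the fix, not existing bazeldnf code):

package repo

import (
	"crypto/sha1"
	"crypto/sha256"
	"crypto/sha512"
	"fmt"
	"hash"
)

// newHasher picks a hash implementation based on the checksum type
// advertised in repomd.xml; bazeldnf currently assumes sha256 only.
func newHasher(checksumType string) (hash.Hash, error) {
	switch checksumType {
	case "sha", "sha1":
		return sha1.New(), nil
	case "sha256":
		return sha256.New(), nil
	case "sha512":
		return sha512.New(), nil
	default:
		return nil, fmt.Errorf("unsupported checksum type %q", checksumType)
	}
}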

feature idea: use bazeldnf as sysroot for hermetic C/C++ toolchains

Some C/C++ toolchains use sysroots for cross compilation: https://github.com/grailbio/bazel-toolchain
It should be possible to use bazeldnf to provide the sysroot directly from RPMs.
I tried this naively by specifying a tar2files rule that provides the shared objects I want:

tar2files(
    name = "sysroot-files",
    files = {
        "": [
            "usr/lib64/ld-linux-x86-64.so.2",
            "usr/lib64/libc.so.6",
            "usr/lib64/libpthread.so.0",
            "usr/lib64/libm.so.6",
        ],
    },
    tar = ":sysroot",
    visibility = ["//visibility:public"],
)

... but this ended in a dependency cycle:

ERROR: /home/malte/.cache/bazel/_bazel_malte/378578e4f433132b2cb209458a9b56bc/external/llvm_toolchain_with_sysroot/BUILD.bazel:148:10: in filegroup rule @llvm_toolchain_with_sysroot//:linker-files-x86_64-linux: cycle in dependency graph:
    //3rdparty/bazel/com_github_google_go_tpm_tools/placeholder:ms_tpm_20_ref_disabled (3e8e717f4669c9418374bff8949496244fb3fd0c20f44265c211f83bca6cf692)
    @llvm_toolchain_with_sysroot//:cc-clang-x86_64-linux (3e8e717f4669c9418374bff8949496244fb3fd0c20f44265c211f83bca6cf692)
.-> @llvm_toolchain_with_sysroot//:linker-files-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)
|   @llvm_toolchain_with_sysroot//:linker-components-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)
|   @llvm_toolchain_with_sysroot//:sysroot-components-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)
|   //rpm:sysroot-files (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)
|   @bazeldnf//cmd:cmd (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)
|   @io_bazel_rules_go//:go_context_data (e93d1c23410f0ff85d23a7db057f8bc335ef39c02262590d5174b118f115d0b7)
|   @io_bazel_rules_go//:stdlib (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)
|   @io_bazel_rules_go//:cgo_context_data (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)
|   @llvm_toolchain_with_sysroot//:cc-clang-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)
`-- @llvm_toolchain_with_sysroot//:linker-files-x86_64-linux (ec13253c4a496ceaa20361e96ceb1da2e09453ee5422e5738007f3d4a26bebe4)

I would like to work on this but lack some understanding of how this really works under the hood. I would really appreciate it if anyone has ideas and maybe some pointers on where to look.

Test suite improvements

The current test suite is lacking in a number of ways.

Some of the packages (repo, rpm) use gomega only partially or don't use it at all.

Some of the packages (api, ldd, reducer) have no associated tests.

One of the tests (pkg/repo/repo_test.go) tries to open a non-existent repo.yaml file, but doesn't consider the failure fatal and happily reports a success status.

We also have a large number of references to obsolete content (Fedora 32). We should get those updated and document the process so that we can keep them reasonably fresh over time.

I'd be willing to work on some of these items, but I will probably need guidance because a lot of the inner workings of bazeldnf are still quite fuzzy in my head.

Request: Pre-built binaries

Feature Request

Please provide pre-built static binaries in GitHub releases (ideally built on tag with Bazel via a GitHub Actions pipeline).

Happy to work on the CI pipelines for this if the maintainer agrees with this approach.

Motivation

When using a non-Go Bazel project with bazeldnf, bazeldnf currently forces the project to import a Go toolchain in its WORKSPACE. This is ~300 MB of additional toolchain footprint. It would be great to be able to opt in to fetching static binaries instead (perhaps for the targets Linux x86_64-v2, Linux aarch64, and Linux ppc64le).

Also, thank you for working on bazeldnf.

rpm2tar produces non-deterministic output

rpm2tar stores symlinks, capabilities and selinuxLabels in maps:

bazeldnf/cmd/rpm2tar.go, lines 17 to 19 (at 6a9f524):

symlinks map[string]string
capabilities map[string]string
selinuxLabels map[string]string

This leads to a non-deterministic order of tar headers in the resulting tar file. In particular, the symlinks are traversed in non-deterministic order while their tar headers are generated:

for k, v := range rpm2taropts.symlinks {

I'm thinking about either using slices to store the symlinks, capabilities and selinuxLabels (while preserving the order given by the flags) or sorting the map keys during traversal. I can provide a fix but would be interested in what is preferred.
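
A minimal sketch of the second option, sorting the map keys before traversal (illustrative only, not the actual rpm2tar code):

package main

import "sort"

// sortedKeys returns the keys of m in sorted order so that tar headers
// derived from the map are always emitted in the same order.
func sortedKeys(m map[string]string) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

The existing loops would then range over sortedKeys(rpm2taropts.symlinks), and likewise for the capability and SELinux label maps, instead of ranging over the maps directly.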

Put repository rules into a separate .bzl file

In large projects, it's good practice to keep WORKSPACE clean of individual dependencies and instead include them from elsewhere.

It would be useful if bazeldnf could write its repository rules to a library file:

# repositories.bzl
load("@bazeldnf//:deps.bzl", "rpm")

def sandbox_dependencies():
    rpm(
        name = "acpica-tools-0__20220331-4.fc37.x86_64",
        sha256 = "ab044a35844bf56c0a217d298faa7f390915c72be1f5d9fd241877aa76ccb9b7",
        urls = [
            # ...
        ],
    )
    # ...

The main WORKSPACE would then just include them:

load("//third_party/sandboxroot:repositories.bzl", "sandbox_dependencies")

sandbox_dependencies()

try to use rpmtree for euleros

cat euler/repo.yaml

repositories:

bazelisk run //:bazeldnf -- rpmtree --buildfile euler/BUILD.bazel --repofile euler/repo.yaml --basesystem openEuler-release --name libvirttree libvirt

INFO: Analyzed target //:bazeldnf (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //:bazeldnf up-to-date:
bazel-bin/bazeldnf.bash
INFO: Elapsed time: 0.404s, Critical Path: 0.09s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
INFO[0000] Loading packages.
Error: strconv.ParseInt: parsing "None": invalid syntax

How can I debug this error?

Support bzlmod / bazel central registry

I'm in the process of planning a migration of my team's monorepo from Bazel WORKSPACE to bzlmod.
Bazeldnf is one dependency that does not yet support bzlmod.

If you are interested in supporting bzlmod (and publishing releases to bcr), I'd like to take a stab at this in the upcoming months.

Here are the guidelines to publish a ruleset to the BCR. Publishing new releases can be automated with a GitHub app.

Design spec for module extension

We need to create a module extension for rpm rules (rpm rules are currently repo rules that create new repositories. This works differently under bzlmod).

The dominant pattern for bzlmod is to read a domain-specific lockfile and generate repositories from it.
A good example is how rules_go does it:

https://github.com/bazelbuild/rules_go/blob/master/docs/go/core/bzlmod.md

Basically, they parse a go.mod and go.sum file and generate repositories from them.

For bazeldnf, I think this could be great as well.
Instead of having rpm rules that are bazeldnf specific, we could try to standardize on a lockfile format for rpms.
One example of such a format is implemented here: https://github.com/microsoft/rpmoci

This means bazeldnf could share the same lockfile as rpmoci and there is no potential for rpm rules to be out of sync with the lockfile.
The actual usage could look similar to this:

rpm_deps = use_extension("@bazeldnf//:extensions.bzl", "rpm_deps")
rpm_deps.from_file(rpmlock = "//:rpmoci.lock")

# All *direct* rpm dependencies of the module have to be listed explicitly.
use_repo(
    rpm_deps,
    "curl-minimal",
    "systemd",
)

This would then result in rpm rules during module resolution. Note that we do not specify all packages here, but only direct dependencies. The lockfile pins the transitive closure of all required rpms and bazeldnf would generate the real set of rpm rules (including transitive deps) during module resolution.

Another advantage of this is improved support for vendoring / mirroring rpms.
Since the URLs of the rpm rules are not committed in the source code, they could easily be changed to point to:

  • a base url of my own yum mirror
  • a content addressable store (base url + hash of rpm)
  • a fallback list of public mirrors
  • a local vendor directory

This would make the actual dependency management of rpms a lot simpler.

Repo Certificate Authority field for private repos

Public repos pass certificate verification using the system's trusted root certificates. Private repos may require a Certificate Authority that is not in the system's trusted root certificates.

Although some users may be able to add CAs system-wide, it is easiest and cleanest to allow setting CAs on a per-repo basis. This way, no system-wide change is necessary in order to use private repos with custom CAs.

The repo.yaml syntax can be extended with a CA string field so users can provide the CA together with the repository information.
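
A hypothetical extension of the syntax (the ca field is illustrative, not an existing option) could look like:

repositories:
- arch: x86_64
  baseurl: https://rpm.private.example.com/repo/
  name: private-repo
  ca: certs/private-root-ca.pem  # hypothetical field: CA bundle used to verify this repo's TLS certificate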


WORKSPACE RPM URL does not take into account repo metadata

I have a repo where rpm files are kept at a separate location from the metadata:

repositories:
- arch: x86_64
  baseurl: https://REPO_URL/
  name: qemu-kvm-x86_64
  gpgcheck: 0
  repo_gpgcheck: 0

The repo metadata is available at the baseurl, but the actual rpm files are located elsewhere and referenced in the primary.xml.gz file:

<package ...>...<location xml:base="https://RPM_SERVER_URL" href="qemu-guest-agent-8.2.0-5.el9.x86_64.rpm"/>

Notice that the repo baseurl and the rpm file URLs are two different domain names.

bazeldnf currently generates the WORKSPACE rpm file URL by concatenating the baseurl with the rpm filename. This approach does not work since the rpm files are not alongside the repo metadata.

Is it possible to update bazeldnf to honor the <location xml:base> attribute when building URLs?

Shoutout to @andreabolognani who I discussed this issue with.
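
One possible shape for such a change, sketched as an assumption rather than actual bazeldnf code: prefer the location element's xml:base over the repository baseurl when both are present.

package repo

import "strings"

// packageURL builds the download URL for a package, honoring the optional
// xml:base attribute and falling back to the repository baseurl.
func packageURL(baseurl, xmlBase, href string) string {
	base := baseurl
	if xmlBase != "" {
		base = xmlBase
	}
	return strings.TrimRight(base, "/") + "/" + strings.TrimLeft(href, "/")
}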

No solution found trying to install buildah

With the following reproducing script:

cat << EOF > repo.yaml
repositories:
- arch: x86_64
  baseurl: http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/
  name: centos/stream9-baseos-x86_64
  gpgkey: https://www.stream.centos.org/keys/RPM-GPG-KEY-CentOS-Official
- arch: x86_64
  baseurl: http://mirror.stream.centos.org/9-stream/AppStream/x86_64/os/
  name: centos/stream9-appstream-x86_64
  gpgkey: https://www.stream.centos.org/keys/RPM-GPG-KEY-CentOS-Official
EOF

bazel run //:bazeldnf -- fetch
bazel run //:bazeldnf -- rpmtree --public --name testimage --basesystem centos-stream-release buildah

INFO[0004] Loading the Partial weighted MAXSAT problem.
INFO[0004] Solving the Partial weighted MAXSAT problem.
INFO[0004] No solution found.
Error: no solution found

I can force a solution with --force-ignore-with-dependencies containers-common, so I suspect this is the problematic package, perhaps because new versions conflict with old versions of the same package? I am not sure how to make bazeldnf handle this, though.

An RPM ate my `/bin/*`

I know using large base images is not very good, and relying on base image binaries as well as the RPMs I want to install is maybe an anti-pattern, but I'm trying to build an image with some RPMs and it ate /bin/bash 😭

What I would have expected was for all the /bin/* executables to live in peaceful harmony.

Not sure if we could easily solve this in the rules in a general way?

Before we stumbled upon your Bazel rules, I tried to solve this in a fork of the rules_docker scripts around building tars, but it's pretty atrocious:
https://sourcegraph.com/github.com/zoidbergwill/rules_docker@zoid/rpms/-/blob/container/build_tar.py?L289-318

(If we wanted to support something like that, I guess it'd need to be behind an opt-in, so it doesn't break things for people)

Having a minimal reproducible repo would probably be useful, so I'll try to strip it down and see if I can reproduce it with some public RPMs sometime.

I hope that all makes sense though!

If I recall correctly, the issue was that /bin is a symlink on CentOS 7, so then some of the tars / layering overrides it at some point, or something.

I probably have internal chat history somewhere from when I first tried to fix this a year or so ago. I've managed to avoid it again since then.

How to deal with duplicate files?

Duplicate files in rpmtree tar files can be quite problematic when exporting them as container image layers!
This is what happens when I try to load a container image created from an rpmtree layer that contains duplicate files:

docker load --input /path/to/docker/image.tar
31ff0fbf1732: Loading layer [=================================>                 ]  607.7MB/916.2MB
Error processing tar file(duplicates of file paths not supported):

When looking at the image layer and searching for duplicate entries, I find these:

tar -tf layer.tar | sort | uniq -d
./usr/share/licenses/systemd/LICENSE.LGPL2.1
./usr/share/man/man3/nfs4_uid_to_name.3.gz

In my case they come from

  • systemd-251.13-6.fc37.x86_64.rpm + systemd-libs-251.13-6.fc37.x86_64.rpm (both containing the LICENSE file)
  • libnfsidmap-2.6.2-1.rc2.fc37.x86_64.rpm + nfs-utils-2.6.2-1.rc2.fc37.x86_64.rpm (both containing the nfs4_uid_to_name.3.gz file)

Can I somehow filter for duplicates or remove one duplicate manually?
I believe this may be an issue with the RPMs that I should report upstream (I'm not a Red Hat engineer, but having two packages that may be installed together ship the same files sounds like a bug). But this is something that can always happen whenever you create large trees.

EDIT: upstream bzs:

Feedback from the bz:

Sorry, but that will just need to be fixed in those tools. RPMs are allowed to contain the same paths, as long as the contents are the same. [...] There are countless rpms out there that do this.

v0.5.9-rc2 breaks tar2files (ldd) targets on crossbuilds

When upgrading to bazeldnf v0.5.9-rc2, cross-building for different arches fails to find included header files.
I specifically tested this when building kubevirt with kubevirt/kubevirt#10490:

INFO: From ImageLayer cmd/libguestfs/version-container-layer.tar:
Duplicate file in archive: ./etc/passwd, picking first occurrence
ERROR: /root/go/src/kubevirt.io/kubevirt/vendor/libvirt.org/go/libvirt/BUILD.bazel:3:11: GoCompilePkg vendor/libvirt.org/go/libvirt/go_default_library.a failed: (Exit 1): builder failed: error executing command bazel-out/k8-opt-exec-2B5CBBC6/bin/external/go_sdk/builder_reset/builder compilepkg -sdk external/go_sdk -installsuffix linux_s390x -tags selinux,selinux -src vendor/libvirt.org/go/libvirt/callbacks.go ... (remaining 214 arguments skipped)

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
In file included from vendor/libvirt.org/go/libvirt/connect_helper.h:30,
                 from kubevirt/vendor/libvirt.org/go/libvirt/connect.go:42:
vendor/libvirt.org/go/libvirt/libvirt_generated.h:35:10: fatal error: libvirt/libvirt.h: No such file or directory
   35 | #include <libvirt/libvirt.h>
      |          ^~~~~~~~~~~~~~~~~~~
compilation terminated.
compilepkg: error running subcommand external/go_sdk/pkg/tool/linux_amd64/cgo: exit status 2

This only happens when cross-compiling, most likely because the libraries are otherwise natively installed.

I could verify that the referenced PR works fine with bazeldnf v0.5.8, so the regression was introduced somewhere after that.

Development instructions needed

I wanted to add a feature for different allowed checksum types (such as sha1) that might be used for certain RPM repos. However I am unable to build all the targets on main.

Steps to reproduce

  1. Open a new codespace.
  2. Install bazelisk and Bazel.
  3. Run bazel build //...
  4. Bazel errors out:
Starting local Bazel server and connecting to it...
INFO: Repository bazel_gazelle_go_repository_tools instantiated at:
  /workspaces/bazeldnf/WORKSPACE:63:21: in <toplevel>
  /home/codespace/.cache/bazel/_bazel_codespace/41a0baffe935881d80be720d73400e18/external/bazel_gazelle/deps.bzl:75:24: in gazelle_dependencies
Repository rule go_repository_tools defined at:
  /home/codespace/.cache/bazel/_bazel_codespace/41a0baffe935881d80be720d73400e18/external/bazel_gazelle/internal/go_repository_tools.bzl:117:38: in <toplevel>
ERROR: An error occurred during the fetch of repository 'bazel_gazelle_go_repository_tools':
   Traceback (most recent call last):
        File "/home/codespace/.cache/bazel/_bazel_codespace/41a0baffe935881d80be720d73400e18/external/bazel_gazelle/internal/go_repository_tools.bzl", line 103, column 13, in _go_repository_tools_impl
                fail("failed to build tools: " + result.stderr)
Error in fail: failed to build tools: go build github.com/bazelbuild/bazel-gazelle/language/proto: /home/codespace/.cache/bazel/_bazel_codespace/41a0baffe935881d80be720d73400e18/external/go_sdk/pkg/tool/linux_amd64/compile: signal: killed
ERROR: /workspaces/bazeldnf/WORKSPACE:63:21: fetching go_repository_tools rule //external:bazel_gazelle_go_repository_tools: Traceback (most recent call last):
        File "/home/codespace/.cache/bazel/_bazel_codespace/41a0baffe935881d80be720d73400e18/external/bazel_gazelle/internal/go_repository_tools.bzl", line 103, column 13, in _go_repository_tools_impl
                fail("failed to build tools: " + result.stderr)
Error in fail: failed to build tools: go build github.com/bazelbuild/bazel-gazelle/language/proto: /home/codespace/.cache/bazel/_bazel_codespace/41a0baffe935881d80be720d73400e18/external/go_sdk/pkg/tool/linux_amd64/compile: signal: killed
ERROR: error loading package '': Encountered error while reading extension file 'buildifier/def.bzl': no such package '@com_github_bazelbuild_buildtools//buildifier': no such package '@bazel_gazelle_go_repository_config//': no such package '@bazel_gazelle_go_repository_tools//': failed to build tools: go build github.com/bazelbuild/bazel-gazelle/language/proto: /home/codespace/.cache/bazel/_bazel_codespace/41a0baffe935881d80be720d73400e18/external/go_sdk/pkg/tool/linux_amd64/compile: signal: killed
INFO: Elapsed time: 50.129s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (13 packages loaded)
    currently loading: 
    Fetching @com_github_bazelbuild_buildtools; Restarting. 40s
    Fetching @bazel_gazelle_go_repository_config; Restarting. 40s

A development README would be helpful in this case since I'm not sure how to build all the targets.

How to deal with alternatives?

In a minimal example that just depends on the Fedora 37 binutils:

rpmtree(
    name = "sandbox",
    rpms = [
        "@binutils-0__2.38-25.fc37.x86_64//rpm",
    ],
    visibility = ["//visibility:public"],
)

.../usr/bin/ld ends up being the GOLD linker:

$ .bazeldnf/sandbox/default/root/bin/ld --version 
GNU gold (version 2.38-25.fc37) 1.16

In the binutils spec, BFD is the default through the alternatives mechanism: https://src.fedoraproject.org/rpms/binutils/blob/f36/f/binutils.spec#_455

A clean Fedora 37 container uses GNU ld by default even though binutils-gold is installed:

$ ld --version 
GNU ld version 2.38-25.fc37

How can I replicate this using bazeldnf?
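
One untested idea, relying only on the symlinks attribute documented above (so an assumption, not a confirmed workaround), would be to override the alternatives-managed link explicitly in the rpmtree:

rpmtree(
    name = "sandbox",
    rpms = [
        "@binutils-0__2.38-25.fc37.x86_64//rpm",
    ],
    # hypothetical: force /usr/bin/ld to point at the BFD linker instead of gold
    symlinks = {
        "/usr/bin/ld": "ld.bfd",
    },
    visibility = ["//visibility:public"],
)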
