
manylinux's Introduction

manylinux

Older archives: https://groups.google.com/forum/#!forum/manylinux-discuss

The goal of the manylinux project is to provide a convenient way to distribute binary Python extensions as wheels on Linux. This effort has produced PEP 513 (manylinux1), PEP 571 (manylinux2010), PEP 599 (manylinux2014), PEP 600 (manylinux_x_y) and PEP 656 (musllinux_x_y).

PEP 513 defined the manylinux1_x86_64 and manylinux1_i686 platform tags; the corresponding wheels were built on CentOS 5, which reached End of Life (EOL) on March 31st, 2017.

PEP 571 defined the manylinux2010_x86_64 and manylinux2010_i686 platform tags; the corresponding wheels were built on CentOS 6, which reached End of Life (EOL) on November 30th, 2020.

PEP 599 defines the following platform tags: manylinux2014_x86_64, manylinux2014_i686, manylinux2014_aarch64, manylinux2014_armv7l, manylinux2014_ppc64, manylinux2014_ppc64le and manylinux2014_s390x. Wheels are built on CentOS 7, which will reach End of Life (EOL) on June 30th, 2024.

PEP 600 was designed to be "future-proof": it does not mandate a specific set of symbols or a specific distro to build on. It only states that a wheel tagged manylinux_x_y shall work on any distro based on glibc >= x.y. PEP 656 added musllinux_x_y tags for musl >= x.y.

An overview of distros per glibc version is available at pep600_compliance.
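Since a manylinux_x_y wheel only requires glibc >= x.y, you can check which PEP 600 tags a given machine satisfies from the interpreter itself. A minimal stdlib-only sketch (note that platform.libc_ver() reports an empty string on non-glibc systems such as musl-based distros):

```python
import platform

# platform.libc_ver() inspects the running interpreter's binary and
# reports the C library it links against, e.g. ('glibc', '2.31').
lib, version = platform.libc_ver()

if lib == "glibc":
    major, minor = (int(part) for part in version.split(".")[:2])
    print(f"this system satisfies manylinux_{major}_{minor} and older tags")
else:
    print("no glibc detected; check the musllinux tags instead")
```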

The manylinux project supports:

  • manylinux2014 images for x86_64, i686, aarch64, ppc64le and s390x.
  • manylinux_2_28 images for x86_64, aarch64, ppc64le and s390x.
  • musllinux_1_1 images for x86_64, i686, aarch64, ppc64le and s390x.
  • musllinux_1_2 images for x86_64, i686, aarch64, ppc64le and s390x.

Wheel packages compliant with those tags can be uploaded to PyPI (for instance with twine) and can be installed with pip:

| manylinux tag | Client-side pip version required | CPython (sources) version embedding a compatible pip | Distribution default pip compatibility |
|---------------|----------------------------------|------------------------------------------------------|----------------------------------------|
| manylinux_x_y | pip >= 20.3 | 3.8.10+, 3.9.5+, 3.10.0+ | ALT Linux 10+, RHEL 9+, Debian 11+, Fedora 34+, Mageia 8+, Photon OS 3.0 with updates, Ubuntu 21.04+ |
| manylinux2014 | pip >= 19.3 | 3.7.8+, 3.8.4+, 3.9.0+ | CentOS 7 rh-python38, CentOS 8 python38, Fedora 32+, Mageia 8+, openSUSE 15.3+, Photon OS 4.0+ (3.0+ with updates), Ubuntu 20.04+ |
| manylinux2010 | pip >= 19.0 | 3.7.3+, 3.8.0+ | ALT Linux 9+, CentOS 7 rh-python38, CentOS 8 python38, Fedora 30+, Mageia 7+, openSUSE 15.3+, Photon OS 4.0+ (3.0+ with updates), Ubuntu 20.04+ |
| manylinux1 | pip >= 8.1.0 | 3.5.2+, 3.6.0+ | ALT Linux 8+, Amazon Linux 1+, CentOS 7+, Debian 9+, Fedora 25+, openSUSE 15.2+, Mageia 7+, Photon OS 1.0+, Ubuntu 16.04+ |

The various manylinux tags allow projects to distribute wheels that are automatically installed (and work!) on the vast majority of desktop and server Linux distributions.
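The glibc floor each tag family implies is fixed by the PEPs above (2.5 for manylinux1, 2.12 for manylinux2010, 2.17 for manylinux2014, and x.y for manylinux_x_y), so it can be recovered mechanically from a platform tag. An illustrative stdlib-only helper — the function name is ours, not part of any manylinux tooling:

```python
import re

# Legacy aliases fixed by PEP 513, PEP 571 and PEP 599.
LEGACY = {
    "manylinux1": (2, 5),
    "manylinux2010": (2, 12),
    "manylinux2014": (2, 17),
}

def glibc_requirement(platform_tag: str) -> tuple[int, int]:
    """Return the minimum glibc (major, minor) a manylinux tag implies."""
    # PEP 600 style: manylinux_<glibc-major>_<glibc-minor>_<arch>
    match = re.match(r"manylinux_(\d+)_(\d+)_", platform_tag)
    if match:
        return int(match.group(1)), int(match.group(2))
    return LEGACY[platform_tag.split("_", 1)[0]]

print(glibc_requirement("manylinux_2_28_x86_64"))  # (2, 28)
print(glibc_requirement("manylinux2014_aarch64"))  # (2, 17)
```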

This repository hosts several manylinux-related things:

Docker images

Building manylinux-compatible wheels is not trivial; as a general rule, binaries built on one Linux distro will only work on other Linux distros that are the same age or newer. Therefore, if we want to make binaries that run on most Linux distros, we have to use an old enough distro.

Rather than forcing you to install an old distro and build Python yourself, we provide Docker images where we've done that work for you. The images are uploaded to quay.io and are tagged for repeatable builds.

manylinux_2_28 (AlmaLinux 8 based)

Toolchain: GCC 12

  • x86_64 image: quay.io/pypa/manylinux_2_28_x86_64
  • aarch64 image: quay.io/pypa/manylinux_2_28_aarch64
  • ppc64le image: quay.io/pypa/manylinux_2_28_ppc64le
  • s390x image: quay.io/pypa/manylinux_2_28_s390x

Built wheels are also expected to be compatible with other distros using glibc 2.28 or later, including:

  • Debian 10+
  • Ubuntu 18.10+
  • Fedora 29+
  • CentOS/RHEL 8+

manylinux2014 (CentOS 7 based, glibc 2.17)

Toolchain: GCC 10

  • x86_64 image: quay.io/pypa/manylinux2014_x86_64
  • i686 image: quay.io/pypa/manylinux2014_i686
  • aarch64 image: quay.io/pypa/manylinux2014_aarch64
  • ppc64le image: quay.io/pypa/manylinux2014_ppc64le
  • s390x image: quay.io/pypa/manylinux2014_s390x

Built wheels are also expected to be compatible with other distros using glibc 2.17 or later, including:

  • Debian 8+
  • Ubuntu 13.10+
  • Fedora 19+
  • RHEL 7+

manylinux_2_24 (Debian 9 based) - EOL

Support for manylinux_2_24 ended on January 1st, 2023.

These images have some caveats mentioned in different issues.

Toolchain: GCC 6

  • x86_64 image: quay.io/pypa/manylinux_2_24_x86_64
  • i686 image: quay.io/pypa/manylinux_2_24_i686
  • aarch64 image: quay.io/pypa/manylinux_2_24_aarch64
  • ppc64le image: quay.io/pypa/manylinux_2_24_ppc64le
  • s390x image: quay.io/pypa/manylinux_2_24_s390x

manylinux2010 (CentOS 6 based, glibc 2.12 - EOL)

Support for manylinux2010 ended on August 1st, 2022.

Toolchain: GCC 8

  • x86_64 image: quay.io/pypa/manylinux2010_x86_64
  • i686 image: quay.io/pypa/manylinux2010_i686

manylinux1 (CentOS 5 based, glibc 2.5 - EOL)

Code and details regarding manylinux1 can be found in the manylinux1 tag.

Support for manylinux1 ended on January 1st, 2022.

Toolchain: GCC 4.8

  • x86_64 image: quay.io/pypa/manylinux1_x86_64
  • i686 image: quay.io/pypa/manylinux1_i686

musllinux_1_2 (Alpine Linux 3.19 based, 3.13+ compatible)

Toolchain: GCC 13

  • x86_64 image: quay.io/pypa/musllinux_1_2_x86_64
  • i686 image: quay.io/pypa/musllinux_1_2_i686
  • aarch64 image: quay.io/pypa/musllinux_1_2_aarch64
  • ppc64le image: quay.io/pypa/musllinux_1_2_ppc64le
  • s390x image: quay.io/pypa/musllinux_1_2_s390x

musllinux_1_1 (Alpine Linux 3.12 based)

Toolchain: GCC 9

  • x86_64 image: quay.io/pypa/musllinux_1_1_x86_64
  • i686 image: quay.io/pypa/musllinux_1_1_i686
  • aarch64 image: quay.io/pypa/musllinux_1_1_aarch64
  • ppc64le image: quay.io/pypa/musllinux_1_1_ppc64le
  • s390x image: quay.io/pypa/musllinux_1_1_s390x

All images are rebuilt using GitHub Actions / Travis-CI on every commit to this repository; see the docker/ directory for source code.

Image content

All images currently contain:

  • CPython 3.6, 3.7, 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.13t and PyPy 3.7, 3.8, 3.9, 3.10 installed in /opt/python/<python tag>-<abi tag>. The directories are named after the PEP 425 tags for each environment -- e.g. /opt/python/cp37-cp37m contains a CPython 3.7 build, and can be used to produce wheels named like <pkg>-<version>-cp37-cp37m-<arch>.whl.

  • Development packages for all the libraries that PEP 571/599 list. One should not assume the presence of any other development package.

  • The following development tools, installed via pipx (which is also available):
  • All Python interpreters have the following packages pre-installed:
  • The manylinux-interpreters tool, which can list all available interpreters and install those missing from the image

    Three commands are available:

    • manylinux-interpreters list

      usage: manylinux-interpreters list [-h] [-v] [-i] [--format {text,json}]
      
      list available or installed interpreters
      
      options:
        -h, --help            show this help message and exit
        -v, --verbose         display additional information (--format=text only, ignored for --format=json)
        -i, --installed       only list installed interpreters
        --format {text,json}  text is not meant to be machine readable (i.e. the format is not stable)
    • manylinux-interpreters ensure-all

      usage: manylinux-interpreters ensure-all [-h]
      
      make sure all interpreters are installed
      
      options:
        -h, --help  show this help message and exit
    • manylinux-interpreters ensure

      usage: manylinux-interpreters ensure [-h] TAG [TAG ...]
      
      make sure a list of interpreters are installed
      
      positional arguments:
        TAG         tag with format '<python tag>-<abi tag>' e.g. 'pp310-pypy310_pp73'
      
      options:
        -h, --help  show this help message and exit

Note that less common or virtually unheard of flag combinations (such as --with-pydebug (d) and --without-pymalloc (absence of m)) are not provided.

Note that starting with CPython 3.8, the default sys.abiflags became an empty string: the m flag for pymalloc became useless (builds with and without pymalloc are ABI-compatible) and so was removed (e.g. /opt/python/cp38-cp38).

Note that PyPy is not available on ppc64le & s390x or on the musllinux images.
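The /opt/python/&lt;python tag&gt;-&lt;abi tag&gt; directory name for a CPython build can be derived from the running interpreter itself; a small sketch (on CPython 3.8+ sys.abiflags is empty, so the ABI tag equals the python tag):

```python
import sys

# Build the /opt/python/<python tag>-<abi tag> directory name that the
# images use for the CPython we are currently running on.
ver = f"{sys.version_info.major}{sys.version_info.minor}"
abiflags = getattr(sys, "abiflags", "")  # absent on some non-POSIX builds
tag = f"cp{ver}-cp{ver}{abiflags}"
print(f"/opt/python/{tag}")  # e.g. /opt/python/cp312-cp312 on CPython 3.12
```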

Building Docker images

To build the Docker images, please run the following command from the current (root) directory:

$ PLATFORM=$(uname -m) POLICY=manylinux2014 COMMIT_SHA=latest ./build.sh

Please note that the default Docker build uses buildx. Other frontends can be selected by defining MANYLINUX_BUILD_FRONTEND; see build.sh for details.

Updating the requirements

The requirement files are pinned and controlled by uv compile. To update the pins, run:

$ nox -s update_python_dependencies

Updating the native dependencies

Native dependencies are all pinned in the Dockerfile. To update the pins, run the dedicated nox session. This will add a commit for each update. If you only want to see what would be updated, you can do a dry run:

$ nox -s update_native_dependencies [-- --dry-run]

Example

An example project that builds x86_64 wheels for each Python interpreter version can be found here: https://github.com/pypa/python-manylinux-demo. The repository also contains a demo that builds i686 and x86_64 wheels with manylinux1 tags.

This demonstrates how to use these Docker images in conjunction with auditwheel to build manylinux-compatible wheels using the free Travis CI continuous-integration service.

(NB: for the i686 images running on an x86_64 host machine, it's necessary to run everything under the command-line program linux32, which changes the architecture reported in the new program environment. See this example invocation.)

The PEP itself

The official version of PEP 513 is stored in the PEP repository, but we also have our own copy here. This is where the PEP was originally written, so if for some reason you really want to see the full history of edits it went through, then this is the place to look.

The proposal to upgrade manylinux1 to manylinux2010 after CentOS 5 reached EOL was discussed in PEP 571.

The proposal to upgrade manylinux2010 to manylinux2014 was discussed in PEP 599.

The proposal for a "future-proof" manylinux_x_y definition was discussed in PEP 600.

This repo's policy-info/ directory also contains some analysis code that was used when putting together the original proposal.

If you want to read the full discussion that led to the original policy, then lots of that is here: https://groups.google.com/forum/#!forum/manylinux-discuss

The distutils-sig archives for January 2016 also contain several threads.

Code of Conduct

Everyone interacting in the manylinux project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the PSF Code of Conduct.


manylinux's Issues

Install cmake 2.8.11 by default?

I need cmake to build my manylinux library and yumming it takes quite a bit of time. Any chance of installing it in the main image?

Discussion

Hey Nathaniel. Thanks for inviting me.

Do you want to do discussion here or on the google group?

Update auditwheel in the Docker containers

Hi there,

While using the suggested Docker containers I ran into a problem: I couldn't upload some of the wheels they had generated. Turns out, the problem was a broken rename by auditwheel 1.3.0, the version installed in the container. Updating to 1.4.0 solves it (looks like this commit), so it might be a good idea to update the container image.

FTR, the broken renamed wheels looked like this:

 $  twine upload -r pypi wheelhouse/*.whl
Uploading distributions to https://pypi.python.org/pypi
Uploading pyuv-1.3.0-cp27-cp27m-linux_i686.manylinux1__i686.whl
HTTPError: 400 Client Error: Binary wheel for an unsupported platform for url: https://pypi.python.org/pypi

Error building manylinux wheels requiring signalfd.h, pipe2, SOCK_NONBLOCK, O_CLOEXEC

I am attempting to build a manylinux wheel from the nupic.core project in github. I am using the docker image quay.io/pypa/manylinux1_x86_64. nupic.core builds and statically links against the capnproto library, which relies on signalfd.h by default. Unfortunately, the docker image quay.io/pypa/manylinux1_x86_64 does not provide signalfd.h, so my build fails like this:

Linking CXX static library libkj.a
[ 27%] Built target kj
[ 29%] Building CXX object src/kj/CMakeFiles/kj-async.dir/async.c++.o
[ 30%] Building CXX object src/kj/CMakeFiles/kj-async.dir/async-unix.c++.o
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-unix.c++:36:26: fatal error: sys/signalfd.h: No such file or directory
 #include <sys/signalfd.h>

I even tried a capnproto-specific workaround to steer capnproto away from using signalfd.h by setting -DKJ_USE_EPOLL=0. However, in that case, the build failed due to lack of other functions and constants (in glibc?):

/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++: In member function 'int kj::{anonymous}::SocketAddress::socket(int) const':
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:348:13: error: 'SOCK_NONBLOCK' was not declared in this scope
     type |= SOCK_NONBLOCK | SOCK_CLOEXEC;
             ^
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:348:29: error: 'SOCK_CLOEXEC' was not declared in this scope
     type |= SOCK_NONBLOCK | SOCK_CLOEXEC;
                             ^
In file included from /nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:24:0:
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++: In lambda function:
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:648:38: error: 'O_CLOEXEC' was not declared in this scope
   KJ_SYSCALL(pipe2(fds, O_NONBLOCK | O_CLOEXEC));
                                      ^
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:648:47: error: 'pipe2' was not declared in this scope
   KJ_SYSCALL(pipe2(fds, O_NONBLOCK | O_CLOEXEC));

I recursively grep'ed for these symbols in the docker image's /usr/include/ directory, and they were all absent.

Regarding pipe2, linux man has this to say:

pipe2() was added to Linux in version 2.6.27; glibc support is available starting with version 2.9.

Regarding signalfd:

signalfd() is available on Linux since kernel 2.6.22. Working support is provided in glibc since version 2.8. The signalfd4() system call (see NOTES) is available on Linux since kernel 2.6.27

Updating autoconf is insufficient

A newer version of autoconf was added through #72; unfortunately, that is insufficient to build some libraries, leading to errors such as error: Libtool library used but 'LIBTOOL' is undefined.

Installing newer versions of automake and libtool fixes this.

I'm currently using the following as a workaround in my build script:

wget -q https://ftp.gnu.org/gnu/autoconf/autoconf-latest.tar.gz && tar zxf autoconf-latest.tar.gz && cd autoconf* && ./configure > /dev/null && make install > /dev/null && cd ..
wget -q https://ftp.gnu.org/gnu/automake/automake-1.15.tar.gz && tar zxf automake-*.tar.gz && cd automake* && ./configure > /dev/null && make install > /dev/null && cd ..
wget -q https://ftp.gnu.org/gnu/libtool/libtool-2.4.5.tar.gz && tar zxf libtool-*.tar.gz && cd libtool* && ./configure > /dev/null && make install > /dev/null && cd ..

C99 inline semantics

Some projects quite reasonably assume that C99 inline semantics are used when compiling in -std=c99 or -std=c11 modes, which are well supported by GCC 4.8.

However, the default for the compiler seems to be -fgnu89-inline, which forces incompatible pre-C99 inline semantics even in C99/C11 mode, which can cause quite some confusing "multiple definition" errors in libraries that rely on the correct semantics.

Unfortunately, adding -fno-gnu89-inline isn't quite enough to fix this, because the libc headers in /usr/include are apparently not compatible with C99 semantics. More recent versions of the headers explicitly force gnu89 inline semantics for these headers by using the __gnu_inline__ compiler attribute, which was introduced around the time that C99 support was.

So, besides adding -fno-gnu89-inline to my CFLAGS, I have to add this monkey patch to my dockerfile to make libraries like OpenAL compile:
find /usr/include/ -type f -exec sed -i 's/\bextern _*inline_*\b/extern __inline __attribute__ ((__gnu_inline__))/g' {} +

Assuming we no longer need to support GCC 4.2, perhaps we could add this command to the manylinux dockerfile? It would still only be half of the solution, since I believe it is still rather odd for GCC to apply gnu89 inline semantics in C99/C11 modes, but the latter is much easier to work around by adding said flag to CFLAGS.

Alternatively, would it be possible to update the libc headers without breaking compatibility?

How should we arrange the docker image namespace?

There are repositories, and tags within those repositories, and it's all a bit confusing.

I'm currently inclined towards making multiple repositories: pypa/manylinux1_x86_64, pypa/manylinux1_i686 (and then later maybe there will be pypa/manylinux2_x86_64, etc.), and avoiding routine use of any tags except latest.

But, as previously stated, I have no idea what I'm doing with docker, so would welcome any thoughts :-)

autoconf version

The version of autoconf shipping with CentOS 5 is sufficiently out of date to prevent building various projects (among them libffi). It's easy to install it yourself, but since this seems likely to be a common problem for users of the manylinux1 image I'd be happy to send a PR to install the latest autoconf (2.69, released 4 years ago) as part of the build process if people think it belongs here.

(It really just looks like RUN wget http://ftp.gnu.org/gnu/autoconf/autoconf-latest.tar.gz && tar zxvf autoconf-latest.tar.gz && cd autoconf* && ./configure && make install)

sqlite3 is broken under Python 3.6

Here is the relevant snippet from the build log:

*** WARNING: renaming "_sqlite3" since importing it failed: build/lib.linux-x86_64-3.6/_sqlite3.cpython-36m-x86_64-linux-gnu.so: undefined symbol: sqlite3_stmt_readonly
The following modules found by detect_modules() in setup.py, have been
built by the Makefile instead, as configured by the Setup files:
atexit                pwd                   time               

Following modules built successfully but were removed because they could not be imported:
_sqlite3                                                       

This issue is being discussed here: ghaering/pysqlite#85

The problem is that apparently Python 3.6 requires a more recent version of libsqlite3 than the one available on old CentOS. We could build our own sqlite3 from source in the docker image, but the configure script currently does not make it possible to pass a path to a custom sqlite3 install.
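For anyone hitting this on an environment where the module does import, the runtime SQLite version can be checked directly; sqlite3_stmt_readonly was added upstream in SQLite 3.7.4, so older runtime libraries trigger the undefined-symbol failure above. A quick diagnostic sketch:

```python
import sqlite3

# sqlite3.sqlite_version reports the runtime SQLite library version.
# sqlite3_stmt_readonly first appeared in SQLite 3.7.4; without it,
# Python 3.6's _sqlite3 fails to import at all.
needed = (3, 7, 4)
have = tuple(int(part) for part in sqlite3.sqlite_version.split("."))
print(sqlite3.sqlite_version)
print("ok" if have >= needed else "runtime SQLite too old for Python 3.6")
```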

Compile x86_64 Python with -fPIC

I am getting the following linking error when trying to build with the latest Docker image:

/opt/rh/devtoolset-2/root/usr/libexec/gcc/x86_64-CentOS-linux/4.8.2/ld: /opt/2.7mu/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32S against `_Py_NotImplementedStruct' can not be used when making a shared object; recompile with -fPIC
/opt/2.7mu/lib/libpython2.7.a: could not read symbols: Bad value

Would it be possible to keep libpython.a?

The application I'm trying to build has a particular need to embed the interpreter into an executable, which requires static linking with libpython.

Would it perhaps be possible not to delete libpython.a from the image, but keep it in the _internal directory for those who are absolutely certain they need to use it?

Introduce image versioning

It would be good to have images with a version rather than always overwriting the latest version.

Thus, one could use explicit versions.

My proposal would be to use the date as a version, e.g.

  • pypa/manylinux1_x86_64:20170502

Manylinux wheel not working on self-compiled python

I've found a situation where manylinux wheels don't work. I think it might be a bug. To reproduce:

(On cpython source directory cloned from github)

$ git checkout v3.5.3
$ ./configure --with-pydebug
$ make -j 7
$ ./python -m venv test_env
$ source test_env/bin/activate

Then install some package that has a manylinux wheel, for example sip and pillow:

(test_env) $ pip install sip
Collecting sip
  Could not find a version that satisfies the requirement sip (from versions: )
No matching distribution found for sip
(test_env) $ pip install pillow
Collecting pillow
  Downloading Pillow-4.0.0.tar.gz (11.1MB)
  (the source package is being downloaded, not the manylinux wheel)

The same packages can be installed via manylinux wheel if using the system's python on the same system, only fails on a self-compiled python. (My system is Arch Linux x86_64)

ARM builds?

This is a very speculative issue. I'm aware that PEP 513 says manylinux1 is only defined for i686 and x86_64. But it would be nice if instructions telling people to pip install compiled modules worked easily and quickly on things like the Raspberry Pi too. It might also be valuable to the few brave souls trying to make Python work on mobile platforms.

So this issue is a place to work out what would be needed.

  1. Architecture requirements: there are a lot of different versions of ARM, and there are confusingly similar-looking version schemes for the architectures (e.g. ARMv7) and cores (e.g. ARM11). There may also be optional modules. Debian supports three buckets of ARM architectures: armhf requires ARMv7 and a floating-point unit (hf = hardware float); armel supports older architectures and cores without an FPU; arm64 is for the newer 64-bit processors. In terms of raspis:
  • Raspi 0 and 1 have the ARMv6 architecture, so Debian armhf does not support them. They do have FPUs, though, so Raspbian is compiled to make use of this.
  • Raspi 2 has ARMv7.
  • Raspi 3 has a 64-bit ARMv8 architecture.
  2. Minimum library versions: these probably don't need to be all that old, because the systems I think this would target are likely to be running a relatively recent distro. But we'd need to work out what they are.

  3. Distro for building: what distro would be the equivalent of the CentOS 5.11 we use for building x86 and x64 manylinux wheels?

  4. Build platform: are there any CI services which offer ARM builds? Is virtualisation fast enough? Can we make it easy for people to build wheels on their own Raspberry Pi or similar device?

Plans for python3.6 image?

I'm curious whether there are any plans to have a Python3.6 manylinux image out soon after the final release next week.

Enable hardening of binaries included in the wheels built for manylinux

We package our Python software and all its dependencies as a Debian package for easy installation and distribution.

We run lintian on the deb package files and recently it began issuing the following warning:

W: clusterhq-python-flocker: hardening-no-relro opt/flocker/lib/python2.7/site-packages/msgpack/_packer.so
W: clusterhq-python-flocker: hardening-no-relro opt/flocker/lib/python2.7/site-packages/msgpack/_unpacker.so

(Our ref https://clusterhq.atlassian.net/browse/FLOC-4383)

We think it's because we recently updated to pip==8.1.1 which installs manylinux binary wheel files.
And the binaries in these wheels are not compiled with the hardening features that are required of binaries in Debian packages:

(venv)root@6dcaee731129:/# hardening-check /tmp/venv/lib/python2.7/site-packages/msgpack/*.so
/tmp/venv/lib/python2.7/site-packages/msgpack/_packer.so:
 Position Independent Executable: no, regular shared library (ignored)
 Stack protected: no, not found!
 Fortify Source functions: no, only unprotected functions found!
 Read-only relocations: no, not found!
 Immediate binding: no, not found!
/tmp/venv/lib/python2.7/site-packages/msgpack/_unpacker.so:
 Position Independent Executable: no, regular shared library (ignored)
 Stack protected: no, not found!
 Fortify Source functions: no, only unprotected functions found!
 Read-only relocations: no, not found!
 Immediate binding: no, not found!

Perhaps the manylinux build environment could set the necessary environment variables, e.g.:

dpkg-buildflags --export
export CFLAGS="-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security"
export CPPFLAGS="-D_FORTIFY_SOURCE=2"
export CXXFLAGS="-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security"
export FFLAGS="-g -O2 -fstack-protector --param=ssp-buffer-size=4"
export GCJFLAGS="-g -O2 -fstack-protector --param=ssp-buffer-size=4"
export LDFLAGS="-Wl,-Bsymbolic-functions -Wl,-z,relro"

Or use http://manpages.ubuntu.com/manpages/wily/man1/hardening-wrapper.1.html

Maybe related to #46

manylinux docker image producing broken/misnamed wheels?

Possibly related to #29 (?), gevent got a report in gevent/gevent#789 about bad unicode symbols from the wheel downloaded from PyPI. The 1.1.0 wheels worked, but the 1.1.1 wheels, built on an image that supported both ABIs, didn't. I'm not enough of an ABI expert to know exactly what's going on at first glance.

Since this wheel was just built and uploaded with the default manylinux docker image, I'm hoping somebody here might have an idea. Thanks!

CC @woozyking

Vault repo config is still broken

Due to the EOL-ing of CentOS 5, #102 recently patched the repo config. However, this still doesn't work:

$ docker run -it quay.io/pypa/manylinux1_x86_64 bash
[root@dac4bb86fa5b /]# yum update
http://107.158.252.35/centos/5/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again
[root@dac4bb86fa5b /]# 

There are two issues I can see:

  • the URL results in a 404; it looks like some things were shuffled around in the repo
  • pointing to a specific IP does not seem very reliable and defeats the purpose of DNS; why not just point to vault.centos.org?

Patching coming shortly.

Instructions to use the docker and make wheels without Travis

I saw https://github.com/pypa/python-manylinux-demo which is neat.
But it would also be nice to have instructions that I can simply run on my computer to create the wheels.

Something like:

Step 1: Install docker from the instructions at https://docs.docker.com/engine/installation/
Step 2: Pull the docker image: docker pull quay.io/pypa/manylinux1_x86_64 (For 64bit)
And so on ...

The Travis build example at https://github.com/pypa/python-manylinux-demo isn't as helpful in case I want to do it locally.

Python 2.6 builds return 'linux4' for sys.platform!!

$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp26-cp26m/bin/python -c "import sys; print(sys.platform)"
linux4
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp26-cp26mu/bin/python -c "import sys; print(sys.platform)"
linux4
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp27-cp27m/bin/python -c "import sys; print(sys.platform)"
linux2
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp27-cp27mu/bin/python -c "import sys; print(sys.platform)"
linux2
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp33-cp33m/bin/python -c "import sys; print(sys.platform)"
linux
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp34-cp34m/bin/python -c "import sys; print(sys.platform)"
linux
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp35-cp35m/bin/python -c "import sys; print(sys.platform)"
linux
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp36-cp36m/bin/python -c "import sys; print(sys.platform)"
linux

Linking to libpython.so (or not)

First of all, apologies in advance -- I'm new to building wheels, so this might be a very basic question but...

I'm trying to build a manylinux wheel for OpenCV, but the OpenCV build process for python, requires PYTHON_LIBRARY to be set to your libpython<version>.so.

As far as I understand, it is not recommended to link to libpython when building wheels. But what would be the alternative?

Is it the case that, as long as OpenCV links their library to libpython, it will not be possible to build a manylinux wheel for it?

python-config scripts contain flags to link with libpython

I understand from #69 that it is recommended that extension modules be compiled with references to the CPython ABI left undefined, and that libpythonX.Y.so is therefore intentionally left omitted from the docker image.

However, the python-config scripts in the Python installations still contain -lpythonX.Y options. It would be great if this could be removed so that build scripts that use python-config to detect Python installations work correctly.

SciPy 2017 tutorial proposal on packaging

Greetings,

I'm going to be putting in a proposal to teach a tutorial at SciPy this year on package building. I'm hoping to cover everything, including setup.py, wheels, auditwheel/manylinux, binary compatibility, conda-build, and conda-forge. Would anyone like to also join in on that proposal?

I would like to submit this proposal by Friday, Feb 24.

Cross-link similar issue at conda-forge: conda-forge/conda-forge.github.io#338

manylinux1_i686 uses linux_x86_64 tag

Hello,

I was trying to use the quay.io/pypa/manylinux1_i686 container to build 32-bit Python binaries. Unfortunately, the wheels created were marked with the

Tag: cp*-cp*m-linux_x86_64

(with * being the version being compiled against). Now I would have been expecting:

linux_i686

is that supposed to happen?

thanks
Frank

[Question] Any plans for musl based distros?

What are the plans, if any, for musl-based distributions? Musl-based distributions are popular in the container world because they are much lighter. Docker announced that they will be redoing all their official images to be Alpine Linux based. Will there be a separate tag?

Security hardening for the manylinux1 docker creation/distribution process

I am opening this issue to keep track of the open questions raised in the discussion at #44 (comment).

An attacker might find ways to silently install a rootkit in the binaries (especially the gcc and patchelf commands) of our quay.io hosted docker images. The attack could happen on quay.io, on github.com, on the travis build machine or on one of the third party resources we fetch software from in our build scripts (centos repositories, patchelf source repository and others). At the moment we have no easy way to detect such attacks.

One way we could at least detect that something is wrong would be to compute the sha256sum of all the binaries in our docker images and store that list of hashes offline; the hash of the hash list could also be pushed to an independent append-only time-stamped public log (for instance a dedicated twitter account).

We should also probably set up an automated CI bot to periodically recompute the sha256sum list of all the files in the public quay.io hosted images and compare it to the matching entry of the append-only time-stamped public log.
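The manifest step could be sketched as follows; the root directory name is a placeholder, and in practice this would run against an exported image filesystem:

```shell
# Sketch: build a sorted sha256 manifest of every file under a root
# directory (./image-rootfs is a placeholder for the exported image),
# then hash the manifest itself -- that single hash is what would be
# published to the append-only public log.
ROOT="./image-rootfs"
find "$ROOT" -type f -print0 | xargs -0 sha256sum | sort -k2 > manifest.txt
sha256sum manifest.txt
```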

Travis timeouts breaking updates

It looks like #72 was merged a month ago, but still hasn't landed on the quay.io image, because the build timed out. I restarted it last night, and it timed out again. I just restarted it a second time -- who knows, maybe it'll work.

Offending build: https://travis-ci.org/pypa/manylinux/builds/138611073

The limit appears to be just under 50 minutes, and all of our recent master builds (including the successful ones) have been right up against this limit: https://travis-ci.org/pypa/manylinux/builds

I'm not sure what the right solution is but we need to do something about this :-(

auditwheel is cloned from less-official repo

The manylinux/auditwheel repo was copied to pypa/auditwheel, but the build script in this pypa/manylinux repo still clones from manylinux/auditwheel. Right now they are synchronized, but it would feel more official to clone from the pypa repo, and would allow removing the other repo later.

trouble with auditwheel repair step since october 14

I use the manylinux1_x86_64 image to create Python wheels with this Dockerfile:

FROM quay.io/pypa/manylinux1_x86_64

ENV GLPK_VER="4.60"
RUN wget http://ftp.gnu.org/gnu/glpk/glpk-${GLPK_VER}.tar.gz -O - | tar xz
WORKDIR glpk-${GLPK_VER}
RUN ./configure && make install

and last week I could, with this test.sh,

#!/bin/bash
auditwheel show /io/wheelhouse/cobra-0.5.1b1-cp35-cp35m-linux_x86_64.whl

run

docker run --rm -v `pwd`:/io cobrapy_builder /io/test.sh

to get the expected output:

cobra-0.5.1b1-cp35-cp35m-linux_x86_64.whl is consistent with the
following platform tag: "linux_x86_64".

The wheel references the following external versioned symbols in
system-provided shared libraries: GLIBC_2.4.

The following external shared libraries are required by the wheel:
{
    "libc.so.6": "/lib64/libc-2.5.so",
    "libglpk.so.40": "/usr/local/lib/libglpk.so.40.1.0",
    "libm.so.6": "/lib64/libm-2.5.so",
    "libpthread.so.0": "/lib64/libpthread-2.5.so"
}

In order to achieve the tag platform tag "manylinux1_x86_64" the
following shared library dependencies will need to be eliminated:

libglpk.so.40

now however, I only get

nothing to do

cobra-0.5.1b1-cp35-cp35m-linux_x86_64.whl is consistent with the
following platform tag: "manylinux1_x86_64".

The wheel references no external versioned symbols from system-
provided shared libraries.

The wheel requires no external shared libraries! :)

I can circumvent that by installing pyelftools==0.23 (replacing the 0.24 version), but that only fixes the problem for Python 3.5, not 3.4 or 2.7 (actually I can't reproduce that; it seems to fix it for all Pythons), i.e.

#!/bin/bash
/opt/python/cp35-cp35m/bin/pip install pyelftools==0.23
auditwheel show /io/wheelhouse/cobra-0.5.1b1-cp35-cp35m-linux_x86_64.whl

Any ideas? The wheel file I'm testing with is here:
https://drive.google.com/open?id=0B9Jk-Vpwjhb0ME1HNEx0OFNNbVU

Wheel builds broken by CentOS repository going offline

CentOS5 just went end-of-life - see #96 .

In consequence (I guess) the repositories that the docker image expects have disappeared, and any manylinux build trying to access these repositories has broken, e.g.:

It looks like we need to update the docker image to use still-existing repos such as

http://vault.centos.org/5.11/

See: https://mail.python.org/pipermail/distutils-sig/2017-April/030362.html
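A possible fix, sketched below under assumptions: the exact patterns in the image's CentOS-Base.repo may differ, so the file should be inspected before applying these substitutions.

```shell
# Sketch: comment out the dead mirrorlist lines and point baseurl at the
# vault archive. Patterns are illustrative; verify them against the
# actual /etc/yum.repos.d/CentOS-Base.repo in the image first.
sed -i \
    -e 's|^mirrorlist=|#mirrorlist=|' \
    -e 's|^#\?baseurl=http://mirror.centos.org/centos/\$releasever|baseurl=http://vault.centos.org/5.11|' \
    /etc/yum.repos.d/CentOS-Base.repo
```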

newer openssl required by cryptography

I'm trying to build cryptography using the docker image. The openssl version that is installed there is 0.9.8u, which generates the following error in cryptography:

$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13) 
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from cryptography.hazmat.backends import default_backend
>>> default_backend()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 35, in default_backend
    _default_backend = MultiBackend(_available_backends())
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 22, in _available_backends
    "cryptography.backends"
  File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2235, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
    from cryptography.hazmat.backends.openssl.backend import backend
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py", line 47, in <module>
    from cryptography.hazmat.bindings.openssl import binding
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 250, in <module>
    _verify_openssl_version(Binding.lib.SSLeay())
  File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 230, in _verify_openssl_version
    "You are linking against OpenSSL 0.9.8, which is no longer "
RuntimeError: You are linking against OpenSSL 0.9.8, which is no longer support by the OpenSSL project. You need to upgrade to a newer version of OpenSSL.

Running yum install openssl is unhelpful, as 0.9.8e is the latest that's available for CentOS 5.

I noticed that the image build script already builds a newer openssl (for curl), but then deletes it. Maybe this can stay in the image and be available for modules like cryptography?
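Until that changes, a custom image can layer a newer OpenSSL on top of the official one. A sketch in the style of the Dockerfile above; the version number and install prefix are illustrative, not the project's official approach:

```dockerfile
FROM quay.io/pypa/manylinux1_x86_64

# Illustrative version; pick a then-supported 1.0.x release.
ENV OPENSSL_VER="1.0.2k"
RUN curl -sSL https://www.openssl.org/source/openssl-${OPENSSL_VER}.tar.gz | tar xz \
    && cd openssl-${OPENSSL_VER} \
    && ./config --prefix=/usr/local/ssl shared \
    && make && make install
```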

Provide a 32-bit docker image

I guess it would be good at some point to provide a 32-bit manylinux1 image.

Actually arranging this will be a pain in the butt, because we'll definitely want to share build scripts between the 32-bit and 64-bit builds, but docker has a strong idea that every docker "build context" must be totally self-contained. Like, you're not even allowed to symlink out of it, because that would be bad. Maybe this will get better in the future, but AFAICT right now the only way to make this work reasonably well given docker + quay.io's limitations is:

  • Make git repository git-common for the actual build scripts
  • Make git repository git-64 for the 64-bit Dockerfile, and include git-common as a git submodule
  • Make git repository git-32 for the 32-bit Dockerfile, and include git-common as a git submodule
  • Make quay.io repository quay-64 pointing at git repository git-64, and have it trigger a rebuild whenever there's a commit to git-common or git-64
  • Make quay.io repository quay-32 pointing at git repository git-32, and have it rebuild whenever there's a commit to git-common or git-32

In theory the git-64 and git-32 repositories could be the same, with the Dockerfiles in different subdirectories. But we'd still need git-common to be separate (so that it could be checked out into both subdirectories), and we'd still need quay-64 and quay-32 to be separate so that users can specify which docker image they want to pull.

[Hopefully in the future quay.io will start allowing different Dockerfiles to share an overlapping build context (currently impossible - hopefully they'll update this doc page if that changes :-)), and then we could combine all the git repositories into one git repo that feeds multiple quay.io repos.]

This is all doable enough, I guess. The biggest hassle is that getting CI working will be difficult with everything spread out over multiple repos.

Double library references after auditwheel repair

Sorry if this is obvious, but I tried for fun to manually create a manylinux wheel for h5py from a shell inside the provided x86_64 Docker container. However, it seems the rpath fixup didn't go too well for libz:

After the fixed up wheel was installed:

[root@a6722ca968b0 h5py-master]# ldd /opt/_internal/cpython-3.4.4/lib/python3.4/site-packages/h5py/h5a.cpython-34m.so
        linux-vdso.so.1 =>  (0x00007ffe2095d000)
        libhdf5-eb619e28.so.100.0.0 => /opt/_internal/cpython-3.4.4/lib/python3.4/site-packages/h5py/.libs/libhdf5-eb619e28.so.100.0.0 (0x00007f3e80eb2000)
        libhdf5_hl-250248ee.so.100.0.0 => /opt/_internal/cpython-3.4.4/lib/python3.4/site-packages/h5py/.libs/libhdf5_hl-250248ee.so.100.0.0 (0x00007f3e80c89000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f3e80a67000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f3e8070e000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f3e80504000)
        libz-a147dcb0.so.1.2.3 => not found
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f3e80300000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f3e8007c000)
        libz-a147dcb0.so.1.2.3 => /opt/_internal/cpython-3.4.4/lib/python3.4/site-packages/h5py/.libs/./libz-a147dcb0.so.1.2.3 (0x00007f3e7fe67000)
        /lib64/ld-linux-x86-64.so.2 (0x00005606a1cff000)

Notice the two references to libz-a147dcb0.so.1.2.3, one of them found, the other one not. Any idea what could have gone wrong, or how I can debug further? The bundling of libhdf5 seems to have worked fine.
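For debugging, the ELF metadata of the repaired extension can be inspected directly. A sketch, using the extension name from the ldd output above as an example path:

```shell
# Sketch: list the DT_NEEDED entries and rpath of the repaired extension.
# A duplicate NEEDED entry, or an rpath that only covers one of two
# references, would explain the "not found" line in ldd.
SO="h5a.cpython-34m.so"   # path to the installed extension module
readelf -d "$SO" | grep -E 'NEEDED|RPATH|RUNPATH'
patchelf --print-rpath "$SO"
```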

Maintainer for manylinux image?

Are there one or more official maintainers for this image? I ask because I am getting the impression maintenance is starting to drift, and it would be good to establish whether there is anyone who feels able to take responsibility for maintenance.

manylinux1 packages contain unstripped, but unstrippable, shared libraries

There are manylinux1 packages on PyPI that contain unstripped shared libraries that cause strip to fail.

dev@devenv:~$ virtualenv manylinux1_relocatable_test
New python executable in /home/dev/manylinux1_relocatable_test/bin/python
Installing setuptools, pip, wheel...done.
dev@devenv:~$ . manylinux1_relocatable_test/bin/activate
(manylinux1_relocatable_test) dev@devenv:~$ pip install cffi numpy
Collecting cffi
  Using cached cffi-1.7.0-cp27-cp27mu-manylinux1_x86_64.whl
Collecting numpy
  Using cached numpy-1.11.1-cp27-cp27mu-manylinux1_x86_64.whl
Collecting pycparser (from cffi)
Installing collected packages: pycparser, cffi, numpy
Successfully installed cffi-1.7.0 numpy-1.11.1 pycparser-2.14
(manylinux1_relocatable_test) dev@devenv:~$ file manylinux1_relocatable_test/lib/python2.7/site-packages/_cffi_backend.so manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/libopenblasp-r0-39a31c03.2.18.so
manylinux1_relocatable_test/lib/python2.7/site-packages/_cffi_backend.so:                             ELF 64-bit LSB  shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=d272756d64640cc3e6584d1a3db4ae0ae2990b48, not stripped
manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/libopenblasp-r0-39a31c03.2.18.so: ELF 64-bit LSB  shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=85e6780ba62dd077bd2fbe1e765763d30ae25f10, not stripped
(manylinux1_relocatable_test) dev@devenv:~$ strip manylinux1_relocatable_test/lib/python2.7/site-packages/_cffi_backend.so manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/libopenblasp-r0-39a31c03.2.18.so
BFD: manylinux1_relocatable_test/lib/python2.7/site-packages/stG8EFkj: Not enough room for program headers, try linking with -N
strip:manylinux1_relocatable_test/lib/python2.7/site-packages/stG8EFkj[.note.gnu.build-id]: Bad value
BFD: manylinux1_relocatable_test/lib/python2.7/site-packages/stG8EFkj: Not enough room for program headers, try linking with -N
strip:manylinux1_relocatable_test/lib/python2.7/site-packages/stG8EFkj: Bad value
BFD: manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/stZ7PY7C: Not enough room for program headers, try linking with -N
strip:manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/stZ7PY7C[.note.gnu.build-id]: Bad value
BFD: manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/stZ7PY7C: Not enough room for program headers, try linking with -N
strip:manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/stZ7PY7C: Bad value

This behavior has been observed on Debian 8, Ubuntu 14.04, and Ubuntu 16.04.

If this is not a general artifact of the manylinux build process (and therefore not applicable to this project specifically), but a common (mis)configuration that affects multiple packages' manylinux1 build settings (e.g. cffi, numpy), then please let me know and I'll close this issue (and reopen it against the relevant projects).

Missing libpython with Debian default Python install

I am testing manylinux numpy wheels.

In particular, I am testing this guy: http://nipy.bic.berkeley.edu/manylinux/numpy-1.10.4-cp27-none-linux_x86_64.whl

With a default Python install, starting from either Wheezy or Jessie:

docker run -ti --rm -v $PWD:/io debian:latest /bin/bash
docker run -ti --rm -v $PWD:/io tianon/debian:wheezy /bin/bash

and running this script:

apt-get update 
apt-get install -y python curl 
curl -sLO https://bootstrap.pypa.io/get-pip.py 
python get-pip.py 
pip install -f https://nipy.bic.berkeley.edu/manylinux numpy nose 
python -c "import numpy; numpy.test()" 

I get this:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 180, in <module>
    from . import add_newdocs
  File "/usr/local/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in <module>
    from numpy.lib import add_newdoc
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in <module>
    from .type_check import *
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in <module>
    import numpy.core.numeric as _nx
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/__init__.py", line 14, in <module>
    from . import multiarray
ImportError: libpython2.7.so.1.0: cannot open shared object file: No such file or directory

Sure enough, in order for the wheel to work, I need:

apt-get install libpython2.7

Maybe we need to add a check / informative error message for the presence of libpython?
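Such a check could look like the following hypothetical sketch (not part of any existing tool; the suggested apt-get package name is Debian-specific and illustrative):

```shell
# Hypothetical check: try to dlopen libpythonX.Y.so.1.0 and suggest the
# missing package. Both the soname suffix and the apt-get hint are
# illustrative assumptions.
PYVER=$(python -c 'import sys; print("%d.%d" % sys.version_info[:2])')
if python -c "import ctypes; ctypes.CDLL('libpython${PYVER}.so.1.0')" 2>/dev/null; then
    echo "libpython${PYVER} is available"
else
    echo "libpython${PYVER}.so.1.0 not found; try: apt-get install libpython${PYVER}" >&2
fi
```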

system certificate authorities file not loaded by default

Our wheel building process contacts an https server, but fails with an [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed error.

Looking into it in a bit more detail, I found that the system has a CA file at /etc/pki/tls/certs/ca-bundle.crt, but it's not loaded by default, probably because the Pythons expect it at a different location?

As a workaround I call export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt, but it would be great to wire it up such that the system CA file is used by default.
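The mismatch can be confirmed from inside the image by asking Python where it looks by default (ssl.get_default_verify_paths needs Python 2.7.9+/3.4+; the bundle path is the one from the report above):

```shell
# Show the compiled-in default CA locations for this Python, then point
# it at the system bundle as the workaround described above.
python -c "import ssl; print(ssl.get_default_verify_paths())"
export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt
```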

Unicode SOABI-related changes

Given discussion here and here, it sounds like we can/should drop the UCS4-only requirement from PEP 513 after all.

If that's right, then I guess there are a few things to do:

  • Update PEP text
  • Update pip pull request
  • Update our check-manylinux.py script
  • Update auditwheel to remove the UCS2 check (but possibly it should unconditionally require the use of an SOABI tag, just to nudge people into doing the right thing?)
  • Update the docker image to include both UCS2 and UCS4 builds for python 2.6 / 2.7. (Also, make sure that it includes a recent-enough wheel package.)

For the docker images, I propose the following layout:

/opt/2.7.11-ucs2
/opt/2.7.11-ucs4
/opt/2.7-ucs2 -> 2.7.11-ucs2
/opt/2.7-ucs4 -> 2.7.11-ucs4
/opt/2.7 -> 2.7-ucs4
