rapidsai / integration
RAPIDS - combined conda package & integration tests for all of RAPIDS libraries
License: Apache License 2.0
Just to report: I noticed NV copyright headers in files while reviewing a PR. Not sure if that's something we want to update for this repo.
OS: Ubuntu 22.04.1 LTS on bare metal
CPU: 11th Gen Intel® Core™ i7-11700K @ 3.60GHz × 16
GPU: NVIDIA GeForce RTX 3060.
I followed default instructions from RAPIDS release selector:
conda create -n rapids-23.02 -c rapidsai-nightly -c conda-forge -c nvidia rapids=23.02 python=3.9 cudatoolkit=11.5
Conda installation raises UnsatisfiableError:
Collecting package metadata (repodata.json): done
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package libstdcxx-ng conflicts for:
python=3.9 -> libstdcxx-ng[version='>=11.2.0|>=7.5.0|>=9.3.0|>=7.3.0']
python=3.9 -> libffi[version='>=3.4,<4.0a0'] -> libstdcxx-ng[version='>=4.9|>=9.4.0|>=7.2.0']
Package python conflicts for:
rapids=23.02 -> python[version='>=3.10,<3.11.0a0|>=3.8,<3.9.0a0']
rapids=23.02 -> cucim=23.02 -> python[version='3.10.*|3.8.*|>=3.11,<3.12.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0|>=3.8|>=3.6|>=3.7|>=3.6,<3.7.0a0']
python=3.9
Package _openmp_mutex conflicts for:
rapids=23.02 -> numba[version='>=0.56.2'] -> _openmp_mutex[version='>=5.1']
python=3.9 -> libgcc-ng[version='>=12'] -> _openmp_mutex[version='>=4.5']
cudatoolkit=11.5 -> libgcc-ng[version='>=12'] -> _openmp_mutex[version='>=4.5']
Package cudatoolkit conflicts for:
rapids=23.02 -> cucim=23.02 -> cudatoolkit[version='10.0|10.0.*|10.1|10.1.*|10.2|10.2.*|11.0|11.0.*|11.1|11.1.*|>=11,<12.0a0|>=11.2,<12|9.2|9.2.*|11.4|11.4.*|>=11.2,<12.0a0|>=11.0,<=11.8|>=11.0,<=11.7|>=11.0,<=11.6|11.0.*|11.1.*|10.2.*']
cudatoolkit=11.5
rapids=23.02 -> cudatoolkit=11
Package _libgcc_mutex conflicts for:
python=3.9 -> libgcc-ng[version='>=12'] -> _libgcc_mutex[version='*|0.1',build='main|main|conda_forge']
cudatoolkit=11.5 -> libgcc-ng[version='>=12'] -> _libgcc_mutex[version='*|0.1',build='main|main|conda_forge']
Package libgcc-ng conflicts for:
python=3.9 -> libgcc-ng[version='>=10.3.0|>=12|>=9.4.0|>=9.3.0|>=7.5.0|>=11.2.0|>=7.3.0']
python=3.9 -> zlib[version='>=1.2.11,<1.3.0a0'] -> libgcc-ng[version='>=4.9|>=7.2.0']
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.35=0
- feature:|@/linux-64::__glibc==2.35=0
- cudatoolkit=11.5 -> __glibc[version='>=2.17,<3.0.a0']
- cudatoolkit=11.5 -> libgcc-ng[version='>=10.3.0'] -> __glibc[version='>=2.17']
- rapids=23.02 -> cucim=23.02 -> __glibc[version='>=2.17|>=2.17,<3.0.a0']
Your installed version is: 2.35
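For readers puzzling over logs like this one, note that each quoted spec can be satisfiable on its own while the joint solve still fails. The toy sketch below of conda-style version-spec matching (an illustration only, not conda's real solver; `parse` and `matches` are made-up helpers) shows that cudatoolkit 11.5 individually satisfies the specs quoted above:

```python
# Toy sketch of conda-style version-spec matching (an illustration only — this
# is NOT conda's actual solver, and `parse`/`matches` are made-up helpers).
# Point: cudatoolkit 11.5 satisfies each quoted spec on its own, so the failure
# comes from the joint solve across all packages, not from any single pin.

def parse(v):
    """'11.5' -> (11, 5) for simple numeric versions."""
    return tuple(int(p) for p in v.split("."))

def matches(spec, version):
    """Check a comma-separated spec like '>=11.2,<12' against a version."""
    ver = parse(version)
    for clause in spec.split(","):
        if clause.endswith(".*"):                  # '11.4.*' -> prefix match
            pre = parse(clause[:-2])
            if ver[:len(pre)] != pre:
                return False
        elif clause.startswith(">="):
            if ver < parse(clause[2:]):
                return False
        elif clause.startswith("<="):
            if ver > parse(clause[2:]):
                return False
        elif clause.startswith("<"):
            if ver >= parse(clause[1:]):
                return False
        else:                                      # bare '11' -> prefix match
            pre = parse(clause)
            if ver[:len(pre)] != pre:
                return False
    return True

# Specs lifted from the solver output above, checked against cudatoolkit 11.5:
print(matches("11", "11.5"))            # rapids=23.02 -> cudatoolkit=11
print(matches(">=11.2,<12", "11.5"))    # one of the cucim constraints
print(matches(">=11.0,<=11.8", "11.5"))
```

All three print `True`, which is why the report blames the combined environment rather than the cudatoolkit pin alone.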
conda info:
active environment : None
shell level : 0
user config file : /home/mauricio/.condarc
populated config files : /home/mauricio/.condarc
conda version : 23.1.0
conda-build version : not installed
python version : 3.10.8.final.0
virtual packages : __archspec=1=x86_64
__cuda=12.0=0
__glibc=2.35=0
__linux=5.15.0=0
__unix=0=0
base environment : /home/mauricio/miniconda3 (writable)
conda av data dir : /home/mauricio/miniconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/mauricio/miniconda3/pkgs
/home/mauricio/.conda/pkgs
envs directories : /home/mauricio/miniconda3/envs
/home/mauricio/.conda/envs
platform : linux-64
user-agent : conda/23.1.0 requests/2.28.1 CPython/3.10.8 Linux/5.15.0-58-generic ubuntu/22.04.1 glibc/2.35
UID:GID : 1000:1000
netrc file : None
offline mode : False
Follows #193
Given that the gpuCI jobs need to be updated for this to work, this should be done after the release so as not to impact the open PRs that still have the variable defined.
Hi everyone,
following the Medium article and the PyPI package description [1, 2], I would like to ask whether there is any plan to reevaluate publishing built wheel files to PyPI.
I see this repo is watched by just 3 people, so hopefully this issue finds some response. Thanks in advance.
[1] https://medium.com/rapids-ai/rapids-0-7-release-drops-pip-packages-47fc966e9472
[2] https://pypi.org/project/rapidsai/
As RAPIDS deployment scenarios have grown, we have customers integrating with Apache Hive & Apache Kafka.
I believe we decided we didn't want GPU CI to have to spin up "heavy" services for integration tests, and so they needed someplace else to live.
The name rapidsai/integration suggests it's a potential home for them, but it looks like integration tests are limited to software installable from conda.
Can we consider adding Docker container tests here, or should we continue self-supporting these Hive and Kafka tests?
Hi RAPIDSAI team,
Not sure where to file this one, but here goes.
I'm having trouble getting pytorch and cuml to play nice together in a conda environment.
$ conda create -n cuml-test python=3.8 pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch -c conda-forge
$ conda activate cuml-test
$ conda install -n rapids-0.18 -c rapidsai -c nvidia -c conda-forge -c defaults cuml=0.18 cudatoolkit=11.0
Collecting package metadata (current_repodata.json): done
Solving environment: -
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- rapidsai/linux-64::ucx==1.9.0+gcd9efd3=cuda11.0_0
- conda-forge/linux-64::libfaiss==1.6.3=h328c4c8_3_cuda
- rapidsai/linux-64::libcugraph==0.18.0=cuda11.0_g65ec965f_0
- rapidsai/linux-64::cugraph==0.18.0=py38_g65ec965f_0
- rapidsai/linux-64::cuml==0.18.0=cuda11.0_py38_gb5f59e005_0
- rapidsai/linux-64::ucx-py==0.18.0=py38_gcd9efd3_0
- rapidsai/linux-64::rapids==0.18.0=cuda11.0_py38_g334c31e_223
- rapidsai/linux-64::libcuml==0.18.0=cuda11.0_gb5f59e005_0
done
==> WARNING: A newer version of conda exists. <==
current version: 4.8.3
latest version: 4.9.2
Please update conda by running
$ conda update -n base -c defaults conda
# All requested packages already installed.
(cuml-test) $ conda list | grep -i cuml
# packages in environment at /home/willprice/.conda/envs/cuml-test:
I cannot install cuml into this basic environment created by following the pytorch setup described on https://pytorch.org/.
Weirdly, conda says that "all requested packages are already installed", yet cuml isn't installed, as evidenced by the conda list command.
Any tips on how to get pytorch and cuml to play nicely?
Thanks
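One possible explanation for the transcript above — an assumption on my part, sketched with a toy model rather than real conda: `conda install -n rapids-0.18 ...` targets an environment named rapids-0.18, not the activated cuml-test, so cuml never lands in the environment that `conda list` is inspecting.

```python
# Toy model of named conda environments — an assumption about the failure
# above, not something conda itself reports. The point: `conda install
# -n rapids-0.18 ...` targets an env named "rapids-0.18", not the activated
# "cuml-test", so cuml never lands in the env that `conda list` inspects.

envs = {
    "cuml-test": {"python", "pytorch", "torchvision", "cudatoolkit"},
    "rapids-0.18": set(),  # hypothetical pre-existing env
}

def conda_install(envs, packages, n=None, active="cuml-test"):
    """Like the real CLI flag, -n overrides the currently active environment."""
    target = n if n is not None else active
    envs.setdefault(target, set()).update(packages)
    return target

target = conda_install(envs, {"cuml"}, n="rapids-0.18")
print("cuml" in envs["cuml-test"])  # False: the active env was never touched
print("cuml" in envs[target])       # True: it went to rapids-0.18 instead
```

If this guess is right, dropping the `-n rapids-0.18` flag (or replacing it with `-n cuml-test`) in the install command would be the fix to try first.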
We should add testing of the conda-pack.sh scripts to PR CI.
This will require some refactoring since the script currently always uploads to S3 which probably isn't necessary for PRs.
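A minimal sketch of that refactor, with the caveat that the function names and the `UPLOAD_TO_S3` variable are assumptions (the `echo` calls stand in for the real conda-pack and `aws s3 cp` invocations): gate the upload so PR CI can exercise the packing step without touching S3.

```shell
# Hypothetical refactor sketch for conda-pack.sh (names are assumptions):
# gate the S3 upload behind an env var so PR CI can skip it.

pack_artifact() {
  # stand-in for the real conda-pack invocation
  echo "packing environment to rapids.tar.gz"
}

upload_artifact() {
  # stand-in for the currently unconditional `aws s3 cp`
  if [ "${UPLOAD_TO_S3:-1}" = "1" ]; then
    echo "uploading rapids.tar.gz to S3"
  else
    echo "skipping S3 upload (PR CI)"
  fi
}

pack_artifact
UPLOAD_TO_S3=0 upload_artifact   # PR CI would run with uploads disabled
```

Release CI would leave `UPLOAD_TO_S3` unset (defaulting to 1), so the existing upload path is unchanged.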
Current work on rapidsai/cudf#5782 requires that python-confluent-kafka be present in the conda build environment.
Currently all of the different RAPIDS libraries have conda environment files to specify a development environment where you can build the library. For example here's one from cudf: https://github.com/rapidsai/cudf/blob/branch-0.16/conda/environments/cudf_dev_cuda10.2.yml
Additionally, there's conda recipes: https://github.com/rapidsai/cudf/blob/branch-0.16/conda/recipes/cudf/meta.yaml
On top of that, we have the integration repo to define a common build environment for all RAPIDS libraries, and in it we have a build metapackage: https://github.com/rapidsai/integration/blob/branch-0.16/conda/recipes/rapids-build-env/meta.yaml which grabs its versions from https://github.com/rapidsai/integration/blob/branch-0.16/conda/recipes/versions.yaml.
This is non-ideal, as the dependency versions are currently defined in 3 different places, and then redefined across libraries as well.
This issue is to brainstorm a solution for centralizing the dependency versions and maintaining a superset list of all dependencies needed to build every RAPIDS library from source. Beyond centralizing the versions, our goal should be to make it easy for developers to build a development environment and to update its dependencies.
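The brainstorm can be sketched concretely. Assuming hypothetical per-library pin maps (the names and versions below are illustrative, not the real YAML contents), a single merge step yields the superset environment and surfaces the pins that disagree and need reconciling:

```python
# Sketch of the "single source of truth" idea. The pin maps below are
# hypothetical stand-ins for per-library environment files, NOT the real
# cudf/integration YAML contents.

cudf_dev = {"cmake": ">=3.14", "numba": ">=0.49", "pandas": ">=1.0,<1.2"}
cuml_dev = {"cmake": ">=3.14", "numba": ">=0.48", "treelite": "=0.93"}

def merge_pins(*dep_maps):
    """Merge per-library pins; keep the first pin seen, record disagreements."""
    merged, conflicts = {}, {}
    for deps in dep_maps:
        for name, pin in deps.items():
            if name in merged and merged[name] != pin:
                conflicts.setdefault(name, {merged[name]}).add(pin)
            merged.setdefault(name, pin)
    return merged, conflicts

superset, conflicts = merge_pins(cudf_dev, cuml_dev)
print(sorted(superset))  # union of all build dependencies
print(conflicts)         # numba is pinned differently and needs reconciling
```

A centralized versions file would effectively be the `superset` output, with the `conflicts` report acting as the review gate whenever a library tries to diverge.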
Dear all,
The current stable rapids-22.12 conda installation instruction is:
conda create -n rapids-22.12 -c rapidsai -c conda-forge -c nvidia rapids=22.12 python=3.9 cudatoolkit=11.5
That instruction uses a recipe that has an upper bound on networkx>=2.5.1,<=2.6.3.
mauricio@rig:~$ conda activate rapids-22.12
(rapids-22.12) mauricio@rig:~$ conda list "networkx|matplotlib"
# packages in environment at /home/mauricio/miniconda3/envs/rapids-22.12:
#
# Name Version Build Channel
matplotlib-base 3.6.3 py39he190548_0 conda-forge
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
networkx 2.6.3 pyhd8ed1ab_1 conda-forge
Notice rapids-22.12 has no upper bound against the current conda-forge matplotlib-3.6.
That specific networkx-2.6.3 has an old known bug related to matplotlib-3.6, which was fixed in networkx-2.8.6.
That bug prevents any user of rapids-22.12 from drawing networkx graphs. I'm still not even sure why rapids pins networkx at all; as far as I understood, it's "just" related to the cugraph tests.
If one wants to reproduce the networkx-2.6.3 error:
import matplotlib.pyplot as plt
import networkx as nx
G = nx.complete_graph(5)
nx.draw(G)
plt.show()
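Until the recipe is fixed, user code can at least detect the affected combination up front. This is a hedged workaround sketch, not an official fix: `draw_is_safe` is a made-up helper based on the fix landing in networkx-2.8.6, and the version logic is plain Python so it needs neither networkx nor matplotlib installed.

```python
# Hedged workaround sketch (made-up helper, not an official fix): the draw bug
# affects networkx 2.6.3 together with matplotlib 3.6 and was fixed in
# networkx 2.8.6, so treat anything at or past the fix release as safe.

def vtuple(v):
    return tuple(int(p) for p in v.split("."))

def draw_is_safe(networkx_version, fixed_in="2.8.6"):
    return vtuple(networkx_version) >= vtuple(fixed_in)

print(draw_is_safe("2.6.3"))  # False: where the rapids-22.12 pin lands
print(draw_is_safe("2.8.6"))  # True
```

In practice one would compare `networkx.__version__` against the fix release before calling `nx.draw`, and fall back to another rendering path when it is older.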
I already noticed that the rapidsai-nightly conda channel has a rapids-23.02 recipe that removed the networkx upper bound, solving the issue I raised.
As I can't use networkx with the stable recipe and also can't move to the nightly channel, I kindly ask you to fix the rapids-22.12 recipe by either removing the networkx upper bound or at least raising the limit to networkx-2.8.6.
Thank you.
Numba is currently pinned at 0.45.1. 0.46 has a critical bug fix for UCX-cuDF interactions. Would it be possible to relax the numba pinning to numba>=0.45.1?
cuSpatial no longer depends on GDAL. This repo still does, but I don't know whether that dependency comes from cuSpatial or another library. How can we determine this? If it comes from cuSpatial, can we please remove the following?
integration/conda/recipes/versions.yaml, line 75 in 599e788
Is your feature request related to a problem? Please describe.
Issue rapidsai/cudf#6096 caused an expensive fire drill involving core members across multiple projects.
Assuming the current stable release breaks again in the future for one of many reasons, and given that stable releases are out for ~6 weeks, a useful thing may be daily build tests of the last 2 releases (~3 months).
Describe the solution you'd like
I'm less clear on the relative value of running the full gamut across CUDA etc. versions. AFAICT, the 80% case is probably conda: Python version x set of RAPIDS packages.
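That "last 2 releases x Python versions" idea reduces to a small job matrix. The release names and Python versions below are illustrative assumptions, not a committed support list:

```python
# Sketch of the proposed nightly matrix. The release names and python
# versions are illustrative assumptions, not a committed support list.

from itertools import product

stable_releases = ["22.12", "23.02"]   # "last 2 releases (~3 months)"
python_versions = ["3.8", "3.9", "3.10"]

matrix = [
    {"rapids": rel, "python": py}
    for rel, py in product(stable_releases, python_versions)
]
print(len(matrix))  # 6 daily jobs
```

Each entry would map to one daily `conda create` smoke test, which keeps the cost far below running the full CUDA-version gamut.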
There are two rapids packages, one from the rapidsai channel, and another from the conda-forge channel. For some reason, conda wants to install the latter rather than the former in some cases.
I'm currently getting this issue when trying to install rapids alongside python 3.10. Is rapids really incompatible, or is this a packaging issue on the conda end? The python=3.1 request strikes me as an error.
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
ResolvePackageNotFound:
- python=3.1
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
The following specifications were found to be incompatible with your system:
Your installed version is: 2.31