Comments (5)
See this CI run for full details
Oh interesting. Thanks for running a test PR, @jrbourbeau! I'll look into a fix.
from dask.
I think the larger context here is some breakage that cropped up in cuDF this week around libarrow 16.1.0, which should generally be resolved now. So I wouldn't expect to see this warning in GPU CI on subsequent runs.
Beyond that specific breakage, is there preferable behavior we'd want from pytest here? IMO, I prefer this noisier output when the module is available but broken versus the more obfuscated errors we see when the broken package is silently imported and used in testing.
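For context, here's a minimal sketch of the `pytest.importorskip` behavior being discussed. The module names are illustrative, and the warn-instead-of-silently-skip behavior for modules that exist but fail to import is from pytest >= 8.2:

```python
import pytest

# A module that is simply absent: importorskip raises pytest's Skipped
# exception, so the test is skipped rather than failed.
try:
    pytest.importorskip("module_that_does_not_exist")
except pytest.skip.Exception:
    print("skipped: module not found")

# A module that imports cleanly is returned as usual.
json_mod = pytest.importorskip("json")
print(json_mod.dumps({"ok": True}))

# Since pytest 8.2, a module that *exists* but raises during import
# (e.g. a compiled dependency broken by an ABI change, as with the
# libarrow issue above) produces a visible warning instead of a silent
# skip; passing exc_type=ImportError opts back into the quieter behavior.
```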
Thanks @charlesbluca. Running CI over in #11143 just to double check that things look good.
> IMO, I prefer this noisier output when the module is available but broken versus the more obfuscated errors we see when the broken package is silently imported and used in testing.
Same. Given that's what pytest is doing now, I don't think we need any code changes on our end.
Indeed the libarrow issue has been resolved (🎉), but there are still a couple of other failures like this:
____________________ test_groupby_grouper_dispatch[tasks-b] ____________________
[gw1] linux -- Python 3.10.14 /opt/conda/envs/dask/bin/python3.10

key = 'b'

    @pytest.mark.gpu
    @pytest.mark.parametrize("key", ["a", "b"])
    def test_groupby_grouper_dispatch(key):
        cudf = pytest.importorskip("cudf")

        # not directly used but must be imported
        pytest.importorskip("dask_cudf")  # noqa: F841

        pdf = pd.DataFrame(
            {
                "a": ["a", "b", "c", "d", "e", "f", "g", "h"],
                "b": [1, 2, 3, 4, 5, 6, 7, 8],
                "c": [1.0, 2.0, 3.5, 4.1, 5.5, 6.6, 7.9, 8.8],
            }
        )
        gdf = cudf.from_pandas(pdf)

        pd_grouper = grouper_dispatch(pdf)(key=key)
        gd_grouper = grouper_dispatch(gdf)(key=key)

        # cuDF's numeric behavior aligns with numeric_only=True
        expect = pdf.groupby(pd_grouper).sum(numeric_only=True)
>       got = gdf.groupby(gd_grouper).sum()

dask/dataframe/tests/test_groupby.py:2996:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/conda/envs/dask/lib/python3.10/site-packages/cudf/core/mixins/mixin_factory.py:11: in wrapper
    return method(self, *args1, *args2, **kwargs1, **kwargs2)
/opt/conda/envs/dask/lib/python3.10/site-packages/cudf/core/groupby/groupby.py:759: in _reduce
    return self.agg(op)
/opt/conda/envs/dask/lib/python3.10/site-packages/nvtx/nvtx.py:116: in inner
    result = func(*args, **kwargs)
/opt/conda/envs/dask/lib/python3.10/site-packages/cudf/core/groupby/groupby.py:631: in agg
    ) = self._groupby.aggregate(columns, normalized_aggs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   ???
E   TypeError: function is not supported for this dtype: sum

groupby.pyx:192: TypeError
See this CI run for full details
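The `numeric_only` distinction that the failing test comments on can be illustrated on the pandas side alone. This sketch doesn't touch cudf/dask_cudf; it only shows what the expected frame looks like:

```python
import pandas as pd

# Smaller copy of the test's frame: a string column, an int key column,
# and a float column.
pdf = pd.DataFrame(
    {
        "a": ["a", "b", "c", "d"],
        "b": [1, 2, 3, 4],
        "c": [1.0, 2.0, 3.5, 4.1],
    }
)

# numeric_only=True drops the string column "a" before summing, which is
# why the test compares against cuDF's default numeric-only behavior.
expect = pdf.groupby("b").sum(numeric_only=True)
print(expect.columns.tolist())  # ["c"]
```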
Hopefully #11144 will resolve the "real" gpuci failure.