Comments (12)
Do you know if float128 would be sufficient? The biggest int type in numpy (and therefore pandas, the target of fastparquet) is 64-bit. It seems to me like a bad idea to construct an object array of big-ints, although if that's your only option...
I can't follow your traceback, above, which only seems to be executing within ibis - perhaps it cannot understand numbers of such size either?
from fastparquet.
It is surprising and unfortunate that NumPy doesn't implement a Decimal DType. How does Parquet encode decimals? Perhaps we can encode and decode these specially to and from something?
@jreback, I take it that Pandas uses float for decimals?
from fastparquet.
We don't handle the Decimal type very well either.
from fastparquet.
In practice, do people use floats or Python Decimal objects?
I guess this is one problem where we'll probably have to wait for the underlying stack to improve (pandas 2.0 or some numpy evolution).
from fastparquet.
For me np.float32 would be sufficient; I won't be able to use bigger types anyway, since I want to create a huge feature matrix from the numbers. My problem is that I can't read the parquet files of my table. The int.from_bytes call kind of works, but I wondered if there's a better way to convert a numpy array of dtype |S16 into a float array. The parquet files are created by an ibis table expression, but then I download the files directly via hdfs.get.
from fastparquet.
Is this helpful?
In [1]: import numpy as np
In [2]: x = np.array(['1.1', '2.2', '3.3'], dtype='S3')
In [3]: x.astype(np.float32)
Out[3]: array([ 1.10000002, 2.20000005, 3.29999995], dtype=float32)
from fastparquet.
That throws a ValueError. I don't know exactly how decimals are encoded in parquet, but the array of bytes can be decoded like this:
x = np.array([b'',
              b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1e\\',
              b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1d\\',
              b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\r{',
              b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x19)'],
             dtype='|S32')
np.array([int.from_bytes(d, byteorder='big', signed=False) for d in x]) * 0.1
from fastparquet.
@martindurant would know more, but this might be helpful: https://github.com/Parquet/parquet-format/blob/master/LogicalTypes.md#decimal
The primitive type stores an unscaled integer value. For byte arrays, binary and fixed, the unscaled number must be encoded as two's complement using big-endian byte order (the most significant byte is the zeroth element). The scale stores the number of digits of that value that are to the right of the decimal point, and the precision stores the maximum number of digits supported in the unscaled value.
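Following that description, a per-value decode might look like the sketch below. This is an illustration, not fastparquet's code; `decode_decimal` is a hypothetical helper, and the scale is assumed to come from the schema's decimal metadata. Note that the spec's two's-complement rule means the bytes should be read as a signed integer, unlike the `signed=False` workaround above.

```python
from decimal import Decimal

def decode_decimal(raw: bytes, scale: int) -> Decimal:
    # Per the spec: the unscaled value is a big-endian two's-complement
    # integer; `scale` counts the digits right of the decimal point.
    # (`decode_decimal` is a hypothetical helper, not fastparquet API.)
    unscaled = int.from_bytes(raw, byteorder='big', signed=True)
    return Decimal(unscaled).scaleb(-scale)

decode_decimal(b'\x00\x1e\x5c', 1)  # Decimal('777.2')
decode_decimal(b'\xff\xff', 1)      # Decimal('-0.1'), two's complement
```

Returning `Decimal` rather than `float` keeps the value exact; converting to float for a numpy array would be a lossy final step.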
from fastparquet.
OK, now I follow you - what's missing is a conversion where the basic data is byte arrays rather than int or float. I can certainly add that.
from fastparquet.
Is there any chance you can send me an extract of your data, or something similar to it, so it can be included in a test? The correct values of the bytes array would do.
^ @ephes
from fastparquet.
@martindurant Oh, that would be cool :) - the correct result is array([ 0. , 777.2, 751.6, 345.1, 644.1])
from fastparquet.
@ephes, I am using your solution because it works in general. However, it will be very slow for a large number of values. There are faster numpy alternatives that would work when the int size is <= 8 bytes; however, I would expect such values to be stored as actual integers rather than byte strings, so I am not implementing that until necessary. Note that in your case the values are 32-byte strings (although the schema element says 16), but 2-byte integers would have sufficed. Perhaps there are options in the software that generated them (ibis?) that can produce the more useful int representation.
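One such vectorized alternative can be sketched as follows. This is an assumption-laden illustration (not fastparquet's implementation): it assumes non-negative unscaled values whose significant bytes fit in the trailing 8 bytes of each fixed-width, big-endian byte string.

```python
import numpy as np

# Fixed-width big-endian byte strings, as in the example above
# (16-byte width here for brevity; the thread's data was |S32).
raw = np.array(
    [b'',
     b'\x00' * 14 + b'\x1e\x5c',
     b'\x00' * 14 + b'\x1d\x5c',
     b'\x00' * 14 + b'\x0d\x7b',
     b'\x00' * 14 + b'\x19\x29'],
    dtype='|S16',
)
scale = 1  # assumed to come from the schema's decimal metadata

# Lay the raw buffer out as one byte per column, then reinterpret the
# trailing 8 bytes of each row as a big-endian int64 in one shot.
table = raw.view(np.uint8).reshape(len(raw), -1)
unscaled = table[:, -8:].copy().view('>i8').ravel()
values = unscaled * 10.0 ** -scale
```

This avoids the per-element Python loop entirely; for signed (two's-complement) data the sign byte would need extending first, which is why the loop with int.from_bytes remains the general solution.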
from fastparquet.