Comments (9)
Another use case we might want to think about is: what happens if the user finds out later that they need another variable? Will they need to download the data again (since we have not persisted the original files)?
from icepyx.
@fspaolo mostly that the end user would only see the end result (the data they want in the form they want). I think it would make using ICESat-2 data a little more approachable (if outputting the "valid" data in a simple form). We could even split into 6 beam-level datasets to work with captoolkit. Basically, a little work on the "back end" to make things easier for people on the "front end". I think the captoolkit interface aspect could be pretty useful. It's a good question, though, whether anyone would want this.
that is a good point. With the slimmed-down "valid" versions that certainly would be an issue. But I guess if users want to "level up" and get more variables, that'd be their next step. Having these slimmed-down forms might also allow for some more exploratory uses of the data. This certainly could be interesting (I am thinking of an animation from @joemacgregor that used an early version of the NSIDC subsetting API to look at glacier termini with ATM). On the other end of the spectrum, for cloud computing purposes, if outputting as Zarr we might not want to store two full copies of the data.
I think this can also be mitigated by providing some guidance on the "most common variables" for standard use cases, and perhaps also suggesting potential additional variables for more complex use cases.
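One lightweight way to encode that guidance would be a preset mapping from use case to variable list. A minimal sketch, assuming nothing about icepyx's actual API (DEFAULT_VARS, suggest_variables, and the variable names themselves are purely illustrative):

```python
# Hypothetical presets mapping standard use cases to "most common" variables;
# neither the structure nor the names here come from icepyx itself.
DEFAULT_VARS = {
    "land_ice_height": ["latitude", "longitude", "h_li", "atl06_quality_summary"],
    "sea_ice_freeboard": ["latitude", "longitude", "beam_fb_height"],
}

def suggest_variables(use_case, extras=()):
    """Return the preset variable list for a use case, plus any extras."""
    base = DEFAULT_VARS.get(use_case, [])
    # Preserve order while dropping duplicates between preset and extras.
    return list(dict.fromkeys([*base, *extras]))
```

A user could then start from `suggest_variables("land_ice_height")` and grow the list with `extras=["h_li_sigma"]` for the more complex cases.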
Here are a couple of the videos Tyler mentioned for Greenland termini:
All Greenland 1993-2016: https://www.youtube.com/watch?v=8o4DnXLIhxc
Just Helheim: https://www.youtube.com/watch?v=BZR4czv2Kag
These used an early NSIDC subsetter, informed by termini traced from satellite imagery, and were all integrated in MATLAB. It would be cool to show this with ICESat-2 as well.
This is a great discussion and a critical one for where icepyx goes next. It will be important to have a way for people to use/interact with data locally that is not dependent on them having just downloaded it, which raises a few questions about where/when some of these subsetting and conversion operations should happen and what files are ultimately stored for the user.

The modus operandi I've been using can be summarized as: make most of these decisions automatically for the user based on best practices and recommendations from the science team, assuming users just want some basic data without having to make many decisions, but implement those defaults in a way (i.e. with flags and keywords) that makes it easy for the heavy-data user to choose something different.

For instance, this is the idea behind the default automatic use of the NSIDC subsetter for spatial and temporal subsetting: most people don't need full granules if they've already created a region of interest, so we only give them data where they've asked for it, but if they really want full granules, it's easy to get them.
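That "good defaults, easy overrides" pattern can be sketched in a few lines; the function name and keyword below are hypothetical stand-ins, not icepyx's actual signatures:

```python
# Hypothetical sketch: subsetting is the default, but one keyword flips the
# behavior for the heavy-data user who wants full granules.
def order_granules(extent, subset=True):
    """Pretend to order data for a region of interest."""
    if subset:
        # Default path: only request data inside the region of interest.
        return {"request": "subsetted", "extent": extent}
    # Power-user path: full granules on demand.
    return {"request": "full granules", "extent": extent}
```

Most callers never touch the keyword; the override costs one argument rather than a different code path.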
@tsutterley I haven't used BytesIO before. What would be the advantage over simply working on the downloaded HDF5s (in the standard way)?
@tsutterley and @JessicaS11 so from our discussion, we can try a (very simple) example having BytesIO functionality in granules.py, calling a function from (?) that should do the same thing with an object in memory or on disk. The minimal steps that would be nice to have working are:
A) Request a granule specifying variables ['x', 'y', 'z'] -> get an HDF5 with /x, /y, /z
B) Request a granule -> get granule -> call function() asking for ['x', 'y', 'z'] -> get an HDF5 with /x, /y, /z
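A minimal sketch of the round trip in (B), assuming h5py (which, from version 2.9, accepts Python file-like objects in place of filenames); the `extract_vars` helper is purely illustrative:

```python
import io

import h5py  # assumed available; h5py >= 2.9 accepts file-like objects

def extract_vars(source, variables):
    """Read selected datasets from an HDF5 file, given either a path
    or an in-memory file-like object (e.g. io.BytesIO of a download)."""
    with h5py.File(source, "r") as f:
        return {name: f[name][()] for name in variables}

# Build a small in-memory HDF5 file to stand in for a downloaded granule.
buf = io.BytesIO()
with h5py.File(buf, "w") as f:
    for name, data in {"x": [1.0, 2.0], "y": [3.0, 4.0], "z": [5.0, 6.0]}.items():
        f.create_dataset(name, data=data)

subset = extract_vars(buf, ["x", "z"])
```

The same `extract_vars(...)` call would accept a path like `'/path/to/granule.h5'` in place of the BytesIO object, which is the appeal here: one function serves both the in-memory and on-disk cases.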
So for the above operations I can see 3 scenarios:

- Download only selected variables:

  region = icepyx.icesat2data.Icesat2Data(
      variables=["x", "y", "z"],
      ...
  )

- Get selected variables from downloaded granules:

  region.get_vars(variables=["x", "y", "z"])

- Get selected variables from existing folder/file:

  icepyx.get_vars(fname='/path/to/data', variables=["x", "y", "z"])
Can we really split the functionality/code in this way?
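One way the split could work is a single module-level function that does the actual extraction, with the region object's method delegating to it. A toy sketch with the real I/O stubbed out (all names hypothetical):

```python
def get_vars(fname, variables):
    """Shared extraction routine (scenario 3); real file I/O stubbed out."""
    return {"source": fname, "variables": list(variables)}

class Region:
    """Stand-in for the request/download object in scenarios 1 and 2."""

    def __init__(self, files):
        self.files = files

    def get_vars(self, variables):
        # Scenario 2 just delegates to the module-level function for
        # each granule it has already downloaded.
        return [get_vars(f, variables) for f in self.files]
```

With this shape, scenario 1 would be the same routine applied at download time, so the three entry points share one implementation rather than three.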
By further inspection of the codebase, I noticed there is an intention of having a "data" object similar to the region object above. So in order to implement a data object to represent and operate on local files, I think we first need to define the scope of the functionality we want icepyx to have for local files (what should this data object do?). This will then define the structure and methods of the data object, which should be different from the already-implemented request/download object.
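To make the scoping question concrete, here is one possible (hypothetical) shape for such a data object, with file reading stubbed out by a plain dict so the sketch stays self-contained:

```python
# One possible scope for a local-files "data" object, kept separate from the
# request/download object. All names are hypothetical; real HDF5 reading is
# replaced with a dict of {filename: {variable: values}}.
class LocalData:
    def __init__(self, granules):
        self.granules = granules

    def variables(self):
        """List variable names available across all local granules."""
        names = set()
        for data in self.granules.values():
            names.update(data)
        return sorted(names)

    def load(self, variables):
        """Read the selected variables from every granule."""
        return {
            fname: {v: data[v] for v in variables if v in data}
            for fname, data in self.granules.items()
        }

local = LocalData({"granule1.h5": {"x": [1, 2], "y": [3, 4], "z": [5, 6]}})
```

Even a skeleton this small forces the scoping decisions: discovery (`variables`), selective reading (`load`), and, notably, no ordering/downloading methods at all.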
Related Issues (20)
- hang time and time outs
- CMR returning duplicate results
- updates now that cloud data access is public
- Download fails when ATL06 data larger than around 500 MB
- accept all vector file inputs
- order_vars Issue
- ds = reader.load()
- geolocation variables reset?
- cannot import 'icesat2data' from 'icepyx'
- ipx.Read.load() not working with ATL09 beam profile names
- reinstate ATL13 visualization module tests
- ATL11 granule ID empty list
- IndexError: array index out of range
- create interactive data access
- SSLError
- add automatic authorization to order/download
- update ICESat-2 resource guide
- improve Zenodo releases
- EarthData authentication
- improve temporal module