
dask-image's Introduction

Dask


Dask is a flexible parallel computing library for analytics. See documentation for more information.

LICENSE

New BSD. See License File.

dask-image's People

Contributors

abhisht51, akhalighi, anlavandier, charlesbluca, dependabot[bot], dstansby, fbunt, feiming, genevievebuckley, grlee77, hmaarrfk, holmgren825, jacobtomlinson, jakirkham, jermenkoo, jni, jsignell, k-monty, ku-ya, m-albert, martinschorb, mause, scharlottej13, sommerc, thewtex, timbo8, tkoyama010, volkerh


dask-image's Issues

API design question: default value of `index` for ndmeasure functions

This is a question about the API design for label indices in dask_image.ndmeasure functions.

Where labels is given but index is None, the label array is overwritten and becomes a mask image. This was not very intuitive to me, and it means the behaviour of the ndmeasure functions is inconsistent: in some cases (index=None) you get an aggregate value, and in others you get values for each individual label (even when there are multiple indices).

I think there's an argument to be made that if labels is given and index=None, the default should be the range of all non-zero labels (e.g. [1, 2, 3, ..., n]). This would mean (a) no nasty surprise aggregations, and (b) you wouldn't need to near-constantly write index=da.arange(da.max(labels)) or resort to the slightly clunkier label_comprehension() syntax (I can never remember the six input arguments).

Questions

  1. Are the majority of use cases different than what I imagine here? If what I expect to be the most common use scenario is actually pretty uncommon, I may need to rethink my opinion.
  2. What is your opinion on replacing:
def _norm_input_labels_index(input, labels=None, index=None):
    ...
    elif index is None:
        labels = (labels > 0).astype(int)
        index = dask.array.ones(tuple(), dtype=int, chunks=tuple())

with this instead:

def _norm_input_labels_index(input, labels=None, index=None):
    ...
    elif index is None:
        index = dask.array.arange(dask.array.max(labels) + 1)[1:]

and making a separate mask() convenience function available.

In my view it's much clearer that area(input, mask(labels)) is expected to return an aggregate value, compared to area(input, labels, index=None).
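A minimal sketch of the proposed mask() convenience (the name is hypothetical, pending the API discussion; it simply reproduces what the current index=None path does to labels):

```python
import numpy as np
import dask.array as da

def mask(labels):
    # Collapse a label image into a binary mask, mirroring what
    # _norm_input_labels_index currently does when index is None.
    return (labels > 0).astype(int)

labels = da.from_array(np.array([0, 2, 0, 3]), chunks=2)
binary = mask(labels)
```

With this, area(input, mask(labels)) makes the aggregation explicit at the call site.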

Support non-negative n in fourier_*

From @jakirkham on May 9, 2017 3:13

Currently the fourier_* functions only accept -1 for the n argument, whereas SciPy's implementations also accept non-negative values. As this wasn't a priority for the initial implementations, it was simply marked as unsupported. It would be nice to provide support equivalent to the SciPy implementations of these functions w.r.t. other values of n.

Copied from original issue: dask-image/dask-ndfourier#5

Migrate work from dask-image org

Would be good to consolidate the work from the dask-image org libraries into this library. While it made sense to keep those libraries small and subject-oriented in the beginning, to keep work focused and allow it to progress incrementally, it now makes more sense to have one library that consolidates this functionality under a single API. This should improve the usability and visibility of the work, and provide a single point of engagement for users and developers alike.

Add imsave function

I found myself reaching for an imsave function to complement imread. Presumably this would have similar semantics, and would effectively map over the skimage.io.imsave function, or something else in pims.

I don't have a concrete need though, this just came up when writing up an example.
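As a rough sketch of the idea (the function name and signature are hypothetical; save_frame could be skimage.io.imsave, imageio.imwrite, or something in pims):

```python
import numpy as np
import dask
import dask.array as da

def imsave(save_frame, arr, filenames):
    # One delayed write per frame of a (t, y, x, ...) dask array;
    # dask.compute executes the writes in parallel.
    tasks = [dask.delayed(save_frame)(fn, arr[i])
             for i, fn in enumerate(filenames)]
    dask.compute(*tasks)
```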

Test other image formats

From @jakirkham on June 7, 2017 20:19

Currently we have some tests for TIFFs. However, there are no tests for any other image formats. Might be worth covering a couple of basic ones, though we don't want to go too far into the weeds. Might be best to see some use cases before digging in.

Edit: Also might need to switch to something like imageio for testing so as to generate a large variety of test data.

Copied from original issue: dask-image/dask-imread#8

dask_image.ndfilters.gaussian_filter

  • dask-image version: 0.20
  • Python version: 3.6
  • Operating System: ubuntu 18.10
  • numpy version: 1.16.2

Running in a conda environment.

Description

I get an error when I try to execute dask_image.ndfilters.gaussian_filter.

What I Did

This is the code:

for i, pt in enumerate(pts):
    pt2d = np.zeros(gt.shape, dtype=np.float32)
    pt2d[pt[1], pt[0]] = 1.
    if gt_count > 1:
        sigma = (distances[i][1] + distances[i][2] + distances[i][3]) * 0.1
    else:
        sigma = np.average(np.array(gt.shape)) / 2. / 2.  # case: 1 point
    density += dask_image.ndfilters.gaussian_filter(pt2d, sigma)

and this is the error:


AttributeError Traceback (most recent call last)
<ipython-input> in <module>
----> 1 gaussian_filter_density_fast(k)

<ipython-input-18-b49a9fa75c29> in gaussian_filter_density_fast(gt)
     23         else:
     24             sigma = np.average(np.array(gt.shape))/2./2. #case: 1 point
---> 25         density += dask_image.ndfilters.gaussian_filter(pt2d, sigma)
     26     print ('done.')
     27     return density

~/anaconda3/lib/python3.6/site-packages/dask_image/ndfilters/_gaussian.py in gaussian_filter(input, sigma, order, mode, cval, truncate)
     58     depth, boundary = _utils._get_depth_boundary(input.ndim, depth, "none")
     59 
---> 60     result = input.map_overlap(
     61         scipy.ndimage.filters.gaussian_filter,
     62         depth=depth,

AttributeError: 'numpy.ndarray' object has no attribute 'map_overlap'
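The error indicates the input is a plain NumPy array: dask_image's filters call map_overlap, which only exists on dask arrays, so the input must be wrapped with da.from_array first. A minimal sketch of the fix (the array size and chunking are illustrative); the map_overlap call shown is roughly what dask_image performs internally, per the traceback:

```python
import numpy as np
import dask.array as da
import scipy.ndimage

pt2d = np.zeros((64, 64), dtype=np.float32)
pt2d[32, 32] = 1.0
sigma = 3.0

# Wrap the numpy array in a dask array before filtering.
pt2d_dask = da.from_array(pt2d, chunks=32)

# dask_image.ndfilters.gaussian_filter(pt2d_dask, sigma) would now work.
# Under the hood it maps scipy's filter over chunks with a halo, roughly:
depth = int(4.0 * sigma + 0.5)  # scipy's default truncate=4.0
density = pt2d_dask.map_overlap(
    scipy.ndimage.gaussian_filter,
    depth=depth, boundary="reflect", sigma=sigma)
```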

Other SciPy N-D generic filters

From @jakirkham on May 4, 2017 21:3

Pulled from the listing in issue ( dask-image/dask-ndfilters#13 ).

Some of the generic filters from SciPy's N-D filters module, scipy.ndimage.filters, were skipped initially. They are listed below. The intent is to wrap them using the map_overlap method of Dask Arrays with appropriate halos. This will also include tests to verify they retain the behavior they would have with ordinary NumPy arrays. This issue exists as a reminder that these still need to be done.
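The wrapping pattern can be sketched like so (uniform_filter stands in here for the remaining generic filters; this illustrates the approach rather than dask-image's implementation):

```python
import numpy as np
import dask.array as da
import scipy.ndimage

a = da.from_array(np.arange(100.0).reshape(10, 10), chunks=5)

# Each chunk is filtered with a halo ("depth") of half the footprint,
# so chunk-interior results match applying the scipy filter globally.
size = 3
result = a.map_overlap(scipy.ndimage.uniform_filter,
                       depth=size // 2, boundary="reflect", size=size)
```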

Note: Have excluded 1-D filters from this for now. These have been added to a separate issue to be discussed and addressed later. ( dask-image/dask-ndfilters#14 )

Copied from original issue: dask-image/dask-ndfilters#22

Example image data for dask-image

dask-image example datasets

We need some good example data for tutorials with dask-image.

This issue is a place for discussion and suggestions. If you have links, add them here!

Ideally this data should:

  • Have a permissive license
  • Be easily downloaded on demand by the user
  • Be big, but not too big. We want something that will automatically be spread over a few dask chunks, but not too large to download. 1 - 2 GB? What makes sense here?

It would be nice to have

  • scientific images (microscopy, astronomy, satellite/geo-spatial images, maybe histology slides)
  • a filetype we don't need another third party library to open (we have pims already, something it could handle would be ideal)
  • the data hosted by someone else, but in a stable situation where we can reasonably count on that continuing

What we want to avoid:

  • Websites that make you register (even for free) before you can download data.

EDIT:
napari/napari#316

Just saw this tweet announcing a human brain MRI at 100µm isotropic resolution. This could be a very cool dataset to use as a napari demo. I suggest we use this issue to keep track of datasets that we could put in napari once we have proper data downloading. Please just edit the checklist below to add your preferred demo data.

* [ ]  100µm resolution human brain: https://twitter.com/ComaRecoveryLab/status/1134436231775961088

* [ ]  10m resolution vegetation cover in Victoria: http://francois-petitjean.com/Research/MonashVegMap/info.php and https://labo.obs-mip.fr/multitemp/mapping-a-part-of-australia-at-10-m-resolution/

* [ ]  correlative superres https://www.biorxiv.org/content/10.1101/773986v1.abstract

* [x]  SARS-CoV2 in gut epithelium https://twitter.com/notjustmoore/status/1256232842755014656

* [ ]  developing sea squirt https://www.nytimes.com/2020/07/09/science/sea-squirts-embryos.html

* [ ]  mechanobiology of intestinal organoids https://twitter.com/XavierTrepat/status/1308026944349450241

* [ ]  tracking of particles on astral microtubules ([paper](https://www.biorxiv.org/content/10.1101/2020.06.17.154260v1), [tweet (😍)](https://twitter.com/the_Node/status/1341050276011237379)), could make a really neat demo for the tracks layer.

* [ ]  [Sentinel-2 1y Cloud optimised geotiff dataset](https://medium.com/sentinel-hub/digital-twin-sandbox-sentinel-2-collection-available-to-everyone-20f3b5de846e)

* [ ]  Calcium imaging in the Drosophila ellipsoid body ([2013](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3830704/) and [2015](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4704792/))

* [ ]  Janelia FlyLight data ([AWS](https://registry.opendata.aws/janelia-flylight/))

* [ ]  Fruit Fly Brain Observatory (FFBO) ([tweet](https://twitter.com/FlyBrainObs/status/1369496338266750977))

* [ ]  Janelia [Open Organelle datasets](https://openorganelle.janelia.org/)

* [ ]  CZBiohub [Open Cell](https://opencell.czbiohub.org/about)

* [ ]  [CoCo](https://cocodataset.org/#download) + [Voxel51 datasets](https://voxel51.com/docs/fiftyone/user_guide/using_datasets.html)

* [ ]  @tlambert03's lattice light sheet dataset used in the dask application post https://www.ebi.ac.uk/biostudies/studies/S-BSST435?query=Talley%20lambert

two cryo-ET datasets to add to the pile

There is also some more developing Tribolium embryos, mouse brain slices and mouse colon volumes:

https://zenodo.org/record/4276076#.YYJKMWDMJaR

napari/napari#316 (comment)

... and this 3D cell tracking dataset is gorgeous! http://celltrackingchallenge.net/

C.elegans developing embryo
Waterston Lab, University of Washington, Seattle, WA, USA
Training dataset: http://data.celltrackingchallenge.net/training-datasets/Fluo-N3DH-CE.zip✱ (3.1 GB)
Challenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/Fluo-N3DH-CE.zip (1.7 GB)

Microscope: Zeiss LSM 510 Meta
Objective lens: Plan-Apochromat 63x/1.4 (oil)
Voxel size (microns): 0.09 x 0.09 x 1.0
Time step (min): 1 (1.5)
Additional information: Nature Methods, 2008

Fix Documentation

Here are a few notes about the documentation.

  • Replace stock "Welcome to dask-image’s documentation!" with something like "Image Processing with Dask"
  • Add contents to the main page other than the TOC
  • README has TODO entries and references to cookiecutter
  • Usage file is non-informative, maybe just remove it?
  • Examples in docstrings refer to scipy, maybe they should be reworked or removed?
  • core and imread API docs are empty

Center_of_mass returns int instead of float

  • dask-image version: 0.2.0
  • Python version: 3.7.3
  • Operating System: Linux

In scipy.ndimage.measurements.center_of_mass, the function returns indices of type float, whereas dask_image.ndmeasure.center_of_mass returns indices of type int. It seems like a relatively small difference, but to duplicate functionality it would be nice if this function could return floats.
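For reference, the expected float behaviour can be reproduced directly with dask arrays by taking the weighted mean of the coordinate grids (a sketch of the desired semantics, not the dask-image implementation):

```python
import numpy as np
import dask.array as da

img = da.from_array(np.array([[0.0, 1.0],
                              [0.0, 1.0]]), chunks=1)

# Center of mass along each axis = sum(coord * weight) / sum(weight);
# the division should naturally yield floats, as scipy's version does.
grids = da.indices(img.shape, chunks=1)
total = img.sum()
com = tuple(float((grids[ax] * img).sum() / total)
            for ax in range(img.ndim))
```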

Drop slicerator CI install hack (once fixed)

From @jakirkham on June 7, 2017 20:3

We had to add a hack for the CIs in commit ( dask-image/dask-imread@ffd6cc5 ) to pip install slicerator in addition to conda installing it as pip could not recognize that it was already installed. This seems to be some consequence of the choice to use Python modules and not a Python package to hold the library content.

Have proposed a fix upstream in PR ( soft-matter/slicerator#26 ), which seems to resolve the issue. Once upstream merges that fix and releases, we should drop this hack. Though admittedly that will hinge on dropping Python 3.4 as we won't have a new Python 3.4 package to work with from conda-forge.

Copied from original issue: dask-image/dask-imread#7

FEAT: fourier_ellipsoid

We want to implement a dask-image version of scipy.ndimage.fourier_ellipsoid (see the docs here).

This is expected to be a little more difficult than the rest of the functions implemented in #21. That issue has some discussion of implementation details.

Add to Dask webpage

It would be good to add this project to the Dask webpage. Also would be nice if the docs could be served there as well. Any advice/help in doing this would be greatly appreciated.

contribution: Image warping routine?

Hi @jakirkham,

I have some potential contributions to make here as I am currently using dask to write a parallel image registration library.

Right now I have an implementation of image warping that you might be interested in. It allows warping with a vector field of pixel displacements (like skimage.transform.warp) by creating a large sparse linear operator matrix with scipy.sparse and multiplying it with the flattened image. It can only go as fast as scipy's sparse matmul implementation, but I couldn't find a faster way, given the horrible memory access pattern of arbitrary image warping.

If you are interested please let me know and I'll shape it into a PR.

Intake integration?

I was just starting to work on a PIL plugin for intake to support handling image stacks. I was hoping to make something that takes, at a minimum, paths, file objects, URLs, or s3 locations as input. The output, I think, would be xarray objects backed by dask arrays.

Do you think this project is sturdy enough to be built on top of in that way or is it moving too quickly?

Investigate resource handling on Windows

From @jakirkham on June 9, 2017 3:18

After attempts to build a package of dask-imread for conda-forge, there appeared to be errors caused by deleting the temporary directory at the end of the tests. The issue was resolved by dropping that temporary directory deletion from the tests in PR ( dask-image/dask-imread#11 ). Not entirely sure why this occurs, but there is some speculation that something is not being cleaned up properly (an open file handle perhaps). Will require some debugging on Windows to know for sure. Also these tips may help.

Copied from original issue: dask-image/dask-imread#12

Add find_objects

Would be useful to have an implementation of find_objects for dask-image. Based on a conversation with @jni, we should be able to do this by performing find_objects with map_blocks and then resolving any large spanning objects across chunks in a subsequent step. Would also need to handle the cases where a label is missing from a chunk (i.e. find_objects returns None instead of a bounding box).
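A rough sketch of that two-step approach (the function name mirrors scipy's but everything here is hypothetical; the merge step gathers per-chunk results on the client rather than resolving them lazily):

```python
import numpy as np
import dask
import dask.array as da
import scipy.ndimage as ndi

def find_objects(labels):
    # Hypothetical sketch, not the dask-image API.

    def _per_block(block, offsets):
        # scipy's find_objects per chunk, shifted into global coordinates;
        # labels absent from a chunk yield None and are simply skipped.
        out = {}
        for lbl, sl in enumerate(ndi.find_objects(block), start=1):
            if sl is not None:
                out[lbl] = tuple(slice(s.start + o, s.stop + o)
                                 for s, o in zip(sl, offsets))
        return out

    tasks = []
    for idx in np.ndindex(*labels.numblocks):
        offsets = [sum(labels.chunks[dim][:i]) for dim, i in enumerate(idx)]
        tasks.append(dask.delayed(_per_block)(labels.blocks[idx], offsets))

    # Merge bounding boxes for labels that span more than one chunk.
    merged = {}
    for partial in dask.compute(*tasks):
        for lbl, sl in partial.items():
            if lbl in merged:
                merged[lbl] = tuple(
                    slice(min(a.start, b.start), max(a.stop, b.stop))
                    for a, b in zip(merged[lbl], sl))
            else:
                merged[lbl] = sl

    n = max(merged, default=0)
    return [merged.get(i) for i in range(1, n + 1)]
```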

Support for python 3.8

Tracking issue for adding support for Python 3.8

Now that major dependencies like scipy and dask have added/are adding support for Python 3.8, we should think about doing the same.

Progress tracking issues:
scipy scipy/scipy#10927
dask dask/dask#5493

Adding chunk argument (instead of nframes)

From @jakirkham on June 7, 2017 19:54

Instead of specifying the number of frames (nframes) for imread, it might make more sense to have a chunks argument. We could handle this in a couple of ways.

  1. Require the non-frame dimensions to be one chunk. (so either None or the full length)
  2. Allow chunking along other dimensions, but only apply it with rechunk afterwards.
  3. Chop the data in _read_frame somehow so as to get chunks.
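Option 2 might be sketched as follows (the (10, 4, 4) array stands in for imread's current one-chunk-per-frame output):

```python
import numpy as np
import dask.array as da

# Stand-in for the per-frame-chunked result imread currently produces.
stack = da.from_array(np.zeros((10, 4, 4)), chunks=(1, 4, 4))

# Apply the user's requested chunks after the fact.
requested = (5, 2, 2)
stack = stack.rechunk(requested)
```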

xref: dask-image/dask-imread#2 (comment)

Copied from original issue: dask-image/dask-imread#6

Thresholding

Raised by @jni during the sprint, it would be nice to support some common thresholding methods. For the most part these should be pretty straightforward to implement. Would be good to gather some ideas about which methods are useful to include/support.
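For example, Otsu's method only needs a histogram of the image, which dask can compute without materialising the whole array (otsu_threshold is a hypothetical helper sketched here, not an existing dask-image function):

```python
import numpy as np
import dask.array as da

def otsu_threshold(image, nbins=256):
    # Only the histogram is pulled to the client; the image stays lazy.
    lo, hi = float(image.min()), float(image.max())
    counts, edges = da.histogram(image, bins=nbins, range=(lo, hi))
    counts = counts.compute().astype(float)
    centers = (edges[:-1] + edges[1:]) / 2

    weighted = counts * centers
    w0 = np.cumsum(counts)              # class-0 weight per candidate split
    w1 = np.cumsum(counts[::-1])[::-1]  # class-1 weight per candidate split
    m0 = np.cumsum(weighted) / np.maximum(w0, 1e-12)
    m1 = np.cumsum(weighted[::-1])[::-1] / np.maximum(w1, 1e-12)

    # Between-class variance for each candidate split point.
    var = w0[:-1] * w1[1:] * (m0[:-1] - m1[1:]) ** 2
    return centers[np.argmax(var)]
```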

Flatten namespace

I think that in order to use the imread function today, I need to dive into the imread module:

import dask_image.imread

x = dask_image.imread.imread('...')

This might be more user friendly if it was just

import dask_image

x = dask_image.imread('...')

Separately, it also looks like this is the only function in the imread module. I wonder if having a separate module here is overkill for now?

IPython autocomplete of maximum_filter, median_filter, minimum_filter

From @jakirkham on June 9, 2017 0:17

When trying to do autocompletion with IPython, it treats the maximum_filter, median_filter, minimum_filter functions as if they are variables. As they are constructed differently from the other filters, this is not surprising. Still it would be nice if we could have them behave more like functions in this regard. Screenshot below to elucidate this.

[Screenshot from 2017-06-08 showing IPython autocompletion treating these names as variables]

Copied from original issue: dask-image/dask-ndfilters#31

SciPy N-D interpolation functions

From @jakirkham on May 16, 2017 18:35

The functions listed below are what this library intends to support with Dask Arrays. These all come from SciPy; in particular, from scipy.ndimage.interpolation. The intent is to emulate their behavior with Dask Arrays, as simply wrapping them is not likely to work. This will also include tests to verify they retain the behavior that the SciPy functions ordinarily have.

Copied from original issue: dask-image/dask-ndinterp#3

SciPy N-D Morphological Operations

From @jakirkham on June 2, 2017 14:17

The functions listed below are what this library intends to support with Dask Arrays. These all come from SciPy; in particular, from scipy.ndimage.morphology. The intent is to emulate their behavior with Dask Arrays, as simply wrapping them is not likely to work. This will also include tests to verify they retain the behavior that the SciPy functions ordinarily have.

Copied from original issue: dask-image/dask-ndmorph#12

Loading stack of images slow

I have about 20k 500x500 px images stored in a directory.

My ultimate goal is to reshape them into only a few images, tiling the 500x500 frames into larger mosaics.

I first wanted to see how fast I could load the images.

arr=dask_image.imread.imread('*.png')[:100].compute()

versus

from PIL import Image
arr=[Image.open(img) for img in glob.glob('*.png')[:100]]

The second approach performs much, much faster than the first (the first hangs).

Can you help me understand why this is so and what I can do to speed this up?

Chunk size is (1, 500, 500, 3), but I changed it to (25, 500, 500, 3) and it didn't really make a difference. I would normally expect this to be a fast and easy Dask operation. I even tried both process- and thread-based distributed clients, with no luck.

Help is very much appreciated. Thanks!
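For comparison and debugging, the stack can be built from per-file delayed reads (a sketch; the load callable is a stand-in for a PIL or pims reader, and shape/dtype must be known up front):

```python
import numpy as np
import dask
import dask.array as da

def lazy_stack(paths, load, shape, dtype):
    # One delayed read per file; nothing touches the disk until compute.
    frames = [da.from_delayed(dask.delayed(load)(p), shape=shape, dtype=dtype)
              for p in paths]
    return da.stack(frames)
```

Profiling this against dask_image.imread.imread can help show whether the slowdown is in per-file reader overhead or in the task graph itself.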

Read images from cloud storage

I was wondering what the relationship is between this package and the function dask.array.image.imread that's already part of dask.

Especially as I noticed that dask.array.image.imread doesn't actually support remote data, so I couldn't give it an s3:// URL.

Parity release

This now has the content from the dask-image org copied over, thanks to PR ( #10 ). Would be good to have a parity release here, to make it easy for users to transition from the old packages in the dask-image org to the new dask-image package before future changes here make that difficult.

NumPy 1.13.0 boolean array subtraction breakage

From @jakirkham on June 24, 2017 0:6

It appears NumPy 1.13.0 made boolean array subtraction raise a TypeError. The net result is that scipy.ndimage.morphology is broken. Since we use scipy.ndimage.morphology here, this package may suffer from the same breakage with NumPy 1.13.0. FWICT, based on some quick local testing with NumPy 1.13.0, I am not seeing this breakage, but we should keep an eye out in case it occurs.

xref: scipy/scipy#7493

Copied from original issue: dask-image/dask-ndmorph#21

DOC: How to create testing environments

Add a note to CONTRIBUTING.rst under the headings 'Fix bugs' and 'Implement features' to say how to create the development conda environment (they're hiding in the hidden folders for travis, appveyor and circleCI). As an example, PR ( #90 ) shows the text added for testing the docs.

Release 0.3?

Looks like there have been various bugfixes and improvements since the last release. Perhaps the most interesting one is the new distributed implementation of ndmeasure.label() from @jni and @jakirkham (#94).

Is it time for a new release? I suppose you'll want to bump straight to 0.3, since Python 2.7 has been dropped (#119).

Python 2 is really slow

From @jakirkham on August 7, 2017 23:15

Not entirely sure what the cause of this issue is, but using dask-ndmeasure on Python 2 is pretty slow compared to using it on Python 3. For instance, running the test suite ends up being ~50% or more slower on Python 2 compared to Python 3. Given that Python 2 is legacy at this point, maybe this isn't a priority, but the issue is worth being aware of.

Copied from original issue: dask-image/dask-ndmeasure#73

test__norm_input_labels_index fails with newer versions of Dask

  • dask-image version: 0.1.0
  • Python version: 3.6.6
  • Operating System: CentOS 6 64-bit

Description

Seeing this test failure in this build.

_______________________________________________________________ test__norm_input_labels_index _______________________________________________________________

    def test__norm_input_labels_index():
        shape = (15, 16)
        chunks = (4, 5)
        ind = None
    
        a = np.random.random(shape)
        d = da.from_array(a, chunks=chunks)
    
        lbls = (a < 0.5).astype(int)
        d_lbls = da.from_array(lbls, chunks=d.chunks)
    
        d_n, d_lbls_n, ind_n = dask_image.ndmeasure._utils._norm_input_labels_index(
            d, d_lbls, ind
        )
    
        assert isinstance(d_n, da.Array)
        assert isinstance(d_lbls_n, da.Array)
        assert isinstance(ind_n, da.Array)
    
        dau.assert_eq(d_n, d)
        dau.assert_eq(d_lbls_n, d_lbls)
>       dau.assert_eq(ind_n, np.array([1], dtype=int))

tests/test_dask_image/test_ndmeasure/test__utils.py:57: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = array(1), b = array([1]), check_shape = True, kwargs = {}, a_original = dask.array<ones, shape=(), dtype=int64, chunksize=()>, b_original = array([1])
adt = dtype('int64'), bdt = dtype('int64')

    def assert_eq(a, b, check_shape=True, **kwargs):
        a_original = a
        b_original = b
        if isinstance(a, Array):
            assert a.dtype is not None
            adt = a.dtype
            _check_dsk(a.dask)
            a = a.compute(scheduler='sync')
            if hasattr(a, 'todense'):
                a = a.todense()
            if not hasattr(a, 'dtype'):
                a = np.array(a, dtype='O')
            if _not_empty(a):
                assert a.dtype == a_original.dtype
            if check_shape:
                assert_eq_shape(a_original.shape, a.shape, check_nan=False)
        else:
            if not hasattr(a, 'dtype'):
                a = np.array(a, dtype='O')
            adt = getattr(a, 'dtype', None)
    
        if isinstance(b, Array):
            assert b.dtype is not None
            bdt = b.dtype
            _check_dsk(b.dask)
            b = b.compute(scheduler='sync')
            if not hasattr(b, 'dtype'):
                b = np.array(b, dtype='O')
            if hasattr(b, 'todense'):
                b = b.todense()
            if _not_empty(b):
                assert b.dtype == b_original.dtype
            if check_shape:
                assert_eq_shape(b_original.shape, b.shape, check_nan=False)
        else:
            if not hasattr(b, 'dtype'):
                b = np.array(b, dtype='O')
            bdt = getattr(b, 'dtype', None)
    
        if str(adt) != str(bdt):
            diff = difflib.ndiff(str(adt).splitlines(), str(bdt).splitlines())
            raise AssertionError('string repr are different' + os.linesep +
                                 os.linesep.join(diff))
    
        try:
>           assert a.shape == b.shape
E           AssertionError

…/lib/python3.6/site-packages/dask/array/utils.py:112: AssertionError

What I Did

Ran pytest on CI using the latest copy of Dask.

Coding style conventions/guide

We had some discussions on #94 about coding style. @jakirkham suggested we open a specific issue to discuss them.

I have three specific things that I find "inelegant" (yes, I know, it's imprecise) as currently implemented in dask_image.

  1. fully specified imports, e.g. dask.array instead of da, numpy instead of np. Some of these abbreviations are very ingrained in the community and the full specification is surprising to read. (Not to mention annoying to write.)
  2. Code definitions in __init__.py. Again, this is unconventional. scikit-image doesn't use these and still gets a nice API generation from sphinx, so I don't think it's a major hurdle to get sphinx to behave well for a more conventional code structure.
  3. The import name with an underscore in it. ;)

I'll throw a bonus idea in here: we should rename input in all the ndimage functions. It was a bad choice from SciPy, and they're stuck with it, but I don't think dask-image should be bound by it.

Anyway, whatever is decided with the above points, it should probably be codified somewhere, and preferably it should reference the style guides for bigger projects, so that these aren't "just for dask-image" conventions.

SciPy N-D Fourier filters

From @jakirkham on May 8, 2017 15:55

The functions listed below are what this library intends to support with Dask Arrays. These all come from SciPy; in particular, from scipy.ndimage.fourier. The intent is to emulate their behavior with Dask Arrays, as simply wrapping them is not likely to work. This will also include tests to verify they retain the behavior that the SciPy functions ordinarily have.

Copied from original issue: dask-image/dask-ndfourier#2

examples?

I'm trying to work through the functionality. Do you have any examples available outside of the test harness?

Example data - EM dataset

As a good large dataset to work with and show examples on, it might be nice to have an EM dataset. Hopefully others can point us to something on cloud storage that would be easy to access with friendly licensing.

cc @stephenplaza @perlman

Feature request: convenience function to calculate label area

Description

Feature request: a convenience function for calculating the area of label regions in an image.

I was thinking about creating a function dask_image.ndmeasure.area() to calculate the area of label regions given a labelled image (Juan suggests I base it on dask.array.bincount()). Measuring the area of segmented objects is common to many types of quantitative image analysis, so I think it makes sense to have a convenience function for it.

This issue is for discussion, including discussion on the consistency with the rest of the dask-image API.
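A minimal sketch of the bincount idea (the area() signature here is hypothetical, pending the API discussion):

```python
import numpy as np
import dask.array as da

def area(labels, num_labels):
    # Pixel count per label; index 0 is the background.
    return da.bincount(labels.ravel(), minlength=num_labels + 1)

labels = np.array([[0, 1, 1],
                   [2, 2, 0],
                   [2, 0, 0]])
d_labels = da.from_array(labels, chunks=2)
counts = area(d_labels, 2).compute()
```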
