Distributed image processing
Home Page: http://image.dask.org/en/latest/
License: BSD 3-Clause "New" or "Revised" License (New BSD). See License File.
Dask is a flexible parallel computing library for analytics. See the documentation for more information.
We want to use a histology slide image from the 2016 Camelyon dataset (CC0): https://camelyon17.grand-challenge.org/Data/
This thread contains details specific to this dataset.
Related to the larger discussion here: #107
This is merely here for reference at this point.
Appears we are running into some sort of issue building the docs with RTD. Have raised upstream as issue ( readthedocs/readthedocs.org#3769 ).
I'm trying to work through the functionality. Do you have any examples available outside of the test harness?
We want this link added to the docs: https://github.com/dask/dask-examples/blob/master/applications/image-processing.ipynb
This link should go in dask-image/docs/quickstart.rst under the 'Dask Examples' sub-heading.
Discussion here: #111
Also see: https://github.com/dask/dask-image/pull/111/files
From @jakirkham on August 2, 2017 14:24
Would be good to include some tests for having `nan` values in the input of different functions provided here.
Copied from original issue: dask-image/dask-ndmeasure#45
From @jakirkham on June 2, 2017 14:17
The list of functions below is what this library intends to support with Dask Arrays. These all come from SciPy; in particular, from `scipy.ndimage.morphology`. The intent is to emulate their behavior with Dask Arrays, as wrapping them is not likely to work. Will also include tests to verify these retain the behavior that the SciPy functions ordinarily have.
(Some of these may need a `while`-loop in Dask to work.)
Copied from original issue: dask-image/dask-ndmorph#12
From @jakirkham on June 24, 2017 0:6
Appears that NumPy 1.13.0 made Boolean array subtraction raise a `TypeError`. The net result is that `scipy.ndimage.morphology` is broken. Since we use `scipy.ndimage.morphology` here, this may also suffer from the same breakage with NumPy 1.13.0. FWICT based on some quick local testing with NumPy 1.13.0, am not seeing this breakage, but should keep an eye out should this breakage occur.
xref: scipy/scipy#7493
Copied from original issue: dask-image/dask-ndmorph#21
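The breakage is easy to reproduce directly with plain NumPy (a minimal check, independent of SciPy):

```python
import numpy as np

a = np.array([True, False, True])
b = np.array([True, True, False])

# On NumPy >= 1.13, subtracting one boolean array from another raises TypeError.
try:
    diff = a - b
except TypeError:
    diff = None

# NumPy's suggested replacement is the xor operator.
xor = a ^ b
```

Code that relied on boolean `-` for symmetric difference (as `scipy.ndimage.morphology` did) needs the `^` form instead.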
Would be useful to have an implementation of `find_objects` for dask-image. Based on a conversation with @jni, we should be able to do this by performing `find_objects` with `map_blocks` and then resolving any large spanning objects across chunks in a subsequent step. Would also need to handle the cases where a label is missing from a chunk (i.e. `find_objects` returns `None` instead of a bounding box).
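A minimal sketch of that per-chunk approach, with a plain Python loop standing in for `map_blocks` and chunking only along axis 0 for brevity (the helper names here are illustrative, not dask-image API):

```python
import numpy as np
import scipy.ndimage


def merge_slices(s1, s2):
    # Union of two bounding boxes given as tuples of slices (or None).
    if s1 is None:
        return s2
    if s2 is None:
        return s1
    return tuple(slice(min(a.start, b.start), max(a.stop, b.stop))
                 for a, b in zip(s1, s2))


def find_objects_chunked(labels, chunk_size):
    # Run scipy's find_objects per chunk, then resolve objects spanning
    # chunks by merging their bounding boxes in a second pass.
    n = int(labels.max())
    boxes = [None] * n
    for start in range(0, labels.shape[0], chunk_size):
        block = labels[start:start + chunk_size]
        for i, sl in enumerate(scipy.ndimage.find_objects(block, max_label=n)):
            if sl is None:  # label missing from this chunk
                continue
            # Shift the block-local bounding box into global coordinates.
            shifted = (slice(sl[0].start + start, sl[0].stop + start),) + sl[1:]
            boxes[i] = merge_slices(boxes[i], shifted)
    return boxes
```

Passing `max_label` makes every chunk return a list of the same length, so the merge step can align labels positionally even when some are missing (`None`).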
From @jakirkham on November 24, 2017 23:43
Currently we are using `delayed` in `labeled_comprehension`. However, we could get the desired effect with less boilerplate and better performance by using something like `map_blocks` or `atop`. Would be good to revisit how this is being computed and try to fix it.
Copied from original issue: dask-image/dask-ndmeasure#81
Running in a conda environment, I get an error when I try to execute `dask_image.ndfilters.gaussian_filter`.
This is the code:

```python
for i, pt in enumerate(pts):
    pt2d = np.zeros(gt.shape, dtype=np.float32)
    pt2d[pt[1], pt[0]] = 1.
    if gt_count > 1:
        sigma = (distances[i][1] + distances[i][2] + distances[i][3]) * 0.1
    else:
        sigma = np.average(np.array(gt.shape)) / 2. / 2.  # case: 1 point
    density += dask_image.ndfilters.gaussian_filter(pt2d, sigma)
```
and this is the error:

```
AttributeError                            Traceback (most recent call last)
in
----> 1 gaussian_filter_density_fast(k)

<ipython-input-18-b49a9fa75c29> in gaussian_filter_density_fast(gt)
     23     else:
     24         sigma = np.average(np.array(gt.shape))/2./2. #case: 1 point
---> 25     density += dask_image.ndfilters.gaussian_filter(pt2d, sigma)
     26     print ('done.')
     27     return density

~/anaconda3/lib/python3.6/site-packages/dask_image/ndfilters/_gaussian.py in gaussian_filter(input, sigma, order, mode, cval, truncate)
     58     depth, boundary = _utils._get_depth_boundary(input.ndim, depth, "none")
     59
---> 60     result = input.map_overlap(
     61         scipy.ndimage.filters.gaussian_filter,
     62         depth=depth,

AttributeError: 'numpy.ndarray' object has no attribute 'map_overlap'
```
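The traceback shows the cause: `pt2d` is a plain NumPy array, while `dask_image.ndfilters.gaussian_filter` calls `map_overlap`, which only Dask arrays have. A sketch of the fix, assuming the array is wrapped with `da.from_array` first (the filter call is emulated here with `dask` and `scipy` so the snippet is self-contained):

```python
import numpy as np
import dask.array as da
import scipy.ndimage

pt2d = np.zeros((100, 100), dtype=np.float32)
pt2d[50, 50] = 1.0

# Wrap the NumPy array as a Dask array first; dask_image filters need the
# Dask array's map_overlap method.
d = da.from_array(pt2d, chunks=(50, 50))

# dask_image.ndfilters.gaussian_filter(d, 2.0) does roughly this under the
# hood: depth 8 covers the truncated kernel radius (truncate=4 * sigma=2).
smoothed = d.map_overlap(scipy.ndimage.gaussian_filter,
                         depth=8, boundary="none", sigma=2.0)
result = smoothed.compute()
```

The Gaussian kernel is normalized, so smoothing the unit point mass preserves its total mass.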
Looks like there have been various bugfixes and improvements since the last release. Perhaps the most interesting one is the new distributed implementation of `ndmeasure.label()` from @jni and @jakirkham (#94).
Is it time for a new release? I suppose you'll want to bump straight to 0.3, since Python 2.7 has been dropped (#119).
Seeing this test failure in this build.
```
_______________________________________ test__norm_input_labels_index _______________________________________

    def test__norm_input_labels_index():
        shape = (15, 16)
        chunks = (4, 5)
        ind = None

        a = np.random.random(shape)
        d = da.from_array(a, chunks=chunks)

        lbls = (a < 0.5).astype(int)
        d_lbls = da.from_array(lbls, chunks=d.chunks)

        d_n, d_lbls_n, ind_n = dask_image.ndmeasure._utils._norm_input_labels_index(
            d, d_lbls, ind
        )

        assert isinstance(d_n, da.Array)
        assert isinstance(d_lbls_n, da.Array)
        assert isinstance(ind_n, da.Array)

        dau.assert_eq(d_n, d)
        dau.assert_eq(d_lbls_n, d_lbls)
>       dau.assert_eq(ind_n, np.array([1], dtype=int))

tests/test_dask_image/test_ndmeasure/test__utils.py:57:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = array(1), b = array([1]), check_shape = True, kwargs = {}, a_original = dask.array<ones, shape=(), dtype=int64, chunksize=()>, b_original = array([1])
adt = dtype('int64'), bdt = dtype('int64')

    def assert_eq(a, b, check_shape=True, **kwargs):
        a_original = a
        b_original = b
        if isinstance(a, Array):
            assert a.dtype is not None
            adt = a.dtype
            _check_dsk(a.dask)
            a = a.compute(scheduler='sync')
            if hasattr(a, 'todense'):
                a = a.todense()
            if not hasattr(a, 'dtype'):
                a = np.array(a, dtype='O')
            if _not_empty(a):
                assert a.dtype == a_original.dtype
            if check_shape:
                assert_eq_shape(a_original.shape, a.shape, check_nan=False)
        else:
            if not hasattr(a, 'dtype'):
                a = np.array(a, dtype='O')
            adt = getattr(a, 'dtype', None)
        if isinstance(b, Array):
            assert b.dtype is not None
            bdt = b.dtype
            _check_dsk(b.dask)
            b = b.compute(scheduler='sync')
            if not hasattr(b, 'dtype'):
                b = np.array(b, dtype='O')
            if hasattr(b, 'todense'):
                b = b.todense()
            if _not_empty(b):
                assert b.dtype == b_original.dtype
            if check_shape:
                assert_eq_shape(b_original.shape, b.shape, check_nan=False)
        else:
            if not hasattr(b, 'dtype'):
                b = np.array(b, dtype='O')
            bdt = getattr(b, 'dtype', None)

        if str(adt) != str(bdt):
            diff = difflib.ndiff(str(adt).splitlines(), str(bdt).splitlines())
            raise AssertionError('string repr are different' + os.linesep +
                                 os.linesep.join(diff))

        try:
>           assert a.shape == b.shape
E           AssertionError

.../lib/python3.6/site-packages/dask/array/utils.py:112: AssertionError
```
Ran `pytest` on CI using the latest copy of Dask.
From @jakirkham on August 3, 2017 17:22
Right now we test `float64` quite a bit, but we need to test other types to make sure things are behaving correctly. In particular, we should start testing integer data.
Copied from original issue: dask-image/dask-ndmeasure#55
From @jakirkham on May 4, 2017 21:3
Pulled from the listing in issue ( dask-image/dask-ndfilters#13 ).
Some of the generic filters from SciPy's N-D filters module, `scipy.ndimage.filters`, were skipped initially. They are listed below. The intent is to wrap them using the `map_overlap` method of Dask Arrays with appropriate halos. Will also include tests to verify these retain the behavior that they would using ordinary NumPy Arrays. This issue exists as a reminder that these still need to be done.
Note: Have excluded 1-D filters from this for now. These have been added to a separate issue to be discussed and addressed later. ( dask-image/dask-ndfilters#14 )
Copied from original issue: dask-image/dask-ndfilters#22
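The wrapping strategy above can be sketched with one of these filters; the depth choice (a size-3 footprint needs a 1-pixel halo on each side) is the key detail:

```python
import numpy as np
import dask.array as da
import scipy.ndimage

image = np.arange(64, dtype=float).reshape(8, 8)
x = da.from_array(image, chunks=(4, 4))

# A size-3 footprint reaches 1 pixel past each chunk edge, so every chunk
# needs a halo (depth) of 1; map_overlap adds the halo and trims it again.
result = x.map_overlap(scipy.ndimage.generic_filter,
                       depth=1, boundary="reflect",
                       function=np.median, size=3)

# Away from the array border, the chunked result matches one whole-array call.
expected = scipy.ndimage.generic_filter(image, function=np.median, size=3)
```

Interior pixels never see the boundary padding, so they agree exactly between the chunked and unchunked computations; only the outermost ring depends on the boundary mode chosen.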
I think that in order to use the imread
function I need to dive within the imread module today
import dask_image.imread
x = dask_image.imread.imread('...')
This might be more user friendly if it was just
import dask_image
x = dask_image.imread('...')
Separately, it also looks like this is the only function in the imread
module. I wonder if maybe having a seaprate module here is overkill for now?
Would be great to use Netlify to build and preview doc changes (particularly for PRs).
In scipy.ndimage.measurements.center_of_mass, the function returns indices of type float, whereas dask_image.ndmeasure.center_of_mass returns indices of type int. It seems like a relatively small difference, but to duplicate functionality it would be nice if this function could return floats.
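For reference, SciPy's behaviour on a small example shows why floats are needed: a centroid generally falls between pixels.

```python
import numpy as np
import scipy.ndimage

img = np.zeros((5, 5))
img[1, 1] = 1.0
img[1, 2] = 1.0

# Two equal-weight pixels at (1, 1) and (1, 2): the centroid is (1.0, 1.5),
# which cannot be represented with integer indices.
com = scipy.ndimage.center_of_mass(img)
```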
From @jakirkham on May 16, 2017 18:35
The list of functions below is what this library intends to support with Dask Arrays. These all come from SciPy; in particular, from `scipy.ndimage.interpolation`. The intent is to emulate their behavior with Dask Arrays, as wrapping them is not likely to work. Will also include tests to verify these retain the behavior that the SciPy functions ordinarily have.
Copied from original issue: dask-image/dask-ndinterp#3
From @jakirkham on May 9, 2017 3:13
Currently the `fourier_*` functions only accept `-1` for the `n` argument, though in SciPy's implementations it is possible to use positive semi-definite values for `n`. As this wasn't really a priority for the initial implementations, this was merely marked as unsupported. It would be nice to be able to provide equivalent support to the SciPy implementation of these functions w.r.t. other values for `n`.
Copied from original issue: dask-image/dask-ndfourier#5
From @jakirkham on May 11, 2017 1:24
Appears that SciPy allows these strange things and we are not consistent with SciPy in our `fourier_uniform` implementation. That said, it seems like ours is consistent in the limit. Not really sure if we want to do anything about it yet. Just noting the differences.
Copied from original issue: dask-image/dask-ndfourier#17
It would be good to add this project to the Dask webpage. Also would be nice if the docs could be served there as well. Any advice/help in doing this would be greatly appreciated.
Feature request: a convenience function for calculating the area of label regions in an image.
I was thinking about creating a function `dask_image.ndmeasure.area()` to calculate the area of label regions given a labelled image (Juan suggests I base it around using `dask.array.bincount()`). Measuring the area of segmented objects is common to many types of quantitative image analysis, so I think it makes sense to have a convenience function for it.
This issue is for discussion, including discussion on the consistency with the rest of the dask-image API.
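A minimal sketch of what such a function might look like, using `dask.array.bincount` as suggested (the `area` name and signature are just this proposal, not an existing dask-image API):

```python
import numpy as np
import dask.array as da


def area(label_image):
    # Count the pixels carrying each label; label 0 is conventionally
    # background, so it is dropped from the result.
    n = int(label_image.max()) + 1  # small eager compute for the max label
    counts = da.bincount(label_image.ravel(), minlength=n)
    return counts[1:]


labels = da.from_array(np.array([[0, 1, 1],
                                 [2, 2, 2],
                                 [0, 0, 1]]), chunks=2)
areas = area(labels).compute()  # pixel count per label, labels 1..n
```

Passing `minlength` keeps the output shape known, so the result composes with further lazy operations.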
As a good large dataset to work with and show examples on, it might be nice to have an EM dataset. Hopefully others can point us to something on cloud storage that would be easy to access with friendly licensing.
The `sum` function in `ndmeasure` here overlaps with the `sum` builtin. Basically this is a consequence of following the SciPy API. That said, it would be nice to improve things here. Though it would be good to see if SciPy is amenable to such a change first and, if so, follow their lead. ( scipy/scipy#10593 )
We want to implement a dask-image version of `scipy.ndimage.fourier_ellipsoid` (see the docs here).
This is expected to be a little more difficult than the rest of the functions implemented in #21. That issue has some discussion of implementation details.
From @jakirkham on June 7, 2017 20:3
We had to add a hack for the CIs in commit ( dask-image/dask-imread@ffd6cc5 ) to `pip` install `slicerator` in addition to `conda` installing it, as `pip` could not recognize that it was already installed. This seems to be some consequence of the choice to use Python modules and not a Python package to hold the library content.
Have proposed a fix upstream in PR ( soft-matter/slicerator#26 ), which seems to resolve the issue. Once upstream merges that fix and releases, we should drop this hack. Though admittedly that will hinge on dropping Python 3.4, as we won't have a new Python 3.4 package to work with from conda-forge.
Copied from original issue: dask-image/dask-imread#7
From @jakirkham on August 7, 2017 23:15
Not entirely sure what the cause of this issue is, but using `dask-ndmeasure` on Python 2 is pretty slow compared to using it on Python 3. For instance, running the test suite ends up being ~50% or more slower on Python 2 compared to Python 3. Given that Python 2 is legacy at this point, maybe this isn't a priority, but the issue is worth being aware of.
Copied from original issue: dask-image/dask-ndmeasure#73
This now has the content from the dask-image org copied over thanks to PR ( #10 ). Would be good to have a parity release here to make it easy for users to transition from the old packages from the dask-image org to the new dask-image package before changes here potentially make that difficult.
From @jakirkham on June 9, 2017 0:17
When trying to do autocompletion with IPython, it treats the `maximum_filter`, `median_filter`, and `minimum_filter` functions as if they are variables. As they are constructed differently from the other filters, this is not surprising. Still, it would be nice if we could have them behave more like functions in this regard. Screenshot below to elucidate this.
Copied from original issue: dask-image/dask-ndfilters#31
Hi @jakirkham,
I have some potential contributions to make here as I am currently using dask to write a parallel image registration library.
Right now I have an implementation of image warping that you might be interested in. It allows warping with a vector field of pixel displacements (like skimage.transform.warp) by creating a large linear operator matrix using scipy sparse and then multiplying with the flattened image. It can only go as fast as scipy's sparse matmul implementation, but I couldn't find a faster way because of the horrible nature of the memory access pattern for arbitrary image warping.
If you are interested please let me know and I'll shape it into a PR.
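The construction described could be sketched like this (a nearest-neighbour variant for brevity; an actual registration library would presumably interpolate, spreading each row's weight over several source pixels):

```python
import numpy as np
from scipy import sparse


def warp_nearest(image, displacement):
    # displacement has shape (2, H, W): per-pixel (dy, dx) offsets.
    # Build a sparse (H*W, H*W) operator mapping source pixels to destination
    # pixels, then warp with a single sparse matrix-vector product.
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(yy + displacement[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx + displacement[1]).astype(int), 0, w - 1)
    rows = (yy * w + xx).ravel()
    cols = (src_y * w + src_x).ravel()
    data = np.ones(h * w)
    op = sparse.csr_matrix((data, (rows, cols)), shape=(h * w, h * w))
    return (op @ image.ravel()).reshape(h, w)


image = np.arange(12, dtype=float).reshape(3, 4)
identity = warp_nearest(image, np.zeros((2, 3, 4)))  # zero displacement
```

Once built, the operator can be reused across frames, which is where the sparse-matrix formulation pays off despite the scattered memory access.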
From @jakirkham on August 4, 2017 16:2
The current implementation of `label` in PR ( dask-image/dask-ndmeasure#57 ) does not support multiple chunks, only 1 chunk along all dimensions. This is still useful when working with individual images that fit in memory. Still, to better utilize compute resources and to extend to even larger data, it would be nice to be able to support multiple chunks.
Copied from original issue: dask-image/dask-ndmeasure#58
From @jakirkham on May 30, 2017 15:18
Make use of `wraps` on all of the SciPy-like functions in `dask-ndfourier`.
Copied from original issue: dask-image/dask-ndfourier#31
From @jakirkham on June 7, 2017 19:54
Instead of specifying the number of frames, `nframes`, for `imread`, it might make more sense to have a `chunks` argument. We could handle this in a couple of ways:

- accept `None` (or the full length)
- `rechunk` afterwards
- pass chunks to `_read_frame` somehow so as to get chunks

xref: dask-image/dask-imread#2 (comment)
Copied from original issue: dask-image/dask-imread#6
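The "rechunk afterwards" option, for instance, might look like this (the zeros array is a stand-in for `imread` output with one frame per chunk):

```python
import numpy as np
import dask.array as da

# Stand-in for imread output: one frame per chunk.
frames = da.from_array(np.zeros((100, 64, 64), dtype=np.uint8),
                       chunks=(1, 64, 64))

# Honour a user-supplied chunks argument by rechunking after loading.
frames = frames.rechunk((25, 64, 64))
```

This is the simplest option to implement, though rechunking after the fact still materialises single-frame tasks in the graph, which the `_read_frame` option would avoid.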
From @jakirkham on November 24, 2017 23:48
Not sure if it is possible, due to the fact that `histogram` can return `None`. ( dask-image/dask-ndmeasure#56 ) That said, it would be nice if we could rewrite `histogram` without `delayed`.
Copied from original issue: dask-image/dask-ndmeasure#82
Would be good to consolidate the work from the dask-image org libraries in this library. While it made sense to keep those libraries small and subject-oriented in the beginning to keep work focused and allow it to progress incrementally, it now makes more sense to have one library that consolidates this functionality under a single API. Should improve usability and visibility of this work. Also should help by providing a singular point to engage users and developers alike.
I found myself reaching for an `imsave` function to complement `imread`. Presumably this would have similar semantics, and would effectively map over the `skimage.io.imsave` function, or something else in pims.
I don't have a concrete need though, this just came up when writing up an example.
We need some good example data for tutorials with dask-image.
This issue is a place for discussion and suggestions. If you have links, add them here!
Ideally this data should:
It would be nice to have
What we want to avoid:
EDIT: napari/napari#316
Just saw this tweet announcing a human brain MRI at 100µm isotropic resolution. This could be a very cool dataset to use as a napari demo. I suggest we use this issue to keep track of datasets that we could put in napari once we have proper data downloading. Please just edit the checklist below to add your preferred demo data.
* [ ] 100µm resolution human brain: https://twitter.com/ComaRecoveryLab/status/1134436231775961088
* [ ] 10m resolution vegetation cover in Victoria: http://francois-petitjean.com/Research/MonashVegMap/info.php and https://labo.obs-mip.fr/multitemp/mapping-a-part-of-australia-at-10-m-resolution/
* [ ] correlative superres https://www.biorxiv.org/content/10.1101/773986v1.abstract
* [x] SARS-CoV2 in gut epithelium https://twitter.com/notjustmoore/status/1256232842755014656
* [ ] developing sea squirt https://www.nytimes.com/2020/07/09/science/sea-squirts-embryos.html
* [ ] mechanobiology of intestinal organoids https://twitter.com/XavierTrepat/status/1308026944349450241
* [ ] tracking of particles on astral microtubules ([paper](https://www.biorxiv.org/content/10.1101/2020.06.17.154260v1), [tweet (😍)](https://twitter.com/the_Node/status/1341050276011237379)), could make a really neat demo for the tracks layer.
* [ ] [Sentinel-2 1y Cloud optimised geotiff dataset](https://medium.com/sentinel-hub/digital-twin-sandbox-sentinel-2-collection-available-to-everyone-20f3b5de846e)
* [ ] Calcium imaging in the Drosophila ellipsoid body ([2013](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3830704/) and [2015](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4704792/))
* [ ] Janelia FlyLight data ([AWS](https://registry.opendata.aws/janelia-flylight/))
* [ ] Fruit Fly Brain Observatory (FFBO) ([tweet](https://twitter.com/FlyBrainObs/status/1369496338266750977))
* [ ] Janelia [Open Organelle datasets](https://openorganelle.janelia.org/)
* [ ] CZBiohub [Open Cell](https://opencell.czbiohub.org/about)
* [ ] [CoCo](https://cocodataset.org/#download) + [Voxel51 datasets](https://voxel51.com/docs/fiftyone/user_guide/using_datasets.html)
* [ ] @tlambert03's lattice light sheet dataset used in the dask application post https://www.ebi.ac.uk/biostudies/studies/S-BSST435?query=Talley%20lambert
two cryo-ET datasets to add to the pile
- https://zenodo.org/record/6504891 - deconvolved tomogram of HIV virus-like particles + annotations
- https://zenodo.org/record/6504949 - denoised tomogram of a M. pneumoniae cell
There are also some more: developing Tribolium embryos, mouse brain slices, and mouse colon volumes:
... and this 3D cell tracking dataset is gorgeous! http://celltrackingchallenge.net/
C. elegans developing embryo
Waterston Lab, University of Washington, Seattle, WA, USA
Training dataset: http://data.celltrackingchallenge.net/training-datasets/Fluo-N3DH-CE.zip✱ (3.1 GB)
Challenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/Fluo-N3DH-CE.zip (1.7 GB)
Microscope: Zeiss LSM 510 Meta
Objective lens: Plan-Apochromat 63x/1.4 (oil)
Voxel size (microns): 0.09 x 0.09 x 1.0
Time step (min): 1 (1.5)
Additional information: Nature Methods, 2008
Tracking issue for adding support for Python 3.8
Now that major dependencies like scipy and dask have added (or are adding) support for Python 3.8, we should think about doing the same.
Progress tracking issues:
scipy scipy/scipy#10927
dask dask/dask#5493
From @jakirkham on May 2, 2017 23:56
The list of functions below are a set of 1-D filters. These all come from SciPy; in particular, from `scipy.ndimage.filters`. They are not a near term priority. They may be included in this library or potentially another wrapper library.
Copied from original issue: dask-image/dask-ndfilters#14
I have about 20k 500x500 px images stored in a directory.
My ultimate goal is to combine them into only a few images, tiling the 500x500 blocks into larger mosaics.
I first wanted to see how fast I could load the images.
```python
arr = dask_image.imread.imread('*.png')[:100].compute()
```

versus

```python
from PIL import Image
arr = [Image.open(img) for img in glob.glob('*.png')[:100]]
```
The second approach performs much, much faster than the first (the first hangs).
Can you help me understand why this is so and what I can do to speed this up?
Chunk size is (1, 500, 500, 3), but I changed it to (25, 500, 500, 3) and it didn't really make a difference. I would normally expect this to be a fast and easy Dask operation. I even tried process-based and threaded distributed clients with no luck.
Help is very much appreciated. Thanks!
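For background on where the per-file overhead comes from, a lazy image stack is typically assembled from one delayed task per file; this generic sketch (not dask-image's actual implementation) shows the structure, where `load` would be e.g. `PIL.Image.open` followed by `np.asarray`:

```python
import numpy as np
import dask
import dask.array as da


def lazy_image_stack(paths, load, shape, dtype):
    # One delayed task per file: nothing is read until compute().
    # Scheduling thousands of tiny tasks is pure overhead unless the
    # per-file work is large enough to benefit from parallelism.
    arrays = [da.from_delayed(dask.delayed(load)(p), shape=shape, dtype=dtype)
              for p in paths]
    return da.stack(arrays)
```

With 20k small PNGs, the graph-construction and scheduling cost per task can dominate the actual decode time, which is one reason a plain loop can win for a quick sequential read.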
I was just starting to work on a PIL plugin for intake to support the handling of image stacks. I was hoping to make something that takes as input paths, file objects, URLs, or S3 at a minimum. The output, I think, would be xarray dask arrays.
Do you think this project is sturdy enough to be built on top of in that way or is it moving too quickly?
Here are a few notes about the documentation.
This is a question about the API design for label indices in `dask_image.ndmeasure` functions.
Where `labels` is given but `index` is None, the label array is overwritten and becomes a mask image. This was not very intuitive for me, and means the behaviour of the `ndmeasure` functions is inconsistent. In some cases (`index=None`) you get an aggregate value, and in others you get values for each individual label (even if there are multiple indices).
I think there's an argument to be made that if `labels` is given and `index=None`, the default value should be the range of all non-zero labels (e.g. [1, 2, 3, ..., n]). This would mean (a) no nasty surprise aggregations, and (b) you wouldn't need to near-constantly write `index=da.arange(da.max(labels))` or revert to the slightly clunkier `labeled_comprehension()` syntax (I can never remember the six input arguments).
This could be done by replacing this:

```python
def _norm_input_labels_index(input, labels=None, index=None):
    ...
    elif index is None:
        labels = (labels > 0).astype(int)
        index = dask.array.ones(tuple(), dtype=int, chunks=tuple())
```

with this instead:

```python
def _norm_input_labels_index(input, labels=None, index=None):
    ...
    elif index is None:
        index = dask.array.arange(dask.array.max(labels) + 1)[1:]
```

and making a separate `mask()` convenience function available.
In my view it's much clearer that `area(input, mask(labels))` is expected to return an aggregate value, compared to `area(input, labels, index=None)`.
Add a note to CONTRIBUTING.rst under the headings 'Fix bugs' and 'Implement features' to say how to create the development conda environment (the environment files are hiding in the hidden folders for Travis, AppVeyor and CircleCI). As an example, PR ( #90 ) shows the text added for testing the docs.
From @jakirkham on May 8, 2017 15:55
The list of functions below is what this library intends to support with Dask Arrays. These all come from SciPy; in particular, from `scipy.ndimage.fourier`. The intent is to emulate their behavior with Dask Arrays, as wrapping them is not likely to work. Will also include tests to verify these retain the behavior that the SciPy functions ordinarily have.
Copied from original issue: dask-image/dask-ndfourier#2
From @jakirkham on June 7, 2017 20:19
Currently we have some tests for TIFFs. However, there are no tests for any other image formats. Might be worth covering a couple of basic ones, though don't want to go too far into the weeds. Might be best to see some use cases before digging in.
Edit: Also might need to switch to something like `imageio` for testing so as to generate a large variety of test data.
Copied from original issue: dask-image/dask-imread#8
I was wondering what the relationship is between this package and the function `dask.array.image.imread` that's already part of dask. Especially as I detected that `dask.array.image.imread` doesn't actually make use of the remote data, so I couldn't give it an `s3://` protocol.
From @jakirkham on June 9, 2017 3:18
After attempts to build a package of dask-imread
for conda-forge, there appeared to be errors caused by deleting the temporary directory at the end of the tests. The issue was resolved by dropping that temporary directory deletion from the tests in PR ( dask-image/dask-imread#11 ). Not entirely sure why this occurs, but there is some speculation that something is not being cleaned up properly (an open file handle perhaps). Will require some debugging on Windows to know for sure. Also these tips may help.
Copied from original issue: dask-image/dask-imread#12
We had some discussions on #94 about coding style. @jakirkham suggested we open a specific issue to discuss them.
I have three specific things that I find "inelegant" (yes, I know, it's imprecise) as currently implemented in `dask_image`:

- `dask.array` instead of `da`, `numpy` instead of `np`. Some of these abbreviations are very ingrained in the community and the full specification is surprising to read. (Not to mention annoying to write.)
- `__init__.py`. Again, this is unconventional. `scikit-image` doesn't use these and still gets a nice API generation from sphinx, so I don't think it's a major hurdle to get sphinx to behave well for a more conventional code structure.

I'll throw a bonus idea in here: we should rename `input` in all the ndimage functions. It was a bad choice from SciPy, and they're stuck with it, but I don't think dask-image should be bound by it.
Anyway, whatever is decided with the above points, it should probably be codified somewhere, and preferably it should reference the style guides for bigger projects, so that these aren't "just for dask-image" conventions.
Raised by @jni during the sprint, it would be nice to support some common thresholding methods. For the most part these should be pretty straightforward to implement. Would be good to gather some ideas about which methods are useful to include/support.
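As one illustration of how straightforward some of these can be, Otsu's method reduces to a histogram, which Dask already computes chunk by chunk; everything after that works on just `nbins` numbers (this is a sketch, not dask-image API):

```python
import numpy as np
import dask.array as da


def threshold_otsu(image, nbins=256):
    # The only full pass over the data is the histogram, which Dask
    # evaluates chunk by chunk; the rest is tiny NumPy work.
    lo, hi = float(image.min()), float(image.max())
    counts, edges = da.histogram(image, bins=nbins, range=[lo, hi])
    counts = counts.compute().astype(float)
    centers = (edges[:-1] + edges[1:]) / 2

    weight1 = np.cumsum(counts)              # pixels at or below each split
    weight2 = np.cumsum(counts[::-1])[::-1]  # pixels at or above each split
    mean1 = np.cumsum(counts * centers) / weight1
    mean2 = (np.cumsum((counts * centers)[::-1]) / np.cumsum(counts[::-1]))[::-1]
    # Between-class variance for every candidate split point.
    variance = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return centers[np.argmax(variance)]
```

Thresholding itself is then just `image > t`, which stays lazy and elementwise.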