
Neuroglancer: Web-based volumetric data visualization

Neuroglancer is a WebGL-based viewer for volumetric data. It is capable of displaying arbitrary (non-axis-aligned) cross-sectional views of volumetric data, as well as 3-D meshes and line-segment-based models (skeletons).

This is not an official Google product.

Examples

A live demo is hosted at https://neuroglancer-demo.appspot.com. (The prior link opens the viewer without any preloaded dataset.) Use the viewer links below to open the viewer preloaded with an example dataset.

The four-pane view consists of 3 orthogonal cross-sectional views as well as a 3-D view (with independent orientation) that displays 3-D models (if available) for the selected objects. All four views maintain the same center position. The orientation of the 3 cross-sectional views can also be adjusted, although they maintain a fixed orientation relative to each other. (Try holding the shift key and either dragging with the left mouse button or pressing an arrow key.)

Supported data sources

Neuroglancer itself is purely a client-side program, but it depends on data being accessible via HTTP in a suitable format. It is designed to easily support many different data sources, and there is existing support for the following data APIs/formats:

Supported browsers

  • Chrome >= 51
  • Firefox >= 46
  • Safari >= 15.0

Keyboard and mouse bindings

For the complete set of bindings, see src/ui/default_input_event_bindings.ts, or within Neuroglancer, press h or click on the button labeled ? in the upper right corner.

  • Click on a layer name to toggle its visibility.

  • Double-click on a layer name to edit its properties.

  • Hover over a segmentation layer name to see the current list of objects shown and to access the opacity sliders.

  • Hover over an image layer name to access the opacity slider and the text editor for modifying the rendering code.

Troubleshooting

  • Neuroglancer doesn't appear to load properly.

    Neuroglancer requires WebGL (2.0) and the EXT_color_buffer_float extension.

    To troubleshoot, check the developer console, which is accessed by the keyboard shortcut control-shift-i in Firefox and Chrome. If there is a message regarding failure to initialize WebGL, you can take the following steps:

    • Chrome

      Check chrome://gpu to see if your GPU is blacklisted. There may be a flag you can enable to make it work.

    • Firefox

      Check about:support. There may be webgl-related properties in about:config that you can change to make it work. Possible settings:

      • webgl.disable-fail-if-major-performance-caveat = true
      • webgl.force-enabled = true
      • webgl.msaa-force = true
  • Failure to access a data source.

    As a security measure, browsers will in many cases prevent a webpage from accessing the true error code associated with a failed HTTP request. It is therefore often necessary to check the developer tools to see the true cause of any HTTP request error.

    There are several likely causes:

    • Cross-origin resource sharing (CORS)

      Neuroglancer relies on cross-origin requests to retrieve data from third-party servers. As a security measure, if an appropriate Access-Control-Allow-Origin response header is not sent by the server, browsers prevent webpages from accessing any information about the response from a cross-origin request. In order to make the data accessible to Neuroglancer, you may need to change the cross-origin request sharing (CORS) configuration of the HTTP server.

    • Accessing an http:// resource from a Neuroglancer client hosted at an https:// URL

      As a security measure, recent versions of Chrome and Firefox prohibit webpages hosted at https:// URLs from issuing requests to http:// URLs. As a workaround, you can use a Neuroglancer client hosted at a http:// URL, e.g. the demo client running at http://neuroglancer-demo.appspot.com, or one running on localhost. Alternatively, you can start Chrome with the --disable-web-security flag, but that should be done only with extreme caution. (Make sure to use a separate profile, and do not access any untrusted webpages when running with that flag enabled.)
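For the CORS case, a quick development-time workaround is to serve the data yourself with an Access-Control-Allow-Origin response header. A minimal sketch in Python (serving the current directory; names and port are illustrative, and this is not suitable for production):

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Allow any origin (e.g. a Neuroglancer client) to read responses.
        self.send_header('Access-Control-Allow-Origin', '*')
        super().end_headers()

if __name__ == '__main__':
    # Serves the current directory at http://localhost:8000 with CORS enabled.
    ThreadingHTTPServer(('localhost', 8000), CORSRequestHandler).serve_forever()
```

A server like this also sidesteps the https://-to-http:// restriction when used together with a Neuroglancer client running on localhost.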

Multi-threaded architecture

In order to maintain a responsive UI and data display even during rapid navigation, work is split between the main UI thread (referred to as the "frontend") and a separate WebWorker thread (referred to as the "backend"). This introduces some complexity due to the fact that current browsers:

  • do not support any form of shared memory or standard synchronization mechanism (although they do support relatively efficient transfers of typed arrays between threads);
  • require that all manipulation of the DOM and the WebGL context happens on the main UI thread.

The "frontend" UI thread handles user actions and rendering, while the "backend" WebWorker thread handles all queuing, downloading, and preprocessing of data needed for rendering.
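The hand-off can be pictured as a producer/consumer pair. Neuroglancer itself implements this with a WebWorker and postMessage; the following Python sketch is only an analogy using threads and a queue:

```python
import queue
import threading

# Stand-in for the message channel between the two threads.
chunk_queue: "queue.Queue[bytes]" = queue.Queue()

def backend(chunk_ids):
    """Queue, 'download', and preprocess chunks off the UI thread."""
    for cid in chunk_ids:
        data = bytes([cid])       # stand-in for fetch + decompression
        chunk_queue.put(data)     # analogous to postMessage with a typed array

def frontend(n):
    """The UI thread only consumes render-ready data."""
    return [chunk_queue.get() for _ in range(n)]

threading.Thread(target=backend, args=([1, 2, 3],)).start()
result = frontend(3)
print(result)
```

In the real viewer the constraint is stricter than this analogy suggests: there is no shared memory, so ownership of each typed array is transferred between threads rather than shared.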

Documentation Index

Building

node.js is required to build the viewer.

  1. First install NVM (Node Version Manager) per the instructions at
     https://github.com/creationix/nvm.

  2. Install a recent version of Node.js if you haven't already done so:

    nvm install stable

  3. Install the dependencies required by this project:

    (From within this directory)

    npm i

    Also re-run this any time the dependencies listed in package.json may have changed, such as after checking out a different revision or pulling changes.

  4. To run a local server for development purposes:

    npm run dev-server

    This will start a server on http://localhost:8080.

  5. To run the unit test suite on Chrome:

    npm test

  6. See package.json for other commands available.

Discussion Group

There is a Google Group/mailing list for discussion related to Neuroglancer: https://groups.google.com/forum/#!forum/neuroglancer.

Related Projects

Contributing

Want to contribute? Great! First, read CONTRIBUTING.md.

License

Copyright 2016 Google Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this software except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Issues

unify mesh source format

It looks like there are two different fragment mesh formats currently being used:

Brainmaps: https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/brainmaps/backend.ts#L212
Precomputed: https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed/backend.ts#L60

Does anyone object to changing the precomputed source to use the brainmaps format? Or, perhaps, adding a flag in the manifest to specify the source type? Opening this issue for discussion!

Ndviz not getting render z bounds correctly

Ndviz does not display the full range of z values in a stack. For example, if I have z values 1-10, it does not display z=10. Looks like the render bounds checking is off by 1.
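For illustration, with inclusive bounds the slice count is z_max - z_min + 1, so an exclusive upper-bound check drops exactly the last slice:

```python
def z_slice_count(z_min, z_max):
    # Inclusive bounds: z values 1..10 contain 10 slices.
    return z_max - z_min + 1

# An exclusive check (z < z_max) would stop at z=9 and never display z=10,
# matching the reported behavior.
print(z_slice_count(1, 10))
```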

blend mode state is not properly url encoded

I am using ndviz with render and my render web service has a default value set of:
WEB_SERVICE_MAX_TILE_SPECS_TO_RENDER=20
to speed rendering by rendering only bounding boxes when the FOV is larger than 20.
When I have a multi-layer view, and am zoomed in sufficiently, I can see both layers rendered. My URL looks like:
http://em-131snb1:8001/#!{'layers':{'12_section_montage_split0':{'type':'image'_'source':'render://http://em-131snb1:8080/danielk/MM2/12_section_montage_split0'}_'split1':{'type':'image'_'source':'render://http://em-131snb1:8080/danielk/MM2/12_section_montage_split1/'_'color':2}}_'navigation':{'pose':{'position':{'voxelSize':[1_1_1]_'voxelCoordinates':[23398.046875_249166.5_1028.5]}}_'zoomFactor':27.11263892065789}_'blend':'additive'}

Now, if I'd like to zoom out and want to render more tiles, I manually add maxTileSpecsToRender to the URL like this:
http://em-131snb1:8001/#!{'layers':{'12_section_montage_split0':{'type':'image'_'source':'render://http://em-131snb1:8080/danielk/MM2/12_section_montage_split0?maxTileSpecsToRender=500'}_'split1':{'type':'image'_'source':'render://http://em-131snb1:8080/danielk/MM2/12_section_montage_split1?maxTileSpecsToRender=500/'_'color':2}}_'navigation':{'pose':{'position':{'voxelSize':[1_1_1]_'voxelCoordinates':[23398.046875_249166.5_1028.5]}}_'zoomFactor':27.11263892065789}_'blend':'additive'}

Once I do this, the 2 layers can't be seen at the same time anymore. I.e. the blending isn't working anymore.

On the other hand, if, when I add the 2nd layer, I add ?maxTileSpecsToRender=500 in the dialog's HTML text box before pressing "add layer", then I do get the desired behavior, though the URL looks basically the same:

http://em-131snb1:8001/#!{'layers':{'12_section_montage_split0':{'type':'image'_'source':'render://http://em-131snb1:8080/danielk/MM2/12_section_montage_split0?maxTileSpecsToRender=500'}_'split1':{'type':'image'_'source':'render://http://em-131snb1:8080/danielk/MM2/12_section_montage_split1/?maxTileSpecsToRender=500'_'color':2}}_'navigation':{'pose':{'position':{'voxelSize':[1_1_1]_'voxelCoordinates':[62600.31640625_295237.78125_1028.5]}}_'zoomFactor':90.01713130052188}_'blend':'additive'}

so, it seems like the URL is not fully capturing the blending state, and I can't manually tweak the URL and maintain the correct blending.

New feature: click on 3D rendering to center XY(Z) images

I'm not sure if this is the right place to ask for features. But it would be extremely helpful to be able to command-click on a feature in the 3d rendering and have the image viewers center on it. It could just search for the nearest boundary or mesh-plane "below" the cursor. That way you could quickly look for synapses, for instance.
Thanks

nyroglancer: could not register token: [Errno 111] Connection refused

Running the nyroglancer example on a local anaconda version of jupyter I get this error:

/home/rodrigo/anaconda2/envs/NT/lib/python2.7/site-packages/neuroglancer/base_viewer.pyc in get_json_state(self)
74 specified_names = set(layer.name for layer in self.layers)
75 for layer in self.layers:
---> 76 self.register_volume(layer.volume)
77 name = layer.name
78 if name is None:

/home/rodrigo/anaconda2/envs/NT/lib/python2.7/site-packages/nyroglancer-1.0.2-py2.7.egg/nyroglancer/viewer.pyc in register_volume(self, volume)
56 response = http_client.fetch(self.get_server_url() + '/register_token/' + volume.token.decode('utf8') + '/' + cf)
57 except Exception as e:
---> 58 raise RuntimeError("could not register token: " + str(e))
59 http_client.close()
60

RuntimeError: could not register token: [Errno 111] Connection refused

Running this from a jupyter notebook with sudo:
sudo jupyter notebook --allow-root

Fixes the issue, but this only works if you have jupyter installed as root (which is not a reasonable solution for servers, etc.).

*Crossposted on nyroglancer repo (feel free to close if this issue is not due to neuroglancer)

Typescript 2.6.1 Compilation Errors

If I check out master today into an empty directory and run npm i from npm v5.0.3 for node v8.0.0, I've installed all the packages in this package-lock.json, which includes typescript v2.6.1.

When I try to run npm run dev-server as usual, typescript highlights an unused variable:

    ERROR in ./src/neuroglancer/chunk_manager/backend.ts
    [tsl] ERROR in /Users/John/2017/fall/neuroglancer/src/neuroglancer/chunk_manager/backend.ts(586,9)
          TS6133: 'visibleChunksChanged' is declared but its value is never read.

as well as a couple of technical edge cases:

    ERROR in ./src/neuroglancer/util/google_oauth2.ts
    [tsl] ERROR in /Users/John/2017/fall/neuroglancer/src/neuroglancer/util/google_oauth2.ts(254,9)
          TS2531: Object is possibly 'null'.

    ERROR in ./src/neuroglancer/webgl/shader.ts
    [tsl] ERROR in /Users/John/2017/fall/neuroglancer/src/neuroglancer/webgl/shader.ts(109,9)
          TS2531: Object is possibly 'null'.

    ERROR in ./src/neuroglancer/chunk_manager/frontend.ts
    [tsl] ERROR in /Users/John/2017/fall/neuroglancer/src/neuroglancer/chunk_manager/frontend.ts(222,5)
          TS2322: Type 'RefCounted' is not assignable to type 'T'.

A quick fix

Anyway, I just took the simplest possible steps to fix these errors in this pull request.

A side note

Before or after the changes, here is a potentially relevant warning that I ignore on every build:

ts-loader: Using /Users/John/2017/fall/neuroglancer/config/typescript@2.6.1. This version may or may not be compatible with ts-loader.

z slices at multiples of 16 & 16+1 display the same image on Intel graphics

In the X/Y view, identical images are displayed at z values of 16/17, 32/33, ... (n * 16) / (n * 16 + 1) when using Intel integrated graphics. The underlying intensities are loaded correctly, because the on-hover data displayed for the image intensity changes with the different z values. The X/Z and Y/Z views show distinct image data; it's just the X/Y view that doesn't update the image display. When using a different graphics chip (Nvidia/AMD), the images display correctly at the different z slices.

I've seen this behavior on notebooks running Windows and macOS with Intel graphics, in Chrome, Firefox, and Safari.

limit z range for mouse scrollwheel

Currently neuroglancer will allow you to ctrl-scroll in z well outside the range of any image stacks. I'm working with stacks that are just a few images deep, and it's easy to end up far away from the data. I'd love to see the scrollable z range limited to the z extent of the data (maybe padded with 1 slice to make people feel good about reaching the end). Thanks!

Firefox stable does not render meshes w/ objectAlpha setting

On Firefox stable (67.0.1 (64-bit)), meshes with objectAlpha parameter don't render (though segmentations do). Firefox developer edition and Chrome have no problems. (Linux).

Firefox stable developer console shows the following warnings (on viz.neurodata.io where we have the sourcemaps):

Error: WebGL warning: drawArrays: Float32 blending requires EXT_float_blend. offscreen.ts:317:7
Error: WebGL warning: drawArrays: Float32 blending requires EXT_float_blend.
Error: WebGL warning: drawElements: Float32 blending requires EXT_float_blend. frontend.ts:211:7

Link to reproduce: https://neuroglancer-demo.appspot.com/#!%7B%22layers%22:%5B%7B%22source%22:%22precomputed://https://s3.amazonaws.com/zbrain/atlas_owen%22%2C%22type%22:%22segmentation%22%2C%22objectAlpha%22:0.94%2C%22segments%22:%5B%221%22%2C%2210%22%2C%2211%22%2C%2212%22%2C%2213%22%2C%2214%22%2C%2215%22%2C%2216%22%2C%2217%22%2C%2218%22%2C%2219%22%2C%222%22%2C%2220%22%2C%2221%22%2C%2222%22%2C%2223%22%2C%2224%22%2C%2225%22%2C%2226%22%2C%2227%22%2C%2228%22%2C%2229%22%2C%223%22%2C%2230%22%2C%2231%22%2C%2232%22%2C%2233%22%2C%224%22%2C%225%22%2C%226%22%2C%227%22%2C%228%22%2C%229%22%5D%2C%22skeletonRendering%22:%7B%22mode2d%22:%22lines_and_points%22%2C%22mode3d%22:%22lines%22%7D%2C%22name%22:%22zbrain_atlas%22%7D%5D%2C%22navigation%22:%7B%22pose%22:%7B%22position%22:%7B%22voxelSize%22:%5B798%2C798%2C2000%5D%2C%22voxelCoordinates%22:%5B585.5321044921875%2C321.09796142578125%2C71.52733612060547%5D%7D%7D%2C%22zoomFactor%22:1490.8602740309134%7D%2C%22showDefaultAnnotations%22:false%2C%22perspectiveOrientation%22:%5B0.7711568474769592%2C-0.2878552973270416%2C0.13006828725337982%2C-0.5527555346488953%5D%2C%22perspectiveZoom%22:11789%2C%22showSlices%22:false%2C%22selectedLayer%22:%7B%22layer%22:%22zbrain_atlas%22%7D%2C%22layout%22:%224panel%22%7D

Possibly related to: #138

Setting up on-disk multiresolution images

Our lab is using neuroglancer for interacting with light sheet images of whole mouse brains that are >200GB. We are able to load these volumes into memory on our larger servers, but this hasn't been ideal. So far, I've used HDF5 and Zarr with the python API to serve up data from disk, but I don't think I'm using the multiresolution capabilities of neuroglancer to their fullest extent (the HDF5 files have just one resolution, for example). How hard would it be to set up a multiresolution data store locally rather than a cloud-based data store? Eventually, we'd like to be able to have neuroglancer clients efficiently display these datasets that live on our in-house storage servers.

Custom input controller

We want to connect our chair (the Limbic Chair), which can be used as an input device, to neuroglancer as an additional input device in order to enhance workflow. Our initial aim is to globally control the opacity of the colored-segmented view, so that you can switch between the original image and the segmentation view without having to move mouse or keyboard. My issue now is getting the data from the chair into neuroglancer; ideally that would be done via a UDP stream. Is that even possible, and what would be the wisest way to implement it? I am currently trying to use node.js's native UDP support; would that also work non-locally? Is there a better/smarter way to do it? Alternatively, I can send the data to the PC over a virtual COM port.
Thanks!

Segmentation Downsampling

Hi Jeremy!

While working on Seung Lab's processing pipeline, I found that it's possible to use numpy to correctly 2x2x1-downsample segmentation for at least one mip level at reasonable performance. Would you be interested in a contribution? Two benefits are: 1) recursively downsampled segmentation mip levels won't visually march diagonally as they load; 2) MIP 1 is 100% accurate, in that the most frequent pixel in a 2x2 grid is always chosen, so e.g. agglomeration algorithms can depend on it.

I have a 2x2x2 implementation as well that works in theory but that I haven't tested extensively.

I've written it up here: https://medium.com/towards-data-science/countless-high-performance-2x-downsampling-of-labeled-images-using-python-and-numpy-e70ad3275589

Thanks for the many enjoyable hours of neuroglancing. ^_^.

Will
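A rough sketch of the 2x2 majority-vote (mode) downsampling described in the linked article; the function name is illustrative, and it assumes even dimensions and labels small enough that adding 1 does not overflow:

```python
import numpy as np

def downsample_2x2_mode(labels):
    """Pick the most frequent label in each 2x2 block (ties fall to d)."""
    data = labels.astype(np.uint64) + 1  # offset so 0 can mean "no match"
    a, b = data[0::2, 0::2], data[0::2, 1::2]
    c, d = data[1::2, 0::2], data[1::2, 1::2]
    ab_ac = a * ((a == b) | (a == c))  # a wins if it matches b or c
    bc = b * (b == c)                  # otherwise b wins if it matches c
    winner = ab_ac | bc                # when both are nonzero they agree, so | is safe
    # Fall back to d where no pair matched, then undo the +1 offset.
    return (winner + (winner == 0) * d - 1).astype(labels.dtype)

seg = np.array([[1, 1],
                [2, 3]], dtype=np.uint32)
print(downsample_2x2_mode(seg))  # the majority label 1 wins
```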

Npm test failing

On 97c522b, npm test for me fails with following output:

> karma start ./config/karma.conf.js --single-run

/usr/people/macastro/SeungLabRepos/googleNG/neuroglancer/node_modules/karma/lib/server.js:118
  async start () {
        ^^^^^
SyntaxError: Unexpected identifier
    at Object.exports.runInThisContext (vm.js:53:16)
    at Module._compile (module.js:513:28)
    at Object.Module._extensions..js (module.js:550:10)
    at Module.load (module.js:458:32)
    at tryModuleLoad (module.js:417:12)
    at Function.Module._load (module.js:409:3)
    at Module.require (module.js:468:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/people/macastro/SeungLabRepos/googleNG/neuroglancer/node_modules/karma/lib/cli.js:8:16)
    at Module._compile (module.js:541:32)
npm ERR! Test failed.  See above for more details.

(feature request) render data source with manual metadata

Due to some limitations in how render works, stack metadata is sometimes not available (when the stack is in a loading state), but for pipeline operations it is useful to leave it in a loading state. Therefore it would be useful to be able to pass stack bounds (or other metadata required by render) by query parameter in the data_source, and have the data source initialized via that mechanism rather than by querying the service. The box calls to get images still work in the loading state, so this would be a functional way of calling a data source. We would use this immediately at the Allen.

Does not work with tornado >= 5.1.1

Thanks for this great package.

The default installation from setup.py will install tornado 5.1.1, which does not work properly at the moment. It works with tornado 5.0.

This is the error message I got:

In [1]: import neuroglancer as ng                                                                           
---------------------------------------------------------------------------                                 
ImportError                               Traceback (most recent call last)                                 
<ipython-input-1-4c1b9d133e96> in <module>()                                                                
----> 1 import neuroglancer as ng                                                                           
                                                                                                            
/usr/local/lib/python3.5/dist-packages/neuroglancer/__init__.py in <module>()                               
     14                                                                                                     
     15 from __future__ import absolute_import                                                              
---> 16 from .server import set_static_content_source, set_server_bind_address, is_server_running, stop     
     17 from .static import dist_dev_static_content_source                                                  
     18 from .viewer import Viewer, UnsynchronizedViewer                                                    
                                                                                                            
/usr/local/lib/python3.5/dist-packages/neuroglancer/server.py in <module>()                                 
     28 import tornado.web                                                                                  
     29                                                                                                     
---> 30 import sockjs.tornado                                                                               
     31                                                                                                     
     32 from . import local_volume, static                                                                  
                                                                                                            
/usr/local/lib/python3.5/dist-packages/sockjs/tornado/__init__.py in <module>()                             
      1 # -*- coding: utf-8 -*-                                                                             
      2                                                                                                     
----> 3 from .router import SockJSRouter                                                                    
      4 from .conn import SockJSConnection                                                                  
                                                                                                            
/usr/local/lib/python3.5/dist-packages/sockjs/tornado/router.py in <module>()                               
      9 from tornado import ioloop, version_info                                                            
     10                                                                                                     
---> 11 from sockjs.tornado import transports, session, sessioncontainer, static, stats, proto              
     12                                                                                                     
     13                                                                                                     
                                                                                                            
/usr/local/lib/python3.5/dist-packages/sockjs/tornado/transports/__init__.py in <module>()                  
      1 # -*- coding: utf-8 -*-                                                                             
      2                                                                                                     
----> 3 import sockjs.tornado.transports.pollingbase                                                        
      4                                                                                                     
      5 from .xhr import XhrPollingTransport, XhrSendHandler                                                
                                                                                                            
/usr/local/lib/python3.5/dist-packages/sockjs/tornado/transports/pollingbase.py in <module>()               
      7 """                                                                                                 
      8                                                                                                     
----> 9 from sockjs.tornado import basehandler                                                              
     10 from sockjs.tornado.transports import base                                                          
     11                                                                                                     
                                                                                                            
/usr/local/lib/python3.5/dist-packages/sockjs/tornado/basehandler.py in <module>()                          
     11 import logging                                                                                      
     12                                                                                                     
---> 13 from tornado.web import asynchronous, RequestHandler                                                
     14                                                                                                     
     15 CACHE_TIME = 31536000                                                                               
                                                                                                            
ImportError: cannot import name 'asynchronous'                                                              

Image layer not rendering when running dev-server

After pulling from master and running npm install on ecdbc0d, when I add an image layer (such as precomputed://gs://neuroglancer-public-data/kasthuri2011/image), it does not render. However, the layer widget shows the 8-bit uints when I move my cursor, so the data appears to have loaded. It was working fine at the recent commit a65045a, so the problem seems to be caused by something in the 6 commits in between.

Key bindings ignore OS key remapping

I use the Dvorak keyboard layout. Both in Mac and Ubuntu, and in Firefox and Chromium, my OS-level remapping is ignored by neuroglancer. Instead, the key bindings are triggered by the "correct" physical key on the keyboard.

(As a side note, Control should perhaps be replaced by Command on the Mac, though this is debatable.)

Enabling VR support

I would like to add VR support for the Oculus to neuroglancer, especially for the 3D rendered view. I thought about using the WebVR API, and for the Oculus specifically the React VR API. Would this be the right approach, and if so, where is the rendering in neuroglancer located? Did anybody already do something similar? I hope this is the right place to ask; I thought about posting in the discussion group, but the contributing section says one should ask for advice through the issue tracker.
Thanks.

Exception in neuroglancer.downsample.downsample_with_averaging.

I received the following exception traceback. The root of the problem is here: https://github.com/google/neuroglancer/blob/master/python/neuroglancer/downsample.py#L27. The shape calculated is a tuple of floats, but Numpy (my version is 1.12.1) throws an exception when floats are used for a shape - I know Numpy seems to be increasingly picky, so an earlier version may work with this code.

I will supply a patch that worked for me shortly.

<ipython-input-85-3787469aa333> in <module>()
----> 1 json.dumps(nyroglancer.ndstore.Data.backend(nyroglancer.volumes['b907be05ef9b7d8863c6ca77ea85371169c8bd5b'], "2,2,1", "npz", 512, 640, 384, 448, 0, 32))

.../nyroglancer/nyroglancer/ndstore.pyc in backend(volume, scale_key, data_format, min_x, max_x, min_y, max_y, min_z, max_z)
     47         """
     48 
---> 49         (data, content_type) = volume.get_encoded_subvolume(data_format, [ min_x, min_y, min_z ], [ max_x, max_y, max_z ], scale_key=scale_key)
     50         return binascii.b2a_base64(data).strip()
     51 

.../neuroglancer/volume.pyc in get_encoded_subvolume(self, data_format, start, end, scale_key)
    161         if downsample_factor != (1, 1, 1):
    162             if self.volume_type == 'image':
--> 163                 subvol = downsample.downsample_with_averaging(subvol, full_downsample_factor)
    164             else:
    165                 subvol = downsample.downsample_with_striding(subvol, full_downsample_factor)

.../neuroglancer/downsample.pyc in downsample_with_averaging(array, factor)
     26     factor = tuple(factor)
     27     output_shape = tuple(math.ceil(s / f) for s, f in zip(array.shape, factor))
---> 28     temp = np.zeros(output_shape, float)
     29     counts = np.zeros(output_shape, np.int)
     30     for offset in np.ndindex(factor):

TypeError: 'float' object cannot be interpreted as an index
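A fix consistent with the traceback (under Python 2, math.ceil returns a float, which NumPy rejects as an array dimension) is to coerce each dimension to int; a sketch, with an illustrative function name:

```python
import math

import numpy as np

def output_shape_int(shape, factor):
    # Under Python 2, math.ceil returns a float, and NumPy rejects float
    # dimensions; coercing each entry to int fixes the reported TypeError.
    return tuple(int(math.ceil(s / float(f))) for s, f in zip(shape, factor))

temp = np.zeros(output_shape_int((5, 5, 3), (2, 2, 1)), float)
print(temp.shape)  # (3, 3, 3)
```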

Update uint64_t documentation

Looks like the image_layer_rendering.md doc needs to be updated to reflect the change in uint64_t from

struct uint64_t {
  vec4 low, high;
};

to

struct uint64_t {
  highp uvec2 value;
};

Other uintX_t values/definitions on that page might need updating as well (haven't tried them yet).

Incorrect handling of precomputed subvolume coordinates

I'm using cloud-volume ( https://github.com/seung-lab/cloud-volume ) to create precomputed segmentation volumes for use in Neuroglancer (from point predictions generated by google-ffn), and as far as I can read the specs for precomputed volumes here:
https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed

Neuroglancer appears to be mis-parsing the volume_size field in precomputed JSON, turning it directly into end coordinates for the segmentation volume rather than adding its values to voxel_offset to obtain the true end coordinates.

(I'm spotting this with the version running on neuroglancer-demo.appspot.com)

Here is a precomputed data layer demonstrating the issue:
https://neuroglancer-demo.appspot.com/#!{'layers':{'volumetric-1521214834':{'tab':'annotations'_'selectedAnnotation':'data-bounds'_'source':'precomputed://https://s3.us-east-2.amazonaws.com/megaphragma-neuroglancer/waspem/volumetric-1521214834'_'type':'segmentation'}}_'navigation':{'pose':{'position':{'voxelSize':[8_8_8]_'voxelCoordinates':[3110.885009765625_7685.583984375_3677.552001953125]}}_'zoomFactor':10.798870460608036}_'perspectiveOrientation':[-0.07376252114772797_-0.7471214532852173_0.025685368105769157_0.6600825190544128]_'perspectiveZoom':1669.033507744755_'showSlices':false}

As you can see from the info file here:
https://s3.us-east-2.amazonaws.com/megaphragma-neuroglancer/waspem/volumetric-1521214834/info

The size is 250x250x250, but the bounding box is not cubic and is far larger than it should be, stretching to the absolute position 250,250,250 rather than to +250,+250,+250 relative to the voxel_offset coordinates. If you left-click on the volume and go to the annotations tab, you'll see that the data bounds match this interpretation. (The overlay displays fine, though.)

For the sake of experimentation, I have generated a layer that plays nice with this bug by putting the end coordinates of the cube, rather than its size, in the size field of its JSON file:
https://neuroglancer-demo.appspot.com/#!{'layers':{'volumetric-1521215128':{'tab':'annotations'_'source':'precomputed://https://s3.us-east-2.amazonaws.com/megaphragma-neuroglancer/waspem/volumetric-1521215128'_'type':'segmentation'}}_'navigation':{'pose':{'position':{'voxelSize':[8_8_8]_'voxelCoordinates':[3110.885009765625_7685.583984375_3677.552001953125]}}_'zoomFactor':10.798870460608036}_'perspectiveOrientation':[-0.07376252114772797_-0.7471214532852173_0.025685368105769157_0.6600825190544128]_'perspectiveZoom':1669.033507744755_'showSlices':false_'selectedLayer':{'layer':'volumetric-1521215128'_'visible':true}}

and

https://s3.us-east-2.amazonaws.com/megaphragma-neuroglancer/waspem/volumetric-1521215128/info

This displays as cubic and has the right annotation values, but it's not following the spec.
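For reference, the spec's intended interpretation can be sketched as follows (using the size and voxel_offset fields of a scale entry from the info file; precomputed_bounds is an illustrative helper, not Neuroglancer code):

```python
def precomputed_bounds(scale):
    # Per the precomputed spec, a scale's data occupies the half-open box
    # [voxel_offset, voxel_offset + size), not [0, size).
    offset = list(scale.get('voxel_offset', [0, 0, 0]))
    size = scale['size']
    upper = [o + s for o, s in zip(offset, size)]
    return offset, upper
```

For the 250x250x250 volume above, the upper bound should be voxel_offset + 250 on each axis, not the absolute position (250, 250, 250).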

"toggle-layout" not available for XZ, YZ views

(Feature request.)

The XY view can be maximized via the "toggle-layout" key (spacebar), but as far as I can tell there's no way to do the same for the XZ and YZ views (even if I click inside one of those views before hitting the spacebar). Providing "toggle-layout" for all three orthogonal views would be a nice feature.

cc @alexwweston

having trouble compiling for python in windows

Using either the version on PyPI or the version on GitHub, I am having trouble installing/building on Windows. I know you only test on and have access to Mac and Linux, but I'm hoping you might still point me in the right direction for debugging this error.

The command line output is as follows when I run python setup.py develop:

running develop
running egg_info
writing neuroglancer.egg-info\PKG-INFO
writing dependency_links to neuroglancer.egg-info\dependency_links.txt
writing requirements to neuroglancer.egg-info\requires.txt
writing top-level names to neuroglancer.egg-info\top_level.txt
reading manifest file 'neuroglancer.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'neuroglancer.egg-info\SOURCES.txt'
running build_ext
building 'neuroglancer._neuroglancer' extension
C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\ben\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\include -I.\ext/third_party/openmesh/OpenMesh/src -IC:\Users\ben\AppData\Local\Programs\Python\Python36\include -IC:\Users\ben\AppData\Local\Programs\Python\Python36\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Tools\MSVC\14.15.26726\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\cppwinrt" /EHsc /Tp.\ext/src\_neuroglancer.cc /Fobuild\temp.win-amd64-3.6\Release\.\ext/src\_neuroglancer.obj -std=c++11 -fvisibility=hidden -O3
cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
cl : Command line warning D9002 : ignoring unknown option '-fvisibility=hidden'
cl : Command line warning D9002 : ignoring unknown option '-O3'
_neuroglancer.cc
.\ext/src\_neuroglancer.cc(217): error C2491: 'neuroglancer::PyInit__neuroglancer': definition of dllimport function not allowed
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Tools\\MSVC\\14.15.26726\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2

The Microsoft documentation for this error suggests removing the offending code (__declspec(dllimport)), but that is not present on the line the error points to; it is present in the OpenMesh extension file neuroglancer\python\ext\third_party\openmesh\OpenMesh\src\OpenMesh\Core\System\OpenMeshDLLMacros.hh.
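The D9002 warnings are a separate, fixable symptom: GCC-style flags are being passed to cl.exe, which does not understand them. A hypothetical sketch of selecting flags per compiler in setup.py (BuildExt and flags_for are illustrative names, not the project's actual build code; the C2491 export-macro error would still need its own fix):

```python
from setuptools.command.build_ext import build_ext

GCC_FLAGS = ['-std=c++11', '-fvisibility=hidden', '-O3']
MSVC_FLAGS = ['/EHsc', '/O2']

def flags_for(compiler_type):
    # distutils reports 'msvc' on Windows with cl.exe; 'unix', 'mingw32', ... otherwise
    return MSVC_FLAGS if compiler_type == 'msvc' else GCC_FLAGS

class BuildExt(build_ext):
    def build_extensions(self):
        # apply compiler-appropriate flags to every extension before building
        args = flags_for(self.compiler.compiler_type)
        for ext in self.extensions:
            ext.extra_compile_args = args
        build_ext.build_extensions(self)
```

Passing cmdclass={'build_ext': BuildExt} to setup() would then apply the right flags on each platform.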

Automatic mesh generator: can't figure out how to run neuroglancer

I have tried a few examples so far but couldn't make them work for my case:

  1. Grayscale raw image:
raw = np.fromfile('raw.bin',dtype=np.uint16).reshape([1200,880,930])
  2. Segmented companion:
segs = np.fromfile('segs.bin',dtype=np.uint32).reshape(raw.shape)

But I don't know where to start in order to make Neuroglancer work in my case.

Could anyone please help me?

Thanks in Advance,
Anar.

"keyhole" effect on shear transformations

When volumes are rotated or sheared with a transformation that takes them off axis, you get a keyhole effect: you can only see a slice of the data, and the visible extent of that slice shrinks as you zoom in and grows as you zoom out. For example, here is the demo data sheared 45 degrees:

https://neuroglancer-demo.appspot.com/#!%7B%22layers%22:%7B%22corrected-image%22:%7B%22tab%22:%22transform%22%2C%22transform%22:%5B%5B1%2C0%2C1%2C0%5D%2C%5B0%2C1%2C0%2C0%5D%2C%5B0%2C0%2C1%2C0%5D%2C%5B0%2C0%2C0%2C1%5D%5D%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected%22%2C%22type%22:%22image%22%7D%7D%2C%22navigation%22:%7B%22pose%22:%7B%22position%22:%7B%22voxelSize%22:%5B6%2C6%2C30%5D%2C%22voxelCoordinates%22:%5B12598.5732421875%2C6671.513671875%2C840.3251342773438%5D%7D%7D%2C%22zoomFactor%22:269.63624668241846%7D%2C%22perspectiveOrientation%22:%5B0.0248279869556427%2C-0.8274109959602356%2C-0.5594022870063782%2C-0.04293772205710411%5D%2C%22perspectiveZoom%22:1789.3699851865956%2C%22selectedLayer%22:%7B%22layer%22:%22corrected-image%22%2C%22visible%22:true%7D%7D

This would be a useful way to visualize data acquired on a light-sheet microscope with objectives at 45 degrees relative to stage movement, without having to resample the data, but the keyhole effect makes it somewhat unusable. Is this easy to fix, or are there aspects of data loading or GPUs that make it hard/impossible to fix?

Suppressing xz and yz views

It would be great to have the option of suppressing the XZ and YZ views, i.e. showing only XY and 3D. Showing three orthogonal views is good for block-face data, but for serial-section EM data it's a waste of screen space.
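As a possible workaround, newer Neuroglancer builds expose the panel arrangement through a layout field in the JSON state (this assumes a version with that support; the exact set of accepted values may differ). A state fragment showing only the XY cross section plus the 3-D view would look like:

```json
{
  "layout": "xy-3d"
}
```

Other values such as "xy", "4panel", and "3d" select the other arrangements.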

UnboundLocalError: local variable 'layer' referenced before assignment

Running the examples/example.py code yields the following stack trace:

/home/leek/projects/google/neuroglancer/python/examples/example.py in <module>()
     30                toNormalized(getDataValue(2))));
     31 }
---> 32 """)
     33   s.append_layer(name='b',
     34                  layer=neuroglancer.LocalVolume(

/home/leek/projects/google/neuroglancer/python/neuroglancer/viewer_state.py in append_layer(self, *args, **kwargs)
    348 
    349     def append_layer(self, *args, **kwargs):
--> 350         self.layers.append(ManagedLayer(*args, **kwargs))

/home/leek/projects/google/neuroglancer/python/neuroglancer/viewer_state.py in append(self, *args, **kwargs)
    289         else:
    290             layer = ManagedLayer(*args, **kwargs)
--> 291         self._layers.append(layer)
    292 
    293     def extend(self, elements):

UnboundLocalError: local variable 'layer' referenced before assignment

The problem seems to be here: https://github.com/google/neuroglancer/blob/master/python/neuroglancer/viewer_state.py#L288
The line should read layer = args[0].
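A minimal sketch of the suggested fix (ManagedLayer here is a stand-in class so the snippet is self-contained; the real logic lives in viewer_state.py):

```python
class ManagedLayer(object):
    # stand-in for neuroglancer.viewer_state.ManagedLayer
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

class Layers(object):
    def __init__(self):
        self._layers = []

    def append(self, *args, **kwargs):
        if len(args) == 1 and not kwargs and isinstance(args[0], ManagedLayer):
            # this assignment was missing, leaving `layer` unbound on this branch
            layer = args[0]
        else:
            layer = ManagedLayer(*args, **kwargs)
        self._layers.append(layer)
```

With the assignment restored, appending an existing ManagedLayer no longer raises UnboundLocalError.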

Python 3 Support

The Neuroglancer Python integration doesn't seem to work properly with Python 3. For example, example.py produces URLs that lead to a 404.

Is there any support planned for this?

render data access from neuroglancer

I want to use Neuroglancer to visualize a render project (https://github.com/saalfeldlab/render) that is stored locally on a hard drive.
I have built Neuroglancer successfully, but I do not know what link I should provide to access the project. I have tried these links unsuccessfully:

- http://localhost:8080/render-ws/v1/owner/tahar/project/LMEMproject/stack/v1_align_tps
- localhost:8080/render-ws/v1/owner/tahar/project/LMEMproject/stack/v1_align_tps
- render-ws/v1/owner/tahar/project/LMEMproject/stack/v1_align_tps

Before I take more time trying to decipher the regular expression it is looking for (https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/render/frontend.ts#L321), could someone maybe provide an example of a link used to access a local render project?
Thanks a lot

Failed at the [email protected] dev-server script.

I encountered this problem when I used the command

npm run dev-server

And the following is the output on the terminal:

> [email protected] dev-server /home/aries/NG/neuroglancer
> webpack-dev-server --config ./config/webpack.config.js

Skipping "*" -> ["*"]
{ neuroglancer: '/home/aries/NG/neuroglancer/src/neuroglancer' }
(node:16315) DeprecationWarning: Tapable.plugin is deprecated. Use new API on `.hooks` instead
ℹ 「wds」: Project is running at http://localhost:8080/
ℹ 「wds」: webpack output is served from /
ℹ 「wds」: Content not from webpack is served from /home/aries/NG/neuroglancer
(node:16315) DeprecationWarning: Resolver: The callback argument was splitted into resolveContext and callback.
(node:16315) DeprecationWarning: Resolver#doResolve: The type arguments (string) is now a hook argument (Hook). Pass a reference to the hook instead.


#
# Fatal error in , line 0
# Check failed: U_SUCCESS(status).
#
#
#
#FailureMessage Object: 0x7ffe98157e00Illegal instruction (core dumped)
npm ERR! code ELIFECYCLE
npm ERR! errno 132
npm ERR! [email protected] dev-server: `webpack-dev-server --config ./config/webpack.config.js`
npm ERR! Exit status 132
npm ERR! 
npm ERR! Failed at the [email protected] dev-server script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/aries/.npm/_logs/2019-07-09T08_29_24_745Z-debug.log

I do not know why this error happens.
At first there was a problem saying that ts-loader may not be compatible with the [email protected], so I installed another version of ts-loader. The following is my current package.json:

{
  "name": "neuroglancer",
  "description": "Visualization tool for 3-D volumetric data.",
  "license": "Apache-2.0",
  "version": "0.0.0-beta.0",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/google/neuroglancer.git"
  },
  "engines": {
    "node": ">=8"
  },
  "scripts": {
    "generate-code": "node ./config/generate_code.js",
    "build-min": "webpack --config ./config/webpack.config.js --env=min",
    "build": "webpack --config ./config/webpack.config.js --env=dev",
    "build-python": "webpack --config ./config/webpack.config.js --env=python-dev",
    "build-python-min": "webpack --config ./config/webpack.config.js --env=python-min",
    "build:watch": "webpack --config ./config/webpack.config.js --watch",
    "dev-server": "webpack-dev-server --config ./config/webpack.config.js",
    "dev-server-python": "webpack-dev-server --config ./config/webpack.config.js --env=python-dev",
    "test": "karma start ./config/karma.conf.js --single-run",
    "test:watch": "karma start ./config/karma.conf.js --no-single-run",
    "benchmark": "karma start ./config/karma.benchmark.js --single-run",
    "benchmark:watch": "karma start ./config/karma.benchmark.js --no-single-run",
    "gulp": "gulp"
  },
  "devDependencies": {
    "@types/jasmine": "^3.3.12",
    "benchmark": "^2.1.4",
    "clang-format": "^1.2.4",
    "copy-webpack-plugin": "^5.0.2",
    "extract-text-webpack-plugin": "^4.0.0-beta.0",
    "gulp": "^4.0.0",
    "gulp-clang-format": "^1.0.27",
    "gulp-tslint": "^8.1.4",
    "jasmine-core": "^3.4.0",
    "karma": "^4.1.0",
    "karma-benchmark": "^1.0.1",
    "karma-benchmark-reporter": "^0.1.1",
    "karma-chrome-launcher": "^2.2.0",
    "karma-coverage": "^1.1.2",
    "karma-jasmine": "^2.0.1",
    "karma-mocha-reporter": "^2.2.5",
    "karma-sourcemap-loader": "^0.3.7",
    "karma-webpack": "^4.0.2",
    "minimist": "^1.2.0",
    "nunjucks": "^3.2.0",
    "tslint": "^5.15.0",
    "tslint-eslint-rules": "^5.4.0",
    "webpack-cli": "^3.3.0",
    "webpack-dev-server": ">=3.2.1"
  },
  "dependencies": {
    "@tensorflow/tfjs": "0.14.2",
    "@types/codemirror": "0.0.72",
    "@types/gl-matrix": "^2.4.5",
    "@types/lodash": "^4.14.123",
    "@types/pako": "^1.0.1",
    "@types/sockjs-client": "^1.1.1",
    "@types/text-encoding": "0.0.35",
    "@types/webgl2": "^0.0.4",
    "@types/webpack-env": "^1.13.9",
    "acorn": "^6.1.1",
    "codemirror": "^5.45.0",
    "css-loader": "^2.1.1",
    "gl-matrix": "^3.0.0",
    "glsl-editor": "^1.0.0",
    "glsl-strip-comments-loader": "^1.1.0",
    "html-webpack-plugin": "^3.2.0",
    "lodash": "^4.17.11",
    "mini-css-extract-plugin": "^0.7.0",
    "nifti-reader-js": "^0.5.4",
    "pako": "^1.0.10",
    "raw-loader": "^2.0.0",
    "resize-observer-polyfill": "^1.5.1",
    "sockjs-client": "^1.3.0",
    "style-loader": "^0.23.1",
    "ts-loader": "^3.5.0",
    "typescript": "^3.4.5",
    "url-loader": "^1.1.2",
    "webpack": "^4.35.0"
  }
}

Typescript 2.4 support

TS 2.4 provides stricter generic type checking, and it looks like it finds a few errors in types that weren't caught previously, particularly in the LinkedListOperations signatures (<T> vs <T extends Node<T>>).

./third_party/neuroglancer/src/neuroglancer/chunk_manager/backend.ts:341:60 
TS2345: Argument of type 'typeof default' is not assignable to parameter of type 'LinkedListOperations'.
      Types of property 'insertAfter' are incompatible.
        Type '<T extends Node<T>>(head: T, x: T) => void' is not assignable to type '<T>(head: T, x: T) => void'.
          Types of parameters 'head' and 'head' are incompatible.
            Type 'T' is not assignable to type 'Node<T>'.

./third_party/neuroglancer/src/neuroglancer/chunk_manager/backend.ts:345:60
TS2345: Argument of type 'typeof default' is not assignable to parameter of type 'LinkedListOperations'.
      Types of property 'insertAfter' are incompatible.
        Type '<T extends Node<T>>(head: T, x: T) => void' is not assignable to type '<T>(head: T, x: T) => void'.
          Types of parameters 'head' and 'head' are incompatible.
            Type 'T' is not assignable to type 'Node<T>'.

./third_party/neuroglancer/src/neuroglancer/datasource/brainmaps/frontend.ts:360:13
TS2322: Type 'Promise<string[] | undefined>' is not assignable to type 'Promise<string[]>'.
      Type 'string[] | undefined' is not assignable to type 'string[]'.
        Type 'undefined' is not assignable to type 'string[]'.

./third_party/neuroglancer/src/neuroglancer/layer_dialog.ts:220:47
TS2345: Argument of type 'CancellationTokenSource' is not assignable to parameter of type 'GetVolumeOptions | undefined'.
      Type 'CancellationTokenSource' has no properties in common with type 'GetVolumeOptions'.

./third_party/neuroglancer/src/neuroglancer/layer_dialog.ts:221:15
TS7006: Parameter 'source' implicitly has an 'any' type.

Error in worker when restoreState is invoked while state already exists

Spinning up a local instance and going to this link works. However, changing the state without a full refresh (e.g. removing the least significant digit of zoomFactor) causes the following error:

backend.ts:26 Uncaught TypeError: Cannot read property 'visibilityCount' of undefined
    at RPC.<anonymous> (backend.ts:26)
    at RPC.target.onmessage.e (worker_rpc.ts:104)

It appears to be coming from this line in the shared visibility code. I discovered this while trying to call Viewer.state.restoreState with a modified state.

Possibly related: inspecting RPC.objects after restoreState shows that there are a number of duplicate entries for SharedUint64DisjointSets and Uint64Sets. Not sure if that's intended.

Neuroglancer on Windows 10

I'm trying to build and run Neuroglancer on Windows 10 64-bit, but was not able to build it.
I tried following the instructions with Node.js versions 5.9.0, 6.1.0, 7.2.0, and 7.3.0.
With all versions, npm complains that the fsevents module could not be installed:

notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

I assume that this is important, because when trying to run the dev server it complains about two things:

  1. ts-loader: \neuroglancer\config\[email protected]. This version may or may not be compatible with ts-loader.
  2. ERROR in error TS18002: The 'files' list in config file 'tsconfig.json' is empty.

What files need to be specified in the tsconfig? And is it even possible to get Neuroglancer running on Windows?

Strange behavior with neuroglancer.set_static_content_source

Hi,

We ran into an issue using neuroglancer.set_static_content_source where the served Neuroglancer does not work correctly (auth errors with brainmaps, among other oddities).

Our intent was to use neuroglancer-python without having to locally compile neuroglancer.

This works:
python -i python/examples/example.py
This does not (state not loaded, popup to add data source):
python -i python/examples/example.py --static-content-url https://neuroglancer-demo.appspot.com/

@schlegelp also notes that using npm run dev-server with neuroglancer.set_static_content_source(url='http://localhost:8080') also starts in a broken state.

I have poked around a bit with the Chrome developer tools but did not see anything obvious.

Thanks,
Eric

neuroglancer viewer slice view breaks on Firefox 67 (current) and 68 (beta)

Dear all, the newest Firefox update (Windows, macOS, and Ubuntu 16 + 18) seems to have broken the slice view in Neuroglancer. The console logs errors as follows:

Error: WebGL warning: drawArrays: Program has no frag output at location 1, but destination draw buffer has an attached image.

at this line:

gl.drawArrays(gl.TRIANGLE_FAN, 0, 4);

This seems to be the line that throws the error. Edit: it appears to have been introduced by this commit.

I apologise that my WebGL foo is not strong enough to provide a fix; I can thus only offer some background research.

small volumes with render source break

We tried to visualize a small stack that was 512x512x94 in render and ran into an error:

renderlayer.ts:53 Uncaught (in promise) TypeError: Cannot read property '0' of undefined
    at new RenderLayer (renderlayer.ts:53)
    at new RenderLayer (renderlayer.ts:400)
    at new ImageRenderLayer (image_renderlayer.ts:52)
    at new ImageRenderLayer (image_renderlayer.ts:43)
    at ImageUserLayer.Object.then.volume (image_user_layer.ts:87)
    at <anonymous>

This stopped breaking when we increased the stack to 1K x 1K.

Here is the render stack metadata for the stack we were visualizing:

{
  "stackId" : {
    "owner" : "6_ribbon_experiments",
    "project" : "M335503_Ai139_smallvol",
    "stack" : "STEP_1_GFP"
  },
  "state" : "COMPLETE",
  "lastModifiedTimestamp" : "2018-01-18T22:53:04.794Z",
  "currentVersionNumber" : 11,
  "currentVersion" : {
    "createTimestamp" : "2022-04-18T22:52:57.000Z",
    "cycleNumber" : 100,
    "cycleStepNumber" : 1,
    "stackResolutionX" : 1.0,
    "stackResolutionY" : 1.0,
    "stackResolutionZ" : 1.0
  },
  "stats" : {
    "stackBounds" : {
      "minX" : 0.0,
      "minY" : 0.0,
      "minZ" : 0.0,
      "maxX" : 511.0,
      "maxY" : 511.0,
      "maxZ" : 94.0
    },
    "sectionCount" : 95,
    "nonIntegralSectionCount" : 0,
    "tileCount" : 95,
    "transformCount" : 1,
    "minTileWidth" : 511,
    "maxTileWidth" : 511,
    "minTileHeight" : 511,
    "maxTileHeight" : 511
  }
}
