
web-vol-viewer


Summary

This module implements a React component for direct volume rendering in a web browser using the Three.js wrapper for WebGL 2. Its basic arguments are a 3D data volume in a JavaScript Uint8Array, and a transfer function (to map data values to colors and opacities) in a Three.js DataTexture. An additional, optional argument is a 3D surface in a Three.js Mesh, which is rendered opaquely within the volume with proper depth occlusion. The intended application is the matching of fluorescence light microscopy or "LM" data (the volume) with electron microscopy or "EM" data (the surface) in the NeuronBridge system, but the renderer should be able to handle other applications.

An example of the EM-LM matching application is the following image, using data from the FlyLight Generation 1 MCFO collection (citation: https://dx.doi.org/10.1016/j.celrep.2012.09.011). The magenta LM volume for sample R89H10 is converted from H5J format and the green EM data for body 358259842 is converted from SWC format.

The implementation uses ray casting in a GLSL fragment shader. The basic idea was described as early as 2009 in a blog post by Kyle Hayward. In 2014, Leandro Barbagallo demonstrated a WebGL 1 version. The implementation here is more closely related to a simpler approach from a blog post by Will Usher in 2019. This code extends that approach in various ways, adding support for non-cubical volumes, lighting using gradients in the volume as the surface normal, and the opaque surface with depth occlusion.

Usage

Stand-Alone Application

The simplest way to use this renderer is as part of a stand-alone application, launched as follows:

npm start

The application will then be available in a web browser at https://localhost:3000. It has the simple user interface shown in the images above:

  • a button at the top for choosing the volume file, in H5J format; pressing this button reveals a panel for either entering a URL or choosing a local file on the host
  • a button at the top for choosing the color of the rendered volume
  • a button at the top for choosing the surface file, in SWC format or OBJ format, from either a URL or a local file as with the volume
  • a button at the top for choosing the color of the rendered surface
  • a control at the bottom for choosing the "data peak", the 8-bit value below which opacity ($\alpha$) falls off as controlled by the "data $\gamma$" (gamma); see section on transfer functions
  • a control at the bottom for choosing the "data $\gamma$", which controls the rate of opacity ($\alpha$) falloff from the "data peak"; see section on transfer functions
  • a control at the bottom for choosing the "data $\alpha$ scale", which adjusts the opacity ($\alpha$) at each sample in the ray casting
  • a control at the bottom for choosing the spacing of samples in the ray casting, with a value larger than the default of 1 improving performance at the cost of quality
  • a control at the bottom for choosing the speedup (resolution reduction) during camera interaction, with a value larger than the default of 1 improving performance at the cost of quality
  • a control at the bottom for choosing the "final $\gamma$", which helps to bring out faint features in the data; see section on transfer functions
  • mouse and key bindings for camera orbiting, zooming and panning, from the three-orbit-unlimited-controls module
  • a spacebar key binding to toggle the surface off and on
  • an l key binding to toggle lighting off and on
  • an m key binding to toggle mirroring of the volume data along the x dimension

The user interface is implemented with only standard HTML and CSS to avoid package dependencies.

Reusable Component with User Interface

The H5j3dViewerWithBasicUI component makes the renderer and simple user interface available for use in other applications. First, install the NPM module:

npm install @janelia/web-vol-viewer

Then, if the other application is a React functional component, the code would be like the following:

import { H5j3dViewerWithBasicUI } from '@janelia/web-vol-viewer';

function App() {
  ...
  return (
    ...
    <H5j3dViewerWithBasicUI />
    ...
  );
}

The H5j3dViewerWithBasicUI component has no props because everything is set from its simple user interface.

If opening an H5J file with this component produces an error in the console, "SharedArrayBuffer is not defined", then the application may not have cross-origin isolation set up properly; see the section on H5J volume data.

Basic Reusable Component

Use the Vol3dViewer component to wrap the renderer in a different user interface, created with a toolkit like Ant Design or Material-UI. This component provides the renderer with no user interface beyond the mouse and key bindings (for camera control, etc). First, install the NPM module:

npm install @janelia/web-vol-viewer

Then, use it as in this example, with its required props:

import { Vol3dViewer } from '@janelia/web-vol-viewer';

function App() {
  ...
  return (
    ...
    <Vol3dViewer 
      volumeDataUint8={volumeDataUint8}
      volumeSize={volumeSize}
      voxelSize={voxelSize}
      transferFunctionTex={transferFunctionTex}
    />
    ...
  );
}

The required props are:

  • volumeDataUint8: a JavaScript Uint8Array representing the data volume, with one 8-bit value per voxel
  • volumeSize: an array, [x, y, z], giving the volume size in voxels
  • voxelSize: an array, [x, y, z], giving the dimensions of each voxel in some consistent units (e.g., microns)
  • transferFunctionTex: a Three.js DataTexture with width 256, height 1, and pixel format THREE.RGBAFormat. The texture's data is a Uint8Array of size 256 * 4 giving the color (with alpha) for each 8-bit data value. See src/TransferFunctions.js for an example, and the sketch after this list.
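
The following is a minimal sketch of how such a texture can be built with Three.js. It is not the module's actual src/TransferFunctions.js code; the function name makeLinearTransferTex and the simple linear alpha ramp are illustrative assumptions.

// Sketch: build a 256-entry RGBA transfer-function texture with a single hue
// whose alpha ramps linearly with the 8-bit data value.
import * as THREE from 'three';

function makeLinearTransferTex(colorStr = '#ff00ff') {
  const color = new THREE.Color(colorStr);
  const data = new Uint8Array(256 * 4);
  for (let i = 0; i < 256; i += 1) {
    data[4 * i + 0] = Math.round(color.r * 255); // red
    data[4 * i + 1] = Math.round(color.g * 255); // green
    data[4 * i + 2] = Math.round(color.b * 255); // blue
    data[4 * i + 3] = i;                         // alpha rises with the data value
  }
  const tex = new THREE.DataTexture(data, 256, 1, THREE.RGBAFormat);
  tex.needsUpdate = true;
  return tex;
}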

The component also supports additional optional props:

import { Vol3dViewer } from '@janelia/web-vol-viewer';

function App() {
  ...
  return (
    ...
    <Vol3dViewer 
      volumeDataUint8={volumeDataUint8}
      volumeSize={volumeSize}
      voxelSize={voxelSize}
      transferFunctionTex={transferFunctionTex}

      useVolumeMirrorX={useVolumeMirrorX}
      alphaScale={alphaScale}
      dtScale={dtScale}
      interactionSpeedup={interactionSpeedup}
      finalGamma={finalGamma}
      cameraPosition={cameraPosition}
      cameraTarget={cameraTarget}
      cameraUp={cameraUp}
      cameraFovDegrees={cameraFovDegrees}
      orbitZoomSpeed={orbitZoomSpeed}
      useLighting={useLighting}
      useSurface={useSurface}
      surfaceMesh={surfaceMesh}
      surfaceColor={surfaceColor}
      onCameraChange={onCameraChange}
      onWebGLRender={onWebGLRender}
    />
    ...
  );
}

These optional props are:

  • useVolumeMirrorX (default: false): controls whether to mirror the volume data along the x axis
  • alphaScale (default: 1): a lower value decreases the opacity ($\alpha$) at each sample when ray casting
  • dtScale (default: 1): a higher value increases performance at the cost of quality, by increasing the step size when ray casting (and thus decreasing the number of samples)
  • interactionSpeedup (default: 1): a higher value increases interactivity at the cost of quality, by reducing the rendering resolution during interactive camera manipulation; this setting should be needed only with weak graphics cards and large data sets
  • finalGamma (default: 4.5): a higher value brings out more of the faint details in the rendering; see the section on transfer functions
  • cameraPosition (default: [0, 0, -2]): the initial position of the camera, relative to the box representing the volume (which is centered at the origin, scaled so its longest dimension goes from -0.5 to 0.5)
  • cameraTarget (default: [0, 0, 0]): the initial point at which the camera is looking
  • cameraUp (default: [0, -1, 0]): the initial "up" direction for the camera
  • cameraFovDegrees (default: 45.0): the vertical field of view of the camera, in degrees
  • orbitZoomSpeed (default: 0.15): controls the speed with which the camera zooms on a mouse-wheel event
  • useLighting (default: true): controls whether the volume rendering uses lighting; when lighting is disabled, the result is like a maximum intensity projection
  • surfaceMesh (default: null): a Three.js Mesh to be rendered within the volume
  • useSurface (default: false): controls whether the mesh is visible or not
  • surfaceColor (default: '#00ff00'): the color of the mesh; note that there is no alpha because the mesh must be fully opaque
  • onCameraChange (default: null): a function of the form (event) => {} called each time the camera changes position. The event.target is the OrbitUnlimitedControls instance that controls the camera, and event.target.object is the Three.js PerspectiveCamera. A minimal example follows this list.
  • onWebGLRender (default: null): a function of the form () => {} called each time Three.js renders the 3D scene. The H5j3dViewerWithBasicUI component uses this callback to implement throttling of the spinners on the UI controls.
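
As a concrete illustration, here is a minimal sketch of an onCameraChange handler that records the camera pose. The handler itself is hypothetical, and the controls' target property is an assumption based on the Three.js OrbitControls API that OrbitUnlimitedControls resembles.

// Hypothetical handler: log the camera pose whenever the camera moves.
function onCameraChange(event) {
  // event.target is the OrbitUnlimitedControls instance driving the camera;
  // event.target.object is the Three.js PerspectiveCamera itself.
  const controls = event.target;
  const camera = controls.object;
  const pose = {
    position: camera.position.toArray(),
    up: camera.up.toArray(),
    target: controls.target.toArray(), // assumed to exist, as on OrbitControls
  };
  console.log('camera changed', pose);
}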

If opening an H5J file with this component produces an error in the console, "SharedArrayBuffer is not defined", then the application may not have cross-origin isolation set up properly; see the section on H5J volume data.

Transfer Functions

The transfer function determines the color and opacity for every 8-bit data value in the volume. It is implemented as a Three.js DataTexture with width 256, height 1, and pixel format THREE.RGBAFormat, meaning there is a 4-byte RGBA value for each 8-bit data value.

This module implements a transfer function that works well for fluorescence microscopy data, as described in the publication: Wan et al., "FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research", Proceedings of the 2012 IEEE Pacific Visualization Symposium, pp. 201–208 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3622106). The desktop application VVD Viewer also uses this transfer function.

The texture for this transfer function is returned by the following function from src/TransferFunctions.js:

makeFluoTransferTex(alpha0, peak, dataGamma, alpha1, colorStr)

The basic color for all data values is colorStr (e.g., '#ff00ff' for the most saturated magenta). The alpha (i.e., 0 for most transparent, 255 for most opaque) varies with the data value in a "tent" shape. Alpha rises from the alpha0 value at the lowest data value (0) up to the alpha value 255 at the peak data value, then falls back to the alpha1 value at the highest data value (255). (It is typical for alpha0 to be 0 and alpha1 to be 255, so the value may not really "fall" after the peak.) The shape of the rise and fall is controlled by the dataGamma applied with a power function: y ** (1.0 / dataGamma), where y is the data value normalized to be between 0 and 1. A dataGamma of 1 would cause a linear rise and fall, but it is typical to use a dataGamma less than 1 (e.g., 0.5) to de-emphasize but not completely eliminate low data values.
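
As a concrete illustration of the shape just described, here is a sketch (one plausible reading, not necessarily the module's exact code) of the tent-shaped alpha for a data value v in [0, 255], with y normalized separately on each side of the peak:

// Sketch of the "tent" alpha curve; assumes 0 < peak < 255.
function tentAlpha(v, alpha0, peak, dataGamma, alpha1) {
  if (v <= peak) {
    const y = v / peak;                 // rising side, normalized to [0, 1]
    return alpha0 + (255 - alpha0) * y ** (1.0 / dataGamma);
  }
  const y = (255 - v) / (255 - peak);   // falling side, normalized to [0, 1]
  return alpha1 + (255 - alpha1) * y ** (1.0 / dataGamma);
}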

After the transfer function is applied to colors that are accumulated during ray casting, a finalGamma is applied to the accumulated color: c ** (1.0 / finalGamma) (see the pow() call at the end of fragmentShaderVolume in src/Shaders.js, and note again that the color must be normalized to have components between 0 and 1). It may seem counterintuitive to reduce the visibility of low data values with the dataGamma only to increase their visibility with the finalGamma, but this approach works well for fluorescence microscopy, where significant features emit more light and have higher data values. The dataGamma prevents low data values from overwhelming small, significant features during the ray casting, while the finalGamma prevents the loss of areas with only low data values, which can be important for visual context.
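
In JavaScript terms, the final step amounts to a per-channel power function on the accumulated color; this small sketch is only illustrative, since the real correction is the pow() call in the GLSL fragment shader:

// Illustrative only: the input is an accumulated color with components in [0, 1].
function applyFinalGamma([r, g, b], finalGamma) {
  const g1 = 1.0 / finalGamma;
  return [r ** g1, g ** g1, b ** g1];
}

// With finalGamma = 4.5, a faint accumulated value of 0.1 becomes roughly 0.6.
console.log(applyFinalGamma([0.1, 0.1, 0.1], 4.5));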

This effect is visible in the following two images, of sample VT049371 and body 5901203987 from the FlyLight Generation 1 MCFO collection (citation: https://dx.doi.org/10.1016/j.celrep.2012.09.011). With the default finalGamma of 4.5, not much of the low-value context is visible:

Increasing the finalGamma to 6 reveals more context without overwhelming the significant features visible in the previous image:

The renderer can work with other transfer functions, but the module does not implement any others at this time.

H5J Volume Data

The renderer in this module should work with a variety of data representable in a Uint8Array, but the support code has been designed specifically to work with volumes in the H5J format. An H5J file is an HDF5 container with one or more channels of 3D volumetric data with 12-bit values compressed using H.265 (a.k.a. HEVC or High Efficiency Video Coding). H5J is a "visually lossless" format useful for fluorescence microscopy data.

H5J data is loaded into a Uint8Array using the web-h5j-loader module. This module includes several example data sets for testing: one includes a sphere, cones and a cylinder, while another is actual microscopy data from the FlyLight Generation 1 MCFO collection.

The web-h5j-loader module uses multi-threaded WebAssembly (wasm) code from the ffmpeg.wasm module. The threads depend on a SharedArrayBuffer to implement shared memory. Due to security risks, SharedArrayBuffer is disabled in most browsers unless it is used with cross-origin isolation. If the server is not cross-origin isolated, loading an H5J file will produce an exception:

SharedArrayBuffer is not defined

To enable cross-origin isolation, a site must be served with two additional headers:

Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
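
Any server that hosts the built site must send these headers. As one example (an assumption about deployment, not something provided by this module), a minimal Express static server could add them like this:

// Sketch: serve the production build with the two cross-origin isolation
// headers required for SharedArrayBuffer.
const express = require('express');

const app = express();
app.use((req, res, next) => {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  next();
});
app.use(express.static('build'));
app.listen(3000, () => console.log('serving on port 3000'));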

For a site created with create-react-app, a way to add these headers to the development server is to use the CRACO (Create React App Configuration Override) package. The stand-alone application included in this module is set up to use CRACO, and the approach can be copied in another application using one of the components from this module:

  1. Install CRACO:
    npm install @craco/craco --save
    
    (With newer versions of NPM, it may be necessary to append the --legacy-peer-deps argument to the end of the previous installation line.)
  2. Copy this module's craco.config.js file (as a sibling to the site's package.json file), which adds the two additional headers; a sketch of such a file appears after these steps.
  3. Change the react-scripts to craco in most entries of the scripts section of the other application's package.json file:
    ...
    "scripts": {
      "start": "craco start",
      "build": "craco build",
      "test": "craco test",
      "eject": "react-scripts eject"
    },
    ...
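
For reference, a craco.config.js that adds the two headers to the development server could look roughly like the following sketch; the file shipped with this module may differ in its details:

// craco.config.js (sketch): add the cross-origin isolation headers to the
// create-react-app development server via CRACO's devServer hook.
module.exports = {
  devServer: (devServerConfig) => {
    devServerConfig.headers = {
      ...devServerConfig.headers,
      'Cross-Origin-Opener-Policy': 'same-origin',
      'Cross-Origin-Embedder-Policy': 'require-corp',
    };
    return devServerConfig;
  },
};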
    

Tips

Performance can be poor in the Chrome browser when the Developer Tools panel is open. Closing that panel significantly improves performance.


web-vol-viewer's Issues

Option to show axis lines at camera center

It would be nice to have an option to show axis lines that indicate the camera's central point.

If I want to center the camera on a particular structure (so I can rotate around it), I can use ctrl+drag. But it's difficult to find the correct location if I can't tell where the camera's central point actually is.

Here's a screenshot from neuroglancer, showing axis lines.


Consistency with neuroglancer controls

Neuroglancer is becoming the de facto standard 3D viewer for EM neuron visualization. As such, it's likely that most web-vol-viewer users will already have muscle memory trained for neuroglancer's controls.

If the web-vol-viewer choices for keyboard shortcuts and mouse gestures were chosen somewhat arbitrarily, then I suggest changing them to match neuroglancer's controls.

Here's the current comparison between the two tools:

| Action | Neuroglancer 3D view | web-vol-viewer |
| --- | --- | --- |
| Pan | Shift+drag | Ctrl+drag |
| Pan | [arrows] | [arrows] |
| Rotate | drag | drag |
| Rotate | Shift+[arrows], r, e | |
| Zoom | Ctrl+scroll | scroll |
| Zoom | Ctrl+-, Ctrl+= | scroll |
| Dolly (forward/back) | scroll | |
| Dolly (forward/back) | ,, . | |
| Hide layer | [layer number] | spacebar |
| Maximize | spacebar | <N/A> |
| Jump-to-point | right-click | (not implemented yet) |
| Toggle axis lines | a | (not implemented yet) |
| Snap camera angle | z | (viewer can be reset via a button in the side panel) |

Jump-to-point

The user can recenter the camera using CTRL+drag, but that is inefficient relative to a jump-to-point feature such as the one neuroglancer has (via its right-click behavior). It makes a huge difference.

It probably isn't feasible to enable jumping to fuzzy objects in the light data, but jumping to a point on the EM skeleton would be very useful, and probably not too computationally intense.

I realize the implementation of such a feature is not likely to be trivial, but I think the payoff is quite large.

Overall alpha scaling factor

Hideo wants an overall scaling factor to be applied to every alpha accumulated along the rays into the volume. It would match the "Alpha" slider near the bottom of the VVD_Viewer user interface (highlighted in a screenshot attached to the issue).

Can NIFTI and DICOM files be used?

Hey Janelia,

I am new to the Three.js field, so may I know whether your project can load NIFTI and DICOM files for volume rendering?

Possible overflow

Hideo reports: "I found signal overflow (maybe…) when I set the Data Peak lower. As you can see in the attached image, black dots showed up. These black dots should be rendered as saturated (255) signal."

