electron-sight's People

Contributors

abhidg, iamleeg, jabull1066, martinjrobins

electron-sight's Issues

use redux for state

The app currently needs to store:

  • A classifier in the process of being built
  • A set of already built classifiers
  • An annotation in the process of being built
  • A set of annotations already built
  • A set of annotation+classifier which contain the results of a classifier used on a given annotated area

Where should this be stored?

  1. In Menu, and passed down to other react classes via props?
  2. Using something like redux?

I'm leaning towards 2 so I can learn how Redux works.
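A minimal sketch of what a Redux-style reducer over the state listed above could look like (plain JS with no Redux dependency; the state shape and action names are hypothetical, not taken from the app):

```javascript
// Hypothetical state shape for the five pieces of state listed above.
const initialState = {
  currentClassifier: null,   // classifier in the process of being built
  classifiers: [],           // set of already-built classifiers
  currentAnnotation: null,   // annotation in the process of being built
  annotations: [],           // set of already-built annotations
  results: []                // annotation+classifier result pairs
};

// A Redux-style reducer: pure function (state, action) -> new state.
function reducer(state = initialState, action) {
  switch (action.type) {
    case 'UPDATE_CURRENT_ANNOTATION':
      return { ...state, currentAnnotation: action.annotation };
    case 'BUILD_CLASSIFIER':
      // move the in-progress classifier into the built set
      return {
        ...state,
        classifiers: [...state.classifiers, state.currentClassifier],
        currentClassifier: null
      };
    default:
      return state;
  }
}
```

With this shape, option 1 (props drilling from Menu) and option 2 (Redux) would share the same reducer logic; only the wiring differs.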

persistent classifiers / annotation

  • annotations saved on a per-image basis automatically, using same filename as image
  • classifiers saved on a user-basis automatically
  • default classifier included
  • add ability to delete annotations + classifiers

move app logic to app.jsx

At the moment most of the UI logic is in the Menu component. It should live in App.jsx, so that Menu.jsx can be stateless.
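Once App owns the state, Menu reduces to a pure function of its props. A sketch of the idea in plain JS, without JSX (the prop names and callback are hypothetical):

```javascript
// Stateless Menu logic: App passes data and callbacks down as props,
// Menu only maps them to what it renders.
function menuItems({ classifiers, onSelectClassifier }) {
  return classifiers.map(c => ({
    label: c.name,
    onClick: () => onSelectClassifier(c.name)
  }));
}
```

Because the function holds no state of its own, it is trivially testable and re-renders purely from what App hands it.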

test failures due to src/ApplicationState.jsx not found

npm test results in failure:

> [email protected] test /home/mrobins/git/Electron-Sight
> mocha --require babel-core/register

/home/mrobins/git/Electron-Sight/node_modules/mocha/node_modules/yargs/yargs.js:1163
      else throw err
           ^

Error: Cannot find module '../src/ApplicationState'
Require stack:
- /home/mrobins/git/Electron-Sight/test/test_application_state.js
- /home/mrobins/git/Electron-Sight/node_modules/mocha/lib/mocha.js
- /home/mrobins/git/Electron-Sight/node_modules/mocha/lib/cli/one-and-dones.js
- /home/mrobins/git/Electron-Sight/node_modules/mocha/lib/cli/options.js
- /home/mrobins/git/Electron-Sight/node_modules/mocha/bin/mocha

javascript error with babel

After merging in #35, got the following error:

A JavaScript error occurred in the main process
Uncaught Exception:
ReferenceError: regeneratorRuntime is not defined
    at /home/mrobins/git/Electron-Sight/src/index.js:46:46
    at Object.<anonymous> (/home/mrobins/git/Electron-Sight/src/index.js:102:2)
    at Object.<anonymous> (/home/mrobins/git/Electron-Sight/src/index.js:128:3)
    at Module._compile (internal/modules/cjs/loader.js:693:30)
    at Object.require.extensions.(anonymous function) [as .js] (/home/mrobins/git/Electron-Sight/node_modules/electron-compile/lib/require-hook.js:77:14)
    at Module.load (internal/modules/cjs/loader.js:602:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:541:12)
    at Function.Module._load (internal/modules/cjs/loader.js:533:3)
    at Module.require (internal/modules/cjs/loader.js:640:17)
    at init (/home/mrobins/git/Electron-Sight/node_modules/electron-compile/lib/config-parser.js:294:16)

Perhaps related to babel/babel#8829
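`regeneratorRuntime is not defined` usually means async/await was transpiled by Babel without the regenerator runtime being loaded. With the Babel 6 toolchain implied by `babel-core` above, one common fix (a sketch, not verified against this repo) is the transform-runtime plugin in `.babelrc`:

```json
{
  "plugins": ["transform-runtime"]
}
```

after `npm install --save babel-runtime babel-plugin-transform-runtime`; alternatively, `require('babel-polyfill')` once at the top of the entry point has the same effect globally.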

create classifier functionality

Potential workflow:

  1. User clicks on "Classifier" button
  2. Application automatically zooms to the zoom level set/chosen (what is this? maximum zoom?)
  3. For each user click:
    • If the tile has not been SLICed -> SLIC the tile, add it to the list of SLIC tiles, and set all of its superpixels to "not chosen"
    • Set the clicked superpixel as "chosen"
  4. User clicks "Finish Classifier" button
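Step 3 above could be sketched like this (plain JS; `slic` is a hypothetical stand-in for the real segmentation call, and the tile/superpixel representation is invented for illustration):

```javascript
// Lazily SLIC tiles on first click and track chosen superpixels per tile.
function makeClassifierState(slic) {
  const tiles = new Map(); // tileId -> { superpixels, chosen: Set }
  return {
    click(tileId, pixel) {
      if (!tiles.has(tileId)) {
        // first click on this tile: segment it; nothing is chosen yet
        tiles.set(tileId, { superpixels: slic(tileId), chosen: new Set() });
      }
      const tile = tiles.get(tileId);
      // look up which superpixel was clicked and mark it chosen
      tile.chosen.add(tile.superpixels(pixel));
    },
    chosen(tileId) {
      return tiles.has(tileId) ? [...tiles.get(tileId).chosen] : [];
    }
  };
}
```

The Map doubles as the "list of SLIC tiles", so a tile is only ever segmented once no matter how many clicks land on it.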

create plotting tab

For the demo in January, Joshua recommended a separate plotting tab that would display summary statistics of classified regions. Potential plots include:

use paperjs for annotations

from meeting 13th Dec - @martinjrobins @jbull

Clinicians (Philip) are very concerned with ease of creating annotations -

e.g. freehand drawing of a region, which is then simplified to a polygon, with the ability to add/edit points in the polygon afterwards

paperjs has functionality for this and could be integrated with the app
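The freehand-to-polygon step can be sketched without paper.js using Ramer-Douglas-Peucker simplification (paper.js's `Path.simplify()` uses curve fitting rather than this exact algorithm; this plain-JS version is only an illustration of the idea):

```javascript
// Ramer-Douglas-Peucker: simplify a freehand point list to a polygon,
// keeping only points that deviate from the chord by more than `epsilon`.
function simplify(points, epsilon) {
  if (points.length < 3) return points.slice();
  const [x1, y1] = points[0];
  const [x2, y2] = points[points.length - 1];
  let maxDist = 0, index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const [x0, y0] = points[i];
    // perpendicular distance from points[i] to the first-last chord
    const d = Math.abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) /
              Math.hypot(y2 - y1, x2 - x1);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= epsilon) return [points[0], points[points.length - 1]];
  // keep the farthest point and recurse on both halves
  const left = simplify(points.slice(0, index + 1), epsilon);
  const right = simplify(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right);
}
```

Add/edit of polygon points afterwards would then operate on the simplified vertex list rather than the raw freehand trace.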

Convert slides to DeepZoom

OpenSeadragon cannot read whole-slide image formats (from Aperio, Hamamatsu and so on), but it can read DeepZoom, so whole-slide images must first be converted to DeepZoom.

The main tool for that is libvips, which relies on openslide to read whole-slide images. libvips defines a function vips_dzsave:

int
vips_dzsave (VipsImage *in,
             const char *name,
             ...);

It can also be used from the command-line vips utility:

vips dzsave CMU-3.ndpi CMU-3.dzi

How can libvips and openslide be used from JavaScript?

Someone is asking for node.js bindings for openslide on the openslide repo (openslide/openslide#204).

The main libvips developer John Cupitt suggests using sharp

sharp

sharp('input.tiff')
 .png()
 .tile({
   size: 512
 })
 .toFile('output.dz', function(err, info) {
   // output.dzi is the Deep Zoom XML definition
   // output_files contains 512x512 tiles grouped by zoom level
 });

  • since early versions, sharp has supported reading whole-slide images (using openslide) as well as writing DeepZoom images (lovell/sharp#146).

The documentation used to include instructions for installing sharp with openslide support, but now there is no mention of openslide anymore...

I naively tried

sharp('CMU-3.ndpi')
  .png()
  .tile({
    size: 512
  })
  .toFile('output.dz', function(err, info) {
    // output.dzi is the Deep Zoom XML definition
    // output_files contains 512x512 tiles grouped by zoom level
  });

but it doesn't produce any output.

In May 2017 jcupitt writes:

@xmkevin, sharp can read all the formats that openslide can read,
you just need to enable openslide support. There are some notes in the README.

Somehow one has to /enable openslide support/, and I'm not sure what that means. It's potentially worth figuring out, because sharp coupled with openslide seems to do precisely what we are looking for.

node-vips

Sept. 2017: Experimental node binding for libvips
(https://github.com/libvips/node-vips)

The GitHub repo provides an example reading a whole-slide image with openslide:

var vips = require('vips');

// get a rect from a level
// autocrop trims off pixels outside the image bounds
var image = vips.Image.openslideload('somefile.svs', {level: 2, autocrop: true});
console.log('level size:', image.width, 'x', image.height);
// try 'vipsheader -a somefile.svs' at the command-line to see all the metadata
// fields you can get
console.log('associated images are:', image.get('slide-associated-images'));
// crop is left, top, width, height in pixels
// images are RGBA with premultiplication taken out
image.crop(100, 100, 1000, 1000).writeToFile('x.png');

// extract an associated image
image = vips.Image.openslideload('somefile.svs', {associated: 'label'});
image.writeToFile('label.png');

That also looks like exactly what we'd need.

  • question Why bother writing node bindings when sharp can do it? Isn't it redundant with sharp? How does sharp differ from libvips in terms of functionality?

!!! Had to update libvips to the latest version. It didn't work with the libvips-dev package available in the Ubuntu repo, so I installed it from source.

It is now possible to read a slide using vips from JavaScript:

const vips = require('vips');
var image = vips.Image.openslideload('CMU-3.ndpi')

  • ISSUE Cannot figure out how to call dzsave through the JavaScript bindings:

// None of these work
var test = vips.Image.dzsave('CMU-3.ndpi', 'test.dz')
image.dzsave('output.dz')
vips.Image.dzsave('CMU-3.dzi')

dzsave is only mentioned in lib/autogen.js

  vips.Image.prototype.dzsave = function (filename, options) {
    return vips.call('dzsave', this, filename, options);
  };

  vips.Image.prototype.dzsaveBuffer = function (options) {
    return vips.call('dzsave_buffer', this, options);
  };

add prediction capability

Suggested workflow:

  1. User clicks on New Classifier, clicks to get green (positive) and red (negative) examples. User enters a name for the classifier and clicks Build, which pops it in the Classifier card on the right.
  2. User can create multiple classifiers this way.
  3. User selects a classifier from the card. This automatically switches them to Annotations mode with a cross hair.
  4. Live update of predictions in annotations region.
  5. User can switch classifier at any time, which would redraw annotation region with predictions from selected classifier.
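Steps 3-5 above could be sketched as a single state transition (plain JS; `predict` is a hypothetical stand-in for running the classifier on an annotated region, and the state fields are invented for illustration):

```javascript
// Selecting a classifier switches to annotation mode and recomputes
// predictions for every annotation region, so switching = redraw.
function selectClassifier(state, name, predict) {
  const classifier = state.classifiers.find(c => c.name === name);
  return {
    ...state,
    mode: 'annotation',
    activeClassifier: name,
    // live-updated predictions, one per annotation region
    predictions: state.annotations.map(a => predict(classifier, a))
  };
}
```

Because the transition is pure, calling it again with a different classifier name is exactly the "switch classifier at any time" behaviour in step 5.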
