
canvas-sandbox's Introduction

Some experiments with Dash components for image processing.

Getting started

Install the requirements:

pip install -r requirements.txt

then run one of the apps, for example

python app_bounding_box.py

Apps

app_bounding_box: draw bounding boxes on a series of images

app_contour: a magic-scissors application; draw a rough contour and make it tight by calling a scikit-image algorithm

canvas-sandbox's People

Contributors

emmanuelle, nicholas-esterer


canvas-sandbox's Issues

Wrapping up ML segmentation app

After #14 gets merged, here is what remains to be done to finish the app.

Modifications

Replace the classes dropdown with buttons, each of them having a background color corresponding to the shape color.

New components

  • dcc.Loading to show a spinner (or something equivalent) while the segmentation is being computed.
  • dcc.RangeSlider to select sigma_min and sigma_max.
  • Suggested during the graphing-libraries meetings: buttons to download the segmented image and the classifier.

CSS + beautifying the app

I'd suggest a classical design with two divs side by side, one for the graph and one for the other components. Open to suggestions!

Specifications for image filtering template app

Suggested by @nicolaskruchten

Goal: build a simple template app of an image processing pipeline which people can adapt to their own image processing algorithm. How can you make an image processing algorithm available to people who do not code?

Possible example: hysteresis thresholding (https://scikit-image.org/docs/stable/auto_examples/filters/plot_hysteresis.html#sphx-glr-auto-examples-filters-plot-hysteresis-py), or any other filter taking an image as input and returning an image as output. Maybe something working on RGB images would be better (hysteresis thresholding is for grayscale only).
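As a rough illustration of what such a filter does, here is a minimal, dependency-light sketch of hysteresis thresholding using only NumPy (the real app would more likely call skimage.filters.apply_hysteresis_threshold; the iterative 4-connected growth below is a simplification):

```python
import numpy as np

def hysteresis_threshold(image, low, high):
    """Keep 'weak' pixels (> low) only when connected to a 'strong' pixel (> high)."""
    weak = image > low
    strong = image > high  # strong pixels are a subset of weak pixels
    result = strong.copy()
    while True:
        grown = result.copy()
        # propagate the mask to 4-connected neighbours
        grown[1:, :] |= result[:-1, :]
        grown[:-1, :] |= result[1:, :]
        grown[:, 1:] |= result[:, :-1]
        grown[:, :-1] |= result[:, 1:]
        grown &= weak  # growth is only allowed inside the weak mask
        if np.array_equal(grown, result):
            return grown
        result = grown
```

The two thresholds low and high are exactly the kind of parameters the template app would expose as components.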

Components

  • two dcc.Graph components, one for the input image, the other for the output
  • an Input for the image URL, if people want to use an image other than the default one
  • components to change the default parameters of the function (e.g. a RangeSlider for hysteresis thresholding)
  • an "Apply" button to run the function
  • a block of Markdown text explaining that the app is a template and can be adapted.

Related ideas for future apps:

  • In a more complex version of this app, there would be several steps (for example a denoising function and a segmentation function) and it would be possible to visualize intermediate images as well.
  • In an even more complex app, we would use the inspect module to generate one dropdown for each subpackage of scikit-image, and selecting a value would generate some components for the parameters of the app. This will become easier when scikit-image introduces type hints.
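The inspect idea could be sketched as follows; params_for_ui and the hysteresis stub are hypothetical names, and real scikit-image functions would need extra mapping from defaults to component types:

```python
import inspect

def params_for_ui(func):
    """List (name, default) pairs for keyword parameters, one UI component each."""
    sig = inspect.signature(func)
    return [(name, p.default) for name, p in sig.parameters.items()
            if p.default is not inspect.Parameter.empty]

# hypothetical filter function standing in for a scikit-image one
def hysteresis(image, low=0.1, high=0.35):
    pass
```

Each (name, default) pair could then be turned into a slider or input, which is where type hints in scikit-image would help.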

app_bounding_box.py features discussion

Shape drawing features needed:

  • Fast drawing of many rectangles (including existing rectangles)
  • Fast deletion of rectangles
    • Simply hovering mouse inside rectangle should select it
    • Then you can press d maybe to delete it
    • For cases where rectangles overlap, maybe pressing c cycles through the ones under the mouse?
  • Easy modification of rectangle (resize and drag)
  • If hovering selects, then press, say, r to resize, m to move, or something like that
  • Maybe add undo? In case annotator accidentally modifies a rectangle

Additional Dash components:

  • dcc.Store for storing annotations or loading existing annotations
  • Load / Store from / to client's disk: JSON format?
  • Dropdown to select classes and modify newshape attributes
  • Next and Previous buttons to navigate between images of the dataset
  • Maybe menu to know where you are in dataset? So you can jump to an image?
  • Upload component to select batch of images from disk
  • Can we select a folder instead of having to shift-select with the mouse?
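A possible JSON schema for the annotation Store, keyed by image filename; the field names below are assumptions, not a fixed format:

```python
import json

# Hypothetical schema: one list of rectangles per image, each rectangle
# carrying its class and corner coordinates in pixel units.
annotations = {
    "img_001.png": [
        {"class": "car", "x0": 12.0, "y0": 40.0, "x1": 180.0, "y1": 200.0},
    ],
    "img_002.png": [],
}

def to_json(store):
    """Serialize the store for download to the client's disk."""
    return json.dumps(store, indent=2)

def from_json(text):
    """Load previously saved annotations back into the Store."""
    return json.loads(text)
```

Keying by filename makes the Next/Previous navigation straightforward: the callback looks up the current image's list and redraws its shapes.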

Can't zoom in ML app

I just noticed this: when you use the zoom buttons in the Graph (which trigger a relayoutData event), the range goes back to the initial one, probably because of what the callback does. If the callback is triggered by relayoutData and relayoutData does not contain shape data, the callback should return dash.no_update objects (i.e. do nothing).
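A sketch of the guard as a plain predicate (is_shape_event is a hypothetical helper; in the callback one would return dash.no_update whenever it is False). Shape edits show up in relayoutData under keys like shapes or shapes[2].x0, while zooms use keys like xaxis.range[0]:

```python
def is_shape_event(relayout_data):
    """True when relayoutData describes a drawn or edited shape,
    False for zoom/pan events (or an empty/None payload)."""
    if not relayout_data:
        return False
    return any(k == "shapes" or k.startswith("shapes[") for k in relayout_data)
```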

specifications for interactive learning and segmentation app

Goal: make a simplified web-app version of ilastik, i.e. an interactive learning and image segmentation app. On a given image, the user labels pixels of the different classes (cells, etc.) using annotations. Features are computed on the pixels of the annotations (with a default set of features, and the possibility to add others), and a random forest classifier (from scikit-learn) is trained on the labeled pixels in order to segment the pixels outside the annotations. Ilastik has a live mode, which can be turned on and off, where the segmentation is recomputed every time a new annotation is drawn.

From: https://www.ilastik.org/gallery.html#

Layout

  • an Input to paste the URL of an image to be segmented
  • dcc.Graph with the modebar button to draw open paths
  • a small slider to tune the width of the path (to modify layout.newshape)
  • radioitems or dropdowns or checklists to select the features (the grid they have for this in https://www.youtube.com/watch?v=kXzHbuJj9XQ is very neat but probably complicated to do)
  • a small colored button for the first class, with a text input to name the class (e.g. cell)
  • a button with "Add class" which will add another line with a colored button and a text input, for the next class (we could in fact have two rows from the start since there will always be more than two classes)
  • a "segmentation button" to train the machine learning model and compute the segmentation
  • a "live" checklist option (unchecked by default) to trigger the segmentation every time a new annotation is added
  • a dcc.Store for the annotations
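The per-class colored buttons could share a palette with layout.newshape; a minimal sketch (the palette values and the add_class helper are assumptions):

```python
import itertools

# Hypothetical default palette; each new class gets the next color, used both
# for the button background and for layout.newshape.line.color.
_palette = itertools.cycle(["#E48F72", "#636EFA", "#00CC96", "#AB63FA"])

classes = {}  # class name -> color

def add_class(name):
    """Register a class (from the 'Add class' button) and return its color."""
    if name not in classes:
        classes[name] = next(_palette)
    return classes[name]
```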

Callbacks

relayoutData when adding an annotation --> add the annotation to the Store

pressing the segmentation button (or the annotation Store changing, when in live mode) --> retrieve the geometry of the annotations, compute features on the annotated pixels of the image using the skimage.feature functions for this, train a random forest classifier, and predict on the unlabeled pixels. Output: a dcc.Graph figure with the segmentation overlaid on the original image (as in https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_label.html#sphx-glr-auto-examples-segmentation-plot-label-py, with label2rgb), plus the annotations. The selected features should be used as a State in this callback.

pressing one of the colored class buttons: change the newshape line color

changing the "brush" slider: change the newshape line width
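The last two callbacks both touch layout.newshape.line; a minimal sketch of the shared update (update_newshape is a hypothetical helper operating on the figure's layout dict):

```python
def update_newshape(figure_layout, color=None, width=None):
    """Set layout.newshape.line attributes so the *next* drawn annotation
    uses the selected class color and brush width."""
    line = figure_layout.setdefault("newshape", {}).setdefault("line", {})
    if color is not None:
        line["color"] = color
    if width is not None:
        line["width"] = width
    return figure_layout
```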

Open questions

Persistence of the machine learning model: ideally we would like to have a machine learning model which persists on the server for a given image. Indeed, you can use warm_start in a scikit-learn random forest classifier to accelerate the training stage. We could probably store the model inside a dictionary, the key being the URL of the image or a hash of it.

Ideally the image should also persist on disk, or maybe in this dictionary.
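A minimal sketch of such a server-side cache, assuming the image URL is a good identity key (hashing it keeps the keys short; get_or_create_model is a hypothetical helper):

```python
import hashlib

_models = {}  # hypothetical in-memory cache: image key -> fitted classifier

def model_key(image_url):
    """Stable key for an image, here a hash of its URL."""
    return hashlib.sha256(image_url.encode("utf-8")).hexdigest()

def get_or_create_model(image_url, factory):
    """Return the cached model for this image, creating it on first use,
    so that warm_start retraining can reuse the same estimator."""
    key = model_key(image_url)
    if key not in _models:
        _models[key] = factory()
    return _models[key]
```

In the app, factory would be something like lambda: RandomForestClassifier(warm_start=True).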

Getting started

I suggest watching https://www.youtube.com/watch?v=kXzHbuJj9XQ to see where we want to go.

Then one should get familiar with the underlying scikit-image and scikit-learn functions; the best approach is probably to work in a Jupyter notebook and play a bit with the code in https://github.com/scikit-image/skimage-tutorials/blob/master/lectures/solutions/machine_learning.ipynb (paragraph "Increasing the number of low-level features: trained segmentation using Gabor filters and random forests"). [It could also be a great way to test jupyter-dash :-)]

In order to get the geometry of the annotation from the plotly shape, one can use path_to_indices and the other parsing helpers in https://github.com/plotly/dash-canvas/blob/master/dash_canvas/utils/parse_json.py#L7 and https://github.com/plotly/dash-canvas/blob/master/dash_canvas/utils/parse_json.py#L79. The steps are

  • obtain coordinates of the path control points, from the shape returned in relayoutData
  • get the coordinates of all pixels covered by this path (in-between the control points), thanks to skimage.draw.bezier_curve
  • thicken the curve to the desired width with binary_dilation.
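A minimal sketch of the first step, simplified from the parse_json helpers linked above (the exact "M...L..." coordinate format is an assumption based on plotly's SVG path strings):

```python
import numpy as np

def path_to_indices(path):
    """Parse an SVG path like 'M10,20L30,40L50,60' (as produced by plotly
    open-path shapes) into an (N, 2) integer array of control points."""
    points = [pt.replace("M", "").replace("Z", "").split(",")
              for pt in path.split("L")]
    return np.rint(np.array(points, dtype=float)).astype(int)
```

The resulting control points can then be fed to skimage.draw.bezier_curve and binary_dilation for the last two steps.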
