
deepviz's People

Contributors

bruckner, etrain, joshrosen


deepviz's Issues

Error while using magic package

I found that the `magic` package's API depends on how the package was installed (from apt-get or from pip). The syntax used in this project is for the apt-get version, so it would be better if the code were compatible with both cases.
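One way to do that, as a minimal sketch: detect which binding is present at runtime, since the pip package exposes module-level helpers like `magic.from_file` while the apt-get binding exposes the cookie-based `magic.open` API (the wrapper name `detect_mime` is ours):

```python
import magic

def detect_mime(path):
    """Return a file's MIME type under either magic binding."""
    if hasattr(magic, "from_file"):
        # pip package (python-magic): module-level helpers
        return magic.from_file(path, mime=True)
    # apt-get package (binding shipped with `file`): cookie-based API
    m = magic.open(magic.MAGIC_MIME_TYPE)
    m.load()
    try:
        return m.file(path)
    finally:
        m.close()
```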

Show the filter outputs for individual input images

Add views for showing how individual images are processed by the network. Commit 32314aa added a backend for visualizing the output of filters for particular input images.

  • Add an interface for selecting an image from the image corpus.
  • Show the selected image in the sidebar.
  • Show the image's predicted class in the sidebar. We could also plot the discrete probability distribution over classes to give a sense of the confidence of the prediction.
  • Pass the selected image through the filters and display the outputs (a sketch of this step follows the list).
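For the convolution step, a minimal sketch with NumPy/SciPy; the real backend from commit 32314aa would pull trained filters from a model checkpoint rather than the random arrays used here:

```python
import numpy as np
from scipy.signal import convolve2d

def filter_outputs(image, filters):
    """Convolve one grayscale image (H x W) with each filter (k x k),
    returning one response map per filter for display."""
    return [convolve2d(image, f, mode="valid") for f in filters]

# Hypothetical example: 8 random 5x5 "filters" on a 32x32 image
image = np.random.rand(32, 32)
filters = np.random.rand(8, 5, 5)
responses = filter_outputs(image, filters)  # eight 28x28 response maps
```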

Record layer activations aggregated by image categories at all timestamps

Imagine that at every timestamp, we had access to the cumulative responses of every component of the network for all images in each category.

This would let us color components of layers based on which image categories most strongly activated them. Hue could indicate the most strongly-activating class, while saturation could indicate the precision of this activation. For example, output neurons that are finely tuned to particular classes would be heavily saturated, while we might expect earlier layers to show less saturation, since they probably capture low-level visual features that aren't specific to any particular class (such as lines at a particular orientation). When moving the timeline slider, shifts in hue and saturation would show how different parts of the model become tuned to particular input classes.
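As a sketch of one possible mapping (not a settled design), hue could come from the argmax class and saturation from how far the per-class activation distribution is from uniform:

```python
import colorsys
import numpy as np

def activation_color(class_activations):
    """Map a length-k vector of nonnegative per-class activations for
    one network component to an RGB color: hue encodes the most
    strongly-activating class, saturation the sharpness of the tuning."""
    k = len(class_activations)
    p = class_activations / class_activations.sum()
    hue = np.argmax(p) / float(k)                # one hue slot per class
    sat = (p.max() - 1.0 / k) / (1.0 - 1.0 / k)  # 0 = uniform, 1 = peaked
    return colorsys.hsv_to_rgb(hue, sat, 1.0)
```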

This could be difficult to implement. The data storage requirements aren't huge if we aggregate the responses by image class, since we would be storing k extra copies of the model at each timestamp (one per class). Ideally, we would compute these aggregates during regular model training and testing. A naive approach would be to process each image through the network at each timestamp in an offline batch job. A better approach would be to segment the training images by class, then pass them through a modified testing pipeline that aggregates all intermediate values.
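A sketch of that modified pipeline, assuming a hypothetical `model.forward` hook that returns each layer's activations for a batch of images:

```python
import numpy as np

def aggregate_by_class(model, images, labels):
    """For one checkpoint, compute each layer's mean activation per
    image class: the k-extra-copies-of-the-model aggregate described
    above. `model.forward` is a hypothetical hook returning a dict of
    layer name -> (batch, ...) activation arrays."""
    aggregates = {}
    for cls in np.unique(labels):
        batch = images[labels == cls]
        for layer, acts in model.forward(batch).items():
            # mean over the batch axis: one aggregate per (layer, class)
            aggregates.setdefault(layer, {})[cls] = acts.mean(axis=0)
    return aggregates
```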

We'd also have to decide whether the coloring will be based on the images' predicted or true classes.

Add ability to select subset of filters, times, etc

We want to give users the ability to select a subset of filters, layers, and checkpoints, and display them all at once.

We need to add new views in views.py to support this, plus logic in the frontend for handling multiple images in the result set.
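As a rough sketch, assuming a Flask-style views.py (the route and parameter names here are hypothetical, not the project's actual API):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/filters/subset")
def filter_subset():
    """Return render URLs for a requested subset, e.g.
    /filters/subset?layer=conv1&filters=0,3,7&checkpoints=100,200"""
    layer = request.args.get("layer", "")
    filter_ids = [int(f) for f in request.args.get("filters", "").split(",") if f]
    checkpoints = [int(t) for t in request.args.get("checkpoints", "").split(",") if t]
    results = [{"layer": layer, "filter": f, "checkpoint": t,
                "url": "/render/%s/%d/%d" % (layer, f, t)}
               for t in checkpoints for f in filter_ids]
    return jsonify(results=results)
```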

Show how training error decreases over time

We should add a view that shows how the training error decreases over time. Some ideas (a sketch of the data side follows the list):

  • Add a sparkline showing the training loss versus time. This could be drawn parallel to the timeline slider.
  • Add a confusion matrix view that changes in response to the timeline slider.
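On the data side, both ideas only need per-checkpoint confusion counts; a sketch, where `predictions_at` is a hypothetical lookup into the model stats DB:

```python
import numpy as np

def confusion_matrix(true_labels, predicted_labels, num_classes=10):
    """Rows are true classes, columns predicted classes; the diagonal
    holds correct classifications."""
    m = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        m[t, p] += 1
    return m

def training_error(m):
    """Fraction of examples off the diagonal, for the sparkline."""
    return 1.0 - np.trace(m) / float(m.sum())

# One matrix per checkpoint; the timeline slider indexes into this dict.
# matrices = {t: confusion_matrix(labels, predictions_at(t)) for t in checkpoints}
```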

Interaction for viewing the filter while viewing the convolution of that filter and an input

When viewing the convolution of a filter and an image, it should be possible to interact with the convolved images to view the filters that produced them.

Currently, I have to clear my image selection to get back to the raw filters, which may have a different spatial layout / size than the filter output.

One approach would be to add mouseovers on images to view the filters that produced them. This would require us to add SVGs to enable mouseover interactions with convolved filter displays and a handler to render images of individual filters.

Another approach would be to ensure that the with-image and without-image views have the same size and layout, and to add a button to toggle between the two modes.

Error rates in model stats DB don't match up with convnet's final error

The error rates in the classifications in the Decaf-trained ModelStatsDB don't match up with the errors of the cuda-convnet model. This could be due to differences in how the two models receive their inputs, or possibly a bug in Decaf.

This issue tracks progress on fixing the discrepancy.

Clustering of images based on fully-connected layer outputs

Even though our dataset only classifies images into 10 classes, the examples could be further divided into additional subcategories. For example, consider the images of planes. We have images of the fronts of planes on the ground:
(three example images)

And images of planes flying against different colored skies:
(three example images)

Although these are all images of planes, it seems plausible that there could be a difference in the activation patterns between the two sets of images.

If we view the output of the fully-connected layers as signatures describing the images, then by computing distance metrics between these signatures we can rank the similarity of images.
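For example, a cosine-similarity ranking over signatures might look like the following sketch, where `signatures` holds one FC-layer output vector per image:

```python
import numpy as np

def most_similar(query_idx, signatures, k=5):
    """Return indices of the k images whose FC-layer signatures are
    most similar (by cosine similarity) to the query image's."""
    q = signatures[query_idx]
    sims = signatures.dot(q) / (
        np.linalg.norm(signatures, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)
    return [i for i in order if i != query_idx][:k]
```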

Given the FC64 or FC10 outputs for all of the images, we could apply clustering techniques to identify groups of images that the network classifies similarly. We could project the results of this clustering down to two dimensions and display the images according to this clustered layout.
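A sketch using scikit-learn, with k-means and t-SNE as illustrative (not prescribed) choices for the clustering and the 2-D projection:

```python
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def cluster_and_project(signatures, n_clusters=20):
    """Cluster FC64/FC10 output vectors (one row per image) and project
    them to 2-D so the images can be laid out by similarity. The
    cluster count is an arbitrary placeholder."""
    cluster_ids = KMeans(n_clusters=n_clusters).fit_predict(signatures)
    coords = TSNE(n_components=2).fit_transform(signatures)
    return cluster_ids, coords
```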

It would be interesting to see how this clustering evolves while the model is trained. I imagine that at first the similarity scores might reflect low-level image features, like overall color, but that over time the clusters would be based on higher-level features and some subcategories might begin to emerge.

This view could also be useful for understanding misclassifications; I've noticed that the classifier sometimes confuses dogs and horses. Given a misclassified horse, being able to see the most similar dog images might help to explain the misclassification: maybe that particular horse image is atypical and is similar to some particular subgroup of dog images.
