bruckner / deepviz
Visualization tools for deep convolutional networks
When using the provided model, corpus, and model_stat, there is a problem with the RGB-to-grayscale conversion in deepviz_webui.viewdecorators._image_to_png.
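A possible shape for the fix, sketched with NumPy. The helper name and the BT.601 luma weights are illustrative assumptions; the actual `_image_to_png` code may handle channels differently.

```python
import numpy as np

def to_grayscale(img):
    """Convert an (H, W, 3) RGB array to (H, W) grayscale.

    Hypothetical helper: applies ITU-R BT.601 luma weights along the
    channel axis; already-grayscale inputs pass through unchanged.
    """
    if img.ndim == 2:                      # already grayscale
        return img
    if img.ndim == 3 and img.shape[-1] == 3:
        weights = np.array([0.299, 0.587, 0.114])
        return img @ weights               # weighted sum over channels
    raise ValueError("unexpected image shape: %r" % (img.shape,))
```

Handling both 2-D and 3-D inputs in one place avoids the conversion error when the corpus mixes color and grayscale images.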
We should add a view that shows how the training error decreases over time. Some ideas:
Images should be normalized according to the same scale across timesteps.
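The shared-scale normalization could be sketched like this, assuming each checkpoint's filter image is available as a NumPy array (the function name is hypothetical):

```python
import numpy as np

def normalize_across_timesteps(frames):
    """Scale a sequence of filter snapshots to [0, 1] using ONE shared
    min/max, so brightness is comparable across training checkpoints.

    Sketch only: assumes `frames` is a list of equally-shaped float
    arrays, one per checkpoint.
    """
    stacked = np.stack(frames)
    lo, hi = stacked.min(), stacked.max()
    if hi == lo:                  # constant filters: avoid divide-by-zero
        return [np.zeros_like(f) for f in frames]
    return [(f - lo) / (hi - lo) for f in frames]
```

Normalizing each frame independently would make every checkpoint use the full brightness range, hiding how weight magnitudes actually grow over training.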
Add filter weights to detail display on selection/mouseover.
It would be great to somewhere see the raw values of the weights in the filters - the detail display in the top left probably makes sense.
Even though our dataset only contains 10 classes, the examples could be further divided into additional subcategories. For example, consider the images of planes. We have images of the fronts of planes on the ground:
And images of planes flying against different colored skies:
Although these are all images of planes, it seems plausible that there could be a difference in the activation patterns between the two sets of images.
If we view the output of the fully-connected layers as signatures describing the images, then by computing distance metrics between these signatures we can rank the similarity of images.
Given the FC64 or FC10 outputs for all of the images, we could apply clustering techniques to identify groups of images that the network classifies similarly. We could project the results of this clustering down to two dimensions and display the images according to this clustered layout.
It would be interesting to see how this clustering evolves while the model is trained. I imagine that at first the similarity scores might reflect low-level image features, like overall color, but that over time the clusters would be based on higher-level features and some subcategories might begin to emerge.
This view could also be useful for understanding misclassifications; I've noticed that the classifier sometimes confuses dogs and horses. Given a misclassified horse, being able to see the most similar dog images might help to explain the misclassification: maybe that particular horse image is atypical and is similar to some particular subgroup of dog images.
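The signature-distance ranking described above could be sketched like this, assuming the FC64/FC10 outputs are available as a NumPy array (the function name is hypothetical):

```python
import numpy as np

def most_similar(signatures, query_idx, k=5):
    """Rank images by cosine similarity of their FC-layer signatures.

    Sketch only: `signatures` is assumed to be an (n_images, n_units)
    array of FC64/FC10 activations. Returns the indices of the k most
    similar images, excluding the query itself.
    """
    norms = np.linalg.norm(signatures, axis=1, keepdims=True)
    unit = signatures / np.clip(norms, 1e-12, None)
    sims = unit @ unit[query_idx]
    sims = sims.astype(float)
    sims[query_idx] = -np.inf          # never return the query image
    return np.argsort(sims)[::-1][:k]
```

For the misclassification use case, calling this with a misclassified horse image's signature would surface the dog images the network considers closest.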
The filter views should pre-load their images in the background.
Show weights for fully-connected layers.
The interface should give feedback to indicate which layer has been selected. For example, we could shade the currently selected layer in the layer DAG.
Do the UI work; the backend is tracked in #8.
When viewing the convolution of a filter and an image, it should be possible to interact with the convolved images to view the filters that produced them.
Currently, I have to clear my image selection to get back to the raw filters, which may have a different spatial layout / size than the filter output.
One approach would be to add mouseovers on images to view the filters that produced them. This would require us to add SVGs to enable mouseover interactions with convolved filter displays and a handler to render images of individual filters.
Another approach would be to ensure that the with image / without image views have the same size and layout, and to add a button to toggle between the two modes.
We want to give users the ability to select a subset of filters, layers, checkpoints and display them all at once.
Need to add new views to support this in views.py and logic in the frontend for handling multiple images in the result set.
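One way the frontend/backend contract could look, sketched as a plain query-string parser. The parameter names here are hypothetical, not necessarily what views.py will use:

```python
from urllib.parse import parse_qs

def parse_selection(query):
    """Parse a multi-selection query string such as
    'filters=1,3,5&layers=conv1,conv2&checkpoints=10,20'
    into lists, so one view can return many images at once.

    Hypothetical request format for the multi-select feature.
    """
    out = {}
    for key, values in parse_qs(query).items():
        items = []
        for v in values:
            items.extend(p for p in v.split(",") if p)
        out[key] = items
    return out
```

The backend view would then iterate over the cross product of the selected filters, layers, and checkpoints and return one image per combination.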
When images have been selected, we should be able to visualize pooling and neuron layers.
The layer graph currently displays neuron layers as sinks, when they should be placed between the convolution and pooling layers.
Imagine that at every timestamp, we had access to the cumulative responses of every component of the network for all images in each category.
This would let us color components of layers based on which image categories most strongly activated them. Hue could indicate the most strongly-activating class, while saturation could indicate the precision of this activation. For example, output neurons that are finely-tuned to particular classes would be heavily saturated, while we might expect earlier layers to show less saturation since they probably capture more low-level visual features that aren't specific to any particular class (such as lines at a particular orientation). When moving the timeline slider, shifts in hue and saturation would show how different parts of the model become tuned to particular input classes.
This could be difficult to implement. The data storage requirements aren't huge if we aggregate the responses by image class, since we would be storing k extra copies of the model at each timestamp (one per class). Ideally, we would compute these aggregates during regular model training and testing. A naive approach would be to process each image through the network at each timestep in an offline batch job. A better approach would be to segment the training images by class, then pass them through a modified testing pipeline that aggregates all intermediate values.
We'd also have to decide whether the coloring will be based on the images' predicted or true classes.
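The hue/saturation mapping above could be sketched as follows, assuming the per-class mean activations have already been aggregated and are nonnegative (e.g. post-ReLU); the names are hypothetical:

```python
import numpy as np

def class_tuning(mean_acts):
    """Summarize how class-specific each unit's response is.

    `mean_acts[c, u]` is assumed to hold unit u's average activation
    over all images of class c. Returns, per unit, the most strongly
    activating class (the hue) and a 0..1 peakedness score (the
    saturation): a uniform response maps to 0, a one-hot response to 1.
    """
    k = mean_acts.shape[0]
    totals = np.clip(mean_acts.sum(axis=0), 1e-12, None)
    frac = mean_acts.max(axis=0) / totals           # in [1/k, 1]
    hue_class = mean_acts.argmax(axis=0)
    saturation = (frac - 1.0 / k) / (1.0 - 1.0 / k)  # rescale to [0, 1]
    return hue_class, np.clip(saturation, 0.0, 1.0)
```

Under this scheme an early-layer edge detector that fires equally for all classes would render nearly gray, while a finely-tuned output neuron would render fully saturated.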
Cluster images by fc10 output - display images closest to cluster center.
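A sketch of picking the representative image per cluster, assuming fc10 signatures and labels produced by any clustering step (e.g. k-means); the function name is hypothetical:

```python
import numpy as np

def closest_to_centers(signatures, labels, n_clusters):
    """For each cluster, return the index of the image whose fc10
    signature is nearest the cluster mean.

    Sketch only: `signatures` is (n_images, n_units), `labels` is a
    length-n_images array of cluster assignments.
    """
    reps = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        center = signatures[members].mean(axis=0)
        dists = np.linalg.norm(signatures[members] - center, axis=1)
        reps.append(int(members[dists.argmin()]))
    return reps
```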
The error rates of the classifications in the Decaf-trained ModelStatsDB don't match the errors of the cuda-convnet model. This could be due to differences in how the models receive their inputs (or possibly a bug in Decaf). This issue tracks progress on fixing this.
The main content area should be scrollable if its content doesn't fit.
Could we encode additional information on the layer graph? Maybe we could use edge labels to convey information about the sizes of the outputs at each layer.
Add views for showing how individual images are processed by the network. Commit 32314aa added a backend for visualizing the output of filters for particular input images.
I found that the magic package's API depends on how the package was installed (via apt-get or pip). The syntax used here assumes the apt-get version, so it would be easier if we used code compatible with both cases.
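A possible compatibility shim: the pip `python-magic` package exposes `magic.from_buffer(...)`, while the Debian apt bindings expose `magic.open()`/`buffer()`. Detecting which API is present lets one code path serve both installs (the helper name is ours, and the apt-style branch is an assumption based on the libmagic bindings):

```python
def mime_from_buffer(data):
    """Return the MIME type of `data`, working with either variant of
    the `magic` module.

    Sketch: tries the pip-style python-magic API first, then falls back
    to the apt-style libmagic bindings.
    """
    import magic
    if hasattr(magic, "from_buffer"):           # pip-style python-magic
        return magic.from_buffer(data, mime=True)
    ms = magic.open(magic.MAGIC_MIME_TYPE)      # apt-style file bindings
    ms.load()
    try:
        return ms.buffer(data)
    finally:
        ms.close()
```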