beholder's Issues

Rename the plugin / repository

Wizards of the Coast might not be happy with the name "beholder". From their System Reference Document (pdf):

The following items are designated Product Identity, as defined in Section 1(e) of the Open Game License Version 1.0a, and are subject to the conditions set forth in Section 7 of the OGL, and are not Open Content: Dungeons & Dragons, D&D, Player’s Handbook .... beholder, gauth, carrion crawler, tanar’ri, baatezu, displacer beast, githyanki, githzerai, mind flayer, illithid, umber hulk, yuan-ti.

Suggestions welcome!

Add option to change scaling from the client

  • Add another set of radio buttons to choose how to scale - per-layer, or against the entire network.
  • Have the server add that option to the mode file.
  • Read the mode file for the scaling option, rather than having it passed in to the Beholder constructor (a rough sketch follows this list).
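
A minimal sketch of what the server-side read could look like, assuming the mode file holds JSON and a "scaling" key (both the format and key name are assumptions, not the current implementation):

import json
import os

def read_scaling_option(logdir, default='layer'):
  # The client writes its choice into the mode file; fall back to the
  # default if the file is missing or can't be parsed.
  mode_path = os.path.join(logdir, 'beholder', 'mode')
  try:
    with open(mode_path) as mode_file:
      return json.load(mode_file).get('scaling', default)
  except (IOError, ValueError):
    return default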

How can color be used?

It could be useful to use hue and value instead of just value. What would hue represent? I could constrain hue to a certain range to keep the visualization from getting too noisy.
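
As a rough illustration of the idea (function and parameter names are made up, and what hue should actually represent is still the open question):

import numpy as np
from matplotlib.colors import hsv_to_rgb

def to_hue_value_image(value, hue_signal, hue_range=(0.55, 0.75)):
  # value and hue_signal are float arrays in [0, 1] with the same shape.
  # Hue is confined to a narrow band so the result doesn't get too noisy;
  # value still carries the main signal.
  hue = hue_range[0] + hue_signal * (hue_range[1] - hue_range[0])
  hsv = np.stack([hue, np.ones_like(value), value], axis=-1)
  return (hsv_to_rgb(hsv) * 255).astype(np.uint8)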

Add configuration option to pause network

This could be an FPS slider. It would be nice so that people don't have to worry about the visualizations slowing down their network; they can always leave the update function in their code.
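
A rough sketch of how a requested FPS could throttle the work done by update() (class and attribute names are illustrative only):

import time

class FrameThrottle(object):
  def __init__(self, max_fps=10):
    self.min_interval = 1.0 / max_fps
    self.last_frame_time = 0.0

  def should_update(self):
    # Returns True at most max_fps times per second, so update() becomes
    # nearly free when the requested FPS is low (or effectively paused).
    now = time.time()
    if now - self.last_frame_time >= self.min_interval:
      self.last_frame_time = now
      return True
    return False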

IOError when importing beholder

An error occurs when importing the module:

Traceback (most recent call last):
  File "train.py", line 28, in <module>
    from beholder.beholder import Beholder
  File "/usr/local/lib/python2.7/dist-packages/beholder/beholder.py", line 10, in <module>
    from beholder import im_util
  File "/usr/local/lib/python2.7/dist-packages/beholder/im_util.py", line 79, in <module>
    colormaps = np.load('{}/colormaps.npy'.format(resources_path()))
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 370, in load
    fid = open(file, "rb")
IOError: [Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/beholder/resources/colormaps.npy'

Python: 2.7
TF version: 1.2.0
OS: Ubuntu 16.04
Bazel: 0.5.3
Pip: 9.0.1

Use perceptually uniform colormaps

See the explanation here: https://matplotlib.org/users/colormaps.html. They're also "relatively friendly to common forms of colorblindness" (see https://bids.github.io/colormap/).

Here's an example using the magma colormap (chosen by an informal survey of coworkers):
[screenshot: magma colormap example]

Original style example:
[screenshot: original style example]

There's a bit of a slowdown in generating the image, but it's not terrible. For this small network, it was about 0.03 seconds. With all data shown, it was 0.75 seconds. For a big VGG network showing all data, it was a bit over 2 seconds (this is all on my laptop running on battery). I haven't timed the slowdown across the entire pipeline, since images now have 2 additional channels.

Here's the code, where *_data is an array of shape [256, 3]:

import numpy as np

def apply_colormap(image, colormap='magma'):
  # Each *_data array has shape [256, 3], holding RGB values in [0, 1].
  string_to_array = {
    'magma': _magma_data,
    'inferno': _inferno_data,
    'plasma': _plasma_data,
    'viridis': _viridis_data,
  }

  colormap_data = string_to_array[colormap]
  # image holds integer indices in [0, 255]; indexing maps each pixel to an
  # RGB triple, which is then scaled to [0, 255] for display.
  return (colormap_data[image] * 255).astype(np.uint8)
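
For example, on an array that has already been scaled and quantized to integer indices in [0, 255] (random data here, just for illustration):

grayscale = np.random.randint(0, 256, size=(480, 640))
colored = apply_colormap(grayscale, colormap='magma')  # shape (480, 640, 3), dtype uint8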

@jart Do you think this should be the default? Thoughts in general?

Make a demo video

@jart @wchargin: I'm almost there. I nearly had it ready today, but I'm not very good with video editing software, so I'll have to give editing another go. I might not finish today, but it will definitely be done by tomorrow.

Add custom tensor option

  • Allow people to pass in their own list of tensors to be visualized (a rough usage sketch follows this list).
  • Add option to front end
  • Add mode code to back end
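
A rough sketch of the user-facing side, assuming the constructor grows the tensors option described in the design proposal below (everything else here is illustrative):

import tensorflow as tf
from beholder.beholder import Beholder

# Any tensors the user cares about; here, just the trainable variables.
custom_tensors = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)

visualizer = Beholder(logdir='/tmp/beholder-demo', tensors=custom_tensors)
# ...then, inside the training loop:
# visualizer.update()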

Prepare a TensorBoard pull request so other people can use this

@jart @dandelionmane:

I'll summarize the changes I think need to be made to TensorBoard here.

  1. tensorboard/BUILD:
    a. add a dependency for the plugin. This seems problematic - what if I want to change the name of the plugin? Could you add tensorboard/plugins/third-party so that I can put all of my stuff in there and the following steps happen automatically? This could be a big request, but that way people could install new plugins just by cloning a repo, with no TensorBoard update required. Maybe the third-party repos would have some configuration file that TensorBoard reads for all the external changes that need to be made.

  2. tensorboard/components/tf_globals/globals.ts:
    a. add tab. In line with my thoughts in 1, I'm picturing instead of

[screenshot]

it is

[screenshot]

with all of the third party plugins in a dropdown or something.

  3. tensorboard/components/tf_tensorboard/BUILD
    a. add dependency for a new dashboard.

  4. tensorboard/components/tf_tensorboard/tf-tensorboard.html:
    a. import new dashboard
    b. add template to content div
    c. add _modeIs<plugin> function

  5. tensorboard/main.py
    a. add plugin to the list of plugins.

What do you think? Does this look right? I think these are all of the changes I've made to TensorBoard. I finally have the full pipeline working, though! This is by no means final, but these frames are updating (requested at 10 FPS, just by swapping the source on an img tag with setInterval; I think I'll change that so it automatically adjusts depending on whether it keeps receiving the same frame over and over or not):

[screenshot]

Discuss Beholder potentially becoming an officially supported TensorBoard plugin

To: @jart @dandelionmane @wchargin

I think @jart and @dandelionmane both mentioned potentially merging this repo into TensorBoard. Here are some of my questions.

  1. Would you actually like to merge the repo? What if it was left separate, and I just waited for the plugin system to be changed so that installation was easier? A summary of your view of pros and cons for both parties would be nice.
  2. What changes in the code would have to happen before a transfer?
  3. What would the timeline be?
  4. How will my influence be limited? How would I contribute afterwards? It's been fun to build, and it's a bit sad to me to consider not being able to work on it as much.
  5. What will happen to this repo?

basic plugin functionality (up through writing to disk)

To: @dandelionmane, @jart, @caisq

I named it Beholder (for now) because... it's a viz project, I'm a nerd, it's short, and I'm too lazy to think of anything else right now. Anyway, here it goes. It's pretty close to how @dandelionmane described it in tensorflow/tensorboard#130.

proposed design

People should push tensors to the front end with two function calls: a constructor, Beholder, and an update function. Here's the flow I'm imagining (a usage sketch follows the list).

  • Construct a Beholder, with configuration options, including:
    • logdir: where the logs go.
    • window_size: how many frames to use for calculating variance.
    • tensors: a list of tensors to visualize. Default behavior is "grab everything I can find".
    • scaling: either "layer" or "network". Determines how to scale the parameters for display. Scales using the min/max of the layer or the entire network.
  • Call beholder.update() in the train loop. If visualizing variance, a size-limited queue will be used to hold the t most recent tensors, one for each time update is called.
    • Determine what type of data to get from logdir/beholder/mode.
    • If the mode changed, clear the queue.
    • Get the appropriate tensors from the model and turn them into a bunch of numpy arrays (I'm not aware of any other way to save tensor values over time). Add those arrays to the queue.
    • Process the numpy arrays into an image. This might include reshaping, concatenating, and scaling to [0, 255] for image display.
    • Write the image to logdir/beholder/current_frame or something. Only one file will exist there at a time, so there's no need to worry about disk space. The current worry is more about memory, since I'm thinking of storing millions of parameters for several timesteps.
      • I picture this changing in the future, after I get things working on the front end. Like was mentioned, this can be replaced with grpc or something.
    • Maybe keep track of how quickly it writes images, and somehow communicate to the front end about how often it should poll. On second thought, this probably just needs to wait until v2 when the back end and front end are communicating in some other way, where this might not even be necessary.
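
Putting the flow together, here's a rough usage sketch from the user's point of view (parameter values are illustrative and nothing here is final):

from beholder.beholder import Beholder

visualizer = Beholder(
    logdir='/tmp/model-logs',  # where the logs go
    window_size=15,            # number of recent frames used for variance
    tensors=None,              # default behavior: "grab everything I can find"
    scaling='layer')           # 'layer' or 'network'

for step in range(1000):
  # ...run one training step here...
  # update() reads logdir/beholder/mode, pushes the latest values onto the
  # size-limited queue, and writes logdir/beholder/current_frame.
  visualizer.update()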

questions / response requested

  1. Is there a better way to save the tensors over time than pulling them out as numpy arrays?
  2. How should I write the image? I don't know anything about the advantages of protocol buffers. Right now, I would just use cv2.imwrite, but I suppose I could make a tensor from the numpy array (sounds like it could be expensive, but I haven't tested anything) and then use tf.summary.image and a FileWriter to save the image.
  3. This design makes it simple for people to use; however, it doesn't really fit the TensorBoard way of using summaries and a writer to save things. Do I instead make some type of summary, and some type of writer?
  4. I kinda get what's going on with Bazel, but I have no clue how I need to use it for this project. Do I need to consider that at this point?
  5. This can wait til later, but I have no clue what I'm doing on the front end. Do I need to understand that to build the back end well? Also, what would you suggest I look at for that? Would there be a new tab in TensorBoard? Should I be developing this as its own little div that we can throw into TensorBoard later?
  6. Is it bad that the back end produces an image rather than handing over raw parameters over the last t steps? It will be faster to do it this way.
  7. Should I be following a style guide? Do you have some linter or something I should use?

Add option to not truncate any values

Currently all values are being truncated to (roughly) the number of pixels being displayed. Add an option that allows people to see every single parameter.
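
A rough sketch of what "show everything" could mean, assuming the flattened parameters get wrapped into rows of the display width instead of being cut off (names are illustrative):

import numpy as np

def wrap_to_image(values, width=512):
  # Pad the flattened parameter vector so it fills whole rows, then reshape
  # into an image that grows taller rather than dropping values.
  flat = values.ravel()
  padding = (-len(flat)) % width
  flat = np.concatenate([flat, np.zeros(padding, dtype=flat.dtype)])
  return flat.reshape(-1, width)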

Fix the is_active method

  • Fix is_active in beholder_plugin (one possible approach is sketched below).
  • Add an inactive page that explains how to set Beholder up in the user's script. Include the local file system disclaimer.
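
A minimal sketch of one possible is_active, assuming the plugin just checks whether a Beholder run has written a frame under the log directory (paths and names are illustrative, not the current code):

import os

class BeholderPlugin(object):
  def __init__(self, logdir):
    self._logdir = logdir

  def is_active(self):
    # Active only once the backend has written a frame; otherwise TensorBoard
    # can show the inactive page with setup instructions.
    frame_path = os.path.join(self._logdir, 'beholder', 'current_frame')
    return os.path.exists(frame_path)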
