

binarynet's People

Contributors

matthieucourbariaux, ppwwyyxx


binarynet's Issues

Why is my trained binary model bigger than the original model?

hi,
I used Lasagne to train an MLP model and a CNN model on MNIST and got their "model.npz" files.
I found that the binary net's "model.npz" is larger than the "model.npz" of the MLP and CNN.
Does that mean the binary net uses more parameters and costs much more memory?
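As a hedged aside (the layer size and the use of `np.savez` here are assumptions for illustration, not details from the repo's training scripts): a saved file of binary weights is only smaller if the ±1 values are actually packed into bits before saving; storing them as float32 keeps the file the same size as a full-precision model. A minimal numpy sketch:

```python
import numpy as np

# Hypothetical 4096x4096 layer of +1/-1 weights kept as float32,
# which is how a plain np.savez of trained parameters would store them.
w = np.where(np.random.randn(4096, 4096) >= 0, 1.0, -1.0).astype(np.float32)

# Pack one weight per bit; the sign pattern survives the round trip.
packed = np.packbits((w > 0).reshape(-1))
unpacked = np.unpackbits(packed)[: w.size].astype(np.float32) * 2.0 - 1.0
assert np.array_equal(unpacked.reshape(w.shape), w)

print(w.nbytes // packed.nbytes)  # -> 32 (float32 vs. 1 bit per weight)
```

So the ~32x memory saving from binarization only materializes with an explicit bit-packing step (or a run-time that stores weights packed).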

speed of training

I'm running the MNIST example on a Tesla K80 GPU, but each epoch takes around 130 minutes. Is there any Theano configuration or other change that could help with the run time?

Thank you.

An error occurs when training BinaryNet on the MNIST dataset

I'm trying to train BinaryNet on the MNIST dataset, but it prints these results:

Training...
Epoch 1 of 1000 took 70.4168188572s
LR: 0.003
training loss: nan
validation loss: nan
validation error rate: 90.0900000334%
best epoch: 1
best validation error rate: 90.0900000334%
test loss: nan
test error rate: 90.2000000477%
Epoch 2 of 1000 took 69.7526259422s
LR: 0.00297249583468
training loss: nan
validation loss: nan
validation error rate: 90.0900000334%
best epoch: 2
best validation error rate: 90.0900000334%
test loss: nan
test error rate: 90.2000000477%

I don't know what's going wrong. Does anyone have ideas on how to solve this?

upload `pip list > requirements.txt` ?

This package uses some outdated Python libraries, and it's hard to get all the correct versions.

How about uploading the requirements.txt that you use?

Thanks.

Binarization of activations input to the first layer

Hi @MatthieuCourbariaux. You mentioned in the paper that you didn't binarize the inputs to the first layer. I tested with binarized activations as input to all layers, including the first, and the accuracy seems to decrease only by a very minor factor. So is there a specific reason why you used float activations for the first layer, aside from maximizing accuracy?

Is there any reason not to use xnor_gemm in fprop for training?

In the training examples, xnor_gemm is not used for forward propagation; is there a reason for that?
As I understand it, convolution in the forward pass during training uses both binary activations and binarized weights, so the XNOR kernel should work there too, correct?
Is the cuDNN Theano convolution still faster?

benchmark error

Hi all, I ran benchmark_cubas.cu and found a really big problem.
When N = 8192, the result is
success=1
success=1
that is, both the XNOR GEMM and the cuBLAS GEMM are correct.
But when I change N to other numbers (128, 100, 192, 1192, 2192, 1000, 8195, and so on), the first check always fails and the second is almost always correct (only 100 is wrong); that is, the result is:
success=0
success=1
So, is this a problem? Why does only 8192 work?
Another question:
when I change the grid and block size to 1, the XNOR result is also wrong (success=0).
So I think maybe the XNOR code is not universal; it only works for some limited settings. Am I wrong?
Looking forward to your reply.

Some BUG about CUDA KERNEL

If the matrix dimension N is set smaller than 512 (e.g. 256 or 128) when running the benchmark, the result of xnor_gemm is wrong.

Activation Binarization

Hi, you mentioned in another issue that we don't need to binarize the input because the previous layer guarantees binary input. But after passing through a BatchNormalization layer, the previous layer's output becomes full-precision again. Is there something I have missed?

ImportError: No module named six.moves

I installed Pylearn2 and Theano (bleeding-edge version) following the links given in the README.
When running mnist.py I encountered the following error:

Using cuDNN version 5005 on context None
Preallocating 3027/4036 Mb (0.750000) on cuda
Mapped name None to device cuda: GeForce GTX 970 (0000:0F:00.0)
Traceback (most recent call last):
File "mnist.py", line 24, in <module>
from pylearn2.datasets.mnist import MNIST
File "/home/jiyu/fyp/pylearn2/pylearn2/__init__.py", line 4, in <module>
from pylearn2.utils.logger import configure_custom
File "/home/jiyu/fyp/pylearn2/pylearn2/utils/__init__.py", line 11, in <module>
from theano.compat.six.moves import input, zip as izip
ImportError: No module named six.moves

I tried to solve the issue by installing Theano 0.8, but then a new problem occurred:

ERROR (theano.sandbox.gpuarray): Could not initialize pygpu, support disabled
Traceback (most recent call last):
File "/home/jiyu/miniconda2/envs/py27/lib/python2.7/site-packages/theano/sandbox/gpuarray/__init__.py", line 95, in <module>
init_dev(config.device)
File "/home/jiyu/miniconda2/envs/py27/lib/python2.7/site-packages/theano/sandbox/gpuarray/__init__.py", line 46, in init_dev
"Make sure Theano and libgpuarray/pygpu "
RuntimeError: ('Wrong major API version for gpuarray:', 2, 'Make sure Theano and libgpuarray/pygpu are in sync.')

Could you confirm that the versions of Theano and Pylearn2 you suggest are correct?
Thanks

BNN's in TensorFlow?

Hi, Matthieu. Congratulations on a very exciting paper. It really opens up another branch for the current deep learning community. I have been trying to replicate the work in TensorFlow, but I have had little success. I understand the paper and the idea, but, perhaps because I am new to deep learning, I am missing some key steps. I am trying to replicate in TF the functions implemented in Theano for the BNNs.

Is it possible for you to share your insight on how to implement your work in TF, and to share any code you may have or can implement in TF? A large number of people are using TF, so it would help a lot of us out.

Shift based batch normalization

I am implementing this idea on an FPGA for acceleration, and shift-based batch normalization is very important, since it doesn't involve any multipliers on the FPGA, which could greatly reduce resource usage. However, this function seems not to be implemented in this project; do you have any update?

Could you share your CIFAR-10 test code?

Hello, I am working on a CIFAR-10 binary network. I used your CIFAR-10 training code, and then I wrote my own testing code for CIFAR-10. Generally, I binarized the conv2d layer weights and the dense layer weights, but after binarizing the conv2d layer weights, I couldn't get the right results. (I used deterministic binarization.)
So I am wondering whether I understand BNN correctly, or is it possible for you to share your CIFAR-10 test code?
Thank you very much!
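For reference, a hedged numpy sketch of deterministic weight binarization as the paper describes it (the helper name and the W == 0 convention are assumptions here, not the repo's code): at test time the conv and dense layers should use W_b = H * sign(W) in place of the real-valued weights.

```python
import numpy as np

# Deterministic binarization: W_b = H * sign(W). Mapping W == 0 to +H is
# a convention chosen here; plain np.sign would leave a 0 behind.
def binarize_deterministic(W, H=1.0):
    return np.where(W >= 0.0, H, -H)

W = np.array([-2.0, -0.3, 0.0, 0.4, 1.7])
print(binarize_deterministic(W))  # -> [-1. -1.  1.  1.  1.]
```

A common pitfall when writing a separate test script is binarizing weights that were already binarized (or forgetting the H scale), which silently corrupts the conv outputs.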

Permission denied when running mnist.py

Hi All,

I am having the following problem when I try to run the training mnist.py. Does anyone have any ideas?

Thanks in advance for your help.

I am on Windows 7 using Anaconda.

Using gpu device 0: Quadro K620 (CNMeM is disabled, cuDNN not available)
C:\Users\jmatai\AppData\Local\Continuum\Anaconda2\lib\site-packages\theano\tensor\signal\downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
"downsample module has been moved to the theano.tensor.signal.pool module.")
batch_size = 100
alpha = 0.1
epsilon = 0.0001
num_units = 4096
n_hidden_layers = 3
num_epochs = 1000
dropout_in = 0.2
dropout_hidden = 0.5
activation = binary_net.binary_tanh_unit
binary = True
stochastic = False
H = 1.0
W_LR_scale = Glorot
LR_start = 0.003
LR_fin = 3e-07
LR_decay = 0.990831944893
save_path = mnist_parameters.npz
shuffle_parts = 1
Loading MNIST dataset...
Traceback (most recent call last):
File "mnist.py", line 93, in <module>
train_set = MNIST(which_set= 'train', start=0, stop = 50000, center = False)
File "c:\projects\python_packages\pylearn2\pylearn2\datasets\mnist.py", line 93, in __init__
topo_view = read_mnist_images(im_path, dtype='float32')
File "c:\projects\python_packages\pylearn2\pylearn2\utils\mnist_ubyte.py", line 94, in read_mnist_images
with open_if_filename(fn, 'rb') as f:
File "c:\projects\python_packages\pylearn2\pylearn2\utils\mnist_ubyte.py", line 46, in __enter__
self._handle = open(self._f, self._mode, self._buffering)
IOError: [Errno 13] Permission denied: 'C:\projects\repos\dl\data/mnist/train-images-idx3-ubyte'

(C:\Users\jmatai\AppData\Local\Continuum\Anaconda2) C:\projects\repos\bnn\BinaryNet\Train-time>

Binarization of Layer Inputs

Hi,
I could not see where the activations are binarized in the code. I am looking at the code under the Train-time directory. Can anyone point me to where the layer-input binarization happens? Thanks
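For what it's worth, a hedged numpy sketch of how a binary tanh activation binarizes its input on the forward pass. This is an illustration of the technique, assuming the `binary_net.binary_tanh_unit` printed in the training logs works roughly like this; it is not copied from the repo:

```python
import numpy as np

def hard_sigmoid(x):
    # piecewise-linear approximation of a sigmoid, clipped to [0, 1]
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def binary_tanh_forward(x):
    # forward pass only: maps any real input to {-1, +1}; the training
    # code additionally needs a straight-through estimator for the gradient
    return 2.0 * np.where(hard_sigmoid(x) >= 0.5, 1.0, 0.0) - 1.0

x = np.array([-3.0, -0.1, 0.0, 0.2, 5.0])
print(binary_tanh_forward(x))  # -> [-1. -1.  1.  1.  1.]
```

On this reading, layer inputs are binarized by the *activation of the previous layer* rather than inside the dense/conv layers themselves, which is why no explicit input binarization appears in the layer code.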

Runtime error when running svhn.py

When running svhn.py, a runtime error occurred indicating a dimension mismatch. Both this program and the one under the BinaryConnect project have the same problem. Is there something wrong with the data I downloaded?

hanwentao@hydrogen ~/repo/BinaryNet/Train-time {env2} (master*) % python svhn.py
batch_size = 50
alpha = 0.1
epsilon = 0.0001
activation = binary_net.binary_tanh_unit
binary = True
stochastic = False
H = 1.0
W_LR_scale = Glorot
num_epochs = 200
LR_start = 0.001
LR_fin = 1e-06
LR_decay = 0.96605087899
shuffle_parts = 1
Loading SVHN dataset
Building the CNN...
W_LR_scale = 20.0499
H = 1.0
W_LR_scale = 27.7128
H = 1.0
W_LR_scale = 33.9411
H = 1.0
W_LR_scale = 39.1918
H = 1.0
W_LR_scale = 48.0
H = 1.0
W_LR_scale = 55.4256
H = 1.0
W_LR_scale = 58.4237
H = 1.0
W_LR_scale = 36.9504
H = 1.0
W_LR_scale = 26.2552
H = 1.0
Training...
Traceback (most recent call last):
  File "svhn.py", line 322, in <module>
    test_set.X,test_set.y)
  File "/home/hanwentao/repo/BinaryConnect/binary_connect.py", line 252, in train
    train_loss = train_epoch(X_train,y_train,LR)
  File "/home/hanwentao/repo/BinaryConnect/binary_connect.py", line 218, in train_epoch
    loss += train_fn(X[i*batch_size:(i+1)*batch_size],y[i*batch_size:(i+1)*batch_size],LR)
  File "/home/hanwentao/sandbox/binarize/env2/lib/python2.7/site-packages/theano/compile/function_module.py", line 912, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/home/hanwentao/sandbox/binarize/env2/lib/python2.7/site-packages/theano/gof/link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/home/hanwentao/sandbox/binarize/env2/lib/python2.7/site-packages/theano/compile/function_module.py", line 899, in __call__
    self.fn() if output_subset is None else\
ValueError: Input dimension mis-match. (input[1].shape[1] = 1, input[2].shape[1] = 10)
Apply node that caused the error: Elemwise{Composite{(i0 - (i1 * ((i2 * i3) + i4)))}}(TensorConstant{(1, 1) of 1.0}
, targets, Elemwise{sub,no_inplace}.0, Elemwise{true_div,no_inplace}.0, Rebroadcast{1}.0)
Toposort index: 448
Inputs types: [TensorType(float64, (True, True)), TensorType(float64, matrix), TensorType(float64, matrix), TensorT
ype(float64, row), TensorType(float64, row)]
Inputs shapes: [(1, 1), (50, 1), (50, 10), (1, 10), (1, 10)]
Inputs strides: [(8, 8), (8, 8), (80, 8), (80, 8), (80, 8)]
Inputs values: [array([[ 1.]]), 'not shown', 'not shown', 'not shown', 'not shown']
Outputs clients: [[Elemwise{maximum,no_inplace}(TensorConstant{(1, 1) of 0.0}, Elemwise{Composite{(i0 - (i1 * ((i2
* i3) + i4)))}}.0), Elemwise{EQ}(Elemwise{maximum,no_inplace}.0, Elemwise{Composite{(i0 - (i1 * ((i2 * i3) + i4)))}
}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created.
This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
Closing remaining open files: /home/hanwentao/data/tmp/svhn/h5/valid_32x32.h5...done /home/hanwentao/data/tmp/svhn/h5/test_32x32.h5...done /home/hanwentao/data/tmp/svhn/h5/splitted_train_32x32.h5...done

bias binarization

Hi,

Is there any special reason not to binarize the bias parameters in a similar way to the weights?

Thank you!

error when running Run-time/mnist.py

This is what I got (the training part works fine):

Using gpu device 0: GeForce GTX 960 (CNMeM is enabled with initial size: 90.0% of memory, cuDNN 5105)
Traceback (most recent call last):
File "binary_ops.py", line 225, in
dot2 = theano.function([A,B], Gemm()(A, B))
File "/usr/local/lib/python2.7/dist-packages/Theano-0.9.0.dev4-py2.7.egg/theano/compile/function.py", line 326, in function
output_keys=output_keys)
File "/usr/local/lib/python2.7/dist-packages/Theano-0.9.0.dev4-py2.7.egg/theano/compile/pfunc.py", line 486, in pfunc
output_keys=output_keys)
File "/usr/local/lib/python2.7/dist-packages/Theano-0.9.0.dev4-py2.7.egg/theano/compile/function_module.py", line 1784, in orig_function
defaults)
File "/usr/local/lib/python2.7/dist-packages/Theano-0.9.0.dev4-py2.7.egg/theano/compile/function_module.py", line 1651, in create
input_storage=input_storage_lists, storage_map=storage_map)
File "/usr/local/lib/python2.7/dist-packages/Theano-0.9.0.dev4-py2.7.egg/theano/gof/link.py", line 699, in make_thunk
storage_map=storage_map)[:3]
File "/usr/local/lib/python2.7/dist-packages/Theano-0.9.0.dev4-py2.7.egg/theano/gof/vm.py", line 1055, in make_all
impl=impl))
TypeError: ('The following error happened while compiling the node', Gemm(GpuContiguous.0, GpuContiguous.0), '\n', "make_thunk() got an unexpected keyword argument 'impl'")

Thanks

How to represent +1 and -1 in one bit?

As mentioned in the BinaryNet paper, we can use the XNOR-bitcounting operation.
It seems reasonable, but when we really try to realize this in one bit, how should we represent -1? Should we convert it to 0?
And that seems to run into problems when we try to XNOR it with an 8-bit input vector, since a negative number will usually be in two's-complement form.
I might be wrong; I'm just a little confused, since I was wondering how to achieve this in hardware...
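One common answer, as a sketch under the assumption that -1 is encoded as bit 0 and +1 as bit 1 (the names below are illustrative, not from the repo's kernels): with that encoding, the dot product of two n-element ±1 vectors is 2 * popcount(xnor(a, b)) - n, so no two's-complement arithmetic is involved at all.

```python
import numpy as np

# Encode -1 as bit 0 and +1 as bit 1. Then for two n-bit words a, b:
#   dot(a, b) = 2 * popcount(xnor(a, b)) - n
# because each matching bit pair contributes +1 and each mismatch -1.
def pack(v):
    # pack a +1/-1 vector into an integer, element i -> bit i
    return sum(1 << i for i, x in enumerate(v) if x > 0)

def binary_dot(a_bits, b_bits, n):
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # mask result to n bits
    return 2 * bin(xnor).count("1") - n

a = np.array([+1, -1, +1, +1])
b = np.array([+1, +1, -1, +1])
print(binary_dot(pack(a), pack(b), len(a)), int(np.dot(a, b)))  # -> 0 0
```

The 8-bit two's-complement concern only arises for the non-binarized first-layer inputs; for those, the paper-style trick is to treat each input bit plane as its own ±1 vector rather than XNOR-ing signed bytes directly.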

binary_net.py syntax error

Hi,
I'm trying to run mnist.py, but there is a SyntaxError in binary_net.py at line 40. Any idea why this error occurs? BTW, I'm using Python 3.5.

Thanks.

Cannot run inference with XNOR kernel on MNIST: pycuda._driver.LogicError: cuLaunchKernel failed: an illegal memory access was encountered

Hello;

I trained the MNIST network for 10 epochs and then ran mnist.py in the Run-time folder with the XNOR kernel. I got the PyCUDA cuLaunchKernel error below.

Can anyone tell me how to fix this ?

Thanks

(root) d1230@linse3:~/no_backup/d1230/anaconda2/bin/BinaryNet/Run-time> python mnist.py
Using gpu device 0: Graphics Device (CNMeM is enabled with initial size: 30.0% of memory, CuDNN 5110)
/home/d1230/no_backup/d1230/anaconda2/lib/python2.7/site-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your CuDNN version is more recent then Theano. If you see problems, try updating Theano or downgrading CuDNN to version 4.
warnings.warn(warn)
batch_size = 10000
num_units = 4096
n_hidden_layers = 3
kernel = xnor
Loading MNIST dataset...
Building the MLP...
Loading the trained parameters and binarizing the weights...
Running...
Traceback (most recent call last):
File "mnist.py", line 112, in
test_error = val_fn(test_set.X,test_set.y)*100.
File "/home/d1230/no_backup/d1230/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 871, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
File "/home/d1230/no_backup/d1230/anaconda2/lib/python2.7/site-packages/theano/gof/link.py", line 314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/home/d1230/no_backup/d1230/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 859, in __call__
outputs = self.fn()
File "/net/linse8-sn/no_backup_00/d1230/anaconda2/bin/BinaryNet/Run-time/binary_ops.py", line 162, in thunk
xnor_kernel(Ac,Bc,C[0], np.intc(m), np.intc(n/32.), np.intc(k), block= block, grid=grid)
File "/home/d1230/no_backup/d1230/anaconda2/lib/python2.7/site-packages/pycuda/driver.py", line 402, in function_call
func._launch_kernel(grid, block, arg_buf, shared, None)
pycuda._driver.LogicError: cuLaunchKernel failed: an illegal memory access was encountered
Apply node that caused the error: XnorGemm(GpuContiguous.0, GpuContiguous.0)
Toposort index: 20
Inputs types: [CudaNdarrayType(float32, matrix), CudaNdarrayType(float32, matrix)]
Inputs shapes: [(10000, 4096), (4096, 4096)]
Inputs strides: [(4096, 1), (4096, 1)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuElemwise{Add}[(0, 0)](XnorGemm.0, GpuDimShuffle{x,0}.0)]]

Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "mnist.py", line 88, in
test_output = lasagne.layers.get_output(mlp, deterministic=True)
File "/home/d1230/no_backup/d1230/anaconda2/lib/python2.7/site-packages/lasagne/layers/helper.py", line 185, in get_output
all_outputs[layer] = layer.get_output_for(layer_inputs, **kwargs)
File "/net/linse8-sn/no_backup_00/d1230/anaconda2/bin/BinaryNet/Run-time/binary_ops.py", line 199, in get_output_for
activation = xnor_gemm(input, self.W)
File "/home/d1230/no_backup/d1230/anaconda2/lib/python2.7/site-packages/theano/gof/op.py", line 611, in __call__
node = self.make_node(*inputs, **kwargs)
File "/net/linse8-sn/no_backup_00/d1230/anaconda2/bin/BinaryNet/Run-time/binary_ops.py", line 108, in make_node
return theano.Apply(self, [inp1, inp2], [self.output_type(inp1)()])

HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: context is destroyed
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: context is destroyed
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuModuleUnload failed: context is destroyed
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuModuleUnload failed: context is destroyed
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuModuleUnload failed: context is destroyed
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuModuleUnload failed: context is destroyed
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuCtxDetach failed: context is destroyed
(root) d1230@linse3:~/no_backup/d1230/anaconda2/bin/BinaryNet/Run-time>

Problem on understanding the Code and corresponding paper

I read the paper "Binarized Neural Networks". It's a very nice paper. But it's hard for me to find the g_{a_k} <- g_{a_k^b} \cdot 1_{|a_k| <= 1} part (Algorithm 1 in the paper) in this code. I assume this is the code for that paper. Could anyone help me with this?
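That step can be illustrated with a small numpy sketch of the straight-through estimator itself (a hedged illustration of the technique, not a pointer to the exact line in the repo; in Theano code it is typically folded into the activation's gradient via clipping): the forward pass uses sign(a), and the backward pass lets the gradient through unchanged but cancels it where |a| > 1.

```python
import numpy as np

def sign_forward(a):
    # forward: a_b = sign(a), with 0 mapped to +1 by convention
    return np.where(a >= 0.0, 1.0, -1.0)

def sign_backward(a, g_ab):
    # straight-through estimator: g_a = g_ab * 1_{|a| <= 1}
    return g_ab * (np.abs(a) <= 1.0)

a = np.array([-2.0, -0.5, 0.3, 1.5])
g_ab = np.ones_like(a)
print(sign_backward(a, g_ab))  # -> [0. 1. 1. 0.]
```

The indicator 1_{|a| <= 1} is exactly the derivative of hard tanh, which is why the implementation can look like an ordinary clipped activation rather than an explicit gradient mask.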

Implementation on PyTorch

I'm trying to implement BinaryNet in PyTorch and train it on the CIFAR-10 dataset. I have some questions:

  1. Is BNN (Binarized Neural Networks) the same as BinaryNet?

Both papers report their CIFAR-10 test error rate as 11.40%, and BNN's code link goes to BinaryNet.

  2. The BinaryNet paper says no data augmentation was used, so is it correct that there is only
     image data [0, 255] (int) scaled and shifted to [-1, 1] (float)?

In other words, no crop/brighten/resize etc.?
I don't understand how the original CNN (10.96%) and the binarized CNN (11.40%) can have nearly the same error rate...
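For the preprocessing in question 2, a minimal sketch; the exact constants are an assumption about "scaled & shifted to [-1, 1]", not taken from the repo:

```python
import numpy as np

# Map uint8 pixels in [0, 255] linearly to [-1, 1]; no cropping,
# brightness changes, or other augmentation.
img = np.array([[0, 127], [128, 255]], dtype=np.uint8)
scaled = img.astype(np.float32) / 127.5 - 1.0
print(scaled.min(), scaled.max())  # -> -1.0 1.0
```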

MLP with XNOR kernel is slower than theano.tensor.dot on MNIST dataset

I've got the following benchmarking results for the kernels on a TITAN Z GPU:
Baseline: 2.642 s, Theano: 0.582 s, XNOR: 0.988 s :-(
You can see that Theano is faster than XNOR, which is not consistent with the main point of the referenced paper.
Do you have any idea why this happens? How can XNOR be tweaked to beat Theano?

P.S.: plain matrix multiplication gives:
GEMM: 2.788 s, cublasSgemm: 0.331 s, XNOR GEMM: 0.182 s,
which is quite OK.

BinaryNet Run Time Error

Hi All,

I am having trouble running BinaryNet/Run-time/mnist.py and am getting the following error. I believe this is a version mismatch between PyCUDA and CUDA on my system. What versions of the dependencies (PyCUDA, CUDA, Lasagne, ...) are the Run-time/Train-time scripts of BinaryNet supposed to run with?

(C:\Users\jmatai\AppData\Local\Continuum\Anaconda2) C:\Work\bnn\BinaryNet\Run-time>python mnist.py
Using gpu device 0: Quadro K620 (CNMeM is disabled, cuDNN not available)
Traceback (most recent call last):
File "mnist.py", line 20, in
import binary_ops
File "C:\Work\bnn\BinaryNet\Run-time\binary_ops.py", line 10, in
import pycuda.driver as drv
File "C:\Users\jmatai\AppData\Local\Continuum\Anaconda2\lib\site-packages\pycuda\driver.py", line 5, in
from pycuda._driver import * # noqa
ImportError: DLL load failed: The specified module could not be found.

(C:\Users\jmatai\AppData\Local\Continuum\Anaconda2) C:\Work\bnn\BinaryNet\Run-time>
