
variational-dropout-sparsifies-dnn's Introduction

Variational Dropout Sparsifies Deep Neural Networks

TensorFlow implementation

Google AI Research has released State of Sparsity in Deep Neural Networks, a nice large-scale study of sparsification methods. Its code contains an implementation of Sparse Variational Dropout in TensorFlow.

Play around w/ SparseVD (PyTorch)

You can play with compression of a small neural network using the following IPython notebook @ Colab, which is also available as an assignment @ Colab from the DeepBayes Summer School. The code is not highly tuned, but it is simple.

This repo contains the code for our ICML17 paper, Variational Dropout Sparsifies Deep Neural Networks (talk, slides, poster, blog post). We showed that Variational Dropout leads to extremely sparse solutions in both fully-connected and convolutional layers. Sparse VD reduces the number of parameters by up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease in accuracy. This effect is similar to the Automatic Relevance Determination effect in empirical Bayes. However, in Sparse VD the prior distribution remains fixed, so there is no additional risk of overfitting.
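
The core of the method fits in a few lines: each weight gets its own dropout rate parameterized by log α, the KL term of the variational objective is approximated as in the paper, and weights whose log α exceeds a threshold are pruned at test time. Below is a minimal NumPy sketch of those two pieces (the constants k1, k2, k3 are the paper's approximation coefficients; the function names are ours, not part of this repo's API):

import numpy as np

# Coefficients of the paper's approximation to -KL(q(w | theta, alpha) || p(w)).
K1, K2, K3 = 0.63576, 1.87320, 1.48695

def neg_kl(log_alpha):
    # -KL per weight: k1 * sigmoid(k2 + k3 * log_alpha) - 0.5 * log(1 + 1/alpha) - k1
    sigmoid = 1.0 / (1.0 + np.exp(-(K2 + K3 * log_alpha)))
    return K1 * sigmoid - 0.5 * np.log1p(np.exp(-log_alpha)) - K1

def keep_mask(log_alpha, thresh=3.0):
    # Weights with log(alpha) > thresh are considered redundant and set to zero at test time.
    return log_alpha < thresh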

We visualize the weights of a Sparse VD LeNet-5-Caffe network and show several filters of the first convolutional layer and a piece of the fully-connected layer :)

ICML 2017 Oral Presentation by Dmitry Molchanov

MNIST Experiments

The table contains a comparison of different sparsity-inducing techniques (Pruning (Han et al., 2015b;a), DNS (Guo et al., 2016), SWS (Ullrich et al., 2017)) on LeNet architectures. Our method provides the highest level of sparsity with similar accuracy.

Network          Method            Error (%)   Sparsity per Layer (%)   Compression
LeNet-300-100    Original          1.64                                 1
                 Pruning           1.59        92.0 − 91.0 − 74.0       12
                 DNS               1.99        98.2 − 98.2 − 94.5       56
                 SWS               1.94                                 23
                 SparseVD (ours)   1.92        98.9 − 97.2 − 62.0       68
LeNet-5          Original          0.80                                 1
                 Pruning           0.77        34 − 88 − 92.0 − 81      12
                 DNS               0.91        86 − 97 − 99.3 − 96      111
                 SWS               0.97                                 200
                 SparseVD (ours)   0.75        67 − 98 − 99.8 − 95      280

CIFAR Experiments

The plot shows the accuracy and sparsity level for VGG-like architectures of different sizes. The number of neurons and filters scales with a factor k. Dense networks were trained with Binary Dropout, and Sparse VD networks were trained with Sparse Variational Dropout on all layers. The overall sparsity level achieved by our method is reported as a dashed line. The accuracy drop is negligible in most cases, and the sparsity level is high, especially for larger networks.

Environment setup

sudo apt install virtualenv python-pip python-dev
virtualenv venv --system-site-packages
source venv/bin/activate

pip install numpy tabulate 'ipython[all]' sklearn matplotlib seaborn  
pip install --upgrade https://github.com/Theano/Theano/archive/rel-0.9.0.zip
pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip

Launch experiments

source ~/venv/bin/activate
cd variational-dropout-sparsifies-dnn
THEANO_FLAGS='floatX=float32,device=gpu0,lib.cnmem=1' ipython ./experiments/<experiment>.py
  • If you have a cuDNN problem, please look at this issue.
  • This repo seems to use more up-to-date libs (Python 3.5 and Theano 1.0.0).

Further extensions

These two papers heavily rely on the Sparse Variational Dropout technique and extend it to other applications:

Citation

If you found this code useful, please cite our paper:

@InProceedings{
  molchanov2017variational,
  title={Variational Dropout Sparsifies Deep Neural Networks},
  author={Dmitry Molchanov and Arsenii Ashukha and Dmitry Vetrov},
  booktitle={Proceedings of the 34th International Conference on Machine Learning},
  year={2017}
}

variational-dropout-sparsifies-dnn's People

Contributors

senya-ashukha

variational-dropout-sparsifies-dnn's Issues

Details for reproducing LeNet-5 results in ICML paper

Can you please specify the training details for generating the LeNet5 pruning results in your paper? Did you pretrain the network or use the warm-up procedure for the KL divergence term? What learning rate did you use?
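
For reference, a common trick when training Sparse VD models is to warm up the KL term, i.e. scale its weight from 0 to 1 over the first epochs; the repo's nets/optpolicy module defines the actual training policies, so the sketch below is only a generic illustration with hypothetical settings, not the schedule used for the paper's results:

def kl_weight(epoch, warmup_epochs=10):
    # Linearly anneal the coefficient of the KL term from 0 to 1.
    # warmup_epochs=10 is a hypothetical value, not a setting taken from the paper.
    return min(1.0, float(epoch) / warmup_epochs)

# Schematically, the per-batch training loss is then:
#   loss = data_term + kl_weight(epoch) * KL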

cudnn version?

Hi, I am not able to run any of the experiments except lenet300-100. The error I get corresponds to the convolution layer. Also, Theano recently changed its backend to gpuarray, so I wonder: should I use a specific version of CUDA/cuDNN, or is any additional configuration needed to use gpuarray?

How do you choose the threshold?

Dear author,
I found that you choose 3 as the threshold for the weight mask. I am curious how you picked this value.
Thanks a lot!
Feng
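
For context, the threshold is applied to log α, and α corresponds to an effective Bernoulli dropout rate via α = p / (1 − p). The snippet below is our own illustration (not code from the repo) showing that log α = 3 means a weight is dropped with probability of roughly 0.95, i.e. it carries almost no signal:

import numpy as np

log_alpha = 3.0                 # the repo's default threshold
alpha = np.exp(log_alpha)       # ~20.1
p = alpha / (1.0 + alpha)       # equivalent binary dropout rate
print(round(p, 3))              # ~0.953 -- such weights are pruned at test time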

How to run examples on CPU?

I tried to run it on CPU:
THEANO_FLAGS='floatX=float32,device=cpu,lib.cnmem=1' ipython experiments/lenet/lenet5-ard.py
and got the error:

lib/python2.7/site-packages/lasagne/layers/dnn.py in <module>()
     40 else:
     41     raise ImportError(
---> 42         "requires GPU support -- see http://lasagne.readthedocs.org/en/"
     43         "latest/user/installation.html#gpu-support")  # pragma: no cover
     44

ImportError: requires GPU support -- see http://lasagne.readthedocs.org/en/latest/user/installation.html#gpu-support

I then read an answer saying that replacing the GPU convolution (dnn.dnn_conv) in Conv2DVarDropOutARD with a CPU one would fix the issue. I found dnn.dnn_conv in /home/tom/variational-dropout-sparsifies-dnn/nets/layers.py, but I am not familiar with Theano. Has anyone tried to change this to run on CPU?
if deterministic:
    conved = dnn.dnn_conv(img=input, kerns=T.switch(T.ge(log_alpha, thresh), 0, self.W),
                          subsample=self.stride, border_mode=border_mode,
                          conv_mode=conv_mode)
else:
    W = self.W
    if train_clip:
        W = T.switch(clip_mask, 0, W)
    conved_mu = dnn.dnn_conv(img=input, kerns=W,
                             subsample=self.stride, border_mode=border_mode,
                             conv_mode=conv_mode)
    conved_si = T.sqrt(1e-8 + dnn.dnn_conv(img=input * input, kerns=T.exp(log_alpha) * W * W,
                                           subsample=self.stride, border_mode=border_mode,
                                           conv_mode=conv_mode))
    conved = conved_mu + conved_si * self._srng.normal(conved_mu.shape, avg=0, std=1)
return conved
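
One possible CPU workaround (our untested sketch, not code from the repo) is to swap each dnn.dnn_conv call for Theano's generic theano.tensor.nnet.conv2d, which runs on CPU; stride maps to subsample, while conv_mode has no direct counterpart (conv2d flips filters by default). The GPU-only imports in nets/layers.py (lasagne.layers.dnn, theano.sandbox.cuda.dnn) would also need CPU equivalents.

from theano.tensor.nnet import conv2d

# deterministic branch, with the GPU-only call replaced:
conved = conv2d(input, T.switch(T.ge(log_alpha, thresh), 0, self.W),
                subsample=self.stride, border_mode=border_mode)
# the same substitution applies to conved_mu and conved_si in the stochastic branch.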

failed to run on CPU

Hi guys, I have a question. Is GPU mandatory to run lenet-5?

I tried to run it on CPU:
THEANO_FLAGS='floatX=float32,device=cpu,lib.cnmem=1' ipython experiments/lenet/lenet5-ard.py
and got the error:

ImportError                               Traceback (most recent call last)
variational-dropout-sparsifies-dnn/experiments/lenet/lenet5-ard.py in <module>()
      5 from nets import objectives
      6 from theano import tensor as T
----> 7 from nets import optpolicy, layers
      8 from lasagne import init, nonlinearities as nl, layers as ll
      9 from lasagne.layers.dnn import Pool2DDNNLayer as MaxPool2DLayer

variational-dropout-sparsifies-dnn/nets/layers.py in <module>()
      5 from lasagne.nonlinearities import rectify
      6 from lasagne.layers.base import Layer
----> 7 from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
      8 from theano.sandbox.cuda import dnn
      9 from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams

lib/python2.7/site-packages/lasagne/layers/dnn.py in <module>()
     40 else:
     41     raise ImportError(
---> 42         "requires GPU support -- see http://lasagne.readthedocs.org/en/"
     43         "latest/user/installation.html#gpu-support")  # pragma: no cover
     44

ImportError: requires GPU support -- see http://lasagne.readthedocs.org/en/latest/user/installation.html#gpu-support

Cross-entropy loss multiplied by training set size

I was running the Colab notebook (PyTorch) and had two questions:

  1. Why is the loss going below 0? Is it due to the regularisation term added for the dropout? I also noticed that I only got a good compression ratio once the loss went negative.
  2. Why is the cross-entropy loss multiplied by the training set size for each batch, since we usually divide the epoch loss by the dataset size? (See the sketch below this list.)

I would really appreciate it if you could help me with my queries.
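
For reference, the objective that Sparse VD optimizes is a stochastic-gradient variational bound in which the minibatch data term is rescaled so that it estimates the log-likelihood of the full training set; that rescaling is presumably why the notebook multiplies the cross-entropy by the training-set size. A sketch in our own notation (not the notebook's code):

def sgvb_loss(batch_nll_mean, kl, n_train):
    # batch_nll_mean: mean cross-entropy over the current minibatch
    # kl: KL(q(w) || p(w)) summed over all weights
    # n_train * mean-over-batch == (N / M) * sum-over-batch, an unbiased estimate
    # of the negative log-likelihood of the whole training set.
    return n_train * batch_nll_mean + kl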
