
lp_bnn's Introduction

LP-BNN: official CIFAR-10 and CIFAR-100 implementation using PyTorch.
BatchEnsemble: unofficial CIFAR-10 and CIFAR-100 implementation using PyTorch.

If you use this code, please cite the following papers:

Requirements

See the requirements of the CIFAR code. In addition, our code needs a GPU with enough memory to use a large batch size. Our code was implemented and tested on a Tesla V100 GPU thanks to the Jean Zay cluster. Please install CUDA and PyTorch as explained on the PyTorch web page before running our code.
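
Before launching a training, a quick sanity check such as the following (a minimal sketch, not part of the repository) confirms that PyTorch was installed with CUDA support and that a GPU is visible:

 import torch

 # Print the PyTorch version and whether CUDA is usable; a V100-class GPU
 # is enough for the default batch size of 128.
 print(torch.__version__)
 print(torch.cuda.is_available())
 if torch.cuda.is_available():
     print(torch.cuda.get_device_name(0))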

How to TRAIN the Deep Neural Network with LP-BNN

After you have cloned the repository, you can train on either cifar10 or cifar100 by running the commands below. For better results, we advise you to perform several trainings (minimum 3), each saving to a different --dirsave_out directory.

 python main_LPBNN.py --dataset [cifar10/cifar100] --dirsave_out LPBNN_C10_T0
 python main_LPBNN.py --dataset [cifar10/cifar100] --dirsave_out LPBNN_C10_T1
 python main_LPBNN.py --dataset [cifar10/cifar100] --dirsave_out LPBNN_C10_T2

How to train the Deep Neural Network with BatchEnsemble

After you have cloned the repository, you can train on either cifar10 or cifar100 by running the commands below. For better results, we advise you to perform several trainings (minimum 3), each saving to a different --dirsave_out directory.

 python main_BatchEnsemble.py --dataset [cifar10/cifar100] --dirsave_out BE_C10_T0
 python main_BatchEnsemble.py --dataset [cifar10/cifar100] --dirsave_out BE_C10_T1
 python main_BatchEnsemble.py --dataset [cifar10/cifar100] --dirsave_out BE_C10_T2

How to evaluate the code

Here are the command lines to evaluate on CIFAR-10. For CIFAR-100, please adapt the commands accordingly.

 python  evaluate_uncertainty.py --algo 'BE' --dataset cifar10 --dirsave_out './checkpoint/cifar10/BE_C10_T'
 python  evaluate_uncertainty.py --algo 'LPBNN' --dataset cifar10 --dirsave_out './checkpoint/cifar10/LPBNN_C10_T'
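
For reference, here is a minimal sketch of how such an evaluation typically aggregates predictions: the softmax outputs of the individually trained networks are averaged before computing accuracy and the uncertainty metrics. This is a generic illustration with hypothetical model and data-loader objects, not the code of evaluate_uncertainty.py:

 import torch
 import torch.nn.functional as F

 @torch.no_grad()
 def ensemble_accuracy(models, loader, device="cuda"):
     """Average the softmax outputs of several trained models (hypothetical helper)."""
     correct, total = 0, 0
     for images, labels in loader:
         images, labels = images.to(device), labels.to(device)
         # Mean of the member probabilities, then the usual argmax prediction.
         probs = torch.stack([F.softmax(m(images), dim=1) for m in models]).mean(0)
         correct += (probs.argmax(1) == labels).sum().item()
         total += labels.size(0)
     return 100.0 * correct / total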

Implementation Details

| Hyper-parameter | CIFAR-10 | CIFAR-100 |
|---|---|---|
| Ensemble size J | 4 | 4 |
| Initial learning rate | 0.1 | 0.1 |
| Batch size | 128 | 128 |
| LR decay ratio | 0.1 | 0.1 |
| LR decay epochs | 80, 160, 200 | 80, 160, 200 |
| Cutout | True | True |
| SyncEnsemble BN | False | False |
| Size of the latent space | 32 | 32 |
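
To make the schedule concrete, here is a minimal sketch of how these hyper-parameters map onto a standard PyTorch optimizer and scheduler; the momentum, weight decay, total epoch count, and stand-in model are assumptions for illustration, not values taken from this repository:

 import torch

 model = torch.nn.Linear(3 * 32 * 32, 10)   # hypothetical model standing in for the backbone

 # Initial learning rate 0.1; momentum and weight decay are assumptions, not listed above.
 optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

 # LR decay ratio 0.1 applied at epochs 80, 160 and 200.
 scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 160, 200], gamma=0.1)

 for epoch in range(250):                   # total number of epochs is hypothetical
     # ... one training epoch with batch size 128 ...
     scheduler.step()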

CIFAR-10 Results


Below are the test set results for training on the CIFAR-10 dataset.

Accuracy is the average of 3 runs

| Network | Accuracy (%) | AUC | AUPR | FPR-95-TPR | ECE (%) | cA (%) | cE (%) |
|---|---|---|---|---|---|---|---|
| BatchEnsemble | 96.48 | 0.9540 | 0.9731 | 0.132 | 0.0167 | 47.44 | 0.2909 |
| LP-BNN | 94.76 | 0.9670 | 0.9812 | 0.104 | 0.0148 | 69.92 | 0.2421 |

cA and cE denote the accuracy and the expected calibration error measured on the corrupted dataset (CIFAR-10-C).
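
For clarity on the calibration metric, here is a minimal sketch of the Expected Calibration Error (ECE) computed with equal-width confidence bins; the number of bins is an assumption, not a value taken from this repository:

 import numpy as np

 def expected_calibration_error(probs, labels, n_bins=15):
     """probs: (N, C) softmax outputs, labels: (N,) ground-truth classes."""
     confidences = probs.max(axis=1)
     predictions = probs.argmax(axis=1)
     accuracies = (predictions == labels).astype(float)
     ece = 0.0
     bins = np.linspace(0.0, 1.0, n_bins + 1)
     for lo, hi in zip(bins[:-1], bins[1:]):
         mask = (confidences > lo) & (confidences <= hi)
         if mask.any():
             # Weighted gap between average confidence and accuracy in this bin.
             ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
     return ece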

If you are interested in the corrupted accuracy (cA) and the corrupted expected calibration error (cE), please download the CIFAR-10-C dataset.
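
Once downloaded, the CIFAR-10-C files can be read as plain numpy arrays. The sketch below follows the public CIFAR-10-C release layout (one .npy file per corruption, five severities stacked in order); the directory path is hypothetical:

 import numpy as np

 # Each corruption file holds 50,000 images (5 severities x 10,000 images) stored as
 # uint8 arrays of shape (50000, 32, 32, 3); labels.npy is shared by all corruptions.
 data_dir = "./data/CIFAR-10-C"            # hypothetical location
 images = np.load(f"{data_dir}/gaussian_noise.npy")
 labels = np.load(f"{data_dir}/labels.npy")

 severity = 3                              # severities are stacked in order 1..5
 start, end = (severity - 1) * 10000, severity * 10000
 x, y = images[start:end], labels[start:end]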

lp_bnn's People

Contributors

giannifranchi


lp_bnn's Issues

DeepLabv3+ Release?

Hey! I really loved reading your paper, and was interested in using your approach to assist in some segmentation tasks - do you have a timeframe on when you might release the code for your DeepLabv3+ implementation? Cheers!

LPBNN_layers bug?

Hello, thanks for providing your code.

I have a question. LPBNN_layers line 75 is:

 embedded_mean, embedded_logvar=self.encoder_fcmean(embedded),self.encoder_fcmean(embedded)

Should this not be:

 embedded_mean, embedded_logvar=self.encoder_fcmean(embedded),self.encoder_fcvar(embedded)

As it is, it appears to be enforcing that the mean and logvar are the same. This bug is present in every layer defined in LPBNN_layers.
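
For context, the usual VAE-style pattern the issue refers to uses two separate linear heads for the mean and the log-variance before the reparameterization trick. The sketch below is a generic illustration with hypothetical layer sizes, not the repository's LPBNN_layers code:

 import torch
 import torch.nn as nn

 class LatentEncoder(nn.Module):
     # Hypothetical sizes: weight vectors of dim 256 mapped to a latent space of dim 32.
     def __init__(self, in_dim=256, hidden_dim=64, latent_dim=32):
         super().__init__()
         self.encoder = nn.Linear(in_dim, hidden_dim)
         self.encoder_fcmean = nn.Linear(hidden_dim, latent_dim)   # mean head
         self.encoder_fcvar = nn.Linear(hidden_dim, latent_dim)    # log-variance head

     def forward(self, w):
         embedded = torch.relu(self.encoder(w))
         # Separate heads, as suggested in the issue: mean and logvar must not share weights.
         embedded_mean = self.encoder_fcmean(embedded)
         embedded_logvar = self.encoder_fcvar(embedded)
         std = torch.exp(0.5 * embedded_logvar)
         z = embedded_mean + std * torch.randn_like(std)           # reparameterization trick
         return z, embedded_mean, embedded_logvar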

Additionally, I was wondering why VAE embedding is applied only for alpha and not for gamma. Is there a benefit to only defining alpha as Bayesian? Was this the case for the results reported in your paper?

Batch ensemble

Thanks for the great work!
May I ask how you re-implemented the code for BatchEnsemble? Since the original official implementation is in TensorFlow, are there any PyTorch resources to refer to?
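
For readers looking for a starting point, here is a minimal sketch of the BatchEnsemble idea from the original paper: a single shared weight matrix modulated by per-member rank-1 fast weights, with the mini-batch split evenly across the ensemble members. It is a generic illustration (names and initialisation are assumptions), not the code of main_BatchEnsemble.py:

 import torch
 import torch.nn as nn

 class BatchEnsembleLinear(nn.Module):
     """Shared weight W plus per-member rank-1 fast weights alpha (input side) and gamma (output side)."""
     def __init__(self, in_features, out_features, ensemble_size=4):
         super().__init__()
         self.ensemble_size = ensemble_size
         self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
         self.alpha = nn.Parameter(torch.ones(ensemble_size, in_features))
         self.gamma = nn.Parameter(torch.ones(ensemble_size, out_features))
         self.bias = nn.Parameter(torch.zeros(ensemble_size, out_features))

     def forward(self, x):
         # x: (batch, in_features); the batch must be divisible by ensemble_size,
         # with examples ordered by ensemble member.
         b = x.size(0) // self.ensemble_size
         alpha = self.alpha.repeat_interleave(b, dim=0)
         gamma = self.gamma.repeat_interleave(b, dim=0)
         bias = self.bias.repeat_interleave(b, dim=0)
         # Equivalent to applying W * (gamma_i alpha_i^T) to each member's slice of the batch.
         return (x * alpha) @ self.weight.t() * gamma + bias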
