Comments (5)
I'd suggest also opening an issue with Timur's repo if you can't figure it out.
I believe the discrepancy in your case is caused by not including the --use-test
flag in your command. Without that flag, you train the model on a slightly smaller dataset (45k training examples, with 5k held out for evaluation). By comparison, the results in both this repo and in Timur's use the real CIFAR-10 test set.
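The effect of the flag described above can be sketched as follows. This is a hypothetical illustration of the split behavior, not the repo's actual data-loading code; the function name `make_eval_split` and the assumption that the held-out 5k comes from the end of the training set are mine.

```python
# Hypothetical sketch of the --use-test behavior described above.
def make_eval_split(train_indices, test_indices, use_test, val_size=5000):
    """Return (train, eval) index lists.

    use_test=False: hold out the last `val_size` training examples
                    for evaluation (45k train / 5k "test").
    use_test=True:  train on all 50k examples and evaluate on the
                    real test set.
    """
    if use_test:
        return train_indices, test_indices
    return train_indices[:-val_size], train_indices[-val_size:]

train_idx = list(range(50_000))  # CIFAR-10 training set size
test_idx = list(range(10_000))   # CIFAR-10 test set size

tr, ev = make_eval_split(train_idx, test_idx, use_test=False)
tr2, ev2 = make_eval_split(train_idx, test_idx, use_test=True)
```

Evaluating on the 5k held-out split rather than the real 10k test set is enough to shift the reported numbers by a few tenths of a percent, which is the size of discrepancy being discussed here.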
from swa_gaussian.
For SGD we don't run a BN update because the weights we evaluate are the same ones used throughout training, so the BatchNorm statistics are accumulated as training proceeds. In SWA and SWAG, the final (averaged) weights that we evaluate are never used during training, so we have no activation statistics for them; in that case we need to do a BN update. See e.g. Section 3.2 of "Averaging Weights Leads to Wider Optima and Better Generalization".
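The BN update can be sketched like this. This is a minimal illustration of the idea, not the repo's exact `utils.bn_update`: reset each BatchNorm layer's running statistics, then make one forward pass over the training data in train mode so the running mean/variance are re-accumulated under the averaged weights.

```python
import torch

def bn_update(loader, model, device="cpu"):
    """Recompute BatchNorm running statistics for `model` with one
    forward pass over `loader`. Needed after weight averaging, since
    the averaged weights never saw the data during training."""
    for module in model.modules():
        if isinstance(module, torch.nn.modules.batchnorm._BatchNorm):
            module.reset_running_stats()
            module.momentum = None  # cumulative average over all batches
    model.train()  # BN only updates running stats in train mode
    with torch.no_grad():
        for inputs, _ in loader:
            model(inputs.to(device))
    model.eval()
```

Calling this before evaluating plain SGD weights recomputes the statistics exactly (momentum=None gives an unweighted average over batches instead of the exponential moving average kept during training), which is consistent with the small accuracy change you saw when applying it to the SGD solution.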
Let me know if you have any other questions.
Thank you, I appreciate your quick response. I have a couple of questions related to the SWA paper.
Can you help me with that, or should I post my queries to SWA?
I just checked out the code and wanted to replicate the SWA experiment as mentioned here. The SGD numbers in Table 1 of the paper are much higher than what I get when I run the code; in fact, the SWA number I get is better. Also, not knowing the details of BN update, I called that operation even for SGD before evaluation, and it improves the performance, not sure why. I am attaching logs from training the VGG16BN model on the CIFAR-10 dataset. I will be thankful for your insights.
VGG16BN Model
VGG16BN Model with BN update for SGD
What's the best way to get similar or exact numbers to those in the paper? I used the same command as mentioned in the Git repo:
python3 train.py --dir=/home/swa/swa_cifar10_VGG16BNModel/ --dataset=CIFAR10 --data_path=/home/swa_gaussian/cifar10_data/ --model=VGG16BN --epochs=300 --lr_init=0.05 --wd=5e-4 --swa --swa_start=161 --swa_lr=0.01 --save_freq=50 --eval_freq=10 > cifar10_VGG16BNModel_swaLogs
Thank you once again.
That's not the issue, actually: the "--use-test" flag is not applicable to the "swa" model training code; it applies to your code base's "run_swag.py". I tried replicating SWAG following the steps mentioned here, but I'm still unable to get the same numbers as in your paper. Can you please suggest where the issue could be? I followed the exact steps mentioned in the git repo.
Including an example for the CIFAR-100, VGG16 model (I can provide others too):
./experiments/train/run_swag.py --data_path=/data/swa_gaussian/cifar100_data/ --epochs=300 --dataset=CIFAR100 --save_freq=300 --model=VGG16 --lr_init=0.05 --wd=5e-4 --swa --swa_start=161 --swa_lr=0.01 --cov_mat --use_test --dir=./cifar100_VGGModel > cifar100_VGG16Model_swagLogs
Following this, I used your uncertainty code:
python3 ./experiments/uncertainty/uncertainty.py --data_path=/data/swa_gaussian/cifar100_data/ --dataset=CIFAR100 --model=VGG16 --use_test --cov_mat --method=SWAG --scale=0.5 --file=./cifar100_VGGModel/checkpoint-300.pt --save_path=./cifar100_VGGModel/ > uncertainty_cifar100_VGG16_SWAGLogs
Thank you for your help.
Closing due to resolution in timgaripov/swa#14.
Related Issues (20)
- Replicating results from paper with dropout
- Running on CPU
- Replicating results of transfer learning and out-of-domain image detection
- Could you share the pretrained model for ImageNet?
- Cannot find key 'n_models'
- Question about KFACLaplace for BatchNorm
- Error with CUDA 10
- Questions about the plotting of reliability diagrams
- Questions about the implementation of calculation of Low-Rank Covariance Matrix
- Loading SWAG Checkpoint and Continue SWAG Training
- Non-Reproducible / Weird Uncertainty Results
- Results CSV
- RMSE UCI Regression Results Paper
- Reproducing UCI Regression Experiments
- Sampling using SWAG
- Reliability diagrams
- Cannot understand result
- Reproducibility of Uncertainty Experiment
- 'CIFAR10' object has no attribute 'targets'