This is a quick evaluation of BatchNorm layer (BVLC/caffe#3229) performance on ImageNet-2012.
Similar ongoing evaluations:
- activations
- [architectures](https://github.com/ducha-aiki/batchnorm-benchmark/blob/master/Architectures.md)
The architecture is similar to CaffeNet, but with the following differences:
- Images are resized so that the smaller side is 128 px, for speed reasons.
- The fc6 and fc7 layers have 2048 neurons instead of 4096.
- Networks are initialized with LSUV-init (see the sketch after this list).
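For reference, here is a minimal NumPy sketch of the LSUV procedure: orthonormal initialization, then iterative rescaling until the layer's output variance on a data batch is close to 1. The `forward` callable and `batch` argument are hypothetical placeholders, not code from this repo:

```python
import numpy as np

rng = np.random.default_rng(0)

def orthonormal(shape):
    # LSUV starts from orthonormal weights (Saxe et al.).
    a = rng.standard_normal(shape)
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    return u if u.shape == shape else vt

def lsuv_scale(w, forward, batch, tol=0.05, max_iter=10):
    # Rescale w until the layer's output variance on `batch` is ~1.
    # `forward(w, batch)` returns the layer's pre-activation output
    # (hypothetical helper, not part of this benchmark's code).
    for _ in range(max_iter):
        var = forward(w, batch).var()
        if abs(var - 1.0) < tol:
            break
        w /= np.sqrt(var)
    return w
```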
Because LRN layers add nothing to accuracy, they were removed from further experiments for speed reasons.
As one can see, BN makes the difference between ReLU, ELU, and PReLU negligible. This may confirm that the main source of the VLReLU and ELU advantage is that their outputs are closer to mean=0, var=1 than standard ReLU's.
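To make the mean=0, var=1 point concrete, here is a minimal sketch of what the BatchNorm forward pass computes for a batch of activations (plain NumPy, training-mode statistics only; the actual Caffe layer additionally tracks running averages for inference):

```python
import numpy as np

def batchnorm_forward(x, eps=1e-5):
    # x: (batch, features). Each feature is normalized over the batch,
    # so the output has mean ~0 and variance ~1 regardless of which
    # nonlinearity produced x.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.randn(64, 10) * 3.0 + 5.0   # activations with arbitrary stats
y = batchnorm_forward(x)
print(y.mean(axis=0).round(3), y.var(axis=0).round(3))  # ~0 and ~1
```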
With BN, Dropout = 0.5 is too much regularization; Dropout = 0.2 is just enough :)
TBD: Explore the usefulness of the BatchNorm + EltwiseAffine combination (a sketch of what it would compute follows).
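As an assumption of what this combination would compute: the BatchNorm layer from BVLC/caffe#3229 only normalizes, so an element-wise affine layer after it would restore a learnable per-channel scale and shift (the gamma/beta of the original BN paper). A minimal sketch, with `gamma` and `beta` as illustrative names:

```python
import numpy as np

def bn_plus_affine(x, gamma, beta, eps=1e-5):
    # BatchNorm step: normalize each channel over the batch.
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    # EltwiseAffine step: learnable per-channel scale and shift,
    # y = gamma * x_hat + beta (gamma and beta are trained parameters).
    return gamma * x_hat + beta
```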
P.S. The logs are merged from many save-resume sessions, because the networks were trained at night, so plotting "anything vs. seconds" will give weird results.