See the examples folder for notebooks that demonstrate some of this library's functionality.
See the supplementary material for the ECCV 2020 paper.
- Highly configurable:
  - YAML files for organized configuration.
  - A powerful command-line syntax that lets you merge, override, swap, apply, and delete config options (see the first sketch after this list).
- Customizable:
  - Benchmark your own losses, miners, datasets, etc. with a simple function call (see the Python sketch after this list).
- Easy hyperparameter optimization:
  - Append the `~BAYESIAN~` flag to the names of hyperparameters you want to optimize (see the second sketch after this list).
- Extensive logging:
  - View experiment data in Tensorboard, CSV, and SQLite formats.
- Reproducible:
  - Config files are saved with each experiment, so experiments are easy to reproduce.
- Trackable changes:
  - Keep track of changes to an experiment's configuration.
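To make the configuration workflow concrete, here is a hedged sketch of a YAML option file and a command-line override. The file path, the option names (`loss_funcs`, `metric_loss`), and the `run.py` entry point are illustrative assumptions, not the library's exact schema; see the documentation for the real syntax.

```yaml
# Hypothetical config file (path and option names are assumptions):
# configs/loss_funcs/default.yaml
metric_loss:
  ContrastiveLoss:
    pos_margin: 0
    neg_margin: 1
```

```bash
# Swap in a different loss from the command line with the ~OVERRIDE~ flag
# described above. run.py and loss_funcs are assumed names for illustration.
python run.py --experiment_name test1 \
  --loss_funcs~OVERRIDE~ {metric_loss: {ArcFaceLoss: {margin: 30, scale: 64}}}
```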
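Hyperparameter optimization then reuses the same syntax. Here is a sketch, again assuming the option names above, where appending `~BAYESIAN~` to a hyperparameter name and supplying [low, high] bounds marks it for optimization:

```bash
# Mark margin and scale for bayesian optimization by appending ~BAYESIAN~
# to their names and giving search bounds. Only the ~BAYESIAN~ flag itself
# is from the feature list above; the surrounding names are assumptions.
python run.py --experiment_name test2 \
  --loss_funcs~OVERRIDE~ {metric_loss: {ArcFaceLoss: {margin~BAYESIAN~: [0, 90], scale~BAYESIAN~: [1, 256]}}}
```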
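Finally, a minimal Python sketch of benchmarking a custom loss. The loss subclasses `BaseMetricLossFunction` from pytorch-metric-learning (this library's companion project); the registration call at the end is an assumption standing in for the "simple function call" mentioned above, so check the documentation for the real entry point.

```python
import torch
from pytorch_metric_learning.losses import BaseMetricLossFunction

class BarebonesLoss(BaseMetricLossFunction):
    """A trivial custom loss: the mean absolute value of the embeddings.
    Note: the compute_loss signature and return format may differ across
    pytorch-metric-learning versions."""
    def compute_loss(self, embeddings, labels, indices_tuple):
        loss = torch.mean(torch.abs(embeddings))
        return {"loss": {"losses": loss, "indices": None,
                         "reduction_type": "already_reduced"}}

# Hypothetical registration with the benchmarker (name is an assumption):
# runner.register("loss", BarebonesLoss)
```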
Install with pip:

```bash
pip install powerful-benchmarker
```
If you'd like to cite the benchmark results, please cite this paper:

```latex
@misc{musgrave2020metric,
    title={A Metric Learning Reality Check},
    author={Kevin Musgrave and Serge Belongie and Ser-Nam Lim},
    year={2020},
    eprint={2003.08505},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
If you'd like to cite the powerful-benchmarker code, you can use this bibtex:

```latex
@misc{Musgrave2019,
    author = {Musgrave, Kevin and Lim, Ser-Nam and Belongie, Serge},
    title = {Powerful Benchmarker},
    year = {2019},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/KevinMusgrave/powerful-benchmarker}},
}
```
Thank you to Ser-Nam Lim at Facebook AI, and to my research advisor, Professor Serge Belongie. This project began during my internship at Facebook AI, where I received valuable feedback from Ser-Nam and his team of computer vision and machine learning engineers and research scientists.