Logo by Zhuoning Yuan
An end-to-end machine learning library for AUC optimization (AUROC, AUPRC).
Deep AUC Maximization (DAM) is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. Maximizing the AUC score has several benefits over minimizing standard losses such as cross-entropy:
- In many domains, AUC score is the default metric for evaluating and comparing different methods. Directly maximizing AUC score can potentially lead to the largest improvement in the model’s performance.
- Many real-world datasets are imbalanced. AUC is better suited to imbalanced data distributions, since maximizing AUC aims to rank the prediction score of any positive example higher than that of any negative example.
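Concretely, the ranking view of AUROC in the last bullet can be checked with a few lines of plain Python. This helper is illustrative only and not part of LibAUC:

```python
from itertools import product

def pairwise_auc(scores, labels):
    """AUROC as the fraction of (positive, negative) pairs where the
    positive example receives the higher score (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Toy imbalanced set: 2 positives, 4 negatives; one negative outranks both positives.
print(pairwise_auc([0.9, 0.4, 0.8, 0.3, 0.2, 0.95],
                   [1,   0,   1,   0,   0,   0]))  # -> 0.75
```

Note that this pairwise count depends only on the ranking of scores, not on any decision threshold, which is why AUC remains informative when one class is rare.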
- Repository: https://github.com/yzhuoning/libauc
- Website: https://libauc.org
$ pip install libauc
- 01. Creating Imbalanced Benchmark Datasets [Notebook][Script]
- 02. Optimizing AUROC loss with ResNet20 on Imbalanced CIFAR10 [Notebook][Script]
- 03. Optimizing AUPRC loss with ResNet18 on Imbalanced CIFAR10 [Notebook][Script]
- 04. Training with PyTorch Learning Rate Scheduling [Notebook][Script]
- 05. Training with Imbalanced Datasets on Distributed Setting [Coming soon]
>>> # import library
>>> from libauc.losses import AUCMLoss
>>> from libauc.optimizers import PESG
...
>>> # define loss and optimizer
>>> Loss = AUCMLoss()
>>> optimizer = PESG()
...
>>> # training
>>> model.train()
>>> for data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets)
...     optimizer.zero_grad()
...     loss.backward()
...     optimizer.step()
...
>>> # restart stage
>>> optimizer.update_regularizer()
>>> # import library
>>> from libauc.losses import APLoss_SH
>>> from libauc.optimizers import SOAP_SGD, SOAP_ADAM
...
>>> # define loss and optimizer
>>> Loss = APLoss_SH()
>>> optimizer = SOAP_SGD()
...
>>> # training
>>> model.train()
>>> for index, data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets, index)
...     optimizer.zero_grad()
...     loss.backward()
...     optimizer.step()
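For reference, AUPRC maximization targets the area under the precision-recall curve, which for a ranked list of predictions corresponds to average precision (AP). A minimal pure-Python sketch of that metric, assuming at least one positive label (`average_precision` here is an illustrative helper, not a LibAUC function):

```python
def average_precision(scores, labels):
    """AP: the mean of precision@k evaluated at the rank of each positive.
    Assumes `labels` contains at least one positive (1) entry."""
    # Rank predictions by score, highest first.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            precisions.append(hits / rank)  # precision at this positive's rank
    return sum(precisions) / len(precisions)

# Positives at ranks 1 and 3: AP = (1/1 + 2/3) / 2 ≈ 0.833
print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0]))
```

Unlike AUROC, AP weights mistakes near the top of the ranking more heavily, which is why AUPRC is often preferred when positives are extremely rare.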
Please visit our website or GitHub repository for more examples.
If you find LibAUC useful in your work, please cite the following paper for our library:
@article{yuan2020robust,
  title={Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification},
  author={Yuan, Zhuoning and Yan, Yan and Sonka, Milan and Yang, Tianbao},
  journal={arXiv preprint arXiv:2012.03173},
  year={2020}
}
If you have any questions, please contact Zhuoning Yuan [[email protected]] or Tianbao Yang [[email protected]], or open a new issue on GitHub.