This repository provides a Python implementation of our AAAI 2020 paper, Fairness for Robust Log Loss Classification.
Developing classification methods with high accuracy that also avoid unfair treatment of different groups has become increasingly important for data-driven decision making in social applications. Many existing methods enforce fairness constraints on a selected classifier (e.g., logistic regression) by directly forming constrained optimizations. We instead re-derive a new classifier from the first principles of distributional robustness that incorporates fairness criteria into its worst-case logarithmic loss minimization. This construction takes the form of a minimax game and produces a parametric exponential family conditional distribution that resembles truncated logistic regression. We present the theoretical benefits of our approach in terms of its convexity and asymptotic convergence. We then demonstrate the practical advantages of our approach on three benchmark fairness datasets.
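As a quick illustration of the logarithmic loss that the paper's minimax formulation targets (this snippet is illustrative and not part of the repository's code):

```python
import math

def log_loss(y_true, p_pred, eps=1e-12):
    """Average logarithmic loss for binary labels and predicted P(y=1|x)."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

print(round(log_loss([1, 0, 1], [0.9, 0.2, 0.8]), 4))  # → 0.1839
```

The paper's classifier minimizes the worst case of this loss subject to fairness constraints, rather than adding constraints to a fixed model such as logistic regression.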
The provided versions of the Adult and COMPAS datasets are taken from the IBM AIF360 Toolkit.
test_fairlogloss.py
trains and tests a fair classifier given one of the following fairness criteria:
- Demographic Parity (DP)
- Equalized Odds (EqOdd)
- Equalized Opportunity (EqOpp)
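The three criteria above can be measured as gaps between two protected groups. A minimal sketch of these metrics (the function name and formulation here are illustrative assumptions, not the repository's API):

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Absolute between-group gaps for binary predictions.

    group is a 0/1 protected-attribute indicator; this is a sketch,
    not the repository's evaluation code.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    a, b = (group == 0), (group == 1)

    # Demographic parity: positive-prediction rate gap
    dp = abs(y_pred[a].mean() - y_pred[b].mean())

    # Equalized opportunity: true-positive-rate gap
    tpr = lambda m: y_pred[m & (y_true == 1)].mean()
    eqopp = abs(tpr(a) - tpr(b))

    # Equalized odds: worst of TPR gap and FPR gap
    fpr = lambda m: y_pred[m & (y_true == 0)].mean()
    eqodd = max(eqopp, abs(fpr(a) - fpr(b)))

    return {"dp": dp, "eqopp": eqopp, "eqodd": eqodd}

print(fairness_gaps([1, 1, 0, 0, 1, 0], [1, 0, 1, 0, 1, 0], [0, 0, 0, 1, 1, 1]))
```

A perfectly fair classifier under a given criterion would drive the corresponding gap to zero.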
To run the experiment for a given dataset and fairness criterion, run:
$ python test_fairlogloss.py [adult|compas] [dp|eqodd|eqopp]
- Ashkan Rezaei, Rizal Fathony, Omid Memarrast, and Brian Ziebart. "Fairness for Robust Log Loss Classification." AAAI-20. [pdf]