FERMI: Fair Empirical Risk Minimization via Exponential Rényi Mutual Information

2021 
Several notions of fairness, such as demographic parity and equal opportunity, are defined based on statistical independence between a predicted target and a sensitive attribute. In machine learning applications, however, the data distribution is unknown to the learner, so statistical independence cannot be verified directly and the learner can only resort to empirical evaluation of the degree of fairness violation. Many notions of fairness violation are defined as a divergence or distance between the joint distribution of the target and sensitive attributes and the Kronecker product of their marginals, e.g., Rényi correlation, mutual information, and the L∞ distance. In this paper, we propose another such notion, called Exponential Rényi Mutual Information (ERMI), between the sensitive attributes and the predicted target. We show that ERMI is a strong notion of fairness violation in the sense that it provides an upper bound on all of the aforementioned notions. We also propose FERMI, a Fair Empirical Risk Minimization framework that uses ERMI as a regularizer. Whereas existing in-processing fairness algorithms are deterministic, we provide a stochastic optimization method for solving FERMI that scales to large problems, as well as a batch (deterministic) method; both algorithms come with theoretical convergence guarantees. Our experiments show that FERMI achieves the most favorable tradeoffs between fairness violation and accuracy on test data across different problem setups, even when fairness violation is measured by notions other than ERMI.
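For concreteness, ERMI between a discrete prediction $\hat{Y}$ and a sensitive attribute $S$ can be written as follows. This display is a reconstruction from the description above, with $p_{\hat{Y},S}$, $p_{\hat{Y}}$, and $p_S$ denoting the joint and marginal distributions; it is not quoted from the paper:

$$
D_{\mathrm{ERMI}}(\hat{Y}; S) \;=\; \mathbb{E}\!\left[\frac{p_{\hat{Y},S}(\hat{Y},S)}{p_{\hat{Y}}(\hat{Y})\, p_{S}(S)}\right] - 1 \;=\; \sum_{j,r} \frac{p_{\hat{Y},S}(j,r)^{2}}{p_{\hat{Y}}(j)\, p_{S}(r)} - 1.
$$

Under this reading, ERMI is the chi-squared divergence between the joint distribution of $(\hat{Y}, S)$ and the product of the marginals, so it equals zero exactly when $\hat{Y}$ and $S$ are statistically independent, consistent with the independence-based fairness definitions above. The regularized objective then takes the form $\min_\theta \hat{\mathcal{L}}(\theta) + \lambda\, \hat{D}_{\mathrm{ERMI}}(\hat{Y}_\theta; S)$ for a tradeoff parameter $\lambda$.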
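To make the regularizer concrete, below is a minimal sketch of a plug-in ERMI estimator for hard labels. The function name ermi_plugin, the smoothing constant eps, and the demo data are our own illustrative choices; the paper's actual estimator, and in particular its stochastic variant for large-scale training, may differ.

```python
import numpy as np

def ermi_plugin(y_pred, s, eps=1e-12):
    """Plug-in estimate of ERMI between discrete predictions and a
    sensitive attribute: sum_{j,r} p(j,r)^2 / (p(j) p(r)) - 1, i.e.
    the chi-squared divergence between the empirical joint
    distribution and the product of its marginals (0 iff independent)."""
    y_vals, s_vals = np.unique(y_pred), np.unique(s)
    # Empirical joint distribution over (prediction, sensitive attribute).
    joint = np.array([[np.mean((y_pred == j) & (s == r)) for r in s_vals]
                      for j in y_vals])
    p_y = joint.sum(axis=1)  # marginal of the predictions
    p_s = joint.sum(axis=0)  # marginal of the sensitive attribute
    return float(np.sum(joint ** 2 / (np.outer(p_y, p_s) + eps)) - 1.0)

# Sanity check: ERMI is near 0 for predictions independent of the
# attribute and grows with statistical dependence.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=10_000)
y_indep = rng.integers(0, 2, size=10_000)               # independent of s
y_corr = np.where(rng.random(10_000) < 0.9, s, 1 - s)   # mostly follows s
print(ermi_plugin(y_indep, s))  # ~ 0
print(ermi_plugin(y_corr, s))   # clearly positive (~0.64 in expectation)
```

Note that this hard-label version is only for intuition; for gradient-based training one would presumably work with the model's output probabilities so that the regularizer is differentiable in the model parameters, but that detail is an assumption on our part.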