Fair Class Balancing: Enhancing Model Fairness without Observing Sensitive Attributes

2020 
Machine learning models are at the foundation of modern society. Accounts of unfair models penalizing subgroups of a population have been reported in domains such as law enforcement and job screening. Unfairness can stem from biases in the training data, as well as from class imbalance, i.e., when a sensitive group's data is not sufficiently represented. Under such settings, balancing techniques are commonly used to achieve better prediction performance, but their effects on model fairness are largely unknown. In this paper, we first illustrate the extent to which common balancing techniques exacerbate unfairness in real-world data. Then, we propose a new method, called fair class balancing, that makes it possible to enhance model fairness without using any information about sensitive attributes. We show that our method can achieve accurate prediction performance while concurrently improving fairness.
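The balancing step the paper critiques can be illustrated with a conventional oversampling pipeline. The snippet below is a minimal sketch, assuming the scikit-learn and imbalanced-learn libraries and synthetic data; it shows standard class balancing (SMOTE oversampling) applied before training, i.e., the kind of procedure whose fairness side effects the paper examines. It is not the proposed fair class balancing method, which the abstract does not specify.

```python
# Minimal sketch of conventional class balancing (not the paper's fair class
# balancing method): oversample the minority class with SMOTE, then train.
# Assumes scikit-learn and imbalanced-learn; the data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Imbalanced binary classification data (roughly a 90% / 10% class split).
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Rebalance the training set only; the test set keeps its natural skew.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print("Accuracy on imbalanced test data:", clf.score(X_test, y_test))
```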