Learning fair predictors with Sensitive Subspace Robustness

2019 
We consider an approach to training machine learning systems that are fair in the sense that their performance is invariant under certain perturbations to the features. For example, the performance of a resume screening system should be invariant under changes to the name of the applicant or changes to the gender pronouns. We connect this intuitive notion of algorithmic fairness to individual fairness and study how to certify that ML algorithms satisfy it. We also demonstrate the effectiveness of our approach on three machine learning tasks that are susceptible to gender and racial biases.
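To make the invariance notion concrete, here is a minimal sketch (not the paper's implementation; all names and the toy linear scorer are illustrative assumptions) of measuring how much a model's output can change under perturbations confined to a "sensitive subspace", and of a naive repair that projects that subspace out of the weights:

```python
# Sketch: worst-case output change under perturbations restricted to a
# sensitive subspace, for a toy linear scorer. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

d = 10                        # feature dimension
w = rng.normal(size=d)        # weights of a toy linear scorer

# Suppose one feature direction encodes a sensitive attribute
# (e.g., an embedding direction correlated with gender).
a = np.zeros(d)
a[0] = 1.0
A = a[:, None]                # basis of the sensitive subspace (d x k)
P = A @ np.linalg.pinv(A)     # orthogonal projector onto that subspace

def score(x):
    return w @ x

def worst_case_gap(radius=1.0):
    """Largest change in score over perturbations delta lying in the
    sensitive subspace with ||delta|| <= radius. For a linear scorer
    this maximum equals radius * ||P w||."""
    return radius * np.linalg.norm(P @ w)

x = rng.normal(size=d)
print("score:", score(x))
print("worst-case sensitive-subspace gap:", worst_case_gap())

# A predictor that is fair in this sense has P w ~ 0, i.e. weights
# orthogonal to the sensitive subspace. A crude repair projects the
# subspace out of the weights:
w_fair = w - P @ w
print("gap after projection:", np.linalg.norm(P @ w_fair))  # ~ 0
```

A certified training procedure would instead bound this worst-case gap during optimization rather than patching the weights afterward; the projection step above only illustrates the invariance being targeted.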