Mitigating Biases in Multimodal Personality Assessment

2020 
As algorithmic decision-making systems are increasingly used in high-stakes scenarios, concerns have been raised about the potential unfairness of these decisions toward certain social groups. Despite their importance, the bias and fairness of multimodal systems have not been thoroughly studied. In this work, we focus on multimodal systems designed for apparent personality assessment and hirability prediction. We use the First Impression dataset as a case study to investigate the biases in such systems. We provide detailed analyses of the biases arising from different modalities and data fusion strategies. Our analyses reveal that different modalities exhibit different patterns of bias and that the data fusion process introduces additional biases into the model. To mitigate these biases, we develop and evaluate two debiasing approaches based on data balancing and adversarial learning. Experimental results show that both approaches can reduce the biases in model outcomes without sacrificing much performance. Our debiasing strategies can be deployed in real-world multimodal systems to provide fairer outcomes.
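As a rough illustration of the adversarial-learning debiasing idea mentioned in the abstract (not the authors' exact architecture), the sketch below trains a predictor for a continuous score (e.g., hirability) while an adversary tries to recover a protected attribute from the shared representation; a gradient-reversal layer pushes the encoder to discard protected-attribute information. All module names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch of adversarial debiasing (hypothetical, not the paper's model).
# An encoder feeds two heads: a regressor for the target score and an adversary
# predicting a protected attribute. Gradient reversal makes the encoder's features
# less informative about the protected attribute.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the encoder.
        return -ctx.lambd * grad_output, None


class DebiasedPredictor(nn.Module):
    def __init__(self, in_dim=128, hid=64, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.target_head = nn.Linear(hid, 1)   # e.g., hirability / trait score
        self.adv_head = nn.Linear(hid, 2)      # e.g., binary protected attribute

    def forward(self, x):
        z = self.encoder(x)
        y_hat = self.target_head(z).squeeze(-1)
        a_hat = self.adv_head(GradReverse.apply(z, self.lambd))
        return y_hat, a_hat


# Hypothetical training step on fused multimodal features x,
# target labels y, and protected-attribute labels a (all dummy data).
model = DebiasedPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 128)
y = torch.rand(32)
a = torch.randint(0, 2, (32,))

y_hat, a_hat = model(x)
loss = nn.functional.mse_loss(y_hat, y) + nn.functional.cross_entropy(a_hat, a)
opt.zero_grad()
loss.backward()
opt.step()
```

The data-balancing approach mentioned in the abstract would instead reweight or resample training examples so that protected groups are more evenly represented; that intervention happens before or during data loading and requires no change to the model itself.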