Ethics of AI: Do the Face Detection Models Act with Prejudice?

2021 
This work presents a study of an ethical issue in Artificial Intelligence: the presence of racial bias in face detection. Our analyses were performed on a real-world system designed to detect fraud in the public transportation of Salvador (Brazil). The experiments were conducted in three steps. First, we individually analyzed a sample of images and labeled each one with the user's gender and race. Then, we applied well-established detectors, based on different Convolutional Neural Network architectures, to find faces in the labeled images. Finally, we used statistical tests to assess whether the error rates are related to those labels. Our results reveal significant biases: error rates were higher for images of Black people, and errors were more likely for both Black men and Black women. Based on these conclusions, we highlight the risk of deploying computational systems that may harm minority groups that have historically been neglected.
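The three-step pipeline the abstract describes can be sketched briefly. The snippet below is a minimal illustration, not the authors' code: it assumes MTCNN (from facenet-pytorch) as a stand-in for the CNN-based detectors evaluated in the paper, a hypothetical `labels.csv` file holding image paths plus race and gender labels, and a chi-squared test of independence as one plausible choice for the unspecified statistical tests.

```python
# Sketch of the evaluation pipeline: label -> detect -> test association.
# MTCNN and labels.csv are assumptions; the paper does not name its detectors
# or its statistical tests beyond "CNN architectures" and "statistical tests".
import pandas as pd
from PIL import Image
from facenet_pytorch import MTCNN
from scipy.stats import chi2_contingency

detector = MTCNN(keep_all=True)  # any CNN-based face detector could be swapped in

df = pd.read_csv("labels.csv")  # hypothetical file: columns path, race, gender

def detection_failed(path: str) -> bool:
    """Return True when the detector finds no face in the image."""
    boxes, _ = detector.detect(Image.open(path).convert("RGB"))
    return boxes is None

df["error"] = df["path"].apply(detection_failed)

# Contingency table of detection errors by race label, then a chi-squared
# test of independence to check whether error rates differ across groups.
table = pd.crosstab(df["race"], df["error"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
```

A small p-value here would indicate that detection failure is not independent of the race label, which is the kind of relation between error rates and labels the study tests for; the same test could be repeated for gender or for combined race-and-gender groups.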