Visualizing the Effect of Semantic Classes in the Attribution of Scene Recognition Models.

2020 
The performance of Convolutional Neural Networks for image classification has increased vastly and steadily in recent years. This success goes hand in hand with the need to explain and understand their decisions: opening the black box. The problem of attribution specifically deals with the characterization of the response of Convolutional Neural Networks by identifying the input features responsible for the model's decision. Among all attribution methods, perturbation-based methods are an important family based on measuring the effect of perturbations applied to the input image on the model's output. In this paper, we discuss the limitations of existing approaches and propose a novel perturbation-based attribution method guided by semantic segmentation. Our method inhibits specific image areas according to their assigned semantic label. Thereby, perturbations are linked to a semantic meaning and a complete attribution map is obtained for all image pixels. In addition, we propose a particularization of the proposed method to the scene recognition task which, unlike image classification, requires multi-focus attribution models. The proposed semantic-guided attribution method enables us to delve deeper into scene recognition interpretability by obtaining, for each scene class, the sets of relevant, irrelevant and distracting semantic labels. Experimental results suggest that the method can boost research by increasing the understanding of Convolutional Neural Networks while uncovering dataset biases which may have been inadvertently introduced during the harvesting and annotation processes. All the code, data and supplementary results are available at http://www-vpu.eps.uam.es/publications/SemanticEffectSceneRecognition/
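The core idea described in the abstract, inhibiting image regions by their semantic label and measuring the resulting change in the model's output, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the zero-fill inhibition, and the toy `model` callable are all assumptions made for the example.

```python
import numpy as np

def semantic_perturbation_attribution(image, seg_map, model, target_class,
                                      fill_value=0.0):
    """Hypothetical sketch of semantic-guided perturbation attribution.

    For each semantic label in `seg_map`, inhibit the pixels carrying that
    label (here: overwrite with `fill_value`) and record the drop in the
    target-class score. `model` is assumed to map an image array to a
    vector of class scores.
    """
    baseline = model(image)[target_class]
    scores = {}
    for label in np.unique(seg_map):
        perturbed = image.copy()
        perturbed[seg_map == label] = fill_value  # inhibit one semantic class
        # A large drop marks the label as relevant to the decision;
        # a negative drop marks it as distracting.
        scores[int(label)] = float(baseline - model(perturbed)[target_class])
    return scores

# Toy demonstration with a stand-in "model" that scores by mean intensity.
img = np.ones((4, 4))
seg = np.zeros((4, 4), dtype=int)
seg[:, 2:] = 1  # two semantic regions of equal size
toy_model = lambda im: np.array([im.mean(), 1.0 - im.mean()])
relevance = semantic_perturbation_attribution(img, seg, toy_model, target_class=0)
```

Because every pixel belongs to exactly one semantic label, the per-label scores jointly cover the whole image, which is what yields the complete attribution map mentioned in the abstract.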