Explaining the Semantics Capturing Capability of Scene Graph Generation Models

2020 
Abstract Deep neural networks are effective for scene graph generation tasks, but they also make scene graph generation models difficult to explain. For instance, the current standard metric cannot explain how well neural network models capture the semantics of relations. In this paper, we try to understand the semantics-capturing capability of scene graph generation models using three types of metrics: conformance recall, violation recall, and non-violation recall, which measure semantic properties of relations reflected by the triples in the scene graphs generated by models. Evaluating these metrics on three representative state-of-the-art deep-neural-network-based scene graph generation models on the Visual Genome dataset shows that the proposed metrics can effectively explain how well models capture different semantic properties and can identify design problems in the models. By extending the Visual Genome dataset with different sets of additional annotations, these metrics can also explain whether the semantics-capturing capability of deep neural network models can be improved by data enhancement.
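The abstract does not define the three metrics, so the following is only a minimal sketch of one plausible reading: ground-truth triples are partitioned by whether they conform to, violate, or fall outside a set of semantic constraints on relations, and recall is computed per category. All names here (`VALID`, `classify`, `recall_by_category`) and the constraint format are hypothetical and not taken from the paper.

```python
# Hypothetical constraint set: for each (subject class, predicate), the object
# classes considered semantically valid, e.g. ("person", "riding", "horse").
VALID = {
    ("person", "riding"): {"horse", "bike", "elephant"},
    ("cup", "on"): {"table", "shelf", "counter"},
}

def classify(triple):
    """Label a (subject, predicate, object) triple against the constraints."""
    subj, pred, obj = triple
    allowed = VALID.get((subj, pred))
    if allowed is None:
        return "non-violation"   # no constraint applies to this pattern
    return "conformance" if obj in allowed else "violation"

def recall_by_category(gt_triples, predicted_triples):
    """Per-category recall: the fraction of ground-truth triples in each
    semantic category that also appear among the model's predictions."""
    predicted = set(predicted_triples)
    hits, totals = {}, {}
    for t in gt_triples:
        cat = classify(t)
        totals[cat] = totals.get(cat, 0) + 1
        hits[cat] = hits.get(cat, 0) + (t in predicted)
    return {cat: hits[cat] / totals[cat] for cat in totals}
```

Under this reading, a model with high conformance recall but low violation recall would be recovering mostly the semantically typical relations, which is the kind of distinction the standard recall metric cannot expose.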