Robust Visual Relationship Detection towards Sparse Images in Internet-of-Things

2021 
Visual relationships capture essential information about images, such as the interactions between pairs of objects. Such relationships have become a prominent component of the knowledge within sparse image data collected by multimedia sensing devices, and both latent information and potential privacy can be embedded in them. However, owing to the high combinatorial complexity of modeling all potential relation triplets, previous studies on visual relationship detection have used mixed visual and semantic features separately for each object, which is inadequate for sparse data in IoT systems. This paper therefore proposes a new deep learning model for visual relationship detection, a novel attempt to integrate computational intelligence (CI) methods with the IoT. The model incorporates a knowledge graph and adopts the features of both entities and the connections among them as extra information. It maps visual features extracted from images into a knowledge-based embedding vector space, so as to benefit from information in the background knowledge domain and alleviate the impact of data sparsity. To the best of our knowledge, this is the first time that visual features have been projected into and combined with prior knowledge for visual relationship detection. Moreover, the complexity of the network is reduced by avoiding the learning of redundant features from images. Finally, we demonstrate the superiority of our model through evaluation on two datasets.
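The core idea of mapping visual features into a knowledge-based embedding space can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the linear projection matrices, and the TransE-style triplet score (lower ||s + r - o|| means a more plausible relation) are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4096-d visual features (e.g. from a CNN backbone)
# and a 100-d knowledge-graph embedding space. Both are illustrative choices.
VIS_DIM, KG_DIM = 4096, 100

# Projection matrices from visual-feature space into the KG embedding space.
# In a trained model these would be learned; here they are random placeholders.
W_subj = rng.standard_normal((KG_DIM, VIS_DIM)) * 0.01
W_obj = rng.standard_normal((KG_DIM, VIS_DIM)) * 0.01


def project(visual_feat, W):
    """Map a visual feature vector into the KG embedding space."""
    return W @ visual_feat


def triplet_score(subj_emb, rel_emb, obj_emb):
    """TransE-style plausibility: lower ||s + r - o|| = more plausible triplet."""
    return np.linalg.norm(subj_emb + rel_emb - obj_emb)


# Toy inputs: visual features for a subject/object pair plus one relation
# embedding taken from the (assumed pretrained) knowledge-graph embeddings.
subj_vis = rng.standard_normal(VIS_DIM)
obj_vis = rng.standard_normal(VIS_DIM)
rel_emb = rng.standard_normal(KG_DIM)

s = project(subj_vis, W_subj)
o = project(obj_vis, W_obj)
score = triplet_score(s, rel_emb, o)
print(f"projected subject: {s.shape}, triplet score: {score:.3f}")
```

Once the visual features live in the same space as the knowledge-graph embeddings, candidate relations can be ranked by this score, letting background knowledge compensate for relation triplets that are rare or absent in the sparse image data.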