Exemplar-Based, Semantic Guided Zero-Shot Visual Recognition
2022
Zero-shot recognition has been a hot topic in recent years. Since no direct supervision is available, researchers use semantic information as a bridge instead. However, most zero-shot recognition methods model images jointly at the class level without considering the distinctive character of each image. To address this problem, we propose a novel exemplar-based, semantic-guided zero-shot recognition method (EBSG) that exploits both the visual and semantic information of each image. We train a visual sub-model to separate each image from images of other classes, and a semantic sub-model to separate it from images described by different semantics. The outputs of the two sub-models are concatenated to represent each image. An image classification model is then learned by measuring the visual similarity and semantic consistency of both source and target images. Experiments on four widely used zero-shot recognition datasets demonstrate the effectiveness of the proposed EBSG method.
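The abstract's core idea, using semantic information as a bridge so that unseen classes can be recognized without direct supervision, can be illustrated with a minimal sketch. This is not the EBSG method itself; it is a simplified zero-shot baseline under assumed ingredients: a random stand-in for a learned visual-to-attribute projection `W`, hypothetical attribute prototypes for five unseen classes, and cosine similarity as the scoring rule.

```python
import numpy as np

def l2n(x):
    """L2-normalize along the last axis (guard against zero norm)."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

rng = np.random.default_rng(0)

n_attr, n_vis = 85, 2048                     # illustrative attribute / visual dims
W = rng.standard_normal((n_vis, n_attr)) * 0.01  # stand-in for a learned projection

# Hypothetical semantic prototypes (attribute vectors) for 5 unseen classes.
unseen_attrs = l2n(rng.standard_normal((5, n_attr)))

def zero_shot_predict(visual_feat):
    """Project a visual feature into attribute space and return the index
    of the most similar unseen-class prototype (cosine similarity)."""
    z = l2n(visual_feat @ W)
    return int(np.argmax(unseen_attrs @ z))

# Sanity check: a pseudo-visual feature synthesized from class 2's
# prototype should map back to class 2.
x = unseen_attrs[2] @ W.T
print(zero_shot_predict(x))
```

EBSG goes further than this sketch by training per-image (exemplar-level) visual and semantic sub-models and concatenating their outputs, rather than relying on a single class-level projection.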