Sketch-Based Shape Retrieval via Best View Selection and a Cross-Domain Similarity Measure

2020 
Retrieving 3D shapes from 2D human sketches has received increasing attention in computer vision and computer graphics. Most previous methods project 3D shapes from numerous viewpoints, extract shape features from these projections, and then compute similarity with sketches. However, because the pose of a 3D shape is unknown, viewpoints are usually sampled uniformly on a sphere, so some projections capture the shape poorly. In this paper, we propose a view selection algorithm that finds the most informative viewpoints, which benefits representation learning for 3D shapes. In addition, to address the apparent discrepancy between sketches and 3D shapes, we leverage a generalized similarity model to improve the accuracy of cross-domain feature matching. We first compute line renderings of 3D shapes from a large number of viewpoints and measure the similarity between these renderings and sketches, which yields several superior projections. Second, we build a sketch network to extract features of the sketch and a shape network to extract features of the projections, and combine the features of different projections into a compact representation of each 3D shape. Finally, we construct a metric network based on the cross-domain similarity model and train it with a triplet loss; online hard sample mining is used to accelerate convergence. We evaluate our method on the SHREC'13 and SHREC'14 sketch track benchmark datasets. The experimental results demonstrate that both view selection and the cross-domain similarity model improve retrieval performance.
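To make the view selection step concrete, here is a minimal sketch, assuming NumPy and precomputed feature vectors for line renderings and sketches. The function name `select_best_views`, the cosine-similarity score, and the top-k selection are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def select_best_views(render_feats, sketch_feats, k=4):
    """Pick the k viewpoints whose line renderings look most like sketches.

    render_feats: (V, D) features of V line renderings of one 3D shape.
    sketch_feats: (S, D) features of S reference sketches.
    Returns indices of the k highest-scoring viewpoints.
    """
    # L2-normalize so the dot product becomes cosine similarity.
    r = render_feats / np.linalg.norm(render_feats, axis=1, keepdims=True)
    s = sketch_feats / np.linalg.norm(sketch_feats, axis=1, keepdims=True)
    sim = r @ s.T                 # (V, S) rendering-to-sketch similarities
    scores = sim.mean(axis=1)     # average similarity per viewpoint
    return np.argsort(scores)[::-1][:k]
```

The selected projections would then be fed to the shape network, whose per-view features are pooled into one compact shape descriptor.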
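The triplet training with online hard sample mining could look roughly like the following minimal PyTorch sketch. The batch-hard mining strategy shown here is a common variant and an assumption; the abstract does not specify the paper's exact mining scheme or distance function.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(sketch_feats, shape_feats, labels, margin=0.2):
    """Cross-domain triplet loss with online (batch-hard) mining.

    sketch_feats: (B, D) sketch-network embeddings, used as anchors.
    shape_feats:  (B, D) shape-network embeddings.
    labels:       (B,) class labels shared across both domains.
    """
    # Pairwise sketch-to-shape distances across the two domains.
    dists = torch.cdist(sketch_feats, shape_feats)          # (B, B)
    same = labels.unsqueeze(1) == labels.unsqueeze(0)       # positive mask

    # Hardest positive: the farthest same-class shape for each anchor.
    pos_dist = (dists * same.float()).max(dim=1).values
    # Hardest negative: the closest different-class shape for each anchor.
    neg_dist = dists.masked_fill(same, float("inf")).min(dim=1).values

    return F.relu(pos_dist - neg_dist + margin).mean()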