Disentangled Feature Learning Network and a Comprehensive Benchmark for Vehicle Re-Identification

2021 
Large and comprehensive datasets are crucial for the development of vehicle ReID. In this paper, we propose a large vehicle ReID dataset, called VERI-Wild 2.0, containing 825,042 images. It is captured using a city-scale surveillance camera system consisting of 274 cameras covering 200 km². The samples in our dataset exhibit rich diversity thanks to the long-time-span collection setting, unconstrained capturing viewpoints, varied illumination conditions, and diverse background environments. Furthermore, we define a challenging test set containing about 400K vehicle images that have no camera overlap with the training set. In addition, we design a new method. We observe that orientation is a crucial factor for vehicle ReID. To match vehicle pairs captured from similar orientations, the learned features are expected to capture specific, fine-grained differential information, whereas features are desired to capture orientation-invariant common information when matching samples captured from different orientations. We therefore propose a novel disentangled feature learning network (DFNet). It explicitly considers the orientation information for vehicle ReID and concurrently learns orientation-specific and orientation-common features, which can then be adaptively exploited via a hybrid ranking strategy when handling different matching pairs. Comprehensive experimental results show the effectiveness of our proposed method.
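To make the hybrid ranking idea concrete, the following is a minimal sketch (not the authors' code) of how the two feature types could be selected per query/gallery pair: orientation-specific features when the two orientations are similar, orientation-common features otherwise. All names, the angular threshold, and the cosine-distance choice are illustrative assumptions, since the abstract does not specify the exact formulation.

```python
import numpy as np

def hybrid_distance(q_specific, q_common, q_orient,
                    g_specific, g_common, g_orient,
                    same_orientation_threshold=30.0):
    """Distance for one query/gallery pair under a hybrid ranking strategy.

    q_specific, g_specific : orientation-specific feature vectors (assumed)
    q_common,   g_common   : orientation-common feature vectors (assumed)
    q_orient,   g_orient   : estimated orientations in degrees (assumed)
    """
    # Smallest angular difference between the two orientations (0..180 degrees).
    diff = abs(q_orient - g_orient) % 360.0
    diff = min(diff, 360.0 - diff)

    if diff <= same_orientation_threshold:
        # Similar viewpoints: fine-grained, orientation-specific cues are usable.
        a, b = q_specific, g_specific
    else:
        # Different viewpoints: rely on orientation-invariant common cues.
        a, b = q_common, g_common

    # Cosine distance between L2-normalized features.
    a = a / (np.linalg.norm(a) + 1e-12)
    b = b / (np.linalg.norm(b) + 1e-12)
    return 1.0 - float(np.dot(a, b))
```

In practice, such a rule would be applied to every query/gallery pair to build the distance matrix used for ranking; the hard threshold shown here could equally be replaced by a soft weighting between the two distances.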