Deep Deformable Patch Metric Learning for Person Re-Identification

2018 
Finding the same individual in a network of cameras must deal with significant changes in appearance caused by variations in illumination, viewing angle, and a person's pose. Re-identification requires solving two fundamental problems: 1) determining a distance measure between features extracted from different cameras that copes with illumination changes (metric learning), and 2) ensuring that matched features refer to the same body part (correspondence). Most metric learning approaches focus on finding a robust distance measure between bounding-box images, neglecting alignment. In this paper, we propose to learn appearance measures for patches that are combined using deformable models. Learning metrics for patches avoids strong dimensionality reduction and thus retains more information. Additionally, we allow patches to change their locations, directly addressing the correspondence problem. Because patches from different locations may share the same metric, our method effectively multiplies the amount of training data and allows patch metrics to be learned from smaller amounts of labeled images. Different metric learning approaches (KISSME, XQDA, and LSSL) together with different deformable models (spring constraints and one-to-one matching constraints) are investigated and compared. For describing patches, we propose to learn a deep feature representation with convolutional neural networks, thus obtaining highly effective features for re-identification. We demonstrate that our approach significantly outperforms state-of-the-art methods on multiple data sets.
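The abstract combines three ingredients: a patch-level Mahalanobis metric (e.g., KISSME), a deformable model that lets patches shift to handle misalignment, and CNN patch features. The sketch below is a rough illustration, not the authors' implementation: it learns a KISSME-style metric from difference vectors of positive and negative patch pairs and scores an image pair with a simple spring penalty on patch displacement. The function names, the 1-D patch indexing, and the penalty weight lam are simplifying assumptions; XQDA, LSSL, and the one-to-one matching constraints mentioned in the abstract are not sketched here.

```python
import numpy as np

def kissme_metric(diff_pos, diff_neg, eps=1e-6):
    """KISSME-style metric from difference vectors x - y of positive
    (same person) and negative (different person) patch pairs:
    M = inv(Sigma_pos) - inv(Sigma_neg), projected onto the PSD cone."""
    d = diff_pos.shape[1]
    cov_pos = np.cov(diff_pos, rowvar=False) + eps * np.eye(d)
    cov_neg = np.cov(diff_neg, rowvar=False) + eps * np.eye(d)
    M = np.linalg.inv(cov_pos) - np.linalg.inv(cov_neg)
    w, V = np.linalg.eigh(M)  # symmetric eigendecomposition
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T  # clip negative eigenvalues

def patch_distance(x, y, M):
    """Mahalanobis-style distance (x - y)^T M (x - y) between patch features."""
    diff = x - y
    return float(diff @ M @ diff)

def deformable_match(probe, gallery, M, lam=0.5):
    """Spring-constrained image distance (illustrative): probe patch i may
    match any gallery patch j, paying lam * |i - j| for deviating from its
    home position. 1-D indexing here; the paper works on 2-D patch grids."""
    total = 0.0
    for i, x in enumerate(probe):
        total += min(patch_distance(x, y, M) + lam * abs(i - j)
                     for j, y in enumerate(gallery))
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 16
    # Synthetic difference vectors: positive pairs differ less than negative pairs.
    diff_pos = rng.normal(0.0, 0.5, (500, d))
    diff_neg = rng.normal(0.0, 2.0, (500, d))
    M = kissme_metric(diff_pos, diff_neg)
    probe = [rng.normal(size=d) for _ in range(6)]
    gallery = [p + rng.normal(0.0, 0.3, d) for p in probe]  # mildly perturbed copies
    print(deformable_match(probe, gallery, M))
```

The PSD projection keeps the learned matrix a valid (pseudo-)metric; KISSME itself arises from a log-likelihood-ratio test between the Gaussian models of the two pairwise-difference distributions, which is why only the two covariance inverses are needed.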