PSNet: Perspective-sensitive convolutional network for object detection

2022 
Abstract: Multi-view object detection is challenging because varying view angles reduce intra-class similarity. The unified feature representation of traditional detectors couples an object's perspective attribute with its semantic feature, so variance in perspective causes intra-class differences. In this paper, a robust perspective-sensitive network (PSNet) is proposed to overcome this problem. The unified feature is replaced by a perspective-specific structural feature, which makes the network perspective sensitive. In essence, the network learns multiple perspective spaces; within each perspective space, the semantic feature is decoupled from the perspective attribute and is robust to perspective variance. A perspective-sensitive RoI pooling and a perspective-sensitive loss function are proposed for perspective-sensitive learning. Experiments on Pascal3D+ and SpaceNet MVOI show the effectiveness and superiority of PSNet.
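The abstract does not give implementation details, but the core idea of perspective-sensitive RoI pooling can be sketched roughly as follows: the backbone produces one feature map per perspective bin, and each RoI is pooled from the branch matching its predicted perspective, so the pooled semantic feature is separated from the view angle. All function names, the branch-selection scheme, and the use of max pooling are illustrative assumptions, not the authors' code:

```python
import numpy as np

def roi_max_pool(feature, box, out_size=2):
    """Plain RoI max pooling on one feature map.
    feature: (C, H, W) array; box: (x0, y0, x1, y1) in feature-map coords."""
    C = feature.shape[0]
    x0, y0, x1, y1 = box
    xs = np.linspace(x0, x1, out_size + 1).astype(int)
    ys = np.linspace(y0, y1, out_size + 1).astype(int)
    pooled = np.zeros((C, out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # Guarantee at least a 1x1 patch even for tiny boxes.
            patch = feature[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                               xs[j]:max(xs[j + 1], xs[j] + 1)]
            pooled[:, i, j] = patch.max(axis=(1, 2))
    return pooled

def perspective_sensitive_pool(branch_features, rois, perspective_ids, out_size=2):
    """Hypothetical perspective-sensitive pooling: each RoI is pooled from the
    perspective-specific branch chosen by its predicted perspective bin, so the
    pooled semantic feature is decoupled from the view angle.
    branch_features: list of K per-perspective feature maps, each (C, H, W)."""
    return [roi_max_pool(branch_features[k], box, out_size)
            for box, k in zip(rois, perspective_ids)]

# Toy usage: 4 perspective bins, 2 RoIs routed to bins 1 and 3.
rng = np.random.default_rng(0)
branches = [rng.random((3, 8, 8)) for _ in range(4)]
rois = [(0, 0, 4, 4), (2, 2, 8, 8)]
pooled = perspective_sensitive_pool(branches, rois, [1, 3])
```

In a real detector the perspective bin would come from a learned perspective-classification head trained jointly with detection, rather than being supplied by hand as here.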