Group Sampling for Scale Invariant Face Detection

2022 
Detectors based on deep learning tend to detect multi-scale objects on a single input image for efficiency. Recent works, such as FPN and SSD, generally use feature maps from multiple layers with different spatial resolutions to detect objects at different scales, e.g., high-resolution feature maps for small objects. However, we find that objects at all scales can also be well detected with features from a single layer of the network. In this paper, we carefully examine the factors affecting detection performance across a large range of scales, and conclude that the balance of training samples, including both positive and negative ones, at different scales is the key. We propose a group sampling method which divides the anchors into several groups according to their scale and ensures that the number of samples for each group is the same during training. Our approach, using features from only a single layer of FPN, is able to advance the state of the art. Comprehensive analysis and extensive experiments have been conducted to show the effectiveness of the proposed method. Moreover, we show that our approach is favorably applicable to other tasks, such as object detection on the COCO dataset, and to other detection pipelines, such as YOLOv3, SSD, and R-FCN. Our approach, evaluated on face detection benchmarks including the FDDB and WIDER FACE datasets, achieves state-of-the-art results without bells and whistles.
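The abstract describes group sampling as dividing anchors into scale groups and keeping an equal number of training samples per group. The snippet below is a minimal NumPy sketch of that idea, not the paper's implementation: the function name `group_sample`, the scale edges, and the per-group budget are illustrative assumptions, and the paper's actual positive/negative balancing details are not reproduced here.

```python
import numpy as np

def group_sample(anchor_scales, labels, scale_edges=(0, 32, 64, 128, 256, 1e9),
                 samples_per_group=64, rng=None):
    """Keep an equal number of anchors (positives and negatives) per scale group.

    anchor_scales : (N,) anchor size, e.g. sqrt(width * height).
    labels        : (N,) 1 = positive, 0 = negative, -1 = ignore.
    Returns indices of the anchors retained for loss computation.
    """
    rng = rng or np.random.default_rng()
    valid = np.flatnonzero(labels >= 0)
    # Assign each valid anchor to a scale group (hypothetical bin edges).
    group_id = np.digitize(anchor_scales[valid], scale_edges[1:-1])
    kept = []
    for g in range(len(scale_edges) - 1):
        idx = valid[group_id == g]
        if idx.size == 0:
            continue
        # Draw at most the same budget from every group so no scale dominates.
        take = min(samples_per_group, idx.size)
        kept.append(rng.choice(idx, size=take, replace=False))
    return np.concatenate(kept) if kept else np.empty(0, dtype=np.int64)

# Toy usage: a dense anchor grid where negatives vastly outnumber positives.
scales = np.random.uniform(8, 512, size=10_000)
labels = (np.random.rand(10_000) < 0.02).astype(int)
keep = group_sample(scales, labels)
print(len(keep), "anchors sampled across scale groups")
```

In a real training loop, the returned indices would select which anchors contribute to the classification and regression losses for that iteration.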