SAT-GCN: Self-attention graph convolutional network-based 3D object detection for autonomous driving

2023 
Accurate 3D object detection from point clouds is critical for autonomous vehicles. However, point cloud data collected by LiDAR sensors are inherently sparse, especially at long distances. In addition, most existing 3D object detectors extract local features while ignoring interactions between features, producing weak semantic information that significantly limits detection performance. We propose a self-attention graph convolutional network (SAT-GCN), which utilizes a GCN and self-attention to enhance semantic representations by aggregating neighborhood information and focusing on vital relationships. SAT-GCN consists of three modules: vertex feature extraction (VFE), self-attention with dimension reduction (SADR), and far distance feature suppression (FDFS). VFE extracts neighboring relationships between features using a GCN after encoding the raw point cloud. SADR further augments the weights of crucial neighboring relationships through self-attention. FDFS suppresses meaningless edges formed by sparse point cloud distributions in remote areas and generates corresponding global features. Extensive experiments are conducted on the widely used KITTI and nuScenes 3D object detection benchmarks. The results demonstrate significant improvements over mainstream methods: PointPillars, SECOND, and PointRCNN gain 4.88%, 5.02%, and 2.79% in mean 3D AP, respectively, on the KITTI test set. SAT-GCN boosts point cloud detection accuracy, especially at medium and long distances. Furthermore, adding the SAT-GCN module has a limited impact on real-time performance and model size.
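The three modules can be illustrated with a minimal sketch. This is not the authors' implementation; the k-NN graph construction, the dot-product attention scoring, and the `max_range` distance threshold are all assumptions chosen for illustration, standing in for VFE (GCN-style edge features over neighbors), SADR (self-attention weighting of edges), and FDFS (masking edges from remote, sparse regions):

```python
import numpy as np

def knn(points, k):
    """Index the k nearest neighbours of each point (brute force)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]          # skip self (column 0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sat_gcn_layer(points, feats, k=4, max_range=40.0):
    """One illustrative layer: GCN edge features (VFE-like),
    self-attention over neighbours (SADR-like), and suppression
    of far-distance edges (FDFS-like). Hypothetical design."""
    n, c = feats.shape
    idx = knn(points, k)                               # (n, k) neighbour indices
    nbr = feats[idx]                                   # (n, k, c) neighbour features
    edge = nbr - feats[:, None, :]                     # GCN-style edge features
    # self-attention: score each edge against its centre feature
    scores = (edge * feats[:, None, :]).sum(-1) / np.sqrt(c)
    # far-distance suppression: mask edges to remote points
    far = np.linalg.norm(points[idx], axis=-1) > max_range
    scores = np.where(far, -np.inf, scores)
    w = softmax(scores, axis=1)                        # (n, k) edge weights
    w = np.nan_to_num(w)                               # all-masked rows -> 0
    return feats + (w[..., None] * edge).sum(axis=1)   # residual aggregation
```

The module is drop-in in shape: it maps an `(n, c)` feature matrix to an `(n, c)` matrix, so it can sit between a point encoder and a detection head without changing the backbone, which is consistent with the paper's claim of limited impact on model size.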