Two-Stream Spatial-Temporal Graph Convolutional Networks for Driver Drowsiness Detection

2021 
Convolutional neural networks (CNNs) have achieved remarkable performance in driver drowsiness detection based on the extraction of deep features of drivers' faces. However, the performance of driver drowsiness detection methods decreases sharply when complications occur, such as illumination changes in the cab, occlusions and shadows on the driver's face, and variations in the driver's head pose. In addition, current driver drowsiness detection methods cannot distinguish between similar driver states, such as talking versus yawning or blinking versus closing the eyes. Therefore, technical challenges remain in driver drowsiness detection. In this article, we propose a novel and robust two-stream spatial-temporal graph convolutional network (2s-STGCN) for driver drowsiness detection to address these challenges. To exploit the spatial and temporal features of the input data, we use a facial landmark detection method to extract the driver's facial landmarks from real-time videos and then obtain the driver drowsiness detection result with the 2s-STGCN. Unlike existing methods, our proposed method uses videos rather than consecutive video frames as processing units. This is the first effort to exploit these processing units in the field of driver drowsiness detection. Moreover, the two-stream framework models not only the spatial and temporal features but also the first-order and second-order information simultaneously, thereby notably improving driver drowsiness detection. Extensive experiments have been performed on the yawn detection dataset (YawDD) and the National Tsing Hua University drowsy driver detection (NTHU-DDD) dataset. The experimental results validate the feasibility of the proposed method, which achieves an average accuracy of 93.4% on the YawDD dataset and an average accuracy of 92.7% on the evaluation set of the NTHU-DDD dataset.
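The pipeline described above (facial landmarks over time as a graph, a spatial graph convolution followed by a temporal convolution, and score fusion of a first-order position stream with a second-order "bone" stream) can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the 5-landmark chain graph, layer widths, temporal averaging kernel, and sum fusion are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the 2s-STGCN idea; graph, sizes, and fusion
# rule are illustrative assumptions, not the paper's exact model.

def normalize_adjacency(A):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 of the landmark graph."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def st_gcn_layer(X, A_norm, W, temporal_kernel=3):
    """One spatial-temporal block: graph conv over landmarks, then a
    simple temporal smoothing over frames, then ReLU.
    X: (T, V, C_in) -> (T, V, C_out)."""
    Xs = np.einsum('uv,tvc->tuc', A_norm, X) @ W            # spatial graph conv
    pad = temporal_kernel // 2
    Xp = np.pad(Xs, ((pad, pad), (0, 0), (0, 0)), mode='edge')
    Xt = np.stack([Xp[t:t + temporal_kernel].mean(axis=0)   # temporal conv (avg)
                   for t in range(Xs.shape[0])])
    return np.maximum(Xt, 0.0)

def bone_stream(X, edges):
    """Second-order input: vectors between connected landmarks ("bones")."""
    B = np.zeros_like(X)
    for child, parent in edges:
        B[:, child] = X[:, child] - X[:, parent]
    return B

rng = np.random.default_rng(0)
T, V, C = 30, 5, 2                        # 30 frames, 5 landmarks, (x, y) coords
edges = [(1, 0), (2, 1), (3, 2), (4, 3)]  # toy chain graph over landmarks
A = np.zeros((V, V))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_norm = normalize_adjacency(A)

X = rng.standard_normal((T, V, C))        # first-order stream: landmark positions
W1, W2 = rng.standard_normal((C, 8)), rng.standard_normal((C, 8))
W_out = rng.standard_normal((8, 2))       # 2 classes: alert vs. drowsy

def stream_logits(inp, W):
    H = st_gcn_layer(inp, A_norm, W)      # (T, V, 8)
    return H.mean(axis=(0, 1)) @ W_out    # global average pool -> class scores

# Two-stream fusion: sum the per-stream class scores.
logits = stream_logits(X, W1) + stream_logits(bone_stream(X, edges), W2)
print(logits.shape)  # (2,)
```

Treating the whole landmark sequence as one graph-structured input is what lets the model score an entire video clip at once, rather than classifying frames independently.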