Vehicle weight identification system for spatiotemporal load distribution on bridges based on non-contact machine vision technology and deep learning algorithms

2020 
Abstract: Accurate information about vehicle loads plays a significant role in maintaining the structural health of bridges. However, the only method currently available for measuring these loads is the bridge weigh-in-motion (BWIM) system, which is not widely used because of the high cost of the equipment involved. There is therefore a need for an effective, low-cost technology for ascertaining the spatiotemporal distribution of vehicle loads on long-span bridges. This paper proposes a non-contact methodology that identifies a vehicle and infers its load using machine vision technology and deep learning algorithms. Vehicle information (i.e., type, weight, position, and motion trajectory) is conveniently obtained from a roadside surveillance camera, while the axle-weight distribution interval for each of nine classified vehicle types is derived from statistics on 8,402 delivery vehicles, establishing the relationship between a vehicle type and its corresponding weight information. Meanwhile, a dataset of 8,624 vehicle images was assembled to train a deep convolutional neural network (DCNN); the nine coarse-grained vehicle classes were chosen to enhance the generalizability of the network, and optimization analysis was conducted to improve its accuracy in vehicle type identification. Vehicle positions are detected by a faster region-based convolutional neural network (Faster R-CNN), in which the pre-trained DCNN, with 98.17% vehicle type classification accuracy, serves as the shared backbone layer to enhance computational efficiency.
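The type-to-weight step described above can be pictured as a simple lookup from a classified vehicle type to its axle-weight distribution interval. The sketch below is purely illustrative: the class names and the weight intervals are hypothetical placeholders, not the statistics the paper derives from its 8,402-vehicle dataset.

```python
# Hypothetical mapping from coarse-grained vehicle class to a gross-weight
# interval in tonnes. The paper derives such intervals statistically; the
# numbers and class names here are placeholders for illustration only.
AXLE_WEIGHT_INTERVALS_T = {
    "car_2axle":   (1.0, 2.5),
    "van_2axle":   (2.0, 4.5),
    "truck_3axle": (8.0, 25.0),
    # remaining classes of the nine-type scheme would follow the same pattern
}

def weight_interval(vehicle_type):
    """Return the (min, max) gross-weight interval in tonnes for a class,
    or None if the class is unknown."""
    return AXLE_WEIGHT_INTERVALS_T.get(vehicle_type)

lo, hi = weight_interval("truck_3axle")
```

In the full system, the classifier output would index such a table so that each detected vehicle contributes a weight estimate to the bridge's spatiotemporal load map.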
Using the object detection results from the Faster R-CNN together with a Kalman filter, a moving vehicle can be tracked in real time in the monitoring video, and a graphical user interface (GUI) connected to the camera enables automatic identification. A post-processing module was built on the proposed method, and a field test was conducted to validate the reliability of the system.
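The detection-plus-tracking step can be sketched with a minimal constant-velocity Kalman filter that smooths the bounding-box centre of a vehicle between Faster R-CNN detections. This is a generic sketch, not the paper's implementation; the noise covariances and initial state are illustrative choices.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over a 2-D bounding-box centre.
    State vector: [x, y, vx, vy]; measurements are (x, y) detections."""

    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])        # start at first detection
        self.P = np.eye(4) * 10.0                    # state covariance (illustrative)
        self.F = np.array([[1, 0, dt, 0],            # constant-velocity transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],             # we observe position only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                    # process noise (illustrative)
        self.R = np.eye(2) * 1.0                     # measurement noise (illustrative)

    def predict(self):
        """Propagate the state one frame ahead; used when detection is missing."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        """Correct the state with a new (x, y) detection."""
        z = np.array([zx, zy])
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Simulated detections of a vehicle moving ~5 px/frame along x.
kf = CentroidKalman(100.0, 200.0)
for t in range(1, 6):
    kf.predict()
    kf.update(100.0 + 5.0 * t, 200.0)
x_est, y_est = kf.x[0], kf.x[1]
```

In a full tracker, `predict()` bridges frames where the detector misses the vehicle, and the predicted position is matched against new Faster R-CNN detections to maintain a continuous trajectory.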