Detecting Video Anomaly with a Stacked Convolutional LSTM Framework.

2019 
Automatic anomaly detection in real-world video surveillance remains challenging. In this paper, we propose an autoencoder architecture based on a stacked convolutional LSTM framework that captures both the spatial and temporal aspects of anomalies in surveillance videos. The spatial component (i.e., the spatial encoder/decoder) uses a Convolutional Neural Network (CNN) and carries information about scenes and objects. The temporal component (i.e., the temporal encoder/decoder) uses a stacked convolutional LSTM and models object movement. Specifically, we integrate the CNN and the stacked convolutional LSTM to learn normal patterns from the training data, which contains only normal events. With this integrated approach, our method models spatio-temporal information better than many existing methods. We train our models in an unsupervised manner, and labels are required only in the testing phase. Our method is evaluated on the Avenue, UCSD, and ShanghaiTech Campus datasets. The results show that the accuracy of our method rivals state-of-the-art methods while offering faster detection.
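To make the described architecture concrete, the following is a minimal sketch of a spatio-temporal autoencoder in tf.keras: a per-frame CNN spatial encoder, stacked ConvLSTM2D layers as the temporal encoder/decoder, and a deconvolutional spatial decoder. The clip length, frame size, and layer widths are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a CNN + stacked ConvLSTM autoencoder for video
# anomaly detection. Assumes 10-frame grayscale clips of size 128x128;
# all layer sizes are illustrative, not the paper's exact settings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 10, 128, 128, 1

inputs = layers.Input(shape=(SEQ_LEN, H, W, C))

# Spatial encoder: per-frame convolutions capture scene/object appearance.
x = layers.TimeDistributed(layers.Conv2D(64, 5, strides=2, padding="same",
                                         activation="relu"))(inputs)
x = layers.TimeDistributed(layers.Conv2D(32, 3, strides=2, padding="same",
                                         activation="relu"))(x)

# Temporal encoder/decoder: stacked ConvLSTM layers model object motion
# across the clip while preserving the spatial layout of the feature maps.
x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(x)
x = layers.ConvLSTM2D(16, 3, padding="same", return_sequences=True)(x)
x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(x)

# Spatial decoder: per-frame transposed convolutions reconstruct the frames.
x = layers.TimeDistributed(layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                                                  activation="relu"))(x)
outputs = layers.TimeDistributed(layers.Conv2DTranspose(C, 5, strides=2, padding="same",
                                                        activation="sigmoid"))(x)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Training is unsupervised: the model sees only clips of normal events and
# learns to reconstruct them; no labels are used at this stage.
normal_clips = np.random.rand(8, SEQ_LEN, H, W, C).astype("float32")  # stand-in data
autoencoder.fit(normal_clips, normal_clips, epochs=1, batch_size=4)

# At test time, a high per-clip reconstruction error flags a likely anomaly.
test_clips = np.random.rand(2, SEQ_LEN, H, W, C).astype("float32")
recon = autoencoder.predict(test_clips)
errors = np.mean((recon - test_clips) ** 2, axis=(1, 2, 3, 4))
print("per-clip reconstruction error:", errors)
```

In this sketch the anomaly score is simply the mean squared reconstruction error of a clip; a threshold on that score (or a normalized regularity score derived from it) would separate normal from anomalous segments at test time.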