No-reference video quality assessment based on modeling temporal-memory effects

2021 
Abstract This study presents a hybrid network for no-reference (NR) video quality assessment (VQA). Besides spatial cues, the network accounts for the effects of temporal motion and temporal hysteresis on visual quality estimation through two embedded modules. The first module fuses short-term spatio-temporal features derived from spatial quality maps and temporal quality maps, and the follow-up module employs a graph convolutional network to quantify the relationships between frames in a sequence. The proposed network and several popular models are evaluated on three video quality databases (CSIQ, LIVE, and KoNViD-1K). Experimental results indicate that the network outperforms the other NR models considered and performs close to state-of-the-art full-reference VQA models. In conclusion, short-term spatio-temporal feature fusion benefits the modeling of the interaction between spatial and temporal cues in VQA tasks, long-term sequence fusion further improves performance, and a strong correlation with human subjective judgments is achieved.
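The abstract describes the long-term module only at a high level. As a rough illustration of the idea of aggregating per-frame features with a graph convolution, a minimal sketch is given below; the fully connected frame graph, feature dimensions, pooling, and regression head are all assumptions and not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FrameGCNLayer(nn.Module):
    """A single graph-convolution layer over per-frame features (T x D)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (T, in_dim) per-frame features; adj: (T, T) normalized adjacency
        return torch.relu(adj @ self.linear(x))

def normalized_adjacency(num_frames):
    """Fully connected frame graph with symmetric normalization
    (an assumption; the paper's graph construction may differ)."""
    a = torch.ones(num_frames, num_frames)
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

# Example: aggregate 16 frames of 128-d fused spatio-temporal features
# into a single clip-level quality score (placeholder features).
frames = torch.randn(16, 128)
gcn = FrameGCNLayer(128, 64)
adj = normalized_adjacency(16)
clip_feat = gcn(frames, adj).mean(dim=0)   # pool over frames
score = nn.Linear(64, 1)(clip_feat)        # regress to a quality score
```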