Predicting the perceptual quality of networked video through light-weight bitstream analysis

2014 
With the exponential growth of video traffic over wireless networks and on embedded devices such as mobile phones and sensors, mechanisms are needed to predict the perceptual quality of video in real time and with low complexity, so that networking protocols can control video quality and optimize network resources to meet users' quality of experience (QoE) requirements. This paper proposes an efficient, light-weight video quality prediction model based on partial parsing of compressed video bitstreams. A set of features is introduced to reflect video content characteristics and the distortions caused by compression and transmission. All of the features can be obtained directly from the H.264/AVC compressed bitstream in parsing mode, without decoding the pixel information in macroblocks. Based on these features, an artificial neural network model is trained for perceptual quality prediction. Evaluation results show that the proposed model achieves accurate prediction of perceptual video quality at low computational cost. It is therefore well suited for real-time networked video applications on embedded devices.
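To make the modelling step concrete, the sketch below shows one plausible way to train a small artificial neural network regressor that maps bitstream-level features to a perceptual quality score, assuming scikit-learn is available. It is not the authors' implementation: the feature names (average quantization parameter, bit rate, mean motion vector magnitude, intra-coded macroblock ratio, packet loss rate) and the use of a mean opinion score (MOS) target are illustrative assumptions, since the abstract does not list the exact feature set.

```python
# Minimal sketch (not the paper's code) of ANN-based quality prediction from
# parsed-bitstream features. Feature names and data are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["avg_qp", "bitrate_kbps", "avg_mv_magnitude",
            "intra_mb_ratio", "packet_loss_rate"]  # assumed feature set

# Hypothetical training data: one row of features per video sequence,
# extracted by parsing the H.264/AVC bitstream, with a subjective MOS
# (1-5 scale) as the regression target.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, len(FEATURES)))
y = 5.0 - 3.0 * X[:, 0] - 1.0 * X[:, 4] + rng.normal(scale=0.2, size=200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)

# Predict the perceptual quality of a new sequence from its parsed features.
new_sequence = np.array([[0.4, 0.7, 0.3, 0.2, 0.05]])
print("Predicted MOS:", model.predict(new_sequence)[0])
```

In a deployed system the same trained network would run on each received sequence or segment, with features gathered in parsing mode only, which is what keeps the prediction light-weight enough for real-time use on embedded devices.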