Texture Plus Depth Video Coding Using Camera Global Motion Information

2017 
In video coding, traditional motion estimation methods work well for videos with translational camera motion, but their efficiency drops for other motion types, such as rotation and dolly motion. In this paper, a motion-information-based three-dimensional (3D) video coding method is proposed for texture-plus-depth 3D video. Synchronized global motion information of the camera is obtained to help the encoder improve its rate-distortion performance: temporally neighboring texture and depth frames are projected to the position of the current frame using the depth maps and the camera motion information. The projected frames are then added to the reference picture list as virtual reference frames. Because these virtual reference frames can be more similar to the current frame to be encoded than the conventional reference frames, fewer bits are required to represent the residual. Experimental results demonstrate that the proposed scheme enhances coding performance for all camera motion types and for various scene settings and resolutions, under both the H.264 and HEVC standards. For computer graphics sequences coded with H.264, the average gains of texture and depth coding are up to 2 dB and 1 dB, respectively. For HEVC with HD-resolution sequences, the texture coding gain reaches 0.4 dB. For real-world sequences, gains of up to 0.5 dB (H.264) are achieved for the texture video, while gains of up to 0.7 dB are achieved for the depth sequences.
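As a rough illustration of the projection step described in the abstract, the sketch below forward-warps a reference texture frame into the current camera pose using its depth map, yielding a candidate virtual reference frame. It is a minimal NumPy sketch, not the authors' implementation: the helper name `project_reference_frame` is hypothetical, and it assumes a known intrinsic matrix K shared by both views, a relative pose (R, t) derived from the global camera motion, and nearest-pixel splatting without occlusion handling.

```python
import numpy as np

def project_reference_frame(texture, depth, K, R, t):
    """Forward-warp a reference texture frame into the current camera pose.

    texture : (H, W, 3) reference texture frame
    depth   : (H, W) per-pixel depth of the reference frame
    K       : (3, 3) camera intrinsic matrix (assumed shared by both views)
    R, t    : rotation (3, 3) and translation (3,) mapping reference-camera
              coordinates to current-camera coordinates (the global motion)
    Returns the warped texture and a validity mask; holes remain masked out.
    """
    H, W = depth.shape
    K_inv = np.linalg.inv(K)

    # Back-project every reference pixel to a 3D point using its depth.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    pts_ref = (K_inv @ pix) * depth.reshape(1, -1)                     # 3 x N

    # Move the points into the current camera's coordinate frame and re-project.
    pts_cur = R @ pts_ref + t.reshape(3, 1)
    proj = K @ pts_cur
    z = proj[2]
    valid = z > 1e-6
    u2 = np.round(proj[0] / np.where(valid, z, 1.0)).astype(int)
    v2 = np.round(proj[1] / np.where(valid, z, 1.0)).astype(int)
    valid &= (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)

    # Splat reference pixels into the virtual reference frame (nearest pixel,
    # no z-buffering; a full implementation would resolve occlusions and holes).
    warped = np.zeros_like(texture)
    mask = np.zeros((H, W), dtype=bool)
    src = texture.reshape(-1, texture.shape[-1])
    warped[v2[valid], u2[valid]] = src[valid]
    mask[v2[valid], u2[valid]] = True
    return warped, mask
```

In an encoder built along these lines, the warped frame (and an analogously warped depth frame) would be inserted into the reference buffer so that motion compensation can pick it whenever it predicts the current frame better than the conventional temporal references.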