Temporally coherent 4D video segmentation for teleconferencing
2013
We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds, similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
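The pipeline described above (fill missing depth values, threshold depth for a coarse foreground mask, then impose temporal coherence across frames) can be sketched as follows. This is a minimal illustration, not the authors' method: the 3x3 median hole-filling, the fixed depth threshold, and the exponential mask smoothing are all assumed stand-ins for the paper's depth correction, RGB-assisted segmentation, and temporal-coherence steps.

```python
import numpy as np

def fill_missing_depth(depth, invalid=0):
    """Replace missing sensor readings (marked `invalid`, commonly 0)
    with the median of valid values in a 3x3 neighborhood; fall back to
    the global median when a pixel has no valid neighbors."""
    filled = depth.astype(float).copy()
    valid = depth != invalid
    global_med = float(np.median(depth[valid])) if valid.any() else 0.0
    h, w = depth.shape
    for y, x in zip(*np.where(~valid)):
        patch = depth[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        good = patch[patch != invalid]
        filled[y, x] = float(np.median(good)) if good.size else global_med
    return filled

def segment_frame(depth, threshold, prev_mask=None, alpha=0.7):
    """Coarse foreground mask: the user is assumed closer to the camera
    than `threshold` (in sensor depth units). Temporal coherence is
    imposed by exponentially blending with the previous frame's mask,
    which suppresses frame-to-frame flicker at segment boundaries."""
    mask = (fill_missing_depth(depth) < threshold).astype(float)
    if prev_mask is not None:
        mask = alpha * mask + (1.0 - alpha) * prev_mask
    return mask
```

In practice the smoothed mask would serve as an alpha matte for compositing the extracted user onto the virtual background, with the RGB channels used to refine the depth-derived boundary.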