Spatio-Temporal Compositing of Video Elements for Immersive eLearning Classrooms

2017 
Current live eLearning systems enable remote students to view a teaching environment comprising several information sources, such as the teacher and the teaching aids. These sources are presented as individual video and audio elements. As a result, spatial connections between them, such as the teacher using hand gestures to point to an area on the screen, become meaningless at the remote location. Moreover, remote students must divide their attention between multiple visual elements, such as one screen displaying presentation slides and a separate screen displaying the teacher. This divided attention causes students to disengage from the class. This paper outlines a presentation architecture that preserves the spatio-temporal correlation among the multimedia elements. The architecture involves calibration of correlation data, capture of the information sources, streaming to remote participating locations, and compositing of the received streams, using feature-matching techniques, into a unified multilayered video presentation. A real-time virtual teaching environment is created for live eLearning sessions that closely mimics a natural environment, yielding an engaging experience for remote students. The system also imposes no restrictions on the teacher's natural teaching style. A user study compared the proposed system to the representation in common use today; the results indicate a marked improvement in the classroom experience for remote students.
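The compositing step described above, matching features between the received streams and placing each element (e.g. the segmented teacher layer) into the slide's coordinate frame so gestures land on the correct slide region, can be sketched roughly as follows. This is an illustrative stand-in, not the paper's implementation: all names are hypothetical, descriptors are toy tuples, and a simple average translation replaces the full homography estimation a real system would perform on video frames.

```python
# Hypothetical sketch of feature-matching-based compositing.
# Keypoints are (x, y) tuples; descriptors are small numeric tuples.
from math import dist  # Euclidean distance (Python 3.8+)

def match_features(desc_a, desc_b):
    """Greedy nearest-neighbour matching between two descriptor sets.
    Each set maps a keypoint (x, y) to its descriptor tuple."""
    matches = []
    for pa, da in desc_a.items():
        pb = min(desc_b, key=lambda p: dist(da, desc_b[p]))
        matches.append((pa, pb))
    return matches

def estimate_translation(matches):
    """Average offset between matched keypoints -- a toy stand-in for
    the homography estimation a real compositor would use."""
    n = len(matches)
    dx = sum(pb[0] - pa[0] for pa, pb in matches) / n
    dy = sum(pb[1] - pa[1] for pa, pb in matches) / n
    return dx, dy

def composite(layers, translation):
    """Shift each foreground layer (e.g. the teacher) into the slide's
    coordinate frame so pointing gestures stay spatially aligned."""
    dx, dy = translation
    return [{"name": name, "x": x + dx, "y": y + dy}
            for name, (x, y) in layers.items()]

# Toy data: features of the reference slide vs. the captured screen stream.
slide_feats = {(0, 0): (1.0,), (2, 2): (2.0,)}
stream_feats = {(10, 5): (1.0,), (12, 7): (2.0,)}
t = estimate_translation(match_features(slide_feats, stream_feats))
layers_out = composite({"teacher": (3, 4)}, t)  # teacher placed at (13, 9)
```

In a real pipeline, the descriptors would come from a detector such as ORB or SIFT, the translation would be a full planar homography fitted robustly (e.g. with RANSAC), and the warp would be applied per frame before alpha-blending the layers.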