Automatic mapping of gaze position coordinates of eye-tracking glasses video on a common static reference image

2018 
This paper describes a method for automatic semantic gaze mapping from video obtained by eye-tracking glasses to a common reference image. Image feature detection and description algorithms are used to locate subsequent video frames within the reference image and to map the corresponding gaze coordinates onto it. This process allows experiment results to be aggregated for further analysis and provides an alternative to manual semantic gaze mapping methods.
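To illustrate the general idea, the sketch below maps a single gaze coordinate from one video frame onto a reference image by matching local features and estimating a homography. The paper does not specify which detector, matcher, or estimation parameters are used; ORB features, brute-force Hamming matching, and RANSAC-based homography estimation in OpenCV are assumptions chosen for illustration, and all file names and coordinates are hypothetical.

# Minimal sketch, assuming ORB features and a RANSAC homography (OpenCV).
import cv2
import numpy as np

def map_gaze_to_reference(frame, reference, gaze_xy, min_matches=10):
    """Project an (x, y) gaze coordinate from `frame` into `reference` coordinates."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    kp_r, des_r = orb.detectAndCompute(reference, None)
    if des_f is None or des_r is None:
        return None

    # Match binary descriptors with Hamming distance and keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_r), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    # Estimate a homography from frame coordinates to reference coordinates.
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Transform the gaze coordinate with the estimated homography.
    pt = np.float32([[gaze_xy]])  # shape (1, 1, 2)
    mapped = cv2.perspectiveTransform(pt, H)
    return tuple(mapped[0, 0])

# Illustrative usage (file names and gaze coordinate are hypothetical):
# frame = cv2.imread("frame_0042.png", cv2.IMREAD_GRAYSCALE)
# reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
# print(map_gaze_to_reference(frame, reference, (512.0, 300.0)))

Repeating this per frame and accumulating the mapped points yields aggregated gaze data in the coordinate system of the reference image.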