Using Semantics to Automatically Generate Speech Interfaces for Wearable Virtual and Augmented Reality Applications

2017 
This paper presents a framework for automatically generating speech-based interfaces for controlling virtual and augmented reality (AR) applications on wearable devices. Starting from a set of natural language descriptions of application functionalities and a catalog of general-purpose icons annotated with possible implied meanings, the framework creates both the vocabulary and grammar for the speech recognizer and a graphic interface for the target application, in which icons are expected to evoke the available commands. To minimize the user's cognitive load during interaction, a semantics-based optimization mechanism is used to find the best mapping between icons and functionalities and to expand the set of valid commands. The framework was evaluated by applying it to see-through glasses for AR-based maintenance and repair operations. A set of experimental tests was designed to objectively and subjectively assess the first-time user experience of the automatically generated interface relative to that of a fully personalized interface. Moreover, the intuitiveness of the automatically generated interface was studied by analyzing the results obtained by trained users on the same interface. Objective measurements (false positives, false negatives, task completion rate, and average number of attempts needed to activate functionalities) and subjective measurements (system response accuracy, likeability, cognitive demand, annoyance, habitability, and speed) reveal that the results obtained by first-time and experienced users with the proposed framework's interface are very similar, and that their performance is comparable with both of the considered reference interfaces.
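The abstract does not detail how the icon-to-functionality mapping is optimized. One common way to frame such a problem is as a linear assignment over a semantic-similarity matrix, solved with the Hungarian algorithm. The sketch below illustrates that idea only; the functionality names, icon names, and similarity scores are made-up placeholders, not the paper's actual data or method.

```python
# Hypothetical sketch (not the paper's implementation): choose a one-to-one
# icon-to-functionality mapping that maximizes total semantic similarity.
import numpy as np
from scipy.optimize import linear_sum_assignment

functionalities = ["zoom in", "next step", "show manual"]
icons = ["magnifier", "arrow-right", "book"]

# similarity[i][j]: assumed semantic similarity (0..1) between functionality i
# and icon j; in practice these scores would come from the semantic annotations
# of the icons and the natural language descriptions of the functionalities.
similarity = np.array([
    [0.9, 0.1, 0.2],   # "zoom in"
    [0.1, 0.8, 0.1],   # "next step"
    [0.2, 0.1, 0.9],   # "show manual"
])

# linear_sum_assignment minimizes cost, so negate similarity to maximize it.
rows, cols = linear_sum_assignment(-similarity)
for f, i in zip(rows, cols):
    print(f"{functionalities[f]!r} -> {icons[i]!r} (similarity {similarity[f, i]:.2f})")
```

This formulation guarantees each functionality gets a distinct icon while the summed similarity is as high as possible, which is one plausible reading of "finding the best mapping between icons and functionalities" described above.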