Modality Capacity and Appropriateness in Multimodal Display of Complex Non-Semantic Information Stream

2019 
Abstract
The design of multimodal output should be based on modality capacity and appropriateness. Previous research on this question has had various limitations, such as relying on over-simplified tasks. We proposed a virtual-reality paradigm to explore the capacity and appropriateness of complex non-semantic information streams. In two experiments, fifty-six college students identified the location, magnitude, and/or frequency/duration of visual, auditory, and haptic stimuli, as well as their bi- and tri-modal combinations. We found that (1) for stimuli of 2–4 bits, visual stimuli were identified faster, more accurately, and with lower workload (p values […]); (2) […] (p values ≥ .068); (3) for visual stimuli, magnitude and location were identified more accurately than duration, whereas for haptic stimuli, location was identified more accurately than magnitude and duration; and (4) when two targets within one modality had to be identified simultaneously, performance deteriorated least for visual stimuli and most for auditory stimuli. Vision has the most general appropriateness and the largest capacity for non-semantic information streams, and this advantage cannot be overcome by multimodal redundant output. These findings should be considered when designing multimodal interfaces.
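The abstract quantifies stimuli as carrying "2–4 bits" of information. The sketch below illustrates the standard information-theoretic reading of that figure, in which a set of N equally likely stimulus alternatives carries log2 N bits; the mapping of this definition onto the study's specific stimulus sets is an assumption, not a detail given in the source.

```latex
% Standard information content of a stimulus set with N equally likely
% alternatives (mapping to this study's stimuli is assumed, not stated).
\[
  H = \log_2 N \ \text{bits}
\]
% Worked examples spanning the abstract's stated range of 2--4 bits:
\[
  N = 4  \;\Rightarrow\; H = \log_2 4  = 2 \ \text{bits}, \qquad
  N = 16 \;\Rightarrow\; H = \log_2 16 = 4 \ \text{bits}
\]
```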