Multimodal enactive interface: A side vision to support the main vision of Industry 4.0

2019 
This article presents a vision of bringing a next-generation, hitherto "fictional" kind of graphical user interface into practical reality within the industrial environment to improve productivity. More specifically, there is a strong need for sophisticated multimodal interaction between humans and machines, from the shop floor to the management floor, under the framework of the fourth industrial revolution, or Industry 4.0. One of the key components of this new framework is the inclusion of humans in the production line (human-in-the-loop), where the manufacturing process is often a collaborative effort between humans and machines. In such a situation, humans and machines should speak a common language without any communication barrier. With the existing graphical user interfaces, which are predominantly text based, humans are forced into an unnatural mode of interaction with machines. Machines, on the other hand, do not understand when humans communicate in their natural modes of interaction, such as voice, natural language, gestures, facial expressions, eye gaze, and body postures. Evidently, the existing user interfaces are insufficient to fulfill the vision of Industry 4.0.