Towards a Modelling Workbench with Flexible Interaction Models for Model Editors Operating through Voice and Gestures

2021 
Model-Driven Engineering (MDE) has emerged as a methodology, grounded in theory and tooling, for designing and developing software systems with models at their core. MDE relies on modelling languages, ranging from general-purpose languages such as the standard UML to dedicated Domain-Specific Modelling Languages. These are supported by modelling editors, and the languages typically offer graphical and textual notations as concrete syntax. However, by concentrating on vision, these tools ignore other senses and communication channels, such as audition (voice and sound), that could be used in industrial settings, for accessibility purposes, or simply as a complement to visual approaches. We are building a modelling workbench platform that, similarly to modelling workbenches for diagrammatic languages, allows a software language engineer to model a domain-specific language and generate a voice/audio editor in which domain end-users can operate on (Create, Read, Update and Delete) and navigate diagrams through speech recognition and voice synthesis tools. One problem with voice-based editors is that fixed interaction paradigms lead to a poor user experience. In this paper, we propose an interaction mechanism that recognises vocal and non-vocal sounds as well as gestures, adding two senses, not usually explored in model-driven tools, to the definition of a Domain-Specific Language's concrete syntax. We have built a prototype and carried out a pilot empirical study, with preliminary positive results, to assess the prototype in terms of usability, productivity and learning curve.
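To make the voice-editing idea concrete, the sketch below shows, in plain Python, how recognised utterances might be dispatched to CRUD operations on an in-memory model. This is a minimal illustration under invented assumptions, not the authors' implementation: the Model class, the regular-expression command grammar, and handle_utterance are hypothetical names, and a real editor would receive text from a speech recogniser and speak replies back via voice synthesis rather than printing them.

```python
# Hypothetical sketch: recognised voice commands dispatched to CRUD
# operations on model elements. All names here are illustrative assumptions.
import re
from dataclasses import dataclass, field


@dataclass
class Model:
    """An in-memory model: element name -> attribute dictionary."""
    elements: dict = field(default_factory=dict)

    def create(self, name):
        self.elements.setdefault(name, {})
        return f"created {name}"

    def read(self, name):
        return f"{name}: {self.elements.get(name, 'not found')}"

    def update(self, name, attr, value):
        if name in self.elements:
            self.elements[name][attr] = value
            return f"set {attr} of {name} to {value}"
        return f"{name} not found"

    def delete(self, name):
        self.elements.pop(name, None)
        return f"deleted {name}"


# Simple utterance patterns standing in for a speech-recognition grammar.
COMMANDS = [
    (re.compile(r"create (?:a |an )?(\w+)"),
     lambda m, mod: mod.create(m.group(1))),
    (re.compile(r"read (\w+)"),
     lambda m, mod: mod.read(m.group(1))),
    (re.compile(r"set (\w+) of (\w+) to (\w+)"),
     lambda m, mod: mod.update(m.group(2), m.group(1), m.group(3))),
    (re.compile(r"delete (\w+)"),
     lambda m, mod: mod.delete(m.group(1))),
]


def handle_utterance(utterance, model):
    """Dispatch a recognised utterance to the matching CRUD operation."""
    for pattern, action in COMMANDS:
        match = pattern.fullmatch(utterance.strip().lower())
        if match:
            return action(match, model)
    return "command not understood"


if __name__ == "__main__":
    model = Model()
    for spoken in ["create a Class", "set name of class to Person", "read class"]:
        # In the real editor the text would come from a speech recogniser
        # and the reply would be spoken back via voice synthesis.
        print(handle_utterance(spoken, model))
```

A generative workbench in the paper's sense would derive such a command grammar from the DSL's metamodel rather than hard-coding it as above.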