Utilising natural cross-modal mappings for visual control of feature-based sound synthesis

2017 
This paper presents the results of an investigation into audio-visual (AV) correspondences conducted as part of the development of Morpheme, a painting interface for controlling a corpus-based concatenative sound synthesis algorithm. Previous research has identified strong AV correspondences between dimensions such as pitch and vertical position or loudness and size. However, these correspondences are usually established empirically by varying only a single audio or visual parameter. Although it is recognised that the perception of AV correspondences is affected by the interaction between the parameters of auditory or visual stimuli when these are complex multidimensional objects, there has been little research into perceived AV correspondences involving complex dynamic sounds. We conducted an experiment in which two AV mapping strategies and three audio corpora were empirically evaluated: 110 participants were asked to rate the perceived similarity of six AV associations. The results confirmed that size/loudness, vertical position/pitch, and colour brightness/spectral brightness are strongly associated. Weaker but significant associations were found between texture granularity and sound dissonance, and between colour complexity and sound dissonance. Harmonicity was found to moderate the perceived strength of these associations: the higher the harmonicity of the sounds, the stronger the perceived AV associations.
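
The abstract names four cross-modal mappings (size to loudness, vertical position to pitch, colour brightness to spectral brightness, texture granularity to dissonance). The sketch below is a minimal, hypothetical illustration of how such mappings could translate normalised visual stroke features into target audio descriptors for driving unit selection in a concatenative synthesiser; it is not the authors' Morpheme implementation, and all class names, descriptor ranges, and the linear scaling are assumptions for illustration only.

```python
# Hypothetical sketch: normalised visual stroke features -> target audio
# descriptors, following the associations reported in the abstract.
# Ranges and linear scaling are illustrative assumptions, not Morpheme's.

from dataclasses import dataclass


@dataclass
class StrokeFeatures:
    size: float          # 0..1, brush/blob size
    vertical_pos: float  # 0..1, 0 = bottom of canvas, 1 = top
    brightness: float    # 0..1, colour brightness (e.g. HSV value)
    granularity: float   # 0..1, texture granularity of the stroke


@dataclass
class AudioTarget:
    loudness_db: float            # target loudness
    pitch_hz: float               # target pitch
    spectral_centroid_hz: float   # proxy for spectral brightness
    dissonance: float             # 0..1 roughness/dissonance descriptor


def lin(x: float, lo: float, hi: float) -> float:
    """Clamp a 0..1 control value and scale it linearly into [lo, hi]."""
    return lo + max(0.0, min(1.0, x)) * (hi - lo)


def map_stroke_to_target(s: StrokeFeatures) -> AudioTarget:
    """Apply size->loudness, height->pitch, colour brightness->spectral
    brightness, and granularity->dissonance as simple linear maps."""
    return AudioTarget(
        loudness_db=lin(s.size, -40.0, 0.0),
        pitch_hz=lin(s.vertical_pos, 110.0, 880.0),
        spectral_centroid_hz=lin(s.brightness, 500.0, 5000.0),
        dissonance=lin(s.granularity, 0.0, 1.0),
    )


if __name__ == "__main__":
    stroke = StrokeFeatures(size=0.8, vertical_pos=0.6,
                            brightness=0.3, granularity=0.7)
    print(map_stroke_to_target(stroke))
```

In a corpus-based concatenative setting, a target of this kind would typically be compared against per-unit descriptors in the corpus to select the closest-matching grains; the specific distance measure and descriptor set used by Morpheme are not given in the abstract.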