Predicting Human-Reported Enjoyment Responses in Happy and Sad Music

2019 
Whether in a happy mood or a sad mood, humans enjoy listening to music. In this paper, we introduce a novel method to identify the auditory features that best predict listener-reported enjoyment ratings: we split the features into qualitative feature groups, train predictive models on each group, and compare prediction performance. Using audio features related to dynamics, timbre, harmony, and rhythm, we predicted continuous enjoyment ratings for a set of happy and sad songs. We found that a distributed lag model with L1 regularization best predicted these responses, and that timbre-related features were most relevant for predicting enjoyment ratings in happy music, while harmony-related features were most relevant for predicting enjoyment ratings in sad music. This work adds to our understanding of how music influences affective human experience.
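The modeling approach described above can be sketched with a small, hypothetical example: a distributed lag model regresses each time-point's enjoyment rating on the current and lagged values of the audio features, and an L1 (lasso) penalty zeroes out irrelevant lag/feature pairs. The data below is synthetic, and the lag count, penalty strength, and feature dimensions are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

def make_lagged_design(features, n_lags):
    """Stack current and lagged copies of the feature matrix.

    features: (T, d) array of per-frame audio features.
    Returns X of shape (T - n_lags, d * (n_lags + 1)); column block k
    holds the features at lag k.
    """
    T, _ = features.shape
    cols = [features[n_lags - k : T - k] for k in range(n_lags + 1)]
    return np.hstack(cols)

# Synthetic stand-in for real audio features and enjoyment ratings.
rng = np.random.default_rng(0)
T, d, n_lags = 200, 4, 3
feats = rng.normal(size=(T, d))
# Hypothetical ground truth: the rating tracks feature 0 at a lag of 2 frames.
ratings = 0.8 * np.roll(feats[:, 0], 2) + 0.1 * rng.normal(size=T)

X = make_lagged_design(feats, n_lags)
y = ratings[n_lags:]

model = Lasso(alpha=0.01)  # L1 penalty encourages sparse lag/feature weights
model.fit(X, y)
coefs = model.coef_.reshape(n_lags + 1, d)  # row k = weights at lag k
```

Comparing the fitted coefficient magnitudes across feature groups (e.g. timbre vs. harmony columns) is one way to ask which group carries the most predictive weight, in the spirit of the group-wise comparison the paper performs.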