Interactive sonification strategies for the motion and emotion of dance performances
Journal on Multimodal User Interfaces (IF 2.9), Pub Date: 2020-03-14, DOI: 10.1007/s12193-020-00321-3
Steven Landry, Myounghoon Jeon

Sonification has the potential to communicate a variety of data types to listeners, including not just cognitive information but also emotions and aesthetics. The goal of our dancer sonification project is to “sonify emotions as well as motions” of a dance performance via musical sonification. To this end, we developed and evaluated sonification strategies for adding a layer of emotional mappings to data sonification. Experiment 1 developed and evaluated four musical sonification strategies (i.e., sin-ification, MIDI-fication, a melody module, and a melody-and-arrangement module) to examine their emotional effects. Videos were recorded of a professional dancer interacting with each of the four strategies. Forty-eight participants rated musicality, emotional expressivity, and sound-motion/emotion compatibility via an online survey. Results suggest that adding more musical mappings led to higher ratings on each dimension for dance-type gestures. Experiment 2 used the musical sonification framework to develop four sonification scenarios, each aimed at communicating a target emotion (happy, sad, angry, or tender). Thirty participants compared the four interactive sonification scenarios with four pre-composed dance choreographies featuring the same musical and gestural palettes. Both forced-choice and multi-dimensional emotional evaluations were collected, as well as motion/emotion compatibility ratings. Results show that presenting both music and dance led to higher accuracy scores for most target emotions than music or dance alone. These findings can contribute to movement sonification, algorithmic music composition, and affective computing in general by describing strategies for conveying emotion through sound.
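To make the mapping layers described in the abstract more concrete, the following is a minimal sketch, not the authors' implementation: the function names (sinify, midify, sonify_frame), the frequency and note ranges, and the emotion-to-music table are all hypothetical placeholders. It illustrates how a normalized motion feature could be mapped to a continuous sine-wave frequency (“sin-ification”) or quantized to scale tones as MIDI notes (“MIDI-fication”), with a target emotion selecting coarse musical parameters such as mode and tempo.

```python
"""Illustrative sketch only: hypothetical mappings, not the paper's system."""

# Hypothetical emotion -> musical-parameter table. Mode and tempo are common
# cues for valence/arousal; the exact values here are placeholders.
EMOTION_PARAMS = {
    "happy":  {"mode": "major", "tempo_bpm": 140},
    "sad":    {"mode": "minor", "tempo_bpm": 60},
    "angry":  {"mode": "minor", "tempo_bpm": 150},
    "tender": {"mode": "major", "tempo_bpm": 70},
}

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets within one octave
MINOR_SCALE = [0, 2, 3, 5, 7, 8, 10]


def sinify(motion_value, lo=0.0, hi=1.0, f_min=220.0, f_max=880.0):
    """'Sin-ification': map a normalized motion value (e.g., hand height)
    linearly onto a continuous sine-wave frequency in Hz."""
    t = (motion_value - lo) / (hi - lo)
    return f_min + t * (f_max - f_min)


def midify(motion_value, mode="major", root=60, octaves=2, lo=0.0, hi=1.0):
    """'MIDI-fication': map the same motion value onto the nearest scale tone,
    producing a discrete MIDI note number instead of a raw frequency."""
    scale = MAJOR_SCALE if mode == "major" else MINOR_SCALE
    degrees = [root + 12 * o + s for o in range(octaves) for s in scale]
    t = (motion_value - lo) / (hi - lo)
    index = round(t * (len(degrees) - 1))
    return degrees[index]


def sonify_frame(motion_value, emotion="happy"):
    """Combine both layers: the target emotion picks mode and tempo,
    the motion value picks the pitch within that musical context."""
    params = EMOTION_PARAMS[emotion]
    note = midify(motion_value, mode=params["mode"])
    return {"midi_note": note, "tempo_bpm": params["tempo_bpm"], "mode": params["mode"]}


if __name__ == "__main__":
    # Example: a dancer's hand rising from low (0.1) to high (0.9).
    for v in (0.1, 0.5, 0.9):
        print(v, round(sinify(v), 1), "Hz |", sonify_frame(v, emotion="sad"))
```

In this sketch, scale quantization is what separates MIDI-fication from raw sin-ification; the melody and arrangement modules described in the abstract would add phrase-level and accompaniment decisions on top of this per-frame mapping.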

Updated: 2020-03-14