Multimodal Coordination of Sound and Movement in Music and Speech
Discourse Processes (IF 2.437) | Pub Date: 2020-06-18 | DOI: 10.1080/0163853x.2020.1768500
Camila Alviar, Rick Dale, Akeiylah Dewitt, Christopher Kello

ABSTRACT

Speech and music emerge from a spectrum of nested motor and perceptual coordination patterns, across timescales ranging from brief movements to longer actions. Intuitively, this nested clustering in movements should be reflected in sound. We examined similarities and differences in the multimodal, multiscale coordination of speech and music using two complementary measures: we computed spectra for the envelopes of acoustic amplitudes and motion amplitudes and correlated spectral power across modalities as a function of frequency; we also correlated smoothed envelopes and examined peaks in their cross-correlation functions. YouTube videos of five different modes of speaking and five different types of music were analyzed. Speech performances yielded stronger, more reliable relationships between sound and movement than music performances did. Interestingly, a cappella singing patterned more with music, and improvisational jazz piano patterned more with speech. The results suggest that nested temporal structures in sound and movement are coordinated as a function of the communicative aspects of a performance.
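As a rough illustration of the two measures described above, the sketch below extracts amplitude envelopes from a sound signal and a movement signal, correlates their envelope power spectra across frequencies, and finds the peak of their smoothed-envelope cross-correlation. This is a minimal sketch and not the authors' analysis code: the Hilbert-transform envelopes, Welch spectra, smoothing window, sampling rate, and synthetic input signals are all assumptions made for illustration.

```python
# A minimal sketch, assuming NumPy/SciPy, of the two analyses named in the abstract.
# Not the authors' pipeline: envelope extraction (Hilbert transform), Welch spectra,
# the smoothing window, sampling rate, and synthetic signals are illustrative choices.

import numpy as np
from scipy.signal import hilbert, welch, correlate, correlation_lags

FS = 100.0  # assumed common sampling rate (Hz) after resampling sound and motion


def amplitude_envelope(x):
    """Amplitude envelope via the magnitude of the analytic signal."""
    return np.abs(hilbert(x))


def envelope_spectrum(env, fs=FS):
    """Welch power spectrum of an amplitude envelope."""
    return welch(env, fs=fs, nperseg=1024)


def smooth(env, win=25):
    """Moving-average smoothing (window length in samples is an assumption)."""
    return np.convolve(env, np.ones(win) / win, mode="same")


def cross_correlation_peak(a, b, fs=FS):
    """Peak of the normalized cross-correlation of two envelopes and its lag (s)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    cc = correlate(a, b, mode="full") / len(a)
    lags = correlation_lags(len(a), len(b), mode="full")
    i = np.argmax(cc)
    return cc[i], lags[i] / fs


# Synthetic stand-ins for the acoustic and movement amplitude time series
t = np.arange(0, 60, 1 / FS)
rng = np.random.default_rng(0)
sound = rng.standard_normal(len(t))
motion = np.roll(sound, 10) + 0.5 * rng.standard_normal(len(t))  # loosely coupled

env_sound = amplitude_envelope(sound)
env_motion = amplitude_envelope(motion)

# Analysis 1: correlate envelope spectral power across modalities over frequency
_, p_sound = envelope_spectrum(env_sound)
_, p_motion = envelope_spectrum(env_motion)
spectral_r = np.corrcoef(np.log(p_sound), np.log(p_motion))[0, 1]

# Analysis 2: smooth the envelopes and locate the cross-correlation peak
peak_r, peak_lag = cross_correlation_peak(smooth(env_sound), smooth(env_motion))

print(f"spectral power correlation across frequencies: {spectral_r:.2f}")
print(f"envelope cross-correlation peak: {peak_r:.2f} at lag {peak_lag:.2f} s")
```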




Updated: 2020-06-18