Automatic discrimination between front and back ensemble locations in HRTF-convolved binaural recordings of music
EURASIP Journal on Audio, Speech, and Music Processing (IF 2.4). Pub Date: 2022-01-15. DOI: 10.1186/s13636-021-00235-2
Sławomir K. Zieliński, Paweł Antoniuk, Hyunkook Lee, Dale Johnson

One of the greatest challenges in the development of binaural machine audition systems is the disambiguation between front and back audio sources, particularly in complex spatial audio scenes. The goal of this work was to develop a method for discriminating between front- and back-located ensembles in binaural recordings of music. To this end, 22,496 binaural excerpts, representing either front- or back-located ensembles, were synthesized by convolving multi-track music recordings with 74 sets of head-related transfer functions (HRTFs). The discrimination method was developed using both the traditional approach, involving hand-engineered features, and a deep learning technique incorporating a convolutional neural network (CNN). Under HRTF-dependent test conditions, the CNN showed a very high discrimination accuracy (99.4%), slightly outperforming the traditional method. However, under the HRTF-independent test scenario, the CNN performed worse than the traditional algorithm, highlighting the importance of testing such algorithms under HRTF-independent conditions and indicating that the traditional method may generalize better than the CNN. A minimum of 20 HRTFs is required to achieve satisfactory generalization performance for the traditional algorithm, and 30 HRTFs for the CNN. The minimum duration of audio excerpts required by both the traditional and CNN-based methods was assessed as 3 s. Feature importance analysis, based on a gradient attribution mapping technique, revealed that for both the traditional and the deep learning methods, the frequency band between 5 and 6 kHz is particularly important for discriminating between front and back ensemble locations. Linear-frequency cepstral coefficients, interaural level differences, and audio bandwidth were identified as the key descriptors facilitating the discrimination process in the traditional approach.
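The synthesis step described above — convolving a source signal with a head-related impulse response (HRIR) pair to place it at a given direction — and the interaural level difference (ILD) descriptor can be illustrated with a minimal sketch. This is not the authors' pipeline; the function names and toy signals are purely illustrative, and a real experiment would sum binauralized multi-track stems and use measured HRIR sets.

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Convolve a mono source with a left/right HRIR pair,
    yielding a two-channel binaural signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, len(mono) + len(hrir) - 1)

def ild_db(left, right):
    """Broadband interaural level difference in dB
    (ratio of channel energies)."""
    return 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))

# Toy data: 1 s of noise at 48 kHz and dummy 256-tap HRIRs.
rng = np.random.default_rng(0)
mono = rng.standard_normal(48000)
hrir_l = rng.standard_normal(256) * 0.1
hrir_r = rng.standard_normal(256) * 0.1

binaural = binauralize(mono, hrir_l, hrir_r)
```

For an ensemble, each stem would be binauralized with the HRIR pair for its intended direction and the results summed channel-wise.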

Updated: 2022-01-16