A Graphical-User-Interface-Based Azimuth-Collection Method in Autonomous Auditory Localization of Real and Virtual Sound Sources
IEEE Journal of Biomedical and Health Informatics (IF 6.7), Pub Date: 2020-07-23, DOI: 10.1109/jbhi.2020.3011377
Dong Cui, Yuexin Cai, Guangzheng Yu

Auditory localization of spatial sound sources is an important life skill for human beings. For practical, application-oriented measurement of auditory localization ability, the preferred method is a compromise among (i) data accuracy, (ii) the ease of collecting reported directions, and (iii) the cost of hardware and software. The graphical user interface (GUI)-based sound-localization experimental platform proposed here (i) is inexpensive, (ii) can be operated autonomously by the listener, (iii) can store results online, and (iv) supports real or virtual sound sources. To evaluate the accuracy of this method, three groups of azimuthal localization experiments are conducted in the horizontal plane with normal-hearing subjects, using 12 loudspeakers arranged at equal azimuthal intervals of 30°. In these experiments, the azimuths are reported via (i) an assistant, (ii) a motion tracker, or (iii) the newly designed GUI-based method. All three groups of results show localization errors mostly within 5–12°, consistent with previous results from different localization experiments. Finally, virtual sound-source stimuli are integrated into the GUI-based experimental platform. The results with virtual sources suggest that individualized head-related transfer functions yield better spatial sound-source localization performance, which is consistent with previous conclusions and further validates the reliability of this experimental platform.
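The abstract describes the azimuth-collection task only at a high level. Below is a minimal sketch of how such a GUI might gather a listener's reported azimuth and score it against the 30°-spaced loudspeaker directions; the Tkinter compass dialog, the clockwise-from-front azimuth convention, the fixed example target of 30°, and the azimuth_error wrap-around scoring are all assumptions for illustration, not the authors' actual implementation.

import math
import tkinter as tk

# Target directions assumed from the abstract: 12 loudspeakers at equal
# 30-degree azimuthal intervals in the horizontal plane.
TARGET_AZIMUTHS = list(range(0, 360, 30))

def azimuth_error(target_deg, reported_deg):
    """Signed localization error, wrapped to (-180, 180] degrees."""
    diff = (reported_deg - target_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

class AzimuthCollector(tk.Tk):
    """Hypothetical top-view compass on which the listener clicks the
    perceived direction; 0 degrees is straight ahead, increasing clockwise."""

    def __init__(self, radius=150):
        super().__init__()
        self.title("Report perceived azimuth")
        size = 2 * radius + 60
        self.cx = self.cy = size // 2
        self.canvas = tk.Canvas(self, width=size, height=size)
        self.canvas.pack()
        self.canvas.create_oval(self.cx - radius, self.cy - radius,
                                self.cx + radius, self.cy + radius)
        # Label the 12 candidate loudspeaker directions around the circle.
        for az in TARGET_AZIMUTHS:
            rad = math.radians(az)
            x = self.cx + (radius + 15) * math.sin(rad)
            y = self.cy - (radius + 15) * math.cos(rad)
            self.canvas.create_text(x, y, text=str(az))
        self.canvas.bind("<Button-1>", self.on_click)
        self.responses = []  # (reported azimuth, error) pairs, stored locally

    def on_click(self, event, target_deg=30.0):
        # Convert the click position into an azimuth and score it against a
        # hypothetical target direction of 30 degrees for this trial.
        dx, dy = event.x - self.cx, self.cy - event.y
        reported = math.degrees(math.atan2(dx, dy)) % 360.0
        error = azimuth_error(target_deg, reported)
        self.responses.append((reported, error))
        print(f"reported {reported:.1f} deg, error {error:+.1f} deg")

if __name__ == "__main__":
    AzimuthCollector().mainloop()

Under this kind of scoring, a report of 37° for a source at 30° gives an error of +7°, within the 5–12° range the abstract cites, and a report of 5° for a source at 350° wraps correctly to +15° rather than -345°.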

Updated: 2020-07-23