Do you have COVID-19? An artificial intelligence-based screening tool for COVID-19 using acoustic parameters
The Journal of the Acoustical Society of America (IF 2.1), Pub Date: 2021-09-16, DOI: 10.1121/10.0006104
Amir Vahedian-Azimi, Abdalsamad Keramatfar, Maral Asiaee, Seyed Shahab Atashi, Mandana Nourbakhsh

This study aimed to develop an artificial intelligence (AI)-based tool for screening COVID-19 patients based on the acoustic parameters of their voices. Twenty-five acoustic parameters were extracted from voice samples of 203 COVID-19 patients and 171 healthy individuals, each of whom sustained the vowel /a/ for as long as possible after taking a deep breath. The selected acoustic parameters spanned several categories, including fundamental frequency and its perturbation, harmonicity, vocal tract function, airflow sufficiency, and periodicity. After feature extraction, different machine learning methods were tested. A leave-one-subject-out validation scheme was used to tune the hyper-parameters and record the test set results. The models were then compared on accuracy, precision, recall, and F1-score. Based on accuracy (89.71%), recall (91.63%), and F1-score (90.62%), the best model was the feedforward neural network (FFNN); its precision (89.63%) was slightly lower than that of logistic regression (90.17%). Based on these results and the confusion matrices, the FFNN model was employed in the software. This screening tool could be used at home and in public places to check the health of an individual's respiratory system. If any related abnormalities are detected in the test taker's voice, the tool recommends that they seek medical consultation.
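The paper does not include its code; the following is a minimal sketch of the evaluation protocol described above, assuming scikit-learn, a feedforward network approximated by MLPClassifier, and synthetic placeholder feature vectors standing in for the 25 acoustic parameters. It illustrates leave-one-subject-out validation and the four reported metrics, and is not the authors' model or a reproduction of their results.

```python
# Sketch of the comparison described in the abstract: leave-one-subject-out
# validation of a feedforward neural network vs. logistic regression on
# per-subject acoustic feature vectors, scored by accuracy, precision,
# recall, and F1. All data below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)

# Placeholder data: one 25-dimensional feature vector per subject and a binary
# label (1 = COVID-19 positive, 0 = healthy). The study had 374 subjects
# (203 patients + 171 controls); a small synthetic set is used here.
n_subjects, n_features = 40, 25
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)

models = {
    "FFNN": make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=2000, random_state=0)),
    "LogReg": make_pipeline(StandardScaler(),
                            LogisticRegression(max_iter=1000)),
}

# Leave-one-subject-out: each subject is held out once as the test set,
# and the pooled held-out predictions are scored.
loo = LeaveOneOut()
for name, model in models.items():
    preds = np.empty(n_subjects, dtype=int)
    for train_idx, test_idx in loo.split(X):
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    print(f"{name}: acc={accuracy_score(y, preds):.3f} "
          f"prec={precision_score(y, preds):.3f} "
          f"rec={recall_score(y, preds):.3f} "
          f"f1={f1_score(y, preds):.3f}")
```

In a real run, the placeholder matrix X would be replaced by per-subject acoustic measurements (for example, extracted with a phonetic toolkit such as Praat/parselmouth), and hyper-parameters such as the hidden-layer size would themselves be tuned within the leave-one-subject-out loop.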

Updated: 2021-09-16