Adversarial Attacks on Voice Recognition Based on Hyper Dimensional Computing
Journal of Signal Processing Systems (IF 1.6), Pub Date: 2021-02-01, DOI: 10.1007/s11265-020-01634-y
Wencheng Chen, Hongyu Li

Recently, there has been great demand for running Artificial Intelligence (AI) algorithms on Internet of Things (IoT) devices that have only limited computing or transmission resources. Hyper-Dimensional Computing (HDC), which can run efficiently on low-cost CPUs, is one such solution. However, since recent research has shown that AI algorithms are vulnerable to Adversarial Examples (AEs), it is important to investigate the same security issues in other intelligent algorithms such as HDC. In this paper, motivated by AE attacks on AI algorithms, we propose an attack method based on Differential Evolution (DE), which does not rely on gradients. By attacking the VoiceHD model on the Isolet dataset, we show that HDC is also vulnerable to AEs. In our experiments, we launch non-targeted attacks on VoiceHD with a success rate of up to 85.7%.
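The key property of the proposed attack is that DE treats the model as a black box, so no gradients are needed. As a minimal sketch of the idea (not the paper's actual VoiceHD/Isolet setup), the snippet below uses SciPy's `differential_evolution` to search for a bounded perturbation that flips the prediction of a stand-in classifier; the toy linear model, the perturbation budget, and the margin-based fitness function are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a black-box classifier such as VoiceHD:
# a fixed random linear model mapping 10 features to 3 class scores.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 10))

def predict_scores(x):
    return W @ x

x0 = rng.standard_normal(10)                      # clean input
true_label = int(np.argmax(predict_scores(x0)))

# Non-targeted attack: minimize the score margin of the true class
# over a small perturbation delta. Only model outputs are queried,
# never gradients.
def fitness(delta):
    scores = predict_scores(x0 + delta)
    margin = scores[true_label] - np.max(np.delete(scores, true_label))
    return margin                                 # < 0 means misclassified

bounds = [(-0.5, 0.5)] * x0.size                  # per-feature perturbation budget
result = differential_evolution(fitness, bounds, seed=0,
                                maxiter=50, tol=1e-6, polish=False)

adv_label = int(np.argmax(predict_scores(x0 + result.x)))
print(true_label, adv_label, result.fun)
```

A real attack on VoiceHD would replace `predict_scores` with queries to the HDC model on Isolet features and tune the budget and DE hyperparameters; the structure of the search is unchanged.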




Updated: 2021-02-01