Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
arXiv - CS - Cryptography and Security Pub Date : 2021-07-30 , DOI: arxiv-2107.14569
Stefanos Koffas, Jing Xu, Mauro Conti, Stjepan Picek

Deep neural networks are a powerful option for many real-world applications due to their ability to model even complex data relations. However, such networks can be prohibitively expensive to train, making it common to either outsource the training process to third parties or use pretrained neural networks. Unfortunately, such practices make neural networks vulnerable to various attacks, one of which is the backdoor attack. In such an attack, the third party training the model may maliciously inject hidden behaviors into it, so that when a particular input (called a trigger) is fed into the neural network, the network responds with a wrong result. In this work, we explore backdoor attacks on automatic speech recognition systems in which we inject inaudible triggers. By doing so, we make the backdoor attack challenging for legitimate users to detect and thus potentially more dangerous. We conduct experiments on two versions of a dataset and three neural networks, and we explore the performance of our attack with respect to the duration, position, and type of the trigger. Our results indicate that less than 1% of poisoned data is sufficient to deploy a backdoor attack and reach a 100% attack success rate. What is more, while the trigger is inaudible, and thus not limited in duration, we observed that even short, non-continuous triggers result in highly successful attacks.
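The poisoning step described above can be sketched as follows. This is not the authors' implementation: the trigger frequency (21 kHz), sample rate, amplitude, and all function names are illustrative assumptions; the only grounded facts are that the trigger is an inaudible (ultrasonic) signal additively embedded at a chosen position in a small fraction of the training audio, whose labels are flipped to the attacker's target class.

```python
import numpy as np

SAMPLE_RATE = 44100    # Hz; assumed — must exceed twice the trigger frequency (Nyquist)
TRIGGER_FREQ = 21000   # Hz; assumed value above the typical human hearing limit (~20 kHz)

def make_trigger(duration_s: float, amplitude: float = 0.1) -> np.ndarray:
    """Generate an ultrasonic sine tone, inaudible to humans at playback."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * TRIGGER_FREQ * t)

def poison_sample(waveform: np.ndarray, trigger: np.ndarray, position: int = 0) -> np.ndarray:
    """Additively embed the trigger at a given sample offset, clipping at the end."""
    out = waveform.astype(np.float64, copy=True)
    n = min(len(trigger), len(out) - position)
    out[position:position + n] += trigger[:n]
    return out

def poison_dataset(waveforms, labels, trigger, target_label, poison_rate=0.01, seed=0):
    """Poison a small fraction (<1% in the paper) of samples and relabel them."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(waveforms), size=max(1, int(poison_rate * len(waveforms))),
                     replace=False)
    poisoned_waveforms = list(waveforms)
    poisoned_labels = list(labels)
    for i in idx:
        poisoned_waveforms[i] = poison_sample(waveforms[i], trigger)
        poisoned_labels[i] = target_label  # attacker-chosen class
    return poisoned_waveforms, poisoned_labels
```

A model trained on the poisoned set behaves normally on clean audio but predicts the target label whenever the ultrasonic tone is present; because the tone is above the hearing range, a human listener cannot distinguish poisoned from clean recordings.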

Updated: 2021-08-02