Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data
arXiv - CS - Cryptography and Security. Pub Date: 2021-01-20. DOI: arxiv-2101.08030
Francesco Cartella, Orlando Anunciacao, Yuki Funabiki, Daisuke Yamaguchi, Toru Akishita, Olivier Elshocht

Guaranteeing the security of transactional systems is a crucial priority for all institutions that process transactions, in order to protect their businesses against cyberattacks and fraud attempts. Adversarial attacks are novel techniques that, besides having been proven effective at fooling image classification models, can also be applied to tabular data. Adversarial attacks aim to produce adversarial examples, that is, slightly modified inputs that induce the Artificial Intelligence (AI) system to return incorrect outputs that are advantageous to the attacker. In this paper we illustrate a novel approach to modifying and adapting state-of-the-art algorithms to imbalanced tabular data, in the context of fraud detection. Experimental results show that the proposed modifications lead to a perfect attack success rate, obtaining adversarial examples that are also less perceptible when analyzed by humans. Moreover, when applied to a real-world production system, the proposed techniques show the possibility of posing a serious threat to the robustness of advanced AI-based fraud detection procedures.
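To make the core idea concrete, the following is a minimal sketch of an adversarial example on tabular data: a transaction flagged as fraud is nudged, one feature at a time, until a classifier labels it legitimate while the perturbation stays small. The linear scorer, its weights, and the transaction values are all illustrative assumptions, not the paper's actual model or method (the paper adapts state-of-the-art attack algorithms; this is only a greedy toy search).

```python
import numpy as np

# Hypothetical linear fraud scorer: score > 0 means "flagged as fraud".
# Weights and the example transaction below are made up for illustration.
weights = np.array([0.8, 1.2, -0.5])   # e.g. amount, velocity, account_age
bias = -1.0

def is_fraud(x):
    """Return True if the linear model flags the transaction as fraud."""
    return float(weights @ x + bias) > 0.0

x = np.array([2.0, 1.5, 0.5])          # a transaction the model flags as fraud
assert is_fraud(x)

# Greedy adversarial search: nudge only the single most influential
# feature, in the direction that lowers the fraud score, in small steps,
# until the classifier flips. Changing one feature by a small amount
# keeps the modification less perceptible to a human reviewer.
adv = x.copy()
step = 0.05
j = int(np.argmax(np.abs(weights)))    # most influential feature
direction = -np.sign(weights[j])       # move against that weight
while is_fraud(adv):
    adv[j] += direction * step

print("original:    ", x,   "-> fraud =", is_fraud(x))
print("adversarial: ", adv, "-> fraud =", is_fraud(adv))
print("L2 perturbation:", np.linalg.norm(adv - x))
```

Against a linear surrogate this greedy flip always succeeds; the paper's contribution is making such attacks effective and imperceptible against real, non-linear fraud detectors trained on imbalanced data.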

Updated: 2021-01-21