On the Adversarial Robustness of Subspace Learning
IEEE Transactions on Signal Processing (IF 5.4). Pub Date: 2020-01-01. DOI: 10.1109/tsp.2020.2974676
Fuwei Li, Lifeng Lai, Shuguang Cui

In this paper, we study the adversarial robustness of subspace learning problems. Unlike existing work on robust subspace learning, which assumes that data samples are contaminated by gross sparse outliers or small dense noise, we consider a more powerful adversary who can first observe the data matrix and then intentionally modify the whole data matrix. We first characterize the optimal rank-one attack strategy that maximizes the subspace distance between the subspace learned from the original data matrix and the one learned from the modified data matrix. We then generalize the study to the scenario without the rank constraint and characterize the corresponding optimal attack strategy. In addition, our analysis shows that the optimal strategies depend on the singular values of the original data matrix and on the adversary's energy budget. Finally, we provide numerical experiments and practical applications to demonstrate the effectiveness of the attack strategies.
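To make the setting concrete, below is a minimal numpy sketch of the quantities the abstract refers to: the principal subspace learned from a data matrix, the subspace distance between the clean and attacked subspaces, and an energy-budgeted rank-one perturbation. The function names (`top_k_subspace`, `subspace_distance`, `rank_one_perturbation`) and the particular perturbation direction are illustrative assumptions; the heuristic below is not the paper's optimal attack strategy, which the paper derives from the singular values of the original data matrix and the energy budget.

```python
import numpy as np

def top_k_subspace(X, k):
    """Orthonormal basis of the k-dimensional principal subspace of X
    (columns of X are data samples)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def subspace_distance(U, V):
    """Distance between two k-dimensional subspaces, measured as the
    spectral norm of the difference of their projection matrices
    (the sine of the largest principal angle)."""
    return np.linalg.norm(U @ U.T - V @ V.T, ord=2)

def rank_one_perturbation(X, k, budget):
    """Illustrative rank-one modification (an assumption, not the paper's
    optimal strategy): mix the (k+1)-th left singular direction with the
    k-th right singular direction, scaled to the Frobenius-norm budget."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    direction = np.outer(U[:, k], Vt[k - 1, :])  # unit Frobenius norm
    return budget * direction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 20, 200, 3
    # Synthetic data: low-rank signal plus small dense noise.
    X = rng.standard_normal((n, k)) @ rng.standard_normal((k, m)) \
        + 0.1 * rng.standard_normal((n, m))

    U_clean = top_k_subspace(X, k)
    Delta = rank_one_perturbation(X, k, budget=5.0)
    U_attacked = top_k_subspace(X + Delta, k)

    print("energy used      :", np.linalg.norm(Delta, "fro"))
    print("subspace distance:", subspace_distance(U_clean, U_attacked))
```

Even this naive rank-one modification, constrained to a fixed energy budget, can visibly rotate the learned principal subspace; the paper's contribution is to characterize the perturbation that maximizes this subspace distance, with and without the rank-one constraint.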

Updated: 2020-01-01