Few-Shot Website Fingerprinting Attack with Data Augmentation
Security and Communication Networks, Pub Date: 2021-09-16, DOI: 10.1155/2021/2840289
Mantun Chen, Yongjun Wang, Zhiquan Qin, Xiatian Zhu
This work introduces a novel data augmentation method for few-shot website fingerprinting (WF) attacks, where only a handful of training samples per website are available for deep learning model optimization. Whereas earlier WF methods rely on manually engineered feature representations, more recent deep learning alternatives demonstrate that learning feature representations automatically from training data is superior. Nonetheless, this advantage rests on the unrealistic assumption that many training samples exist per website; otherwise, the advantage disappears. To address this, we introduce a model-agnostic, efficient, and harmonious data augmentation (HDA) method that can significantly improve deep WF attack methods. HDA involves both intrasample and intersample data transformations that can be used in a harmonious manner to expand a tiny training dataset into an arbitrarily large collection, thereby effectively and explicitly addressing the intrinsic data scarcity problem. We conducted extensive experiments to validate that HDA boosts state-of-the-art deep learning WF attack models in both closed-world and open-world attack scenarios, in the absence and presence of a strong defense. For instance, in the more challenging and realistic evaluation scenario with a WTF-PAD-based defense, our HDA method surpasses the previous state-of-the-art results by nearly 3% in classification accuracy in the 20-shot learning case. An earlier version of this work, Chen et al. (2021), was presented as a preprint on arXiv (https://arxiv.org/abs/2101.10063).
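To make the intrasample/intersample distinction concrete, below is a minimal, hypothetical sketch of how a few-shot WF training set could be expanded by augmentation. It assumes traffic traces are fixed-length sequences of packet directions (+1 outgoing, -1 incoming, 0 padding); the specific transformations (random span masking, random splicing of two same-class traces) are illustrative placeholders, not the authors' exact HDA operations.

```python
import numpy as np

def intrasample_augment(trace, mask_ratio=0.1, rng=None):
    """Perturb a single trace by zero-masking a short random span (placeholder intrasample transform)."""
    rng = rng or np.random.default_rng()
    out = trace.copy()
    span = max(1, int(len(out) * mask_ratio))
    start = rng.integers(0, len(out) - span)
    out[start:start + span] = 0
    return out

def intersample_augment(trace_a, trace_b, rng=None):
    """Combine two traces of the same website by splicing at a random cut point (placeholder intersample transform)."""
    rng = rng or np.random.default_rng()
    cut = int(rng.integers(1, len(trace_a)))
    return np.concatenate([trace_a[:cut], trace_b[cut:]])

def expand_few_shot_set(traces, target_size, rng=None):
    """Grow a tiny per-website training set to target_size by sampling augmented traces."""
    rng = rng or np.random.default_rng()
    augmented = list(traces)
    while len(augmented) < target_size:
        if len(traces) < 2 or rng.random() < 0.5:
            src = traces[int(rng.integers(len(traces)))]
            augmented.append(intrasample_augment(src, rng=rng))
        else:
            i, j = rng.choice(len(traces), size=2, replace=False)
            augmented.append(intersample_augment(traces[i], traces[j], rng=rng))
    return np.stack(augmented)
```

For example, a website with only 20 recorded traces could be expanded to several hundred training samples per class before fitting a deep WF classifier; the paper's HDA method applies its own intrasample and intersample transformations in a coordinated ("harmonious") manner rather than the placeholder ones shown here.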

Updated: 2021-09-16