Security of Mobile Multimedia Data: The Adversarial Examples for Spatio-temporal Data
Computer Networks (IF 4.4) Pub Date: 2020-07-21, DOI: 10.1016/j.comnet.2020.107432
Yuanyuan Chen, Jing Qiu, Xiaojiang Du, Lihua Yin, Zhihong Tian

With the rapid development of the Internet, mobile multimedia devices are becoming increasingly common. As these devices grow in popularity, they produce a large amount of multimedia data, and how to utilize this data is an essential question in the wireless communication area. In recent years, deep learning has achieved good performance in multimedia data analysis. However, recent research shows that deep learning models are vulnerable to carefully crafted samples, known as adversarial examples. In this paper, we explore the security of deep learning in mobile multimedia computing systems. We conduct experiments on spatio-temporal data (GPS trajectory data), attacking deep learning models for trajectory mode classification. Since both spatio-temporal data and image data are continuous, we adopt an algorithm from computer vision, the Fast Gradient Sign Method (FGSM), to generate adversarial examples. We propose an Adversarial Sample generation model based on a Convolutional AutoEncoder (ASCAE). First, we design a convolutional autoencoder (CAE) that converts trajectory data into image-format data. Then, we use the image-format data to train trajectory mode classification models. Based on the image-format data and the trained models, we apply the FGSM algorithm to generate adversarial examples. Finally, we use the adversarial examples to attack the trained models. Experimental results demonstrate that the adversarial examples successfully attack the deep learning models and, moreover, transfer well between models. Therefore, in mobile multimedia computing systems, adversarial examples pose a serious data security problem for deep learning models.
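The attack step described above, FGSM, perturbs an input in the direction of the sign of the loss gradient: x_adv = x + ε·sign(∇ₓ L). The following is a minimal sketch of that idea, not the paper's ASCAE pipeline: a softmax-regression classifier over a flattened 8×8 "trajectory image" stands in for the trained trajectory-mode CNN, and the weights, image size, and number of travel-mode classes are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of FGSM on a stand-in "trajectory-mode classifier".
# The paper attacks CNNs trained on image-format trajectory data produced
# by a convolutional autoencoder; here a linear softmax model over random
# placeholder data illustrates the same perturbation rule.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """FGSM: x_adv = x + eps * sign(grad_x cross_entropy(model(x), y))."""
    p = softmax(W @ x + b)   # predicted class probabilities
    p[y] -= 1.0              # d(cross-entropy)/d(logits) for true label y
    grad_x = W.T @ p         # chain rule back to the input pixels
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)  # keep valid pixel range

n_classes, n_pixels = 4, 64              # assumed: 4 travel modes, 8x8 image
W = rng.normal(size=(n_classes, n_pixels))
b = np.zeros(n_classes)
x = rng.random(n_pixels)                 # stand-in image-format trajectory
y = int(np.argmax(softmax(W @ x + b)))   # attack the model's own prediction

x_adv = fgsm(x, y, W, b, eps=0.1)
print("L_inf perturbation:", np.max(np.abs(x_adv - x)))
```

Because the perturbation is bounded by ε in the L∞ norm, the adversarial image stays visually close to the original while the loss on the predicted label can only increase for this convex stand-in model.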




Updated: 2020-08-04