Utilizing QR codes to verify the visual fidelity of image datasets for machine learning
Journal of Network and Computer Applications (IF 7.7) Pub Date: 2020-09-23, DOI: 10.1016/j.jnca.2020.102834
Yang-Wai Chow, Willy Susilo, Jianfeng Wang, Richard Buckland, Joonsang Baek, Jongkil Kim, Nan Li

Machine learning is becoming increasingly popular in modern technology and has been adopted in various application areas. However, researchers have demonstrated that machine learning models are vulnerable to adversarial examples in their inputs, which has given rise to a field of research known as adversarial machine learning. Potential adversarial attacks include methods of poisoning datasets by perturbing input samples to mislead machine learning models into producing undesirable results. While such perturbations are often subtle and imperceptible from the perspective of a human, they can greatly affect the performance of machine learning models. This paper presents two methods of verifying the visual fidelity of image-based datasets by using QR codes to detect perturbations in the data. In the first method, a verification string is stored for each image in a dataset. These verification strings can be used to determine whether or not an image in the dataset has been perturbed. In the second method, only a single verification string is stored and can be used to verify whether an entire dataset is intact.
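The abstract describes two verification modes: a per-image verification string and a single dataset-wide string. Purely as an illustrative sketch of those two modes, the Python fragment below uses a plain SHA-256 digest of the raw pixel data as a stand-in for a verification string. The function names (`verification_string`, `verify_dataset`, `dataset_verification_string`) are hypothetical, and the hash-based construction is an assumption for illustration; it is not the authors' QR-code scheme, which stores and recovers the verification data differently.

```python
import hashlib
import numpy as np
from PIL import Image

def verification_string(image_path: str) -> str:
    """Hypothetical per-image verification string: SHA-256 of the decoded RGB pixels."""
    pixels = np.asarray(Image.open(image_path).convert("RGB"))
    return hashlib.sha256(pixels.tobytes()).hexdigest()

def verify_dataset(image_paths, stored_strings):
    """Sketch of method 1: return the images whose current string no longer
    matches the string stored when the dataset was published."""
    return [path for path, stored in zip(image_paths, stored_strings)
            if verification_string(path) != stored]

def dataset_verification_string(image_paths) -> str:
    """Sketch of method 2: one string for the whole dataset, here built by
    hashing the per-image digests in a fixed (sorted) order."""
    digest = hashlib.sha256()
    for path in sorted(image_paths):
        digest.update(verification_string(path).encode())
    return digest.hexdigest()
```

In the first mode, a mismatch pinpoints exactly which image was perturbed; in the second, a single stored string can only confirm whether the dataset as a whole is intact, at far lower storage cost.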



Update date: 2020-10-04