Copycat CNN: Are Random Non-Labeled Data Enough to Steal Knowledge from Black-box Models?
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-01-21, DOI: arxiv-2101.08717
Jacson Rodrigues Correia-Silva, Rodrigo F. Berriel, Claudine Badue, Alberto F. De Souza, Thiago Oliveira-Santos

Convolutional neural networks have recently been successful, enabling companies to develop neural-based products. Developing such products demands an expensive process involving data acquisition and annotation, as well as model generation, which usually requires experts. Given all these costs, companies are concerned about the security of their models against copying, and deliver them as black-boxes accessed through APIs. Nonetheless, we argue that even black-box models still have some vulnerabilities. In a preliminary work, we presented a simple yet powerful method to copy black-box models by querying them with natural random images. In this work, we consolidate and extend the copycat method: (i) some constraints are waived; (ii) an extensive evaluation with several problems is performed; (iii) models are copied between different architectures; and (iv) a deeper analysis is performed by looking at the copycat behavior. Results show that natural random images are effective in generating copycats for several problems.
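The abstract describes the copycat idea at a high level: query a black-box model with natural random images, keep only its predicted labels, and train a substitute on those stolen labels. The sketch below illustrates that loop in PyTorch under stated assumptions; the local "oracle" model stands in for a remote API that returns hard labels, and the folder path, architectures, and hyperparameters are illustrative choices, not the authors' exact setup.

```python
# Minimal sketch of the copycat loop, assuming a PyTorch setup.
# The "oracle" below is a stand-in for a black-box API returning hard labels;
# "random_image_dir", the architectures, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Natural random images unrelated to the target problem (hypothetical folder).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
random_images = datasets.ImageFolder("random_image_dir", transform=transform)
loader = DataLoader(random_images, batch_size=32, shuffle=True)

# Stand-in for the victim model; in practice this is a remote black-box call.
oracle = models.resnet18(weights=None).eval().to(device)

# The copycat may use a different architecture than the victim (point iii).
copycat = models.vgg16(weights=None, num_classes=1000).to(device)
optimizer = torch.optim.SGD(copycat.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

copycat.train()
for epoch in range(5):  # illustrative number of epochs
    for images, _ in loader:  # source labels are ignored: data is non-labeled
        images = images.to(device)
        with torch.no_grad():
            # Query the black box and keep only its hard predictions.
            stolen_labels = oracle(images).argmax(dim=1)
        optimizer.zero_grad()
        loss = criterion(copycat(images), stolen_labels)
        loss.backward()
        optimizer.step()
```

The key design point is that the attacker never needs ground-truth annotations or the victim's training data; the black box itself labels arbitrary natural images, and standard supervised training on those pseudo-labels transfers its decision behavior to the copycat.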

Updated: 2021-01-22