Can differential privacy practically protect collaborative deep learning inference for IoT?
Wireless Networks (IF 2.1), Pub Date: 2022-09-05, DOI: 10.1007/s11276-022-03113-7
Jihyeon Ryu, Yifeng Zheng, Yansong Gao, Alsharif Abuadbba, Junyaup Kim, Dongho Won, Surya Nepal, Hyoungshick Kim, Cong Wang

Collaborative inference has recently emerged as an attractive framework for applying deep learning to Internet of Things (IoT) applications: a DNN model is split into several subpart models distributed between resource-constrained IoT devices and the cloud. However, a reconstruction attack was recently proposed that recovers the original input image from the intermediate outputs collected from the local models in collaborative inference. A promising technique for addressing such privacy issues is to adopt differential privacy, so that the intermediate outputs are protected at a small cost in accuracy. In this paper, we provide the first systematic study of the effectiveness of differential privacy in defending collaborative inference against the reconstruction attack. We specifically explore the privacy-accuracy trade-offs for three collaborative inference models on four datasets (SVHN, GTSRB, STL-10, and CIFAR-10). Our experimental analysis demonstrates that differential privacy can practically be applied to collaborative inference when a dataset has small intra-class variations in appearance. With the (empirically) optimized privacy budget parameter in our study, the differential privacy technique incurs an accuracy loss of 0.476%, 2.066%, 5.021%, and 12.454% on the SVHN, GTSRB, STL-10, and CIFAR-10 datasets, respectively, while thwarting the reconstruction attack.
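To make the setting concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a DNN split into a device-side and a cloud-side submodel, with Laplace noise added to the intermediate output before it leaves the device. The split point, clipping bound, noise mechanism, and epsilon value here are illustrative assumptions, not the exact models or parameters evaluated in the paper.

```python
import torch
import torch.nn as nn

class DeviceSubmodel(nn.Module):
    """Early layers, run on the resource-constrained IoT device."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x)

class CloudSubmodel(nn.Module):
    """Remaining layers, run on the cloud."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, num_classes),  # assumes a 32x32 input image
        )

    def forward(self, z):
        return self.classifier(z)

def privatize(z, epsilon=1.0, sensitivity=1.0):
    """Perturb the intermediate output with Laplace noise.

    Activations are first clipped to bound the sensitivity; the noise
    scale is sensitivity / epsilon, so a smaller epsilon (privacy budget)
    gives stronger protection at a higher accuracy cost. The values
    here are placeholders, not the paper's tuned budget.
    """
    z = torch.clamp(z, -sensitivity, sensitivity)
    noise = torch.distributions.Laplace(0.0, sensitivity / epsilon).sample(z.shape)
    return z + noise

# Device side: compute and perturb the intermediate output.
device_model = DeviceSubmodel()
x = torch.randn(1, 3, 32, 32)          # e.g., one CIFAR-10-sized image
z_noisy = privatize(device_model(x))   # only this noisy tensor leaves the device

# Cloud side: finish inference on the noisy intermediate features.
cloud_model = CloudSubmodel()
logits = cloud_model(z_noisy)
print(logits.argmax(dim=1))
```

Decreasing epsilon strengthens the protection against reconstruction from the intermediate output but, as the reported numbers suggest, increases the accuracy loss; the study's contribution is empirically locating a budget that balances the two for each dataset.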




Updated: 2022-09-06