Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-03-16, DOI: arxiv-2003.12365
Sharif Abuadbba, Kyuyeon Kim, Minki Kim, Chandra Thapa, Seyit A. Camtepe, Yansong Gao, Hyoungshick Kim, Surya Nepal

A new collaborative learning technique, called split learning, was recently introduced with the aim of protecting user data privacy without revealing raw input data to a server. It runs a deep neural network model collaboratively, with the model split into two parts: one for the client and the other for the server. The server therefore has no direct access to the raw data processed at the client. Until now, split learning has been considered a promising approach to protecting a client's raw data; for example, clients' data has been protected in healthcare image applications using 2D convolutional neural network (CNN) models. However, it is still unclear whether split learning can be applied to other deep learning models, in particular 1D CNNs. In this paper, we examine whether split learning can be used to perform privacy-preserving training for 1D CNN models. To answer this, we first design and implement a 1D CNN model under split learning and validate its efficacy in detecting heart abnormalities using medical ECG data. We observe that the 1D CNN model under split learning can achieve the same accuracy (98.9%) as the original (non-split) model. However, our evaluation demonstrates that split learning may fail to protect raw data privacy in 1D CNN models. To address the observed privacy leakage in split learning, we adopt two mitigation techniques: 1) adding more hidden layers to the client side and 2) applying differential privacy. Although these mitigation techniques help reduce privacy leakage, they have a significant impact on model accuracy. Hence, based on these results, we conclude that split learning alone would not be sufficient to maintain the confidentiality of raw sequential data in 1D CNN models.
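The abstract does not specify the authors' architecture or training loop; the following is a minimal sketch of how a 1D CNN could be split between a client and a server, assuming PyTorch and hypothetical layer sizes (single-lead ECG segments of length 128, five heartbeat classes). Only the cut-layer activations cross the client/server boundary.

```python
# Minimal split-learning sketch for a 1D CNN (not the paper's implementation).
# Hypothetical setup: single-lead ECG segments of length 128, 5 classes.
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    """Client-side layers: raw ECG signals never leave the client."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),        # output: (batch, 16, 64)
        )
    def forward(self, x):
        return self.layers(x)       # only these activations are transmitted

class ServerModel(nn.Module):
    """Server-side layers: sees cut-layer activations, never raw signals."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),        # output: (batch, 32, 32)
            nn.Flatten(),
            nn.Linear(32 * 32, num_classes),
        )
    def forward(self, a):
        return self.layers(a)

# One simulated training step: the client sends activations forward; the
# server back-propagates the loss and returns the gradient at the cut layer.
client, server = ClientModel(), ServerModel()
opt_c = torch.optim.Adam(client.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(server.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 128)                 # batch of ECG segments (client)
y = torch.randint(0, 5, (8,))              # labels

act = client(x)                            # client-side forward pass
act_sent = act.detach().requires_grad_()   # "transmitted" activations
loss = loss_fn(server(act_sent), y)        # server-side forward pass + loss

opt_c.zero_grad(); opt_s.zero_grad()
loss.backward()                            # server grads, up to the cut layer
act.backward(act_sent.grad)                # client resumes backprop from the
                                           # gradient returned by the server
opt_c.step(); opt_s.step()
```

In this sketch, the differential-privacy mitigation mentioned above could, for instance, be approximated by adding calibrated noise to `act_sent` before transmission, and the "more hidden layers" mitigation corresponds to moving additional layers from `ServerModel` into `ClientModel`.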

Updated: 2020-03-30