Information Sciences (IF 8.1) · Pub Date: 2021-06-02 · DOI: 10.1016/j.ins.2021.05.073
Zijie Pan, Jiajin Zeng, Riqiang Cheng, Hongyang Yan, Jin Li
The success of deep neural networks has benefited many fields, such as finance, medicine, and speech recognition. Machine learning models in these fields are typically trained on massive amounts of distributed and highly personalized data harvested directly from users. Concerns over data privacy and the demand for better data exploitation have prompted the design of several secure schemes that allow an untrusted server to train ML models for one or multiple parties. However, these existing schemes focus only on network parameters and hardly extend their optimization range to the model architecture. Since the performance of a neural network is closely related to both its parameters and its architecture, it is difficult for service providers to deliver customized and flexible neural networks to each client. To this end, in this paper we propose PNAS, a novel MLaaS framework that enables a server to jointly optimize network parameters and architecture while ensuring the privacy of the training sets. A double-encryption scheme is derived to prevent privacy leakage from the samples themselves, as well as from intermediate feature maps during training. Specifically, we adopt functional encryption and feature transformation to secure forward and back propagation. Extensive experiments demonstrate the superiority of our proposal.
PNAS: A privacy-preserving framework for neural architecture search services