Lightweight Privacy-Preserving Training and Evaluation for Discretized Neural Networks
IEEE Internet of Things Journal (IF 8.2) Pub Date: 2019-09-18, DOI: 10.1109/jiot.2019.2942165
Jialu Chen, Jun Zhou, Zhenfu Cao, Athanasios V. Vasilakos, Xiaolei Dong, Kim-Kwang Raymond Choo

Machine learning, particularly the neural network (NN), is widely deployed in a broad range of applications. To reduce the computational burden on resource-constrained clients, large volumes of historical private data must be outsourced to a semi-trusted or malicious cloud for model training and evaluation. To preserve privacy, most existing work relies either on public-key fully homomorphic encryption (FHE), which incurs considerable computational cost and ciphertext expansion, or on secure multiparty computation (SMC), which requires multiple rounds of interaction between the user and the cloud. To address these issues, this article proposes LPTE, a lightweight privacy-preserving model training and evaluation scheme for discretized NNs (DiNNs). First, we put forward an efficient single-key fully homomorphic data encapsulation mechanism (SFH-DEM) that does not rely on public-key FHE. Based on SFH-DEM, a series of atomic computations over the encrypted domain, including multivariate polynomials, nonlinear activation functions, gradient functions, and maximum operations, are devised as building blocks. On top of these, the LPTE scheme for DiNNs is constructed, and it can also be extended to convolutional NNs. Finally, we give formal security proofs for dataset privacy, model training privacy, and model evaluation privacy in the semi-honest setting, and we implement experiments on the real-world MNIST handwritten digit dataset to demonstrate the high efficiency and accuracy of the proposed LPTE.
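
The abstract does not spell out the DiNN construction or the encrypted-domain operators of SFH-DEM. As a rough, plaintext-only sketch (assuming integer weights and inputs with a sign activation, a common way to discretize a network for homomorphic evaluation), the forward pass and the final maximum operation mentioned among the building blocks might look like the following. The function names, layer sizes, and ternary weight range are illustrative assumptions, not the paper's implementation.

import numpy as np

def sign(x):
    # Discretized nonlinear activation: maps every value to {-1, +1}
    return np.where(x >= 0, 1, -1)

def dinn_forward(x, weights):
    """Plaintext forward pass of a discretized NN (DiNN).

    x       -- integer input vector (e.g., a flattened image)
    weights -- list of integer weight matrices, one per layer
    Returns the argmax over the output scores, mirroring the
    'maximum operation' building block used for classification.
    """
    a = x
    for W in weights[:-1]:
        a = sign(W @ a)          # integer matrix product + sign activation
    scores = weights[-1] @ a     # final linear layer
    return int(np.argmax(scores))

# Toy usage on a flattened 28x28 MNIST-style input (hypothetical sizes)
rng = np.random.default_rng(0)
x = rng.integers(-1, 2, size=784)           # ternarized pixels in {-1, 0, 1}
W1 = rng.integers(-1, 2, size=(30, 784))    # hidden layer with 30 units
W2 = rng.integers(-1, 2, size=(10, 30))     # 10 output classes
print(dinn_forward(x, [W1, W2]))

Because every intermediate value stays a small integer, such a network is amenable to evaluation over encrypted data; the paper's contribution is to perform the corresponding operations (polynomial evaluation, activation, gradients, maximum) under the single-key SFH-DEM rather than in plaintext as above.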
