Blind Faith: Privacy-Preserving Machine Learning using Function Approximation
arXiv - CS - Cryptography and Security. Pub Date: 2021-07-29, DOI: arxiv-2107.14338
Tanveer Khan, Alexandros Bakas, Antonis Michalas

Over the past few years, a significant increase in the adoption of cloud-based services has driven tremendous growth in machine learning. As a result, various solutions have been proposed in which machine learning models run on a remote cloud provider. However, when such a model is deployed on an untrusted cloud, it is of vital importance that users' privacy is preserved. To this end, we propose Blind Faith -- a machine learning model in which the training phase is performed on plaintext data, but the classification of users' inputs is performed on homomorphically encrypted ciphertexts. To make our construction compatible with homomorphic encryption, we approximate the activation functions using Chebyshev polynomials. This allows us to build a privacy-preserving machine learning model that can classify encrypted images. Blind Faith preserves users' privacy since it achieves high-accuracy predictions by computing directly on encrypted data.
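To illustrate the core idea of the activation-function approximation, the sketch below shows how a non-polynomial activation can be replaced by a low-degree Chebyshev approximation so that inference reduces to additions and multiplications, the operations supported by homomorphic encryption schemes. This is a minimal illustration, not the paper's implementation: the choice of the sigmoid activation, the input interval [-8, 8], the polynomial degree 7, and the use of numpy's Chebyshev helpers are all assumptions made for this example.

```python
# Minimal sketch (assumptions: sigmoid activation, inputs in [-8, 8], degree 7;
# not the paper's code). The activation is approximated by a Chebyshev fit and
# then evaluated with only additions, multiplications, and multiplication by
# plaintext constants -- arithmetic that HE schemes can perform on ciphertexts.
import numpy as np
from numpy.polynomial import chebyshev as C

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Work on the standard Chebyshev interval [-1, 1]; inputs from [-8, 8] are
# rescaled by the plaintext constant 1/SCALE before evaluation.
SCALE = 8.0
t = np.linspace(-1.0, 1.0, 2001)
cheb_coeffs = C.chebfit(t, sigmoid(SCALE * t), deg=7)

# Convert to ordinary power-basis coefficients c0 + c1*t + ... + c7*t^7,
# the form that would be evaluated on ciphertexts via Horner's rule.
power_coeffs = C.cheb2poly(cheb_coeffs)

def approx_sigmoid(x):
    # Horner evaluation: only + and * (and a plaintext rescaling) are used,
    # so the same arithmetic can be carried out on encrypted values.
    t = x / SCALE
    acc = np.zeros_like(t)
    for c in reversed(power_coeffs):
        acc = acc * t + c
    return acc

# Plaintext sanity check: report the maximum approximation error.
xs = np.linspace(-8.0, 8.0, 9)
print(np.max(np.abs(approx_sigmoid(xs) - sigmoid(xs))))
```

In an encrypted-inference pipeline, the power-basis coefficients would be baked into the model and the Horner loop would be carried out by the HE library on ciphertexts rather than on plaintext numpy arrays.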

Updated: 2021-08-02