Customizing Trusted AI Accelerators for Efficient Privacy-Preserving Machine Learning
arXiv - CS - Hardware Architecture Pub Date : 2020-11-12 , DOI: arxiv-2011.06376
Peichen Xie, Xuanle Ren, Guangyu Sun

The use of trusted hardware has become a promising solution for enabling privacy-preserving machine learning. In particular, users can upload their private data and models to a hardware-enforced trusted execution environment (e.g., an enclave on an Intel SGX-enabled CPU) and run machine learning tasks inside it with confidentiality and integrity guaranteed. To improve performance, AI accelerators have been widely adopted for modern machine learning workloads. However, how to protect privacy on an AI accelerator remains an open question. To address this question, we propose a solution for efficient privacy-preserving machine learning based on an unmodified trusted CPU and a customized trusted AI accelerator. We carefully leverage cryptographic primitives to establish trust and to protect the channel between the CPU and the accelerator. As a case study, we demonstrate our solution on the open-source Versatile Tensor Accelerator (VTA). Evaluation results show that the proposed solution provides efficient privacy-preserving machine learning at a small design cost and moderate performance overhead.
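The abstract does not specify which cryptographic primitives protect the CPU-accelerator channel, but the stated goals (confidentiality and integrity over an untrusted bus) are conventionally met with an encrypt-then-MAC construction under session keys shared after attestation. The sketch below is purely illustrative and not the paper's actual protocol; all names (`seal`, `open_sealed`, the keys) are assumptions, and the SHA-256 counter-mode keystream stands in for a vetted AEAD such as AES-GCM, which a real design would use.

```python
# Illustrative encrypt-then-MAC channel between a trusted CPU enclave and a
# trusted accelerator, assuming session keys were already established (e.g.,
# via attestation and key exchange). NOT the paper's protocol; a sketch only.
import hashlib
import hmac
import secrets


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a toy stream cipher (illustration only;
    # production designs use a standardized AEAD such as AES-GCM).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt, then MAC the nonce and ciphertext for integrity.
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag


def open_sealed(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    # Verify the MAC in constant time before decrypting.
    nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))


enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
msg = b"model weights chunk"
sealed = seal(enc_key, mac_key, msg)
assert open_sealed(enc_key, mac_key, sealed) == msg
```

Any bit flipped in transit changes the HMAC verification input, so tampered messages are rejected before decryption, which is the integrity half of the guarantee; the keystream XOR provides the confidentiality half.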

Updated: 2020-11-13