An Efficient Parallel Secure Machine Learning Framework on GPUs
IEEE Transactions on Parallel and Distributed Systems (IF 5.6), Pub Date: 2021-02-12, DOI: 10.1109/tpds.2021.3059108
Feng Zhang, Zheng Chen, Chenyang Zhang, Amelie Chi Zhou, Jidong Zhai, Xiaoyong Du

Machine learning is widely used in our daily lives. Large amounts of data are continuously produced and transmitted to the cloud for model training and data processing, which raises a problem: how to preserve the security of the data. Recently, a secure machine learning system named SecureML was proposed to solve this issue using two-party computation. However, due to the heavy computational cost of two-party computation, secure machine learning is about 2× slower than the original machine learning methods. Previous work on secure machine learning has mostly focused on novel protocols or on improving accuracy, while performance has been ignored. In this article, we propose ParSecureML, a GPU-based framework that improves the performance of secure machine learning algorithms based on two-party computation. The main challenges in developing ParSecureML lie in the complex computation patterns, frequent intra-node data transmission between CPU and GPU, and complicated inter-node data dependences. To handle these challenges, we propose a series of novel solutions, including profiling-guided adaptive GPU utilization, a fine-grained double pipeline for intra-node CPU-GPU cooperation, and compressed transmission for inter-node communication. Moreover, we integrate architecture-specific optimizations, such as Tensor Cores, into ParSecureML. To the best of our knowledge, this is the first GPU-based secure machine learning framework. Compared to the state-of-the-art framework, ParSecureML achieves an average speedup of 33.8×. ParSecureML can also be applied to inference, where it achieves a 31.7× speedup on average.
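To make the two-party-computation bottleneck concrete, the sketch below shows the standard building block behind SecureML-style protocols: additive secret sharing over Z_{2^64} combined with a precomputed Beaver multiplication triple. It is a minimal single-process simulation of both parties (Python/NumPy assumed; this is not ParSecureML's implementation), but the masked matrix products it performs are the kind of dense operations a GPU framework like ParSecureML can accelerate.

```python
# Minimal simulation of SecureML-style two-party secure matrix
# multiplication: additive secret sharing over Z_{2^64} plus a
# precomputed Beaver triple. Illustrative only, not ParSecureML's code.
import numpy as np

rng = np.random.default_rng(0)

def rand_ring(shape):
    # Uniform elements of Z_{2^64}; uint64 arithmetic wraps mod 2^64.
    return rng.integers(0, 2**64, size=shape, dtype=np.uint64)

def share(x):
    # Split x into additive shares: x = x0 + x1 (mod 2^64).
    x0 = rand_ring(x.shape)
    return x0, x - x0

m, k, n = 4, 3, 2

# Offline phase: a triple C = A @ B with random A, B is generated
# (e.g., via oblivious transfer or homomorphic encryption) and shared.
A, B = rand_ring((m, k)), rand_ring((k, n))
A0, A1 = share(A)
B0, B1 = share(B)
C0, C1 = share(A @ B)

# Online phase: the parties hold shares of private inputs X and Y.
X = rng.integers(0, 100, size=(m, k), dtype=np.uint64)
Y = rng.integers(0, 100, size=(k, n), dtype=np.uint64)
X0, X1 = share(X)
Y0, Y1 = share(Y)

# The parties reveal only the masked values E = X - A and F = Y - B;
# since A and B are uniformly random, E and F leak nothing about X, Y.
E = (X0 - A0) + (X1 - A1)
F = (Y0 - B0) + (Y1 - B1)

# Each party computes its share of Z = X @ Y locally; only one party
# adds the public E @ F term. These matrix products dominate the cost.
Z0 = E @ B0 + A0 @ F + C0
Z1 = E @ F + E @ B1 + A1 @ F + C1

assert np.array_equal(Z0 + Z1, X @ Y)  # shares reconstruct the product
```

As the sketch suggests, each secure multiplication replaces one plaintext matrix product with several masked ones plus a round of share exchange, which is why GPU offload, intra-node CPU-GPU pipelining, and compressed inter-node transmission are the levers the framework targets.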

Updated: 2021-02-12