Communication-efficient distributed multi-task learning with matrix sparsity regularization
Machine Learning (IF 7.5) Pub Date: 2019-10-07, DOI: 10.1007/s10994-019-05847-6
Qiang Zhou, Yu Chen, Sinno Jialin Pan

This work focuses on distributed optimization for multi-task learning with matrix sparsity regularization. We propose a fast, communication-efficient distributed optimization method for solving this problem. With the proposed method, the training data of different tasks can be geo-distributed across different local machines, and the tasks can be learned jointly through the matrix sparsity regularization without the need to centralize the data. We theoretically prove that the proposed method enjoys a fast convergence rate for different types of loss functions in the distributed setting. To further reduce the communication cost of the distributed optimization procedure, we propose a data screening approach that safely filters out inactive features (variables). Finally, we conduct extensive experiments on both synthetic and real-world datasets to demonstrate the effectiveness of the proposed method.
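The abstract does not state the regularizer explicitly, but "matrix sparsity regularization" in multi-task learning most commonly refers to the l2,1 norm, ||W||_{2,1} = sum_j ||W_{j,:}||_2, applied to the d x T matrix W that stacks the task weight vectors as columns; its zero rows correspond to features that are inactive for every task. Below is a minimal centralized proximal-gradient sketch of that objective for least-squares losses, intended only as a baseline illustration: it is not the authors' distributed algorithm, and the names prox_l21 and proximal_gradient_mtl are hypothetical.

import numpy as np

def prox_l21(W, tau):
    # Proximal operator of tau * ||W||_{2,1}: row-wise soft-thresholding.
    # Rows whose L2 norm falls below tau are zeroed out, which is the
    # joint-feature-selection ("matrix sparsity") effect.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W

def proximal_gradient_mtl(X_list, y_list, lam, step, n_iter=200):
    # X_list[t]: n_t x d design matrix of task t; y_list[t]: its targets.
    # Minimizes sum_t (1 / (2 n_t)) * ||X_t w_t - y_t||^2 + lam * ||W||_{2,1}
    # by plain proximal gradient descent (ISTA). Returns the d x T matrix W.
    T, d = len(X_list), X_list[0].shape[1]
    W = np.zeros((d, T))
    for _ in range(n_iter):
        G = np.empty_like(W)
        for t in range(T):
            resid = X_list[t] @ W[:, t] - y_list[t]
            G[:, t] = X_list[t].T @ resid / len(y_list[t])
        W = prox_l21(W - step * G, step * lam)
    return W

In a distributed variant along the lines the abstract describes, the inner loop over tasks would run on the machine holding each task's data, so only d-dimensional gradient columns, not the raw data, would cross the network each round; a safe screening rule could additionally certify some rows of W as zero in advance, so their coordinates need not be communicated at all.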
