Randomized Neural Networks Based Decentralized Multi-Task Learning via Hybrid Multi-Block ADMM
IEEE Transactions on Signal Processing (IF 4.6), Pub Date: 2021-05-11, DOI: 10.1109/tsp.2021.3078625
Yu Ye, Ming Xiao, Mikael Skoglund

In multi-task learning (MTL), related tasks learn jointly to improve generalization performance. To exploit the high learning speed of feed-forward neural networks (FNN), we apply the randomized single-hidden-layer FNN (RSF) to the MTL problem, where the output weights of the RSFs for all tasks are learned collaboratively. We first present the RSF-based MTL problem in the centralized setting and solve it with the proposed MTL-RSF algorithm. Since the data sets of different tasks are often geo-distributed, we then study decentralized machine learning. We formulate the decentralized RSF-based MTL problem as a majorized multi-block optimization with coupled bi-convex objective functions. To solve this problem, we propose the DMTL-RSF algorithm, a hybrid Jacobian and Gauss-Seidel proximal multi-block alternating direction method of multipliers (ADMM). Further, to reduce the computation load of DMTL-RSF, we present DMTL-RSF with first-order approximation (FO-DMTL-RSF). Theoretical analysis shows that convergence of the proposed decentralized algorithms to a stationary point can be guaranteed under certain conditions. Through simulations, we demonstrate the convergence of the presented algorithms and show that they can outperform existing MTL methods. Moreover, by adjusting the dimension of the hidden feature space, DMTL-RSF exhibits a trade-off between communication load and learning accuracy.
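The abstract does not spell out the RSF model itself. As a minimal sketch of the standard randomized single-hidden-layer FNN idea it builds on: the hidden-layer weights are drawn at random and kept fixed, and only the output weights are learned, here in closed form by ridge regression with a tanh activation. The names (`train_rsf`, `n_hidden`, `reg`) and the choice of activation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def train_rsf(X, y, n_hidden=100, reg=1e-2, seed=0):
    """Sketch of a randomized single-hidden-layer FNN (RSF).

    Hidden-layer weights are random and frozen; only the output
    weights are learned, here via closed-form ridge regression.
    """
    rng = np.random.default_rng(seed)
    # Random, fixed hidden-layer weights and biases.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden feature representation (tanh is an assumed activation).
    H = np.tanh(X @ W + b)
    # Output weights: regularized least squares in closed form.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_rsf(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is trained, fitting reduces to a single linear solve, which is the "high learning speed" the abstract refers to; in the MTL setting it is these per-task output weights that are learned collaboratively.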

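The paper's decentralized solver is a hybrid Jacobian and Gauss-Seidel proximal multi-block ADMM over a coupled bi-convex objective; its exact updates are given in the paper, not in this abstract. As a hedged illustration of the general multi-block ADMM pattern only (parallel local closed-form solves followed by coupling and dual updates), here is a minimal global-consensus ADMM sketch in which tasks couple their RSF output weights through a shared variable. The full-consensus coupling, `consensus_admm`, and `rho` are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def consensus_admm(H_list, y_list, rho=1.0, n_iter=100):
    """Global-consensus ADMM sketch for collaborative output weights.

    Task k holds local hidden features H_k and targets y_k and keeps
    a local weight vector beta_k, coupled to a shared z through the
    constraint beta_k = z. This is not the paper's hybrid
    Jacobian/Gauss-Seidel proximal multi-block ADMM.
    """
    K = len(H_list)
    d = H_list[0].shape[1]
    z = np.zeros(d)
    betas = [np.zeros(d) for _ in range(K)]
    duals = [np.zeros(d) for _ in range(K)]
    for _ in range(n_iter):
        # Local updates (independent given z, i.e. Jacobi-style):
        # closed-form solve of a ridge-like subproblem per task.
        for k in range(K):
            A = H_list[k].T @ H_list[k] + rho * np.eye(d)
            rhs = H_list[k].T @ y_list[k] + rho * (z - duals[k])
            betas[k] = np.linalg.solve(A, rhs)
        # Consensus update: average of local weights plus scaled duals.
        z = np.mean([betas[k] + duals[k] for k in range(K)], axis=0)
        # Dual ascent enforcing beta_k -> z.
        for k in range(K):
            duals[k] += betas[k] - z
    return betas, z
```

Note the communication pattern: each round exchanges vectors of the hidden-feature dimension d, which is why shrinking the hidden feature space trades learning accuracy for communication load, as the abstract observes.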
Updated: 2021-05-11