Distributed Sparse Optimization With Weakly Convex Regularizer: Consensus Promoting and Approximate Moreau Enhanced Penalties Towards Global Optimality
IEEE Transactions on Signal and Information Processing over Networks (IF 3.0). Pub Date: June 13, 2022. DOI: 10.1109/tsipn.2022.3181729
Kei Komuro, Masahiro Yukawa, Renato L. G. Cavalcante

We propose a promising framework for distributed sparse optimization based on weakly convex regularizers. More specifically, we pose two distributed optimization problems to recover sparse signals in networks. The first formulation relies on statistical properties of the signals and uses an approximate Moreau enhanced penalty. In contrast, the second formulation does not rely on any statistical assumptions and uses an additional consensus promoting penalty (CPP) that convexifies the cost function over the whole network. To solve both problems, we propose a distributed proximal debiasing-gradient (DPD) method, which uses the exact first-order proximal gradient algorithm. The DPD method features a pair of proximity operators that play complementary roles: one sparsifies the estimate, and the other reduces the bias caused by the sparsification. Owing to the overall convexity of the cost functions, the proposed method is guaranteed to converge to a global minimizer, as demonstrated by numerical examples. In addition, the use of the CPP improves the convergence speed significantly.
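For intuition about how a Moreau enhanced penalty reduces the bias of the l1 norm, the sketch below may help. It is a hypothetical, centralized illustration, not the authors' DPD method: a plain proximal-gradient loop using firm thresholding, the proximity operator of the minimax concave (MC) penalty, which is a canonical instance of a Moreau enhanced penalty. All problem sizes and parameter values (A, b, lam, gamma) are assumptions made for this example.

```python
import numpy as np

def firm_threshold(y, lam, gamma):
    """Proximity operator of the (separable) minimax concave penalty,
    a canonical instance of a Moreau enhanced penalty (needs gamma > 1).
    Small entries are zeroed, mid-range entries are shrunk, and large
    entries pass through unchanged, avoiding the constant bias that
    soft thresholding (the l1 prox) puts on large coefficients."""
    a = np.abs(y)
    return np.where(a <= lam, 0.0,
                    np.where(a <= gamma * lam,
                             np.sign(y) * gamma * (a - lam) / (gamma - 1.0),
                             y))

# Toy, centralized proximal-gradient loop on 0.5*||A x - b||^2 + MC penalty.
# Sizes and parameters are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam, gamma = 0.05, 4.0                 # MC penalty parameters
tau = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/L for the smooth term
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    # prox of tau*MC(.; lam, gamma) is firm thresholding with (tau*lam, gamma/tau)
    x = firm_threshold(x - tau * grad, lam=tau * lam, gamma=gamma / tau)

print("nonzeros:", np.count_nonzero(x),
      "| relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Compared with soft thresholding, firm thresholding leaves large coefficients unshrunk; the abstract's pair of proximity operators, one sparsifying and one debiasing, plays an analogous role in the distributed setting.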

Updated: 2024-08-28