On the Benefits of Progressively Increasing Sampling Sizes in Stochastic Greedy Weak Submodular Maximization
IEEE Transactions on Signal Processing (IF 5.4), Pub Date: 2022-07-29, DOI: 10.1109/tsp.2022.3195089
Abolfazl Hashemi, Haris Vikalo, Gustavo de Veciana

Many problems in signal processing and machine learning can be formalized as weak submodular optimization tasks. For such problems, a simple greedy algorithm (Greedy) is guaranteed to find a solution whose objective value is within a factor of $1-e^{-1/c}$ of the optimal, where $c$ is the multiplicative weak-submodularity constant. Due to the high cost of querying large-scale systems, the complexity of Greedy becomes prohibitive in contemporary applications. In this work, we study the tradeoff between performance and complexity when one resorts to random sampling strategies to reduce the query complexity of Greedy. Specifically, we quantify the effect of uniform sampling strategies on Greedy's performance through two metrics: (i) the asymptotic probability of identifying an optimal subset, and (ii) the suboptimality with respect to the optimal solution. The latter implies that uniform sampling strategies with a fixed sampling size achieve a non-trivial approximation factor; however, we show that, with overwhelming probability, these methods fail to find the optimal subset. Our analysis shows that this failure of fixed-sample-size uniform sampling can be circumvented by successively increasing the size of the search space. Building on this insight, we propose a simple progressive stochastic greedy algorithm and study its approximation guarantees. Moreover, we demonstrate the effectiveness of the proposed method in dimensionality reduction applications and in feature selection tasks for clustering and object tracking.
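To make the sampling idea concrete, below is a minimal Python sketch of stochastic greedy selection with a progressively growing sample size. The linear sampling schedule and the toy coverage objective are illustrative assumptions for exposition only, not the exact construction or analysis from the paper.

```python
import numpy as np

def progressive_stochastic_greedy(f, ground_set, k, rng=None):
    """Select k elements to maximize a monotone (weakly) submodular set function f.

    At iteration i, a uniform random subset of the remaining candidates is drawn,
    with a sample size that grows as the iterations progress, and the element with
    the largest marginal gain within the sample is added to the solution.
    """
    rng = np.random.default_rng() if rng is None else rng
    ground_set = list(ground_set)
    n = len(ground_set)
    selected, remaining = [], list(ground_set)
    for i in range(k):
        # Assumed schedule: sample size grows linearly from ~n/k toward n,
        # so the final iterations approach a full greedy sweep.
        sample_size = min(len(remaining), int(np.ceil((i + 1) * n / k)))
        idx = rng.choice(len(remaining), size=sample_size, replace=False)
        base = f(selected)
        gains = [f(selected + [remaining[j]]) - base for j in idx]
        best = remaining[idx[int(np.argmax(gains))]]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: maximum coverage, a standard submodular objective.
universe_sets = [{0, 1}, {1, 2, 3}, {3, 4}, {4, 5, 6}, {0, 6}]
coverage = lambda S: len(set().union(*(universe_sets[e] for e in S))) if S else 0
print(progressive_stochastic_greedy(coverage, range(len(universe_sets)), k=3))
```

The point of the growing schedule is that early iterations stay cheap while later iterations search nearly the full candidate pool, which is the mechanism the abstract credits with avoiding the failure mode of fixed-size uniform sampling.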
