Instance Specific Approximations for Submodular Maximization
arXiv - CS - Data Structures and Algorithms. Pub Date: 2021-02-23. DOI: arxiv-2102.11911. Eric Balkanski, Sharon Qian, Yaron Singer
For many optimization problems in machine learning, finding an optimal
solution is computationally intractable and we seek algorithms that perform
well in practice. Since computational intractability often results from
pathological instances, we look for methods to benchmark the performance of
algorithms against optimal solutions on real-world instances. The main
challenge is that an optimal solution cannot be efficiently computed for
intractable problems, and we therefore often do not know how far a solution is
from being optimal. A major question is therefore how to measure the
performance of an algorithm in comparison to an optimal solution on instances
we encounter in practice.

In this paper, we address this question in the context of submodular
optimization problems. For the canonical problem of submodular maximization
under a cardinality constraint, it is intractable to compute a solution that is
better than a $1-1/e \approx 0.63$ fraction of the optimum. Algorithms like the
celebrated greedy algorithm are guaranteed to achieve this $1-1/e$ bound on any
instance and are used in practice.

Our main contribution is not a new algorithm for submodular maximization but
an analytical method that measures how close an algorithm for submodular
maximization is to optimal on a given problem instance. We use this method to
show that on a wide variety of real-world datasets and objectives, the
approximation of the solution found by greedy goes well beyond $1-1/e$ and is
often at least 0.95. We develop this method using a novel technique that lower
bounds the objective of a dual minimization problem to obtain an upper bound on
the value of an optimal solution to the primal maximization problem.
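The idea of an instance-specific approximation can be sketched concretely: run greedy, then certify its solution against an efficiently computable upper bound on the optimum. The sketch below uses a coverage objective and the standard submodularity certificate (current value plus the top-$k$ marginal gains), which is a coarser bound than the paper's dual-based construction; all function names and the toy data are illustrative assumptions, not the authors' code.

```python
# Instance-specific approximation for cardinality-constrained submodular
# maximization, illustrated on a coverage function. The upper bound here is
# the classic submodularity certificate, NOT the paper's dual-based bound.

def coverage(sets, S):
    """Submodular coverage objective: number of elements covered by S."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def greedy(sets, k):
    """Classic greedy: guaranteed a 1 - 1/e approximation on any instance."""
    S = []
    for _ in range(k):
        best = max((i for i in range(len(sets)) if i not in S),
                   key=lambda i: coverage(sets, S + [i]))
        S.append(best)
    return S

def upper_bound_opt(sets, S, k):
    """By submodularity, f(OPT) <= f(S) + sum of the k largest marginal
    gains with respect to S, giving an efficiently computable certificate."""
    base = coverage(sets, S)
    gains = sorted((coverage(sets, S + [i]) - base
                    for i in range(len(sets)) if i not in S), reverse=True)
    return base + sum(gains[:k])

# Toy instance (illustrative data).
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}, {2, 5}]
k = 2
S = greedy(sets, k)
ratio = coverage(sets, S) / upper_bound_opt(sets, S, k)
print(S, coverage(sets, S), ratio)
```

The certified ratio is instance-specific: whenever the upper bound is tight, as on the toy instance above, greedy is provably much closer to optimal than the worst-case $1-1/e$ guarantee suggests, which is the phenomenon the paper quantifies on real-world datasets.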
Updated: 2021-02-25