Online Multistage Subset Maximization Problems
Algorithmica ( IF 0.9 ) Pub Date : 2021-05-25 , DOI: 10.1007/s00453-021-00834-7
Evripidis Bampis , Bruno Escoffier , Kevin Schewior , Alexandre Teiller

Numerous combinatorial optimization problems (knapsack, maximum-weight matching, etc.) can be expressed as subset maximization problems: One is given a ground set \(N=\{1,\dots ,n\}\), a collection \(\mathcal {F}\subseteq 2^N\) of subsets thereof such that \(\emptyset \in \mathcal {F}\), and an objective (profit) function \(p:\mathcal {F}\rightarrow \mathbb {R}_+\). The task is to choose a set \(S\in \mathcal {F}\) that maximizes p(S). We consider the multistage version (Eisenstat et al., Gupta et al., both ICALP 2014) of such problems: The profit function \(p_t\) (and possibly the set of feasible solutions \(\mathcal {F}_t\)) may change over time. Since in many applications changing the solution is costly, the task becomes to find a sequence of solutions that optimizes the trade-off between good per-time solutions and stable solutions taking into account an additional similarity bonus. As similarity measure for two consecutive solutions, we consider either the size of the intersection of the two solutions or the difference of n and the Hamming distance between the two characteristic vectors. We study multistage subset maximization problems in the online setting, that is, \(p_t\) (along with possibly \(\mathcal {F}_t\)) only arrive one by one and, upon such an arrival, the online algorithm has to output the corresponding solution without knowledge of the future. We develop general techniques for online multistage subset maximization and thereby characterize those models (given by the type of data evolution and the type of similarity measure) that admit a constant-competitive online algorithm. When no constant competitive ratio is possible, we employ lookahead to circumvent this issue. When a constant competitive ratio is possible, we provide almost matching lower and upper bounds on the best achievable one.
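To make the two similarity measures concrete, the following sketch (illustrative only, not code from the paper) computes both for subsets of a ground set \(N=\{1,\dots,n\}\), along with the kind of multistage objective the abstract describes: per-stage profit plus a similarity bonus between consecutive solutions. The function names and the additive form of the objective are assumptions for illustration.

```python
def intersection_similarity(s: set, t: set) -> int:
    """Similarity measure 1: size of the intersection |S ∩ T|."""
    return len(s & t)

def hamming_similarity(s: set, t: set, n: int) -> int:
    """Similarity measure 2: n minus the Hamming distance between the
    characteristic vectors of S and T. The Hamming distance equals the
    size of the symmetric difference, so this also rewards elements
    jointly *excluded* from both solutions."""
    return n - len(s ^ t)

def multistage_value(solutions, profits, similarity, n):
    """Illustrative multistage objective (assumed additive form):
    sum of per-stage profits plus similarity bonuses between
    consecutive solutions."""
    profit = sum(p(s) for p, s in zip(profits, solutions))
    bonus = sum(similarity(solutions[i], solutions[i + 1], n)
                for i in range(len(solutions) - 1))
    return profit + bonus

# Example with n = 5: S = {1,2,3}, T = {2,3,4}.
# Intersection measure: |{2,3}| = 2.
# Hamming measure: 5 - |{1,4}| = 3 (element 5 is excluded from both,
# which the Hamming measure counts but the intersection measure does not).
S, T, n = {1, 2, 3}, {2, 3, 4}, 5
print(intersection_similarity(S, T))           # 2
print(hamming_similarity(S, T, n))             # 3
```

Note the difference between the two measures: the Hamming-based bonus also credits agreement on elements outside both solutions, which is why the two models can admit different competitive ratios.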



Updated: 2021-05-25