Double Coverage with Machine-Learned Advice
arXiv - CS - Data Structures and Algorithms. Pub Date: 2021-03-02, DOI: arxiv-2103.01640
Alexander Lindermayr, Nicole Megow, Bertrand Simon

We study the fundamental online $k$-server problem in a learning-augmented setting. While in the traditional online model an algorithm has no information about the request sequence, we assume that some advice (e.g., machine-learned predictions) on an algorithm's decisions is given. There is, however, no guarantee on the quality of the predictions, and they might be far from correct. Our main result is a learning-augmented variation of the well-known Double Coverage algorithm for $k$-server on the line (Chrobak et al., SIDMA 1991), into which we integrate the predictions as well as our trust in their quality. We give an error-dependent competitive ratio, which is a function of a user-defined trust parameter and interpolates smoothly between an optimal consistency (the performance in case all predictions are correct) and the best-possible robustness regardless of the prediction quality. Given good predictions, we improve upon known lower bounds for online algorithms without advice. We further show that, for any $k$, our algorithm achieves an almost optimal consistency-robustness tradeoff within a class of deterministic algorithms respecting local and memoryless properties. Our algorithm outperforms a previously proposed (more general) learning-augmented algorithm. Remarkably, that algorithm heavily exploits memory, whereas ours is memoryless. Finally, we demonstrate in experiments the practicability and the superior performance of our algorithm on real-world data.
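For intuition, here is a minimal Python sketch of classic Double Coverage on the line, extended with a trust parameter `lam` that biases the speeds of the two adjacent servers toward a predicted server. The function name `serve`, the parameter `lam`, and the exact speed rule are illustrative assumptions for exposition only; the paper's actual algorithm and its parameterization may differ.

```python
# Minimal sketch of Double Coverage (DC) on the line with a hedged
# "trust" parameter lam; it illustrates the idea of biasing server
# speeds toward a prediction and is NOT the authors' exact rule.

def serve(servers, request, predicted_idx=None, lam=1.0):
    """Serve one request, mutating `servers` (list of positions on the
    line); returns the movement cost of this step.

    lam = 1.0 recovers classic DC (adjacent servers move at equal
    speed); lam < 1.0 slows the non-predicted server, and lam = 0.0
    fully trusts the advice. `predicted_idx` indexes the server (after
    sorting) that the advice designates; both names are illustrative.
    """
    servers.sort()
    if any(abs(s - request) < 1e-12 for s in servers):
        return 0.0  # a server already covers the request
    if request < servers[0]:            # request left of all servers:
        cost = servers[0] - request     # only the leftmost moves
        servers[0] = request
        return cost
    if request > servers[-1]:           # symmetric right-side case
        cost = request - servers[-1]
        servers[-1] = request
        return cost
    # Request strictly between adjacent servers i and i+1.
    i = max(j for j in range(len(servers)) if servers[j] < request)
    d_left = request - servers[i]
    d_right = servers[i + 1] - request
    if predicted_idx not in (i, i + 1) or lam >= 1.0:
        # Classic DC: both adjacent servers move toward the request at
        # equal speed until one of them reaches it.
        d = min(d_left, d_right)
        servers[i] += d
        servers[i + 1] -= d
        return 2 * d
    # Biased rule (assumed for illustration): the predicted server
    # moves at speed 1, the other at speed lam, until one arrives.
    pred_is_left = (predicted_idx == i)
    d_pred, d_other = (d_left, d_right) if pred_is_left else (d_right, d_left)
    t = d_pred if lam == 0 else min(d_pred, d_other / lam)
    move_pred = min(t, d_pred)
    move_other = min(lam * t, d_other)
    if pred_is_left:
        servers[i] += move_pred
        servers[i + 1] -= move_other
    else:
        servers[i + 1] -= move_pred
        servers[i] += move_other
    return move_pred + move_other


# Tiny usage example: two servers, one interior request.
servers = [0.0, 10.0]
print(serve(servers, 4.0))                            # classic DC: cost 8.0
servers = [0.0, 10.0]
print(serve(servers, 4.0, predicted_idx=0, lam=0.5))  # biased: cost 6.0
```

Setting lam = 1.0 reduces to classic, prediction-free Double Coverage, while lam closer to 0 follows the advice ever more closely; this mirrors the consistency-robustness interpolation described in the abstract, though the paper's precise tradeoff is established there, not here.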

Last updated: 2021-03-03