Managing caching strategies for stream reasoning with reinforcement learning
Theory and Practice of Logic Programming (IF 1.4) Pub Date: 2020-09-21, DOI: 10.1017/s147106842000037x
CARMINE DODARO , THOMAS EITER , PAUL OGRIS , KONSTANTIN SCHEKOTIHIN

Efficient decision-making over continuously changing data is essential for many application domains, such as cyber-physical systems and industry digitalization. Modern stream reasoning frameworks allow one to model and solve various real-world problems using incremental and continuous evaluation of programs as new data arrives in the stream. Applied techniques use, e.g., Datalog-like materialization or truth-maintenance algorithms to avoid costly re-computations, thus ensuring low latency and high throughput of a stream reasoner. However, the expressiveness of existing approaches is quite limited and, e.g., they cannot encode problems with constraints, which often appear in practice. In this paper, we suggest a novel approach that uses Conflict-Driven Constraint Learning (CDCL) to efficiently update legacy solutions via intelligent management of learned constraints. In particular, we study the applicability of reinforcement learning for continuously assessing, for the current invocation of the solving algorithm, the utility of constraints learned in its previous invocations. Evaluations conducted on real-world reconfiguration problems show that providing a CDCL algorithm with relevant learned constraints from previous iterations yields significant performance improvements in stream reasoning scenarios.
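The caching idea described in the abstract can be illustrated with a minimal sketch: each learned constraint carried between solver invocations gets a utility estimate that is updated with a simple reinforcement-learning value rule, and only the highest-valued constraints are handed to the next invocation. This is an assumption-laden illustration, not the paper's actual algorithm; the class name `ConstraintCache`, the reward definition, and the exponential-moving-average update are all hypothetical choices made for the example.

```python
class ConstraintCache:
    """Hypothetical sketch of RL-managed caching of learned constraints.

    Each constraint id is mapped to an estimated utility, updated with an
    exponential moving average (a basic value-update rule). Before the next
    solver invocation, only the top-`capacity` constraints are retained.
    """

    def __init__(self, capacity=1000, alpha=0.5):
        self.capacity = capacity  # max constraints passed to the next call
        self.alpha = alpha        # learning rate of the value update
        self.utility = {}         # constraint id -> estimated utility

    def update(self, constraint_id, reward):
        # Reward is an assumed signal, e.g. 1.0 if the constraint propagated
        # or took part in conflict analysis in the last invocation, else 0.0.
        old = self.utility.get(constraint_id, 0.0)
        self.utility[constraint_id] = old + self.alpha * (reward - old)

    def select(self):
        # Return the constraint ids to seed the next solver invocation,
        # ranked by estimated utility, truncated to the cache capacity.
        ranked = sorted(self.utility, key=self.utility.get, reverse=True)
        return ranked[:self.capacity]
```

Under this sketch, a constraint that keeps proving useful accumulates utility and survives in the cache, while inactive constraints decay toward zero and are evicted once the capacity is reached, which mirrors the abstract's goal of carrying only relevant learned constraints between iterations.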

Updated: 2020-09-21