Counter-example Guided Learning of Bounds on Environment Behavior
arXiv - CS - Systems and Control. Pub Date: 2020-01-20, DOI: arXiv-2001.07233
Yuxiao Chen, Sumanth Dathathri, Tung Phan-Minh, and Richard M. Murray

There is growing interest in building autonomous systems that interact with complex environments. The difficulty of obtaining an accurate model of such environments makes it challenging to assess and guarantee the system's performance. We present a data-driven solution that allows a system to be evaluated for specification conformance without an accurate model of the environment. The approach begins by learning, from data and the specification of the system's desired behavior, a conservative reactive bound on the environment's actions that captures its possible behaviors with high probability. This bound is then used to assist verification; if verification fails under the bound, the algorithm returns counter-examples showing how failure occurs and uses them to refine the bound. We demonstrate the applicability of the approach through two case studies: i) verifying controllers for a toy multi-robot system, and ii) verifying an instance of human-robot interaction during a lane-change maneuver, given real-world human driving data.

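To make the loop described in the abstract concrete, here is a minimal Python sketch of a counter-example guided refinement loop. The helper callables (learn_bound, verify, refine_bound) and the data and specification formats are hypothetical placeholders introduced for illustration, not the authors' implementation.

from typing import Any, Callable, List, Tuple

def counterexample_guided_verification(
    env_data: Any,
    spec: Any,
    learn_bound: Callable[[Any], Any],
    verify: Callable[[Any, Any], Tuple[bool, List[Any]]],
    refine_bound: Callable[[Any, List[Any], Any], Any],
    max_iters: int = 20,
) -> Tuple[bool, Any]:
    """Learn a bound on environment behavior, verify under it, refine on failure."""
    # Learn an initial conservative reactive bound on the environment's
    # actions that captures its possible behaviors with high probability.
    bound = learn_bound(env_data)
    for _ in range(max_iters):
        # Attempt verification of the specification under the current bound.
        ok, counterexamples = verify(spec, bound)
        if ok:
            return True, bound  # specification holds under the learned bound
        # Verification failed: the counter-examples show how failure occurs;
        # use them, together with the data, to refine the bound.
        bound = refine_bound(bound, counterexamples, env_data)
    return False, bound  # no conclusion within the iteration budget

The callables would be supplied by the verification and learning machinery of choice; the loop itself only encodes the learn-verify-refine structure described in the abstract.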
Updated: 2020-02-07