Risk verification of stochastic systems with neural network controllers
Artificial Intelligence (IF 14.4), Pub Date: 2022-09-01, DOI: 10.1016/j.artint.2022.103782
Matthew Cleaveland , Lars Lindemann , Radoslav Ivanov , George J. Pappas

Motivated by the fragility of neural network (NN) controllers in safety-critical applications, we present a data-driven framework for verifying the risk of stochastic dynamical systems with NN controllers. Given a stochastic control system, an NN controller, and a specification equipped with a notion of trace robustness (e.g., constraint functions or signal temporal logic), we collect trajectories from the system that may or may not satisfy the specification. In particular, each trajectory produces a robustness value that indicates how well (severely) the specification is satisfied (violated). We then compute risk metrics over these robustness values to estimate the risk that the NN controller will not satisfy the specification. We are further interested in quantifying the difference in risk between two systems, and we show how the risk estimated from a nominal system can provide an upper bound on the risk of a perturbed version of the system. In particular, the tightness of this bound depends on the closeness of the systems in terms of the closeness of their system trajectories. For Lipschitz continuous and incrementally input-to-state stable systems, we show how to exactly quantify system closeness with varying degrees of conservatism, while for more general systems we estimate system closeness from data in our experiments. We demonstrate our risk verification approach on two case studies: an underwater vehicle and an F1/10 autonomous car.
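The core data-driven step described above — computing a risk metric over sampled robustness values — can be sketched as follows. This is a minimal illustration only, assuming conditional value-at-risk (CVaR) as the risk metric; the sampled robustness values here are synthetic placeholders, not data from the paper's case studies.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk (CVaR) at level alpha:
    the mean of the worst (1 - alpha) fraction of losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var_idx = int(np.ceil(alpha * len(losses))) - 1  # index of the empirical VaR
    return losses[var_idx:].mean()  # average over the tail at or beyond the VaR

# Hypothetical robustness values from N sampled closed-loop trajectories:
# positive = specification satisfied, negative = violated.
rng = np.random.default_rng(0)
robustness = rng.normal(loc=0.5, scale=0.3, size=1000)

# Risk is computed over the *negated* robustness, so a large loss
# corresponds to a severe specification violation.
risk = empirical_cvar(-robustness, alpha=0.95)
```

A positive `risk` would indicate that, in the worst 5% of sampled trajectories, the specification is violated on average; other coherent risk metrics could be substituted for CVaR in the same pipeline.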




Updated: 2022-09-01