Secure Planning Against Stealthy Attacks via Model-Free Reinforcement Learning
arXiv - CS - Computer Science and Game Theory. Pub Date: 2020-11-03, DOI: arxiv-2011.01882
Alper Kamil Bozkurt, Yu Wang, and Miroslav Pajic

We consider the problem of security-aware planning in an unknown stochastic environment, in the presence of attacks on the control signals (i.e., actuators) of the robot. We model the attacker as an agent who has full knowledge of the controller as well as the employed intrusion-detection system and who wants to prevent the controller from performing its tasks while staying stealthy. We formulate the problem as a stochastic game between the attacker and the controller and present an approach to express the objectives of such an agent and the controller as a combined linear temporal logic (LTL) formula. We then show that the planning problem, described formally as the problem of satisfying an LTL formula in a stochastic game, can be solved via model-free reinforcement learning when the environment is completely unknown. Finally, we illustrate and evaluate our methods on two robotic planning case studies.
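As an illustrative sketch (not a formulation quoted from the paper, and with notation $\mathcal{G}$, $\varphi$, $\pi_c$, $\pi_a$ introduced here), the controller's worst-case objective in such a stochastic game $\mathcal{G}$ is commonly written as maximizing the probability of satisfying the combined LTL formula $\varphi$ against the attacker's best response:

$\sup_{\pi_c} \; \inf_{\pi_a} \; \Pr_{\mathcal{G}}^{\pi_c,\pi_a}\left[\varphi\right]$

where $\pi_c$ and $\pi_a$ range over the controller's and the attacker's strategies, respectively. Under this reading, model-free reinforcement learning is used to approximate a near-optimal $\pi_c$ directly from interaction, without building an explicit model of $\mathcal{G}$.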

Updated: 2020-11-04