Visual Analytics for RNN-Based Deep Reinforcement Learning.
IEEE Transactions on Visualization and Computer Graphics (IF 5.2), Pub Date: 2021-04-30, DOI: 10.1109/tvcg.2021.3076749
Junpeng Wang, Wei Zhang, Hao Yang, Chin-Chia Michael Yeh, Liang Wang

Deep reinforcement learning (DRL) aims to train an autonomous agent that interacts with a pre-defined environment and strives to achieve specific goals through deep neural networks (DNN). Recurrent neural network (RNN) based DRL has demonstrated superior performance, as RNNs can effectively capture the temporal evolution of the environment and respond with proper agent actions. However, apart from the outstanding performance, little is known about how RNNs understand the environment internally and what has been memorized over time. Revealing these details is extremely important for deep learning experts to understand and improve DRL models, yet it is also challenging due to the complicated data transformations inside these models. In this paper, we propose the Deep Reinforcement Learning Interactive Visual Explorer (DRLIVE), a visual analytics system to effectively explore, interpret, and diagnose RNN-based DRL. Focused on DRL agents trained for different Atari games, DRLIVE is designed to accomplish three tasks: game episode exploration, RNN hidden/cell state examination, and interactive model perturbation. Using the system, one can flexibly explore a DRL agent through interactive visualizations, discover interpretable RNN cells by prioritizing RNN hidden/cell states with a set of metrics, and further diagnose the DRL model by interactively perturbing its inputs. Through concrete studies with multiple deep learning experts, we validated the efficacy of DRLIVE.
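The abstract does not specify which metrics DRLIVE uses to prioritize hidden/cell states or how its input perturbation is implemented, so the following Python sketch is only an illustration of the two underlying ideas, under assumed choices: ranking hidden-state dimensions by the absolute correlation with the agent's value estimate (an assumption), and probing the policy with an occlusion-style perturbation of a game frame (also an assumption). The function names, the synthetic episode data, and the toy policy are all hypothetical.

```python
# Illustrative sketch only -- not the authors' implementation of DRLIVE.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for data recorded from one game episode:
#   hidden[t, d] -- RNN hidden state at step t (D dimensions)
#   values[t]    -- agent's value estimate at step t
T, D = 500, 64
hidden = rng.normal(size=(T, D))
values = hidden[:, 7] * 2.0 + rng.normal(scale=0.5, size=T)  # make one cell informative

def rank_cells_by_correlation(hidden, values, top_k=5):
    """Rank hidden dimensions by |Pearson correlation| with the value estimate."""
    h = (hidden - hidden.mean(0)) / (hidden.std(0) + 1e-8)
    v = (values - values.mean()) / (values.std() + 1e-8)
    corr = np.abs(h.T @ v) / len(v)          # |correlation| per hidden dimension
    order = np.argsort(-corr)
    return [(int(d), float(corr[d])) for d in order[:top_k]]

print("Top cells:", rank_cells_by_correlation(hidden, values))

def occlusion_perturbation(frame, policy_fn, patch=8):
    """Slide a gray patch over the frame and record how much the action logits move."""
    base = policy_fn(frame)
    H, W = frame.shape
    heat = np.zeros(((H + patch - 1) // patch, (W + patch - 1) // patch))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            probe = frame.copy()
            probe[i:i + patch, j:j + patch] = frame.mean()   # occlude one patch
            heat[i // patch, j // patch] = np.abs(policy_fn(probe) - base).sum()
    return heat   # large values mark regions the policy is sensitive to

# Toy stand-in for an Atari frame and a policy's action logits:
frame = rng.normal(size=(84, 84))
toy_policy = lambda x: np.tanh(np.array([x[:42].sum(), x[42:].sum()]) / 1000.0)
print("Sensitivity map shape:", occlusion_perturbation(frame, toy_policy).shape)
```

In a system like DRLIVE, computations of this kind would be driven from recorded game episodes and surfaced through interactive views rather than printed to the console.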

Updated: 2021-04-30