Visual Analytics for RNN-Based Deep Reinforcement Learning
IEEE Transactions on Visualization and Computer Graphics (IF 5.2). Pub Date: 2021-04-30. DOI: 10.1109/tvcg.2021.3076749
Junpeng Wang, Wei Zhang, Hao Yang, Chin-Chia Michael Yeh, Liang Wang

Deep reinforcement learning (DRL) aims to train an autonomous agent to interact with a pre-defined environment and achieve specific goals through deep neural networks (DNNs). Recurrent neural network (RNN) based DRL has demonstrated superior performance, as RNNs can effectively capture the temporal evolution of the environment and respond with appropriate agent actions. However, beyond this strong performance, little is known about how RNNs understand the environment internally and what they memorize over time. Revealing these details is extremely important for deep learning experts seeking to understand and improve DRL models, yet it is also challenging due to the complicated data transformations inside these models. In this article, we propose the Deep Reinforcement Learning Interactive Visual Explorer (DRLIVE), a visual analytics system to effectively explore, interpret, and diagnose RNN-based DRL. Focusing on DRL agents trained for different Atari games, DRLIVE supports three tasks: game episode exploration, RNN hidden/cell state examination, and interactive model perturbation. Using the system, one can flexibly explore a DRL agent through interactive visualizations, discover interpretable RNN cells by prioritizing RNN hidden/cell states with a set of metrics, and further diagnose the DRL model by interactively perturbing its inputs. Through concrete studies with multiple deep learning experts, we validated the efficacy of DRLIVE.
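To make the kind of analysis described above more concrete, the sketch below rolls a toy RNN-based agent through a simulated episode, records its LSTM hidden/cell states, and ranks individual cells by how much they vary over time. This is a minimal illustration under stated assumptions: the agent architecture, the random observations standing in for Atari frames, and the variance-based ranking are hypothetical stand-ins, not the paper's actual model or prioritization metrics.

```python
# Hypothetical sketch: trace an RNN agent's hidden/cell states over an
# episode and rank cells by a simple saliency proxy (temporal variance).
import torch
import torch.nn as nn

class RNNAgent(nn.Module):
    def __init__(self, obs_dim=128, hidden_dim=64, n_actions=6):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)   # stand-in for a conv frame encoder
        self.rnn = nn.LSTMCell(hidden_dim, hidden_dim)  # recurrent core
        self.policy = nn.Linear(hidden_dim, n_actions)  # action logits

    def step(self, obs, state):
        h, c = self.rnn(torch.relu(self.encoder(obs)), state)
        return self.policy(h), (h, c)

agent = RNNAgent()
h = torch.zeros(1, 64)
c = torch.zeros(1, 64)
hidden_trace = []

# Roll out one episode with random observations (placeholders for game frames)
# and record the hidden state at every step.
for t in range(200):
    obs = torch.randn(1, 128)
    logits, (h, c) = agent.step(obs, (h, c))
    action = torch.argmax(logits, dim=-1)  # greedy action, for illustration only
    hidden_trace.append(h.detach())

hidden_trace = torch.cat(hidden_trace)     # shape: (T, hidden_dim)

# Prioritize cells by how much they vary across the episode -- a crude proxy
# for the kinds of metrics a tool like DRLIVE could use to surface
# interpretable cells for closer inspection.
cell_scores = hidden_trace.var(dim=0)
top_cells = torch.topk(cell_scores, k=5).indices
print("Most time-varying hidden cells:", top_cells.tolist())
```

In an interactive system, the highlighted cells would then be inspected alongside the game frames, and the inputs could be perturbed to see how the cell activations and the chosen actions change.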

Updated: 2021-04-30