Reinforcement Learning based Visual Navigation with Information-Theoretic Regularization
arXiv - CS - Robotics. Pub Date: 2019-12-09, DOI: arXiv:1912.04078
Qiaoyun Wu; Kai Xu; Jun Wang; Mingliang Xu; Dinesh Manocha

To enhance the cross-target and cross-scene generalization of target-driven visual navigation based on deep reinforcement learning (RL), we introduce an information-theoretic regularization term into the RL objective. The regularization maximizes the mutual information between the agent's action and its current and next visual observations. In this way, the agent learns the causal relationship between navigation actions and the resulting changes in its observations, and thus makes more informative decisions. By maximizing a variational lower bound on the mutual information, we learn a generative model of the action-observation dynamics. Using this model, the agent generates (imagines) the next observation and predicts the next action by comparing the current observation with the imagined next one. Cross-target and cross-scene evaluations on the AI2-THOR framework [44] and the Active Vision Dataset (AVD) [1] show that our method attains at least a 7% improvement in average success rate over several state-of-the-art methods on the two datasets.
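The variational lower bound mentioned above can be illustrated on a toy discrete example. The sketch below is not the paper's model; it only demonstrates the underlying Barber–Agakov-style bound, I(A; O') ≥ H(A) + E[log q(A | O')], which is tight when the variational posterior q equals the true posterior. The joint distribution and variable names here are hypothetical.

```python
import numpy as np

# Hypothetical toy joint p(a, o') over 3 actions (rows) and 2 next
# observations (columns); purely illustrative, not from the paper.
p_joint = np.array([[0.30, 0.05],
                    [0.05, 0.30],
                    [0.10, 0.20]])
p_a = p_joint.sum(axis=1)   # marginal over actions
p_o = p_joint.sum(axis=0)   # marginal over next observations

# Exact mutual information: I(A; O') = sum p(a,o') log p(a,o')/(p(a)p(o'))
mi = np.sum(p_joint * np.log(p_joint / np.outer(p_a, p_o)))

# Variational lower bound: I(A; O') >= H(A) + E_{p(a,o')}[log q(a | o')]
h_a = -np.sum(p_a * np.log(p_a))

def mi_lower_bound(q):
    """q[a, o'] = q(a | o'); each column must sum to 1."""
    return h_a + np.sum(p_joint * np.log(q))

q_exact = p_joint / p_o                  # true posterior -> bound is tight
q_uninformed = np.full_like(p_joint, 1/3)  # ignores o' -> loose bound

print(mi, mi_lower_bound(q_exact), mi_lower_bound(q_uninformed))
```

With the exact posterior the bound recovers the true mutual information; any other q gives a strictly smaller value, which is what makes maximizing the bound (over the parameters of a learned q, e.g. the generative dynamics model) a valid surrogate for maximizing the mutual information itself.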
Updated: 2020-04-08
