Learning offline: memory replay in biological and artificial reinforcement learning
Trends in Neurosciences (IF 15.9), Pub Date: 2021-09-01, DOI: 10.1016/j.tins.2021.07.007
Emma L. Roscow, Raymond Chua, Rui Ponte Costa, Matt W. Jones, Nathan Lepora

Learning to act in an environment to maximise rewards is among the brain’s key functions. This process has often been conceptualised within the framework of reinforcement learning, which has also gained prominence in machine learning and artificial intelligence (AI) as a way to optimise decision making. A common aspect of both biological and machine reinforcement learning is the reactivation of previously experienced episodes, referred to as replay. Replay is important for memory consolidation in biological neural networks and is key to stabilising learning in deep neural networks. Here, we review recent developments concerning the functional roles of replay in the fields of neuroscience and AI. Complementary progress suggests how replay might support learning processes, including generalisation and continual learning, affording opportunities to transfer knowledge across the two fields to advance the understanding of biological and artificial learning and memory.
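
To make the machine-learning side of this idea concrete, the sketch below shows a uniform experience replay buffer of the kind commonly used to stabilise deep reinforcement learning agents (e.g. DQN-style training). It is a minimal illustration, not code from the paper; the names ReplayBuffer, push and sample are illustrative assumptions.

```python
# Minimal sketch of a uniform experience replay buffer (illustrative only).
# Past transitions are stored and later re-sampled "offline", breaking the
# temporal correlations between consecutive experiences before each update.
import random
from collections import deque, namedtuple

Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])

class ReplayBuffer:
    def __init__(self, capacity: int):
        # Oldest transitions are discarded once capacity is exceeded.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one experienced transition for later replay.
        self.buffer.append(Transition(state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        # Uniformly re-sample stored transitions for a gradient update.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

if __name__ == "__main__":
    buffer = ReplayBuffer(capacity=10_000)
    # Dummy transitions standing in for agent-environment interaction.
    for t in range(100):
        buffer.push(state=t, action=0, reward=1.0, next_state=t + 1, done=False)
    batch = buffer.sample(batch_size=8)
    print(len(batch), "transitions replayed")
```

In biological terms, the loose analogue reviewed in the paper is hippocampal replay during rest and sleep, where previously experienced sequences are reactivated to support memory consolidation; variants of the buffer above (e.g. prioritised rather than uniform sampling) are one way such ideas are transferred between the two fields.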



Updated: 2021-09-28