Neurocomputational theories of homeostatic control.
Physics of Life Reviews (IF 13.7), Pub Date: 2019-07-19, DOI: 10.1016/j.plrev.2019.07.005
Oliver J Hulme, Tobias Morville, Boris Gutkin

Homeostasis is a problem for all living agents. It entails predictively regulating internal states within the bounds compatible with survival in order to maximise fitness. This can be achieved physiologically, through complex hierarchies of autonomic regulation, but it must also be achieved via behavioural control, both reactive and proactive. Here we briefly review some of the major theories of homeostatic control and their historical cognates, addressing how they tackle the optimisation of both physiological and behavioural homeostasis. We start with optimal control approaches, setting up key concepts, exploring their strengths and limitations. We then concentrate on contemporary neurocomputational approaches to homeostatic control. We primarily focus on a branch of reinforcement learning known as homeostatic reinforcement learning (HRL). A central premise of HRL is that reward optimisation is directly coupled to homeostatic control. A central construct in this framework is the drive function which maps from homeostatic state to motivational drive, where reductions in drive are operationally defined as reward values. We explain HRL's main advantages, empirical applications, and conceptual insights. Notably, we show how simple constraints on the drive function can yield a normative account of predictive control, as well as account for phenomena such as satiety, risk aversion, and interactions between competing homeostatic needs. We illustrate how HRL agents can learn to avoid hazardous states without any need to experience them, and how HRL can be applied in clinical domains. Finally, we outline several challenges to HRL, and how survival constraints and active inference models could circumvent these problems.
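To make the drive-to-reward mapping concrete, the following minimal Python sketch assumes the convex drive function commonly used in the HRL literature (e.g. Keramati & Gutkin): drive is a distance between the current internal state and its setpoint, and the reward of an outcome is the reduction in drive it produces. The exponents m and n, the setpoint, and the example values are illustrative assumptions, not parameters taken from this paper.

import numpy as np

# Drive: maps a homeostatic state vector h to a scalar motivational drive.
# With m > n > 1 the mapping is convex, which is what yields satiety,
# risk aversion, and trade-offs between competing needs in HRL accounts.
def drive(h, h_star, m=4.0, n=2.0):
    return np.sum(np.abs(np.asarray(h_star) - np.asarray(h)) ** m) ** (1.0 / n)

# Reward of an outcome is operationally defined as the reduction in drive
# that the outcome produces when added to the current internal state.
def reward(h_t, outcome, h_star, **kw):
    h_next = np.asarray(h_t) + np.asarray(outcome)
    return drive(h_t, h_star, **kw) - drive(h_next, h_star, **kw)

# Illustration (hypothetical numbers): an agent far below its setpoint values
# the same food outcome more than a near-sated agent, for which overshoot
# is penalised by the convexity of the drive function.
h_star = [100.0]
print(reward([70.0], [10.0], h_star))  # deprived agent: large positive reward
print(reward([95.0], [10.0], h_star))  # near-sated agent: little or no reward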
