Chance-Constrained Active Inference
Neural Computation (IF 2.9), Pub Date: 2021-09-16, DOI: 10.1162/neco_a_01427
Thijs van de Laar, İsmail Şenöz, Ayça Özçelikkale, Henk Wymeersch

Active inference (ActInf) is an emerging theory that explains perception and action in biological agents in terms of minimizing a free energy bound on Bayesian surprise. Goal-directed behavior is elicited by introducing prior beliefs on the underlying generative model. In contrast to prior beliefs, which constrain all realizations of a random variable, we propose an alternative approach through chance constraints, which allow for a (typically small) probability of constraint violation, and demonstrate how such constraints can be used as intrinsic drivers for goal-directed behavior in ActInf. We illustrate how chance-constrained ActInf weights all imposed (prior) constraints on the generative model, allowing, for example, for a trade-off between robust control and empirical chance constraint violation. Second, we interpret the proposed solution within a message passing framework. Interestingly, the message passing interpretation is not only relevant to the context of ActInf, but also provides a general-purpose approach that can account for chance constraints on graphical models. The chance constraint message updates can then be readily combined with other prederived message update rules without the need for custom derivations. The proposed chance-constrained message passing framework thus accelerates the search for workable models in general and can be used to complement message-passing formulations on generative neural models.
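For reference, a chance constraint in the general sense used above (a standard formulation, not quoted from the paper) requires that a random variable x, under the agent's belief q(x), lies in a goal region G with high probability:

    P_q(x \in G) \geq 1 - \epsilon, \qquad 0 < \epsilon \ll 1,

whereas a prior (goal) belief p(x) constrains every realization of x through the free energy bound. The admissible violation probability \epsilon governs how conservative the resulting behavior is: smaller \epsilon forces more robust control at the cost of tighter constraints, which is the trade-off the abstract refers to.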




Updated: 2021-09-17