Inference as a fundamental process in behavior
Introduction
In a changing environment, learning, decision-making, and cognitive control are critical for adaptive behavior under uncertainty. Making decisions involves integrating information from multiple sources with varying degrees of certainty. When a given decision invariably produces the same outcome, inferring the consequences of choices is straightforward. In the real world, however, uncertainty is common because the information available to a decision-making agent is typically incomplete or corrupted by noise.
When facing a novel environment, two scenarios are possible: (a) the animal has to learn environmental contingencies from scratch, repeatedly sampling noisy information to form associations between choices and their outcomes, using model-free reinforcement learning strategies to drive decision making [1]; or (b) some prior information is available, allowing the animal to infer environmental features using model-based learning strategies. Because environments contain regularities that can be used to build priors, actively making inferences about the state-of-the-world is often the best solution.
Inference from incomplete information occurs at multiple levels of cognition. At the perceptual level, percepts are formed by combining noisy or incomplete sensory information with prior beliefs (i.e. models) acquired through experience to infer the features of a sensory object. Inference increases processing speed and reduces the energy needed to make perceptual decisions [2,3]. Moreover, perceptual errors (e.g. hallucinations in mental illness) have been associated with faulty perceptual inference [4].
In higher cognition, animals form beliefs about the world from environmental regularities and use them to infer future outcomes and optimize decision-making. Bayesian-like computations that combine prior probability distributions with currently available information are used to make these inferences, though typically in a (mathematically) suboptimal way [5,6]. Here we review recent progress regarding the neural underpinnings of inference in decision making, focusing on state inference, state-transition inference, and hierarchically organized inference processes.
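As a concrete illustration (not part of the original review), the Bayesian-like computation described above can be sketched as a belief update over hidden states; the two states and all probabilities below are purely illustrative assumptions:

```python
# Minimal sketch of Bayesian state inference: an agent holds a belief over
# two hidden states ("A" and "B") and updates it with each noisy
# observation via Bayes' rule.

def update_belief(prior, likelihoods):
    """Combine a prior over states with the likelihood of one observation."""
    posterior = {s: prior[s] * likelihoods[s] for s in prior}
    z = sum(posterior.values())          # normalizing constant
    return {s: p / z for s, p in posterior.items()}

# Prior belief: both states equally likely.
belief = {"A": 0.5, "B": 0.5}

# An observation that is twice as likely under state A as under state B.
obs_likelihood = {"A": 0.8, "B": 0.4}

belief = update_belief(belief, obs_likelihood)
print(belief)  # belief in state A rises to 2/3
```

Repeating the update over a sequence of observations accumulates evidence about the current state, which is the essence of the state-inference computations discussed in the sections that follow.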
Inferring the current state-of-the-world
Without prior information about the potential outcomes of decisions, organisms first need to learn the values of actions (Box 1). Much attention has been devoted to this initial learning process. Reinforcement learning (RL) is the brute-force approach to estimating the values of options or actions in the current environment, which results in the gradual development of choice preferences (Figure 1a). This strategy is called model-free learning because it does not rely on prior beliefs.
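To make the model-free strategy concrete, here is a minimal sketch (illustrative, not taken from the review) of a delta-rule value update in a two-armed bandit; the learning rate, reward probabilities, and random action sampling are all assumptions chosen for the example:

```python
# Minimal sketch of model-free reinforcement learning: a tabular delta-rule
# update in a two-armed bandit, showing how value estimates (and hence
# choice preferences) develop gradually from sampled outcomes.
import random

random.seed(0)

alpha = 0.1                                 # learning rate
values = {"left": 0.0, "right": 0.0}        # estimated action values
reward_prob = {"left": 0.8, "right": 0.2}   # true (hidden) contingencies

for trial in range(1000):
    action = random.choice(list(values))    # random sampling, for illustration
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    # Delta rule: move the estimate toward the observed outcome.
    values[action] += alpha * (reward - values[action])

print(values)  # estimates approach the true reward probabilities
```

Note that the update uses only sampled outcomes and no model of the environment, which is exactly what distinguishes this strategy from the model-based, inference-driven approaches discussed in this review.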
Conclusion
Inference processes are ubiquitous in cognition, from the interpretation of sensory inputs to cognitive control. Inference is critical for adaptive behavior in a changing and noisy environment, both for determining the current state and for anticipating state transitions. Furthermore, learned behaviors can be considered sequences of states. Hierarchically organized inference processes shape these sequences and thus play a fundamental role in behavior.
Conflict of interest statement
Nothing declared.
References and recommended reading
Papers of particular interest, published within the period of review, have been highlighted as:
• of special interest
•• of outstanding interest
CRediT authorship contribution statement
Ramon Bartolo: Conceptualization, Writing - original draft, Writing - review & editing. Bruno B Averbeck: Funding acquisition, Writing - original draft, Writing - review & editing.
Acknowledgements
This work was supported by the Intramural Research Program, National Institute of Mental Health/N.I.H. (ZIA MH002928).
References (52)
- et al. The perceptual prediction paradox. Trends Cogn Sci (2020)
- et al. What is optimal in optimal inference? Curr Opin Behav Sci (2019)
- et al. Primate orbitofrontal cortex codes information relevant for managing explore-exploit tradeoffs. J Neurosci (2020)
- et al. Orbitofrontal neurons signal sensory associations underlying model-based inference in a sensory preconditioning task. eLife (2018)
- et al. Context effects on probability estimation. PLoS Biol (2020)
- et al. Prefrontal cortex predicts state switches during reversal learning. Neuron (2020)
- et al. Neural basis of reinforcement learning and decision making. Annu Rev Neurosci (2012)
- Amygdala and ventral striatum population codes implement multiple learning rates for reinforcement learning. IEEE Symposium Series on Computational Intelligence (2017)
- et al. Amygdala contributions to stimulus-reward encoding in the macaque medial and orbital frontal cortex during learning. J Neurosci (2017)
- et al. The ubiquity of model-based reinforcement learning. Curr Opin Neurobiol (2012)
- Reinforcement Learning: An Introduction
Reinforcement Learning: An Introduction
- Perceptual awareness and active inference. Neurosci Conscious
- A perceptual inference mechanism for hallucinations linked to striatal dopamine. Curr Biol
- A biased Bayesian inference for decision-making and cognitive control. Front Neurosci
- Recent advances in understanding the role of phasic dopamine activity. F1000Res
- Cognitive control over learning: creating, clustering, and generalizing task-set structure. Psychol Rev
- The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive. Psychol Sci
- Effects of ventral striatum lesions on stimulus-based versus action-based reinforcement learning. J Neurosci
- Orbitofrontal circuits control multiple reinforcement-learning processes. Neuron
- Metaplasticity as a neural substrate for adaptive learning and choice under uncertainty. Neuron
- Inference-based decisions in a hidden state foraging task: differential contributions of prefrontal cortical areas. Neuron
- Human orbitofrontal cortex represents a cognitive map of state space. Neuron
- Medial orbitofrontal cortex mediates outcome retrieval in partially observable task situations. Neuron
- Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning. Neuron
- Hierarchical reasoning by neural circuits in the frontal cortex. Science
- Value, search, persistence and model updating in anterior cingulate cortex. Nat Neurosci
Cited by (8)
- Dopamine-independent effect of rewards on choices through hidden-state inference. Nature Neuroscience (2024)
- Hierarchical inference as a source of human biases. Cognitive, Affective and Behavioral Neuroscience (2023)
- Nudging societally relevant behavior by promoting cognitive inferences. Scientific Reports (2022)