Inference as a fundamental process in behavior

https://doi.org/10.1016/j.cobeha.2020.06.005

Highlights

  • Inference is a ubiquitous process in cognition and behavior.

  • State inference optimizes behavioral flexibility and stability under uncertainty.

  • Distinct neural mechanisms underlie different aspects of hidden-state inference.

  • Inference provides a framework for hierarchically organized decision making.

In the real world, uncertainty is omnipresent because available information is typically incomplete or noisy, which makes inferring the state-of-the-world difficult. Furthermore, the state-of-the-world often changes over time, though with some regularity, which makes learning and decision-making challenging. Organisms have evolved to exploit these environmental regularities, which allow them to acquire a model of the world and perform model-based inference, so that they can make decisions robustly and adjust behavior efficiently under uncertainty. Recent research has shed light on many aspects of model-based inference and its neural underpinnings. Here we review recent progress on hidden-state inference, state-transition inference, and hierarchical inference processes.

Introduction

In a changing environment, learning, decision-making and cognitive control are critical functions for adaptive behavior under uncertainty. Making decisions involves integrating multiple pieces of information from multiple sources with varying degrees of certainty. When a given decision invariably produces the same outcome, inferring the consequences of choices is straightforward. In the real world, however, uncertainty is common because the information available to a decision-making agent is typically incomplete or hidden by noise.

When facing a novel environment, two scenarios are possible: a) the animal has to learn environmental contingencies from scratch, repeatedly sampling noisy information to form associations between choices and corresponding outcomes, using model-free reinforcement learning strategies to drive decision making [1]; or b) some prior information is available, allowing the animal to infer various environmental features using model-based learning strategies. Because environments contain regularities that can be used to build priors, actively making inferences about the state-of-the-world is often the best solution.

Inference from incomplete information occurs at multiple levels of cognition. At the perceptual level, percepts are formed by combining noisy or incomplete sensory information with prior beliefs (i.e. models), acquired through experience, to infer features of a sensory object. Inference increases processing speed and reduces the energy necessary to make perceptual decisions [2,3]. Moreover, perceptual errors (e.g. hallucinations in mental disease) have been associated with faulty perceptual inference [4].
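The combination of a noisy sensory signal with a prior belief can be sketched as precision-weighted Gaussian fusion, a standard formulation of perceptual inference (illustrative only; the function and parameter names are ours, not from the studies cited here):

```python
def fuse_gaussian(prior_mean, prior_var, obs_mean, obs_var):
    """Combine a Gaussian prior belief with a noisy Gaussian observation.

    The posterior mean is a precision-weighted average of prior and
    observation, and the posterior variance is smaller than either input
    variance, reflecting the gain in certainty from inference.
    """
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean +
                            obs_precision * obs_mean)
    return post_mean, post_var

# With equal precisions, the percept lands halfway between the prior
# expectation (0.0) and the sensory evidence (2.0).
mean, var = fuse_gaussian(prior_mean=0.0, prior_var=1.0,
                          obs_mean=2.0, obs_var=1.0)
```

A strong prior (small `prior_var`) pulls the percept toward the expected value even when the sensory evidence disagrees, which is one way to think about faulty perceptual inference producing hallucination-like errors.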

In higher cognition, animals form beliefs about the world from environmental regularities and use them to infer future outcomes and optimize decision-making. Bayesian-like computations that combine prior probability distributions with currently available information are used to make inferences, though typically in a (mathematically) suboptimal way [5,6]. Here we review recent progress regarding the neural underpinnings of inference in decision making, focusing on state inference, state-transition inference, and hierarchically organized inference processes.
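A minimal sketch of such a Bayesian-like computation over discrete hidden states, assuming the agent holds an explicit prior over candidate states-of-the-world (the scenario and names are illustrative, not taken from the cited work):

```python
def bayes_update(prior, likelihoods):
    """One step of Bayesian inference over discrete hidden states.

    prior:        P(state) for each candidate state-of-the-world.
    likelihoods:  P(observation | state) for the current observation.
    Returns the normalized posterior P(state | observation).
    """
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnormalized)
    return [u / z for u in unnormalized]

# Two hidden states, e.g. "reward at left port" vs "reward at right port".
belief = [0.5, 0.5]
# An observation three times more likely under the first state shifts
# the belief accordingly.
belief = bayes_update(belief, [0.75, 0.25])
```

Repeating this update trial by trial lets a belief converge on the true hidden state from noisy evidence; a biased or mis-specified prior yields the kind of suboptimal inference described above.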


Inferring the current state-of-the-world

Without prior information about the potential outcomes of decisions, organisms first need to learn the values of actions (Box 1). Much attention has been devoted to this initial learning process. Reinforcement Learning (RL) is the brute force approach to estimating values of options or actions given the current environment, which results in the gradual development of choice preferences (Figure 1a). This strategy is called model-free learning, because it does not rely on prior beliefs. RL is
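The brute-force value estimation described above can be sketched as delta-rule learning on a two-armed bandit. This is a generic model-free RL illustration in the spirit of [1], not a reimplementation of any specific study; all names and parameters are ours:

```python
import random

def model_free_bandit(reward_probs, n_trials=1000, alpha=0.1,
                      epsilon=0.1, seed=0):
    """Model-free value learning on a multi-armed bandit.

    Each sampled outcome nudges the chosen action's value estimate via
    the delta rule, Q[a] += alpha * (reward - Q[a]). No model of the
    environment's structure is used; preferences emerge gradually from
    repeated sampling.
    """
    rng = random.Random(seed)
    q = [0.0] * len(reward_probs)
    for _ in range(n_trials):
        # epsilon-greedy choice: mostly exploit, occasionally explore
        if rng.random() < epsilon:
            action = rng.randrange(len(q))
        else:
            action = max(range(len(q)), key=lambda i: q[i])
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        q[action] += alpha * (reward - q[action])
    return q

q = model_free_bandit([0.8, 0.2])
# q gradually approaches the reward probabilities of the sampled arms
```

The gradual drift of the value estimates, rather than a one-shot inference from a prior, is what distinguishes this model-free strategy from the model-based inference discussed above.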

Conclusion

Inference processes are ubiquitous in cognition, from the interpretation of sensory inputs to cognitive control. Inference is critical for adaptive behavior in a changing and noisy environment, both for determining the current state-of-the-world and for anticipating state transitions. Furthermore, learned behaviors can be considered sequences of states. Hierarchically organized inference processes shape these sequences and thus play a fundamental role in behavior.

Conflict of interest statement

Nothing declared.

References and recommended reading

Papers of particular interest, published within the period of review, have been highlighted as:

  • of special interest

  •• of outstanding interest

CRediT authorship contribution statement

Ramon Bartolo: Conceptualization, Writing - original draft, Writing - review & editing. Bruno B Averbeck: Funding acquisition, Writing - original draft, Writing - review & editing.

Acknowledgements

This work was supported by the Intramural Research Program, National Institute of Mental Health/N.I.H. (ZIA MH002928).

References (52)

  • R.S. Sutton et al.

    Reinforcement Learning: An Introduction

    (1998)
  • T. Parr et al.

    Perceptual awareness and active inference

    Neurosci Conscious

    (2019)
  • C.M. Cassidy et al.

    A perceptual inference mechanism for hallucinations linked to striatal dopamine

    Curr Biol

    (2018)
  • K. Matsumori et al.

    A biased Bayesian inference for decision-making and cognitive control

    Front Neurosci

    (2018)
  • W. Schultz

    Recent advances in understanding the role of phasic dopamine activity

    F1000Res

    (2019)
  • A.G. Collins et al.

    Cognitive control over learning: creating, clustering, and generalizing task-set structure

    Psychol Rev

    (2013)
  • A.R. Otto et al.

    The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive

    Psychol Sci

    (2013)
  • K.M. Rothenhoefer et al.

    Effects of ventral striatum lesions on stimulus-based versus action-based reinforcement learning

    J Neurosci

    (2017)
  • S.M. Groman et al.

    Orbitofrontal circuits control multiple reinforcement-learning processes

    Neuron

    (2019)
  • S. Farashahi et al.

    Metaplasticity as a neural substrate for adaptive learning and choice under uncertainty

    Neuron

    (2017)
  • P. Vertechi et al.

    Inference-based decisions in a hidden state foraging task: differential contributions of prefrontal cortical areas

    Neuron

    (2020)
  • N.W. Schuck et al.

    Human orbitofrontal cortex represents a cognitive map of state space

    Neuron

    (2016)
  • L.A. Bradfield et al.

    Medial orbitofrontal cortex mediates outcome retrieval in partially observable task situations

    Neuron

    (2015)
  • D. Durstewitz et al.

    Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning

    Neuron

    (2010)
  • M. Sarafyazd et al.

    Hierarchical reasoning by neural circuits in the frontal cortex

    Science

    (2019)
  • N. Kolling et al.

    Value, search, persistence and model updating in anterior cingulate cortex

    Nat Neurosci

    (2016)