Trends in Cognitive Sciences
Volume 2, Issue 11, 1 November 1998, Pages 455-462

Review
Six principles for biologically based computational models of cortical cognition

https://doi.org/10.1016/S1364-6613(98)01241-8

Abstract

This review describes and motivates six principles for computational cognitive neuroscience models: biological realism, distributed representations, inhibitory competition, bidirectional activation propagation, error-driven task learning, and Hebbian model learning. Although these principles are supported by a number of cognitive, computational and biological motivations, the prototypical neural-network model (a feedforward back-propagation network) incorporates only two of them, and no widely used model incorporates all of them. It is argued here that these principles should be integrated into a coherent overall framework, and some potential synergies and conflicts in doing so are discussed.

Section snippets

The principles

The six principles can be grouped into three categories. The first principle, biological realism, is in a category by itself, providing a general overriding constraint on the framework. The next three principles, distributed representations, inhibitory competition, and bidirectional activation propagation (interactivity), are concerned with the architecture of the network and the general behavior of the neuron-like processing units within it. The final two principles, error-driven task learning and Hebbian model learning, are concerned with how the network learns. …
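To make the interactivity principle concrete, the following is a minimal sketch, not taken from the paper: the layer sizes, the sharing of one weight matrix for both directions, and the settling criterion are all illustrative assumptions. It shows bidirectional activation propagation, in which activation flows bottom-up and top-down on every cycle and processing iterates until the activations settle.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sketch of bidirectional (interactive) activation flow:
# the same weight matrix W carries activation bottom-up (v @ W) and
# top-down (h @ W.T), and the network iterates until it settles.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = rng.normal(scale=0.5, size=(n_in, n_hid))

x = np.array([1.0, 0.0, 1.0, 0.0])  # external input, held fixed (clamped)
v = x.copy()                        # input-layer activations
h = np.zeros(n_hid)                 # hidden-layer activations

for step in range(100):
    h_new = sigmoid(v @ W)              # bottom-up pass
    v = sigmoid(x + h_new @ W.T)        # top-down feedback combined with input
    if np.max(np.abs(h_new - h)) < 1e-4:
        break                           # settled: activations stopped changing
    h = h_new

print("settled after", step + 1, "cycles; hidden activations:", h)
```

The key contrast with a feedforward network is the loop: the hidden layer's state feeds back to reshape the very input-layer activity that drives it, so the final state reflects both the stimulus and the network's interpretation of it.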

(2) Distributed representations

The cortex is widely believed to use distributed representations to encode information. A distributed representation uses multiple active neuron-like processing units to encode information (as opposed to a single-unit, localist representation), and the same unit can participate in multiple representations. Each unit in a distributed representation can be thought of as representing a single feature, with information being encoded by particular combinations of such features. Electrophysiological …
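A toy example may help here; the sketch below is illustrative (the item and feature names are invented, not from the paper). It contrasts a localist code, one dedicated unit per item, with a distributed code in which each item activates several feature units, the same unit participates in multiple items, and overlap in active units captures similarity.

```python
import numpy as np

# Illustrative sketch: each unit codes one feature, and an item is
# represented by the combination of active features, so units are
# shared across representations.
localist = {                      # one dedicated unit per item
    "apple":    np.array([1, 0, 0]),
    "cherry":   np.array([0, 1, 0]),
    "cucumber": np.array([0, 0, 1]),
}

distributed = {                   # several active feature units per item
    #                     red round small green elongated
    "apple":    np.array([1,  1,    0,    0,    0]),
    "cherry":   np.array([1,  1,    1,    0,    0]),
    "cucumber": np.array([0,  0,    0,    1,    1]),
}

# Overlap (shared active units) captures similarity structure that a
# localist code cannot: apple and cherry share the 'red' and 'round' units.
for a in distributed:
    for b in distributed:
        if a < b:
            shared = int(distributed[a] @ distributed[b])
            print(a, "and", b, "share", shared, "active units")
```

Note that in the localist dictionary every pair of items is equally dissimilar (zero overlap), whereas the distributed code makes apple and cherry more similar to each other than either is to cucumber.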

Learning principles

Learning is essential for shaping the representations of neural networks according to the structure of the environment. A key issue is what aspects of the environmental structure should be learned, with the understanding that not everything can or should be represented. The following two learning principles exploit two complementary aspects of environmental structure: task demands, and the extent to which different things co-occur. The first is referred to as 'task learning' for obvious reasons. …
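The two kinds of learning can be contrasted with textbook update rules. This is a generic sketch, not the paper's specific algorithms (the paper favors a normalized Hebbian rule and a biologically plausible error-driven rule); the linear network and learning rate are illustrative assumptions.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Model learning: strengthen weights between co-active units
    (plain Hebb rule; the paper's own rule is a normalized variant)."""
    return w + lr * np.outer(x, y)

def error_driven_update(w, x, y, target, lr=0.1):
    """Task learning: change weights in proportion to the output error
    (delta rule), so weights change only insofar as the task demands it."""
    return w + lr * np.outer(x, target - y)

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(4, 2))   # 4 input units -> 2 output units
x = np.array([1.0, 0.0, 1.0, 0.0])       # input pattern
y = x @ w                                # linear output, for the sketch
target = np.array([1.0, 0.0])            # what the task requires

w_model = hebbian_update(w, x, y)              # learns what co-occurs
w_task = error_driven_update(w, x, y, target)  # learns what the task demands
```

The Hebbian update moves weights toward whatever co-occurs in the input regardless of any task, while the error-driven update changes weights only to the extent that the output misses the target, which is why the two rules capture complementary aspects of environmental structure.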

Interactions among the principles

The preceding discussion provided specific and compelling motivations for each of the individual principles. In this section, three examples of interactions (synergies and conflicts) among the six principles will be discussed. The first example comes from the GRAIN framework, and deals with the consequences of interactivity and noise. The second explores the interactions between distributed representations and competition, which can be at odds with each other. The last explores the interactions …
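The second interaction, competition versus distributed representations, can be illustrated with a k-winners-take-all (kWTA) style of inhibition; the sketch below is a generic illustration under assumed parameter values, not the paper's exact function. With k = 1, inhibitory competition collapses the code to a localist one; with k > 1, competition enforces sparseness while leaving the representation distributed.

```python
import numpy as np

def kwta(net_input, k):
    """k-winners-take-all inhibition: only the k most strongly driven
    units stay active, yielding a sparse but still distributed code."""
    act = np.zeros_like(net_input)
    winners = np.argsort(net_input)[-k:]   # indices of the k largest inputs
    act[winners] = net_input[winners]
    return act

net = np.array([0.2, 0.9, 0.1, 0.7, 0.4, 0.8])
print(kwta(net, k=1))   # extreme competition: a localist, winner-take-all code
print(kwta(net, k=3))   # k > 1: competition and distributed coding coexist
```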

Outstanding questions

  • Are there cognitive phenomena or biological facts that appear to directly contradict the core principles outlined here?

  • Is it possible that different parts of the cortex emphasize some principles over others? How might this influence functional specialization in the cortex?

  • How many other important principles are missing from the present list?

  • Can complex, sequential cognitive processing be shown to emerge from such basic principles as those discussed here, or does this require a whole new set of principles?

Acknowledgements

I thank Yuko Munakata and Jerry Rudy for their helpful comments. This work was supported in part by NIH program project grant MH47566.
