
Short term memory properties of sensory neural architectures

Published in the Journal of Computational Neuroscience.

Abstract

A functional role of the cerebral cortex is to form and hold representations of the sensory world for behavioral purposes. This is achieved by a sheet of neurons, organized in modules called cortical columns, that receives inputs in a peculiar manner: only a few neurons are driven by sensory inputs through thalamic projections, while the vast majority receive mainly cortical inputs. How should cortical modules be organized, with respect to sensory inputs, for the cortex to efficiently hold sensory representations in memory? To address this question we investigate the memory performance of trees of recurrent networks (TRN), composed of recurrent networks, modeling cortical columns, connected with each other through a tree-shaped feed-forward backbone of connections, with sensory stimuli injected at the root of the tree. On these sensory architectures two types of short-term memory (STM) mechanisms can be implemented: STM via transient dynamics on the feed-forward tree, and STM via reverberating activity on the recurrent connectivity inside modules. We derive equations describing the dynamics of such networks, which allow us to thoroughly explore the space of possible architectures and quantify their memory performance. By varying the divergence ratio of the tree, we show that serial architectures, in which sensory inputs are successively processed in different modules, are better suited to implement STM via transient dynamics, while parallel architectures, in which sensory inputs are simultaneously processed by all modules, are better suited to implement STM via reverberating dynamics.
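The divergence ratio of the feed-forward backbone can be made concrete with a minimal sketch. The function below is illustrative, not from the paper: it builds the parent links of a tree of modules for a given divergence, so that divergence 1 yields a serial chain and a large divergence yields a shallow, parallel architecture.

```python
def tree_of_modules(n_modules, divergence):
    """Parent index of each module in a tree-shaped feed-forward backbone.

    Module 0 is the root (it receives the sensory stimulus); every module
    feeds `divergence` children until n_modules modules exist.
    divergence=1 gives a serial chain; divergence=n_modules-1 gives a
    fully parallel architecture, one synapse away from the root.
    """
    parents = {0: None}
    frontier, next_id = [0], 1
    while next_id < n_modules:
        new_frontier = []
        for node in frontier:
            for _ in range(divergence):
                if next_id >= n_modules:
                    break
                parents[next_id] = node
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return parents

def depth(parents, node):
    """Number of feed-forward steps separating a module from the root."""
    d = 0
    while parents[node] is not None:
        node = parents[node]
        d += 1
    return d
```

For 7 modules, `tree_of_modules(7, 1)` places the last module 6 steps from the stimulus (long transient pathway), while `tree_of_modules(7, 6)` places every module one step away (simultaneous processing).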




Acknowledgements

I would like to thank Nicolas Brunel and Haim Sompolinsky for useful discussions throughout the course of this work. I would also like to thank Gaetan Bouchet for his help with the figures.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to A. M. Dubreuil.

Ethics declarations

Conflict of interest

The author declares that he has no conflict of interest.

Additional information

Action Editor: P. Dayan

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

(PDF 357 KB)

Appendix


A.1 Steady-state mean activity profiles for f ≪ 1

We consider a module with mean activity μ = O(f) receiving feed-forward inputs from a module whose mean activity is \(g=\frac {f}{s}\). From (15), the fixed-point equation relating μ and g is

$$ \mu=gH\left( \frac{\theta-1}{\lambda\sqrt{\alpha\mu}}\right)+(1-g)H\left( \frac{\theta}{\lambda\sqrt{\alpha\mu}}\right) $$
(24)

Using the estimate \(H(x) \underset {x \gg 1}{\simeq }\frac {1}{x\sqrt {2\pi }}e^{\frac {-x^{2}}{2}} \simeq e^{\frac {-x^{2}}{2}}\), together with \(\frac {f|\log f|}{g|\log g|}\simeq \frac {f}{g}\), the fixed-point equation can be rewritten as

$$ \mu \approx g\left( 1+g^{sx-1}-g^{sx(1-\theta^{-1})^{2}}\right) $$
(25)

with \(x=\frac {\theta ^{2}}{2\alpha f|\log f|}\). Comparing the last two terms of (25) allows one to determine whether μ increases or decreases relative to g, and thus to describe the two regimes of mean activity profiles of Section 4.2. Moreover, in the regime of increasing activity along the path, the transition from μ = O(f) to μ = O(1) occurs at s = 1/x, i.e. g = xf. Differentiating (24), as shown in Supplementary materials, then yields expressions for the depth Lc at which the transition occurs. For \(\theta <\frac {1}{2}\),

$$ L_{c} = 2\sqrt{\frac{\pi}{x f^{2(x-1)} |\log f |}} $$
(26)

and for \(\theta >\frac {1}{2}\) and \(\alpha > (2\theta ^{-1}-\theta ^{-2})\alpha _{c}\), the critical depth scales as

$$ 1/L_{c}\propto\frac{1}{(\theta^{-1}-1)\sqrt{x|\log f|}}f^{x(1-\theta^{-1})^{2}}-\frac{1}{\sqrt{x|\log f|}}f^{x-1} $$
(27)

A.2 Dynamical equations for memory retrieval in a path

In order to describe the retrieval of pattern \(\boldsymbol {\xi }^{l_{0},1}\) in a module l0 receiving feed-forward inputs from module l0 − 1 (see Section 4.4), we have used the following dynamical equations, whose derivation is detailed in Supplementary materials. \(m^{l_{0}}\) (resp. \(m^{l_{0}-1}\)) is the overlap between the activity in module l0 (resp. l0 − 1) and \(\boldsymbol {\xi }^{l_{0},1}\).

$$ \begin{array}{@{}rcl@{}} m^{l_{0}}(t+1) &=& m^{l_{0}-1}(t)\{ (1-f)[ I_{t}(1,1) - I_{t}(1,0) ] \\ & & + f [ I_{t}(0,1) - I_{t}(0,0)]\} \\ & & + \mu^{l_{0}-1}(t)\{I_{t}(1,1) - I_{t}(1,0) \\ & & - [I_{t}(0,1) - I_{t}(0,0)]\} \\ & & + I_{t}(1,0) - I_{t}(0,0) \end{array} $$
(28)

and

$$ \begin{array}{@{}rcl@{}} \mu^{l_{0}}(t+1) &=& m^{l_{0}-1}(t) f(1-f)\{I_{t}(1,1) - I_{t}(1,0) \\ & & - [I_{t}(0,1) - I_{t}(0,0)]\} \\ & & + \mu^{l_{0}-1}(t)\{ f[ I_{t}(1,1) - I_{t}(1,0) ] \\ & & + (1-f) [ I_{t}(0,1) - I_{t}(0,0)]\} \\ & & + f I_{t}(1,0 ) + (1-f) I_{t}(0,0) \end{array} $$
(29)

with

$$ I_{t}(a,b) = H\left( \frac{\theta-(a-f)m^{l_{0}}(t)-b}{\sqrt{\alpha\mu^{l_{0}}(t)}}\right) $$
(30)

A.3 Impact of feed-forward noise on persistent activity

The random sequence of inputs is a form of noise that reduces the capacity for retrieval states. To evaluate the existence of a retrieval state \(\boldsymbol {\xi }^{\mu _{0},l_{0}}\) under these conditions, we examine the stability of module l assuming that it receives random feed-forward inputs from module l − 1 with steady-state mean activity \(\mu _{eq}^{l-1}\). We re-write the equations for retrieval in module l in terms of the order parameters \({f_{0}^{l}}\) and \({f_{1}^{l}}\), which measure the fractions of background neurons (neurons i such that \(\xi _{i}^{\mu _{0},l_{0}}=0\)) and foreground neurons (neurons i such that \(\xi _{i}^{\mu _{0},l_{0}}=1\)) that are active. These order parameters are related to ml and μl by \(m^{l}={f_{1}^{l}} -{f_{0}^{l}}\) and \(\mu ^{l}=f{f_{1}^{l}} +(1-f){f_{0}^{l}}\).

$$ \begin{array}{@{}rcl@{}} {f_{1}^{l}}(t+1)&=&\mu_{eq}^{l-1}H\left( \frac{\theta- m^{l}(t)-1}{\sqrt{\alpha\mu^{l}(t)}}\right)+ \\ &&(1-\mu_{eq}^{l-1})H\left( \frac{\theta- m^{l}(t)}{\sqrt{\alpha\mu^{l}(t)}}\right), t\geq1 \\ {f_{0}^{l}}(t+1)&=&\mu_{eq}^{l-1}H\left( \frac{\theta-1}{\sqrt{\alpha\mu^{l}(t)}}\right)+ \\ &&(1-\mu_{eq}^{l-1})H\left( \frac{\theta}{\sqrt{\alpha\mu^{l}(t)}}\right), t\ge1 \end{array} $$
(31)

Assume that a pattern has been retrieved, i.e. ml(t = 1) ≃ 1 and \(\mu ^{l}(t=1) = f + \mu _{eq}^{l}\). This pattern remains stable if \({f_{1}^{l}}\) remains of order 1 and \({f_{0}^{l}}\) remains of order f, which requires, for \(\mu _{eq}^{l-1} = O(f) \ll 1\),

$$ \begin{array}{@{}rcl@{}} H\left( \frac{\theta-1}{\sqrt{\alpha(f + \mu_{eq}^{l})}}\right) = 1-f^{\frac{x(1-\theta^{-1})^{2}}{1+\frac{\mu_{eq}^{l}}{f}}} &=& O(1) \\ H\left( \frac{\theta}{\sqrt{\alpha(f + \mu_{eq}^{l})}}\right) = f^{\frac{x}{1+\frac{\mu_{eq}^{l}}{f}}-1} &\ll& 1 \\ \end{array} $$
(32)

For f ≪ 1 this is satisfied if \(x>1+\frac {\mu _{eq}^{l}}{f}\), hence the condition (19) on α.
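The stability argument can be checked numerically by iterating the map (31) from a retrieved state and observing whether the foreground stays active. The parameter values below (f = 0.01, θ = 0.6, α = 0.1, \(\mu_{eq}^{l-1}=0.01\)) are illustrative assumptions, chosen only to land in the stable regime of condition (32).

```python
import math

def H(x):
    """Gaussian tail function H(x) = P(Z > x) for a standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def persistent_activity(mu_eq_prev, f, theta, alpha, n_steps=50):
    """Iterate Eq. (31) from a retrieved state (f1 = 1, f0 = f) and return
    the final fractions of active foreground and background neurons in
    module l, under random feed-forward drive of mean activity mu_eq_prev."""
    f1, f0 = 1.0, f
    for _ in range(n_steps):
        m = f1 - f0                  # overlap with the retrieved pattern
        mu = f * f1 + (1 - f) * f0   # mean activity of module l
        sigma = math.sqrt(alpha * mu)
        f1 = (mu_eq_prev * H((theta - m - 1) / sigma)
              + (1 - mu_eq_prev) * H((theta - m) / sigma))
        f0 = (mu_eq_prev * H((theta - 1) / sigma)
              + (1 - mu_eq_prev) * H(theta / sigma))
    return f1, f0

f1_final, f0_final = persistent_activity(mu_eq_prev=0.01, f=0.01,
                                         theta=0.6, alpha=0.1)
```

For these parameters \({f_{1}^{l}}\) stays of order 1 and \({f_{0}^{l}}\) of order f, so the retrieved pattern survives the feed-forward noise, as the condition on α derived above predicts.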


About this article


Cite this article

Dubreuil, A.M. Short term memory properties of sensory neural architectures. J Comput Neurosci 46, 321–332 (2019). https://doi.org/10.1007/s10827-019-00720-w

