
Theoretical analysis of reverse-time correlation for idealized orientation tuning dynamics

Journal of Computational Neuroscience

Abstract

A theoretical analysis is presented of a reverse-time correlation method used in experimentally investigating orientation tuning dynamics of neurons in the primary visual cortex. An exact mathematical characterization of the method is developed, and its connection with the Volterra–Wiener nonlinear systems theory is described. Various mathematical consequences and possible physiological implications of this analysis are illustrated using exactly solvable idealized models of orientation tuning.


References

  • Aguera y Arcas, B., Fairhall, A. L., & Bialek, W. (2003). Computation in a single neuron: Hodgkin and Huxley revisited. Neural Computation, 15, 1715–1749.
  • Alitto, H. J., Weyand, T. G., & Usrey, W. M. (2005). Distinct properties of stimulus-evoked bursts in the lateral geniculate nucleus. Journal of Neuroscience, 25(2), 514–523.
  • Alonso, J.-M., Usrey, W. M., & Reid, R. C. (2001). Rules of connectivity between geniculate cells and simple cells in cat primary visual cortex. Journal of Neuroscience, 21, 4002–4015.
  • Ben-Yishai, R., Bar-Or, R., & Sompolinsky, H. (1995). Theory of orientation tuning in the visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 92, 3844–3848.
  • Benardete, E., & Kaplan, E. (1999). The dynamics of primate M retinal ganglion cells. Visual Neuroscience, 16, 355–368.
  • Blasdel, G. (1992a). Differential imaging of ocular dominance and orientation selectivity in monkey striate cortex. Journal of Neuroscience, 12, 3115–3138.
  • Blasdel, G. (1992b). Orientation selectivity, preference, and continuity in the monkey striate cortex. Journal of Neuroscience, 12, 3139–3161.
  • Bonhoeffer, T., & Grinvald, A. (1991). Iso-orientation domains in cat visual cortex are arranged in pinwheel-like patterns. Nature, 353, 429–431.
  • Borghuis, B. G., Perge, J. A., Vajda, I., van Wezel, R. J. A., van de Grind, W. A., & Lankheet, M. J. M. (2003). The motion reverse correlation (MRC) method: A linear systems approach in the motion domain. Journal of Neuroscience Methods, 123, 153–166.
  • Bredfeldt, C. E., & Ringach, D. L. (2002). Dynamics of spatial frequency tuning in macaque V1. Journal of Neuroscience, 22(5), 1976–1984.
  • Brodie, S. E., Knight, B. W., & Ratliff, F. (1978a). The response of the Limulus retina to moving stimuli: A prediction by Fourier synthesis. Journal of General Physiology, 72, 129–166.
  • Brodie, S. E., Knight, B. W., & Ratliff, F. (1978b). The spatiotemporal transfer function of the Limulus lateral eye. Journal of General Physiology, 72, 167–202.
  • Brugge, J. F., Reale, R. A., Jenison, R. L., & Schnupp, J. (2001). Auditory cortical spatial receptive fields. Audiology and Neurotology, 6, 173–177.
  • Celebrini, S., Thorpe, S., Trotter, Y., & Imbert, M. (1993). Dynamics of orientation coding in area V1 of the awake primate. Visual Neuroscience, 10, 811–825.
  • Chen, G., Dan, Y., & Li, C.-Y. (2005). Stimulation of non-classical receptive field enhances orientation selectivity in the cat. Journal of Physiology, 564(1), 233–243.
  • Chichilnisky, E. J. (2001). A simple white noise analysis of neuronal light responses. Network, 12, 199–213.
  • Citron, M. C., & Emerson, R. C. (1983). White noise analysis of cortical directional selectivity in cat. Brain Research, 279, 271–277.
  • Cottaris, N. P., & DeValois, R. L. (1998). Temporal dynamics of chromatic tuning in macaque primary visual cortex. Nature, 395(6705), 896–900.
  • Daugman, J. G. (1985). Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. Journal of the Optical Society of America A, 2(7), 1160–1169.
  • DeAngelis, G., Ohzawa, I., & Freeman, R. (1993). Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex. I. General characteristics and postnatal development. Journal of Neurophysiology, 69, 1091–1117.
  • DeAngelis, G., Ohzawa, I., & Freeman, R. (1995). Receptive-field dynamics in the central visual pathways. Trends in Neurosciences, 18(10), 451–458.
  • de Boer, E. (1967). Correlation studies applied to frequency resolution of the cochlea. Journal of Auditory Research, 7, 209–217.
  • de Boer, E., & de Jongh, H. R. (1978). On cochlear encoding: Potentialities and limitations of the reverse-correlation technique. Journal of the Acoustical Society of America, 63(1), 115–135.
  • de Ruyter van Steveninck, R., & Bialek, W. (1988). Real-time performance of a movement-sensitive neuron in the blowfly visual system: Coding and information transfer in short spike sequences. Proceedings of the Royal Society of London B, 234, 379–414.
  • Douglas, R., Koch, C., Mahowald, M., Martin, K., & Suarez, H. (1995). Recurrent excitation in neocortical circuits. Science, 269, 981–985.
  • Dragoi, V., Sharma, J., Miller, E. K., & Sur, M. (2002). Dynamics of neuronal sensitivity in visual cortex and local feature discrimination. Nature Neuroscience, 5(9), 883–891.
  • Ernst, U. A., Pawelzik, K. R., Sahar-Pikielny, C., & Tsodyks, M. V. (2001). Intracortical origin of visual maps. Nature Neuroscience, 4(4), 431–436.
  • Feller, W. (1966). An introduction to probability theory and its applications (Vol. 2). New York: Wiley.
  • Felsen, G., Shen, Y.-S., Yao, H., Spor, G., Li, C., & Dan, Y. (2002). Dynamic modification of cortical orientation tuning mediated by recurrent connections. Neuron, 35, 945–954.
  • Franz, M. O., & Schölkopf, B. (2006). A unifying view of Wiener and Volterra theory and polynomial kernel regression. Neural Computation, 18, 3097–3118.
  • Giannakis, G. B., & Serpedin, E. (2001). A bibliography on nonlinear system identification. Signal Processing, 81, 533–580.
  • Gielen, C., van Gisbergen, J., & Vendrik, A. (1981). Characterization of spatial and temporal properties of monkey LGN Y-cells. Biological Cybernetics, 40, 157–170.
  • Gillespie, D. C., Lampl, I., Anderson, J. S., & Ferster, D. (2001). Dynamics of the orientation-tuned membrane potential response in cat primary visual cortex. Nature Neuroscience, 4, 1014–1019.
  • Jones, J. P., & Palmer, L. A. (1987). The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58, 1187–1211.
  • Juusola, M., & French, A. S. (1997). Visual acuity for moving objects in first- and second-order neurons of the fly compound eye. Journal of Neurophysiology, 77, 1487–1495.
  • Kang, K., Shelley, M. J., & Sompolinsky, H. (2003). Mexican hats and pinwheels in visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 100, 2848–2853.
  • Klein, D. J., Simon, J. Z., Depireux, D. A., & Shamma, S. A. (2006). Stimulus-invariant processing and spectrotemporal reverse correlation in primary auditory cortex. Journal of Computational Neuroscience, 20, 111–136.
  • Klein, S. A., & Yasui, S. (1979). Nonlinear systems analysis with non-Gaussian white stimuli: General basis functionals and kernels. IEEE Transactions on Information Theory, 25, 495–500.
  • Lee, Y. W., & Schetzen, M. (1965). Measurement of the Wiener kernels of a nonlinear system by cross-correlation. International Journal of Control, 2, 237–254.
  • Lewis, E. R., Henry, K. R., & Yamada, W. M. (2002). Tuning and timing in the gerbil ear: Wiener-kernel analysis. Hearing Research, 174, 206–221.
  • Maldonado, P., Godecke, I., Gray, C., & Bonhoeffer, T. (1997). Orientation selectivity in pinwheel centers in cat striate cortex. Science, 276, 1551–1555.
  • Marcelja, S. (1980). Mathematical description of the responses of simple cortical cells. Journal of the Optical Society of America, 70(11), 1297–1300.
  • Marino, J., Schummers, J., Lyon, D. C., Schwabe, L., Beck, O., Wiesing, P., et al. (2005). Invariant computations in local cortical networks with balanced excitation and inhibition. Nature Neuroscience, 8, 194–201.
  • Marmarelis, P. Z., & McCann, G. D. (1973). Development and application of white noise modeling techniques for studies of insect visual nervous system. Kybernetik, 12, 74–89.
  • Marmarelis, P. Z., & Marmarelis, V. Z. (1978). Analysis of physiological systems. New York: Plenum.
  • Mata, M. L., & Ringach, D. L. (2004). Spatial overlap of ON and OFF subregions and its relation to response modulation ratio in macaque primary visual cortex. Journal of Neurophysiology, 93, 919–928.
  • Mathews, V. J., & Sicuranza, G. L. (2002). Polynomial signal processing. New York: Wiley.
  • Mazer, J., Vinje, W., McDermott, J., Schiller, P., & Gallant, J. (2002). Spatial frequency and orientation tuning dynamics in area V1. Proceedings of the National Academy of Sciences of the United States of America, 99(3), 1645–1650.
  • Miller, L. M., Escabi, M. A., Read, H. L., & Schreiner, C. E. (2002). Spectrotemporal receptive fields in the lemniscal auditory thalamus and cortex. Journal of Neurophysiology, 87, 516–527.
  • Müller, J. R., Metha, A. B., Krauskopf, J., & Lennie, P. (2001). Information conveyed by onset transients in responses of striate cortical neurons. Journal of Neuroscience, 21, 6978–6990.
  • McCann, G. D. (1974). Nonlinear identification theory models for successive stages of visual nervous systems of flies. Journal of Neurophysiology, 37, 869–895.
  • McLaughlin, D., Shapley, R., Shelley, M., & Wielaard, J. (2000). A neuronal network model of macaque primary visual cortex (V1): Orientation selectivity and dynamics in the input layer 4. Proceedings of the National Academy of Sciences of the United States of America, 97, 8087–8092.
  • Naka, K.-I., Marmarelis, P. Z., & Chan, R. Y. (1975). Morphological and functional identifications of catfish retinal neurons. III. Functional identification. Journal of Neurophysiology, 38, 92–131.
  • Nelson, J. I., & Frost, B. (1978). Orientation-selective inhibition from beyond the classical receptive field. Brain Research, 139, 359–365.
  • Neri, P. (2004a). Estimation of nonlinear psychophysical kernels. Journal of Vision, 4, 82–91.
  • Neri, P. (2004b). Attentional effects on sensory tuning for single-feature detection and double-feature conjunction. Vision Research, 44, 3053–3064.
  • Neri, P., & Levi, D. M. (2006). Receptive versus perceptive fields from the reverse-correlation viewpoint. Vision Research, 46, 2465–2474.
  • Nishimoto, S., Arai, M., & Ohzawa, I. (2005). Accuracy of subspace mapping of spatiotemporal frequency domain visual receptive fields. Journal of Neurophysiology, 93, 3524–3536.
  • Nykamp, D. Q. (2003). Measuring linear and quadratic contributions to neuronal response. Network, 14, 673–702.
  • Nykamp, D. Q., & Ringach, D. L. (2002). Full identification of a linear-nonlinear system via cross-correlation analysis. Journal of Vision, 2, 1–11.
  • Ohzawa, I., Sclar, G., & Freeman, R. D. (1985). Contrast gain control in the cat's visual system. Journal of Neurophysiology, 54(3), 651–667.
  • Palmer, L. A., & Nafziger, J. S. (2002). Effects of surround motion on receptive-field gain and structure in area 17 of the cat. Visual Neuroscience, 19, 335–353.
  • Pack, C. C., Conway, B. R., Born, R. T., & Livingstone, M. S. (2006). Spatiotemporal structure of nonlinear subunits in macaque visual cortex. Journal of Neuroscience, 26(3), 893–907.
  • Pei, X., Vidyasagar, T. R., Volgushev, M., & Creutzfeldt, O. D. (1994). Receptive field analysis and orientation selectivity of postsynaptic potentials of simple cells in cat visual cortex. Journal of Neuroscience, 14, 7130–7140.
  • Pernberg, J., Jirmann, K.-U., & Eysel, U. T. (1998). Structure and dynamics of receptive fields in the visual cortex of the cat (area 18) and the influence of GABAergic inhibition. European Journal of Neuroscience, 10, 3596–3606.
  • Pinter, R. B., & Nabet, B. (1992). Nonlinear vision: Determination of neural receptive fields, function, and networks. Boca Raton: CRC Press.
  • Powers, R. K., Dai, Y., Bell, B. M., Percival, D. B., & Binder, M. D. (2005). Contributions of the input signal and prior activation history to the discharge behaviour of rat motoneurones. Journal of Physiology, 562(3), 707–724.
  • Pugh, M., Ringach, D., Shapley, R., & Shelley, M. (2000). Computational modeling of orientation tuning dynamics in V1 neurons. Journal of Computational Neuroscience, 8, 143–159.
  • Ramdya, P., Reiter, B., & Engert, F. (2006). Reverse correlation of rapid calcium signals in the zebrafish optic tectum in vivo. Journal of Neuroscience Methods, 157, 230–237.
  • Recio-Spinoso, A., Temchin, A. N., van Dijk, P., Fan, Y.-H., & Ruggero, M. A. (2005). Wiener-kernel analysis of responses to noise of chinchilla auditory-nerve fibers. Journal of Neurophysiology, 93, 3615–3634.
  • Recio-Spinoso, A., & van Dijk, P. (2006). Analysis of responses to noise in the ventral cochlear nucleus using Wiener kernels. Hearing Research, 216–217, 7–18.
  • Reid, R. C., & Alonso, J.-M. (1995). Specificity of monosynaptic connections from thalamus to visual cortex. Nature, 378, 281–284.
  • Reid, R. C., & Shapley, R. (1992). Spatial structure of cone inputs to receptive fields in primate lateral geniculate nucleus. Nature, 356, 716–718.
  • Reid, R. C., & Shapley, R. (2002). Space and time maps of cone photoreceptor signals in macaque lateral geniculate nucleus. Journal of Neuroscience, 22(14), 6158–6175.
  • Reid, R. C., Victor, J. D., & Shapley, R. M. (1997). The use of m-sequences in the analysis of visual neurons: Linear receptive field properties. Visual Neuroscience, 14(6), 1015–1027.
  • Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1996). Spikes: Exploring the neural code. Cambridge: MIT Press.
  • Ringach, D. L. (1998). Tuning of orientation detectors in human vision. Vision Research, 38, 963–972.
  • Ringach, D. L. (2002). Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88, 455–463.
  • Ringach, D., & Shapley, R. (2004). Reverse correlation in neurophysiology. Cognitive Science, 28, 147–166.
  • Ringach, D., Hawken, M., & Shapley, R. (1997a). Dynamics of orientation tuning in macaque primary visual cortex. Nature, 387, 281–284.
  • Ringach, D., Sapiro, G., & Shapley, R. (1997b). A subspace reverse-correlation technique for the study of visual neurons. Vision Research, 37, 2455–2464.
  • Ringach, D. L., Bredfeldt, C. E., Shapley, R. M., & Hawken, M. J. (2002). Suppression of neural responses to nonoptimal stimuli correlates with tuning selectivity in macaque V1. Journal of Neurophysiology, 87, 1018–1027.
  • Ringach, D., Hawken, M., & Shapley, R. (2003). Dynamics of orientation tuning in macaque V1: The role of global and tuned suppression. Journal of Neurophysiology, 90, 342–352.
  • Rugh, W. J. (1981). Nonlinear system theory: The Volterra/Wiener approach. Baltimore: Johns Hopkins University Press.
  • Rutkowski, R. G., Shackleton, T. M., Schnupp, J. W. H., Wallace, M. N., & Palmer, A. R. (2002). Spectrotemporal receptive field properties of single units in the primary, dorsocaudal and ventrorostral auditory cortex of the guinea pig. Audiology and Neurotology, 7, 214–227.
  • Schetzen, M. (1980). The Volterra and Wiener theories of nonlinear systems. New York: Wiley.
  • Shapley, R., Hawken, M., & Ringach, D. L. (2003). Dynamics of orientation selectivity in the primary visual cortex and the importance of cortical inhibition. Neuron, 38, 689–699.
  • Sharon, D., & Grinvald, A. (2002). Dynamics and constancy in cortical spatiotemporal patterns of orientation processing. Science, 295, 512–515.
  • Shelley, M., & McLaughlin, D. (2002). Coarse-grained reduction and analysis of a network model of cortical response. I. Drifting grating stimuli. Journal of Computational Neuroscience, 12(2), 97–122.
  • Shelley, M., & Tao, L. (2001). Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 11, 111–119.
  • Sillito, A. M., Kemp, J. A., Milson, J. A., & Berardi, N. (1980). A re-evaluation of the mechanisms underlying simple cell orientation selectivity. Brain Research, 194, 517–520.
  • Somers, D., Nelson, S., & Sur, M. (1995). An emergent model of orientation selectivity in cat visual cortical simple cells. Journal of Neuroscience, 15, 5448–5465.
  • Sompolinsky, H., & Shapley, R. M. (1997). New perspectives on the mechanisms for orientation selectivity. Current Opinion in Neurobiology, 7(4), 514–522.
  • Suder, K., Funke, K., Zhao, Y., Kerscher, N., Wennekers, T., & Wörgötter, F. (2002). Spatial dynamics of receptive fields in cat primary visual cortex related to the temporal structure of thalamocortical feedforward activity. Experimental Brain Research, 144, 430–444.
  • Svirskis, G., Dodla, R., & Rinzel, J. (2003). Subthreshold outward currents enhance temporal integration in auditory neurons. Biological Cybernetics, 89, 333–340.
  • Tanaka, K. (1985). Organization of geniculate inputs to visual cortical cells in the cat. Vision Research, 25, 357–364.
  • Tao, L., Cai, D., McLaughlin, D. W., Shelley, M. J., & Shapley, R. (2006). Orientation selectivity in visual cortex by fluctuation-controlled criticality. Proceedings of the National Academy of Sciences of the United States of America, 103, 12911–12916.
  • Tao, L., Shelley, M., McLaughlin, D., & Shapley, R. (2004). An egalitarian network model for the emergence of simple and complex cells in visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 101(1), 366–371.
  • Vajda, I., Borghuis, B. G., van de Grind, W. A., & Lankheet, M. J. M. (2006). Temporal interactions in direction-selective complex cells of area 18 and the posteromedial lateral suprasylvian cortex (PMLS) of the cat. Visual Neuroscience, 23, 233–246.
  • van Kleef, J., James, A. C., & Stange, G. (2005). A spatiotemporal white noise analysis of photoreceptor responses to UV and green light in the dragonfly median ocellus. Journal of General Physiology, 126(5), 481–497.
  • Victor, J. D. (1979). Nonlinear systems analysis: Comparison of white noise and sum of sinusoids in a biological system. Proceedings of the National Academy of Sciences of the United States of America, 76, 996–998.
  • Victor, J. D. (1992). Nonlinear systems analysis in vision: Overview of kernel methods. In B. Nabet & R. B. Pinter (Eds.), Nonlinear vision: Determination of neural receptive fields, function, and networks (pp. 1–37). Boca Raton: CRC Press.
  • Victor, J. D., & Knight, B. W. (1979). Nonlinear analysis with an arbitrary stimulus ensemble. Quarterly of Applied Mathematics, 37, 113–136.
  • Victor, J. D., & Shapley, R. M. (1979). The nonlinear pathway of Y ganglion cells in the cat retina. Journal of General Physiology, 74, 671–689.
  • Victor, J. D., & Shapley, R. M. (1980). A method of nonlinear analysis in the frequency domain. Biophysical Journal, 29(3), 459–483.
  • Vislay-Meltzer, R. L., Kampff, A. R., & Engert, F. (2006). Spatiotemporal specificity of neuronal activity directs the modification of receptive fields in the developing retinotectal system. Neuron, 50, 101–114.
  • Volgushev, M., Vidyasagar, T. R., & Pei, X. (1995). Dynamics of the orientation tuning of postsynaptic potentials in the cat visual cortex. Visual Neuroscience, 12, 621–628.
  • Volterra, V. (1930). Theory of functionals and of integral and integro-differential equations. London: Blackie & Son.
  • Wielaard, J., Shelley, M., McLaughlin, D., & Shapley, R. (2001). How simple cells are made in a nonlinear network model of the visual cortex. Journal of Neuroscience, 21(14), 5203–5211.
  • Wiener, N. (1958). Nonlinear problems in random theory. Cambridge: The Technology Press of the Massachusetts Institute of Technology.
  • Wolfe, J., & Palmer, L. A. (1998). Temporal diversity in the lateral geniculate nucleus of cat. Visual Neuroscience, 15, 653–675.
  • Xing, D., Shapley, R. M., Hawken, M. J., & Ringach, D. L. (2005). Effect of stimulus size on the dynamics of orientation selectivity in macaque V1. Journal of Neurophysiology, 94, 799–812.
  • Yamada, W. M., & Lewis, E. R. (1999). Predicting the temporal responses of non-phase-locking bullfrog auditory units to complex acoustic waveforms. Hearing Research, 130, 155–170.

Acknowledgements

The authors would like to thank J. Biello, P. Kramer, D. McLaughlin, D. Nykamp, A. Rangan, R. Shapley, and E. Vanden Eijnden for helpful discussions. G.K. was supported by NSF grants IGMS-0308943 and DMS-0506287, and gratefully acknowledges the hospitality of the Courant Institute of Mathematical Sciences and Center for Neural Science during his visits at New York University in 2000/01 and 2003/04. L.T. was supported by the NSF grant DMS-0506257. D.C. and M.J.S. were partly supported by the NSF grant DMS-0506396.


Correspondence to Gregor Kovačič.


Action Editor: Jonathan D. Victor

Appendices

1 Probabilistic lemma

Lemma

Let x be drawn from the uniform distribution on [0,1), and let y be taken independently from another, arbitrary distribution. Consider the random variable \(\tilde{z}=x+y+k\), where k is the unique integer for which 0 ≤ x + y + k < 1. Then the distribution of \(\tilde{z}\) is the uniform distribution on [0,1).

(That is, \(\tilde{z}\) is the random variable z = x + y shifted into the unit interval. This shifting operation is how we understand the equivalence of the random variables ψ and ψ + 2π in studying sin ψ.)

Proof

Let P(y) be the density of the random variable y. The density of the random variable z = x + y is then given by the relation

$$ R(z)=\int_0^1 P(z-x) dx~, $$

and hence the distribution of \(\tilde{z}\) is given by

$$ \tilde{R}(\tilde{z})=\sum_{k=-\infty}^{\infty} R(\tilde{z}+k)=\int_{-\infty}^{+\infty} P(x)dx\equiv 1~, $$

i.e., the uniform distribution.□
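A minimal numerical sketch of the lemma (in Python); the exponential distribution for y is an arbitrary choice made only for the example:

```python
import numpy as np

# Monte Carlo check of the lemma: x is uniform on [0,1), y is drawn
# independently from an arbitrary distribution (here, exponential), and
# z_tilde = x + y + k is x + y wrapped back into [0,1).
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.random(n)                          # uniform on [0,1)
y = rng.exponential(scale=0.7, size=n)     # arbitrary independent distribution
z_tilde = np.mod(x + y, 1.0)               # the unique shift into [0,1)

# The histogram of z_tilde should be flat: each of the 20 bins should
# contain a fraction of the samples close to 1/20 = 0.05.
counts, _ = np.histogram(z_tilde, bins=20, range=(0.0, 1.0))
print(counts / n)
```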

2 Discretization effects in the Volterra–Wiener theory

In this appendix, we discuss the error introduced in the reconstruction (58) of the Wiener kernels by using approximate binary random noise with finite presentation time \(\nu\), instead of Gaussian white noise, as the stimulus. For simplicity, we assume that the system in question can be described by a convergent Volterra series of the form (8). In particular, we show that the error is of order \({\mathcal O}(\nu^{1/2})\).

We do not address the related question of how to represent the responses to bandwidth-limited stimuli, such as orientation angles that can only change every \(\nu\) milliseconds, by a discrete version of the Volterra–Wiener theory. Such a discrete version is developed, for instance, in Klein and Yasui (1979).

Approximation of the Gaussian white-noise stimulus by binary white noise

As already discussed in Section 3, in order to find the Wiener kernels of a given nonlinear system, we “probe” this system with Gaussian white-noise stimuli. Such stimuli, say \(s_0(t)\), have the correlation properties

$$ \left\langle s_0(t_1) s_0(t_2)\right\rangle = A^2\delta(t_1-t_2), $$
(46a)
$$ \left\langle s_0(t_1) \ldots s_0(t_{2n+1})\right\rangle = 0, $$
(46b)
$$ \left\langle s_0(t_1) \ldots s_0(t_{2n})\right\rangle = \sum\prod \left\langle s_0(t_i) s_0(t_j)\right\rangle. $$
(46c)

Here, \(\langle \cdot\rangle\) denotes ensemble averaging over all realizations of the white noise \(s_0(t)\), \(A^2\) is the power of the stimulus, \(\delta(t)\) is the Dirac delta function, and the sum in the last equation runs over all distinct products of n two-point correlations.

In practice, we can replace the probing Gaussian white-noise stimulus \(s_0(t)\) by the binary white-noise stimulus with finite presentation time

$$\label{walk} s_\nu(t)=\frac{A}{\sqrt\nu}\left\{\begin{array}{ll}p,\quad &\mbox{with probability } q,\\[4pt] -q,\quad &\mbox{with probability } p,\end{array}\right.\qquad k\nu< t < (k+1)\nu,\quad k=\ldots,-1,0,1,\ldots, $$
(47)

where p + q = 1. The values of \(s_\nu(t)\) in different intervals \(k\nu<t<(k+1)\nu\) are assumed to be independent. The correlation properties of this binary white noise are as follows: First,

$$\label{singleavg} \langle s_\nu(t)\rangle =0. $$
(48a)

Second, if \(t_1,\ldots,t_m\) all lie in the same interval \(k\nu<t<(k+1)\nu\), then

$$\label{boxcorrelations} \langle s_\nu(t_1)\ldots s_\nu(t_m)\rangle =A^m\nu^{-\frac{m}{2}} \left[p^mq +p (-q)^m\right]. $$
(48b)

Finally,

$$\label{nucorrelations} \langle s_\nu(t_1)\ldots s_\nu(t_n)\rangle=\sum\prod \langle s_\nu(t_i)\ldots s_\nu(t_j)\rangle, $$
(48c)

where, within each factor \(\langle s_\nu(t_i)\ldots s_\nu(t_j)\rangle\), all the times \(t_l\) lie in the same interval \(k\nu<t<(k+1)\nu\), every \(s_\nu(t_l)\) is represented in each term once and only once, and the sum is performed over all distinct such terms.
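As a concrete illustration, the following minimal sketch samples one value of \(s_\nu\) per presentation interval and checks the first two moments against Eqs. (48a) and (48b); the values of A, \(\nu\), p, and q are assumptions made for the example:

```python
import numpy as np

# Sample the binary white noise of Eq. (47), one value per presentation
# interval: p with probability q, and -q with probability p, times A/sqrt(nu).
rng = np.random.default_rng(1)
A, nu = 1.0, 0.01            # assumed amplitude and presentation time
p, q = 0.8, 0.2              # p + q = 1
n_intervals = 1_000_000

vals = np.where(rng.random(n_intervals) < q, p, -q) * (A / np.sqrt(nu))

print(vals.mean())                      # ~ 0, Eq. (48a)
print(vals.var(), A**2 * p * q / nu)    # Eq. (48b) with m = 2: A^2 p q / nu
```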

Let us now show that the correlation relations (48c) converge to the correlation relations (46) of the Gaussian white noise \(s_0(t)\) as \(\nu\to 0\). To this end, let us compute the integral

$$\label{showdelta} \int_{-\infty}^\infty\ldots \int_{-\infty}^\infty L(\tau_1,\ldots,\tau_n) \left\langle s(t-\tau_1)\ldots s(t-\tau_n)\right\rangle \,d\tau_1 \ldots d\tau_n, $$
(49)

with an arbitrary function \(L(\tau_1,\ldots,\tau_n)\), which we can assume to be symmetric in all its arguments. Formula (48c) and the symmetry of the function L show that the integral in Eq. (49) breaks up into a linear combination of integrals of the form

$$\label{prodint} \int_{-\infty}^\infty\!\ldots\! \int_{-\infty}^\infty\!\! L(\tau_1,\ldots,\tau_n)\!\prod_{j=1}^l \!\left\langle s\left(t\!-\!\tau_{p_{j-1}+1}\right)\ldots s\left(t\!-\!\tau_{p_j}\right)\right\rangle d\tau_1 \ldots d\tau_n $$
(50)

where \(0=p_0<p_1<\cdots<p_{l-1}<p_l=n\), and all the indices in the product, regardless of the angle brackets, are ordered as \(1,\ldots,n\). In formula (50), all the times \(\tau_k\) with \(p_{j-1}<k\le p_j\) are contained in the same interval \(m_j\nu<t<(m_j+1)\nu\). Formula (48b) shows that the integral in Eq. (50) is proportional to the expression

$$ \begin{array}{lll}\label{sumandint} &&{\kern-6pt} A^n \nu^{-\frac{n}{2}}\sum\limits_{m_1=-\infty}^\infty \ldots \sum\limits_{m_l=-\infty}^\infty\int_0^\nu\ldots \int_0^\nu L(t-m_1\nu-\sigma_1, \ldots,t-m_1\nu-\sigma_{p_1},\nonumber \\[2pt]&&\qquad\qquad\qquad\qquad\qquad\qquad\quad \ldots,t-m_l\nu-\sigma_{p_{l-1}+1},\ldots,t-m_l\nu-\sigma_{p_l}) \,d\sigma_1 \ldots d\sigma_n \nonumber \\[2pt]&&{\kern11pt} =A^n\nu^{-\frac{n}{2}}\nu^{n-l}\left[\sum\limits_{m_1=-\infty}^\infty \ldots \sum\limits_{m_l=-\infty}^\infty L(t-m_1\nu, \ldots,t-m_1\nu,\ldots,t-m_l\nu,\ldots,t-m_l\nu)\nu^l +{\mathcal O}(\nu)\right] \nonumber \\[2pt]&&{\kern11pt} =A^n\nu^{\frac{n}{2}-l}\left[\int_{-\infty}^\infty\ldots\int_{-\infty}^\infty L(t_1, \ldots,t_1,\ldots,t_l,\ldots,t_l)\,dt_1\ldots dt_l +{\mathcal O}(\nu)\right]. \end{array} $$
(51)

In the last two lines, \(m_j\) and \(t_j\), respectively, appear as arguments of the function L precisely \(p_j-p_{j-1}\) times. The estimate showing that the error inside the square brackets is really \({\mathcal O}(\nu)\) will be presented later.

First, however, let us discuss the scaling in formula (51). By Eq. (48a), the smallest possible number of factors within any pair of angle brackets in formula (50) is two. Therefore, l ≤ n/2. Moreover, l = n/2 precisely when n is even and each pair of angle brackets in Eq. (50) contains exactly two factors. If n is odd, the largest terms are of order \(\nu^{\frac{1}{2}}\). In other words, as \(\nu\to 0\), in the sense of distributions, the correlation relations (48c) converge to the correlation relations (46), with an error of order \(\nu^{\frac{1}{2}}\).

We now make the \({\mathcal O}(\nu)\) estimate in formula (51) precise. In fact, we only show that the expression on the first two lines equals the expression on the last line. The equality with the middle line is shown in a similar fashion. We need to estimate the difference

$$ \begin{array}{lll}\label{differenceofintegrals} &&{\kern-6pt} \nu^{l-n}\!\!\sum\limits_{m_1=-\infty}^\infty \!\!\ldots\!\! \sum\limits_{m_l=-\infty}^\infty\!\int_0^\nu\!\!\ldots\!\! \int_0^\nu\!\! L(t\!-\!m_1\nu\!-\!\sigma_1, \ldots,t\!-\!m_1\nu-\sigma_{p_1},\nonumber \\[2pt]&& {\kern5pt} \ldots,t-m_l\nu-\sigma_{p_{l-1}+1},\ldots,t-m_l\nu-\sigma_{p_l}) \,d\sigma_1 \ldots d\sigma_n \nonumber \\[2pt] &&{\kern5pt} -\int_{-\infty}^\infty\ldots\int_{-\infty}^\infty L(t_1, \ldots,t_1,\ldots,t_l,\ldots,t_l)\,dt_1\ldots dt_l, \end{array} $$
(52)

in which \(t_j\) appears in precisely those slots of L in which \(m_j\) appears. This difference can be rewritten as

$$ \begin{array}{lll} &&{\kern-6pt} \nu^{l-n}\!\sum\limits_{m_1=-\infty}^\infty \!\!\ldots\!\! \sum\limits_{m_l=-\infty}^\infty\!\int_0^\nu\!\!\ldots\!\! \int_0^\nu \Bigl[\! L(t\!-\!m_1\nu\!-\!\sigma_1, \ldots,t\!-\!m_1\nu\!-\!\sigma_{p_1},\nonumber \\[6pt]&&{\kern12pt} \ldots,t-m_l\nu-\sigma_{p_{l-1}+1},\ldots,t-m_l\nu-\sigma_{p_l})\nonumber \\[2pt] &&{\kern5pt} -L(t-m_1\nu-\sigma_{p_1}, \ldots,t-m_1\nu-\sigma_{p_1},\nonumber \\[2pt]&&{\kern12pt} \ldots,t-m_l\nu-\sigma_{p_l},\ldots,t-m_l\nu-\sigma_{p_l}) \Bigr]\,d\sigma_1 \ldots d\sigma_n. \end{array} $$
(53)

The absolute value of the expression in the square brackets is bounded above by

$$ \begin{array}{lll} &&{\kern-6pt} R\big(t-m_1\nu-\sigma_{p_1}, t-m_2\nu-\sigma_{p_2},\ldots,t-m_l\nu-\sigma_{p_l}\big)\nonumber\\[3pt] &&{\kern6pt}\times \bigl( |\sigma_1-\sigma_{p_1}|+\ldots+|\sigma_{p_1-1}-\sigma_{p_1}| +|\sigma_{p_1+1}-\sigma_{p_2}|+\nonumber \\[3pt] &&{\kern19pt} \ldots + |\sigma_{p_2-1}-\sigma_{p_2}|+\ldots + |\sigma_{p_{l-1}+1}-\sigma_{p_l}|+\nonumber\\[3pt] &&{\kern19pt} \ldots +|\sigma_{p_l-1}-\sigma_{p_l}|\bigr), \end{array} $$
(54)

where R is some non-negative integrable function of l arguments. In particular, the suprema of the partial derivatives of L in some fixed-size neighborhood of the point

$$ (t-m_1\nu-\sigma_{p_1}, \ldots,t-m_1\nu-\sigma_{p_1},\ldots,t-m_l\nu-\sigma_{p_l},\ldots,t-m_l\nu-\sigma_{p_l}) $$

will be the ingredients of R. The integral \(\int_0^\nu |\sigma_i-\sigma_j|\,d\sigma_i\) can easily be evaluated to be

$$ \int_0^\nu |\sigma_i-\sigma_j|\,d\sigma_i=\frac{\nu^2}{4}+\left(\frac{\nu}{2}-\sigma_j\right)^2, $$
(55)

which is a parabola whose maximal value in the interval \(0\le\sigma_j\le\nu\) is \(\nu^2/2\). Therefore, the difference in Eq. (52) can be bounded in absolute value by the expression

$$ \begin{array}{lll} &&\nu^{l-n}(n-l)\frac{\nu^{n+1-l}}{2} \sum\limits_{m_1=-\infty}^\infty \ldots \sum\limits_{m_l=-\infty}^\infty\int_0^\nu\ldots\nonumber\\&&{\kern12pt} \int_0^\nu R(t-m_1\nu-u_1,\ldots,t-m_l\nu-u_l)\,du_1\ldots du_l\nonumber \\[12pt]&&{\kern20pt} =\frac{(n-l)\nu}{2}\int_{-\infty}^\infty\ldots \int_{-\infty}^\infty R(v_1,\ldots,v_l)\,dv_1\ldots dv_l \nonumber\\[12pt]&&{\kern20pt} = {\mathcal O}(\nu), \end{array} $$
(56)

as promised.
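The elementary integral (55), and the resulting bound \(\nu^2/2\) used in the estimate above, can be checked with a few lines of quadrature; a minimal sketch:

```python
import numpy as np

# Check Eq. (55): the integral of |sigma_i - sigma_j| over sigma_i in [0, nu]
# equals nu^2/4 + (nu/2 - sigma_j)^2; its maximum over sigma_j in [0, nu]
# is nu^2/2, attained at the endpoints.
nu, n = 0.3, 200_000
sigma_i = (np.arange(n) + 0.5) * nu / n        # midpoint quadrature nodes
for sigma_j in (0.0, 0.1, 0.25, nu):
    quad = np.abs(sigma_i - sigma_j).mean() * nu
    closed = nu**2 / 4 + (nu / 2 - sigma_j) ** 2
    print(sigma_j, quad, closed)               # the two columns agree
```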

Reconstruction of the Wiener kernels by using binary white noise with finite presentation time

We proceed to study the error incurred by using only the binary white noise \(s_\nu(t)\) in Eq. (47), instead of all the realizations of the Gaussian white noise \(s_0(t)\), to probe the system (8). We first recall that the Wiener representation of Eq. (8) consists of a series

$$\label{wienerseries} r[s](t)=P_0[s]+P_1[s](t)+P_2[s](t)+\cdots+P_n[s](t)+\cdots, $$
(57)

in which the terms \(P_n[s](t)\) are called the n-th order Wiener functionals. Each is an n-th order Volterra polynomial in the stimulus s(t) with the leading-order Volterra kernel \(W_n(T_1,\ldots,T_n)\), which we will call the n-th leading-order Wiener kernel (cf. Schetzen 1980). The functionals \(P_n[s](t)\) are pairwise orthogonal with respect to the inner product \(\left\langle P_m[s_0](t) P_n[s_0](t)\right\rangle\), with \(\left\langle \cdot \right\rangle\) denoting ensemble averaging over all possible realizations of the Gaussian white-noise stimulus \(s_0(t)\). It is known that the n-th leading-order Wiener kernel uniquely determines the entire Wiener functional \(P_n[s](t)\), that is, all the lower-order Volterra kernels in it; see Wiener (1958), Schetzen (1980), and Victor (1992).

In practice, one probes for the Wiener kernels \(W_m(T_1,\ldots,T_m)\) by using the Lee–Schetzen cross-correlation algorithm (Schetzen 1980). In particular, for pairwise distinct values \(T_1,\ldots,T_m\), the m-th leading-order Wiener kernel \(W_m(T_1,\ldots,T_m)\) can be computed by cross-correlation as

$$\label{kernels} W_m(T_1,\ldots,T_m)= \frac{1}{A^{2m} m!}\left\langle r[s_0](t) s_0(t-T_1)\ldots s_0(t-T_m)\right\rangle , $$
(58)

where \(s_0(t)\) is zero-mean Gaussian white noise [it therefore has the correlation properties (46)]. Assuming ergodicity, one can write Eq. (58) in the form

$$ \begin{array}{lll}\label{kernels1} &&{\kern-6pt} W_m(T_1,\ldots,T_m)\nonumber\\&&= \frac{1}{A^{2m} m!} \lim\limits_{T\to\infty} \frac{1}{2T}\int_{-T}^T r[s_0](t)\, s_0(t-T_1)\ldots s_0(t-T_m)\,dt\nonumber\\\end{array}$$
(59)

for any particular realization of the white noise \(s_0(t)\). For arbitrary \(T_1,\ldots,T_m\), in order to avoid singularities when some of the \(T_j\)'s are equal, we must use the formula

$$ \begin{array}{lll}\label{kernels2} &&{\kern-6pt} W_m(T_1,\ldots,T_m)\nonumber\\&& =\! \frac{1}{A^{2m} m!}\left\langle \left\{r[s_0](t)\!-\!r_{m-1}[s_0](t)\right\} s_0(t\!-\!T_1)\ldots s_0(t\!-\!T_m)\right\rangle ,\nonumber\\ \end{array} $$
(60)

where \(r_{m-1}[s_0](t)\) is the (m − 1)-st order Wiener approximation of the response \(r[s_0](t)\). We will not use this last result here, but will instead handle the singularities explicitly. In any given system, the leading-order Wiener kernels completely determine the dependence of the response r[s](t) on any stimulus s(t) via the Wiener series (57). For the important case m = 1, the corresponding Wiener functional contains just one term and equals

$$ P_1[s](t) = \int_0^\infty W_1(T_1) s(t-T_1)\,dT_1, $$
(61)

with the first-order Wiener kernel \(W_1(T_1)\) computed from Eq. (9).
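The following sketch illustrates the cross-correlation formulas (58) and (59) on a toy linear-nonlinear system probed with symmetric binary noise; the filter, the static nonlinearity, and all parameter values are assumptions made only for the example:

```python
import numpy as np

# Estimate the first-order kernel of a toy LN system via the Lee-Schetzen
# formula (58) with m = 1, using the time average (59) over one realization.
rng = np.random.default_rng(2)
nu, A = 0.005, 1.0                 # presentation time and stimulus power
n_steps, n_lags = 1_000_000, 60

# Symmetric binary noise (+-A/sqrt(nu) per interval), a rescaled p = q = 1/2
# instance of Eq. (47); its boxcar correlation has integrated weight A^2.
s = rng.choice([1.0, -1.0], size=n_steps) * (A / np.sqrt(nu))

h = np.exp(-np.arange(n_lags) * nu / 0.05)     # assumed linear filter
u = nu * np.convolve(s, h)[:n_steps]           # filtered stimulus
r = np.maximum(u + 0.2 * u**2, 0.0)            # assumed static nonlinearity

# W_1(T_k) ~ <r(t) s(t - T_k)> / A^2; up to the O(nu^(1/2)) discretization
# error analyzed above, the estimate is proportional to the filter h.
W1 = np.array([np.dot(r[k:], s[:n_steps - k]) / (n_steps - k)
               for k in range(n_lags)]) / A**2
```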

We now take a closer look at the expression (58). Let us use the notation \(\tilde W_m\) for the right-hand side of Eq. (58) with \(s_0(t)\) replaced by \(s_\nu(t)\), regardless of the mutual position of \(T_1,\ldots,T_m\). Using Eq. (8), we have

$$ \begin{array}{lll}\label{probe1} &&{\kern-6pt} \tilde W_m(T_1,\ldots,T_m)\nonumber\\ &&{\kern6pt} =\left\langle r[s_\nu](t)s_\nu(t\!-\!T_1)\ldots s_\nu(t\!-\!T_m)\right\rangle \nonumber \\[2pt] &&{\kern6pt} = L_0 \left\langle s_\nu(t\!-\!T_1)\ldots s_\nu(t\!-\!T_m)\right\rangle\nonumber\\ &&{\kern18pt} +\int_0^\infty L_1(\tau_1) \left\langle s_\nu(t\!-\!T_1)\ldots s_\nu(t\!-\!T_m)s_\nu(t\!-\!\tau_1)\right\rangle\,d\tau_1\nonumber \\[2pt]&&{\kern18pt} +\frac{1}{2!}\int_0^\infty \int_0^\infty L_2(\tau_1,\tau_2) \big\langle s_\nu(t\!-\!T_1)\ldots s_\nu(t\!-\!T_m)\nonumber\\ &&{\kern18pt}\times s_\nu(t\!-\!\tau_1)s_\nu(t\!-\!\tau_2)\big\rangle\,d\tau_1d\tau_2 +\ldots \nonumber \\[2pt]&&{\kern18pt} + \frac{1}{n!}\int_0^\infty\!\!\ldots\! \int_0^\infty\!\! L_n(\tau_1,\ldots,\tau_n)\left\langle s_\nu(t\!-\!T_1)\ldots s_\nu(t\!-\!T_m) \right.\nonumber \\[2pt]&&{\kern18pt} \left. \times s_\nu(t\!-\!\tau_1)\ldots s_\nu(t\!-\!\tau_n)\right\rangle \,d\tau_1 \ldots d\tau_n + \ldots. \nonumber\\ \end{array} $$
(62)

(Here and from now on, we abuse notation and let \(\left\langle \cdot \right\rangle\) denote ensemble averaging only over the realizations of the noise \(s_\nu(t)\), with fixed \(\nu\).) Extending all the integrals to the entire real axis by defining the kernels to vanish if any of their arguments is negative, and using Eq. (48c), we find, as in showing Eq. (50), that the general term in formula (62) equals a linear combination of terms of the form

$$ \begin{array}{lll}&&{\kern-7pt} \left\langle s_\nu(t-T_1)\ldots s_\nu(t-T_{q_1})\right\rangle \ldots\left\langle s_\nu(t-T_{q_{i-1}+1})\ldots s_\nu(t-T_{q_i})\right\rangle\nonumber \\[6pt]&&{\kern.7pc} \int_{-\infty}^\infty\ldots \int_{-\infty}^\infty L_n(\tau_1,\ldots,\tau_n)\big\langle s_\nu(t-\tau_1) \ldots s_\nu(t-\tau_{p_1})\big\rangle \ldots\nonumber \\[6pt]&&{\kern40pt} \times\left\langle s_\nu(t\!-\!\tau_{p_{j\!-\!1}+1})\ldots s_\nu(t\!-\!\tau_{p_j})\right\rangle \big\langle s_\nu(t\!-\!T_{q_i+1})\nonumber \ldots\\[6pt]&&{\kern40pt} \times s_\nu(t\!-\!T_{q_{i+1}})s_\nu(t\!-\!\tau_{p_j+1})\ldots s_\nu(t\!-\!\tau_{p_{j+1}})\big\rangle\dots\nonumber \\[6pt]&& {\kern40pt} \times \big\langle s_\nu(t\!-\!T_{q_{h-1}+1})\ldots s_\nu(t-T_{q_h})s_\nu(t-\tau_{p_{k-1}+1})\ldots\nonumber\\&&{\kern40pt} \times s_\nu(t-\tau_{p_k})\big\rangle d\tau_1\ldots\, d\tau_n. \end{array} $$
(63)

Due to the symmetry of the kernels \(L_n\) and \(\tilde W_m\) in all their arguments, all the indices of the \(T_i\)'s and \(\tau_j\)'s can be considered ordered, so that \(q_h=m\) and \(p_k=n\). Since, again, the smallest number of factors inside a pair of angle brackets is two, we must have \(i\le m/2\), \(j\le n/2\), and \(i+j+l\le (m+n)/2\), where l is the number of correlations in which both \(s_\nu(t-T_\alpha)\)'s and \(s_\nu(t-\tau_\beta)\)'s appear.

Let us define the functions \(B(t_1,\ldots,t_p)\) as follows. First, \(B(t)=1\). Second, for p > 1, \(B(t_1,\ldots,t_p)=1\) if \(\kappa\nu<t_1,\ldots,t_p<(\kappa+1)\nu\) for some integer \(\kappa\), and \(B(t_1,\ldots,t_p)=0\) otherwise.

Returning to Eq. (63), using Eq. (48b), we find that Eq. (63) is proportional to

$$ \begin{array}{lll}\label{nwin} &&{\kern-6pt} \nu^{-\frac{n+m}{2}}B(T_1,\ldots,T_{q_1})\ldots B(T_{q_{i-1}+1},\ldots,T_{q_i})B(T_{q_i+1},\ldots,T_{q_{i+1}})\ldots B(T_{q_{h-1}+1},\ldots,T_{q_h})\nonumber \\[3pt] &&{\kern2pt} \times\sum\limits_{m_1=-\infty}^\infty\ldots \sum\limits_{m_i=-\infty}^\infty\int_0^\nu\ldots\int_0^\nu L_n(t-m_1\nu-\sigma_1, \ldots, t-m_1\nu-\sigma_{p_1},\ldots,\nonumber \\[3pt] &&{\kern6pt} t-m_i\nu-\sigma_{p_{i-1}+1}, \ldots, t-m_i\nu-\sigma_{p_i}, t-k_1\nu-\sigma_{p_i+1},\ldots,t-k_1\nu-\sigma_{p_{i+1}},\ldots ,\nonumber \\[3pt] &&{\kern6pt} t-k_l\nu-\sigma_{p_{k-1}+1},\ldots,t-k_l\nu-\sigma_{p_k})\,d\sigma_1\ldots d\sigma_n \nonumber \\ &&{\kern18pt} =\nu^{-\frac{n+m}{2}}B(T_1,\ldots,T_{q_1})\ldots B(T_{q_{i-1}+1},\ldots,T_{q_i})B(T_{q_i+1},\ldots,T_{q_{i+1}})\ldots B(T_{q_{h-1}+1},\ldots,T_{q_h})\nonumber \\[3pt] &&{\kern30pt} \times\nu^{p_j-j}\nu^{n-p_j}\Biggl[ \int_{-\infty}^\infty\ldots \int_{-\infty}^\infty L_n(u_1,\ldots,u_1,\ldots,u_j,\ldots u_j,t\!-\!k_1\nu,\ldots,t\!-\!k_1\nu,\ldots\nonumber\\[3pt]&&{\kern90pt} \ldots,t\!-\!k_l\nu,\ldots,t\!-\!k_l\nu)\,du_1\ldots du_j \!+\!{\mathcal O}(\nu)\!\Biggr] \nonumber \\ &&{\kern18pt} =\nu^{\frac{n-m}{2}-j}B(T_1,\ldots,T_{q_1})\ldots B(T_{q_{i-1}+1},\ldots,T_{q_i})B(T_{q_i+1},\ldots,T_{q_{i+1}})\ldots B(T_{q_{h-1}+1},\ldots,T_{q_h}) \Biggl[ \int_{-\infty}^\infty\ldots \int_{-\infty}^\infty \nonumber \\[3pt] &&{\kern30pt} \times\! L_n(u_1,\ldots,u_1,\ldots,u_j,\ldots u_j,t-k_1\nu,\ldots,t-k_1\nu,\ldots,t-k_l\nu,\ldots,t-k_l\nu)\,du_1\ldots du_j + {\mathcal O}(\nu) \Biggr]. \end{array} $$
(64)

Here,

$$ \begin{array}{l} k_1\nu<T_{q_i+1},\ldots,T_{q_{i+1}}<(k_1+1)\nu,\\ \quad\vdots\\ k_l\nu<T_{q_{h-1}+1},\ldots,T_{q_h}<(k_l+1)\nu. \end{array}$$

Integrating expression (64) against a function of \(T_1,\ldots,T_m\), we see that the result scales as \(\nu^{\frac{n+m}{2}-i-j-l}\). This result can never become singular, and \({\mathcal O}(1)\) terms arise precisely when \(i+j+l=(n+m)/2\), that is, when all the correlations are among pairs, which gives the correct integral against the leading-order Wiener kernel \(W_m(T_1,\ldots,T_m)\) in the limit as \(\nu\to 0\). The first-order error is \({\mathcal O}(\nu^{\frac{1}{2}})\).

Expression (64) clearly can have singularities as \(\nu\to 0\). Since \(j\le n/2\), the worst possible singularity is of order \(\nu^{-\frac{m}{2}}\); in this case, we must also have l = 0. As explained in the previous paragraph, such a singularity may or may not contribute to an integral against a function of \(T_1,\ldots,T_m\), depending on the structure of the product of the B's. The singularities whose integrals are nonzero must eventually be subtracted from the stimulus in order to determine the n-th leading-order Wiener kernel as described by formula (62). Lower-order singularities pollute the discrete leading-order Wiener kernel pointwise, but are harmless in the actual Wiener representation of the system, and can therefore simply be discarded.

Let us finally concentrate more specifically on the first Wiener kernel, \(\tilde W_1(T_1)\). For this kernel, m = 1, and therefore necessarily also l = 1. Terms of \({\mathcal O}(1)\) arise for odd n and j = (n − 1)/2; these sum up to the first-order Wiener kernel \(W_1(T_1)\). The \({\mathcal O}(\nu^{\frac{1}{2}})\) terms arise for even n, and comprise the lowest-order error. There are no singular terms in this case.

3 Volterra–Wiener theory for correlated white noise stimuli

In this section, we present the detailed derivations of the results of Section 3, which connect the quantities related to orientation tuning in V1 under the stimulus (1), used in the experiments of Ringach et al. (1997a, 2003), with the Volterra–Wiener theory of nonlinear systems.

Experimental stimulus as a collection of correlated binary noise and its Gaussian white noise limit

First, we establish in what manner the stimulus (1) can be mapped into a set of IJ binary random noise stimuli. We begin by rewriting Eq. (1) in the form

$$ \label{flashchar} {\mathcal I}({\bf X},t) =\!{\mathcal I}_0\nu^{-1/2}\left\{1\!+\!\varepsilon \sum\limits_{i=1}^I\sum\limits_{j=1}^J\chi_{\theta_i,\phi_j}(t)\sin[{\bf k}(\theta_i)\cdot{\bf X}\!-\!\phi_j]\right\}, $$
(65)

where each \(\chi_{\theta_i,\phi_j}(t)\) is the indicator function of the set of intervals in which the grating with the orientation angle \(\theta_i\) and the spatial phase \(\phi_j\) was presented. In other words, \(\chi_{\theta_i,\phi_j}(t)\) equals 1 during those time intervals in which the grating with the orientation-phase pair \((\theta_i,\phi_j)\) was presented, and 0 during the time intervals in which any other grating was presented.

We now further rewrite Eq. (65) as

$$ \label{flashwalk} {\mathcal I}({\bf X},t) =\!{\mathcal I}_0\!\left\{\!\nu^{-1/2}\!+\!\varepsilon \sum\limits_{i=1}^I\sum\limits_{j=1}^J \left[s_{\nu i j}(t)\!+\!\frac{1}{\sqrt{\nu}\, I J} \right]\sin[{\bf k}(\theta_i)\cdot{\bf X}-\phi_j]\!\right\}, $$
(66)

where

$$ s_{\nu i j}(t)\!=\!\nu^{-1/2}\left[\!\chi_{\theta_i,\phi_j}(t)\!-\!\frac{1}{IJ}\!\right],\;\, i\!=\!1,\ldots,I,\;\, j\!=\!1,\ldots,J, $$

is now indeed a collection of IJ binary noise processes. The process \(s_{\nu ij}\) corresponds to the flashing of the grating with the orientation \(\theta_i\) and spatial phase \(\phi_j\). Since, within any given time interval \(k\nu<t<(k+1)\nu\), a grating with the orientation-phase pair \((\theta_i,\phi_j)\) is presented with probability 1/IJ, and thus not presented with probability 1 − 1/IJ, each process \(s_{\nu ij}\) assumes the values

$$\label{onewalk} s_{\nu ij}(t)=\frac{1}{\sqrt\nu}\left\{\begin{array}{ll}p,\quad &\mbox{with probability } q,\\[4pt] -q,\quad &\mbox{with probability } p,\end{array}\right.\qquad k\nu< t < (k+1)\nu,\quad k=\ldots,-1,0,1,\ldots, $$
(67)

where

$$q=\frac{1}{IJ}, \qquad p=1-\frac{1}{IJ}.$$

The binary noise processes \(s_{\nu ij}\) in Eqs. (66) and (67) must necessarily be correlated, because only one grating is presented at any given time. In particular, two processes \(s_{\nu ij}(t)\) and \(s_{\nu mn}(t)\) are correlated through the following conditional probabilities: At any given time t, if \(s_{\nu ij}(t)=p/\sqrt\nu\), then \(s_{\nu mn}(t)=-q/\sqrt\nu\) with probability 1; that is, if the grating with the orientation \(\theta_i\) and spatial phase \(\phi_j\) is presented, no other grating can be presented at the same time. If \(s_{\nu ij}(t)=-q/\sqrt\nu\), then

$$\label{secondwalk} s_{\nu mn}(t)=\frac{1}{\sqrt\nu}\left\{\begin{array}{ll} p,\quad &\mbox{with probability } \displaystyle \frac{1}{IJ-1},\\[10pt] -q,\quad &\mbox{with probability } \displaystyle 1- \frac{1}{IJ-1},\end{array}\right. $$
(68)

which means that if the grating with the orientation \(\theta_i\) and spatial phase \(\phi_j\) is not presented, any given other grating is presented with probability 1/(IJ − 1). The values of \(s_{\nu ij}(t_1)\) and \(s_{\nu mn}(t_2)\) in different intervals, \(k_1\nu<t_1<(k_1+1)\nu\) and \(k_2\nu<t_2<(k_2+1)\nu\) with \(k_1\neq k_2\), are independent, as gratings in such intervals are presented randomly and independently.

From the above, noting that \(s_{\nu ij}\) is scaled as \(1/\sqrt\nu\), we can show that, in the limit as \(\nu\to 0\), the binary processes \(s_{\nu ij}(t)\) tend to correlated Gaussian white-noise signals \(s_{ij}(t)\), with

$$\label{zeromean1} \left\langle s_{ij} (t)\right\rangle_{\vec s}=0, $$
(69a)

and correlation properties

$$ \left\langle s_{ij}(t_1) s_{ij}(t_2)\right\rangle_{\vec s} = \frac{IJ-1}{I^2J^2}\; \delta(t_1-t_2), $$
(69b)
$$ \left\langle s_{ij}(t_1) s_{kl}(t_2)\right\rangle_{\vec s} = -\frac{1}{I^2J^2}\; \delta(t_1-t_2), \qquad i\neq k \mbox{ or } j\neq l, $$
(69c)
$$ \left\langle s_{i_1j_1}(t_1) \ldots s_{i_{2n+1}j_{2n+1}}(t_{2n+1})\right\rangle_{\vec s} = 0, $$
(69d)
$$ \left\langle s_{i_1j_1}(t_1) \ldots s_{i_{2n}j_{2n}}(t_{2n})\right\rangle_{\vec s} = \sum\prod \left\langle s_{i_lj_l}(t_l) s_{i_mj_m}(t_m)\right\rangle_{\vec s}. $$
(69e)

where δ(·) denotes the Dirac delta function, \(\langle\cdot\rangle_{\vec s}\) denotes ensemble averaging over all realizations of the collection \({\vec s}=\{s_{kl}\mid k=1,\ldots,I,\;l=1,\ldots,J\}\) of the correlated noise stimuli, and the sum in the last equation runs over all distinct products of n two-point correlations. Observe that the correlated noise signals \(s_{ij}(t)\) decorrelate as I→∞ or J→∞. We do not present the derivation of Eq. (69) here, but only note that it is similar to the derivation of formulas (48) in Appendix 2 in the \(\nu\to 0\) limit. We do note, however, another important property of the noises \(s_{ij}(t)\), namely

$$\sum\limits_{i=1}^I\sum\limits_{j=1}^J s_{ij}(t) = 0. $$
(70)
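A minimal sketch that builds the correlated binary noises (67) from a random presentation sequence and checks Eqs. (69b), (69c), and (70); the small values of I, J, and \(\nu\) are assumptions made for the example:

```python
import numpy as np

# Build the I*J correlated binary noises s_nu_ij of Eq. (67): at each
# presentation interval exactly one orientation-phase pair is shown.
rng = np.random.default_rng(3)
I, J, nu = 4, 2, 0.02
n_frames = 500_000

shown = rng.integers(I * J, size=n_frames)     # index of the shown grating
chi = np.zeros((I * J, n_frames))
chi[shown, np.arange(n_frames)] = 1.0          # indicator functions chi_ij
s = (chi - 1.0 / (I * J)) / np.sqrt(nu)        # Eq. (67)

print(np.abs(s.sum(axis=0)).max())             # identically 0, Eq. (70)

# Equal-time covariances, multiplied by nu to expose the delta-correlation
# coefficients of Eqs. (69b) and (69c).
C = (s @ s.T) / n_frames * nu
print(C[0, 0], (I * J - 1) / (I * J) ** 2)     # ~ (IJ-1)/(I^2 J^2)
print(C[0, 1], -1.0 / (I * J) ** 2)            # ~ -1/(I^2 J^2)
```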

Kernel orthogonalization for a collection of correlated white noise stimuli

We now outline the kernel orthogonalization procedure with respect to the set of correlated white noises \(s_{ij}(t)\). This procedure leads to the mutually orthogonal Wiener functionals that can be used for the Wiener-series analog of formula (10).

First, any zeroth-order Wiener functional \(p_0[\vec S](t)\) is just a constant,

$$ p_0[\vec S](t) = w_0,$$

for any collection \(\vec S\) of stimuli \(s_{\nu ij}(t)\) or \(s_{ij}(t)\). Any first-order Wiener functional

$$p_1[\vec S](t)= \sum\limits_{i=1}^I \sum\limits_{j=1}^J \int_0^\infty w_1^{ij}(\tau_1) S_{ij}(t-\tau_1)\,d\tau_1 + w_{1,0}$$

must satisfy the orthogonality condition \(\left\langle p_1[\vec s](t)\, p_0[\vec s](t)\right\rangle_{\vec s}=0\) for any zeroth-order Wiener functional \(p_0[\vec S](t)\), with \(\langle\cdot\rangle_{\vec s}\) again denoting ensemble averaging over all realizations of the collection \({\vec s}\) of the correlated Gaussian white-noise stimuli \(s_{ij}(t)\). Equation (69a) immediately implies \(w_{1,0}=0\), and so \(p_1[\vec S](t)\) must have the form

$$\label{firstfunc} p_1[\vec S](t)= \sum\limits_{i=1}^I \sum\limits_{j=1}^J \int_0^\infty w_1^{ij}(\tau_1) S_{ij}(t-\tau_1)\,d\tau_1.$$
(71)

Moreover, by Eq. (70), which holds for the binary noises \(s_{\nu ij}(t)\) as well as for the Gaussian noises \(s_{ij}(t)\), and thus for all stimuli \(\vec S\) under our consideration, we see that we can subtract any function of t from all the kernels in Eq. (71) without changing the result. Therefore, we can assume with no loss of generality that

$$\label{zerosumker} \sum\limits_{i=1}^I \sum\limits_{j=1}^J w_1^{ij}(\tau_1)=0.$$
(72)

Likewise, any second-order Wiener functional \(p_2[\vec S](t)\) must be orthogonal to all zeroth- and first-order Wiener functionals and therefore must obey the equations \(\left\langle p_2[\vec s](t) p_0[\vec s](t)\right\rangle_{\vec s}=0\) and \(\left\langle p_2[\vec s](t) p_1[\vec s](t)\right\rangle_{\vec s}=0\) for all the realizations of the correlated Gaussian white noises \(\vec s(t)=\{s_{ij}(t)\}\). The correlation properties (69) and an argument similar to that leading to Eq. (72) imply the form

$$ \begin{array}{lll}p_2[\vec S](t)\!&=&\!\sum\limits_{i,k=1}^I\sum\limits_{j,l=1}^J \!\int_0^\infty \!\!\!\int_0^\infty \!\!w_2^{ik,jl}(\tau_1,\tau_2)S_{ij}(t\!-\!\tau_1)S_{kl}(t\!-\!\tau_2)\,d\tau_1 d\tau_2\\[6pt] && -\frac{1}{IJ}\sum\limits_{i=1}^I\sum\limits_{j=1}^J \int_0^\infty w_2^{ii,jj}(\tau_1,\tau_1)\,d\tau_1\\[6pt] &&+\frac{1}{I^2J^2} \sum\limits_{i,k=1}^I\sum\limits_{j,l=1}^J \int_0^\infty w_2^{ik,jl}(\tau_1,\tau_1)\,d\tau_1 . \end{array} $$

Formula (70) now implies

$$\sum\limits_{i=1}^I\sum\limits_{j=1}^J w_2^{ik,jl}(\tau_1,\tau_2)= \sum\limits_{k=1}^I\sum\limits_{l=1}^J w_2^{ik,jl}(\tau_1,\tau_2)=0, $$

so that finally

$$ \begin{array}{lll}p_2[\vec S](t)\!&=&\!\sum\limits_{i,k=1}^I\!\sum\limits_{j,l=1}^J \!\int_0^\infty \!\!\int_0^\infty\!\! w_2^{ik,jl}(\tau_1,\tau_2)S_{ij}(t\!-\!\tau_1)S_{kl}(t\!-\!\tau_2)\,d\tau_1 d\tau_2\\[4pt] &&-\frac{1}{IJ}\sum\limits_{i=1}^I\sum\limits_{j=1}^J \int_0^\infty w_2^{ii,jj}(\tau_1,\tau_1)\,d\tau_1 . \end{array}$$

We proceed in the same vein at higher orders.

First-order Lee–Schetzen formula for a collection of correlated white noise stimuli

Here, we develop an analog of the Lee–Schetzen formula (9) for the firing rate in our correlated-noise case. The orthogonalization developed in the previous subsection ensures that the terms of order two and higher are orthogonal to the lower-order terms. Thus, the terms containing higher-order Wiener kernels vanish when we correlate the firing rate \(m\left[{\vec s}\,\right](t,\Theta,\Phi)\) against a single copy of the noise, \(s_{ij}(t-\tau)\). Using formulas (69), we compute

$$ \begin{array}{lll}\label{ratecorrel} \left\langle m\left[{\vec s}\,\right](t,\Theta,\Phi)\,s_{ij}(t-\tau)\right\rangle_{\vec s} &=&\sum\limits_{k=1}^I\sum\limits_{l=1}^J \int_0^\infty W_1^{k,l}(\tau_1,\Theta,\Phi)\left\langle s_{kl}(t-\tau_1)\,s_{ij}(t-\tau)\right\rangle_{\vec s}\,d\tau_1\\[6pt] &=&\frac{1}{IJ}\, W_1^{i,j}(\tau,\Theta,\Phi) - \frac{1}{(IJ)^2}\sum\limits_{k=1}^I\sum\limits_{l=1}^J W_1^{k,l}(\tau,\Theta,\Phi). \end{array} $$
(73)

This is a consistent singular linear system: summing both sides over i and j yields the identity 0 = 0, which follows from Eq. (70) for the left-hand side of the first line, and by simple algebra for the last line. The system's singular nature implies that the solution of Eq. (73) is not unique, so we are free to impose one additional linear constraint connecting the kernels \(W_1^{i,j}(\tau,\Theta,\Phi)\). As in the previous subsection, with no loss of generality, we can use Eq. (72), that is,

$$\sum\limits_{k=1}^I\sum\limits_{l=1}^J W_1^{k,l}(\tau,\Theta,\Phi)=0,$$

and we find for the first-order Wiener kernels the formulas

$$\label{1stker} W_1^{i,j}(\tau,\Theta,\Phi) = IJ\left\langle m\left[{\vec s}\,\right](t,\Theta,\Phi)s_{ij}(t-\tau)\right\rangle_{\vec s}, $$
(74)

which is the same as Eq. (11a). Here, as in Eq. (69), \(\langle \cdot\rangle_{\vec s}\) denotes ensemble averaging over all possible realizations of the random noise collection \(\vec s\).
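To illustrate Eq. (74), with the ensemble average replaced by a time average over one long realization as ergodicity permits, the following sketch drives a rectified-linear rate model, assumed only for the example, with the flashed-grating noise and recovers the first-order kernels by reverse-time correlation:

```python
import numpy as np

# Reverse-time correlation estimate of W_1^{ij}(tau) via Eq. (74).
rng = np.random.default_rng(4)
I, J, nu = 4, 2, 0.02
n_frames, n_lags = 500_000, 40

shown = rng.integers(I * J, size=n_frames)
chi = np.zeros((I * J, n_frames))
chi[shown, np.arange(n_frames)] = 1.0
s = (chi - 1.0 / (I * J)) / np.sqrt(nu)            # Eq. (67)

g = np.cos(2 * np.pi * np.arange(I * J) / (I * J)) # assumed tuning weights
kern = np.exp(-np.arange(n_lags) * nu / 0.05)      # assumed temporal kernel
drive = nu * sum(np.convolve(s[a], g[a] * kern)[:n_frames]
                 for a in range(I * J))
rate = np.maximum(1.0 + drive, 0.0)                # rectified firing rate

# W_1^{ij}(tau) = IJ <m(t) s_ij(t - tau)>, estimated by a time average;
# row a of W1 recovers (approximately) g[a] * kern.
W1 = I * J * np.array([[np.dot(rate[k:], s[a, :n_frames - k]) / (n_frames - k)
                        for k in range(n_lags)]
                       for a in range(I * J)])
```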

Relationship between the experimental data and Wiener kernels

We now discuss the relationship between the Volterra–Wiener representations of the firing rate on the one hand, and the correlation functions (3) and the “interval-specific firing rate averages” (7) on the other. Since the expressions (3) and (7) involve the stimulus in a linear fashion, we are indeed allowed to focus on the first-order, that is, linear, kernels, given by formula (74). By ergodicity, we have

$$ \begin{array}{lll}\label{1stker1} W_1^{i,j}(\tau,\Theta,\Phi) &=& IJ\lim\limits_{T\to\infty}\frac{1}{2T}\int_{-T}^T m\left[{\vec s}\,\right](t,\Theta,\Phi)\notag\\ &&{\kern54pt} \times s_{ij}(t-\tau)\, dt \end{array} $$
(75)

for any given noise collection \(\vec s\) with the ij-th component \(s_{ij}\) [this is Eq. (11b)]. We can rewrite this equation in the form

$$ \begin{array}{lll}\label{1stker2} W_1^{i,j}(\tau,\Theta,\Phi) &\!=\!& IJ\lim\limits_{T\to\infty}\lim\limits_{\nu\to 0}\frac{1}{2T\sqrt\nu}\!\int_{-T}^T m\!\left(t,\Theta,\Phi,\vec\theta,\vec\phi\right)\nonumber\\ &&\times\!\left[\chi_{\theta_i ,\phi_j}(t\!-\!\tau)\!-\!\frac{1}{IJ}\right]\, dt, \end{array} $$
(76)

where \(\chi_{\theta_i,\phi_j}(t)\) is the indicator function of the set of intervals on which the angle \(\theta_i\) and the phase \(\phi_j\) were presented simultaneously. If we rewrite Eq. (3a) as

$${\mathcal N}(\tau,\theta_i,\phi_j) \!=\!\lim\limits_{T\to\infty} \frac{1}{2T}\int_{-T}^T \chi_{\theta_i,\phi_j}(t\!-\!\tau)\, m (t,\Theta\!,\Phi\!,\vec\theta\!,\vec\phi)\,dt,$$

we find

$$ \begin{array}{lll}\label{1stker33} W_1^{i,j}(\tau,\Theta,\Phi) & = &\lim\limits_{\nu\to 0}\frac{1}{\sqrt\nu}\Bigl[IJ{\mathcal N}(\tau,\theta_i,\phi_j)-\bar m (\Theta,\Phi)\Bigr]\nonumber\\[12pt] & = &\lim\limits_{\nu\to 0}\frac{1}{\sqrt\nu}\Bigl[ N(\tau,\theta_i,\phi_j)-\langle N\rangle_{\tau,\theta^{(0)},\phi^{(0)}}\Bigr],\nonumber\\ \end{array} $$
(77)

with the “interval-specific firing rate average,” \(N(\tau,\theta_i,\phi_j)\), defined in Eq. (7a). Formula (77) is the same as Eq. (12). Here, \(\bar m\) denotes the time-averaged firing rate, and \(\langle\cdot\rangle_{\tau,\theta^{(0)},\phi^{(0)}}\) denotes averaging over all the arguments of the function \(N(\tau,\theta^{(0)},\phi^{(0)})\). The last line in Eq. (77) holds because, due to ergodicity, the time-averaged firing rate \(\bar m\) is independent of the presentation sequences \(\vec \theta\) and \(\vec\phi\). Therefore,

$$\label{allaverages}\bar m = \overline {\langle m\rangle_{\vec\theta,\vec\phi}},$$
(78)

where the overbar represents time averaging and \(\langle \cdot\rangle_{\vec\theta,\vec\phi}\) ensemble averaging over all the components of \(\vec \theta\) and \(\vec\phi\). But, by Eq. (7a), the right-hand side of Eq. (78) clearly equals \(\langle N\rangle_{\tau,\theta^{(0)},\phi^{(0)}}\).

Likewise, we consider the correlation function \({\mathcal M}(\tau,\theta_i)\) in Eq. (3b), corresponding to the process of finding the “preferred-phase averaged” firing rate \(\left\langle m(t,\Theta,\Phi)\right\rangle_\Phi\). As explained at the end of Section 3, from the definitions (3), and in accordance with the fact that in the experiment the phases \(\phi_j\) are averaged out, we see that the correlation functions \({\mathcal N}(\tau,\theta_i,\phi_j)\) and \({\mathcal M}(\tau,\theta_i)\) are related simply by the equation

$${\mathcal M}(\tau,\theta_i)=\sum\limits_{j=1}^J {\mathcal N}(\tau,\theta_i,\phi_j).$$

Therefore, the first Wiener kernel corresponding to \({\mathcal M}(\tau,\theta_i)\) is

$$ \begin{array}{lll}\label{1stkeravg2} W_1^i(\tau,\Theta)&=&\frac{1}{J}\sum\limits_{j=1}^J W_1^{i,j}(\tau,\Theta,\Phi)\nonumber \\[6pt]&=& \lim\limits_{\nu\to 0}\frac{1}{\sqrt\nu}\Bigl[I{\mathcal M}(\tau,\theta_i)-\bar m(\Theta,\Phi)\Bigr]\nonumber\\[6pt]& =&\lim\limits_{\nu\to 0}\frac{1}{\sqrt\nu}\Bigl[M(\tau,\theta_i)-\bar m(\Theta,\Phi)\Bigr] \nonumber\\[6pt]& =&\lim\limits_{\nu\to 0}\frac{1}{\sqrt\nu}\Bigl[M(\tau,\theta_i)-\langle M\rangle_{\tau,\theta^{(0)}}\Bigr], \end{array} $$
(79)

where \(M(\tau,\theta_i)\) is the interval-specific firing rate average (5b). This is Eq. (13).

Comparison with previous results

We present a very brief comparison with the results of Ringach et al. (1997b) and Nykamp and Ringach (2002), and restrict ourselves to the limit as ν→ 0. The result of Ringach et al. (1997b) holds for neuronal models that consist of a linear filter followed by a static nonlinearity. This is indeed the case for the leaky feed-forward models (17) and (18). Since \(\int_{-\infty}^\infty G_{lgn}(t)\,dt=0\), the effect of the linear filter in Eq. (19) on the stimulus (1) is the same as if Eq. (1) consisted of the spatially mutually-orthogonal gratings \({\mathcal I}_0\varepsilon\nu^{-1/2}\sin[{\bf k}(\theta_i)\cdot{\bf X}-\phi_j]\), drawn at each time with uniform probability. The result of Ringach et al. (1997b) states that RTC reproduces, up to a factor, the (spatial) projection of the linear filter in Eq. (19) on this orthogonal set at time τ. We have already computed this projection when we calculated formula (19); the projection coefficients equal \(({\mathcal I}_0\varepsilon\nu^{-1/2}/2)e^{-\frac{1}{4}K^2\mu^2} f(\theta_i)G_{lgn}(\tau)\cos\left[\phi_j-\Delta(\theta_i)\right]\), which is proportional to the first Wiener kernel (36). Thus, the first Wiener kernels in the theory presented in this paper are proportional to the projection coefficients in Ringach et al. (1997b). Averaging these coefficients over the phases \(\phi_j\) yields zero, in agreement with the result presented at the end of our Section 4. The physiological interpretation is also the same as that given at the end of that section: spatial-phase averaging of receptive fields that are identical in every respect except their preferred spatial phase erases their sensitivity (at least in the first approximation) due to the cancellation of the on- and off-regions. Finally, our formula (27) is in agreement with formulas (2) through (5) in Nykamp and Ringach (2002).

4 Calculation of a probability density function

In this appendix, we compute the probability density function \(p_S(s,t,\theta^{(0)})\), which appears in formula (39), of the random variable S in Eq. (37),

$$\label{rv1} S=\nu^{-1/2}\sum\limits_{n=-\infty}^\infty f(\theta^{(n)})\,{\mathcal G}(t-n\nu)\cos\phi^{(n)}, $$
(80)

in which all the phases \(\phi^{(n)}\) and all the angles \(\theta^{(n)}\) save one, \(\theta^{(0)}\), are random. Here, \(f(\theta^{(n)})\) is defined by formula (23), and \({\mathcal G}(t)\) by Eq. (25). Clearly, S is an infinite sum

$$S=\sum\limits_{n=-\infty}^\infty S^{(n)},$$

of mutually independent random variables

$$S^{(n)}=\nu^{-1/2}f(\theta^{(n)})\,{\mathcal G}(t-n\nu)\cos\phi^{(n)},\label{rvn1}$$
(81)

in which both \(\theta^{(n)}\) and \(\phi^{(n)}\) are independent random variables, distributed uniformly on the intervals [0,π] and [0,2π], respectively, provided n ≠ 0. For n = 0, in \(S^{(0)}\), only \(\phi^{(0)}\) is a random variable, distributed uniformly on the interval [0,2π], while \(\theta^{(0)}\) is fixed.
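
Before turning to exact formulas, note that S is straightforward to sample directly. The sketch below (the profile f, the time course \({\mathcal G}\), and all parameter values are illustrative stand-ins, not the paper’s calibrated choices) draws Monte Carlo realizations of the truncated sum:

```python
import numpy as np

# Monte Carlo sampler for S in Eq. (80).  The profile f, the time course
# G, and all parameter values are illustrative stand-ins, not the paper's
# calibrated choices.
rng = np.random.default_rng(3)
nu, tau = 0.05, 0.05
f = lambda th: np.exp(-th**2 / 0.25)
G = lambda t: nu * (t / tau**2) * np.exp(-t / tau)     # for t > 0

def sample_S(t, theta0, n_samples=50_000, n_lo=-60):
    # Only frames with t - n*nu > 0 contribute; G is negligible below n_lo.
    n = np.arange(n_lo, int(np.floor(t / nu)) + 1)
    Gn = G(t - n * nu)
    th = rng.uniform(0.0, np.pi, (n_samples, n.size))
    th[:, n == 0] = theta0                             # theta^(0) is fixed
    ph = rng.uniform(0.0, 2.0 * np.pi, (n_samples, n.size))
    return (f(th) * Gn * np.cos(ph)).sum(axis=1) / np.sqrt(nu)

S = sample_S(t=0.12, theta0=0.3)
print("sample mean and std:", S.mean(), S.std())
```

A histogram of these samples can be compared against the exact density (89) derived below.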

An exact formula for \(p_S(s,t,\theta^{(0)})\)

Let us first note that the probability density of the random variable \(X^{(n)} = \cos\phi^{(n)}\) is given by the formula

$$\label{cosdist} p_{\!X^{(n)}}(x)= \left\{ \begin{array}{ll} \displaystyle \frac{1}{\pi\sqrt{1-x^2}}, \quad& x^2<1, \\[18pt] 0, & x^2\geq 1 . \end{array}\right. $$
(82)

The probability density of the random variable \(Y^{(n)} = f(\theta^{(n)})\) cannot, unfortunately, be computed explicitly; however, it is not needed here. Instead, we use the formula

$$ p_{\!AB}(z)=\int_0^\infty p_A\left(\frac{z}{y}\right)p_B(y)\frac{dy}{y}, $$
(83)

which is valid if the variable B is nonnegative, to compute the probability density of the random variable \(S^{(n)}=\nu^{-1/2}f(\theta^{(n)}){\mathcal G}(t-n\nu)\cos\phi^{(n)}\), for n ≠ 0, which is

$$\label{one_term} p_{S^{(n)}}(z)=\frac{1}{\pi}\int_0^\pi p_{X^{(n)}}\left(\frac{z}{\nu^{-1/2}f(\theta^{(n)}){\mathcal G}(t-n\nu)}\right) \times\frac{d\theta^{(n)}}{\nu^{-1/2}f(\theta^{(n)})|{\mathcal G}(t-n\nu)|}. $$
(84)

In this calculation, we also used the facts that

$$p_{\theta^{(n)}}(\theta)=\left\{\begin{array}{ll}\displaystyle \frac{1}{\pi},\quad &0\leq \theta<\pi,\\[12pt] 0, & \mbox{otherwise,}\end{array}\right.$$

and that the probability density of \(S^{(n)}\) is independent of whether it contains \({\mathcal G}(t-n\nu)\) or \(|{\mathcal G}(t-n\nu)|\) as a factor.

The characteristic function of the variable \(S^{(n)}\) is the Fourier transform of the probability density (84), which, after a scaling of the integration variable and an interchange of the order of integration, becomes

$$ \Phi_n(\kappa)\!=\!\frac{1}{\pi}\!\int_0^\pi\!\! d\theta^{(n)} \!\int_{-\infty}^\infty\!\! du\,e^{i\kappa u \nu^{-1/2} f(\theta^{(n)}){\mathcal G}(t-n\nu)} p_{X^{(n)}}(u). $$
(85)

The inner integral can be written in the form

$$\frac{2}{\pi}\int_0^1 \frac{\cos\alpha u\;du}{\sqrt{1-u^2}}=J_0(\alpha),$$

with \(\alpha = \nu^{-1/2} f(\theta^{(n)}){\mathcal G}(t-n\nu)\), and \(J_0\) the Bessel function of order zero, so that

$$\label{chartwo} \Phi_n(\kappa)\!=\!\frac{1}{\pi}\!\int_0^\pi J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(n)}){\mathcal G}(t\!-\!n\nu)\Bigr)\; d\theta^{(n)}, \:\: n\!\neq\! 0.$$
(86)

In a similar fashion, since θ (0) is fixed, the characteristic function of the zeroth term in the sum (37) can be computed simply as

$$ \Phi_0(\kappa)= J_0\Bigl(\kappa\nu^{-1/2} f(\theta^{(0)}){\mathcal G}(t)\Bigr). $$
(87)

The characteristic function for the variable S in formula (37) is written as the infinite product

$$ \begin{array}{lll}\label{charfun} \Phi_S(\kappa)&=&J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(0)}){\mathcal G}(t)\Bigr)\nonumber \\[-3pt]&& \times\prod\limits_{n=-\infty\atop n\neq 0}^\infty\Biggl\{ \frac{1}{\pi}\int_0^\pi J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(n)}){\mathcal G}(t-n\nu)\Bigr)\; d\theta^{(n)}\Biggr\}. \nonumber\\[-3pt] \end{array} $$
(88)

Note that, since \({\mathcal G}(t-n\nu)=0\) for all sufficiently large positive n, the corresponding factors equal 1, and the product is effectively one-sided rather than bi-infinite. The probability density function for the variable S in formula (37) is given as

$$ \begin{array}{lll}\label{distrib} p_S(s,t,\theta^{(0)}) &=& \frac{1}{2\pi}\int_{-\infty}^\infty d\kappa\, e^{-i\kappa s} J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(0)}){\mathcal G}(t)\Bigr)\nonumber\\[6pt] && \times\prod\limits_{n=-\infty\atop n\neq 0}^\infty\Biggl\{ \frac{1}{\pi}\int_0^\pi J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(n)}){\mathcal G}(t-n\nu)\Bigr)\, d\theta^{(n)}\Biggr\}. \end{array} $$
(89)
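
Formula (89) can be evaluated numerically by truncating the product and performing the κ-integral by quadrature. The following sketch (again with illustrative stand-ins for f, \({\mathcal G}\), and the parameters, matching those in the sampler above) does this directly:

```python
import numpy as np
from scipy.special import j0

# Numerical sketch of Eq. (89) (not the authors' code): truncate the
# characteristic-function product (88), then invert it with a discrete
# cosine integral.  The profile f, the time course G, and all parameter
# values are illustrative stand-ins matching the sampler above.
nu, tau = 0.05, 0.05
f = lambda th: np.exp(-th**2 / 0.25)
G = lambda t: nu * (t / tau**2) * np.exp(-t / tau) if t > 0 else 0.0

theta0, t = 0.3, 0.12
kappa = np.linspace(-150.0, 150.0, 6001)
th = np.linspace(0.0, np.pi, 200, endpoint=False)  # grid for the theta average

# Phi_S(kappa), Eq. (88): fixed-angle factor times one factor per frame n;
# frames with G(t - n*nu) = 0 contribute factors of 1 and are skipped.
phi_S = j0(kappa * f(theta0) * G(t) / np.sqrt(nu))
for n in range(-60, int(np.floor(t / nu)) + 1):
    g = G(t - n * nu)
    if n == 0 or g < 1e-12:
        continue
    phi_S *= j0(np.outer(kappa, f(th)) * g / np.sqrt(nu)).mean(axis=1)

# Eq. (89): Phi_S is real and even in kappa, so exp(-i*kappa*s) -> cos(kappa*s).
s = np.linspace(-3.0, 3.0, 601)
dk = kappa[1] - kappa[0]
p_S = (np.cos(np.outer(s, kappa)) * phi_S).sum(axis=1) * dk / (2.0 * np.pi)
print("total probability:", p_S.sum() * (s[1] - s[0]))   # should be close to 1
```

The resulting density should agree, up to sampling error, with a histogram of the Monte Carlo samples of S drawn earlier.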

Some elementary properties of the probability density \(p_S(s,t,\theta^{(0)})\)

First, note that, since \(|J_0(x)| \leq 1\) for all values of x, every characteristic function \(\Phi_n(\kappa)\) given by Eq. (86) satisfies the same inequality, \(|\Phi_n(\kappa)| \leq 1\). This implies that the κ-integral in the probability density (89) converges no worse than the Fourier transform of the Bessel function alone.

For the random variable

$$\label{sigma} \Sigma = \nu^{-1/2}\sum\limits_{n=-\infty}^\infty f(\theta^{(n)}){\mathcal G}(t-n\nu)\cos\phi^{(n)}, $$
(90)

in which now all the angles \(\theta^{(n)}\) and all the phases \(\phi^{(n)}\) are random, the probability density function is

$$ \begin{array}{lll}\label{psigma} p_\Sigma (s,t)&\!=\!&\frac{1}{2\pi}\int_{-\infty}^\infty d\kappa\, e^{-i\kappa s} \nonumber\\[-3pt] &&\times\!\prod\limits_{n=-\infty}^\infty\Biggl\{ \frac{1}{\pi}\int_0^\pi\! J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(n)}){\mathcal G}(t-n\nu)\Bigr) d\theta^{(n)}\Biggr\}, \nonumber\\[-3pt] \end{array} $$
(91)

which is periodic in t with period ν. Therefore, by Eq. (38), the average \(\left\langle M(t,\theta^{(0)})\right\rangle_{\theta^{(0)}}\) of the interval-specific firing rate average \(M(t,\theta^{(0)})\) in Eq. (7b) is also periodic in t with the same period. (This can also be shown directly.)

Also, for t < 0, we have \({\mathcal G}(t)=0\), so that \(J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(0)}){\mathcal G}(t)\Bigr)=1\). If t is sufficiently negative, the neighboring factors also equal 1. Therefore, near such a t, the probability density (89) and the interval-specific firing rate average (7b) repeat themselves with period ν. The same is approximately true for large t ≫ 0.

The above two properties of Eq. (7b) stand in contrast to the analogous properties of the correlation function \({\mathcal M}(t,\theta^{(0)})\) in Eq. (3b). In particular, by Eq. (6b), the average \(\left\langle {\mathcal M}(t,\theta^{(0)})\right\rangle_{\theta^{(0)}}\) is constant in time, as are \({\mathcal M}(t<0,\theta^{(0)})\) and \({\mathcal M}(t\to\infty,\theta^{(0)})\). Thus, even though Eq. (6b) shows that the interval-specific firing rate average \(M(t,\theta^{(0)})\) and an integer multiple of the correlation function \({\mathcal M}(t,\theta^{(0)})\) differ only by an \({\mathcal O}(\nu)\) quantity, the correlation function \({\mathcal M}(t,\theta^{(0)})\) appears more suitable for capturing the average tuning dynamics of a V1 neuron. This agrees with a similar observation at the end of Section 2.

The small-ν limit of the probability density \(p_S(s,t,\theta^{(0)})\)

This limit can be derived in a manner reminiscent of the proof of the central limit theorem and the related Edgeworth expansion; see for instance Chapter 16 of Feller (1966). We begin with the observation that \(\nu^{-1/2}{\mathcal G}(t)\approx \nu^{1/2}G_{lgn}(t)\) for small ν, which follows from Eq. (25). We then use the formula

$$J_0(x)=\sum\limits_{l=0}^\infty \frac{(-1)^l \, x^{2l}}{2^{2l}(l!)^2}$$

to expand

$$ J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(0)}){\mathcal G}(t)\Bigr) =1 - \frac{\kappa^2 f^2(\theta^{(0)})\,{\mathcal G}^2(t)}{4\nu} + \frac{\kappa^4 f^4(\theta^{(0)})\,{\mathcal G}^4(t)}{64 \nu^2 } + {\mathcal O}(\nu^3)$$

and

$$\begin{array}{lll} &&\frac{1}{\pi}\!\int_0^\pi J_0\Bigl(\kappa \nu^{-1/2} f(\theta^{(n)}){\mathcal G}(t-n\nu)\Bigr)\; d\theta^{(n)}\\ &&=1 - \frac{\kappa^2 f_2\,{\mathcal G}^2(t-n\nu)}{4\nu} + \frac{\kappa^4f_4\,{\mathcal G}^4(t-n\nu)}{64\nu^2} + {\mathcal O}(\nu^3), \end{array}$$

with \(f_2\) and \(f_4\) defined in Eqs. (32) and (41), respectively. These two expansions, together with Eq. (88), imply

$$ \begin{array}{lll}\label{logphi1} \log \Phi_S(\kappa)&=& \log\Bigg( 1 - \frac{\kappa^2 f^2(\theta^{(0)})\,{\mathcal G}^2(t)}{4\nu}\nonumber\\[-6pt]&&{\kern20pt} + \frac{\kappa^4 f^4(\theta^{(0)})\,{\mathcal G}^4(t)}{64 \nu^2 } + {\mathcal O}(\nu^3) \Bigg) \nonumber \\[-6pt] &&+ \sum\limits_{n=-\infty\atop n\neq 0}^\infty \log\Bigg( 1 - \frac{\kappa^2 f_2\,{\mathcal G}^2(t-n\nu)}{4\nu} \nonumber \\[-6pt] && {\kern50pt} + \frac{\kappa^4f_4\,{\mathcal G}^4(t-n\nu)}{64\nu^2} + {\mathcal O}(\nu^3)\Bigg) \nonumber\\ &=& - \frac{\kappa^2 f^2(\theta^{(0)})\,{\mathcal G}^2(t)}{4\nu} - \frac{\kappa^4 f^4(\theta^{(0)})\,{\mathcal G}^4(t)}{64 \nu^2 }\nonumber \\[-6pt] &&- \frac{\kappa^2 f_2}{4\nu} \sum\limits_{n=-\infty\atop n\neq 0}^\infty {\mathcal G}^2(t-n\nu) + \frac{\kappa^4(f_4-2f_2^2)}{64\nu^2} \nonumber\\[-6pt] && \times\sum\limits_{n=-\infty\atop n\neq 0}^\infty {\mathcal G}^4(t\!-\!n\nu) \!+\! {\mathcal O}(\nu^3). \end{array} $$
(92)

We now express the time-dependent coefficients in Eq. (92) in terms of \(G_{lgn}(\cdot)\). First, using Eq. (25), we approximate

$$\label{scriptg} {\mathcal G}(t)=\nu G_{lgn}(t) + {\mathcal O}(\nu^2). $$
(93)

This makes the first term in the fifth line of Eq. (92) of \({\mathcal O}(\nu)\), and the second of \({\mathcal O}(\nu^2)\), which we will neglect. We then compute that the sum

$$\label{sumofsquares1} {\mathcal G}_2(t)= \sum\limits_{n=-\infty}^\infty {\mathcal G}^2(t-n\nu) $$
(94)

satisfies the equation

$$\label{gensumofsquares} {\mathcal G}_2(t)=2\sum\limits_{n=0}^\infty {\mathcal F}(t+n\nu)\Bigl[{\mathcal F}(t+n\nu)-{\mathcal F}\Bigl(t+(n+1)\nu\Bigr)\Bigr], $$
(95)

for all t satisfying 0 ≤ t ≤ ν. Here, \({\mathcal F}(t)\) is defined as in Eq. (25). The functional form of \({\mathcal F}(t)\), as it follows from Eqs. (25) and (20), lets us Taylor expand Eq. (95) in the form

$$\label{gensumofsquares1} {\mathcal G}_2(t)\!=\!-2\nu\sum\limits_{n=0}^\infty {\mathcal F}(t+n\nu)\Bigl[{\mathcal F}'(t\!+\!n\nu)\!+\!\frac{\nu}{2}{\mathcal F}''(t\!+\!n\nu) +\!\frac{\nu^2}{6}{\mathcal F}'''(t\!+\!n\nu)\! +\!\cdots\! \Bigr]. $$
(96)

(This is because all the series involving the derivatives of \({\mathcal F}\) converge and the formal \({\mathcal O}(\nu^k)\) terms are indeed of that order.) We assume ζ ≥ 1 in Eq. (20), or, more generally, at worst \(G_{lgn}(t)={\mathcal O}(t) \) and so \({\mathcal F}(t)={\mathcal O}(t^2) \) for small t. We use integration by parts and push the lower integration limit from t to 0 to compute from Eq. (96) that

$$\label{squares2} {\mathcal G}_2(t)=\nu\int_0^\infty G^2_{lgn}(\xi)\, d\xi +{\mathcal O}(\nu^3).$$
(97)

We conclude that this formula is true for all t since \({\mathcal G}_2(t)\) is periodic in t.
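
Relation (97) is straightforward to confirm numerically. The sketch below uses an illustrative stand-in alpha function for \(G_{lgn}\) and assumes \({\mathcal G}(t)={\mathcal F}(t)-{\mathcal F}(t-\nu)\) with \({\mathcal F}(t)=\int_0^t G_{lgn}\), consistent with Eqs. (93) and (95); the two printed columns should agree ever more closely as ν decreases:

```python
import numpy as np

# Quick numerical check of Eq. (97) (illustrative, not from the paper):
# the frame sum G_2(t) = sum_n G(t - n*nu)^2 should equal
# nu * int_0^infty G_lgn(s)^2 ds up to higher-order corrections in nu.
# G_lgn is a stand-in alpha function; F is its running integral, and
# G(t) = F(t) - F(t - nu), consistent with G(t) ~ nu * G_lgn(t).
tau = 0.05
G_lgn = lambda t: (t / tau**2) * np.exp(-t / tau) * (t > 0)

ds = 1e-5
s = np.arange(0.0, 2.0, ds)
exact = np.sum(G_lgn(s)**2) * ds                 # = 1/(4*tau) = 5.0 here

for nu in (0.02, 0.01, 0.005):
    n_sub = 100                                  # grid points per frame
    dt = nu / n_sub
    grid = np.arange(0.0, 2.0, dt)
    F = np.cumsum(G_lgn(grid)) * dt              # F(t) = int_0^t G_lgn
    G = F - np.concatenate([np.zeros(n_sub), F[:-n_sub]])
    i0 = n_sub // 2                              # evaluate at t = nu/2
    G2 = np.sum(G[i0::n_sub]**2)                 # the frame sum G_2(t)
    print(f"nu={nu}: G_2(t)/nu = {G2 / nu:.4f}   int G_lgn^2 = {exact:.4f}")
```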

Using Eqs. (93) and (97), we thus derive the first term in the sixth line of Eq. (92) to be

$$ \begin{array}{lll} -\frac{\kappa^2f_2}{4\nu}\left[{\mathcal G}_2(t)-{\mathcal G}^2(t)\right] &=&-\frac{\kappa^2 f_2{\mathcal G}_2(t)}{4\nu}+\nu\frac{ \kappa^2 f_2 G^2_{lgn}(t)}{4}+{\mathcal O}(\nu^2)\\[12pt] &=&-\frac{\kappa^2 f_2}{4}\int_0^\infty G^2_{lgn}(\xi)\, d\xi +\nu \frac{\kappa^2 f_2 G^2_{lgn}(t)}{4}+{\mathcal O}(\nu^{2})\\[12pt] &=& -\frac{\kappa^2}{2}\left(V_0-\nu \frac{ f_2 G^2_{lgn}(t)}{2}\right)+{\mathcal O}(\nu^{2}), \end{array} $$

where \(V_0\) is defined as in Eq. (32). Thus, the sum of the fifth line and the first term in the sixth line of Eq. (92) is

$$ \label{anothersum} -\frac{\kappa^2}{2}\left(V_0+\nu \frac{\left[f^2(\theta^{(0)})- f_2\right] G^2_{lgn}(t)}{2}\right)+{\mathcal O}(\nu^{2})= -\frac{\kappa^2 V(t,\theta^{(0)})}{2}+ {\mathcal O}(\nu^{2}), $$
(98)

where \(V(t,\theta^{(0)})\) is the expression in Eq. (40). Finally,

$$ \begin{array}{lll}\label{yetanothersum} \sum\limits_{n=-\infty\atop n\neq 0}^\infty {\mathcal G}^4(t-n\nu) &=&\sum\limits_{n=-\infty}^\infty {\mathcal G}^4(t-n\nu)-{\mathcal G}^4(t)\nonumber\\[6pt] &=& \nu^3\int_0^\infty G^4_{lgn}(\xi)\,d\xi + {\mathcal O}(\nu^{4})-\nu^4 G^4_{lgn}(t) + {\mathcal O}(\nu^{5}) \nonumber\\[6pt] &=& \nu^3\int_0^\infty G^4_{lgn}(\xi)\,d\xi + {\mathcal O}(\nu^{4}). \end{array}$$
(99)

Therefore, using Eqs. (92), (98), and (99), the characteristic function \(\Phi_S(\kappa)\) can be expressed as

$$ \begin{array}{lll}\label{charfunfin} \Phi_S(\kappa)&=&\exp\left[\!-\frac{\kappa^2 V(t,\theta^{(0)})}{2}+\kappa^4 \nu V_1 + {\mathcal O}(\nu^{2})\right]\nonumber\\[12pt]&=&\exp\left[\!-\frac{\kappa^2 V(t,\theta^{(0)})}{2}\right]\left[\!1\!+\! \kappa^4 \nu V_1 \!+\! {\mathcal O}(\nu^{2})\right]\!, \end{array}$$
(100)

where \(V_1\) is defined as in Eq. (41), and the second line is valid for all fixed finite κ.

Using the formula

$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-i \alpha x - \alpha^2/2}\,d\alpha = e^{-x^2/2},$$

the differentiation rule for Fourier transforms, Eqs. (100), (89), and the fact that large values of κ contribute negligible amounts to the integration in Eq. (89), we finally deduce Eq. (39).

5 Orientation tuning dynamics for a purely feed-forward model with no leakage

In this appendix, we concentrate on purely feed-forward models with no leakage, i.e., \(g_L = 0\) in Eq. (15), and on finite presentation duration ν. We show that, in a fairly large class of such models, averaging over the spatial phase wipes out the orientation tuning of the corresponding V1 neuron.

First, in the limit as ν→0, we observe that formula (38) yields a completely untuned interval-specific firing rate average \(M(t,\theta^{(0)})\) for a purely feed-forward model (17) with no leakage or global inhibition when the LGN drive is modeled by the simple Gabor function (19). To show this fact, we first recall that in Eq. (38) the part of the integrand in the square brackets is non-negative. Furthermore, as discussed above, in order for the LGN input not to become negative when there is no leakage (\(g_L = 0\)) or inhibition, the magnitude of the constant C must be chosen sufficiently large that the values of the variable s for which the probability density function \(p_S(s,t,\theta^{(0)})\) is non-zero are all contained in the interval C/D ≤ s < ∞. Therefore, we can extend the integral in Eq. (38) to \(\int_{-\infty}^\infty\). Replacing \(\phi^{(n)}\) by \(\pi - \phi^{(n)}\) in the random variable S in Eq. (37) shows that this variable is distributed symmetrically about zero, and thus the function \(p_S(s,t,\theta^{(0)})\) is even in s. The implication from Eq. (38) is then that, in the case of a purely feed-forward model without leakage, \(M(t,\theta^{(0)}) = -C\), independent of the orientation angle \(\theta^{(0)}\).

Since the result of the previous paragraph could be a consequence of the very specific and simple LGN drive model and of the ν→0 limit, we now briefly show that it is instead a much more general phenomenon. In particular, here we assume a much more realistic LGN drive in the feed-forward model (17), namely, a feed-forward convergence of inputs from individual LGN neurons, as used in our integrate-and-fire simulations described above as well as in the large-scale V1 network computations of McLaughlin et al. (2000) and Tao (2004). Such an LGN drive is given by the formula

$$ \begin{array}{lll}\label{fulllgnker} g_{lgn}(t)&=&\sum\limits_{q=1}^Q g^q_{lgn}(t)\nonumber\\ &=&\sum\limits_{q=1}^Q\Bigg\{R_B\pm\int_{{\mathbb R}^2}dX\,dY \int_{-\infty}^t ds\, K_{lgn} (|{\bf X}-{\bf X}_q|)\,G_{lgn}(t-s)\, {\mathcal I}({\bf X},s)\Bigg\}^+, \end{array}$$
(101)

where \({\bf X} = (X,Y)\), \(R_B\) is the LGN neurons’ background firing rate, \(G_{lgn}(t)\) is the LGN time-response kernel [for instance, as given by Eq. (20)], \({\mathcal I}({\bf X},s)\) is the stimulus (1), \(K_{lgn}\) is the isotropic spatial kernel for the LGN neurons, \({\bf X}_q\) is the receptive-field center of the q-th LGN neuron that feeds into the V1 neuron in question, and \(\{a\}^+ = \max\{a,0\}\). The sign ± corresponds to neurons with “on” and “off” receptive fields, whose firing is triggered by the presence or absence, respectively, of a bright stimulus in their receptive-field center. Each term in the sum on the right-hand side of Eq. (101) represents the output of a single LGN neuron. Note that the number Q of LGN neurons feeding into a given V1 neuron generally varies from one V1 neuron to the next (Tanaka 1985; Reid and Alonso 1995; Alonso et al. 2001; Ringach 2002; Tao et al. 2004). Moreover, the argument to be presented here does not change if we assume that the shapes and strengths of the LGN kernels, or the background firing rates, vary from one LGN neuron to the next, i.e., we can assume \(R_B\), \(K_{lgn}\), and \(G_{lgn}\) are q-dependent.

Since \(g_{lgn}(t) \geq 0\) and \(g_L = 0\) in the case at hand, we can omit the thresholding in Eq. (17) and write

$$ m(t)\!=\!\left(\frac{V_E}{V_T}-1\right)g_{lgn}(t)\!=\!\left(\frac{V_E}{V_T}-1\right)\sum\limits_{q=1}^Q g^q_{lgn}(t), $$
(102)

where \(g^q_{lgn}(t)\) are the individual bracketed terms in the sum (101). Using Eq. (7b), we compute the interval-specific firing rate average \(M(t,\theta_i)\) to be

$$\label{firingcomp} M(t,\theta_i)= \left(\frac{V_E}{V_T}-1\right)\sum\limits_{q=1}^Q \left\langle g_{lgn}^q\right\rangle_{\vec\theta,\vec\phi,\theta^{(0)}=\theta_i} (t,\theta_i), $$
(103)

where the averaging is as in Eq. (7b).

Let us now consider any individual term on the right-hand side of Eq. (103). First, for an arbitrary radially-symmetric kernel \(K_{lgn}\), we compute

$$ \begin{array}{lll}\label{glgnq} g_{lgn}^q(t) &=& \Biggl\{R_B \pm I_0\varepsilon\nu^{-\frac{1}{2}}\sum\limits_{n=-\infty}^\infty {\mathcal G}(t-n\nu) \int_{{\mathbb R}^2}dX\,dY\,K_{lgn}(|{\bf X}-{\bf X}_q|)\sin\left[{\bf k}(\theta^{(n)})\cdot{\bf X}-\phi^{(n)}\right]\Biggr\}^+\nonumber\\[6pt] &=& \Biggl\{R_B \pm I_0\varepsilon\nu^{-\frac{1}{2}}\sum\limits_{n=-\infty}^\infty {\mathcal G}(t-n\nu)\sin\left[{\bf k}(\theta^{(n)})\cdot{\bf X}_q-\phi^{(n)}\right] \int_{{\mathbb R}^2}dU\,dV\,K_{lgn}(|{\bf Y}|)\cos\left[{\bf k}(\theta^{(n)})\cdot{\bf Y}\right]\Biggr\}^+\nonumber\\[6pt] &=& \Biggl\{R_B \pm I_0\varepsilon\nu^{-\frac{1}{2}}\kappa_{lgn}(k) \sum\limits_{n=-\infty}^\infty {\mathcal G}(t-n\nu) \sin\left[{\bf k}(\theta^{(n)})\cdot{\bf X}_q-\phi^{(n)}\right]\Biggr\}^+. \end{array} $$
(104)

Here, \((U,V) = {\bf Y} = {\bf X} - {\bf X}_q\), \({\mathcal G}(t)\) is as in Eq. (25), and

$$ \kappa_{lgn}(k)= \int_{{\mathbb R}^2}d\xi\,d\eta\,K_{lgn}\left(\sqrt{\xi^2+\eta^2}\right)\cos k\xi \label{lastint} $$
(105a)
$$ \;\;\; =\int_{{\mathbb R}^2}dU\,dV\,K_{lgn}(|{\bf Y}|)\cos\left[{\bf k}(\theta^{(n)})\cdot{\bf Y}\right] $$
(105b)

is seen to be independent of the orientation angle \(\theta^{(n)}\) by rotating the coordinate system in Eq. (105a) so that the ξ-axis is parallel to the vector \({\bf k}(\theta^{(n)})\).

To find \(\left\langle g_{lgn}^q\right\rangle_{\vec\theta,\vec\phi,\theta^{(0)}=\theta_i}\), we must average the last line of the expression (104) over all phases \(\phi^{(n)}\) and over all orientations \(\theta^{(n)}\) except \(\theta^{(0)}\). However, by the Lemma in Appendix 1, the distribution of the variable \({\bf k}(\theta^{(0)})\cdot{\bf X}_q-\phi^{(0)}\), shifted into the interval [0,2π), is the same as the distribution of \(\phi^{(0)}\), i.e., uniform. Thus, averaging over the phase \(\phi^{(0)}\) alone erases the dependence of \(\left\langle g_{lgn}^q\right\rangle_{\vec\theta,\vec\phi,\theta^{(0)}=\theta_i}\) on the value of the orientation \(\theta^{(0)}\). By Eq. (103), the interval-specific firing rate average \(M(t,\theta^{(0)})\) is therefore also untuned in this case, and, by Eq. (6b), so is the correlation function \({\mathcal M}(t,\theta^{(0)})\).
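
The phase-averaging argument is easy to visualize numerically: for a single rectified unit, the average of \(\{R_B \pm A\sin[{\bf k}(\theta)\cdot{\bf X}_q-\phi]\}^+\) over a uniform phase φ is the same for every orientation θ. A minimal Monte Carlo sketch, with arbitrary stand-in values for \(R_B\), the amplitude, and \({\bf X}_q\):

```python
import numpy as np

# Monte Carlo illustration of the phase-averaging argument: the uniform
# distribution of k(theta).X_q - phi (mod 2*pi) makes the phase-averaged
# rectified drive independent of theta.  All values are arbitrary stand-ins.
rng = np.random.default_rng(1)
R_B, amp, k = 1.0, 2.5, 3.0
X_q = np.array([0.37, -0.81])                     # receptive-field center

phi = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)
for theta in (0.0, np.pi / 6, np.pi / 3, np.pi / 2):
    k_vec = k * np.array([np.cos(theta), np.sin(theta)])
    drive = np.maximum(R_B + amp * np.sin(k_vec @ X_q - phi), 0.0)
    print(f"theta = {theta:.3f}: phase-averaged drive = {drive.mean():.4f}")
```

All four printed averages agree up to Monte Carlo error of order \(10^{-3}\).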

6 Integrate-and-fire ring model

For the simulations presented in Fig. 2, we use a network model of n = 1024 conductance-based, integrate-and-fire, point neurons, laid out on a ring, with 32 different preferred orientations \(\Theta_i\) and 8 preferred spatial phases \(\Phi_j\). These are conferred by convergent LGN inputs, which are modeled by Poisson spike trains, each modulated by a firing-rate function of the form (101). There are three excitatory neurons and one inhibitory neuron for each combination of preferred orientation and spatial phase. The lateral, cortico-cortical connections are all-to-all, with the strengths of the connections depending on the difference \(\Theta_i - \Theta_j\) between the preferred orientations and independent of the preferred spatial phases of the neurons. The details of the model are as follows.

The membrane potential of the i-th model neuron follows the time evolution

$$ \begin{array}{lll}\label{integrateandf} \frac{d}{dt} v^{(i)}_P &=& -g_L \left(v^{(i)}_P - V_{r}\right) -g^{(i)}_{P E}(t)\left(v^{(i)}_P - V_E\right) \nonumber\\&& -g^{(i)}_{PI}(t)\left(v^{(i)}_P - V_I\right), \end{array} $$
(106)

where, as in the rest of this paper, P = E,I indicates whether the neuron is excitatory or inhibitory. The voltages \(V_E > V_r > V_I\) are the excitatory, rest, and inhibitory reversal potentials and, as in the main text, their values are set to \(V_E = 14/3\), \(V_r = 0\), and \(V_I = -2/3\), respectively. The firing threshold is set to \(V_T = 1\). The time scale of the leakage conductance \(g_L\) is \(g_L^{-1}=20\) ms.

The time-dependent conductances \(g^{(i)}_{PP'}(t)\) in the model (106) arise from the convergent LGN input and from network activity and have the form

$$\label{econduct} g^{(i)}_{P E}(t)\! =\! S_{P E} \sum\limits_{j=1}^n K_{PE}\left(\Theta_i\!-\!\Theta_j\right) \sum\limits_k G_E\left(t\! -\! t^j_k\right)\! +\! g_{lgn}^{(i)}(t), $$
(107a)
$$\label{iconduct} g^{(i)}_{P I} (t)\! =\! S_{P I} \sum\limits_{j=1}^n K_{PI}\left(\Theta_i\!-\!\Theta_j\right) \sum\limits_k G_I\left(t\! -\! T^j_k\right) + g_{inh}(t). $$
(107b)

Here, the coupling coefficients \(S_{PP'}\) have the same meaning as in Eq. (16). In Fig. 2, they are set to \(S_{EE} = 1.0\), \(S_{EI} = 2.0\), \(S_{IE} = 6.0\), and \(S_{II} = 2.0\). The spatial coupling kernels \(K_{PE}(\theta)\) and \(K_{PI}(\theta)\) are taken to be Gaussians of the form (114), with both \(\sigma_{P'} = \pi/8\). The temporal kernels \(G_P\) have the form

$$ G_P(t)\!=\!\frac{1}{\tau_{Pd}\!-\!\tau_{Pr}}\left[\!\exp\left(\!-\frac{t}{\tau_{Pd}}\right)\!-\!\exp\left(\!-\frac{t}{\tau_{Pr}}\right)\!\right] \:\, \mbox{for} \:\, t\!\geq\! 0, $$

and vanish for t < 0. Here, \(\tau_{Ed} = 5\) ms, \(\tau_{Id} = 10\) ms, and both \(\tau_{Pr} = 1\) ms. The symbol \(t^j_k\) (\(T^j_k\)) indicates the time of the k-th spike of the j-th excitatory (inhibitory) neuron.

The LGN driving term in Eq. (107a), \(g_{lgn}(t)\), is a Poisson spike train with spike strengths 0.01 and with its spiking rate modulated as the sum (101) of convergent inputs from a distribution of 30 identical “on” and “off” LGN cells, described in Appendix 5. The forms of the spatial and temporal kernels, \(K_{lgn}\) and \(G_{lgn}\), in Eq. (101) are the same as those used in McLaughlin et al. (2000) and Wielaard et al. (2001). In particular,

$$K_{lgn}(x)=\frac{a}{\pi\sigma_a^2}\exp\left(-\frac{x^2}{\sigma_a^2}\right)-\frac{b}{\pi\sigma_b^2}\exp\left(-\frac{x^2}{\sigma_b^2}\right),$$

with \(\sigma_a= 0.066^\circ\), \(\sigma_b= 0.093^\circ\), a = 1, and b = 0.74. The temporal kernel \(G_{lgn}(t)\) is given by a constant multiple of formula (20) with ζ = 5, \(\tau_{lgn} = 3\) ms, α = 1, and β = 5/3. The overall constant in \(G_{lgn}(t)\) is determined so that the peak firing rate of each LGN cell driven with a full-contrast drifting-grating stimulus equals 100 Hz. The term \(g_{inh}(t)\) in Eq. (107b) models global random inhibitory inputs, which are taken to be Poisson trains with rate 500 Hz and spike strengths 0.1. The input stimulus used in the simulation is the sequence of gratings (1) with randomly selected orientation angles and spatial phases. The integrate-and-fire system governed by Eqs. (106) and (107) is integrated numerically by the method of Shelley and Tao (2001).
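
For orientation, the following single-neuron sketch integrates the dynamics (106) with a simple forward-Euler scheme and a Poisson excitatory conductance. It is not the Shelley–Tao integrator used for the actual simulations, inhibition is omitted, and the input rate and spike strength are illustrative stand-ins:

```python
import numpy as np

# Single-neuron sketch of Eq. (106): forward-Euler with threshold-and-reset,
# driven by a Poisson excitatory conductance.  This is NOT the Shelley-Tao
# (2001) scheme used for the actual simulations; the input rate and spike
# strength are illustrative stand-ins, and inhibition is omitted.
rng = np.random.default_rng(2)
V_E, V_r, V_T = 14.0 / 3.0, 0.0, 1.0
g_L = 1.0 / 0.020                    # leakage rate, 1/(20 ms)
tau_E = 0.005                        # excitatory conductance decay (s)
dt, T = 1e-4, 1.0                    # time step and duration (s)

v, g_E, spike_times = 0.0, 0.0, []
for step in range(int(T / dt)):
    g_E += 0.01 * rng.poisson(1000.0 * dt) / tau_E   # Poisson input spikes
    g_E -= dt * g_E / tau_E                          # exponential decay
    v += dt * (-g_L * (v - V_r) - g_E * (v - V_E))   # Eq. (106) drift
    if v >= V_T:                                     # threshold and reset
        spike_times.append(step * dt)
        v = V_r

print(f"firing rate: {len(spike_times) / T:.1f} Hz")
```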

In our simulations with this spiking integrate-and-fire model, the spike histograms \({\mathcal M}(\tau,\theta_i)\) of delay times τ and orientations \(\theta_i\), and the corresponding probability density functions \({\mathcal P}(\tau,\theta_i)\), are constructed in the same way as for the experimental data, as described at the beginning of Section 2.

7 Solution of the model of Mexican-hat tuning curves

In this appendix, we present the details of the calculations outlined in Section 5. As described there, we first drastically simplify the thresholded linear model (14) by dropping the thresholding, setting the preferred phase of the neuron in the LGN drive (19) to Φ = 0, fixing all the phases in the stimulus at \(\phi^{(n)} \equiv 0\), and dropping the phase averaging. By considering only the ring architecture, that is, by replacing the position x in the cortex by the preferred angle Θ, the spatial convolution in Eq. (14) becomes a convolution in the angle variable only,

$$K*F(\theta)=\frac{1}{\pi}\int_{-\pi/2}^{\pi/2} K(\theta-\theta')F(\theta')\,d\theta' .$$

Thus, the model (14) reduces to the coupled linear ring model

$$ \begin{array}{rll}\label{cartoon1} m_P(t,\Theta)&=&f_{lgn}(t,\Theta)+\sum\limits_{P'} C_{PP'}K_{PP'}*G_{P'}*m_{P'} (t,\Theta),\\ P,P'&=&E,I.\end{array} $$
(108)
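
In a discretization on a uniform grid of preferred angles, the angular convolution above becomes an ordinary circular matrix product. A minimal sketch (grid resolution and kernel width are illustrative choices):

```python
import numpy as np

# Discrete version of the angular convolution K*F defined above; the grid
# resolution and the kernel width are illustrative choices.
n_th = 128
th = np.linspace(-np.pi / 2, np.pi / 2, n_th, endpoint=False)
dth = np.pi / n_th

def ring_convolve(K, F):
    """(K*F)(theta) = (1/pi) int K(theta - theta') F(theta') dtheta'."""
    # Pairwise angle differences, wrapped back into [-pi/2, pi/2).
    diff = (th[:, None] - th[None, :] + np.pi / 2) % np.pi - np.pi / 2
    return (K(diff) @ F) * dth / np.pi

# A pure harmonic is an eigenfunction of the ring convolution: the output
# is a multiple of the input.
sigma = np.pi / 8
K = lambda x: np.exp(-x**2 / sigma**2) / (np.sqrt(np.pi) * sigma)  # cf. Eq. (114)
out = ring_convolve(K, np.cos(2 * th))
print("eigenvalue:", out[0] / np.cos(2 * th[0]))
```

This eigenfunction property is precisely what diagonalizes the ring model, mode by mode, into the equations (116) below.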

The simplified LGN drive (22) becomes

$$\label{glgncartoon} f_{lgn}(t,\Theta)= A +B\sum\limits_{n=-\infty}^\infty f(\theta^{(n)}-\Theta){\mathcal G}(t-n\nu). $$
(109)

Here, \(A = -g_L + (V_E/V_T-1)R_B\) and \(B = (V_E/V_T-1)\,{\mathcal I}_0\varepsilon\nu^{-1/2}e^{-K^2\mu^2/4}/2\), with K and μ described after formula (19). (Since we are about to scale A and B out of the model, we do not specify the values of any constants involved in them.) The orientation-angle profile f(θ) ≡ f(θ,0) is given by Eq. (23), and the time course \({\mathcal G}(t)\) by Eq. (25). We choose ζ = 0 in Eqs. (20) and (25) for the simplest form of \({\mathcal G}(t)\). With no loss of generality, we can take the preferred angle Θ of the neuron under study to be Θ = 0. Moreover, in this idealized model, f(θ) can be simplified even further without affecting the qualitative nature of the results; we choose it to be the periodic version of the Gaussian of the form

$$\label{spatprof1} f(\theta)=\frac{1}{\sqrt{\pi}\sigma_{lgn}}\exp\left(-\frac{\theta^2}{\sigma_{lgn}^2}\right). $$
(110)

After ensemble averaging Eqs. (108) over the orientation angles \(\theta^{(n)}\), except \(\theta^{(0)}\), using formula (7b), we find the equations for the excitatory and inhibitory interval-specific firing rate averages \(M_P(t,\theta^{(0)})\), P = E,I [see Eqs. (7b) and (5b)], to be

$$ \begin{array}{lll}\label{cartoon2} M_P(t,\theta^{(0)})&=&A+B[f(\theta^{(0)})-\langle f\rangle_\theta]{\mathcal G}(t)\nonumber\\&&+\sum\limits_{P'} C_{PP'}K_{PP'}*G_{P'}*M_{P'}(t,\theta^{(0)}).\end{array} $$
(111)

Normalizing so that \(K_{PP'}*G_{P'}*1 = 1\) and dropping the superscript 0 from \(\theta^{(0)}\), we linearly shift and rescale \(M_P(t,\theta)=\alpha_P + \beta_P \tilde M_P(t,\theta)\). Choosing

$$ \begin{array}{lll}\label{alphabeta} \alpha_P&=&\frac{A(1-C_{PP}+C_{PQ})}{(1-C_{EE})(1-C_{II})-C_{EI}C_{IE}}, \nonumber\\ \beta_P&=&B, \quad P,Q=E,I, \quad P\neq Q, \end{array} $$
(112)

and dropping the tilde in \(\tilde M_P(t,\theta)\), we derive the equations for the (rescaled) interval-specific firing rate averages \(M_P(t,\theta)\) to be

$$ \begin{array}{rll}\label{linear1} M_P(t,\theta)&=&[f(\theta)-\langle f\rangle_\theta]{\mathcal G}(t)\nonumber\\&&+\sum\limits_{P'} C_{PP'}K_{PP'}*G_{P'}* M_{P'}(t,\theta),\nonumber\\ P,P'&=&E,I, \end{array} $$
(113)

which is exactly Eq. (45).

For this model, we choose the angular cortical kernel to be a periodic version of the Gaussian of the form

$$ \label{spatial1} K_{PP'}(\theta)=\frac{1}{\sqrt\pi \sigma_{P'}} \exp\left(-\frac{\theta^2}{\sigma_{P'}^2}\right), $$
(114)

where \(\sigma_{P'}\), \(P' = E,I\), denote the angular extents of the excitatory and inhibitory cortical connections, respectively. For the temporal cortical kernel, we choose the function

$$\label{corker1} G_{P}(t)=\left\{\begin{array}{ll}\displaystyle 0, &\quad t\leq 0,\\[6pt] \displaystyle\frac{1}{\tau_{P}}\exp\left(-\frac{t}{\tau_P}\right), &\quad t>0.\end{array}\right. $$
(115)

Equations (113) form a pair of linear convolution equations, which we solve by a combination of the Laplace transform in t and Fourier series in θ. In particular, for the n-th Fourier coefficient of the Laplace-transformed (rescaled) interval-specific firing rate averages, \(\hat M_{P,n}(\lambda)\), we find a simple set of two equations

$$ \begin{array}{rll} \label{laplaced} \hat M_{P,n}(\lambda)&=&f_n\hat{\mathcal G}(\lambda)+\sum\limits_{P'} C_{PP'} K_{PP',n}\,\hat G_{P'}(\lambda)\,\hat M_{P',n}(\lambda),\nonumber\\ P,P'&=&E,I, \end{array} $$
(116)

where \(K_{PP',n}\) and \(f_n\) are the n-th Fourier coefficients of the spatial cortical kernel \(K_{PP'}(\theta)\) and of the spatial profile \(f(\theta)-\langle f\rangle_\theta\), and \(\hat G_{P'}(\lambda)\) and \(\hat{\mathcal G}(\lambda)\) are the Laplace transforms of the temporal cortical kernel \(G_{P'}(t)\) and of the LGN time course \({\mathcal G}(t)\), respectively (note that \(f_0 = 0\)).
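
Concretely, for each mode n and each value of λ, Eq. (116) is a 2×2 linear solve. The sketch below makes this structure explicit (illustrative parameter values, not the paper’s; indices 0 and 1 stand for P = E, I):

```python
import numpy as np

# Sketch of the 2x2 linear solve behind Eq. (116) (illustrative parameter
# values, not the paper's).  Indices 0 and 1 stand for P = E, I; for each
# mode n and Laplace variable lam we solve (I - A(lam)) M_hat = f_n * Gcal_hat.
tau = np.array([0.005, 0.010])                  # tau_E, tau_I (s)
C = np.array([[1.0, -2.0], [6.0, -2.0]])        # [[C_EE, C_EI], [C_IE, C_II]]
sig = np.array([np.pi / 8, np.pi / 8])          # sigma_E, sigma_I
sigma_lgn, tau_lgn = 0.8, 0.003
alpha, beta, nu = 1.0, 5.0 / 3.0, 0.01

def M_hat(n, lam):
    K_n = np.exp(-n**2 * sig**2 / 4) / np.pi    # Eq. (117); column index P'
    f_n = np.exp(-n**2 * sigma_lgn**2 / 4)      # Eq. (120)
    G_hat = 1.0 / (lam + 1.0 / tau)             # Eq. (118), per column P'
    Gcal_hat = tau_lgn * (1 - np.exp(-nu * lam)) * (
        1 / (lam + beta / tau_lgn) - 1 / (lam + alpha / tau_lgn))  # Eq. (119)
    A = C * K_n[None, :] * G_hat[None, :]       # A[P, P'] = C_PP' K_PP',n G_P'
    return np.linalg.solve(np.eye(2) - A, np.full(2, f_n * Gcal_hat))

print(M_hat(1, 0.5 + 2.0j))                     # [M_hat_{E,1}, M_hat_{I,1}]
```

Solving the same 2×2 system in closed form yields Eq. (121) below.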

For the spatial kernel \(K_{PP'}(\theta)\), the periodic version of the Gaussian of the form (114), we find that its n-th Fourier coefficient is given by

$$ \label{spatialn} K_{PP',n}=\frac{1}{\pi} \exp\left(-\frac{n^2 \sigma_{P'}^2}{4}\right). $$
(117)

For the temporal cortical kernel \(G_P(t)\) in Eq. (115) we find

$$\label{lgnkerlap} \hat G_{P}(\lambda)=\frac{1}{\lambda+1/\tau_P}. $$
(118)

As mentioned in Section 5, the temporal LGN kernel is (20) with ζ = 0. This implies

$$\label{hatg} \hat {\mathcal G}(\lambda)=\tau_{lgn}(1-e^{-\nu\lambda})\left(\frac{1}{\lambda+\beta/\tau_{lgn}}-\frac{1}{\lambda+\alpha/\tau_{lgn}}\right). $$
(119)

Finally, for the spatial profile f(θ) in Eq. (110), we compute

$$\label{fn} f_n=\exp\left(-\frac{n^2\sigma_{lgn}^2}{4}\right), \qquad n\neq 0. $$
(120)

The solution of Eq. (116) is

$$\label{gensol} \hat M_{P,n}(\lambda)=\frac{(\lambda+1/\tau_P)\left[\lambda+\left(1+C_{PQ}K_{PQ,n}-C_{QQ}K_{QQ,n}\right)/\tau_Q\right]}{(\lambda-\lambda_{1,n})(\lambda-\lambda_{2,n})}\,f_n\hat{\mathcal G}(\lambda), \qquad Q\neq P, $$
(121)

where \(\lambda_{1,n}\) and \(\lambda_{2,n}\) are given by

$$ \lambda_{1,n}, \, \lambda_{2,n}=\frac{\lambda_{E,n}+\lambda_{I,n}}{2} \pm\sqrt{\frac{\left(\lambda_{E,n}-\lambda_{I,n}\right)^2}{4} +\frac{C_{IE}K_{IE,n}C_{EI}K_{EI,n}}{\tau_E\tau_I}} $$
(122)

with

$$\label{vertex} \lambda_{P,n}=\frac{C_{PP}K_{PP,n}-1}{\tau_P}, \qquad P=E,I. $$
(123)

Note that \(C_{EE}, C_{IE} \geq 0\) and \(C_{II}, C_{EI} \leq 0\), so that \(C_{IE}K_{IE,n}C_{EI}K_{EI,n}/\tau_E\tau_I \leq 0\). Note also that \(\hat M_{P,0}(\lambda)=0\) because \(f_0 = 0\).

The roots \(\lambda_{j,n}\), j = 1,2, are either real and equidistant from their average \(\left(\lambda_{E,n}+\lambda_{I,n}\right)/2\), or complex with real part \(\left(\lambda_{E,n}+\lambda_{I,n}\right)/2\). In the former case, the farthest \(\lambda_{1,n}\) and \(\lambda_{2,n}\) can lie from \(\left(\lambda_{E,n}+\lambda_{I,n}\right)/2\) is \(\left|\lambda_{E,n}-\lambda_{I,n}\right|/2\). In the latter case, there is an additional frequency in the system. If one of the roots \(\lambda_{j,n}\), j = 1,2, has a positive real part, there will be an instability in the solutions \(M_P(t,\theta)\). This happens, for both real and complex \(\lambda_{j,n}\), if \(\left(\lambda_{E,n}+\lambda_{I,n}\right)/2>0\) and, additionally for real roots, if \(\left(\lambda_{E,n}+\lambda_{I,n}\right)/2< 0\) but \(\lambda_{E,n}\lambda_{I,n} < C_{IE}K_{IE,n}C_{EI}K_{EI,n}/\tau_E\tau_I\). This exhausts the possible bifurcation structure of the solutions due to the values of the cortical coupling constants, spatial kernels, and time scales.
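
The root classification above can be scanned mode by mode. A small sketch (with illustrative coupling values satisfying the stated sign conventions):

```python
import cmath, math

# Sketch of the stability check described above (illustrative coupling
# values, not the paper's): compute lambda_{1,n} and lambda_{2,n} from
# Eqs. (122)-(123) and report any Fourier mode whose root has a positive
# real part, which would signal an instability of M_P(t, theta).
tau_E, tau_I = 0.005, 0.010
C_EE, C_EI, C_IE, C_II = 1.0, -2.0, 6.0, -2.0
sig_E = sig_I = math.pi / 8

for n in range(1, 6):
    K = lambda sig: math.exp(-n**2 * sig**2 / 4) / math.pi   # Eq. (117)
    lam_E = (C_EE * K(sig_E) - 1) / tau_E                    # Eq. (123)
    lam_I = (C_II * K(sig_I) - 1) / tau_I
    disc = (lam_E - lam_I)**2 / 4 \
         + C_IE * K(sig_E) * C_EI * K(sig_I) / (tau_E * tau_I)
    root = cmath.sqrt(disc)                                  # Eq. (122)
    lam1 = (lam_E + lam_I) / 2 + root
    lam2 = (lam_E + lam_I) / 2 - root
    stable = max(lam1.real, lam2.real) < 0
    print(f"n={n}: lam1={lam1:.1f}, lam2={lam2:.1f}, stable={stable}")
```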

The n-th Fourier coefficient, \(M_{P,n}(t)\), of the tuning function \(M_P(t,\theta)\) can now be evaluated by using residue calculus. In order to do so, we define the function

$$ \begin{array}{lll} \hat \Gamma_{\!P,n}(\lambda)&=&f_n\tau_{lgn}\,\frac{(\lambda+1/\tau_P)\left[\lambda+\left(1+C_{PQ}K_{PQ,n}-C_{QQ}K_{QQ,n}\right)/\tau_Q\right]}{(\lambda-\lambda_{1,n})(\lambda-\lambda_{2,n})}\nonumber\\[6pt] &&\times \left(\frac{1}{\lambda+\beta/\tau_{lgn}}-\frac{1}{\lambda+\alpha/\tau_{lgn}}\right) \end{array} $$
(124)

from the appropriate parts of formulas (121) and (119). Then,

$$\label{mpndiff} M_{P,n}(t)=\Gamma_{P,n}(t)-\Gamma_{P,n}(t-\nu), $$
(125)

where \(\Gamma_{P,n}(t)\) is the inverse Laplace transform of the function \(\hat \Gamma_{P,n}(\lambda)\). This follows from the well-known shift property of Laplace transforms, \(\widehat{F(t-\nu)}(\lambda)=e^{-\lambda\nu}\hat F(\lambda)\) for causal F. Moreover,

$$ \Gamma_{P,n}(t)= \left\{ \begin{array}{ll} 0, & t<0, \\[6pt] \mbox{sum of the residues of } \hat \Gamma_{P,n}(\lambda)e^{\lambda t},\quad & t\geq 0. \end{array} \right. $$
(126)

For t ≥ 0, we compute

$$ \begin{array}{lll}\label{gammapn} \Gamma_{P,n}(t)&=&\frac{f_n\tau_{lgn}}{\tau_E\tau_I}\left\{\sum\limits_{j=1\atop i\neq j}^{2}\frac{\tau_{lgn}(\alpha-\beta)(1+\tau_P\lambda_j)\left(1+C_{PQ}K_{PQ,n}-C_{QQ}K_{QQ,n}+\tau_Q\lambda_j\right)}{(\lambda_j-\lambda_i)(\alpha+\tau_{lgn}\lambda_j)(\beta+\tau_{lgn}\lambda_j)}\,e^{\lambda_j t} \right.\nonumber\\[6pt] &&-\,\frac{(\tau_{lgn}-\tau_P\alpha)\left[\tau_{lgn} (1+C_{PQ}K_{PQ,n}-C_{QQ}K_{QQ,n})-\tau_Q\alpha\right]}{(\alpha+\lambda_1\tau_{lgn})(\alpha+\lambda_2\tau_{lgn})}\,e^{-\alpha t/\tau_{lgn}}\nonumber \\[6pt] &&\left.+\,\frac{(\tau_{lgn}-\tau_P\beta)\left[\tau_{lgn} (1+C_{PQ}K_{PQ,n}-C_{QQ}K_{QQ,n})-\tau_Q\beta\right]}{(\beta+\lambda_1\tau_{lgn})(\beta+\lambda_2\tau_{lgn})}\,e^{-\beta t/\tau_{lgn}}\right\}, \end{array} $$
(127)

with P,Q = E,I and Q ≠ P. The interval-specific firing rate average \(M_P(t,\theta)\) can now be computed from formulas (125) and (127) as

$$\label{mfourier} M_P(t,\theta)=\sum_{n=-\infty}^\infty M_{P,n}(t) e^{2in\theta}. $$
(128)
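
Putting Eqs. (125), (127), and (128) together, the following sketch evaluates \(M_E(t,\theta)\) on a grid. All parameter values are illustrative stand-ins, the Fourier sum is truncated, and the sign conventions are those of Eq. (127) above:

```python
import numpy as np

# Sketch assembling M_E(t, theta) from Eqs. (125), (127), and (128); all
# parameter values are illustrative stand-ins, and the Fourier sum is
# truncated at |n| = 8 (f_n is negligible beyond that here).
tau_E, tau_I, tau_lgn = 0.005, 0.010, 0.003
alpha, beta, nu = 1.0, 5.0 / 3.0, 0.01
C_EE, C_EI, C_IE, C_II = 1.0, -2.0, 6.0, -2.0
sig_E = sig_I = np.pi / 8
sigma_lgn = 0.8

def Gamma_E(n, t):
    """Gamma_{E,n}(t) from Eq. (127); vanishes for t < 0."""
    t = np.asarray(t, dtype=float)
    K = lambda s: np.exp(-n**2 * s**2 / 4) / np.pi           # Eq. (117)
    f_n = np.exp(-n**2 * sigma_lgn**2 / 4)                   # Eq. (120)
    lam_E = (C_EE * K(sig_E) - 1) / tau_E                    # Eq. (123)
    lam_I = (C_II * K(sig_I) - 1) / tau_I
    r = np.sqrt(complex((lam_E - lam_I)**2 / 4
                        + C_IE * K(sig_E) * C_EI * K(sig_I) / (tau_E * tau_I)))
    lam = [(lam_E + lam_I) / 2 + r, (lam_E + lam_I) / 2 - r]  # Eq. (122)
    X = 1 + C_EI * K(sig_I) - C_II * K(sig_I)   # 1 + C_PQ K_PQ - C_QQ K_QQ
    out = np.zeros(t.shape, dtype=complex)
    for j, i in ((0, 1), (1, 0)):               # residues at lam_1, lam_2
        out += (tau_lgn * (alpha - beta) * (1 + tau_E * lam[j]) * (X + tau_I * lam[j])
                / ((lam[j] - lam[i]) * (alpha + tau_lgn * lam[j])
                   * (beta + tau_lgn * lam[j])) * np.exp(lam[j] * t))
    for sgn, g in ((-1.0, alpha), (1.0, beta)):  # residues at -g / tau_lgn
        out += (sgn * (tau_lgn - tau_E * g) * (tau_lgn * X - tau_I * g)
                / ((g + lam[0] * tau_lgn) * (g + lam[1] * tau_lgn))
                * np.exp(-g * t / tau_lgn))
    return (f_n * tau_lgn / (tau_E * tau_I)) * out.real * (t >= 0)

t = np.linspace(0.0, 0.2, 401)
theta = np.linspace(-np.pi / 2, np.pi / 2, 101)
M = np.zeros((t.size, theta.size))
for n in range(1, 9):                                # f_0 = 0; pair n with -n
    M_n = Gamma_E(n, t) - Gamma_E(n, t - nu)         # Eq. (125)
    M += 2.0 * np.outer(M_n, np.cos(2 * n * theta))  # Eq. (128)
print("peak response:", M.max())
```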

From Eq. (128), we can easily calculate the correlation function \({\mathcal M}_P(t,\theta)\) via formula (6b). After first rescaling \({\mathcal M}_P(t,\theta)\to\alpha_P + \beta_P I \tilde{\mathcal M}_P(t,\theta)\), with the constants \(\alpha_P\) and \(\beta_P\) as in Eq. (112), this calculation replaces all the exponential factors of the form \(e^{\gamma t}\) in Eq. (127) by the functions

$$\label{timefactors} \Lambda(t,\gamma,\nu)=\left\{\begin{array}{ll} 0, & t<-\nu,\\[3pt] (e^{\gamma(t+\nu)}-1)/\gamma,\qquad & -\nu\leq t< 0,\\[3pt] e^{\gamma t }(e^{\gamma \nu}-1)/\gamma, & t\geq 0. \end{array}\right. $$
(129)
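
A direct, vectorized transcription of Eq. (129), where γ may be complex in the oscillatory case:

```python
import numpy as np

# Direct transcription of the time factors (129); gamma may be complex.
def Lam(t, gamma, nu):
    t = np.asarray(t, dtype=float)
    out = np.zeros(t.shape, dtype=np.result_type(float, gamma))
    mid = (-nu <= t) & (t < 0.0)
    out[mid] = (np.exp(gamma * (t[mid] + nu)) - 1.0) / gamma
    hi = t >= 0.0
    out[hi] = np.exp(gamma * t[hi]) * (np.exp(gamma * nu) - 1.0) / gamma
    return out
```

Substituting Λ(t,γ,ν) for each factor \(e^{\gamma t}\) in Eq. (127) then yields the correlation function \({\mathcal M}_P(t,\theta)\), as stated.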

After dropping the tilde, the rescaled, orientation-tuned part of the correlation function, \({\mathcal M}_P(t,\theta)\), is then described by a Fourier series analogous to Eq. (128).

Note that formula (129) yields only a continuous solution \({\mathcal M}_P(t,\theta)\), not a continuously differentiable one. This can be traced back to the choice of ζ = 0 in the temporal LGN kernel given by Eqs. (20) and (25), which makes this kernel itself only continuous. This lack of smoothness appears to be the source of the kinks at \(t_{\min} = \nu\) in Fig. 6.
