-
Prototype Analysis in Hopfield Networks with Hebbian Learning Neural Comput. (IF 2.7) Pub Date : 2024-08-30 Hayden McAlister, Anthony Robins, Lech Szymanski
We discuss prototype formation in the Hopfield network. Typically, Hebbian learning with highly correlated states leads to degraded memory performance. We show that this type of learning can lead to prototype formation, where unlearned states emerge as representatives of large correlated subsets of states, alleviating capacity woes. This process has similarities to prototype learning in human cognition
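The mechanism is easy to reproduce in a few lines. Below is a minimal numpy sketch, assuming bipolar states, a single underlying prototype, and the standard Hebbian outer-product rule; all parameters (network size, corruption rates) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200       # neurons
K = 50        # correlated training states
flip_p = 0.1  # per-bit corruption relative to the prototype

# One prototype and K noisy exemplars correlated with it
prototype = rng.choice([-1, 1], size=N)
flips = rng.random((K, N)) < flip_p
states = np.where(flips, -prototype, prototype)

# Hebbian learning: sum of outer products, zero self-coupling
W = states.T @ states / N
np.fill_diagonal(W, 0.0)

# Recall from a heavily corrupted cue; the *unlearned* prototype
# emerges as the attractor for the correlated subset
cue = np.where(rng.random(N) < 0.3, -prototype, prototype)
x = cue.copy()
for _ in range(20):
    x = np.where(W @ x >= 0, 1, -1)

print("overlap with prototype:", x @ prototype / N)  # close to 1.0
```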
-
Predictive Representations: Building Blocks of Intelligence Neural Comput. (IF 2.7) Pub Date : 2024-08-30 Wilka Carvalho, Momchil S. Tomov, William de Cothi, Caswell Barry, Samuel J. Gershman
Adaptive behavior often requires predicting future events. The theory of reinforcement learning prescribes what kinds of predictive representations are useful and how to compute them. This review integrates these theoretical ideas with work on cognition and neuroscience. We pay special attention to the successor representation and its generalizations, which have been widely applied as both engineering
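As a concrete anchor for what the successor representation is: for a fixed policy with state-transition matrix T and discount γ, it is M = Σ_t γ^t T^t = (I − γT)⁻¹, so state values follow from any reward vector by one matrix product. A toy sketch (the ring chain is an arbitrary example, not one from the review):

```python
import numpy as np

gamma = 0.9
# Transition matrix of a 4-state ring under a fixed policy
T = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0]])

# Successor representation: expected discounted future occupancy
M = np.linalg.inv(np.eye(4) - gamma * T)

# State values for any reward vector via a single product
r = np.array([0.0, 0.0, 1.0, 0.0])
print(M @ r)
```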
-
Multimodal and Multifactor Branching Time Active Inference Neural Comput. (IF 2.7) Pub Date : 2024-08-30 Théophile Champion, Marek Grześ, Howard Bowman
Active inference is a state-of-the-art framework for modeling the brain that explains a wide range of mechanisms. Recently, two versions of branching time active inference (BTAI) have been developed to handle the exponential (space and time) complexity class that occurs when computing the prior over all possible policies up to the time horizon. However, those two versions of BTAI still suffer from
-
Mechanism of Duration Perception in Artificial Brains Suggests New Model of Attentional Entrainment Neural Comput. (IF 2.7) Pub Date : 2024-08-23 Ali Tehrani-Saleh, J. Devin McAuley, Christoph Adami
While cognitive theory has advanced several candidate frameworks to explain attentional entrainment, the neural basis for the temporal allocation of attention is unknown. Here we present a new model of attentional entrainment guided by empirical evidence obtained using a cohort of 50 artificial brains. These brains were evolved in silico to perform a duration judgment task similar to one where human
-
Trainable Reference Spikes Improve Temporal Information Processing of SNNs With Supervised Learning Neural Comput. (IF 2.7) Pub Date : 2024-08-23 Zeyuan Wang, Luis Cruz
Spiking neural networks (SNNs) are the next-generation neural networks composed of biologically plausible neurons that communicate through trains of spikes. By modifying the plastic parameters of SNNs, including weights and time delays, SNNs can be trained to perform various AI tasks, although in general not at the same level of performance as typical artificial neural networks (ANNs). One possible
-
Spiking Neural Network Pressure Sensor Neural Comput. (IF 2.7) Pub Date : 2024-08-23 Michał Markiewicz, Ireneusz Brzozowski, Szymon Janusz
Von Neumann architecture requires information to be encoded as numerical values. For that reason, artificial neural networks running on computers require the data coming from sensors to be discretized. Other network architectures that more closely mimic biological neural networks (e.g., spiking neural networks) can be simulated on von Neumann architecture, but more important, they can also be executed
-
Active Inference and Reinforcement Learning: A Unified Inference on Continuous State and Action Spaces under Partial Observability Neural Comput. (IF 2.7) Pub Date : 2024-08-23 Parvin Malekzadeh, Konstantinos N. Plataniotis
Reinforcement learning (RL) has garnered significant attention for developing decision-making agents that aim to maximize rewards, specified by an external supervisor, within fully observable environments. However, many real-world problems involve partial or noisy observations, where agents cannot access complete and accurate information about the environment. These problems are commonly formulated
-
Intrinsic Rewards for Exploration Without Harm From Observational Noise: A Simulation Study Based on the Free Energy Principle Neural Comput. (IF 2.7) Pub Date : 2024-08-19 Theodore Jerome Tinker, Kenji Doya, Jun Tani
In reinforcement learning (RL), artificial agents are trained to maximize numerical rewards by performing tasks. Exploration is essential in RL because agents must discover information before exploiting it. Two rewards encouraging efficient exploration are the entropy of action policy and curiosity for information gain. Entropy is well established in the literature, promoting randomized action selection
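One of the two rewards is simple enough to state in code: the entropy bonus adds the Shannon entropy of the action distribution to the task reward. A sketch assuming a discrete softmax policy; the weighting `beta` is a hypothetical hyperparameter, not one from the study.

```python
import numpy as np

def policy_entropy(logits):
    """Shannon entropy of a softmax policy over discrete actions."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.sum(p * np.log(p + 1e-12))

beta = 0.01  # hypothetical entropy weighting
logits = np.array([2.0, 0.5, 0.1])
r_total = 1.0 + beta * policy_entropy(logits)  # task reward + bonus
print(r_total)
```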
-
Human Eyes–Inspired Recurrent Neural Networks Are More Robust Against Adversarial Noises Neural Comput. (IF 2.7) Pub Date : 2024-08-19 Minkyu Choi, Yizhen Zhang, Kuan Han, Xiaokai Wang, Zhongming Liu
Humans actively observe the visual surroundings by focusing on salient objects and ignoring trivial details. However, computer vision models based on convolutional neural networks (CNN) often analyze visual input all at once through a single feedforward pass. In this study, we designed a dual-stream vision model inspired by the human brain. This model features retina-like input layers and includes
-
Efficient Hyperdimensional Computing With Spiking Phasors Neural Comput. (IF 2.7) Pub Date : 2024-08-19 Jeff Orchard, P. Michael Furlong, Kathryn Simone
Hyperdimensional (HD) computing (also referred to as vector symbolic architectures, VSAs) offers a method for encoding symbols into vectors, allowing for those symbols to be combined in different ways to form other vectors in the same vector space. The vectors and operators form a compositional algebra, such that composite vectors can be decomposed back to their constituent vectors. Many useful algorithms
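The compositional algebra is concise to demonstrate in the common bipolar (MAP-style) VSA, used here only as a stand-in; the paper's spiking-phasor encoding is not reproduced. Binding is elementwise multiplication, bundling is majority sign, and unbinding with a role vector returns a noisy copy of its filler.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # vector dimensionality

def rand_vec():
    return rng.choice([-1, 1], size=D)

# Atomic symbols: three roles and three fillers
color, shape, size = rand_vec(), rand_vec(), rand_vec()
red, circle, big = rand_vec(), rand_vec(), rand_vec()

# Bind role-filler pairs, then bundle (odd number avoids sign ties)
composite = np.sign(color * red + shape * circle + size * big)

# Decompose: unbinding with a role yields a noisy copy of its filler
est = composite * color
print("sim to red:   ", est @ red / D)     # clearly positive (~0.5)
print("sim to circle:", est @ circle / D)  # near 0
```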
-
Spontaneous Emergence of Robustness to Light Variation in CNNs With a Precortically Inspired Module Neural Comput. (IF 2.7) Pub Date : 2024-08-19 J. Petkovic, R. Fioresi
The analogies between the mammalian primary visual cortex and the structure of CNNs used for image classification tasks suggest that the introduction of an additional preliminary convolutional module inspired by the mathematical modeling of the precortical neuronal circuits can improve robustness with respect to global light intensity and contrast variations in the input images. We validate this hypothesis
-
On the Search for Data-Driven and Reproducible Schizophrenia Subtypes Using Resting State fMRI Data From Multiple Sites Neural Comput. (IF 2.7) Pub Date : 2024-08-19 Lærke Gebser Krohne, Ingeborg Helbech Hansen, Kristoffer H. Madsen
For decades, fMRI data have been used to search for biomarkers for patients with schizophrenia. Still, firm conclusions are yet to be made, which is often attributed to the high internal heterogeneity of the disorder. A promising way to disentangle the heterogeneity is to search for subgroups of patients with more homogeneous biological profiles. We applied an unsupervised multiple co-clustering (MCC)
-
UAdam: Unified Adam-Type Algorithmic Framework for Nonconvex Optimization Neural Comput. (IF 2.7) Pub Date : 2024-08-19 Yiming Jiang, Jinlan Liu, Dongpo Xu, Danilo P. Mandic
Adam-type algorithms have become a preferred choice for optimization in the deep learning setting; however, despite their success, their convergence is still not well understood. To this end, we introduce a unified framework for Adam-type algorithms, termed UAdam. It is equipped with a general form of the second-order moment, which makes it possible to include Adam and its existing and future variants
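To make the "general form of the second-order moment" idea concrete, here is a sketch of an Adam-type step where the second-moment update is a pluggable function; this is an illustration in the spirit of a unified framework, not the paper's exact formulation (bias correction of v is omitted for brevity).

```python
import numpy as np

def adam_type_step(w, g, m, v, t, lr=1e-3, beta1=0.9,
                   second_moment=None, eps=1e-8):
    """One generic Adam-type update; `second_moment` maps (v, g) -> v."""
    if second_moment is None:
        # vanilla Adam: exponential moving average of squared gradients
        second_moment = lambda v, g: 0.999 * v + 0.001 * g**2
    m = beta1 * m + (1 - beta1) * g
    v = second_moment(v, g)
    m_hat = m / (1 - beta1**t)  # bias-corrected first moment
    return w - lr * m_hat / (np.sqrt(v) + eps), m, v

# An AMSGrad-style variant drops in as a different second-moment rule
amsgrad = lambda v, g: np.maximum(v, 0.999 * v + 0.001 * g**2)
```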
-
Hebbian Descent: A Unified View on Log-Likelihood Learning Neural Comput. (IF 2.7) Pub Date : 2024-08-19 Jan Melchior, Robin Schiewer, Laurenz Wiskott
This study discusses the negative impact of the derivative of the activation functions in the output layer of artificial neural networks, in particular in continual learning. We propose Hebbian descent as a theoretical framework to overcome this limitation, which is implemented through an alternative loss function for gradient descent that we refer to as the Hebbian descent loss. This loss is effectively the
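On one reading of the abstract, the key move is dropping the output activation's derivative from the weight update. A single-layer sketch contrasting the two updates (squared-error loss, sigmoid output; this gloss is mine, not the paper's code):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def updates(W, x, t):
    """Plain gradient descent vs. a Hebbian-descent-style update that
    omits the derivative of the output nonlinearity (illustrative)."""
    y = sigmoid(W @ x)
    err = y - t
    grad_gd = np.outer(err * y * (1 - y), x)  # includes phi'(a)
    grad_hd = np.outer(err, x)                # derivative dropped
    return grad_gd, grad_hd
```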
-
Manifold Gaussian Variational Bayes on the Precision Matrix Neural Comput. (IF 2.7) Pub Date : 2024-08-19 Martin Magris, Mostafa Shabani, Alexandros Iosifidis
We propose an optimization algorithm for variational inference (VI) in complex models. Our approach relies on natural gradient updates where the variational space is a Riemann manifold. We develop an efficient algorithm for gaussian variational inference whose updates satisfy the positive definite constraint on the variational covariance matrix. Our manifold gaussian variational Bayes on the precision
-
Learning Internal Representations of 3D Transformations From 2D Projected Inputs Neural Comput. (IF 2.7) Pub Date : 2024-08-14 Marissa Connor, Bruno Olshausen, Christopher Rozell
We describe a computational model for inferring 3D structure from the motion of projected 2D points in an image, with the aim of understanding how biological vision systems learn and internally represent 3D transformations from the statistics of their input. The model uses manifold transport operators to describe the action of 3D points in a scene as they undergo transformation. We show that the model
-
Electrical Signaling Beyond Neurons Neural Comput. (IF 2.7) Pub Date : 2024-08-14 Travis Monk, Nik Dennler, Nicholas Ralph, Shavika Rastogi, Saeed Afshar, Pablo Urbizagastegui, Russell Jarvis, André van Schaik, Andrew Adamatzky
Neural action potentials (APs) are difficult to interpret as signal encoders and/or computational primitives. Their relationships with stimuli and behaviors are obscured by the staggering complexity of nervous systems themselves. We can reduce this complexity by observing that “simpler” neuron-less organisms also transduce stimuli into transient electrical pulses that affect their behaviors. Without
-
Top-Down Priors Disambiguate Target and Distractor Features in Simulated Covert Visual Search Neural Comput. (IF 2.7) Pub Date : 2024-08-14 Justin D. Theiss, Michael A. Silver
Several models of visual search consider visual attention as part of a perceptual inference process, in which top-down priors disambiguate bottom-up sensory information. Many of these models have focused on gaze behavior, but there are relatively fewer models of covert spatial attention, in which attention is directed to a peripheral location in visual space without a shift in gaze direction. Here
-
Inference on the Macroscopic Dynamics of Spiking Neurons Neural Comput. (IF 2.7) Pub Date : 2024-08-14 Nina Baldy, Martin Breyton, Marmaduke M. Woodman, Viktor K. Jirsa, Meysam Hashemi
The process of inference on networks of spiking neurons is essential to decipher the underlying mechanisms of brain computation and function. In this study, we conduct inference on parameters and dynamics of a mean-field approximation, simplifying the interactions of neurons. Estimating parameters of this class of generative model allows one to predict the system’s dynamics and responses under changing
-
Deconstructing Deep Active Inference: A Contrarian Information Gatherer Neural Comput. (IF 2.7) Pub Date : 2024-08-14 Théophile Champion, Marek Grześ, Lisa Bonheme, Howard Bowman
Active inference is a theory of perception, learning, and decision making that can be applied to neuroscience, robotics, psychology, and machine learning. Recently, intensive research has been taking place to scale up this framework using Monte Carlo tree search and deep learning. The goal of this activity is to solve more complicated tasks using deep active inference. First, we review the existing
-
Energy Complexity of Convolutional Neural Networks Neural Comput. (IF 2.7) Pub Date : 2024-07-19 Jiří Šíma, Petra Vidnerová, Vojtěch Mrázek
The energy efficiency of hardware implementations of convolutional neural networks (CNNs) is critical to their widespread deployment in low-power mobile devices. Recently, a number of methods have been proposed for providing energy-optimal mappings of CNNs onto diverse hardware accelerators. Their estimated energy consumption is related to specific implementation details and hardware parameters, which
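A first-order version of such estimates starts from counting multiply-accumulate operations (MACs), which energy models then scale by per-operation and data-movement costs. A sketch for one convolutional layer; the 1 pJ/MAC figure is purely a placeholder assumption.

```python
def conv2d_macs(h, w, c_in, c_out, k, stride=1):
    """Multiply-accumulates for one conv layer (valid padding)."""
    h_out = (h - k) // stride + 1
    w_out = (w - k) // stride + 1
    return h_out * w_out * c_out * (k * k * c_in)

macs = conv2d_macs(224, 224, 3, 64, 3)
print(macs, "MACs ->", macs * 1e-12, "J at a hypothetical 1 pJ/MAC")
```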
-
Promoting the Shift From Pixel-Level Correlations to Object Semantics Learning by Rethinking Computer Vision Benchmark Data Sets Neural Comput. (IF 2.7) Pub Date : 2024-07-19 Maria Osório, Andreas Wichert
In computer vision research, convolutional neural networks (CNNs) have demonstrated remarkable capabilities at extracting patterns from raw pixel data, achieving state-of-the-art recognition accuracy. However, they significantly differ from human visual perception, prioritizing pixel-level correlations and statistical patterns, often overlooking object semantics. To explore this difference, we propose
-
Trade-Offs Between Energy and Depth of Neural Networks Neural Comput. (IF 2.7) Pub Date : 2024-07-19 Kei Uchizawa, Haruki Abe
We present an investigation on threshold circuits and other discretized neural networks in terms of the following four computational resources—size (the number of gates), depth (the number of layers), weight (weight resolution), and energy—where the energy is a complexity measure inspired by sparse coding and is defined as the maximum number of gates outputting nonzero values, taken over all the input
-
Learning Fixed Points of Recurrent Neural Networks by Reparameterizing the Network Model Neural Comput. (IF 2.7) Pub Date : 2024-07-19 Vicky Zhu, Robert Rosenbaum
In computational neuroscience, recurrent neural networks are widely used to model neural activity and learning. In many studies, fixed points of recurrent neural networks are used to model neural responses to static or slowly changing stimuli, such as visual cortical responses to static visual stimuli. These applications raise the question of how to train the weights in a recurrent neural network to
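For context, the fixed point itself is a state r satisfying r = φ(Wr + x), which for weak coupling can be found by simple iteration; a sketch under that contractivity assumption (the paper's contribution, reparameterizing the model for training, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
W = 0.25 * rng.standard_normal((n, n)) / np.sqrt(n)  # weak coupling
x = rng.standard_normal(n)                           # static input

r = np.zeros(n)
for _ in range(500):
    r_new = np.tanh(W @ r + x)  # iterate r <- phi(W r + x)
    if np.linalg.norm(r_new - r) < 1e-10:
        break
    r = r_new
print("fixed-point residual:", np.linalg.norm(r - np.tanh(W @ r + x)))
```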
-
Extended Poisson Gaussian-Process Latent Variable Model for Unsupervised Neural Decoding Neural Comput. (IF 2.7) Pub Date : 2024-07-19 Della Daiyi Luo, Bapun Giri, Kamran Diba, Caleb Kemere
Dimension reduction on neural activity paves the way for unsupervised neural decoding by dissociating the measurement of internal neural pattern reactivation from the measurement of external variable tuning. With assumptions only on the smoothness of latent dynamics and of internal tuning curves, the Poisson gaussian-process latent variable model (P-GPLVM; Wu et al., 2017) is a powerful tool to discover
-
Pulse Shape and Voltage-Dependent Synchronization in Spiking Neuron Networks Neural Comput. (IF 2.7) Pub Date : 2024-07-19 Bastian Pietras
Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the θ-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow
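For readers unfamiliar with the QIF model named here: it integrates dv/dt = v² + I and resets after a spike. A minimal Euler sketch with illustrative parameters (this is the textbook single-neuron model, not the paper's network):

```python
def simulate_qif(I=10.0, dt=1e-4, T=2.0, v_peak=100.0, v_reset=-100.0):
    """Quadratic integrate-and-fire: dv/dt = v**2 + I, reset at v_peak."""
    v, spikes = 0.0, []
    for step in range(int(T / dt)):
        v += dt * (v * v + I)
        if v >= v_peak:
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(len(simulate_qif()), "spikes in 2 s")  # ~2 for I = 10
```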
-
A General, Noise-Driven Mechanism for the 1/f-Like Behavior of Neural Field Spectra Neural Comput. (IF 2.7) Pub Date : 2024-07-19 Mark A. Kramer, Catherine J. Chu
Consistent observations across recording modalities, experiments, and neural systems find neural field spectra with 1/f-like scaling, eliciting many alternative theories to explain this universal phenomenon. We show that a general dynamical system with stochastic drive and minimal assumptions generates 1/f-like spectra consistent with the range of values observed in vivo without requiring a specific
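The flavor of the claim can be checked numerically: a linear system with stochastic drive has a Lorentzian spectrum, and a mixture of such processes with spread time constants produces 1/f-like scaling over a broad band. A sketch under those assumptions (time constants and frequency band are arbitrary choices):

```python
import numpy as np
from scipy.signal import lfilter, welch

rng = np.random.default_rng(3)
fs, n = 1000.0, 200_000

# Sum of noise-driven AR(1) (OU-like) processes with spread time constants
x = np.zeros(n)
for tau in [0.005, 0.02, 0.08, 0.32]:
    a = np.exp(-1.0 / (fs * tau))
    x += lfilter([1.0], [1.0, -a], rng.standard_normal(n))

f, p = welch(x, fs=fs, nperseg=4096)
band = (f > 1.0) & (f < 100.0)
slope = np.polyfit(np.log(f[band]), np.log(p[band]), 1)[0]
print("log-log spectral slope:", slope)  # roughly between -1 and -2
```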
-
Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes Neural Comput. (IF 2.7) Pub Date : 2024-06-07 Sören Christensen, Jan Kallsen
In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization method cannot be used. In this note, we study a stochastic model for supervised learning
-
Data Efficiency, Dimensionality Reduction, and the Generalized Symmetric Information Bottleneck Neural Comput. (IF 2.7) Pub Date : 2024-06-07 K. Michael Martini, Ilya Nemenman
The symmetric information bottleneck (SIB), an extension of the more familiar information bottleneck, is a dimensionality-reduction technique that simultaneously compresses two random variables to preserve information between their compressed versions. We introduce the generalized symmetric information bottleneck (GSIB), which explores different functional forms of the cost of such simultaneous reduction
-
Desiderata for Normative Models of Synaptic Plasticity Neural Comput. (IF 2.7) Pub Date : 2024-06-07 Colin Bredenberg, Cristina Savin
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed
-
A Mean Field to Capture Asynchronous Irregular Dynamics of Conductance-Based Networks of Adaptive Quadratic Integrate-and-Fire Neuron Models Neural Comput. (IF 2.7) Pub Date : 2024-06-07 Christoffer G. Alexandersen, Chloé Duprat, Aitakin Ezzati, Pierre Houzelstein, Ambre Ledoux, Yuhong Liu, Sandra Saghir, Alain Destexhe, Federico Tesler, Damien Depannemaecker
Mean-field models are a class of models used in computational neuroscience to study the behavior of large populations of neurons. These models are based on the idea of representing the activity of a large number of neurons as the average behavior of mean-field variables. This abstraction allows the study of large-scale neural dynamics in a computationally efficient and mathematically tractable manner
-
A Multimodal Fitting Approach to Construct Single-Neuron Models With Patch Clamp and High-Density Microelectrode Arrays Neural Comput. (IF 2.7) Pub Date : 2024-06-07 Alessio Paolo Buccino, Tanguy Damart, Julian Bartram, Darshan Mandge, Xiaohan Xue, Mickael Zbili, Tobias Gänswein, Aurélien Jaquier, Vishalini Emmenegger, Henry Markram, Andreas Hierlemann, Werner Van Geit
In computational neuroscience, multicompartment models are among the most biophysically realistic representations of single neurons. Constructing such models usually involves the use of the patch-clamp technique to record somatic voltage signals under different experimental conditions. The experimental data are then used to fit the many parameters of the model. While patching of the soma is currently
-
Sparse Generalized Canonical Correlation Analysis: Distributed Alternating Iteration-Based Approach Neural Comput. (IF 2.7) Pub Date : 2024-06-07 Kexin Lv, Jia Cai, Junyi Huo, Chao Shang, Xiaolin Huang, Jie Yang
Sparse canonical correlation analysis (CCA) is a useful statistical tool to detect latent information with sparse structures. However, sparse CCA, where the sparsity could be considered as a Laplace prior on the canonical variates, works only for two data sets, that is, there are only two views or two distinct objects. To overcome this limitation, we propose a sparse generalized canonical correlation
-
Associative Learning of an Unnormalized Successor Representation Neural Comput. (IF 2.7) Pub Date : 2024-05-22 Niels J. Verosky
The successor representation is known to relate to temporal associations learned in the temporal context model (Gershman et al., 2012), and subsequent work suggests a wide relevance of the successor representation across spatial, visual, and abstract relational tasks. I demonstrate that the successor representation and purely associative learning have an even deeper relationship than initially indicated:
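For reference, the standard temporal-difference route to the successor representation is itself nearly associative, which is the intuition the letter builds on; the sketch below is the conventional TD version, not the unnormalized variant the title refers to.

```python
import numpy as np

n_states, gamma, alpha = 4, 0.9, 0.1
M = np.zeros((n_states, n_states))  # successor matrix estimate

def td_sr_update(M, s, s_next):
    """TD(0) update for the successor representation."""
    target = np.eye(n_states)[s] + gamma * M[s_next]
    M[s] += alpha * (target - M[s])
    return M

# Learn from observed transitions on a 4-state ring
s = 0
for _ in range(20_000):
    s_next = (s + 1) % n_states
    M = td_sr_update(M, s, s_next)
    s = s_next

print(np.round(M, 2))  # approaches (I - gamma * T)^-1 for the ring
```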
-
Bioplausible Unsupervised Delay Learning for Extracting Spatiotemporal Features in Spiking Neural Networks Neural Comput. (IF 2.7) Pub Date : 2024-05-22 Alireza Nadafian, Mohammad Ganjtabesh
The plasticity of the conduction delay between neurons plays a fundamental role in learning temporal features that are essential for processing videos, speech, and many high-level functions. However, the exact underlying mechanisms in the brain for this modulation are still under investigation. Devising a rule for precisely adjusting the synaptic delays could eventually help in developing more efficient
-
Positive Competitive Networks for Sparse Reconstruction Neural Comput. (IF 2.7) Pub Date : 2024-05-10 Veronica Centorrino, Anand Gokhale, Alexander Davydov, Giovanni Russo, Francesco Bullo
We propose and analyze a continuous-time firing-rate neural network, the positive firing-rate competitive network (PFCN), to tackle sparse reconstruction problems with non-negativity constraints. These problems, which involve approximating a given input stimulus from a dictionary using a set of sparse (active) neurons, play a key role in a wide range of domains, including, for example, neuroscience
-
How Does the Inner Retinal Network Shape the Ganglion Cells Receptive Field? A Computational Study Neural Comput. (IF 2.7) Pub Date : 2024-04-26 Evgenia Kartsaki, Gerrit Hilgen, Evelyne Sernagor, Bruno Cessac
We consider a model of basic inner retinal connectivity where bipolar and amacrine cells interconnect and both cell types project onto ganglion cells, modulating their response output to the brain visual areas. We derive an analytical formula for the spatiotemporal response of retinal ganglion cells to stimuli, taking into account the effects of amacrine cell inhibition. This analysis reveals two
-
Dense Sample Deep Learning Neural Comput. (IF 2.7) Pub Date : 2024-04-26 Stephen José Hanson, Vivek Yadav, Catherine Hanson
Deep learning (DL), a variant of the neural network algorithms originally proposed in the 1980s (Rumelhart et al., 1986), has made surprising progress in artificial intelligence (AI), ranging from language translation and protein folding (Jumper et al., 2021) to autonomous cars and, more recently, human-like language models (chatbots). All that seemed intractable until very recently. Despite the growing
-
Gauge-Optimal Approximate Learning for Small Data Classification Neural Comput. (IF 2.7) Pub Date : 2024-04-26 Edoardo Vecchi, Davide Bassetti, Fabio Graziato, Lukáš Pospíšil, Illia Horenko
Small data learning problems are characterized by a significant discrepancy between the limited number of response variable observations and the large feature space dimension. In this setting, the common learning tools struggle to identify the features important for the classification task from those that bear no relevant information and cannot derive an appropriate learning rule that allows discriminating
-
Linear Codes for Hyperdimensional Computing Neural Comput. (IF 2.7) Pub Date : 2024-04-26 Netanel Raviv
Hyperdimensional computing (HDC) is an emerging computational paradigm for representing compositional information as high-dimensional vectors and has promising potential in applications ranging from machine learning to neuromorphic computing. One of the long-standing challenges in HDC is factoring a compositional representation to its constituent factors, also known as the recovery problem. In this
-
Obtaining Lower Query Complexities Through Lightweight Zeroth-Order Proximal Gradient Algorithms Neural Comput. (IF 2.7) Pub Date : 2024-04-23 Bin Gu, Xiyuan Wei, Hualin Zhang, Yi Chang, Heng Huang
Zeroth-order (ZO) optimization is one key technique for machine learning problems where gradient calculation is expensive or impossible. Several variance-reduced ZO proximal algorithms have been proposed to speed up ZO optimization for nonsmooth problems, and all of them opted for the coordinated ZO estimator against the random ZO estimator when approximating the true gradient, since the former is
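The coordinated (coordinate-wise) estimator the abstract refers to queries the function along each basis direction; a sketch with central differences (2d queries per gradient, the cost the lightweight algorithms aim to reduce):

```python
import numpy as np

def zo_grad_coordinate(f, x, mu=1e-5):
    """Coordinate-wise zeroth-order gradient estimate of f at x."""
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)
    return g

f = lambda x: np.sum(x ** 2)
print(zo_grad_coordinate(f, np.array([1.0, -2.0, 3.0])))  # ~[2, -4, 6]
```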
-
Instance-Specific Model Perturbation Improves Generalized Zero-Shot Learning Neural Comput. (IF 2.7) Pub Date : 2024-04-23 Guanyu Yang, Kaizhu Huang, Rui Zhang, Xi Yang
Zero-shot learning (ZSL) refers to the design of predictive functions on new classes (unseen classes) of data that have never been seen during training. In a more practical scenario, generalized zero-shot learning (GZSL) requires predicting both seen and unseen classes accurately. In the absence of target samples, many GZSL models may overfit training data and are inclined to predict individuals as
-
Toward Improving the Generation Quality of Autoregressive Slot VAEs Neural Comput. (IF 2.7) Pub Date : 2024-04-23 Patrick Emami, Pan He, Sanjay Ranka, Anand Rangarajan
Unconditional scene inference and generation are challenging to learn jointly with a single compositional model. Despite encouraging progress on models that extract object-centric representations (“slots”) from images, unconditional generation of scenes from slots has received less attention. This is primarily because learning the multiobject relations necessary to imagine coherent scenes is difficult
-
An Overview of the Free Energy Principle and Related Research Neural Comput. (IF 2.7) Pub Date : 2024-04-23 Zhengquan Zhang, Feng Xu
The free energy principle and its corollary, the active inference framework, serve as theoretical foundations in the domain of neuroscience, explaining the genesis of intelligent behavior. This principle states that the processes of perception, learning, and decision making—within an agent—are all driven by the objective of “minimizing free energy,” evincing the following behaviors: learning and employing
-
The Determining Role of Covariances in Large Networks of Stochastic Neurons Neural Comput. (IF 2.7) Pub Date : 2024-04-25 Vincent Painchaud, Patrick Desrosiers, Nicolas Doyon
Biological neural networks are notoriously hard to model due to their stochastic behavior and high dimensionality. We tackle this problem by constructing a dynamical model of both the expectations and covariances of the fractions of active and refractory neurons in the network’s populations. We do so by describing the evolution of the states of individual neurons with a continuous-time Markov chain
-
Sparse Firing in a Hybrid Central Pattern Generator for Spinal Motor Circuits Neural Comput. (IF 2.7) Pub Date : 2024-04-24 Beck Strohmer, Elias Najarro, Jessica Ausborn, Rune W. Berg, Silvia Tolu
Central pattern generators are circuits generating rhythmic movements, such as walking. The majority of existing computational models of these circuits produce antagonistic output where all neurons within a population spike with a broad burst at about the same neuronal phase with respect to network output. However, experimental recordings reveal that many neurons within these circuits fire sparsely
-
Heterogeneous Forgetting Rates and Greedy Allocation in Slot-Based Memory Networks Promotes Signal Retention Neural Comput. (IF 2.7) Pub Date : 2024-04-24 BethAnna Jones, Lawrence Snyder, ShiNung Ching
A key question in the neuroscience of memory encoding pertains to the mechanisms by which afferent stimuli are allocated within memory networks. This issue is especially pronounced in the domain of working memory, where capacity is finite. Presumably the brain must embed some “policy” by which to allocate these mnemonic resources in an online manner in order to maximally represent and store afferent
-
Synaptic Information Storage Capacity Measured With Information Theory Neural Comput. (IF 2.7) Pub Date : 2024-04-24 Mohammad Samavat, Thomas M. Bartol, Kristen M. Harris, Terrence J. Sejnowski
Variation in the strength of synapses can be quantified by measuring the anatomical properties of synapses. Quantifying precision of synaptic plasticity is fundamental to understanding information storage and retrieval in neural circuits. Synapses from the same axon onto the same dendrite have a common history of coactivation, making them ideal candidates for determining the precision of synaptic plasticity
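The information-theoretic quantity at stake is the Shannon information of the distribution over distinguishable synaptic strength states; a sketch (the counts below are made up for illustration):

```python
import numpy as np

def bits_per_synapse(counts):
    """Shannon entropy (bits) of the occupancy of distinguishable
    synaptic strength states, given a count per state."""
    p = np.asarray(counts, dtype=float)
    p /= p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(bits_per_synapse([10, 10, 10, 10]))  # 2.0 bits for 4 equal states
```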
-
Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks Neural Comput. (IF 2.7) Pub Date : 2024-04-24 William F. Podlaski, Christian K. Machens
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale’s law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks
-
Probing the Structure and Functional Properties of the Dropout-Induced Correlated Variability in Convolutional Neural Networks Neural Comput. (IF 2.7) Pub Date : 2024-03-21 Xu Pan, Ruben Coen-Cagli, Odelia Schwartz
Computational neuroscience studies have shown that the structure of neural variability to an unchanged stimulus affects the amount of information encoded. Some artificial deep neural networks, such as those with Monte Carlo dropout layers, also have variable responses when the input is fixed. However, the structure of the trial-by-trial neural covariance in neural networks with dropout has not been
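The variability in question is easy to generate: keep dropout active at test time, repeat forward passes on a fixed input, and measure the trial-by-trial covariance. A toy two-layer sketch (architecture and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_dropout_responses(x, W1, W2, p=0.5, n_samples=1000):
    """Stochastic forward passes with dropout kept on at test time."""
    outs = []
    for _ in range(n_samples):
        h = np.maximum(0.0, W1 @ x)
        mask = (rng.random(h.size) > p) / (1 - p)  # inverted dropout
        outs.append(W2 @ (h * mask))
    return np.array(outs)

W1, W2 = rng.standard_normal((64, 10)), rng.standard_normal((5, 64))
R = mc_dropout_responses(rng.standard_normal(10), W1, W2)
cov = np.cov(R.T)  # covariance of responses to an unchanged stimulus
print(cov.shape)
```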
-
Vector Symbolic Finite State Machines in Attractor Neural Networks Neural Comput. (IF 2.7) Pub Date : 2024-03-21 Madison Cotteret, Hugh Greatorex, Martin Ziegler, Elisabetta Chicca
Hopfield attractor networks are robust distributed models of human memory, but they lack a general mechanism for effecting state-dependent attractor transitions in response to input. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random vectors and all state transitions
-
CA3 Circuit Model Compressing Sequential Information in Theta Oscillation and Replay Neural Comput. (IF 2.7) Pub Date : 2024-03-08 Satoshi Kuroki, Kenji Mizuseki
The hippocampus plays a critical role in the compression and retrieval of sequential information. During wakefulness, it achieves this through theta phase precession and theta sequences. Subsequently, during periods of sleep or rest, the compressed information reactivates through sharp-wave ripple events, manifesting as memory replay. However, how these sequential neuronal activities are generated
-
Learning Korobov Functions by Correntropy and Convolutional Neural Networks Neural Comput. (IF 2.7) Pub Date : 2024-03-08 Zhiying Fang, Tong Mao, Jun Fan
Combining information-theoretic learning with deep learning has gained significant attention in recent years, as it offers a promising approach to tackle the challenges posed by big data. However, the theoretical understanding of convolutional structures, which are vital to many structured deep learning models, remains incomplete. To partially bridge this gap, this letter aims to develop generalization
-
Frequency Propagation: Multimechanism Learning in Nonlinear Physical Networks Neural Comput. (IF 2.7) Pub Date : 2024-03-08 Vidyesh Rao Anisetti, Ananth Kandala, Benjamin Scellier, J. M. Schwarz
We introduce frequency propagation, a learning algorithm for nonlinear physical networks. In a resistive electrical circuit with variable resistors, an activation current is applied at a set of input nodes at one frequency and an error current is applied at a set of output nodes at another frequency. The voltage response of the circuit to these boundary currents is the superposition of an activation
-
Column Row Convolutional Neural Network: Reducing Parameters for Efficient Image Processing Neural Comput. (IF 2.7) Pub Date : 2024-03-08 Seongil Im, Jae-Seung Jeong, Junseo Lee, Changhwan Shin, Jeong Ho Cho, Hyunsu Ju
Recent advancements in deep learning have achieved significant progress by increasing the number of parameters in a given model. However, this comes at the cost of computing resources, prompting researchers to explore model compression techniques that reduce the number of parameters while maintaining or even improving performance. Convolutional neural networks (CNN) have been recognized as more efficient
-
Mathematical Modeling of PI3K/Akt Pathway in Microglia Neural Comput. (IF 2.7) Pub Date : 2024-03-08 Alireza Poshtkohi, John Wade, Liam McDaid, Junxiu Liu, Mark L. Dallas, Angela Bithell
The motility of microglia involves intracellular signaling pathways that are predominantly controlled by changes in cytosolic Ca²⁺ and activation of PI3K/Akt (phosphoinositide-3-kinase/protein kinase B). In this letter, we develop a novel biophysical model for cytosolic Ca²⁺ activation of the PI3K/Akt pathway in microglia where Ca²⁺ influx is mediated by both P2Y purinergic receptors (P2YR) and P2X
-
Object-Centric Scene Representations Using Active Inference Neural Comput. (IF 2.7) Pub Date : 2024-03-08 Toon Van de Maele, Tim Verbelen, Pietro Mazzaglia, Stefano Ferraro, Bart Dhoedt
Representing a scene and its constituent objects from raw sensory data is a core ability for enabling robots to interact with their environment. In this letter, we propose a novel approach for scene understanding, leveraging an object-centric generative model that enables an agent to infer object category and pose in an allocentric reference frame using active inference, a neuro-inspired framework
-
Lateral Connections Improve Generalizability of Learning in a Simple Neural Network Neural Comput. (IF 2.7) Pub Date : 2024-03-08 Garrett Crutcher
To navigate the world around us, neural circuits rapidly adapt to their environment learning generalizable strategies to decode information. When modeling these learning strategies, network models find the optimal solution to satisfy one task condition but fail when introduced to a novel task or even a different stimulus in the same space. In the experiments described in this letter, I investigate
-
Active Learning for Discrete Latent Variable Models Neural Comput. (IF 2.7) Pub Date : 2024-02-16 Aditi Jha, Zoe C. Ashwood, Jonathan W. Pillow
Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing