Current journal: Neural Computation
  • Synchrony and Complexity in State-Related EEG Networks: An Application of Spectral Graph Theory
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Amir Hossein Ghaderi; Bianca R. Baltaretu; Masood Nemati Andevari; Vishal Bharmauria; Fuat Balci

    The brain may be considered as a synchronized dynamic network with several coherent dynamical units. However, concerns remain whether synchronizability is a stable state in the brain networks. If so, which index can best reveal the synchronizability in brain networks? To answer these questions, we tested the application of the spectral graph theory and the Shannon entropy as alternative approaches

    Updated: 2020-09-20
  • Toward a Unified Framework for Cognitive Maps
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Woori Kim; Yongseok Yoo

    In this study, we integrated neural encoding and decoding into a unified framework for spatial information processing in the brain. Specifically, the neural representations of self-location in the hippocampus (HPC) and entorhinal cortex (EC) play crucial roles in spatial navigation. Intriguingly, these neural representations in these neighboring brain areas show stark differences. Whereas the place

    Updated: 2020-09-20
  • Effect of Top-Down Connections in Hierarchical Sparse Coding
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Victor Boutin; Angelo Franciosini; Franck Ruffier; Laurent Perrinet

    Hierarchical sparse coding (HSC) is a powerful model to efficiently represent multidimensional, structured data such as images. The simplest solution to solve this computationally hard problem is to decompose it into independent layer-wise subproblems. However, neuroscientific evidence would suggest interconnecting these subproblems as in predictive coding (PC) theory, which adds top-down connections

    Updated: 2020-09-20
  • Inferring Neuronal Couplings from Spiking Data Using a Systematic Procedure with a Statistical Criterion
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Yu Terada; Tomoyuki Obuchi; Takuya Isomura; Yoshiyuki Kabashima

    Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point process data that include a great number of neurons. Here, we propose a systematic procedure for pre- and postprocessing generic point process data in an objective manner to handle data in the framework of a binary simple statistical model, the Ising or generalized McCulloch–Pitts

    Updated: 2020-09-20
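The Ising framing above admits a classical, easy-to-state baseline. The sketch below is not the letter's systematic procedure; it illustrates naive mean-field inversion for estimating couplings from binarized spike data, on purely synthetic ±1 spins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binarized spike data: N neurons, T time bins, entries in {-1, +1}.
N, T = 5, 20000
S = np.where(rng.random((N, T)) < 0.5, 1.0, -1.0)

# Connected correlation matrix C_ij = <s_i s_j> - <s_i><s_j>.
m = S.mean(axis=1)
C = S @ S.T / T - np.outer(m, m)

# Naive mean-field estimate: couplings are (minus) the off-diagonal of C^{-1}.
J = -np.linalg.inv(C)
np.fill_diagonal(J, 0.0)
```

For weakly coupled networks this inversion is a standard first pass; pseudolikelihood or exact maximum likelihood are the heavier alternatives.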
  • Differential Covariance: A New Method to Estimate Functional Connectivity in fMRI
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Tiger W. Lin; Yusi Chen; Qasim Bukhari; Giri P. Krishnan; Maxim Bazhenov; Terrence J. Sejnowski

    Measuring functional connectivity from fMRI recordings is important in understanding processing in cortical networks. However, because the brain's connection pattern is complex, currently used methods are prone to producing false functional connections. We introduce differential covariance analysis, a new method that uses derivatives of the signal for estimating functional connectivity. We generated

    Updated: 2020-09-20
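As a rough illustration of the derivative idea (not the authors' full estimator), a differential covariance matrix can be formed as the covariance between each channel's temporal derivative and the signal itself; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic multichannel signal: 3 channels x 1000 samples (random walks).
x = np.cumsum(rng.standard_normal((3, 1000)), axis=1)

# First-order difference as a crude temporal derivative.
dx = np.diff(x, axis=1)

# Differential covariance dCov_ij = cov(dx_i, x_j).  Unlike ordinary
# covariance it is generally asymmetric, which is what lets
# derivative-based estimators carry directional information.
xc = x[:, :-1] - x[:, :-1].mean(axis=1, keepdims=True)
dc = dx - dx.mean(axis=1, keepdims=True)
dcov = dc @ xc.T / (xc.shape[1] - 1)
```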
  • Repetitive Control for Multi-Joint Arm Movements Based on Virtual Trajectories
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Yoji Uno; Takehiro Suzuki; Takahiro Kagawa

    According to the neuromuscular model of virtual trajectory control, the postures and movements of limbs are performed by shifting the equilibrium positions determined by agonist and antagonist muscle activities. In this study, we develop virtual trajectory control for the reaching movements of a multi-joint arm, introducing a proportional-derivative feedback control scheme. In virtual trajectory control

    Updated: 2020-09-20
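The virtual-trajectory idea combined with PD feedback can be sketched for a single degree of freedom. The gains, time step, and ramp profile below are toy choices for illustration, not values from the letter:

```python
import numpy as np

# Unit point mass tracking a virtual equilibrium trajectory q_v(t):
#   m * qdd = Kp * (q_v - q) - Kd * qd
Kp, Kd, m, dt = 40.0, 12.0, 1.0, 1e-3
q, qd, q_target = 0.0, 0.0, 1.0

traj = np.empty(4000)
for k in range(traj.size):
    # Minimum-jerk-like virtual trajectory reaching the target in 2 s.
    s = min(k * dt / 2.0, 1.0)
    q_v = q_target * (10 * s**3 - 15 * s**4 + 6 * s**5)
    qdd = (Kp * (q_v - q) - Kd * qd) / m
    qd += qdd * dt
    q += qd * dt
    traj[k] = q
```

Shifting the equilibrium q_v(t), rather than commanding torques directly, is the essence of virtual trajectory control.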
  • Assessing Goodness-of-Fit in Marked Point Process Models of Neural Population Coding via Time and Rate Rescaling
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Ali Yousefi; Yalda Amidi; Behzad Nazari; Uri T. Eden

    Marked point process models have recently been used to capture the coding properties of neural populations from multiunit electrophysiological recordings without spike sorting. These clusterless models have been shown in some instances to better describe the firing properties of neural populations than collections of receptive field models for sorted neurons and to lead to better decoding results.

    Updated: 2020-09-20
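Time rescaling itself is standard and easy to demonstrate: under the true conditional intensity, rescaled interspike intervals are i.i.d. Uniform(0, 1), which a Kolmogorov-Smirnov statistic can check. The sketch uses synthetic homogeneous Poisson spikes, not clusterless data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Homogeneous Poisson spike train: ISIs are exponential with rate lam.
lam, n = 20.0, 2000
isis = rng.exponential(1.0 / lam, size=n)

# Time-rescaling: z_k = 1 - exp(-lam * isi_k) should be Uniform(0, 1)
# when the model intensity matches the data.
z = np.sort(1.0 - np.exp(-lam * isis))

# Kolmogorov-Smirnov distance between the empirical CDF and the uniform CDF.
hi = np.arange(1, n + 1) / n
lo = np.arange(0, n) / n
ks = max(np.max(hi - z), np.max(z - lo))
```

A small ks (here well below the 5% critical value of about 1.36/sqrt(n)) indicates a good fit of the rate model.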
  • Bicomplex Projection Rule for Complex-Valued Hopfield Neural Networks
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Masaki Kobayashi

    A complex-valued Hopfield neural network (CHNN) with a multistate activation function is a multistate model of neural associative memory. The weight parameters need a lot of memory resources. Twin-multistate activation functions were introduced to quaternion- and bicomplex-valued Hopfield neural networks. Since their architectures are much more complicated than that of CHNN, the architecture should

    Updated: 2020-09-20
  • Analyzing and Accelerating the Bottlenecks of Training Deep SNNs with Backpropagation
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Ruizhi Chen; Ling Li

    Spiking neural networks (SNNs) with the event-driven manner of transmitting spikes consume ultra-low power on neuromorphic chips. However, training deep SNNs is still challenging compared to convolutional neural networks (CNNs). The SNN training algorithms have not achieved the same performance as CNNs. In this letter, we aim to understand the intrinsic limitations of SNN training to design better

    Updated: 2020-09-20
  • A Cerebellar Computational Mechanism for Delay Conditioning at Precise Time Intervals
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Terence D. Sanger; Mitsuo Kawato

    The cerebellum is known to have an important role in sensing and execution of precise time intervals, but the mechanism by which arbitrary time intervals can be recognized and replicated with high precision is unknown. We propose a computational model in which precise time intervals can be identified from the pattern of individual spike activity in a population of parallel fibers in the cerebellar

    Updated: 2020-09-20
  • Closed-Loop Deep Learning: Generating Forward Models with Backpropagation
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Sama Daryanavard; Bernd Porr

    A reflex is a simple closed-loop control approach that tries to minimize an error but fails to do so because it will always react too late. An adaptive algorithm can use this error to learn a forward model with the help of predictive cues. For example, a driver learns to improve steering by looking ahead to avoid steering in the last minute. In order to process complex cues such as the road ahead,

    Updated: 2020-09-20
  • ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Changcun Huang

    This letter proves that a ReLU network can approximate any continuous function with arbitrary precision by means of piecewise linear or constant approximations. For univariate function f(x), we use the composite of ReLUs to produce a line segment; all of the subnetworks of line segments comprise a ReLU network, which is a piecewise linear approximation to f(x). For multivariate function f(x), ReLU

    Updated: 2020-09-20
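The univariate construction can be made concrete: a single hidden layer of ReLUs reproduces any piecewise linear interpolant, since each unit contributes one change of slope at its knot. A minimal sketch (target function and knot grid are arbitrary illustrative choices):

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

# Interpolate f on a knot grid as fhat(x) = f(x_0) + sum_i c_i relu(x - x_i),
# where c_i is the change of slope at knot x_i.
f = np.sin
knots = np.linspace(0.0, np.pi, 41)
fk = f(knots)
slopes = np.diff(fk) / np.diff(knots)
c = np.concatenate(([slopes[0]], np.diff(slopes)))

def fhat(x):
    return fk[0] + relu(x[:, None] - knots[None, :-1]) @ c

xs = np.linspace(0.0, np.pi, 500)
err = np.max(np.abs(fhat(xs) - f(xs)))
```

Refining the knot grid drives err to zero, which is the piecewise linear route to universal approximation in one dimension.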
  • Reverse-Engineering Neural Networks to Characterize Their Cost Functions
    Neural Comput. (IF 2.505) Pub Date : 2020-09-18
    Takuya Isomura; Karl Friston

    This letter considers a class of biologically plausible cost functions for neural networks, where the same cost function is minimized by both neural activity and plasticity. We show that such cost functions can be cast as a variational bound on model evidence under an implicit generative model. Using generative models based on partially observed Markov decision processes (POMDP), we show that neural

    Updated: 2020-09-20
  • Analysis of Regression Algorithms with Unbounded Sampling.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Hongzhi Tong,Jiajing Gao

    In this letter, we study a class of the regularized regression algorithms when the sampling process is unbounded. By choosing different loss functions, the learning algorithms can include a wide range of commonly used algorithms for regression. Unlike the prior work on theoretical analysis of unbounded sampling, no constraint on the output variables is specified in our setting. By an elegant error

    Updated: 2020-08-20
  • Fast and Accurate Langevin Simulations of Stochastic Hodgkin-Huxley Dynamics.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Shusen Pu,Peter J Thomas

    Fox and Lu introduced a Langevin framework for discrete-time stochastic models of randomly gated ion channels such as the Hodgkin-Huxley (HH) system. They derived a Fokker-Planck equation with state-dependent diffusion tensor D and suggested a Langevin formulation with noise coefficient matrix S such that SSᵀ = D. Subsequently, several authors introduced a variety of Langevin equations for the HH system

    Updated: 2020-08-20
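The Langevin recipe named here, choosing a noise coefficient matrix S with SSᵀ = D and integrating, can be sketched with Euler-Maruyama on a toy two-dimensional SDE. The drift and diffusion tensor below are illustrative stand-ins, not the HH channel model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Langevin SDE  dx = f(x) dt + S dW  with S chosen so that S S^T = D.
def drift(x):
    return -x  # simple Ornstein-Uhlenbeck-style drift

D = np.array([[0.20, 0.05],
              [0.05, 0.10]])
S = np.linalg.cholesky(D)  # one valid square root: S @ S.T == D

dt, n_steps = 1e-3, 5000
x = np.array([1.0, -1.0])
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(2)
    x = x + drift(x) * dt + S @ dW
```

Different square roots S of the same D yield statistically equivalent trajectories, which is one reason several distinct Langevin formulations of the same model can coexist.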
  • A Predictive-Coding Network That Is Both Discriminative and Generative.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Wei Sun,Jeff Orchard

    Predictive coding (PC) networks are a biologically interesting class of neural networks. Their layered hierarchy mimics the reciprocal connectivity pattern observed in the mammalian cortex, and they can be trained using local learning rules that approximate backpropagation (Bogacz, 2017). However, despite having feedback connections that enable information to flow down the network hierarchy, discriminative

    Updated: 2020-08-20
  • Active Learning of Bayesian Linear Models with High-Dimensional Binary Features by Parameter Confidence-Region Estimation.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Yu Inatsu,Masayuki Karasuyama,Keiichi Inoue,Hideki Kandori,Ichiro Takeuchi

    In this letter, we study an active learning problem for maximizing an unknown linear function with high-dimensional binary features. This problem is notoriously complex but arises in many important contexts. When the sampling budget, that is, the number of possible function evaluations, is smaller than the number of dimensions, it tends to be impossible to identify all of the optimal binary features

    Updated: 2020-08-20
  • Multiview Alignment and Generation in CCA via Consistent Latent Encoding.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Yaxin Shi,Yuangang Pan,Donna Xu,Ivor W Tsang

    Multiview alignment, achieving one-to-one correspondence of multiview inputs, is critical in many real-world multiview applications, especially for cross-view data analysis problems. An increasing amount of work has studied this alignment problem with canonical correlation analysis (CCA). However, existing CCA models are prone to misalign the multiple views due to either the neglect of uncertainty

    Updated: 2020-08-20
  • Modal Principal Component Analysis.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Keishi Sando,Hideitsu Hino

    Principal component analysis (PCA) is a widely used method for data processing, such as for dimension reduction and visualization. Standard PCA is known to be sensitive to outliers, and various robust PCA methods have been proposed. It has been shown that the robustness of many statistical methods can be improved using mode estimation instead of mean estimation, because mode estimation is not significantly

    Updated: 2020-08-20
  • Active Learning for Enumerating Local Minima Based on Gaussian Process Derivatives.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Yu Inatsu,Daisuke Sugita,Kazuaki Toyoura,Ichiro Takeuchi

    We study active learning (AL) based on gaussian processes (GPs) for efficiently enumerating all of the local minimum solutions of a black-box function. This problem is challenging because local solutions are characterized by their zero gradient and positive-definite Hessian properties, but those derivatives cannot be directly observed. We propose a new AL method in which the input points are sequentially

    Updated: 2020-08-20
  • Binless Kernel Machine: Modeling Spike Train Transformation for Cognitive Neural Prostheses.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Cunle Qian,Xuyun Sun,Yueming Wang,Xiaoxiang Zheng,Yiwen Wang,Gang Pan

    Modeling spike train transformation among brain regions helps in designing a cognitive neural prosthesis that restores lost cognitive functions. Various methods analyze the nonlinear dynamic spike train transformation between two cortical areas with low computational efficiency. The application of a real-time neural prosthesis requires computational efficiency, performance stability, and better interpretation

    Updated: 2020-08-20
  • Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Yuko Kuroki,Liyuan Xu,Atsushi Miyauchi,Junya Honda,Masashi Sugiyama

    We study the problem of stochastic multiple-arm identification, where an agent sequentially explores a size-k subset of arms (also known as a super arm) from given n arms and tries to identify the best super arm. Most work so far has considered the semi-bandit setting, where the agent can observe the reward of each pulled arm, or has assumed that each arm can be queried at each round. However, in real-world

    Updated: 2020-08-20
  • Tensor Least Angle Regression for Sparse Representations of Multidimensional Signals.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Ishan Wickramasingha,Ahmed Elrewainy,Michael Sobhy,Sherif S Sherif

    Sparse signal representations have gained much interest recently in both signal processing and statistical communities. Compared to orthogonal matching pursuit (OMP) and basis pursuit, which solve the L0 and L1 constrained sparse least-squares problems, respectively, least angle regression (LARS) is a computationally efficient method to solve both problems for all critical values of the regularization

    Updated: 2020-08-20
  • Hyperbolic-Valued Hopfield Neural Networks in Synchronous Mode.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Masaki Kobayashi

    For most multistate Hopfield neural networks, the stability conditions in asynchronous mode are known, whereas those in synchronous mode are not. If they were to converge in synchronous mode, recall would be accelerated by parallel processing. Complex-valued Hopfield neural networks (CHNNs) with a projection rule do not converge in synchronous mode. In this work, we provide stability conditions for

    Updated: 2020-08-20
  • Fine-Grained 3D-Attention Prototypes for Few-Shot Learning.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Xin Hu,Jun Liu,Jie Ma,Yudai Pan,Lingling Zhang

    In the real world, a limited number of labeled fine-grained images per class can hardly represent the class distribution effectively. Moreover, fine-grained images show more subtle visual differences than simple images with obvious objects; that is, they exhibit smaller interclass and larger intraclass variations. To solve these issues, we propose an end-to-end attention-based model for fine-grained

    Updated: 2020-08-20
  • Parallel Neural Multiprocessing with Gamma Frequency Latencies.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Ruohan Zhang,Dana H Ballard

    The Poisson variability in cortical neural responses has been typically modeled using spike averaging techniques, such as trial averaging and rate coding, since such methods can produce reliable correlates of behavior. However, mechanisms that rely on counting spikes could be slow and inefficient and thus might not be useful in the brain for computations at timescales in the 10 millisecond range. This

    Updated: 2020-08-20
  • A Mean-Field Description of Bursting Dynamics in Spiking Neural Networks with Short-Term Adaptation.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Richard Gast,Helmut Schmidt,Thomas R Knösche

    Bursting plays an important role in neural communication. At the population level, macroscopic bursting has been identified in populations of neurons that do not express intrinsic bursting mechanisms. For the analysis of phase transitions between bursting and non-bursting states, mean-field descriptions of macroscopic bursting behavior are a valuable tool. In this article, we derive mean-field descriptions

    Updated: 2020-08-20
  • Theory and Algorithms for Shapelet-Based Multiple-Instance Learning.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Daiki Suehiro,Kohei Hatano,Eiji Takimoto,Shuji Yamamoto,Kenichi Bannai,Akiko Takeda

    We propose a new formulation of multiple-instance learning (MIL), in which a unit of data consists of a set of instances called a bag. The goal is to find a good classifier of bags based on the similarity with a "shapelet" (or pattern), where the similarity of a bag with a shapelet is the maximum similarity of instances in the bag. In previous work, some of the training instances have been chosen as

    Updated: 2020-06-10
  • On a Scalable Entropic Breaching of the Overfitting Barrier for Small Data Problems in Machine Learning.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Illia Horenko

    Overfitting and treatment of small data are among the most challenging problems in machine learning (ML), when a relatively small data statistics size T is not enough to provide a robust ML fit for a relatively large data feature dimension D. Deploying a massively parallel ML analysis of generic classification problems for different D and T, we demonstrate the existence of statistically significant

    Updated: 2020-06-10
  • Any Target Function Exists in a Neighborhood of Any Sufficiently Wide Random Network: A Geometrical Perspective.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Shun-Ichi Amari

    It is known that any target function is realized in a sufficiently small neighborhood of any randomly connected deep network, provided the width (the number of neurons in a layer) is sufficiently large. There are sophisticated analytical theories and discussions concerning this striking fact, but rigorous theories are very complicated. We give an elementary geometrical proof by using a simple model

    Updated: 2020-06-10
  • A Discrete-Time Neurodynamic Approach to Sparsity-Constrained Nonnegative Matrix Factorization.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Xinqi Li,Jun Wang,Sam Kwong

    Sparsity is a desirable property in many nonnegative matrix factorization (NMF) applications. Although some level of sparseness of NMF solutions can be achieved by using regularization, the resulting sparsity depends highly on the regularization parameter to be valued in an ad hoc way. In this letter we formulate sparse NMF as a mixed-integer optimization problem with sparsity as binary constraints

    Updated: 2020-06-10
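For contrast with the mixed-integer formulation, the classical multiplicative-update baseline for NMF (Lee-Seung style) fits in a few lines; it is not the letter's neurodynamic method, and the problem sizes are toy:

```python
import numpy as np

rng = np.random.default_rng(5)

# Exactly rank-r nonnegative data, then alternating multiplicative updates
# for  min ||V - W H||_F^2  subject to W, H >= 0.
m, n, r = 20, 30, 4
V = rng.random((m, r)) @ rng.random((r, n))
W, H = rng.random((m, r)), rng.random((r, n))
eps = 1e-12  # guards against division by zero

for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Sparsity is usually encouraged here with an ℓ1 penalty on H tuned ad hoc; the letter instead imposes it exactly, via binary constraints.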
  • Stochastic Multichannel Ranking with Brain Dynamics Preferences.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Yuangang Pan,Ivor W Tsang,Avinash K Singh,Chin-Teng Lin,Masashi Sugiyama

    A driver's cognitive state of mental fatigue significantly affects his or her driving performance and, more important, public safety. Previous studies have leveraged reaction time (RT) as the metric for mental fatigue and aimed at estimating the exact value of RT using electroencephalogram (EEG) signals within a regression model. However, due to the easily corrupted and also nonsmooth properties of RTs

    Updated: 2020-06-10
  • Inference of a Mesoscopic Population Model from Population Spike Trains.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Alexandre René,André Longtin,Jakob H Macke

    Understanding how rich dynamics emerge in neural populations requires models exhibiting a wide range of behaviors while remaining interpretable in terms of connectivity and single-neuron dynamics. However, it has been challenging to fit such mechanistic spiking networks at the single-neuron scale to empirical population data. To close this gap, we propose to fit such data at a mesoscale, using a mechanistic

    Updated: 2020-06-10
  • Shapley Homology: Topological Analysis of Sample Influence for Neural Networks.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Kaixuan Zhang,Qinglong Wang,Xue Liu,C Lee Giles

    Data samples collected for training machine learning models are typically assumed to be independent and identically distributed (i.i.d.). Recent research has demonstrated that this assumption can be problematic as it simplifies the manifold of structured data. This has motivated different research areas such as data poisoning, model improvement, and explanation of machine learning models. In this work

    Updated: 2020-05-20
  • Generation of Scale-Invariant Sequential Activity in Linear Recurrent Networks.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Yue Liu,Marc W Howard

    Sequential neural activity has been observed in many parts of the brain and has been proposed as a neural mechanism for memory. The natural world expresses temporal relationships at a wide range of scales. Because we cannot know the relevant scales a priori, it is desirable that memory, and thus the generated sequences, is scale invariant. Although recurrent neural network models have been proposed

    Updated: 2020-05-20
  • Heterogeneous Synaptic Weighting Improves Neural Coding in the Presence of Common Noise.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Pratik S Sachdeva,Jesse A Livezey,Michael R DeWeese

    Simultaneous recordings from the cortex have revealed that neural activity is highly variable and that some variability is shared across neurons in a population. Further experimental work has demonstrated that the shared component of a neuronal population's variability is typically comparable to or larger than its private component. Meanwhile, an abundance of theoretical work has assessed the impact

    Updated: 2020-05-20
  • A Mathematical Analysis of Memory Lifetime in a Simple Network Model of Memory.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Pascal Helson

    We study the learning of an external signal by a neural network and the time to forget it when this network is submitted to noise. The presentation of an external stimulus to the recurrent network of binary neurons may change the state of the synapses. Multiple presentations of a unique signal lead to its learning. Then, during the forgetting time, the presentation of other signals (noise) may also

    Updated: 2020-05-20
  • A Model for the Study of the Increase in Stimulus and Change Point Detection with Small and Variable Spiking Delays.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Benjamin Straub,Gaby Schneider

    Precise timing of spikes between different neurons has been found to convey reliable information beyond the spike count. In contrast, the role of small and variable spiking delays, as reported, for example, in the visual cortex, remains largely unclear. This issue becomes particularly important considering the high speed of neuronal information processing, which is assumed to be based on only a few

    Updated: 2020-05-20
  • Minimal Spiking Neuron for Solving Multilabel Classification Tasks.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Jakub Fil,Dominique Chu

    The multispike tempotron (MST) is a powerful, single spiking neuron model that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce

    Updated: 2020-05-20
  • Salient Slices: Improved Neural Network Training and Performance with Image Entropy.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-05
    Steven J Frank,Andrea M Frank

    As a training and analysis strategy for convolutional neural networks (CNNs), we slice images into tiled segments and use, for training and prediction, segments that both satisfy an information criterion and contain sufficient content to support classification. In particular, we use image entropy as the information criterion. This ensures that each tile carries as much information diversity as the

    Updated: 2020-05-05
  • Independently Interpretable Lasso for Generalized Linear Models.
    Neural Comput. (IF 2.505) Pub Date : 2020-04-28
    Masaaki Takada,Taiji Suzuki,Hironori Fujisawa

    Sparse regularization such as ℓ1 regularization is a quite powerful and widely used strategy for high-dimensional learning problems. The effectiveness of sparse regularization has been supported practically and theoretically by several studies. However, one of the biggest issues in sparse regularization is that its performance is quite sensitive to correlations between features. Ordinary ℓ1 regularization

    Updated: 2020-04-28
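As background for the ℓ1 discussion, plain proximal-gradient (ISTA) lasso, not the independently interpretable variant proposed here, takes only a few lines; data and the regularization level are synthetic toy choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Lasso:  min_b  (1/2n)||y - X b||^2 + lam ||b||_1,  solved by ISTA.
n, d = 200, 10
X = rng.standard_normal((n, d))
b_true = np.zeros(d)
b_true[:3] = [2.0, -1.5, 1.0]
y = X @ b_true + 0.1 * rng.standard_normal(n)

lam = 0.1
step = 1.0 / np.linalg.norm(X.T @ X / n, 2)  # 1 / Lipschitz constant
b = np.zeros(d)
for _ in range(500):
    z = b - step * (X.T @ (X @ b - y) / n)                    # gradient step
    b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
```

When columns of X are strongly correlated this estimate becomes unstable, which is exactly the sensitivity the letter sets out to address.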
  • First Passage Time Memory Lifetimes for Multistate, Filter-Based Synapses.
    Neural Comput. (IF 2.505) Pub Date : 2020-04-28
    Terry Elliott

    Models of associative memory with discrete state synapses learn new memories by forgetting old ones. In contrast to non-integrative models of synaptic plasticity, models with integrative, filter-based synapses exhibit an initial rise in the fidelity of recall of stored memories. This rise to a peak is driven by a transient process and is then followed by a return to equilibrium. In a series of papers

    Updated: 2020-04-28
  • Efficient Position Decoding Methods Based on Fluorescence Calcium Imaging in the Mouse Hippocampus.
    Neural Comput. (IF 2.505) Pub Date : 2020-04-28
    Mengyu Tu,Ruohe Zhao,Avital Adler,Wen-Biao Gan,Zhe S Chen

    Large-scale fluorescence calcium imaging methods have become widely adopted for studies of long-term hippocampal and cortical neuronal dynamics. Pyramidal neurons of the rodent hippocampus show spatial tuning in freely foraging or head-fixed navigation tasks. Development of efficient neural decoding methods for reconstructing the animal's position in real or virtual environments can provide a fast

    Updated: 2020-04-28
  • Nonequilibrium Statistical Mechanics of Continuous Attractors.
    Neural Comput. (IF 2.505) Pub Date : 2020-04-28
    Weishun Zhong,Zhiyue Lu,David J Schwab,Arvind Murugan

    Continuous attractors have been used to understand recent neuroscience experiments where persistent activity patterns encode internal representations of external attributes like head direction or spatial location. However, the conditions under which the emergent bump of neural activity in such networks can be manipulated by space and time-dependent external sensory or motor signals are not understood

    Updated: 2020-04-28
  • Neural Model of Coding Stimulus Orientation and Adaptation.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Henrikas Vaitkevičius,Algimantas Švežda,Rytis Stanikūnas,Remigijus Bliumas,Alvydas Šoliūnas,Janus J Kulikowski

    The coding of line orientation in the visual system has been investigated extensively. During the prolonged viewing of a stimulus, the perceived orientation continuously changes (normalization effect). Also, the orientation of the adapting stimulus and the background stimuli influence the perceived orientation of the subsequently displayed stimulus: tilt after-effect (TAE) or tilt illusion (TI). The

    Updated: 2020-02-18
  • Center Manifold Analysis of Plateau Phenomena Caused by Degeneration of Three-Layer Perceptron.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Daiji Tsutsui

    A hierarchical neural network usually has many singular regions in the parameter space due to the degeneration of hidden units. Here, we focus on a three-layer perceptron, which has one-dimensional singular regions comprising both attractive and repulsive parts. Such a singular region is often called a Milnor-like attractor. It is empirically known that in the vicinity of a Milnor-like attractor, several

    Updated: 2020-02-18
  • Optimal Multivariate Tuning with Neuron-Level and Population-Level Energy Constraints.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Yuval Harel,Ron Meir

    Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting as neural tuning, which minimizes mean decoding error. Many works optimize Fisher information, which approximates the minimum mean square error (MMSE) of the optimal decoder for long encoding time but may

    Updated: 2020-02-18
  • Online Learning Based on Online DCA and Application to Online Classification.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Hoai An Le Thi,Vinh Thanh Ho

    We investigate an approach based on DC (difference of convex functions) programming and DCA (dc algorithm) for online learning techniques. The prediction problem of an online learner can be formulated as a DC program for which online DCA is applied. We propose the two so-called complete/approximate versions of online DCA scheme and prove their logarithmic/sublinear regrets. Six online DCA-based algorithms

    Updated: 2020-02-18
  • Feature Extraction of Surface Electromyography Based on Improved Small-World Leaky Echo State Network.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Xugang Xi,Wenjun Jiang,Seyed M Miran,Xian Hua,Yun-Bo Zhao,Chen Yang,Zhizeng Luo

    Surface electromyography (sEMG) is an electrophysiological reflection of skeletal muscle contractile activity that can directly reflect neuromuscular activity. It has been a matter of research to investigate feature extraction methods of sEMG signals. In this letter, we propose a feature extraction method of sEMG signals based on the improved small-world leaky echo state network (ISWLESN). The reservoir

    Updated: 2020-02-18
  • Evaluating the Potential Gain of Auditory and Audiovisual Speech-Predictive Coding Using Deep Learning.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Thomas Hueber,Eric Tatulli,Laurent Girin,Jean-Luc Schwartz

    Sensory processing is increasingly conceived in a predictive framework in which neurons would constantly process the error signal resulting from the comparison of expected and observed stimuli. Surprisingly, few data exist on the accuracy of predictions that can be computed in real sensory scenes. Here, we focus on the sensory processing of auditory and audiovisual speech. We propose a set of computational

    Updated: 2020-01-17
  • Hidden Aspects of the Research ADOS Are Bound to Affect Autism Science.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Elizabeth B Torres,Richa Rai,Sejal Mistry,Brenda Gupta

    The research-grade Autism Diagnostic Observational Schedule (ADOS) is a broadly used instrument that informs and steers much of the science of autism. Despite its broad use, little is known about the empirical variability inherently present in the scores of the ADOS scale or their appropriateness to define change and its rate, to repeatedly use this test to characterize neurodevelopmental trajectories

    Updated: 2020-01-17
  • Classification from Triplet Comparison Data.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Zhenghang Cui,Nontawat Charoenphakdee,Issei Sato,Masashi Sugiyama

    Learning from triplet comparison data has been extensively studied in the context of metric learning, where we want to learn a distance metric between two instances, and ordinal embedding, where we want to learn an embedding in a Euclidean space of the given instances that preserve the comparison order as much as possible. Unlike fully labeled data, triplet comparison data can be collected in a more

    Updated: 2020-01-17
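A small sketch of what triplet comparison data looks like and how an embedding is scored against it; this is generic ordinal-embedding bookkeeping, not the authors' classification method, and the toy data and helper names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 20 points in 2D; a triplet (i, j, k) records only that
# point i was judged closer to j than to k -- no labels needed.
X = rng.standard_normal((20, 2))

def triplet_satisfied(E, i, j, k):
    """Does embedding E preserve the comparison d(i, j) < d(i, k)?"""
    return np.sum((E[i] - E[j])**2) < np.sum((E[i] - E[k])**2)

def make_triplets(X, n):
    """Sample n triplets consistent with the ground-truth geometry."""
    triplets = []
    while len(triplets) < n:
        i, j, k = rng.choice(len(X), 3, replace=False)
        if triplet_satisfied(X, i, j, k):
            triplets.append((i, j, k))
    return triplets

triplets = make_triplets(X, 200)

# X satisfies every triplet by construction; a random embedding
# satisfies only a fraction of them, which is what learning improves.
E = rng.standard_normal(X.shape)
acc = np.mean([triplet_satisfied(E, *t) for t in triplets])
```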
  • Switching in Cerebellar Stellate Cell Excitability in Response to a Pair of Inhibitory/Excitatory Presynaptic Inputs: A Dynamical System Perspective.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Saeed Farjami,Ryan P D Alexander,Derek Bowie,Anmar Khadra

    Cerebellar stellate cells form inhibitory synapses with Purkinje cells, the sole output of the cerebellum. Upon stimulation by a pair of varying inhibitory and fixed excitatory presynaptic inputs, these cells do not respond to excitation (i.e., do not generate an action potential) when the magnitude of the inhibition is within a given range, but they do respond outside this range. We previously used

    Updated: 2020-01-17
  • Model-Free Robust Optimal Feedback Mechanisms of Biological Motor Control.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Tao Bian,Daniel M Wolpert,Zhong-Ping Jiang

    Sensorimotor tasks that humans perform are often affected by different sources of uncertainty. Nevertheless, the central nervous system (CNS) can gracefully coordinate our movements. Most learning frameworks rely on the internal model principle, which requires a precise internal representation in the CNS to predict the outcomes of our motor commands. However, learning a perfect internal model in a

    Updated: 2020-01-17
  • A Continuous-Time Analysis of Distributed Stochastic Gradient.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Nicholas M Boffi,Jean-Jacques E Slotine

    We analyze the effect of synchronization on distributed stochastic gradient algorithms. By exploiting an analogy with dynamical models of biological quorum sensing, where synchronization between agents is induced through communication with a common signal, we quantify how synchronization can significantly reduce the magnitude of the noise felt by the individual distributed agents and their spatial

    Updated: 2019-11-01
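The noise-reduction effect of synchronization can be illustrated with a toy version of coupled SGD agents on a quadratic objective; the objective, the mean-field coupling form, and all constants here are assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

def coupled_sgd(n_agents, k, steps=2000, lr=0.01, noise=1.0):
    """Minimize f(w) = 0.5 * ||w||^2 with noisy gradients.

    Each agent is pulled toward the population mean (a "quorum"-like
    common signal) with gain k, which averages out the independent
    gradient noise felt by individual agents.
    """
    w = rng.standard_normal((n_agents, 1))
    for _ in range(steps):
        grad = w + noise * rng.standard_normal(w.shape)  # noisy gradient
        mean = w.mean(axis=0, keepdims=True)
        w = w - lr * grad + lr * k * (mean - w)          # coupling term
    return w

w_uncoupled = coupled_sgd(50, k=0.0)
w_coupled = coupled_sgd(50, k=5.0)

# Coupled agents cluster far more tightly around the optimum
spread_unc = np.std(w_uncoupled)
spread_cpl = np.std(w_coupled)
```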
  • Iterative Retrieval and Block Coding in Autoassociative and Heteroassociative Memory.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Andreas Knoblauch,Günther Palm

    Neural associative memories (NAM) are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Gripon and Berrou (2011) investigated NAM employing block coding, a particular sparse coding method, and reported a significant increase in storage capacity. Here we verify and extend their results for both heteroassociative

    Updated: 2019-11-01
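The kind of single-layer binary associative memory discussed above can be sketched with clipped Hebbian ("Willshaw-style") storage and threshold retrieval; the pattern sizes and this plain heteroassociative setup are illustrative assumptions (the paper additionally studies block coding and iterative retrieval):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse binary patterns: n units, exactly k of them active
n, k, n_pairs = 100, 5, 10

def sparse_pattern():
    p = np.zeros(n, dtype=int)
    p[rng.choice(n, k, replace=False)] = 1
    return p

pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(n_pairs)]

# Clipped Hebbian storage: W[j, i] = 1 if any stored pair
# activates output j together with input i
W = np.zeros((n, n), dtype=int)
for u, v in pairs:
    W |= np.outer(v, u)

def retrieve(u):
    """Output units receiving input from all k active input units fire."""
    return (W @ u >= u.sum()).astype(int)

# Every stored output pattern is contained in the recalled pattern
# (at low memory load, with few or no spurious extra units).
recalled = retrieve(pairs[0][0])
```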
  • Toward Training Recurrent Neural Networks for Lifelong Learning.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Shagun Sodhani,Sarath Chandar,Yoshua Bengio

    Catastrophic forgetting and capacity saturation are the central challenges of any parametric lifelong learning system. In this work, we study these challenges in the context of sequential supervised learning with an emphasis on recurrent neural networks. To evaluate the models in the lifelong learning setting, we propose a curriculum-based, simple, and intuitive benchmark where the models are trained

    Updated: 2019-11-01
  • An FPGA Implementation of Deep Spiking Neural Networks for Low-Power and Fast Classification.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Xiping Ju,Biao Fang,Rui Yan,Xiaoliang Xu,Huajin Tang

    A spiking neural network (SNN) is a biologically plausible model that performs information processing based on spikes. Training a deep SNN effectively is challenging due to the nondifferentiability of spike signals. Recent advances have shown that high-performance SNNs can be obtained by converting convolutional neural networks (CNNs). However, large-scale SNNs are poorly served by conventional

    Updated: 2019-11-01
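The rate-coding argument behind CNN-to-SNN conversion can be seen in a single neuron: a non-leaky integrate-and-fire unit driven by a constant input fires at a rate approximating the ReLU of that input, which is what lets a trained ReLU CNN be mapped onto spiking units. This is a generic illustration, not the letter's FPGA design; threshold and step count are assumptions:

```python
def if_rate(a, threshold=1.0, steps=1000, dt=1.0):
    """Firing rate of a non-leaky integrate-and-fire neuron
    driven by constant input current a."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += a * dt
        if v >= threshold:
            v -= threshold   # reset by subtraction keeps the rate linear
            spikes += 1
    return spikes / steps

# rate approximates relu(a): 0 for negative input, ~a for positive input
rates = {a: if_rate(a) for a in [-0.5, 0.0, 0.2, 0.7]}
```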
  • Optimal Sampling of Parametric Families: Implications for Machine Learning.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Adrian E G Huber,Jithendar Anumula,Shih-Chii Liu

    It is well known in machine learning that models trained on a training set generated by one probability distribution function perform far worse on test sets generated by a different probability distribution function. In the limit, it is possible that a continuum of probability distribution functions might have generated the observed test set data; a desirable property of a learned model in that case

    Updated: 2019-11-01
  • On Kernel Method-Based Connectionist Models and Supervised Deep Learning Without Backpropagation.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Shiyu Duan,Shujian Yu,Yunmei Chen,Jose C Principe

    We propose a novel family of connectionist models based on kernel machines and consider the problem of learning layer by layer a compositional hypothesis class (i.e., a feedforward, multilayer architecture) in a supervised setting. In terms of the models, we present a principled method to "kernelize" (partly or completely) any neural network (NN). With this method, we obtain a counterpart of any given

    Updated: 2019-11-01
Contents have been reproduced by permission of the publishers.