Current journal: Neural Computation
  • Analysis of Regression Algorithms with Unbounded Sampling.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Hongzhi Tong,Jiajing Gao

    In this letter, we study a class of regularized regression algorithms for the case when the sampling process is unbounded. By choosing different loss functions, the learning algorithms can include a wide range of commonly used algorithms for regression. Unlike prior work on the theoretical analysis of unbounded sampling, no constraint on the output variables is specified in our setting. By an elegant error

    Updated: 2020-08-20
  • Fast and Accurate Langevin Simulations of Stochastic Hodgkin-Huxley Dynamics.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Shusen Pu,Peter J Thomas

    Fox and Lu introduced a Langevin framework for discrete-time stochastic models of randomly gated ion channels such as the Hodgkin-Huxley (HH) system. They derived a Fokker-Planck equation with state-dependent diffusion tensor D and suggested a Langevin formulation with noise coefficient matrix S such that SSᵀ = D. Subsequently, several authors introduced a variety of Langevin equations for the HH system

    Updated: 2020-08-20
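
The noise-matrix condition quoted in this abstract (SSᵀ = D) can be made concrete with a generic Euler-Maruyama integrator. The sketch below is an illustration of that condition only, not the authors' HH-specific scheme; the drift f and diffusion D are toy placeholders.

```python
# Hypothetical sketch: Euler-Maruyama integration of a Langevin SDE
# dx = f(x) dt + S(x) dW, with S chosen so that S @ S.T = D(x),
# mirroring the noise-matrix condition named in the abstract.
import numpy as np
from scipy.linalg import sqrtm

def euler_maruyama(f, D, x0, dt, n_steps, rng=None):
    """Integrate dx = f(x) dt + S(x) dW with S = sqrtm(D(x))."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        S = np.real(sqrtm(D(x)))           # one valid S with S @ S.T = D
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + f(x) * dt + S @ dW
        traj.append(x.copy())
    return np.array(traj)

# Toy usage: a 2D Ornstein-Uhlenbeck-like process with state-dependent noise.
f = lambda x: -x
D = lambda x: np.diag(0.1 + x**2)          # positive-definite diffusion tensor
path = euler_maruyama(f, D, x0=[1.0, -1.0], dt=1e-3, n_steps=1000)
```
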
  • A Predictive-Coding Network That Is Both Discriminative and Generative.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Wei Sun,Jeff Orchard

    Predictive coding (PC) networks are a biologically interesting class of neural networks. Their layered hierarchy mimics the reciprocal connectivity pattern observed in the mammalian cortex, and they can be trained using local learning rules that approximate backpropagation (Bogacz, 2017). However, despite having feedback connections that enable information to flow down the network hierarchy, discriminative

    Updated: 2020-08-20
  • Active Learning of Bayesian Linear Models with High-Dimensional Binary Features by Parameter Confidence-Region Estimation.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Yu Inatsu,Masayuki Karasuyama,Keiichi Inoue,Hideki Kandori,Ichiro Takeuchi

    In this letter, we study an active learning problem for maximizing an unknown linear function with high-dimensional binary features. This problem is notoriously complex but arises in many important contexts. When the sampling budget, that is, the number of possible function evaluations, is smaller than the number of dimensions, it tends to be impossible to identify all of the optimal binary features

    Updated: 2020-08-20
  • Multiview Alignment and Generation in CCA via Consistent Latent Encoding.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Yaxin Shi,Yuangang Pan,Donna Xu,Ivor W Tsang

    Multiview alignment, achieving one-to-one correspondence of multiview inputs, is critical in many real-world multiview applications, especially for cross-view data analysis problems. An increasing amount of work has studied this alignment problem with canonical correlation analysis (CCA). However, existing CCA models are prone to misalign the multiple views due to either the neglect of uncertainty

    Updated: 2020-08-20
  • Modal Principal Component Analysis.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Keishi Sando,Hideitsu Hino

    Principal component analysis (PCA) is a widely used method for data processing, such as for dimension reduction and visualization. Standard PCA is known to be sensitive to outliers, and various robust PCA methods have been proposed. It has been shown that the robustness of many statistical methods can be improved using mode estimation instead of mean estimation, because mode estimation is not significantly

    Updated: 2020-08-20
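
As background for why mode estimation confers robustness (the property modal PCA builds on), here is a minimal one-dimensional comparison of a mean estimate against a kernel-density mode estimate on contaminated data. Illustrative only; this is not the letter's algorithm.

```python
# Why the mode is more robust to outliers than the mean: a 1D toy example.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=200)
data = np.concatenate([clean, rng.normal(loc=15.0, scale=0.5, size=20)])  # outliers

mean_est = data.mean()                      # dragged toward the outliers
kde = gaussian_kde(data)
grid = np.linspace(data.min(), data.max(), 2000)
mode_est = grid[np.argmax(kde(grid))]       # stays near the bulk around 0

print(f"mean: {mean_est:.2f}, KDE mode: {mode_est:.2f}")
```
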
  • Active Learning for Enumerating Local Minima Based on Gaussian Process Derivatives.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Yu Inatsu,Daisuke Sugita,Kazuaki Toyoura,Ichiro Takeuchi

    We study active learning (AL) based on gaussian processes (GPs) for efficiently enumerating all of the local minimum solutions of a black-box function. This problem is challenging because local solutions are characterized by their zero gradient and positive-definite Hessian properties, but those derivatives cannot be directly observed. We propose a new AL method in which the input points are sequentially

    Updated: 2020-08-20
  • Binless Kernel Machine: Modeling Spike Train Transformation for Cognitive Neural Prostheses.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-14
    Cunle Qian,Xuyun Sun,Yueming Wang,Xiaoxiang Zheng,Yiwen Wang,Gang Pan

    Modeling spike train transformation among brain regions helps in designing a cognitive neural prosthesis that restores lost cognitive functions. Various methods analyze the nonlinear dynamic spike train transformation between two cortical areas with low computational efficiency. The application of a real-time neural prosthesis requires computational efficiency, performance stability, and better interpretation

    Updated: 2020-08-20
  • Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Yuko Kuroki,Liyuan Xu,Atsushi Miyauchi,Junya Honda,Masashi Sugiyama

    We study the problem of stochastic multiple-arm identification, where an agent sequentially explores a size-k subset of arms (also known as a super arm) from given n arms and tries to identify the best super arm. Most work so far has considered the semi-bandit setting, where the agent can observe the reward of each pulled arm, or has assumed that each arm can be queried at each round. However, in real-world

    Updated: 2020-08-20
  • Tensor Least Angle Regression for Sparse Representations of Multidimensional Signals.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Ishan Wickramasingha,Ahmed Elrewainy,Michael Sobhy,Sherif S Sherif

    Sparse signal representations have gained much interest recently in both the signal processing and statistical communities. Compared to orthogonal matching pursuit (OMP) and basis pursuit, which solve the ℓ0- and ℓ1-constrained sparse least-squares problems, respectively, least angle regression (LARS) is a computationally efficient method to solve both problems for all critical values of the regularization

    Updated: 2020-08-20
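
For intuition about what LARS computes, the sketch below runs the standard (matrix, non-tensor) LARS path with scikit-learn on a synthetic sparse regression problem. The tensor extension proposed in the paper is not reproduced here.

```python
# Background sketch: the ordinary LARS/lasso path, which yields solutions at
# all critical values of the regularization parameter in one run.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true_coef = np.zeros(20)
true_coef[:3] = [2.0, -1.5, 1.0]            # sparse ground truth
y = X @ true_coef + 0.05 * rng.normal(size=100)

# alphas: critical regularization values; coefs: coefficients at each
# breakpoint of the piecewise-linear solution path.
alphas, active, coefs = lars_path(X, y, method="lasso")
print("active set order:", active[:3])      # the true features enter first
```
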
  • Hyperbolic-Valued Hopfield Neural Networks in Synchronous Mode.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Masaki Kobayashi

    For most multistate Hopfield neural networks, the stability conditions in asynchronous mode are known, whereas those in synchronous mode are not. If they were to converge in synchronous mode, recall would be accelerated by parallel processing. Complex-valued Hopfield neural networks (CHNNs) with a projection rule do not converge in synchronous mode. In this work, we provide stability conditions for

    Updated: 2020-08-20
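
The synchronous-versus-asynchronous distinction can be seen already in the classical real-valued Hopfield network. The toy recall below contrasts the two update modes; this is the ordinary bipolar model, not the hyperbolic-valued one studied in the paper.

```python
# Toy bipolar Hopfield recall, contrasting synchronous (all neurons updated
# at once) and asynchronous (one at a time) modes. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 50
patterns = rng.choice([-1, 1], size=(3, n))       # three stored patterns
W = (patterns.T @ patterns) / n                   # Hebbian weights
np.fill_diagonal(W, 0.0)                          # no self-connections

sign = lambda h: np.where(h >= 0, 1, -1)

def recall(x, synchronous, n_sweeps=20):
    x = x.copy()
    for _ in range(n_sweeps):
        if synchronous:
            x = sign(W @ x)
        else:
            for i in rng.permutation(n):
                x[i] = sign(W[i] @ x)
    return x

noisy = patterns[0] * np.where(rng.random(n) < 0.1, -1, 1)   # flip ~10% of bits
print("async overlap:", recall(noisy, False) @ patterns[0] / n)
print("sync overlap: ", recall(noisy, True) @ patterns[0] / n)
```
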
  • Fine-Grained 3D-Attention Prototypes for Few-Shot Learning.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Xin Hu,Jun Liu,Jie Ma,Yudai Pan,Lingling Zhang

    In the real world, a limited number of labeled fine-grained images per class can hardly represent the class distribution effectively. Fine-grained images exhibit more subtle visual differences than simple images with obvious objects; that is, there exist smaller interclass and larger intraclass variations. To solve these issues, we propose an end-to-end attention-based model for fine-grained

    Updated: 2020-08-20
  • Parallel Neural Multiprocessing with Gamma Frequency Latencies.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Ruohan Zhang,Dana H Ballard

    The Poisson variability in cortical neural responses has been typically modeled using spike averaging techniques, such as trial averaging and rate coding, since such methods can produce reliable correlates of behavior. However, mechanisms that rely on counting spikes could be slow and inefficient and thus might not be useful in the brain for computations at timescales in the 10 millisecond range. This

    Updated: 2020-08-20
  • A Mean-Field Description of Bursting Dynamics in Spiking Neural Networks with Short-Term Adaptation.
    Neural Comput. (IF 2.505) Pub Date : 2020-08-11
    Richard Gast,Helmut Schmidt,Thomas R Knösche

    Bursting plays an important role in neural communication. At the population level, macroscopic bursting has been identified in populations of neurons that do not express intrinsic bursting mechanisms. For the analysis of phase transitions between bursting and non-bursting states, mean-field descriptions of macroscopic bursting behavior are a valuable tool. In this article, we derive mean-field descriptions

    Updated: 2020-08-20
  • Theory and Algorithms for Shapelet-Based Multiple-Instance Learning.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Daiki Suehiro,Kohei Hatano,Eiji Takimoto,Shuji Yamamoto,Kenichi Bannai,Akiko Takeda

    We propose a new formulation of multiple-instance learning (MIL), in which a unit of data consists of a set of instances called a bag. The goal is to find a good classifier of bags based on the similarity with a "shapelet" (or pattern), where the similarity of a bag with a shapelet is the maximum similarity of instances in the bag. In previous work, some of the training instances have been chosen as

    Updated: 2020-06-10
  • On a Scalable Entropic Breaching of the Overfitting Barrier for Small Data Problems in Machine Learning.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Illia Horenko

    Overfitting and treatment of small data are among the most challenging problems in machine learning (ML), when a relatively small data statistics size T is not enough to provide a robust ML fit for a relatively large data feature dimension D. Deploying a massively parallel ML analysis of generic classification problems for different D and T, we demonstrate the existence of statistically significant

    Updated: 2020-06-10
  • Any Target Function Exists in a Neighborhood of Any Sufficiently Wide Random Network: A Geometrical Perspective.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Shun-Ichi Amari

    It is known that any target function is realized in a sufficiently small neighborhood of any randomly connected deep network, provided the width (the number of neurons in a layer) is sufficiently large. There are sophisticated analytical theories and discussions concerning this striking fact, but rigorous theories are very complicated. We give an elementary geometrical proof by using a simple model

    Updated: 2020-06-10
  • A Discrete-Time Neurodynamic Approach to Sparsity-Constrained Nonnegative Matrix Factorization.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Xinqi Li,Jun Wang,Sam Kwong

    Sparsity is a desirable property in many nonnegative matrix factorization (NMF) applications. Although some level of sparseness of NMF solutions can be achieved by using regularization, the resulting sparsity depends strongly on a regularization parameter whose value must be chosen in an ad hoc way. In this letter we formulate sparse NMF as a mixed-integer optimization problem with sparsity as binary constraints

    Updated: 2020-06-10
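
For contrast with the mixed-integer formulation, here is the ad hoc route the letter argues against: ℓ1-regularized NMF with standard multiplicative updates, where sparsity is only indirectly controlled by the penalty weight. A rough numpy sketch; sizes and the penalty value are arbitrary choices.

```python
# L1-regularized NMF via multiplicative updates; the L1 term enters the
# denominator of the update for H, only softly encouraging sparsity.
import numpy as np

def sparse_nmf(V, rank, l1=0.5, n_iter=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + l1 + eps)   # L1 penalty on H
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(40, 60)))
W, H = sparse_nmf(V, rank=5)
print("fraction of near-zero entries in H:", np.mean(H < 1e-3))
```
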
  • Stochastic Multichannel Ranking with Brain Dynamics Preferences.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Yuangang Pan,Ivor W Tsang,Avinash K Singh,Chin-Teng Lin,Masashi Sugiyama

    A driver's cognitive state of mental fatigue significantly affects his or her driving performance and, more important, public safety. Previous studies have leveraged reaction time (RT) as the metric for mental fatigue and aimed at estimating the exact value of RT using electroencephalogram (EEG) signals within a regression model. However, due to the easily corrupted and also nonsmooth properties of RTs

    Updated: 2020-06-10
  • Inference of a Mesoscopic Population Model from Population Spike Trains.
    Neural Comput. (IF 2.505) Pub Date : 2020-06-10
    Alexandre René,André Longtin,Jakob H Macke

    Understanding how rich dynamics emerge in neural populations requires models exhibiting a wide range of behaviors while remaining interpretable in terms of connectivity and single-neuron dynamics. However, it has been challenging to fit such mechanistic spiking networks at the single-neuron scale to empirical population data. To close this gap, we propose to fit such data at a mesoscale, using a mechanistic

    Updated: 2020-06-10
  • Shapley Homology: Topological Analysis of Sample Influence for Neural Networks.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Kaixuan Zhang,Qinglong Wang,Xue Liu,C Lee Giles

    Data samples collected for training machine learning models are typically assumed to be independent and identically distributed (i.i.d.). Recent research has demonstrated that this assumption can be problematic as it simplifies the manifold of structured data. This has motivated different research areas such as data poisoning, model improvement, and explanation of machine learning models. In this work

    Updated: 2020-05-20
  • Generation of Scale-Invariant Sequential Activity in Linear Recurrent Networks.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Yue Liu,Marc W Howard

    Sequential neural activity has been observed in many parts of the brain and has been proposed as a neural mechanism for memory. The natural world expresses temporal relationships at a wide range of scales. Because we cannot know the relevant scales a priori, it is desirable that memory, and thus the generated sequences, is scale invariant. Although recurrent neural network models have been proposed

    Updated: 2020-05-20
  • Heterogeneous Synaptic Weighting Improves Neural Coding in the Presence of Common Noise.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Pratik S Sachdeva,Jesse A Livezey,Michael R DeWeese

    Simultaneous recordings from the cortex have revealed that neural activity is highly variable and that some variability is shared across neurons in a population. Further experimental work has demonstrated that the shared component of a neuronal population's variability is typically comparable to or larger than its private component. Meanwhile, an abundance of theoretical work has assessed the impact

    Updated: 2020-05-20
  • A Mathematical Analysis of Memory Lifetime in a Simple Network Model of Memory.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Pascal Helson

    We study the learning of an external signal by a neural network and the time to forget it when this network is subjected to noise. The presentation of an external stimulus to the recurrent network of binary neurons may change the state of the synapses. Multiple presentations of a unique signal lead to its learning. Then, during the forgetting time, the presentation of other signals (noise) may also

    Updated: 2020-05-20
  • A Model for the Study of the Increase in Stimulus and Change Point Detection with Small and Variable Spiking Delays.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Benjamin Straub,Gaby Schneider

    Precise timing of spikes between different neurons has been found to convey reliable information beyond the spike count. In contrast, the role of small and variable spiking delays, as reported, for example, in the visual cortex, remains largely unclear. This issue becomes particularly important considering the high speed of neuronal information processing, which is assumed to be based on only a few

    Updated: 2020-05-20
  • Minimal Spiking Neuron for Solving Multilabel Classification Tasks.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-20
    Jakub Fil,Dominique Chu

    The multispike tempotron (MST) is a powerful single spiking neuron model that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce

    Updated: 2020-05-20
  • Salient Slices: Improved Neural Network Training and Performance with Image Entropy.
    Neural Comput. (IF 2.505) Pub Date : 2020-05-05
    Steven J Frank,Andrea M Frank

    As a training and analysis strategy for convolutional neural networks (CNNs), we slice images into tiled segments and use, for training and prediction, segments that both satisfy an information criterion and contain sufficient content to support classification. In particular, we use image entropy as the information criterion. This ensures that each tile carries as much information diversity as the

    Updated: 2020-05-05
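
The information criterion named here, image entropy, is straightforward to compute per tile. Below is a sketch of the gray-level-histogram entropy; the tiling and threshold logic of the paper are not reproduced, and the 8-bit grayscale assumption is mine.

```python
# Shannon entropy of an image tile's gray-level histogram: low-entropy tiles
# (little information diversity) would be dropped from training/prediction.
import numpy as np

def tile_entropy(tile, n_bins=256):
    """Shannon entropy (bits) of an 8-bit grayscale tile."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128, dtype=np.uint8)        # uniform tile
busy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(tile_entropy(flat), tile_entropy(busy))        # ~0.0 vs ~8.0 bits
```
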
  • Independently Interpretable Lasso for Generalized Linear Models.
    Neural Comput. (IF 2.505) Pub Date : 2020-04-28
    Masaaki Takada,Taiji Suzuki,Hironori Fujisawa

    Sparse regularization such as ℓ1 regularization is a quite powerful and widely used strategy for high-dimensional learning problems. The effectiveness of sparse regularization has been supported practically and theoretically by several studies. However, one of the biggest issues in sparse regularization is that its performance is quite sensitive to correlations between features. Ordinary ℓ1 regularization

    Updated: 2020-04-28
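
As background, the ordinary ℓ1 (lasso) estimator that the letter builds on can be written as coordinate descent with soft-thresholding. A minimal numpy sketch follows; this is the baseline whose sensitivity to correlated features motivates the proposed method, not the independently interpretable variant itself.

```python
# Ordinary lasso via coordinate descent with soft-thresholding, for the
# objective (1/(2n))||y - X beta||^2 + lam * ||beta||_1.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ beta + X[:, j] * beta[j]     # partial residual
            beta[j] = soft_threshold(X[:, j] @ r, lam * n) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, 0] * 3.0 + 0.1 * rng.normal(size=100)
print(np.round(lasso_cd(X, y, lam=0.1), 2))          # nonzero only at beta_0
```
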
  • First Passage Time Memory Lifetimes for Multistate, Filter-Based Synapses.
    Neural Comput. (IF 2.505) Pub Date : 2020-04-28
    Terry Elliott

    Models of associative memory with discrete state synapses learn new memories by forgetting old ones. In contrast to non-integrative models of synaptic plasticity, models with integrative, filter-based synapses exhibit an initial rise in the fidelity of recall of stored memories. This rise to a peak is driven by a transient process and is then followed by a return to equilibrium. In a series of papers

    Updated: 2020-04-28
  • Efficient Position Decoding Methods Based on Fluorescence Calcium Imaging in the Mouse Hippocampus.
    Neural Comput. (IF 2.505) Pub Date : 2020-04-28
    Mengyu Tu,Ruohe Zhao,Avital Adler,Wen-Biao Gan,Zhe S Chen

    Large-scale fluorescence calcium imaging methods have become widely adopted for studies of long-term hippocampal and cortical neuronal dynamics. Pyramidal neurons of the rodent hippocampus show spatial tuning in freely foraging or head-fixed navigation tasks. Development of efficient neural decoding methods for reconstructing the animal's position in real or virtual environments can provide a fast

    Updated: 2020-04-28
  • Nonequilibrium Statistical Mechanics of Continuous Attractors.
    Neural Comput. (IF 2.505) Pub Date : 2020-04-28
    Weishun Zhong,Zhiyue Lu,David J Schwab,Arvind Murugan

    Continuous attractors have been used to understand recent neuroscience experiments where persistent activity patterns encode internal representations of external attributes like head direction or spatial location. However, the conditions under which the emergent bump of neural activity in such networks can be manipulated by space- and time-dependent external sensory or motor signals are not understood

    Updated: 2020-04-28
  • Neural Model of Coding Stimulus Orientation and Adaptation.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Henrikas Vaitkevičius,Algimantas Švežda,Rytis Stanikūnas,Remigijus Bliumas,Alvydas Šoliūnas,Janus J Kulikowski

    The coding of line orientation in the visual system has been investigated extensively. During the prolonged viewing of a stimulus, the perceived orientation continuously changes (normalization effect). Also, the orientation of the adapting stimulus and the background stimuli influence the perceived orientation of the subsequently displayed stimulus: tilt after-effect (TAE) or tilt illusion (TI). The

    Updated: 2020-02-18
  • Center Manifold Analysis of Plateau Phenomena Caused by Degeneration of Three-Layer Perceptron.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Daiji Tsutsui

    A hierarchical neural network usually has many singular regions in the parameter space due to the degeneration of hidden units. Here, we focus on a three-layer perceptron, which has one-dimensional singular regions comprising both attractive and repulsive parts. Such a singular region is often called a Milnor-like attractor. It is empirically known that in the vicinity of a Milnor-like attractor, several

    Updated: 2020-02-18
  • Optimal Multivariate Tuning with Neuron-Level and Population-Level Energy Constraints.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Yuval Harel,Ron Meir

    Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting as neural tuning, which minimizes mean decoding error. Many works optimize Fisher information, which approximates the minimum mean square error (MMSE) of the optimal decoder for long encoding time but may

    Updated: 2020-02-18
  • Online Learning Based on Online DCA and Application to Online Classification.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Hoai An Le Thi,Vinh Thanh Ho

    We investigate an approach based on DC (difference of convex functions) programming and DCA (DC algorithm) for online learning techniques. The prediction problem of an online learner can be formulated as a DC program to which online DCA is applied. We propose two versions of the online DCA scheme, complete and approximate, and prove their logarithmic and sublinear regrets, respectively. Six online DCA-based algorithms

    Updated: 2020-02-18
  • Feature Extraction of Surface Electromyography Based on Improved Small-World Leaky Echo State Network.
    Neural Comput. (IF 2.505) Pub Date : 2020-02-18
    Xugang Xi,Wenjun Jiang,Seyed M Miran,Xian Hua,Yun-Bo Zhao,Chen Yang,Zhizeng Luo

    Surface electromyography (sEMG) is an electrophysiological reflection of skeletal muscle contractile activity that can directly reflect neuromuscular activity. It has been a matter of research to investigate feature extraction methods of sEMG signals. In this letter, we propose a feature extraction method of sEMG signals based on the improved small-world leaky echo state network (ISWLESN). The reservoir

    Updated: 2020-02-18
  • Evaluating the Potential Gain of Auditory and Audiovisual Speech-Predictive Coding Using Deep Learning.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Thomas Hueber,Eric Tatulli,Laurent Girin,Jean-Luc Schwartz

    Sensory processing is increasingly conceived in a predictive framework in which neurons would constantly process the error signal resulting from the comparison of expected and observed stimuli. Surprisingly, few data exist on the accuracy of predictions that can be computed in real sensory scenes. Here, we focus on the sensory processing of auditory and audiovisual speech. We propose a set of computational

    Updated: 2020-01-17
  • Hidden Aspects of the Research ADOS Are Bound to Affect Autism Science.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Elizabeth B Torres,Richa Rai,Sejal Mistry,Brenda Gupta

    The research-grade Autism Diagnostic Observational Schedule (ADOS) is a broadly used instrument that informs and steers much of the science of autism. Despite its broad use, little is known about the empirical variability inherently present in the scores of the ADOS scale or their appropriateness to define change and its rate, to repeatedly use this test to characterize neurodevelopmental trajectories

    Updated: 2020-01-17
  • Classification from Triplet Comparison Data.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Zhenghang Cui,Nontawat Charoenphakdee,Issei Sato,Masashi Sugiyama

    Learning from triplet comparison data has been extensively studied in the context of metric learning, where we want to learn a distance metric between two instances, and ordinal embedding, where we want to learn an embedding of the given instances in a Euclidean space that preserves the comparison order as much as possible. Unlike fully labeled data, triplet comparison data can be collected in a more

    Updated: 2020-01-17
  • Switching in Cerebellar Stellate Cell Excitability in Response to a Pair of Inhibitory/Excitatory Presynaptic Inputs: A Dynamical System Perspective.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Saeed Farjami,Ryan P D Alexander,Derek Bowie,Anmar Khadra

    Cerebellar stellate cells form inhibitory synapses with Purkinje cells, the sole output of the cerebellum. Upon stimulation by a pair of varying inhibitory and fixed excitatory presynaptic inputs, these cells do not respond to excitation (i.e., do not generate an action potential) when the magnitude of the inhibition is within a given range, but they do respond outside this range. We previously used

    Updated: 2020-01-17
  • Model-Free Robust Optimal Feedback Mechanisms of Biological Motor Control.
    Neural Comput. (IF 2.505) Pub Date : 2020-01-17
    Tao Bian,Daniel M Wolpert,Zhong-Ping Jiang

    Sensorimotor tasks that humans perform are often affected by different sources of uncertainty. Nevertheless, the central nervous system (CNS) can gracefully coordinate our movements. Most learning frameworks rely on the internal model principle, which requires a precise internal representation in the CNS to predict the outcomes of our motor commands. However, learning a perfect internal model in a

    Updated: 2020-01-17
  • A Continuous-Time Analysis of Distributed Stochastic Gradient.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Nicholas M Boffi,Jean-Jacques E Slotine

    We analyze the effect of synchronization on distributed stochastic gradient algorithms. By exploiting an analogy with dynamical models of biological quorum sensing, where synchronization between agents is induced through communication with a common signal, we quantify how synchronization can significantly reduce the magnitude of the noise felt by the individual distributed agents and their spatial

    Updated: 2019-11-01
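
A back-of-the-envelope check of the noise-reduction intuition: averaging (the limit of perfect synchronization) across N agents shrinks per-agent gradient noise roughly like 1/√N. This toy simulation stands in for, and is much cruder than, the paper's continuous-time coupled dynamics.

```python
# Averaging across N = 16 agents reduces gradient-noise standard deviation
# by about 1/sqrt(16) = 0.25 relative to a single agent.
import numpy as np

rng = np.random.default_rng(0)
g_true, n_agents, n_samples = 1.0, 16, 100_000
noisy = g_true + rng.normal(scale=1.0, size=(n_samples, n_agents))
print("single agent std:", noisy[:, 0].std())        # ~1.0
print("averaged std:   ", noisy.mean(axis=1).std())  # ~0.25
```
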
  • Iterative Retrieval and Block Coding in Autoassociative and Heteroassociative Memory.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Andreas Knoblauch,Günther Palm

    Neural associative memories (NAM) are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Gripon and Berrou (2011) investigated NAM employing block coding, a particular sparse coding method, and reported a significant increase in storage capacity. Here we verify and extend their results for both heteroassociative

    Updated: 2019-11-01
  • Toward Training Recurrent Neural Networks for Lifelong Learning.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Shagun Sodhani,Sarath Chandar,Yoshua Bengio

    Catastrophic forgetting and capacity saturation are the central challenges of any parametric lifelong learning system. In this work, we study these challenges in the context of sequential supervised learning with an emphasis on recurrent neural networks. To evaluate the models in the lifelong learning setting, we propose a curriculum-based, simple, and intuitive benchmark where the models are trained

    Updated: 2019-11-01
  • An FPGA Implementation of Deep Spiking Neural Networks for Low-Power and Fast Classification.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Xiping Ju,Biao Fang,Rui Yan,Xiaoliang Xu,Huajin Tang

    A spiking neural network (SNN) is a type of biologically plausible model that performs information processing based on spikes. Training a deep SNN effectively is challenging due to the nondifferentiability of spike signals. Recent advances have shown that high-performance SNNs can be obtained by converting convolutional neural networks (CNNs). However, large-scale SNNs are poorly served by conventional

    Updated: 2019-11-01
  • Optimal Sampling of Parametric Families: Implications for Machine Learning.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Adrian E G Huber,Jithendar Anumula,Shih-Chii Liu

    It is well known in machine learning that models trained on a training set generated by a probability distribution function perform far worse on test sets generated by a different probability distribution function. In the limit, it is feasible that a continuum of probability distribution functions might have generated the observed test set data; a desirable property of a learned model in that case

    Updated: 2019-11-01
  • On Kernel Method-Based Connectionist Models and Supervised Deep Learning Without Backpropagation.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Shiyu Duan,Shujian Yu,Yunmei Chen,Jose C Principe

    We propose a novel family of connectionist models based on kernel machines and consider the problem of learning layer by layer a compositional hypothesis class (i.e., a feedforward, multilayer architecture) in a supervised setting. In terms of the models, we present a principled method to "kernelize" (partly or completely) any neural network (NN). With this method, we obtain a counterpart of any given

    Updated: 2019-11-01
  • A Robust Model of Gated Working Memory.
    Neural Comput. (IF 2.505) Pub Date : 2019-11-08
    Anthony Strock,Xavier Hinaut,Nicolas P Rougier

    Gated working memory is defined as the capacity of holding arbitrary information at any time in order to be used at a later time. Based on electrophysiological recordings, several computational models have tackled the problem using dedicated and explicit mechanisms. We propose instead to consider an implicit mechanism based on a random recurrent neural network. We introduce a robust yet simple reservoir

    Updated: 2019-11-01
  • On the Effect of the Activation Function on the Distribution of Hidden Nodes in a Deep Network.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-16
    Philip M Long,Hanie Sedghi

    We analyze the joint probability distribution on the lengths of the vectors of hidden variables in different layers of a fully connected deep network, when the weights and biases are chosen randomly according to gaussian distributions. We show that if the activation function φ satisfies a minimal set of assumptions, satisfied by all activation functions that we know that are used in practice, then

    Updated: 2019-11-01
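
The object of study, the distribution of hidden-vector lengths under random gaussian weights, is easy to simulate. The sketch below tracks normalized lengths through a toy fully connected net; the 1/√width variance scaling is my assumption for the toy, not necessarily the paper's exact setup.

```python
# Track the normalized length of the hidden vector across layers of a
# random gaussian network, for a given activation function phi.
import numpy as np

def layer_lengths(phi, depth=10, width=1000, seed=0):
    rng = np.random.default_rng(seed)
    h = rng.normal(size=width)
    lengths = [np.linalg.norm(h) / np.sqrt(width)]
    for _ in range(depth):
        W = rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
        h = phi(W @ h)
        lengths.append(np.linalg.norm(h) / np.sqrt(width))
    return lengths

relu = lambda x: np.maximum(x, 0.0)
# Under this scaling, ReLU halves the squared length at each layer.
print([round(l, 3) for l in layer_lengths(relu)])
```
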
  • Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-16
    Kenji Kawaguchi,Jiaoyang Huang,Leslie Pack Kaelbling

    For nonconvex optimization in machine learning, this article proves that every local minimum achieves the globally optimal value of the perturbable gradient basis model at any differentiable point. As a result, nonconvex machine learning is theoretically as supported as convex machine learning with a handcrafted basis in terms of the loss at differentiable local minima, except in the case when a preference

    Updated: 2019-11-01
  • Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-16
    Lili Su,Chia-Jung Chang,Nancy Lynch

    Winner-take-all (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based k-WTA model wherein

    Updated: 2019-11-01
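
The idealized (noise-free, rate-based) version of the k-WTA operation is just top-k selection over input activity. The sketch below shows that baseline; the paper analyzes its spike-based, stochastic counterpart, where rates must be inferred from random spike trains.

```python
# Idealized k-WTA: pick the k units with the largest input activity.
# A spike-based circuit must approximate this from noisy spike trains;
# here the activity levels are observed directly.
import numpy as np

rates = np.array([5.0, 20.0, 7.0, 18.0, 3.0])   # toy input activity levels
k = 2
winners = np.argsort(rates)[-k:]                # indices of the k largest
print(sorted(winners.tolist()))                 # [1, 3]
```
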
  • The Effect of Signaling Latencies and Node Refractory States on the Dynamics of Networks.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-16
    Gabriel A Silva

    We describe the construction and theoretical analysis of a framework derived from canonical neurophysiological principles that model the competing dynamics of incident signals into nodes along directed edges in a network. The framework describes the dynamics between the offset in the latencies of propagating signals, which reflect the geometry of the edges and conduction velocities, and the internal

    Updated: 2019-11-01
  • Safe Triplet Screening for Distance Metric Learning.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-16
    Tomoki Yoshida,Ichiro Takeuchi,Masayuki Karasuyama

    Distance metric learning has been widely used to obtain the optimal distance function based on the given training data. We focus on a triplet-based loss function, which imposes a penalty such that a pair of instances in the same class is closer than a pair in different classes. However, the number of possible triplets can be quite large even for a small data set, and this considerably increases the

    Updated: 2019-11-01
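
The triplet penalty described in this abstract is, in its generic form, a hinge on Mahalanobis distances: the anchor should be closer to its same-class instance than to the different-class one, by a margin. A generic sketch follows; it is not the paper's screening procedure.

```python
# Generic triplet hinge penalty for metric learning under a metric matrix M.
import numpy as np

def triplet_hinge(M, anchor, pos, neg, margin=1.0):
    d = lambda a, b: (a - b) @ M @ (a - b)      # squared Mahalanobis distance
    return max(0.0, margin + d(anchor, pos) - d(anchor, neg))

M = np.eye(2)                                   # Euclidean metric as a start
a, p, n = np.array([0., 0.]), np.array([0.5, 0.]), np.array([2., 0.])
print(triplet_hinge(M, a, p, n))                # 0.0: constraint satisfied
```
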
  • Can Grid Cell Ensembles Represent Multiple Spaces?
    Neural Comput. (IF 2.505) Pub Date : 2019-10-15
    Davide Spalla,Alexis Dubreuil,Sophie Rosay,Remi Monasson,Alessandro Treves

    The way grid cells represent space in the rodent brain has been a striking discovery, with theoretical implications still unclear. Unlike hippocampal place cells, which are known to encode multiple, environment-dependent spatial maps, grid cells have been widely believed to encode space through a single low-dimensional manifold, in which coactivity relations between different neurons are preserved

    Updated: 2019-11-01
  • Replicating Neuroscience Observations on ML/MF and AM Face Patches by Deep Generative Model.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-15
    Tian Han,Xianglei Xing,Jiawen Wu,Ying Nian Wu

    A recent Cell paper (Chang & Tsao, 2017) reports an interesting discovery. For the face stimuli generated by a pretrained active appearance model (AAM), the responses of neurons in the areas of the primate brain that are responsible for face recognition exhibit a strong linear relationship with the shape variables and appearance variables of the AAM that generates the face stimuli. In this letter,

    Updated: 2019-11-01
  • Bayesian Filtering with Multiple Internal Models: Toward a Theory of Social Intelligence.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-15
    Takuya Isomura,Thomas Parr,Karl Friston

    To exhibit social intelligence, animals have to recognize whom they are communicating with. One way to make this inference is to select among internal generative models of each conspecific who may be encountered. However, these models also have to be learned via some form of Bayesian belief updating. This induces an interesting problem: When receiving sensory input generated by a particular conspecific

    Updated: 2019-11-01
  • Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-15
    Mengwen Yuan,Xi Wu,Rui Yan,Huajin Tang

    Though succeeding in solving various learning tasks, most existing reinforcement learning (RL) models have failed to take into account the complexity of synaptic plasticity in the neural system. Models implementing reinforcement learning with spiking neurons involve only a single plasticity mechanism. Here, we propose a neurally realistic reinforcement learning model that coordinates the plasticities

    Updated: 2019-11-01
  • Storing Object-Dependent Sparse Codes in a Willshaw Associative Network.
    Neural Comput. (IF 2.505) Pub Date : 2019-10-15
    Luis Sa-Couto,Andreas Wichert

    Willshaw networks are single-layered neural networks that store associations between binary vectors. Using only binary weights, these networks can be implemented efficiently to store large numbers of patterns and allow for fault-tolerant recovery of those patterns from noisy cues. However, this is only the case when the involved codes are sparse and randomly generated. In this letter, we use a recently

    Updated: 2019-11-01
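
The baseline this letter starts from, a Willshaw network with binary weights and sparse random codes, fits in a few lines. The sketch below stores hetero-associations with clipped Hebbian learning and recalls with a simple activity threshold; the pattern sizes are arbitrary choices of mine.

```python
# Minimal Willshaw associative memory: binary weights, clipped Hebbian
# storage, threshold recall, with sparse random binary codes.
import numpy as np

rng = np.random.default_rng(0)
n, k, n_pairs = 256, 8, 40                       # code length, active units, pairs

def sparse_code():
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

pairs = [(sparse_code(), sparse_code()) for _ in range(n_pairs)]
W = np.zeros((n, n), dtype=np.uint8)
for x, y in pairs:
    W |= np.outer(y, x)                          # weights clipped at 1

x0, y0 = pairs[0]
scores = W.astype(int) @ x0.astype(int)          # input counts per output unit
retrieved = (scores >= k).astype(np.uint8)       # threshold at cue activity k
print("exact recall:", np.array_equal(retrieved, y0))
```
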
  • Integrating Flexible Normalization into Midlevel Representations of Deep Convolutional Neural Networks.
    Neural Comput. (IF 2.505) Pub Date : 2019-09-17
    Luis Gonzalo Sánchez Giraldo,Odelia Schwartz

    Deep convolutional neural networks (CNNs) are becoming increasingly popular models to predict neural responses in visual cortex. However, contextual effects, which are prevalent in neural processing and in perception, are not explicitly handled by current CNNs, including those used for neural prediction. In primary visual cortex, neural responses are modulated by stimuli spatially surrounding the classical

    Updated: 2019-11-01
  • Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning.
    Neural Comput. (IF 2.505) Pub Date : 2019-09-17
    Xin Yao,Tianchi Huang,Chenglei Wu,Rui-Xiao Zhang,Lifeng Sun

    Humans are able to master a variety of knowledge and skills with ongoing learning. By contrast, dramatic performance degradation is observed when new tasks are added to an existing neural network model. This phenomenon, termed catastrophic forgetting, is one of the major roadblocks that prevent deep neural networks from achieving human-level artificial intelligence. Several research efforts (e.g.,

    Updated: 2019-11-01
Contents have been reproduced by permission of the publishers.