Current journal: arXiv - CS - Neural and Evolutionary Computing
  • Applying Dynamic Training-Subset Selection Methods Using Genetic Programming for Forecasting Implied Volatility
    arXiv.cs.NE Pub Date : 2020-06-29
    Sana Ben Hamida; Wafa Abdelmalek; Fathi Abid

    Volatility is a key variable in option pricing, trading and hedging strategies. The purpose of this paper is to improve the accuracy of forecasting implied volatility using an extension of genetic programming (GP) by means of dynamic training-subset selection methods. These methods manipulate the training data in order to improve out-of-sample pattern fitting. When applied with the static subset

    Updated: 2020-07-15
  • Dense Crowds Detection and Counting with a Lightweight Architecture
    arXiv.cs.NE Pub Date : 2020-07-13
    Javier Antonio Gonzalez-Trejo; Diego Alberto Mercado-Ravell

    In the context of crowd counting, most of the works have focused on improving the accuracy without regard to the performance leading to algorithms that are not suitable for embedded applications. In this paper, we propose a lightweight convolutional neural network architecture to perform crowd detection and counting using fewer computer resources without a significant loss on count accuracy. The architecture

    Updated: 2020-07-15
  • Semi-steady-state Jaya Algorithm
    arXiv.cs.NE Pub Date : 2020-07-13
    Uday K. Chakraborty

    The Jaya algorithm is arguably one of the fastest-emerging metaheuristics amongst the newest members of the evolutionary computation family. The present paper proposes a new, improved Jaya algorithm by modifying the update strategies of the best and the worst members in the population. Simulation results on a twelve-function benchmark test-suite as well as a real-world problem of practical importance

    Updated: 2020-07-14
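The excerpt above stops before describing the modified best/worst update strategies, but the classic Jaya move it builds on is simple enough to sketch (a minimal minimisation example; the sphere objective, bounds, and population size are illustrative, not from the paper):

```python
import random

def jaya_step(population, fitness, lo, hi, rng):
    """One iteration of the classic Jaya update (minimisation): every
    candidate moves toward the current best and away from the current
    worst solution, with no algorithm-specific control parameters."""
    best = min(population, key=fitness)
    worst = max(population, key=fitness)
    new_pop = []
    for x in population:
        r1, r2 = rng.random(), rng.random()
        x_new = x + r1 * (best - abs(x)) - r2 * (worst - abs(x))
        x_new = max(lo, min(hi, x_new))           # keep within bounds
        # greedy replacement: keep whichever of x, x_new is better
        new_pop.append(x_new if fitness(x_new) < fitness(x) else x)
    return new_pop

# toy run: minimise the sphere function f(x) = x^2 on [-10, 10]
rng = random.Random(0)
f = lambda x: x * x
init = [rng.uniform(-10.0, 10.0) for _ in range(20)]
pop = list(init)
for _ in range(50):
    pop = jaya_step(pop, f, -10.0, 10.0, rng)
```

Because of the greedy replacement step, the best fitness in the population can never get worse from one iteration to the next.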
  • Deep Cross-Subject Mapping of Neural Activity
    arXiv.cs.NE Pub Date : 2020-07-13
    Marko Angjelichinoski; Bijan Pesaran; Vahid Tarokh

    In this paper, we demonstrate that a neural decoder trained on neural activity signals of one subject can be used to robustly decode the motor intentions of a different subject with high reliability. This is achieved in spite of the non-stationary nature of neural activity signals and the subject-specific variations of the recording conditions. Our proposed algorithm for cross-subject mapping

    Updated: 2020-07-14
  • Exploring the Evolution of GANs through Quality Diversity
    arXiv.cs.NE Pub Date : 2020-07-13
    Victor Costa; Nuno Lourenço; João Correia; Penousal Machado

    Generative adversarial networks (GANs) achieved relevant advances in the field of generative algorithms, presenting high-quality results mainly in the context of images. However, GANs are hard to train, and several aspects of the model should be previously designed by hand to ensure training success. In this context, evolutionary algorithms such as COEGAN were proposed to solve the challenges in GAN

    Updated: 2020-07-14
  • Coarse scale representation of spiking neural networks: backpropagation through spikes and application to neuromorphic hardware
    arXiv.cs.NE Pub Date : 2020-07-13
    Angel Yanguas-Gil

    In this work we explore recurrent representations of leaky integrate and fire neurons operating at a timescale equal to their absolute refractory period. Our coarse time scale approximation is obtained using a probability distribution function for spike arrivals that is homogeneously distributed over this time interval. This leads to a discrete representation that exhibits the same dynamics as the

    Updated: 2020-07-14
  • Leaky Integrate-and-Fire Spiking Neuron with Learnable Membrane Time Parameter
    arXiv.cs.NE Pub Date : 2020-07-11
    Wei Fang

    Spiking Neural Networks (SNNs) have attracted research interest due to their temporal information processing capability, low power consumption, and high biological plausibility. The Leaky Integrate-and-Fire (LIF) neuron model is one of the most popular spiking neuron models used in SNNs, as it achieves a balance between computing cost and biological plausibility. The most important parameter of a

    Updated: 2020-07-14
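As the title indicates, the parameter in question is the membrane time constant tau. A common discrete-time LIF formulation can be sketched as follows (tau, threshold, reset, and input values here are illustrative, not the paper's):

```python
def lif_step(v, x, tau=2.0, v_reset=0.0, v_threshold=1.0):
    """One discrete-time update of a leaky integrate-and-fire neuron:
    the membrane potential v leaks toward v_reset with time constant
    tau while integrating the input current x; crossing the threshold
    emits a spike and hard-resets the potential."""
    v = v + (x - (v - v_reset)) / tau
    if v >= v_threshold:
        return v_reset, 1       # spike
    return v, 0                 # no spike

# a constant supra-threshold input makes the neuron fire periodically
v, spikes = 0.0, []
for _ in range(20):
    v, s = lif_step(v, x=1.5)
    spikes.append(s)
```

Making tau a trained parameter (rather than a hand-chosen constant, as here) is the kind of change the entry describes; a smaller tau integrates input faster and fires earlier.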
  • Neuromorphic Processing and Sensing: Evolutionary Progression of AI to Spiking
    arXiv.cs.NE Pub Date : 2020-07-10
    Philippe Reiter; Geet Rose Jose; Spyridon Bizmpikis; Ionela-Ancuţa Cîrjilă

    The rapid rise of machine learning and deep learning applications requires ever more computational resources to meet the growing demands of an always-connected, automated world. Neuromorphic technologies based on Spiking Neural Network algorithms hold the promise to implement advanced artificial intelligence using a fraction of the computations and power requirements by modeling

    Updated: 2020-07-14
  • Learning Reasoning Strategies in End-to-End Differentiable Proving
    arXiv.cs.NE Pub Date : 2020-07-13
    Pasquale Minervini; Sebastian Riedel; Pontus Stenetorp; Edward Grefenstette; Tim Rocktäschel

    Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted

    Updated: 2020-07-14
  • Beyond Graph Neural Networks with Lifted Relational Neural Networks
    arXiv.cs.NE Pub Date : 2020-07-13
    Gustav Sourek; Filip Zelezny; Ondrej Kuzelka

    We demonstrate a declarative differentiable programming framework based on the language of Lifted Relational Neural Networks, where small parameterized logic programs are used to encode relational learning scenarios. When presented with relational data, such as various forms of graphs, the program interpreter dynamically unfolds differentiable computational graphs to be used for the program parameter

    Updated: 2020-07-14
  • Distributed Graph Convolutional Networks
    arXiv.cs.NE Pub Date : 2020-07-13
    Simone Scardapane; Indro Spinelli; Paolo Di Lorenzo

    The aim of this work is to develop a fully-distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to make inference

    Updated: 2020-07-14
  • Probabilistic bounds on data sensitivity in deep rectifier networks
    arXiv.cs.NE Pub Date : 2020-07-13
    Blaine Rister; Daniel L. Rubin

    Neuron death is a complex phenomenon with implications for model trainability, but until recently it was measured only empirically. Recent articles have claimed that, as the depth of a rectifier neural network grows to infinity, the probability of finding a valid initialization decreases to zero. In this work, we provide a simple and rigorous proof of that result. Then, we show what happens when the

    Updated: 2020-07-14
  • Multi-Emitter MAP-Elites: Improving quality, diversity and convergence speed with heterogeneous sets of emitters
    arXiv.cs.NE Pub Date : 2020-07-10
    Antoine Cully

    Quality-Diversity (QD) optimisation is a new family of learning algorithms that aims at generating collections of diverse and high-performing solutions. Among those algorithms, MAP-Elites is a simple yet powerful approach that has shown promising results in numerous applications. In this paper, we introduce a novel algorithm named Multi-Emitter MAP-Elites (ME-MAP-Elites) that improves the quality,

    Updated: 2020-07-13
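For context, the baseline MAP-Elites loop that ME-MAP-Elites extends can be sketched as follows (a minimal version with a 1-D behaviour descriptor and an illustrative toy objective; the multi-emitter mechanism itself is not shown):

```python
import random

def map_elites(fitness, descriptor, n_bins=10, iters=2000, seed=0):
    """Minimal MAP-Elites: a 1-D archive keeps, for each behaviour-
    descriptor bin (niche), the best-performing solution seen so far."""
    rng = random.Random(seed)
    archive = {}  # bin index -> (solution, fitness)
    for _ in range(iters):
        if archive:
            # generate offspring by mutating a randomly chosen elite
            x, _ = rng.choice(list(archive.values()))
            x = min(1.0, max(0.0, x + rng.gauss(0.0, 0.1)))
        else:
            x = rng.random()  # bootstrap with a random solution
        b = min(int(descriptor(x) * n_bins), n_bins - 1)
        f = fitness(x)
        if b not in archive or f > archive[b][1]:
            archive[b] = (x, f)  # new elite for this niche
    return archive

# toy problem on [0, 1]: the descriptor is x itself, and fitness
# rewards closeness to 0.5
arch = map_elites(lambda x: -abs(x - 0.5), lambda x: x)
```

In emitter-based variants, the single mutate-an-elite step above is replaced by a set of heterogeneous offspring-generation strategies feeding the same archive.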
  • Artificial Neural Network Approach for the Identification of Clove Buds Origin Based on Metabolites Composition
    arXiv.cs.NE Pub Date : 2020-07-10
    Rustam; Agus Yodi Gunawan; Made Tri Ari Penia Kresnowati

    This paper examines the use of an artificial neural network approach for identifying the origin of clove buds based on metabolite composition. Generally, large data sets are critical for accurate identification, and machine learning with large data sets leads to precise identification of origins. However, clove bud data sets are small due to the lack of metabolite composition data and the high cost of extraction

    Updated: 2020-07-13
  • Improving Adversarial Robustness by Enforcing Local and Global Compactness
    arXiv.cs.NE Pub Date : 2020-07-10
    Anh Bui; Trung Le; He Zhao; Paul Montague; Olivier deVel; Tamas Abraham; Dinh Phung

    The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application. Among many developed defense models against such attacks, adversarial training emerges as the most successful method that consistently resists a wide range of attacks. In this work, based on an observation from a previous study that the representations

    Updated: 2020-07-13
  • Biological credit assignment through dynamic inversion of feedforward networks
    arXiv.cs.NE Pub Date : 2020-07-10
    William F. Podlaski; Christian K. Machens

    Learning depends on changes in synaptic connections deep inside the brain. In multilayer networks, these changes are triggered by error signals fed back from the output, generally through a stepwise inversion of the feedforward processing steps. The gold standard for this process -- backpropagation -- works well in artificial neural networks, but is biologically implausible. Several recent proposals

    Updated: 2020-07-13
  • Training of Deep Learning Neuro-Skin Neural Network
    arXiv.cs.NE Pub Date : 2020-07-03
    Mehrdad Shafiei Dizaji

    In this brief paper, a learning algorithm is developed for the Deep Learning Neuro-Skin Neural Network to improve its learning properties. The neuroskin is a new type of neural network presented recently by the authors. It is comprised of a cellular membrane with a neuron attached to each cell; the neuron is the cell's nucleus. A neuroskin is modelled using finite elements. Each element of the finite

    Updated: 2020-07-10
  • Long Short-Term Memory Spiking Networks and Their Applications
    arXiv.cs.NE Pub Date : 2020-07-09
    Ali Lotfi Rezaabad; Sriram Vishwanath

    Recent advances in event-based neuromorphic systems have resulted in significant interest in the use and development of spiking neural networks (SNNs). However, the non-differentiable nature of spiking neurons makes SNNs incompatible with conventional backpropagation techniques. In spite of the significant progress made in training conventional deep neural networks (DNNs), training methods for SNNs

    Updated: 2020-07-10
  • A Hybrid Evolutionary Algorithm for Reliable Facility Location Problem
    arXiv.cs.NE Pub Date : 2020-06-27
    Han Zhang; Jialin Liu; Xin Yao

    The reliable facility location problem (RFLP) is an important research topic of operational research and plays a vital role in the decision-making and management of modern supply chain and logistics. Through solving RFLP, the decision-maker can obtain reliable location decisions under the risk of facilities' disruptions or failures. In this paper, we propose a novel model for the RFLP. Instead of assuming

    Updated: 2020-07-10
  • EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization
    arXiv.cs.NE Pub Date : 2020-07-09
    Lorenzo Federici; Boris Benedikter; Alessandro Zavoli

    This paper presents the main characteristics of the evolutionary optimization code named EOS, Evolutionary Optimization at Sapienza, and its successful application to challenging, real-world space trajectory optimization problems. EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables. It implements a number of improvements to the well-known Differential

    Updated: 2020-07-10
  • A Study on Encodings for Neural Architecture Search
    arXiv.cs.NE Pub Date : 2020-07-09
    Colin White; Willie Neiswanger; Sam Nolen; Yash Savani

    Neural architecture search (NAS) has been extensively studied in the past few years. A popular approach is to represent each neural architecture in the search space as a directed acyclic graph (DAG), and then search over all DAGs by encoding the adjacency matrix and list of operations as a set of hyperparameters. Recent work has demonstrated that even small changes to the way each architecture is encoded

    Updated: 2020-07-10
  • Guiding Deep Molecular Optimization with Genetic Exploration
    arXiv.cs.NE Pub Date : 2020-07-04
    Sungsoo Ahn; Junsu Kim; Hankook Lee; Jinwoo Shin

    De novo molecular design attempts to search over the chemical space for molecules with the desired property. Recently, deep learning has gained considerable attention as a promising approach to solve the problem. In this paper, we propose genetic expert-guided learning (GEGL), a simple yet novel framework for training a deep neural network (DNN) to generate highly-rewarding molecules. Our main idea

    Updated: 2020-07-10
  • A Neuro-inspired Theory of Joint Human-Swarm Interaction
    arXiv.cs.NE Pub Date : 2020-07-09
    Jonas D. Hasbach; Maren Bennewitz

    Human-swarm interaction (HSI) is an active research challenge in the realms of swarm robotics and human-factors engineering. Here we apply a cognitive systems engineering perspective and introduce a neuro-inspired joint systems theory of HSI. The mindset defines predictions for adaptive, robust and scalable HSI dynamics and therefore has the potential to inform human-swarm loop design.

    Updated: 2020-07-10
  • Identifying efficient controls of complex interaction networks using genetic algorithms
    arXiv.cs.NE Pub Date : 2020-07-09
    Victor-Bogdan Popescu; Krishna Kanhaiya; Iulian Năstac; Eugen Czeizler; Ion Petre

    Control theory has recently seen impactful applications in network science, especially in connection with network medicine. A key topic of research is that of finding minimal external interventions that offer control over the dynamics of a given network, a problem known as network controllability. We propose in this article a new solution for this problem based on genetic algorithms

    Updated: 2020-07-10
  • EVO-RL: Evolutionary-Driven Reinforcement Learning
    arXiv.cs.NE Pub Date : 2020-07-09
    Ahmed Hallawa; Thorsten Born; Anke Schmeink; Guido Dartmann; Arne Peine; Lukas Martin; Giovanni Iacca; Gusz Eiben; Gerd Ascheid

    In this work, we propose a novel approach for reinforcement learning driven by evolutionary computation. Our algorithm, dubbed as Evolutionary-Driven Reinforcement Learning (evo-RL), embeds the reinforcement learning algorithm in an evolutionary cycle, where we distinctly differentiate between purely evolvable (instinctive) behaviour versus purely learnable behaviour. Furthermore, we propose that this

    Updated: 2020-07-10
  • Solving Allen-Cahn and Cahn-Hilliard Equations using the Adaptive Physics Informed Neural Networks
    arXiv.cs.NE Pub Date : 2020-07-09
    Colby L. Wight; Jia Zhao

    Phase field models, in particular, the Allen-Cahn type and Cahn-Hilliard type equations, have been widely used to investigate interfacial dynamic problems. Designing accurate, efficient, and stable numerical algorithms for solving the phase field models has been an active field for decades. In this paper, we focus on using the deep neural network to design an automatic numerical solver for the Allen-Cahn

    Updated: 2020-07-10
  • AutoLR: An Evolutionary Approach to Learning Rate Policies
    arXiv.cs.NE Pub Date : 2020-07-08
    Pedro Carvalho; Nuno Lourenço; Filipe Assunção; Penousal Machado

    The choice of a proper learning rate is paramount for good Artificial Neural Network training and performance. In the past, one had to rely on experience and trial-and-error to find an adequate learning rate. Presently, a plethora of state of the art automatic methods exist that make the search for a good learning rate easier. While these techniques are effective and have yielded good results over

    Updated: 2020-07-09
  • BS4NN: Binarized Spiking Neural Networks with Temporal Coding and Learning
    arXiv.cs.NE Pub Date : 2020-07-08
    Saeed Reza Kheradpisheh; Maryam Mirsadeghi; Timothée Masquelier

    We recently proposed the S4NN algorithm, essentially an adaptation of backpropagation to multilayer spiking neural networks that use simple non-leaky integrate-and-fire neurons and a form of temporal coding known as time-to-first-spike coding. With this coding scheme, neurons fire at most once per stimulus, but the firing order carries information. Here, we introduce BS4NN, a modification of S4NN in

    Updated: 2020-07-09
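The time-to-first-spike coding scheme mentioned above is easy to illustrate (a minimal sketch; the linear value-to-time mapping and t_max are illustrative assumptions, not the paper's exact scheme):

```python
def ttfs_encode(values, t_max=10):
    """Time-to-first-spike coding: each input neuron fires at most once,
    and a larger input intensity maps to an earlier spike time."""
    return [round((1.0 - v) * t_max) for v in values]

# the firing order carries the information: sorting neurons by their
# spike times recovers the ranking of the input intensities
times = ttfs_encode([0.9, 0.1, 0.5])
order = sorted(range(len(times)), key=lambda i: times[i])
```

With at most one spike per neuron and per stimulus, the downstream network reads information from *when* each neuron fires rather than from firing rates, which is what makes the scheme attractive for low-power hardware.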
  • IOHanalyzer: Performance Analysis for Iterative Optimization Heuristics
    arXiv.cs.NE Pub Date : 2020-07-08
    Hao Wang; Diederick Vermetten; Furong Ye; Carola Doerr; Thomas Bäck

    We propose IOHanalyzer, a new software for analyzing the empirical performance of iterative optimization heuristics (IOHs) such as local search algorithms, genetic and evolutionary algorithms, Bayesian optimization algorithms, and similar optimizers. Implemented in R and C++, IOHanalyzer is available on CRAN. It provides a platform for analyzing and visualizing the performance of IOHs on real-valued

    Updated: 2020-07-09
  • Learning Efficient Search Approximation in Mixed Integer Branch and Bound
    arXiv.cs.NE Pub Date : 2020-07-08
    Kaan Yilmaz; Neil Yorke-Smith

    In line with the growing trend of using machine learning to improve solving of combinatorial optimisation problems, one promising idea is to improve node selection within a mixed integer programming branch-and-bound tree by using a learned policy. In contrast to previous work using imitation learning, our policy is focused on learning which of a node's children to select. We present an offline method

    Updated: 2020-07-09
  • Artificial Life in Game Mods for Intuitive Evolution Education
    arXiv.cs.NE Pub Date : 2020-07-07
    Anya E. Vostinar; Barbara Z. Johnson; Kevin Connors

    The understanding and acceptance of evolution by natural selection has become a difficult issue in many parts of the world, particularly the United States of America. The use of games to improve intuition about evolution via natural selection is promising but can be challenging. We propose the use of modifications to commercial games using artificial life techniques to 'stealth teach' about evolution

    Updated: 2020-07-09
  • Uncertainty-Aware Lookahead Factor Models for Quantitative Investing
    arXiv.cs.NE Pub Date : 2020-07-07
    Lakshay Chauhan; John Alberg; Zachary C. Lipton

    On a periodic basis, publicly traded companies report fundamentals, financial data including revenue, earnings, debt, among others. Quantitative finance research has identified several factors, functions of the reported data that historically correlate with stock market performance. In this paper, we first show through simulation that if we could select stocks via factors calculated on future fundamentals

    Updated: 2020-07-09
  • Resonator networks for factoring distributed representations of data structures
    arXiv.cs.NE Pub Date : 2020-07-07
    E. Paxon Frady; Spencer Kent; Bruno A. Olshausen; Friedrich T. Sommer

    The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSA) (Plate, 1991; Gayler, 1998; Kanerva, 1996), whereby

    Updated: 2020-07-09
  • Physics-Based Deep Neural Networks for Beam Dynamics in Charged Particle Accelerators
    arXiv.cs.NE Pub Date : 2020-07-07
    Andrei Ivanov; Ilya Agapov

    This paper presents a novel approach for constructing neural networks which model charged particle beam dynamics. In our approach, the Taylor maps arising in the representation of dynamics are mapped onto the weights of a polynomial neural network. The resulting network approximates the dynamical system with perfect accuracy prior to training and provides a possibility to tune the network weights on

    Updated: 2020-07-08
  • Multivariate Time Series Classification Using Spiking Neural Networks
    arXiv.cs.NE Pub Date : 2020-07-07
    Haowen Fang; Amar Shrestha; Qinru Qiu

    There is an increasing demand to process streams of temporal data in energy-limited scenarios such as embedded devices, driven by the advancement and expansion of Internet of Things (IoT) and Cyber-Physical Systems (CPS). Spiking neural network has drawn attention as it enables low power consumption by encoding and processing information as sparse spike events, which can be exploited for event-driven

    Updated: 2020-07-08
  • Benchmarking in Optimization: Best Practice and Open Issues
    arXiv.cs.NE Pub Date : 2020-07-07
    Thomas Bartz-Beielstein; Carola Doerr; Jakob Bossek; Sowmya Chandrasekaran; Tome Eftimov; Andreas Fischbach; Pascal Kerschke; Manuel Lopez-Ibanez; Katherine M. Malan; Jason H. Moore; Boris Naujoks; Patryk Orzechowski; Vanessa Volz; Markus Wagner; Thomas Weise

    This survey compiles ideas and recommendations from more than a dozen researchers with different backgrounds and from different institutes around the world. Promoting best practice in benchmarking is its main goal. The article discusses eight essential topics in benchmarking: clearly stated goals, well-specified problems, suitable algorithms, adequate performance measures, thoughtful analysis, effective

    Updated: 2020-07-08
  • Fast Perturbative Algorithm Configurators
    arXiv.cs.NE Pub Date : 2020-07-07
    George T. Hall; Pietro Simone Oliveto; Dirk Sudholt

    Recent work has shown that the ParamRLS and ParamILS algorithm configurators can tune some simple randomised search heuristics for standard benchmark functions in linear expected time in the size of the parameter space. In this paper we prove a linear lower bound on the expected time to optimise any parameter tuning problem for ParamRLS, ParamILS as well as for larger classes of algorithm configurators

    Updated: 2020-07-08
  • Strong Generalization and Efficiency in Neural Programs
    arXiv.cs.NE Pub Date : 2020-07-07
    Yujia Li; Felix Gimeno; Pushmeet Kohli; Oriol Vinyals

    We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction. By carefully designing the input / output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes, achieving strong generalization. Moreover, by using reinforcement learning, we optimize for program

    Updated: 2020-07-08
  • srMO-BO-3GP: A sequential regularized multi-objective constrained Bayesian optimization for design applications
    arXiv.cs.NE Pub Date : 2020-07-07
    Anh Tran; Mike Eldred; Scott McCann; Yan Wang

    Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To leverage the capability of the classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, latent-variable model, have been proposed to improve the limitation of the classical BO framework. In this work, we propose

    Updated: 2020-07-08
  • GOLD-NAS: Gradual, One-Level, Differentiable
    arXiv.cs.NE Pub Date : 2020-07-07
    Kaifeng Bi; Lingxi Xie; Xin Chen; Longhui Wei; Qi Tian

    There has been a large literature of neural architecture search, but most existing work made use of heuristic rules that largely constrained the search flexibility. In this paper, we first relax these manually designed constraints and enlarge the search space to contain more than $10^{160}$ candidates. In the new space, most existing differentiable search methods can fail dramatically. We then propose

    Updated: 2020-07-08
  • An Integer Programming Approach to Deep Neural Networks with Binary Activation Functions
    arXiv.cs.NE Pub Date : 2020-07-07
    Bubacarr Bah; Jannis Kurtz

    We study deep neural networks with binary activation functions (BDNN), i.e. the activation function only has two states. We show that the BDNN can be reformulated as a mixed-integer linear program which can be solved to global optimality by classical integer programming solvers. Additionally, a heuristic solution algorithm is presented and we study the model under data uncertainty, applying a two-stage

    Updated: 2020-07-08
  • An Entropy Equation for Energy
    arXiv.cs.NE Pub Date : 2020-07-07
    Kieran Greer

    This paper describes an entropy equation, but one that should be used for measuring energy and not information. In relation to the human brain therefore, both of these quantities can be used to represent the stored information. The human brain makes use of energy efficiency to form its structures, which is likely to be linked to the neuron wiring. This energy efficiency can also be used as the basis

    Updated: 2020-07-08
  • Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by Spiking Neural Network
    arXiv.cs.NE Pub Date : 2020-07-07
    Zihan Pan; Malu Zhang; Jibin Wu; Haizhou Li

    Inspired by the mammal's auditory localization pathway, in this paper we propose a pure spiking neural network (SNN) based computational model for precise sound localization in the noisy real-world environment, and implement this algorithm in a real-time robotic system with a microphone array. The key to this model is the MTPC scheme, which encodes the interaural time difference (ITD) cues into

    Updated: 2020-07-08
  • Meta-Learning through Hebbian Plasticity in Random Networks
    arXiv.cs.NE Pub Date : 2020-07-06
    Elias Najarro; Sebastian Risi

    Lifelong learning and adaptability are two defining aspects of biological agents. Modern reinforcement learning (RL) approaches have shown significant progress in solving complex tasks, however once training is concluded, the found solutions are typically static and incapable of adapting to new information or perturbations. While it is still not completely understood how biological brains learn and

    Updated: 2020-07-07
  • ModeNet: Mode Selection Network For Learned Video Coding
    arXiv.cs.NE Pub Date : 2020-07-06
    Théo Ladune (IETR); Pierrick Philippe (IETR); Wassim Hamidouche (IETR); Lu Zhang (IETR); Olivier Déforges (IETR)

    In this paper, a mode selection network (ModeNet) is proposed to enhance deep learning-based video compression. Inspired by traditional video coding, ModeNet's purpose is to enable competition among several coding modes. The proposed ModeNet learns and conveys a pixel-wise partitioning of the frame, used to assign each pixel to the most suited coding mode. ModeNet is trained alongside the different coding

    Updated: 2020-07-07
  • A Case for Lifetime Reliability-Aware Neuromorphic Computing
    arXiv.cs.NE Pub Date : 2020-07-04
    Shihao Song; Anup Das

    Neuromorphic computing with non-volatile memory (NVM) can significantly improve performance and lower energy consumption of machine learning tasks implemented using spike-based computations and bio-inspired learning algorithms. High voltages required to operate certain NVMs such as phase-change memory (PCM) can accelerate aging in a neuron's CMOS circuit, thereby reducing the lifetime of neuromorphic

    Updated: 2020-07-07
  • Lazy Greedy Hypervolume Subset Selection from Large Candidate Solution Sets
    arXiv.cs.NE Pub Date : 2020-07-04
    Weiyu Chen; Hisao Ishibuchi; Ke Shang

    Subset selection has been a popular topic in recent years, and a number of subset selection methods have been proposed. Among those methods, hypervolume subset selection is widely used. Greedy hypervolume subset selection algorithms can achieve good approximations to the optimal subset. However, when the candidate set is large (e.g., an unbounded external archive with a large number of solutions), the algorithm

    Updated: 2020-07-07
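The greedy strategy the entry refers to can be sketched for the two-objective case (minimisation; the hv2d helper, the toy points, and the reference point are illustrative, and the paper's lazy acceleration is not shown):

```python
def hv2d(points, ref):
    """Exact 2-D hypervolume (minimisation): the area dominated by the
    point set and bounded by the reference point ref."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(set(points)):
        if y < prev_y:                       # non-dominated step of the front
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def greedy_hv_subset(candidates, k, ref):
    """Greedy hypervolume subset selection: repeatedly add the candidate
    whose inclusion increases the subset hypervolume the most."""
    chosen = []
    for _ in range(k):
        base = hv2d(chosen, ref)
        best = max((p for p in candidates if p not in chosen),
                   key=lambda p: hv2d(chosen + [p], ref) - base)
        chosen.append(best)
    return chosen

pts = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5)]
subset = greedy_hv_subset(pts, k=2, ref=(4.0, 4.0))
```

The naive version above recomputes every candidate's contribution in each round; the lazy variant described in the entry avoids most of those recomputations when the candidate set is large.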
  • Building Reservoir Computing Hardware Using Low Energy-Barrier Magnetics
    arXiv.cs.NE Pub Date : 2020-07-06
    Samiran Ganguly; Avik W. Ghosh

    Biologically inspired recurrent neural networks, such as reservoir computers, are of interest for designing spatio-temporal data processors from a hardware point of view due to their simple learning scheme and deep connections to Kalman filters. In this work we use in-depth simulation studies to discuss a way to construct hardware reservoir computers using an analog stochastic neuron cell built from a low

    Updated: 2020-07-07
  • Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions
    arXiv.cs.NE Pub Date : 2020-07-05
    Michael Chang; Sidhant Kaushik; S. Matthew Weinberg; Thomas L. Griffiths; Sergey Levine

    This paper seeks to establish a framework for directing a society of simple, specialized, self-interested agents to solve what traditionally are posed as monolithic single-agent sequential decision problems. What makes it challenging to use a decentralized approach to collectively optimize a central objective is the difficulty in characterizing the equilibrium strategy profile of non-cooperative games

    Updated: 2020-07-07
  • On Connections between Regularizations for Improving DNN Robustness
    arXiv.cs.NE Pub Date : 2020-07-04
    Yiwen Guo; Long Chen; Yurong Chen; Changshui Zhang

    This paper analyzes regularization terms proposed recently for improving the adversarial robustness of deep neural networks (DNNs), from a theoretical point of view. Specifically, we study possible connections between several effective methods, including input-gradient regularization, Jacobian regularization, curvature regularization, and a cross-Lipschitz functional. We investigate them on DNNs with

    Updated: 2020-07-07
  • Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors
    arXiv.cs.NE Pub Date : 2020-07-04
    Zijian Jiang; Jianwen Zhou; Haiping Huang

    Artificial neural networks can achieve impressive performances, and even outperform humans in some specific tasks. Nevertheless, unlike biological brains, the artificial neural networks suffer from tiny perturbations in sensory input, under various kinds of adversarial attacks. It is therefore necessary to study the origin of the adversarial vulnerability. Here, we establish a fundamental relationship

    Updated: 2020-07-07
  • Learn Faster and Forget Slower via Fast and Stable Task Adaptation
    arXiv.cs.NE Pub Date : 2020-07-02
    Farshid Varno; Lucas May Petry; Lisa Di Jorio; Stan Matwin

    Training Deep Neural Networks (DNNs) is still highly time-consuming and compute-intensive. It has been shown that adapting a pretrained model may significantly accelerate this process. With a focus on classification, we show that current fine-tuning techniques make the pretrained models catastrophically forget the transferred knowledge even before anything about the new task is learned. Such rapid

    Updated: 2020-07-06
  • Continuously Indexed Domain Adaptation
    arXiv.cs.NE Pub Date : 2020-07-03
    Hao Wang; Hao He; Dina Katabi

    Existing domain adaptation focuses on transferring knowledge between domains with categorical indices (e.g., between datasets A and B). However, many tasks involve continuously indexed domains. For example, in medical applications, one often needs to transfer disease analysis and prediction across patients of different ages, where age acts as a continuous domain index. Such tasks are challenging for

    Updated: 2020-07-06
  • Ground Truth Free Denoising by Optimal Transport
    arXiv.cs.NE Pub Date : 2020-07-03
    Sören Dittmer; Carola-Bibiane Schönlieb; Peter Maass

    We present a learned unsupervised denoising method for arbitrary types of data, which we explore on images and one-dimensional signals. The training is solely based on samples of noisy data and examples of noise, which -- critically -- do not need to come in pairs. We only need the assumption that the noise is independent and additive (although we describe how this can be extended). The method rests

    Updated: 2020-07-06
  • Surrogate-assisted Particle Swarm Optimisation for Evolving Variable-length Transferable Blocks for Image Classification
    arXiv.cs.NE Pub Date : 2020-07-03
    Bin Wang; Bing Xue; Mengjie Zhang

    Deep convolutional neural networks have demonstrated promising performance on image classification tasks, but the manual design process has become increasingly complex due to the rapid growth in depth and the increasingly intricate topologies of convolutional neural networks. As a result, neural architecture search has emerged to automatically design convolutional neural networks that outperform handcrafted
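
The particle swarm optimisation at the core of this approach can be sketched in its vanilla form on a toy objective; the surrogate model and the variable-length block encoding from the paper are omitted, and all constants below are conventional defaults rather than the authors' settings.

```python
import random

# Vanilla PSO minimizing the 2-D sphere function (illustrative only).
random.seed(0)

def sphere(x):
    return sum(xi * xi for xi in x)

DIM, SWARM, ITERS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration weights (common defaults)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]             # personal bests
gbest = min(pbest, key=sphere)          # global best

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)

print(sphere(gbest))  # typically converges very close to the optimum at 0
```

In the paper's setting, each particle would encode a candidate block architecture and the expensive fitness evaluation (training the network) would be partly replaced by a surrogate; the swarm dynamics above stay the same.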

    Updated: 2020-07-06
  • Persistent Neurons
    arXiv.cs.NE Pub Date : 2020-07-02
    Yimeng Min

    Most algorithms used in neural network (NN)-based learning tasks are strongly affected by the choice of initialization. Good initialization can avoid sub-optimal solutions and alleviate saturation during training. However, designing improved initialization strategies is a difficult task, and our understanding of good initialization is still very primitive. Here, we propose persistent neurons, a strategy
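
The proposed "persistent neurons" strategy is not detailed in this excerpt; as background on the initialization schemes such work builds on, here is a sketch of standard Xavier/Glorot uniform initialization, with layer sizes chosen only for illustration.

```python
import math
import random

random.seed(1)

def xavier_init(fan_in, fan_out):
    """Draw weights uniformly from [-limit, limit] with
    limit = sqrt(6 / (fan_in + fan_out)), keeping activation
    variance roughly constant across layers."""
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[random.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = xavier_init(256, 128)
limit = math.sqrt(6.0 / (256 + 128))
print(all(abs(w) <= limit for row in W for w in row))  # → True
```

Schemes like this address the saturation problem the abstract mentions by scaling the initial weight range to the layer's fan-in and fan-out.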

    Updated: 2020-07-06
  • Decoder-free Robustness Disentanglement without (Additional) Supervision
    arXiv.cs.NE Pub Date : 2020-07-02
    Yifei Wang; Dan Peng; Furui Liu; Zhenguo Li; Zhitang Chen; Jiansheng Yang

    Adversarial Training (AT) is proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input, which, however, inevitably leads to severe accuracy reduction as it discards the non-robust yet useful features. This motivates us to preserve both robust and non-robust features and separate them with disentangled representation learning. Our

    Updated: 2020-07-06
  • Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks
    arXiv.cs.NE Pub Date : 2020-07-02
    Jibin Wu; Chenglin Xu; Daquan Zhou; Haizhou Li; Kay Chen Tan

    Spiking neural networks (SNNs) have shown clear advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency, due to their event-driven nature and sparse communication. However, the training of deep SNNs is not straightforward. In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern
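
The rate-coding idea behind ANN-to-SNN conversion can be illustrated with a minimal integrate-and-fire neuron: driven by a constant input, it fires at a rate roughly proportional to that input, which is what lets a ReLU activation be approximated by spike counts. The threshold and time window below are illustrative, not the paper's.

```python
def spike_count(current, threshold=1.0, steps=1000):
    """Integrate-and-fire neuron: accumulate the input current,
    spike on crossing the threshold, and subtract the threshold
    (soft reset, as used in many conversion schemes)."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += current
        if v >= threshold:
            spikes += 1
            v -= threshold
    return spikes

# Firing rate grows linearly with the input, mirroring ReLU:
print(spike_count(0.125), spike_count(0.25), spike_count(0.5))  # → 125 250 500
```

This linear input-to-rate mapping is the bridge the conversion exploits; the layer-wise learning in the paper then fine-tunes the spiking network directly.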

    Updated: 2020-07-03
  • A Novel DNN Training Framework via Data Sampling and Multi-Task Optimization
    arXiv.cs.NE Pub Date : 2020-07-02
    Boyu Zhang; A. K. Qin; Hong Pan; Timos Sellis

    Conventional DNN training paradigms typically rely on one training set and one validation set, obtained by partitioning an annotated dataset used for training, namely the gross training set, in a certain way. The training set is used for training the model, while the validation set is used to estimate the generalization performance of the trained model as training proceeds, to avoid over-fitting. There
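
The conventional paradigm the paper contrasts itself with can be sketched as a single random split of the gross training set; the 80/20 ratio below is a common convention, not the paper's.

```python
import random

random.seed(42)

def split(dataset, val_fraction=0.2):
    """Shuffle a copy of the gross training set, then carve off the
    last val_fraction of it as the held-out validation set."""
    data = dataset[:]
    random.shuffle(data)
    n_val = int(len(data) * val_fraction)
    return data[:-n_val], data[-n_val:]

gross = list(range(100))           # stand-in for an annotated dataset
train, val = split(gross)
print(len(train), len(val))        # → 80 20
print(set(train).isdisjoint(val))  # → True
```

The paper's framework replaces this single fixed partition with data sampling and a multi-task formulation; the sketch above only shows the baseline it departs from.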

    Updated: 2020-07-03
  • High Dimensional Bayesian Optimization Assisted by Principal Component Analysis
    arXiv.cs.NE Pub Date : 2020-07-02
    Elena Raponi; Hao Wang; Mariusz Bujny; Simonetta Boria; Carola Doerr

    Bayesian Optimization (BO) is a surrogate-assisted global optimization technique that has been successfully applied in various fields, e.g., automated machine learning and design optimization. Built upon a so-called infill criterion and Gaussian Process regression (GPR), the BO technique suffers from substantial computational complexity and a hampered convergence rate as the dimension of the search
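
Only the dimensionality-reduction step can be sketched from this excerpt: extracting the top principal component of a point cloud by power iteration on its covariance matrix. How BO then optimizes in the reduced space is beyond the abstract; the data below is synthetic and illustrative.

```python
import random

random.seed(3)

def top_component(points, iters=200):
    """Return the dominant eigenvector of the sample covariance
    matrix, found by power iteration (pure Python, small dimensions)."""
    dim, n = len(points[0]), len(points)
    mean = [sum(p[d] for p in points) / n for d in range(dim)]
    centered = [[p[d] - mean[d] for d in range(dim)] for p in points]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(dim)]
           for i in range(dim)]
    v = [random.random() for _ in range(dim)]
    for _ in range(iters):
        v = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# Points stretched along the x-axis: the top component should align with it.
pts = [[random.gauss(0, 5), random.gauss(0, 0.1)] for _ in range(200)]
v = top_component(pts)
print(abs(v[0]) > 0.99)  # → True: component aligns with the high-variance axis
```

Projecting the search space onto a few such components is what lets PCA-assisted BO sidestep the curse of dimensionality the abstract describes.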

    Updated: 2020-07-03
Contents have been reproduced by permission of the publishers.