Current journal: arXiv - CS - Neural and Evolutionary Computing
  • EPNE: Evolutionary Pattern Preserving Network Embedding
    arXiv.cs.NE Pub Date : 2020-09-24
    Junshan Wang; Yilun Jin; Guojie Song; Xiaojun Ma

    Information networks are ubiquitous and are ideal for modeling relational data. Because networks are sparse and irregular, network embedding algorithms have caught the attention of many researchers, who have proposed numerous embedding algorithms for static networks. Yet in real life, networks constantly evolve over time. Hence, evolutionary patterns, namely how nodes develop over time, would serve

    Updated: 2020-09-25
  • Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs
    arXiv.cs.NE Pub Date : 2020-09-24
    Hung Le; Svetha Venkatesh

    Artificial Neural Networks are uniquely adroit at machine learning by processing data through a network of artificial neurons. The inter-neuronal connection weights represent the learnt Neural Program that instructs the network on how to compute the data. However, without an external memory to store Neural Programs, they are restricted to only one, overwriting learnt programs when trained on new data

    Updated: 2020-09-25
  • Evolution, Symbiosis, and Autopoiesis in the Game of Life
    arXiv.cs.NE Pub Date : 2020-09-23
    Peter D. Turney

    Recently we introduced a model of symbiosis, Model-S, based on the evolution of seed patterns in Conway's Game of Life. In the model, the fitness of a seed pattern is measured by one-on-one competitions in the Immigration Game, a two-player variation of the Game of Life. This article examines the role of autopoiesis in determining fitness in Model-S. We connect our research on evolution, symbiosis

    Updated: 2020-09-25
  • Parameters for the best convergence of an optimization algorithm On-The-Fly
    arXiv.cs.NE Pub Date : 2020-09-23
    Valdimir Pieter

    What really sparked my interest was how certain parameters worked better for an optimization algorithm's convergence even though the objective formula had no significant differences. Thus the research question stated: 'Which parameters provide the most optimal convergence solution of an objective formula using the on-the-fly method?' This research was done in an experimental concept in which

    Updated: 2020-09-25
  • Adversarial robustness via stochastic regularization of neural activation sensitivity
    arXiv.cs.NE Pub Date : 2020-09-23
    Gil Fidel; Ron Bitton; Ziv Katzir; Asaf Shabtai

    Recent works have shown that the input domain of any machine learning classifier is bound to contain adversarial examples. Thus we can no longer hope to immune classifiers against adversarial examples and instead can only aim to achieve the following two defense goals: 1) making adversarial examples harder to find, or 2) weakening their adversarial nature by pushing them further away from correctly

    Updated: 2020-09-25
  • Efficient Design of Neural Networks with Random Weights
    arXiv.cs.NE Pub Date : 2020-08-24
    Ajay M. Patrikar

    Single layer feedforward networks with random weights are known for their non-iterative and fast training algorithms and are successful in a variety of classification and regression problems. A major drawback of these networks is that they require a large number of hidden units. In this paper, we propose a technique to reduce the number of hidden units substantially without affecting the accuracy of

    Updated: 2020-09-25
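The entry above concerns single-layer feedforward networks with random weights, where only the output weights are trained, non-iteratively, by a linear solve. A minimal stdlib-only sketch of that idea (the toy data, sizes, and ridge term are illustrative assumptions, not the paper's setup):

```python
import math
import random

random.seed(0)

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def hidden(x, W, biases):
    """Random hidden layer: tanh(W x + b); W and biases are never trained."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
            for w, bi in zip(W, biases)]

# toy regression target y = x0 - 2*x1
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(40)]
y = [x[0] - 2 * x[1] for x in X]

n_hidden = 20
W = [[random.gauss(0, 1) for _ in range(2)] for _ in range(n_hidden)]
biases = [random.gauss(0, 1) for _ in range(n_hidden)]

H = [hidden(x, W, biases) for x in X]
lam = 1e-6  # small ridge term for numerical stability
# non-iterative training: solve (H^T H + lam*I) beta = H^T y
HtH = [[sum(h[i] * h[j] for h in H) + (lam if i == j else 0.0)
        for j in range(n_hidden)] for i in range(n_hidden)]
Hty = [sum(h[i] * yi for h, yi in zip(H, y)) for i in range(n_hidden)]
beta = solve(HtH, Hty)

pred = [sum(bi * hi for bi, hi in zip(beta, hidden(x, W, biases))) for x in X]
mse = sum((pi - yi) ** 2 for pi, yi in zip(pred, y)) / len(y)
print(f"train MSE: {mse:.6f}")
```

The non-iterative step is the single linear solve; the large hidden-unit counts the paper aims to reduce correspond to `n_hidden` here.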
  • Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training
    arXiv.cs.NE Pub Date : 2020-09-23
    Dingqing Yang; Amin Ghasemazar; Xiaowei Ren; Maximilian Golub; Guy Lemieux; Mieszko Lis

    The success of DNN pruning has led to the development of energy-efficient inference accelerators that support pruned models with sparse weight and activation tensors. Because the memory layouts and dataflows in these architectures are optimized for the access patterns during $\mathit{inference}$, however, they do not efficiently support the emerging sparse $\mathit{training}$ techniques. In this paper

    Updated: 2020-09-24
  • An electronic neuromorphic system for real-time detection of High Frequency Oscillations (HFOs) in intracranial EEG
    arXiv.cs.NE Pub Date : 2020-09-23
    Mohammadali Sharifhazileh (Institute of Neuroinformatics, University of Zurich and ETH Zurich; Klinik für Neurochirurgie, UniversitätsSpital und Universität Zürich); Karla Burelo (Institute of Neuroinformatics, University of Zurich and ETH Zurich; Klinik für Neurochirurgie, UniversitätsSpital und Universität Zürich); Johannes Sarnthein (Klinik für Neurochirurgie, UniversitätsSpital und Universität Zürich); Giacomo

    In this work, we present a neuromorphic system that combines for the first time a neural recording headstage with a signal-to-spike conversion circuit and a multi-core spiking neural network (SNN) architecture on the same die for recording, processing, and detecting High Frequency Oscillations (HFO), which are biomarkers for the epileptogenic zone. The device was fabricated using a standard 0.18$\mu$m

    Updated: 2020-09-24
  • Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves
    arXiv.cs.NE Pub Date : 2020-09-23
    Luke Metz; Niru Maheswaranathan; C. Daniel Freeman; Ben Poole; Jascha Sohl-Dickstein

    Much as replacing hand-designed features with learned functions has revolutionized how we solve perceptual tasks, we believe learned algorithms will transform how we train models. In this work we focus on general-purpose learned optimizers capable of training a wide variety of problems with no user-specified hyperparameters. We introduce a new, neural network parameterized, hierarchical optimizer with

    Updated: 2020-09-24
  • A new evolutionary algorithm: Learner performance based behavior algorithm
    arXiv.cs.NE Pub Date : 2020-09-05
    Chnoor M. Rahman; Tarik A. Rashid

    A novel evolutionary algorithm called the learner performance based behavior algorithm (LPB) is proposed in this article. The basic inspiration of LPB originates from the process of accepting graduated learners from high school into different departments at university, together with the changes those learners must make in their studying behavior to improve their performance at university. The most important

    Updated: 2020-09-24
  • Tensor Programs III: Neural Matrix Laws
    arXiv.cs.NE Pub Date : 2020-09-22
    Greg Yang

    In a neural network (NN), \emph{weight matrices} linearly transform inputs into \emph{preactivations} that are then transformed nonlinearly into \emph{activations}. A typical NN interleaves multitudes of such linear and nonlinear transforms to express complex functions. Thus, the (pre-)activations depend on the weights in an intricate manner. We show that, surprisingly, (pre-)activations of a randomly

    Updated: 2020-09-23
  • Complex Vehicle Routing with Memory Augmented Neural Networks
    arXiv.cs.NE Pub Date : 2020-09-22
    Marijn van Knippenberg; Mike Holenderski; Vlado Menkovski

    Complex real-life routing challenges can be modeled as variations of well-known combinatorial optimization problems. These routing problems have long been studied and are difficult to solve at scale. The particular setting may also make exact formulation difficult. Deep Learning offers an increasingly attractive alternative to traditional solutions, which mainly revolve around the use of various heuristics

    Updated: 2020-09-23
  • Multi-threaded Memory Efficient Crossover in C++ for Generational Genetic Programming
    arXiv.cs.NE Pub Date : 2020-09-22
    W. B. Langdon

    C++ code snippets from a multi-core parallel memory-efficient crossover for genetic programming are given. They may be adapted for separate generation evolutionary algorithms where large chromosomes or small RAM require no more than M + (2 times nthreads) simultaneously active individuals.

    Updated: 2020-09-23
  • Evolutionary Architecture Search for Graph Neural Networks
    arXiv.cs.NE Pub Date : 2020-09-21
    Min Shi; David A. Wilson; Xingquan Zhu; Yu Huang; Yuan Zhuang; Jianxun Liu; Yufei Tang

    Automated machine learning (AutoML) has seen a resurgence in interest with the boom of deep learning over the past decade. In particular, Neural Architecture Search (NAS) has seen significant attention throughout the AutoML research community, and has pushed forward the state-of-the-art in a number of neural models to address grid-like data such as texts and images. However, very little work has been

    Updated: 2020-09-23
  • DISPATCH: Design Space Exploration of Cyber-Physical Systems
    arXiv.cs.NE Pub Date : 2020-09-21
    Prerit Terway; Kenza Hamidouche; Niraj K. Jha

    Design of Cyber-physical systems (CPSs) is a challenging task that involves searching over a large search space of various CPS configurations and possible values of components composing the system. Hence, there is a need for sample-efficient CPS design space exploration to select the system architecture and component values that meet the target system requirements. We address this challenge by formulating

    Updated: 2020-09-23
  • An Experimental Study of Weight Initialization and Weight Inheritance Effects on Neuroevolution
    arXiv.cs.NE Pub Date : 2020-09-21
    Zimeng Lyu; AbdElRahman ElSaid; Joshua Karns; Mohamed Mkaouer; Travis Desell

    Weight initialization is critical in being able to successfully train artificial neural networks (ANNs), and even more so for recurrent neural networks (RNNs) which can easily suffer from vanishing and exploding gradients. In neuroevolution, where evolutionary algorithms are applied to neural architecture search, weights typically need to be initialized at three different times: when initial genomes

    Updated: 2020-09-22
  • On the Performance of Generative Adversarial Network (GAN) Variants: A Clinical Data Study
    arXiv.cs.NE Pub Date : 2020-09-21
    Jaesung Yoo; Jeman Park; An Wang; David Mohaisen; Joongheon Kim

    Generative Adversarial Networks (GANs) are a useful type of neural network for various applications, including generative models and feature extraction. Various types of GANs are being researched with different insights, resulting in a diverse family of GANs with better performance in each generation. This review focuses on various GANs categorized by their common traits.

    Updated: 2020-09-22
  • Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial Decomposition
    arXiv.cs.NE Pub Date : 2020-09-19
    Adarsha Balaji; Shihao Song; Anup Das; Jeffrey Krichmar; Nikil Dutt; James Shackleford; Nagarajan Kandasamy; Francky Catthoor

    With growing model complexity, mapping Spiking Neural Network (SNN)-based applications to tile-based neuromorphic hardware is becoming increasingly challenging. This is because the synaptic storage resources on a tile, viz. a crossbar, can accommodate only a fixed number of pre-synaptic connections per post-synaptic neuron. For complex SNN models that have many pre-synaptic connections per neuron,

    Updated: 2020-09-22
  • A Survey on Machine Learning Applied to Dynamic Physical Systems
    arXiv.cs.NE Pub Date : 2020-09-21
    Sagar Verma

    This survey covers recent advancements at the intersection of physical modeling and machine learning. We focus on the modeling of nonlinear systems related to electric motors, and survey motor control and fault detection in the operation of electric motors.

    Updated: 2020-09-22
  • Interpretable-AI Policies using Evolutionary Nonlinear Decision Trees for Discrete Action Systems
    arXiv.cs.NE Pub Date : 2020-09-20
    Yashesh Dhebar; Kalyanmoy Deb; Subramanya Nageshrao; Ling Zhu; Dimitar Filev

    Black-box artificial intelligence (AI) induction methods such as deep reinforcement learning (DRL) are increasingly being used to find optimal policies for a given control task. Although policies represented using a black-box AI are capable of efficiently executing the underlying control task and achieving optimal closed-loop performance -- controlling the agent from initial time step until the successful

    Updated: 2020-09-22
  • Latent Representation Prediction Networks
    arXiv.cs.NE Pub Date : 2020-09-20
    Hlynur Davíð Hlynsson; Merlin Schüler; Robin Schiewer; Tobias Glasmachers; Laurenz Wiskott

    Deeply-learned planning methods are often based on learning representations that are optimized for unrelated tasks. For example, they might be trained on reconstructing the environment. These representations are then combined with predictor functions for simulating rollouts to navigate the environment. We find this principle of learning representations unsatisfying and propose to learn them such that

    Updated: 2020-09-22
  • TorchDyn: A Neural Differential Equations Library
    arXiv.cs.NE Pub Date : 2020-09-20
    Michael Poli; Stefano Massaroli; Atsushi Yamashita; Hajime Asama; Jinkyoo Park

    Continuous-depth learning has recently emerged as a novel perspective on deep learning, improving performance in tasks related to dynamical systems and density estimation. Core to these approaches is the neural differential equation, whose forward passes are the solutions of an initial value problem parametrized by a neural network. Unlocking the full potential of continuous-depth models requires a

    Updated: 2020-09-22
  • Learned Low Precision Graph Neural Networks
    arXiv.cs.NE Pub Date : 2020-09-19
    Yiren Zhao; Duo Wang; Daniel Bates; Robert Mullins; Mateja Jamnik; Pietro Lio

    Deep Graph Neural Networks (GNNs) show promising performance on a range of graph tasks, yet at present are costly to run and lack many of the optimisations applied to DNNs. We show, for the first time, how to systematically quantise GNNs with minimal or no loss in performance using Network Architecture Search (NAS). We define the possible quantisation search space of GNNs. The proposed novel NAS mechanism

    Updated: 2020-09-22
  • Co-Evolution of Multi-Robot Controllers and Task Cues for Off-World Open Pit Mining
    arXiv.cs.NE Pub Date : 2020-09-19
    Jekan Thangavelautham; Yinan Xu

    Robots are ideal for open-pit mining on the Moon as it is a dull, dirty, and dangerous task. The challenge is to scale up productivity with an ever-increasing number of robots. This paper presents a novel method for developing scalable controllers for use in multi-robot excavation and site-preparation scenarios. The controller starts with a blank slate and does not require human-authored operations scripts

    Updated: 2020-09-22
  • Closed-loop spiking control on a neuromorphic processor implemented on the iCub
    arXiv.cs.NE Pub Date : 2020-09-01
    Jingyue Zhao; Nicoletta Risi; Marco Monforte; Chiara Bartolozzi; Giacomo Indiveri; Elisa Donati

    Although neuromorphic engineering promises the deployment of low-latency, adaptive and low-power systems that can lead to the design of truly autonomous artificial agents, the development of a fully neuromorphic artificial agent is still missing. While neuromorphic sensing and perception, as well as decision-making systems, are now mature, the control and actuation part is lagging behind. In this paper

    Updated: 2020-09-22
  • Improving Intelligence of Evolutionary Algorithms Using Experience Share and Replay
    arXiv.cs.NE Pub Date : 2020-08-10
    Majdi I. Radaideh; Koroush Shirvan

    We propose PESA, a novel approach combining Particle Swarm Optimisation (PSO), Evolution Strategy (ES), and Simulated Annealing (SA) in a hybrid algorithm inspired by reinforcement learning. PESA hybridizes the three algorithms by storing their solutions in a shared replay memory. Next, PESA applies prioritized replay to redistribute data between the three algorithms in frequent form based on their

    Updated: 2020-09-21
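The shared replay memory at the heart of PESA can be sketched as follows. As an assumption to keep the example self-contained, the PSO/ES/SA trio is replaced by two toy searchers (a hill climber and a simulated annealer); only the share-and-replay mechanism is illustrated, not the paper's exact algorithm:

```python
import math
import random

random.seed(1)

def sphere(x):
    """Toy minimization target."""
    return sum(xi * xi for xi in x)

memory = []  # shared replay memory: (fitness, solution), best first

def remember(x):
    memory.append((sphere(x), x[:]))
    memory.sort(key=lambda t: t[0])
    del memory[50:]                   # keep only the 50 best entries

def prioritized_sample():
    # rank-based priority: draw only from the best few entries
    k = min(len(memory), 10)
    return memory[random.randrange(k)][1][:]

def step(x, sigma=0.3):
    return [xi + random.gauss(0, sigma) for xi in x]

x_hc = [random.uniform(-5, 5) for _ in range(3)]  # hill-climber state
x_sa = [random.uniform(-5, 5) for _ in range(3)]  # annealer state
temp = 2.0
for it in range(2000):
    cand = step(x_hc)                 # hill climber: improvements only
    if sphere(cand) < sphere(x_hc):
        x_hc = cand
    remember(x_hc)
    cand = step(x_sa)                 # annealer: may accept worse moves
    delta = sphere(cand) - sphere(x_sa)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x_sa = cand
    remember(x_sa)
    temp *= 0.999
    if it % 100 == 99:                # periodic restart from shared memory
        x_hc = prioritized_sample()
        x_sa = prioritized_sample()

best_fitness = memory[0][0]
print(f"best fitness found: {best_fitness:.6f}")
```

The key design point the abstract describes is the shared memory: each searcher both contributes to it and periodically restarts from a prioritized sample of it.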
  • Multi-Activation Hidden Units for Neural Networks with Random Weights
    arXiv.cs.NE Pub Date : 2020-09-06
    Ajay M. Patrikar

    Single layer feedforward networks with random weights are successful in a variety of classification and regression problems. These networks are known for their non-iterative and fast training algorithms. A major drawback of these networks is that they require a large number of hidden units. In this paper, we propose the use of multi-activation hidden units. Such units increase the number of tunable

    Updated: 2020-09-21
  • Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity
    arXiv.cs.NE Pub Date : 2020-08-21
    Serkan Kiranyaz; Junaid Malik; Habib Ben Abdallah; Turker Ince; Alexandros Iosifidis; Moncef Gabbouj

    The recently proposed network model, Operational Neural Networks (ONNs), can generalize the conventional Convolutional Neural Networks (CNNs) that are homogenous only with a linear neuron model. As a heterogenous network model, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or

    Updated: 2020-09-21
  • Unitary Learning for Deep Diffractive Neural Network
    arXiv.cs.NE Pub Date : 2020-08-17
    Yong-Liang Xiao

    The realization of deep learning with coherent diffraction has achieved remarkable development nowadays, benefiting from the fact that matrix multiplication can be executed optically in parallel with little power consumption. A coherent optical field, propagated as a complex-valued entity, can be manipulated into a task-oriented output with statistical inference. In this paper, we present

    Updated: 2020-09-21
  • Spatio-Temporal Activation Function To Map Complex Dynamical Systems
    arXiv.cs.NE Pub Date : 2020-09-06
    Parth Mahendra

    Most of the real world is governed by complex and chaotic dynamical systems. All of these dynamical systems pose a challenge in modelling them using neural networks. Currently, reservoir computing, which is a subset of recurrent neural networks, is actively used to simulate complex dynamical systems. In this work, a two dimensional activation function is proposed which includes an additional temporal

    Updated: 2020-09-21
  • A Study of Genetic Algorithms for Hyperparameter Optimization of Neural Networks in Machine Translation
    arXiv.cs.NE Pub Date : 2020-09-15
    Keshav Ganapathy

    With neural networks having demonstrated their versatility and benefits, the need for their optimal performance is as prevalent as ever. Hyperparameters, a defining characteristic, can greatly affect a network's performance. Thus engineers go through a tuning process to identify and implement optimal hyperparameters. That being said, excess amounts of manual effort are required for tuning network architectures

    Updated: 2020-09-21
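A generic genetic-algorithm tuning loop of the kind this entry describes might look like the sketch below; the fitness function is a synthetic stand-in (an assumption) for an expensive metric such as translation quality, and the hypothetical optimum at lr=0.01 with 256 hidden units is invented for illustration:

```python
import random

random.seed(2)

def fitness(lr, hidden):
    # synthetic surrogate (assumption): peaks at lr=0.01, hidden=256
    return -((lr - 0.01) ** 2) * 1e4 - ((hidden - 256) / 256) ** 2

def random_individual():
    return [10 ** random.uniform(-4, -1), random.randint(16, 1024)]

def crossover(a, b):
    return [random.choice([a[0], b[0]]), random.choice([a[1], b[1]])]

def mutate(ind):
    lr, hidden = ind
    if random.random() < 0.5:
        lr *= 10 ** random.gauss(0, 0.2)      # perturb lr on a log scale
    else:
        hidden = max(16, int(hidden * (1 + random.gauss(0, 0.2))))
    return [lr, hidden]

pop = [random_individual() for _ in range(20)]
for gen in range(30):
    pop.sort(key=lambda ind: fitness(*ind), reverse=True)
    parents = pop[:10]                        # truncation selection + elitism
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=lambda ind: fitness(*ind))
print(f"best found: lr={best[0]:.4f}, hidden units={best[1]}")
```

In a real tuning run, `fitness` would train and evaluate a model, which is exactly the expensive step that motivates replacing manual tuning with a search loop like this.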
  • Multi-Objective Parameter-less Population Pyramid for Solving Industrial Process Planning Problems
    arXiv.cs.NE Pub Date : 2020-09-10
    Michal Witold Przewozniczek; Piotr Dziurzanski; Shuai Zhao; Leandro Soares Indrusiak

    Evolutionary methods are effective tools for obtaining high-quality results when solving hard practical problems. Linkage learning may increase their effectiveness. One of the state-of-the-art methods that employ linkage learning is the Parameter-less Population Pyramid (P3). P3 is dedicated to solving single-objective problems in discrete domains. Recent research shows that P3 is highly competitive

    Updated: 2020-09-21
  • Low-Power Low-Latency Keyword Spotting and Adaptive Control with a SpiNNaker 2 Prototype and Comparison with Loihi
    arXiv.cs.NE Pub Date : 2020-09-18
    Yexin Yan; Terrence C. Stewart; Xuan Choo; Bernhard Vogginger; Johannes Partzsch; Sebastian Hoeppner; Florian Kelber; Chris Eliasmith; Steve Furber; Christian Mayr

    We implemented two neural network based benchmark tasks on a prototype chip of the second-generation SpiNNaker (SpiNNaker 2) neuromorphic system: keyword spotting and adaptive robotic control. Keyword spotting is commonly used in smart speakers to listen for wake words, and adaptive control is used in robotic applications to adapt to unknown dynamics in an online fashion. We highlight the benefit of

    Updated: 2020-09-21
  • Generating Efficient DNN-Ensembles with Evolutionary Computation
    arXiv.cs.NE Pub Date : 2020-09-18
    Marc Ortiz; Florian Scheidegger; Marc Casas; Cristiano Malossi; Eduard Ayguadé

    In this work, we leverage ensemble learning as a tool for the creation of faster, smaller, and more accurate deep learning models. We demonstrate that we can jointly optimize for accuracy, inference time, and the number of parameters by combining DNN classifiers. To achieve this, we combine multiple ensemble strategies: bagging, boosting, and an ordered chain of classifiers. To reduce the number of

    Updated: 2020-09-21
  • On the spatiotemporal behavior in biology-mimicking computing systems
    arXiv.cs.NE Pub Date : 2020-09-18
    János Végh; Ádám J. Berki

    The payload performance of conventional computing systems, from single processors to supercomputers, has reached the limits that nature enables. Both the growing demand to cope with "big data" (based on, or assisted by, artificial intelligence) and the interest in understanding the operation of our brain more completely have stimulated efforts to build biology-mimicking computing systems from inexpensive

    Updated: 2020-09-21
  • Pruning Neural Networks at Initialization: Why are We Missing the Mark?
    arXiv.cs.NE Pub Date : 2020-09-18
    Jonathan Frankle; Gintare Karolina Dziugaite; Daniel M. Roy; Michael Carbin

    Recent work has explored the possibility of pruning neural networks at initialization. We assess proposals for doing so: SNIP (Lee et al., 2019), GraSP (Wang et al., 2020), SynFlow (Tanaka et al., 2020), and magnitude pruning. Although these methods surpass the trivial baseline of random pruning, they remain below the accuracy of magnitude pruning after training, and we endeavor to understand why.

    Updated: 2020-09-21
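The two baselines this entry compares against, random pruning and magnitude pruning, differ only in how the sparsity mask is chosen. A minimal sketch over toy weights:

```python
import random

random.seed(3)
weights = [random.gauss(0, 1) for _ in range(100)]
sparsity = 0.8                       # prune 80% of the weights
keep = int(len(weights) * (1 - sparsity))

# magnitude pruning: keep the `keep` weights with the largest |w|
order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
mag_mask = [0] * len(weights)
for i in order[:keep]:
    mag_mask[i] = 1

# random pruning: keep the same number of weights, chosen uniformly
rand_mask = [0] * len(weights)
for i in random.sample(range(len(weights)), keep):
    rand_mask[i] = 1

surviving_mag = sum(abs(w) for w, m in zip(weights, mag_mask) if m)
surviving_rand = sum(abs(w) for w, m in zip(weights, rand_mask) if m)
print(f"kept |w| mass: magnitude={surviving_mag:.2f}, random={surviving_rand:.2f}")
```

By construction the magnitude mask always retains at least as much weight mass as the random mask at the same sparsity; the methods studied in the paper (SNIP, GraSP, SynFlow) replace |w| with other saliency scores computed at initialization.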
  • The Next Big Thing(s) in Unsupervised Machine Learning: Five Lessons from Infant Learning
    arXiv.cs.NE Pub Date : 2020-09-17
    Lorijn Zaadnoordijk; Tarek R. Besold; Rhodri Cusack

    After a surge in popularity of supervised Deep Learning, the desire to reduce the dependence on curated, labelled data sets and to leverage the vast quantities of unlabelled data available recently triggered renewed interest in unsupervised learning algorithms. Despite a significantly improved performance due to approaches such as the identification of disentangled latent representations, contrastive

    Updated: 2020-09-21
  • Evolutionary Selective Imitation: Interpretable Agents by Imitation Learning Without a Demonstrator
    arXiv.cs.NE Pub Date : 2020-09-17
    Roy Eliya; J. Michael Herrmann

    We propose a new method for training an agent via an evolutionary strategy (ES), in which we iteratively improve a set of samples to imitate: Starting with a random set, in every iteration we replace a subset of the samples with samples from the best trajectories discovered so far. The evaluation procedure for this set is to train, via supervised learning, a randomly initialised neural network (NN)

    Updated: 2020-09-20
  • EventProp: Backpropagation for Exact Gradients in Spiking Neural Networks
    arXiv.cs.NE Pub Date : 2020-09-17
    Timo C. Wunderlich; Christian Pehle

    We derive the backpropagation algorithm for spiking neural networks composed of leaky integrate-and-fire neurons operating in continuous time. This algorithm, EventProp, computes the exact gradient of an arbitrary loss function of spike times and membrane potentials by backpropagating errors in time. For the first time, by leveraging methods from optimal control theory, we are able to backpropagate

    Updated: 2020-09-20
  • Attracting Sets in Perceptual Networks
    arXiv.cs.NE Pub Date : 2020-09-17
    Robert Prentner

    This document gives a specification for the model used in [1]. It presents a simple way of optimizing mutual information between some input and the attractors of a (noisy) network, using a genetic algorithm. The nodes of this network are modeled as simplified versions of the structures described in the "interface theory of perception" [2]. Accordingly, the system is referred to as a "perceptual network"

    Updated: 2020-09-20
  • Distributional Generalization: A New Kind of Generalization
    arXiv.cs.NE Pub Date : 2020-09-17
    Preetum Nakkiran; Yamini Bansal

    We introduce a new notion of generalization-- Distributional Generalization-- which roughly states that outputs of a classifier at train and test time are close *as distributions*, as opposed to close in just their average error. For example, if we mislabel 30% of dogs as cats in the train set of CIFAR-10, then a ResNet trained to interpolation will in fact mislabel roughly 30% of dogs as cats on the

    Updated: 2020-09-20
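The mislabeling behavior this entry describes can be imitated with a toy interpolating classifier: here a 1-nearest-neighbour model (an assumption, standing in for a ResNet trained to interpolation) fit to labels with ~30% flips reproduces that flip rate on its own training points:

```python
import random

random.seed(4)

# two well-separated 1-d clusters; the true label is the cluster id
X = ([random.gauss(0, 0.5) for _ in range(100)] +
     [random.gauss(5, 0.5) for _ in range(100)])
true = [0] * 100 + [1] * 100
noisy = [1 - t if random.random() < 0.3 else t for t in true]  # ~30% flips

def predict(x):
    """1-nearest-neighbour prediction against the noisy training set."""
    i = min(range(len(X)), key=lambda j: abs(X[j] - x))
    return noisy[i]

# an interpolating classifier returns the noisy label on every training point,
# so the train-time "mislabel" rate matches the injected noise rate
train_preds = [predict(x) for x in X]
flip_rate = sum(p != t for p, t in zip(train_preds, true)) / len(true)
print(f"train-time mislabel rate: {flip_rate:.2f}")
```

Distributional generalization is the stronger claim that this behavior transfers from train to test points as a distribution, which this toy example does not attempt to demonstrate.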
  • Using Sensory Time-cue to enable Unsupervised Multimodal Meta-learning
    arXiv.cs.NE Pub Date : 2020-09-16
    Qiong Liu; Yanxia Zhang

    As data from IoT (Internet of Things) sensors become ubiquitous, state-of-the-art machine learning algorithms face many challenges on directly using sensor data. To overcome these challenges, methods must be designed to learn directly from sensors without manual annotations. This paper introduces Sensory Time-cue for Unsupervised Meta-learning (STUM). Different from traditional learning approaches

    Updated: 2020-09-20
  • Computational tool to study high dimensional dynamic in NMM
    arXiv.cs.NE Pub Date : 2020-09-16
    A. González-Mitjans; D. Paz-Linares; A. Areces-Gonzalez; ML. Bringas-Vega; P. A Valdés-Sosa

    Neuroscience has shown great progress in recent years. Several of its theoretical bases have arisen from the examination of dynamic systems using Neural Mass Models (NMMs). Due to the large-scale brain dynamics of NMMs and the difficulty of studying nonlinear systems, the local linearization approach to discretize the state equation was used via an algebraic formulation, as it intervenes favorably

    Updated: 2020-09-18
  • An Extensive Experimental Evaluation of Automated Machine Learning Methods for Recommending Classification Algorithms (Extended Version)
    arXiv.cs.NE Pub Date : 2020-09-16
    Márcio P. Basgalupp; Rodrigo C. Barros; Alex G. C. de Sá; Gisele L. Pappa; Rafael G. Mantovani; André C. P. L. F. de Carvalho; Alex A. Freitas

    This paper presents an experimental comparison among four Automated Machine Learning (AutoML) methods for recommending the best classification algorithm for a given input dataset. Three of these methods are based on Evolutionary Algorithms (EAs), and the other is Auto-WEKA, a well-known AutoML method based on the Combined Algorithm Selection and Hyper-parameter optimisation (CASH) approach. The EA-based

    Updated: 2020-09-18
  • Ensemble learning of diffractive optical networks
    arXiv.cs.NE Pub Date : 2020-09-15
    Md Sadman Sakib Rahman; Jingxi Li; Deniz Mengu; Yair Rivenson; Aydogan Ozcan

    A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware, due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency and computation speed. Diffractive Deep Neural Networks (D2NNs) form such

    Updated: 2020-09-16
  • Short-term synaptic plasticity optimally models continuous environments
    arXiv.cs.NE Pub Date : 2020-09-15
    Timoleon Moraitis (IBM Research - Zurich); Abu Sebastian (IBM Research - Zurich); Evangelos Eleftheriou (IBM Research - Zurich)

    Biological neural networks operate with extraordinary energy efficiency, owing to properties such as spike-based communication and synaptic plasticity driven by local activity. When emulated in silico, such properties also enable highly energy-efficient machine learning and inference systems. However, it is unclear whether these mechanisms only trade off performance for efficiency or rather they are

    Updated: 2020-09-16
  • Variable Binding for Sparse Distributed Representations: Theory and Applications
    arXiv.cs.NE Pub Date : 2020-09-14
    E. Paxon Frady; Denis Kleyko; Friedrich T. Sommer

    Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap. However, classical VSAs and neural networks are still considered incompatible. VSAs encode symbols by dense pseudo-random vectors, where information is distributed throughout the entire neuron population. Neural networks

    Updated: 2020-09-16
  • AutoML for Multilayer Perceptron and FPGA Co-design
    arXiv.cs.NE Pub Date : 2020-09-14
    Philip Colangelo; Oren Segal; Alex Speicher; Martin Margala

    State-of-the-art Neural Network Architectures (NNAs) are challenging to design and implement efficiently in hardware. In the past couple of years, this has led to an explosion in research and development of automatic Neural Architecture Search (NAS) tools. AutoML tools are now used to achieve state-of-the-art NNA designs and attempt to optimize for hardware usage and design. Much of the recent research

    Updated: 2020-09-15
  • Simple Simultaneous Ensemble Learning in Genetic Programming
    arXiv.cs.NE Pub Date : 2020-09-13
    Marco Virgolin

    Learning ensembles can substantially improve the generalization performance of low-bias high-variance estimators such as deep decision trees and deep nets. Improvements have also been found when Genetic Programming (GP) is used to learn the estimators. Yet, the best way to learn ensembles in GP remains to be determined, especially considering that the population of GP can be exploited to learn ensemble

    Updated: 2020-09-15
  • Extracting Optimal Solution Manifolds using Constrained Neural Optimization
    arXiv.cs.NE Pub Date : 2020-09-13
    Gurpreet Singh; Soumyajit Gupta; Matthew Lease

    Constrained Optimization solution algorithms are restricted to point based solutions. In practice, single or multiple objectives must be satisfied, wherein both the objective function and constraints can be non-convex resulting in multiple optimal solutions. Real world scenarios include intersecting surfaces as Implicit Functions, Hyperspectral Unmixing and Pareto Optimal fronts. Local or global convexification

    Updated: 2020-09-15
  • A Systematic Literature Review on the Use of Deep Learning in Software Engineering Research
    arXiv.cs.NE Pub Date : 2020-09-14
    Cody Watson; Nathan Cooper; David Nader Palacio; Kevin Moran; Denys Poshyvanyk

    An increasingly popular set of techniques adopted by software engineering (SE) researchers to automate development tasks are those rooted in the concept of Deep Learning (DL). The popularity of such techniques largely stems from their automated feature engineering capabilities, which aid in modeling software artifacts. However, due to the rapid pace at which DL techniques have been adopted, it is difficult

    Updated: 2020-09-15
  • P-CRITICAL: A Reservoir Autoregulation Plasticity Rule for Neuromorphic Hardware
    arXiv.cs.NE Pub Date : 2020-09-11
    Ismael Balafrej; Jean Rouat

    Backpropagation algorithms on recurrent artificial neural networks require an unfolding of accumulated states over time. These states must be kept in memory for an undefined period of time which is task-dependent. This paper uses the reservoir computing paradigm where an untrained recurrent neural network layer is used as a preprocessor stage to learn temporal and limited data. These so-called reservoirs

    Updated: 2020-09-15
  • Adaptive Convolution Kernel for Artificial Neural Networks
    arXiv.cs.NE Pub Date : 2020-09-14
    F. Boray Tek; İlker Çam; Deniz Karlı

    Many deep neural networks are built by using stacked convolutional layers of fixed and single size (often 3$\times$3) kernels. This paper describes a method for training the size of convolutional kernels to provide varying size kernels in a single layer. The method utilizes a differentiable, and therefore backpropagation-trainable Gaussian envelope which can grow or shrink in a base grid. Our experiments

    Updated: 2020-09-15
  • Reservoir Memory Machines as Neural Computers
    arXiv.cs.NE Pub Date : 2020-09-14
    Benjamin Paaßen; Alexander Schulz; Terrence C. Stewart; Barbara Hammer

    Differentiable neural computers extend artificial neural networks with an explicit memory without interference, thus enabling the model to perform classic computation tasks such as graph traversal. However, such models are difficult to train, requiring long training times and large datasets. In this work, we achieve some of the computational capabilities of differentiable neural computers with a model

    Updated: 2020-09-15
  • IEO: Intelligent Evolutionary Optimisation for Hyperparameter Tuning
    arXiv.cs.NE Pub Date : 2020-09-10
    Yuxi Huan; Fan Wu; Michail Basios; Leslie Kanthan; Lingbo Li; Baowen Xu

    Hyperparameter optimisation is a crucial process in searching for the optimal machine learning model. The efficiency of finding the optimal hyperparameter settings has been a major concern in recent research, since the optimisation process can be time-consuming, especially when the objective functions are expensive to evaluate. In this paper, we introduce an intelligent evolutionary optimisation
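    As a rough illustration of evolutionary hyperparameter search in general (not the paper's IEO method), a minimal truncation-selection loop over a single hyperparameter might look as follows; the toy quadratic objective stands in for an expensive validation loss, and all constants are invented.

```python
import random

random.seed(0)

def objective(lr):
    """Toy stand-in for an expensive validation loss, minimised at lr = 0.1."""
    return (lr - 0.1) ** 2

def evolve(pop_size=10, generations=20, sigma=0.05):
    # Random initial population of candidate hyperparameter values.
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [max(0.0, p + random.gauss(0.0, sigma)) for p in parents]
        pop = parents + children                # elitist replacement
    return min(pop, key=objective)

best = evolve()
```

    Because parents survive unchanged, the best candidate found so far is never lost; an "intelligent" variant such as the paper's would additionally adapt the search moves to cut down the number of expensive evaluations.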

    Updated: 2020-09-15
  • Risk Bounds for Robust Deep Learning
    arXiv.cs.NE Pub Date : 2020-09-14
    Johannes Lederer

    It has been observed that certain loss functions can render deep-learning pipelines robust against flaws in the data. In this paper, we support these empirical findings with statistical theory. We especially show that empirical-risk minimization with unbounded, Lipschitz-continuous loss functions, such as the least-absolute deviation loss, Huber loss, Cauchy loss, and Tukey's biweight loss, can provide
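    To see why such losses help, compare the Huber loss against the squared loss on residuals that contain one outlier: the robust loss grows only linearly in the tail, so the outlier's contribution stays bounded. A minimal sketch with made-up residuals:

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

residuals = np.array([0.1, -0.2, 0.05, 10.0])  # last entry is an outlier

squared = 0.5 * residuals ** 2   # the outlier contributes 50.0
robust = huber(residuals)        # the outlier contributes only 9.5
```

    The Lipschitz continuity that bounds the tail growth here is exactly the property the paper's risk bounds exploit.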

    Updated: 2020-09-15
  • Iterative beam search algorithms for the permutation flowshop
    arXiv.cs.NE Pub Date : 2020-09-12
    Luc Libralesso; Pablo Andres Focke; Aurélien Secardin; Vincent Jost

    We study an iterative beam search algorithm for the permutation flowshop (makespan and flowtime minimization). This algorithm combines branching strategies inspired by recent branch-and-bound algorithms and a guidance strategy inspired by the LR heuristic. It obtains competitive results, reporting many new best-so-far solutions on the VFR benchmark (makespan minimization) and the Taillard benchmark (flowtime minimization)
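    A minimal beam-search sketch for the makespan objective may help fix ideas; it uses a plain greedy guidance in place of the paper's LR-inspired strategy, and the processing-time matrix is a made-up example.

```python
# Hypothetical 4-job x 3-machine processing-time matrix.
P = [
    [3, 2, 4],   # job 0 on machines 0..2
    [2, 4, 1],
    [4, 1, 3],
    [1, 3, 2],
]

def makespan(perm):
    """Completion time of the last job on the last machine for a
    (possibly partial) permutation of jobs."""
    m = len(P[0])
    finish = [0] * m
    for j in perm:
        for k in range(m):
            # A job starts on machine k after both the previous job on
            # machine k and its own operation on machine k-1 finish.
            start = max(finish[k], finish[k - 1] if k else 0)
            finish[k] = start + P[j][k]
    return finish[-1]

def beam_search(beam_width=2):
    n = len(P)
    beam = [()]
    for _ in range(n):
        children = [b + (j,) for b in beam for j in range(n) if j not in b]
        # Greedy guidance: keep the partial schedules of lowest makespan.
        children.sort(key=makespan)
        beam = children[:beam_width]
    return beam[0]

best = beam_search()
```

    An iterative variant, as in the paper, would restart this loop with geometrically growing beam widths, trading time for solution quality.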

    Updated: 2020-09-15
  • EdgeLoc: An Edge-IoT Framework for Robust Indoor Localization Using Capsule Networks
    arXiv.cs.NE Pub Date : 2020-09-12
    Qianwen Ye; Xiaochen Fan; Gengfa Fang; Hongxia Bie; Chaocan Xiang; Xudong Song; Xiangjian He

    With the unprecedented demand for location-based services in indoor scenarios, wireless indoor localization has become essential for mobile users. Since GPS is unavailable in indoor spaces, WiFi RSS fingerprinting has become popular owing to its ubiquitous accessibility. However, achieving robust and efficient indoor localization faces two major challenges. First, the localization accuracy
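    A bare-bones form of RSS fingerprinting is k-nearest-neighbour matching against an offline database of reference measurements; the access points, fingerprint vectors, and positions below are invented for illustration (the paper itself uses capsule networks instead of kNN).

```python
import math

# Hypothetical offline fingerprint database: RSS vectors (dBm) from
# three access points, measured at known reference positions (x, y).
fingerprints = [
    ((-40, -70, -80), (0.0, 0.0)),
    ((-70, -40, -80), (4.0, 0.0)),
    ((-70, -70, -40), (2.0, 3.0)),
    ((-55, -55, -75), (2.0, 0.0)),
]

def locate(rss, k=2):
    """k-nearest-neighbour fingerprint matching: average the positions
    of the k reference points whose RSS vectors are closest."""
    ranked = sorted(fingerprints, key=lambda fp: math.dist(fp[0], rss))
    nearest = [pos for _, pos in ranked[:k]]
    return (sum(x for x, _ in nearest) / k,
            sum(y for _, y in nearest) / k)

pos = locate((-45, -65, -78))
```

    The two challenges the abstract alludes to show up even in this toy: RSS vectors are noisy, and the offline survey that builds `fingerprints` is costly to maintain.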

    Updated: 2020-09-15
  • Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain
    arXiv.cs.NE Pub Date : 2020-09-11
    Beren Millidge; Alexander Tschantz; Christopher L Buckley; Anil Seth

    Can the powerful backpropagation of error (backprop) learning algorithm be formulated in a manner suitable for implementation in neural circuitry? The primary challenge is to ensure that any candidate formulation uses only local information, rather than relying on global (error) signals, as in orthodox backprop. Recently, several algorithms for approximating backprop using only local signals

    Updated: 2020-09-14
  • Understanding the Role of Individual Units in a Deep Neural Network
    arXiv.cs.NE Pub Date : 2020-09-10
    David Bau; Jun-Yan Zhu; Hendrik Strobelt; Agata Lapedriza; Bolei Zhou; Antonio Torralba

    Deep neural networks excel at finding hierarchical representations that solve complex tasks over large data sets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional

    Updated: 2020-09-11
Contents have been reproduced by permission of the publishers.