Current journal: arXiv - CS - Neural and Evolutionary Computing
  • Variable Division and Optimization for Constrained Multiobjective Portfolio Problems
    arXiv.cs.NE Pub Date : 2021-01-21
    Yi Chen; Aimin Zhou

    Variable division and optimization (D&O) is a frequently used algorithm design paradigm in Evolutionary Algorithms (EAs). A D&O EA divides a variable into partial variables and then optimizes them respectively. A complicated problem is thus divided into simple subtasks. For example, a variable of a portfolio problem can be divided into two partial variables, i.e. the selection of assets and the

    Updated: 2021-01-22
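As a hypothetical illustration of the D&O idea described in this abstract (not the algorithm from the paper), a portfolio decision variable can be divided into a discrete asset-selection part and a continuous weight part, each of which an EA could then optimize separately:

```python
import random

def random_portfolio(n_assets, k):
    """Divide the decision variable into two partial variables:
    which assets to hold (selection) and how much of each (weights)."""
    selection = random.sample(range(n_assets), k)  # partial variable 1
    raw = [random.random() for _ in selection]
    weights = [r / sum(raw) for r in raw]          # partial variable 2
    return selection, weights

random.seed(0)
sel, w = random_portfolio(n_assets=10, k=3)
assert len(sel) == 3 and abs(sum(w) - 1.0) < 1e-9
```

A D&O EA would evolve the two partial variables in alternation rather than sampling them at random as done here.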
  • Can stable and accurate neural networks be computed? -- On the barriers of deep learning and Smale's 18th problem
    arXiv.cs.NE Pub Date : 2021-01-20
    Vegard Antun; Matthew J. Colbrook; Anders C. Hansen

    Deep learning (DL) has had unprecedented success and is now entering scientific computing with full force. However, DL suffers from a universal phenomenon: instability, despite universal approximating properties that often guarantee the existence of stable neural networks (NNs). We show the following paradox. There are basic well-conditioned problems in scientific computing where one can prove the

    Updated: 2021-01-22
  • Can a Fruit Fly Learn Word Embeddings?
    arXiv.cs.NE Pub Date : 2021-01-18
    Yuchen Liang; Chaitanya K. Ryali; Benjamin Hoover; Leopold Grinberg; Saket Navlakha; Mohammed J. Zaki; Dmitry Krotov

    The mushroom body of the fruit fly brain is one of the best studied systems in neuroscience. At its core it consists of a population of Kenyon cells, which receive inputs from multiple sensory modalities. These cells are inhibited by the anterior paired lateral neuron, thus creating a sparse high dimensional representation of the inputs. In this work we study a mathematical formalization of this network

    Updated: 2021-01-22
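A toy sketch of the circuit this abstract describes: a random projection into many Kenyon cells followed by winner-take-all inhibition, yielding a sparse binary code. The layer sizes and the top-k rule here are illustrative assumptions, not the paper's model:

```python
import random

def fly_hash(x, n_kenyon=40, k=4, seed=1):
    """Sparse binary projection followed by APL-like inhibition:
    only the k most active Kenyon cells stay on."""
    rng = random.Random(seed)
    proj = [[rng.choice([0, 1]) for _ in x] for _ in range(n_kenyon)]
    acts = [sum(p * v for p, v in zip(row, x)) for row in proj]
    top = set(sorted(range(n_kenyon), key=lambda i: -acts[i])[:k])
    return [1 if i in top else 0 for i in range(n_kenyon)]

code = fly_hash([0.2, 0.9, 0.1, 0.7])
assert sum(code) == 4  # sparse: exactly k active cells
```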
  • Zero-Cost Proxies for Lightweight NAS
    arXiv.cs.NE Pub Date : 2021-01-20
    Mohamed S. Abdelfattah; Abhinav Mehrotra; Łukasz Dudziak; Nicholas D. Lane

    Neural Architecture Search (NAS) is quickly becoming the standard methodology to design neural network models. However, NAS is typically compute-intensive because multiple models need to be evaluated before choosing the best one. To reduce the computational power and time needed, a proxy task is often used for evaluating each model instead of full training. In this paper, we evaluate conventional reduced-training

    Updated: 2021-01-21
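To make the proxy idea concrete (this trivial parameter-count score is a stand-in for illustration, not one of the zero-cost proxies the paper evaluates), candidate architectures can be ranked by a score computed without any training:

```python
def param_count(widths):
    """Toy zero-cost proxy: score an MLP architecture (list of layer
    widths) by its number of weights and biases, with no training."""
    return sum(a * b + b for a, b in zip(widths, widths[1:]))

# two hypothetical candidate architectures
candidates = {"small": [16, 32, 10], "wide": [16, 128, 10]}
best = max(candidates, key=lambda name: param_count(candidates[name]))
assert best == "wide"
```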
  • Illuminating the Space of Beatable Lode Runner Levels Produced By Various Generative Adversarial Networks
    arXiv.cs.NE Pub Date : 2021-01-19
    Kirby Steckel; Jacob Schrum

    Generative Adversarial Networks (GANs) are capable of generating convincing imitations of elements from a training set, but the distribution of elements in the training set affects the difficulty of properly training the GAN and the quality of the outputs it produces. This paper looks at six different GANs trained on different subsets of data from the game Lode Runner. The quality diversity algorithm

    Updated: 2021-01-21
  • SEMULATOR: Emulating the Dynamics of Crossbar Array-based Analog Neural System with Regression Neural Networks
    arXiv.cs.NE Pub Date : 2021-01-19
    Chaeun Lee; Seyoung Kim

    As deep neural networks require a tremendous amount of computation and memory, analog computing with emerging memory devices is a promising alternative to digital computing for edge devices. However, because of the increasing simulation time of analog computing systems, this direction has not been fully explored. To overcome this issue, analytically approximated simulators have been developed, but these models are inaccurate

    Updated: 2021-01-21
  • Implicit Bias of Linear RNNs
    arXiv.cs.NE Pub Date : 2021-01-19
    Melikasadat Emami; Mojtaba Sahraee-Ardakan; Parthe Pandit; Sundeep Rangan; Alyson K. Fletcher

    Contemporary wisdom based on empirical studies suggests that standard recurrent neural networks (RNNs) do not perform well on tasks requiring long-term memory. However, precise reasoning for this behavior is still unknown. This paper provides a rigorous explanation of this property in the special case of linear RNNs. Although this work is limited to linear RNNs, even these systems have traditionally

    Updated: 2021-01-21
  • Self-Organizing Intelligent Matter: A blueprint for an AI generating algorithm
    arXiv.cs.NE Pub Date : 2021-01-19
    Karol Gregor; Frederic Besse

    We propose an artificial life framework aimed at facilitating the emergence of intelligent organisms. In this framework there is no explicit notion of an agent: instead there is an environment made of atomic elements. These elements contain neural operations and interact through exchanges of information and through physics-like rules contained in the environment. We discuss how an evolutionary process

    Updated: 2021-01-20
  • Synaptic metaplasticity in binarized neural networks
    arXiv.cs.NE Pub Date : 2021-01-19
    Axel Laborieux; Maxence Ernoult; Tifenn Hirtzlin; Damien Querlioz

    Unlike the brain, artificial neural networks, including state-of-the-art deep neural networks for computer vision, are subject to "catastrophic forgetting": they rapidly forget the previous task when trained on a new one. Neuroscience suggests that biological synapses avoid this issue through the process of synaptic consolidation and metaplasticity: the plasticity itself changes upon repeated synaptic

    Updated: 2021-01-20
  • A synthetic biology approach for the design of genetic algorithms with bacterial agents
    arXiv.cs.NE Pub Date : 2021-01-19
    A. Gargantilla Becerra; M. Gutiérrez; R. Lahoz-Beltra

    Bacteria have been a source of inspiration for the design of evolutionary algorithms. At the beginning of the 21st century synthetic biology was born, a discipline whose goal is the design of biological systems that do not exist in nature, for example, programmable synthetic bacteria. In this paper, we introduce as a novelty the design of evolutionary algorithms where all the steps are conducted

    Updated: 2021-01-20
  • A Surrogate-Assisted Variable Grouping Algorithm for General Large Scale Global Optimization Problems
    arXiv.cs.NE Pub Date : 2021-01-19
    An Chen; Zhigang Ren; Muyi Wang; Yongsheng Liang; Hanqing Liu; Wenhao Du

    Problem decomposition plays a vital role when applying cooperative coevolution (CC) to large scale global optimization problems. However, most learning-based decomposition algorithms either apply only to additively separable problems or face the issue of false separability detection. To address these limitations, this study proposes a novel decomposition algorithm called surrogate-assisted

    Updated: 2021-01-20
  • Intelligent Frame Selection as a Privacy-Friendlier Alternative to Face Recognition
    arXiv.cs.NE Pub Date : 2021-01-19
    Mattijs Baert; Sam Leroux; Pieter Simoens

    The widespread deployment of surveillance cameras for facial recognition gives rise to many privacy concerns. This study proposes a privacy-friendly alternative to large scale facial recognition. While there are multiple techniques to preserve privacy, our work is based on the minimization principle which implies minimizing the amount of collected personal data. Instead of running facial recognition

    Updated: 2021-01-20
  • ES-ENAS: Combining Evolution Strategies with Neural Architecture Search at No Extra Cost for Reinforcement Learning
    arXiv.cs.NE Pub Date : 2021-01-19
    Xingyou Song; Krzysztof Choromanski; Jack Parker-Holder; Yunhao Tang; Daiyi Peng; Deepali Jain; Wenbo Gao; Aldo Pacchiano; Tamas Sarlos; Yuxiang Yang

    We introduce ES-ENAS, a simple neural architecture search (NAS) algorithm for reinforcement learning (RL) policy design, combining Evolutionary Strategies (ES) and Efficient NAS (ENAS) in a highly scalable and intuitive way. Our main insight is that ES is already a distributed blackbox algorithm, and thus we may simply insert a model controller from ENAS into the central

    Updated: 2021-01-20
  • Training Learned Optimizers with Randomly Initialized Learned Optimizers
    arXiv.cs.NE Pub Date : 2021-01-14
    Luke Metz; C. Daniel Freeman; Niru Maheswaranathan; Jascha Sohl-Dickstein

    Learned optimizers are increasingly effective, with performance exceeding that of hand-designed optimizers such as Adam (Kingma & Ba, 2014) on specific tasks (Metz et al., 2019). Despite the potential gains available, in current work the meta-training (or 'outer-training') of the learned optimizer is performed by a hand-designed optimizer, or by an optimizer trained by a hand-designed

    Updated: 2021-01-20
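The notion of a learned optimizer can be sketched minimally: the update rule itself carries parameters that would normally be meta-trained. The rule and its parameter values below are hypothetical placeholders, far simpler than the neural-network optimizers the paper studies:

```python
def learned_opt(theta, grad, params):
    """A minimal 'learned optimizer': the update rule has its own
    parameters (here a scale and a decay), which meta-training would
    tune instead of being hand-designed."""
    lr, decay = params
    return [t * (1 - decay) - lr * g for t, g in zip(theta, grad)]

# inner loop on f(x) = x^2 with a fixed, illustrative rule
theta = [2.0]
for _ in range(20):
    grad = [2 * t for t in theta]
    theta = learned_opt(theta, grad, params=(0.1, 0.0))
assert abs(theta[0]) < 0.1  # the rule drives theta toward the minimum
```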
  • Benchmarking Perturbation-based Saliency Maps for Explaining Deep Reinforcement Learning Agents
    arXiv.cs.NE Pub Date : 2021-01-18
    Tobias Huber; Benedikt Limmer; Elisabeth André

    Recent years have seen a plethora of work on explaining complex intelligent agents. One example is the development of several algorithms that generate saliency maps showing how much each pixel contributed to the agent's decision. However, most evaluations of such saliency maps focus on image classification tasks. As far as we know, there is no work which thoroughly compares different saliency maps for

    Updated: 2021-01-20
  • Guided parallelized stochastic gradient descent for delay compensation
    arXiv.cs.NE Pub Date : 2021-01-17
    Anuraganand Sharma

    The stochastic gradient descent (SGD) algorithm and its variations have been used effectively to optimize neural network models. However, with the rapid growth of big data and deep learning, SGD is no longer the most suitable choice due to its inherently sequential optimization of the error function. This has led to the development of parallel SGD algorithms, such as asynchronous SGD (ASGD) and

    Updated: 2021-01-20
  • A Spiking Central Pattern Generator for the control of a simulated lamprey robot running on SpiNNaker and Loihi neuromorphic boards
    arXiv.cs.NE Pub Date : 2021-01-18
    Emmanouil Angelidis; Emanuel Buchholz; Jonathan Patrick Arreguit O'Neil; Alexis Rougè; Terrence Stewart; Axel von Arnim; Alois Knoll; Auke Ijspeert

    Central Pattern Generator (CPG) models have long been used both to investigate the neural mechanisms that underlie animal locomotion and as a tool for robotic research. In this work we propose a spiking CPG neural network and its implementation on neuromorphic hardware as a means to control a simulated lamprey model. To construct our CPG model, we employ the naturally emerging dynamical systems

    Updated: 2021-01-19
  • Performance Analysis and Improvement of Parallel Differential Evolution
    arXiv.cs.NE Pub Date : 2021-01-17
    Pan Zibin

    Differential evolution (DE) is an effective global evolutionary optimization algorithm used to solve global optimization problems, mainly in a continuous domain. In this field, researchers pay more attention to improving the capability of DE to find better global solutions; however, the computational performance of DE is also a very interesting aspect, especially when the problem scale is quite large

    Updated: 2021-01-19
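For reference, a compact sketch of the sequential DE/rand/1/bin generation that parallel variants like the one above build on; the sphere objective and hyperparameters are illustrative choices:

```python
import random

def de_step(pop, f=0.5, cr=0.9):
    """One generation of DE/rand/1/bin on a toy sphere objective."""
    def fitness(x):
        return sum(v * v for v in x)
    new_pop = []
    for i, x in enumerate(pop):
        others = [p for j, p in enumerate(pop) if j != i]
        a, b, c = random.sample(others, 3)          # mutation base vectors
        mutant = [av + f * (bv - cv) for av, bv, cv in zip(a, b, c)]
        j_rand = random.randrange(len(x))           # forced crossover index
        trial = [mv if (random.random() < cr or j == j_rand) else xv
                 for j, (xv, mv) in enumerate(zip(x, mutant))]
        # greedy selection keeps the better of parent and trial
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)
    return new_pop

random.seed(42)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
sphere = lambda x: sum(v * v for v in x)
before = min(sphere(x) for x in pop)
for _ in range(50):
    pop = de_step(pop)
after = min(sphere(x) for x in pop)
assert after <= before  # greedy selection never worsens the best
```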
  • Faster Convergence in Deep-Predictive-Coding Networks to Learn Deeper Representations
    arXiv.cs.NE Pub Date : 2021-01-18
    Isaac J. Sledge; Jose C. Principe

    Deep-predictive-coding networks (DPCNs) are hierarchical, generative models that rely on feed-forward and feed-back connections to modulate latent feature representations of stimuli in a dynamic and context-sensitive manner. A crucial element of DPCNs is a forward-backward inference procedure to uncover sparse states of a dynamic model, which are used for invariant feature extraction. However, this

    Updated: 2021-01-19
  • Deep-Mobility: A Deep Learning Approach for an Efficient and Reliable 5G Handover
    arXiv.cs.NE Pub Date : 2021-01-17
    Rahul Arun Paropkari; Anurag Thantharate; Cory Beard

    5G cellular networks are being deployed all over the world and this architecture supports ultra-dense network (UDN) deployment. Small cells have a very important role in providing 5G connectivity to end users. Exponential increases in devices, data and network demands make it mandatory for service providers to manage handovers better, to cater to the services that a user desires. In contrast

    Updated: 2021-01-19
  • Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks
    arXiv.cs.NE Pub Date : 2021-01-16
    Jia Liu; Yaochu Jin

    Many existing deep learning models are vulnerable to adversarial examples that are imperceptible to humans. To address this issue, various methods have been proposed to design network architectures that are robust to one particular type of adversarial attack. It is practically impossible, however, to predict beforehand which type of attack a machine learning model may suffer from. To address this challenge

    Updated: 2021-01-19
  • Controllable reset behavior in domain wall-magnetic tunnel junction artificial neurons for task-adaptable computation
    arXiv.cs.NE Pub Date : 2021-01-08
    Samuel Liu; Christopher H. Bennett; Joseph S. Friedman; Matthew J. Marinella; David Paydarfar; Jean Anne C. Incorvia

    Neuromorphic computing with spintronic devices has been of interest due to the limitations of CMOS-driven von Neumann computing. Domain wall-magnetic tunnel junction (DW-MTJ) devices have been shown to be able to intrinsically capture biological neuron behavior. Edgy-relaxed behavior, where a frequently firing neuron experiences a lower action potential threshold, may provide additional artificial

    Updated: 2021-01-19
  • A New Artificial Neuron Proposal with Trainable Simultaneous Local and Global Activation Function
    arXiv.cs.NE Pub Date : 2021-01-15
    Tiago A. E. Ferreira; Marios Mattheakis; Pavlos Protopapas

    The activation function plays a fundamental role in the artificial neural network learning process. However, there is no obvious choice or procedure to determine the best activation function, which depends on the problem. This study proposes a new artificial neuron, named global-local neuron, with a trainable activation function composed of two components, a global and a local. The global component

    Updated: 2021-01-18
  • A Novel Prediction Approach for Exploring PM2.5 Spatiotemporal Propagation Based on Convolutional Recursive Neural Networks
    arXiv.cs.NE Pub Date : 2021-01-15
    Hsing-Chung Chen; Karisma Trinanda Putra; Jerry Chun-Wei Lin

    The spread of PM2.5 pollutants that endanger health is difficult to predict because it involves many atmospheric variables. These micron particles can spread rapidly from their source to residential areas, increasing the risk of respiratory disease if exposed for long periods. The prediction system of PM2.5 propagation provides more detailed and accurate information as an early warning system to reduce

    Updated: 2021-01-18
  • The Geometry of Deep Generative Image Models and its Applications
    arXiv.cs.NE Pub Date : 2021-01-15
    Binxu Wang; Carlos R. Ponce

    Generative adversarial networks (GANs) have emerged as a powerful unsupervised method to model the statistical patterns of real-world data sets, such as natural images. These networks are trained to map random inputs in their latent space to new samples representative of the learned data. However, the structure of the latent space is hard to intuit due to its high dimensionality and the non-linearity

    Updated: 2021-01-18
  • Convolutional Neural Network with Pruning Method for Handwritten Digit Recognition
    arXiv.cs.NE Pub Date : 2021-01-15
    Mengyu Chen

    The CNN model is a popular method for imagery analysis, so it can be utilized to recognize handwritten digits from the MNIST dataset. For higher recognition accuracy, various CNN models with different fully connected layer sizes are exploited to figure out the relationship between the fully connected layer size and the recognition accuracy. Inspired by previous pruning work, we performed pruning

    Updated: 2021-01-18
  • Unveiling the role of plasticity rules in reservoir computing
    arXiv.cs.NE Pub Date : 2021-01-14
    Guillermo B. Morales; Claudio R. Mirasso; Miguel C. Soriano

    Reservoir Computing (RC) is an appealing approach in Machine Learning that combines the high computational capabilities of Recurrent Neural Networks with a fast and easy training method. Likewise, successful implementation of neuro-inspired plasticity rules into RC artificial networks has boosted the performance of the original models. In this manuscript, we analyze the role that plasticity rules play

    Updated: 2021-01-18
  • A Nature-Inspired Feature Selection Approach based on Hypercomplex Information
    arXiv.cs.NE Pub Date : 2021-01-14
    Gustavo H. de Rosa; João Paulo Papa; Xin-She Yang

    Feature selection for a given model can be transformed into an optimization task. The essential idea behind it is to find the most suitable subset of features according to some criterion. Nature-inspired optimization can mitigate this problem by producing compelling yet straightforward solutions when dealing with complicated fitness functions. Additionally, new mathematical representations, such as

    Updated: 2021-01-15
  • A Multiple Classifier Approach for Concatenate-Designed Neural Networks
    arXiv.cs.NE Pub Date : 2021-01-14
    Ka-Hou Chan; Sio-Kei Im; Wei Ke

    This article introduces a multiple classifier method to improve the performance of concatenate-designed neural networks, such as ResNet and DenseNet, with the purpose of alleviating the pressure on the final classifier. We give the design of the classifiers, which collect the features produced between the network sets, and present the constituent layers and the activation function for the classifiers

    Updated: 2021-01-15
  • Optimal Energy Shaping via Neural Approximators
    arXiv.cs.NE Pub Date : 2021-01-14
    Stefano Massaroli; Michael Poli; Federico Califano; Jinkyoo Park; Atsushi Yamashita; Hajime Asama

    We introduce optimal energy shaping as an enhancement of classical passivity-based control methods. A promising feature of passivity theory, alongside stability, has traditionally been claimed to be intuitive performance tuning along the execution of a given task. However, a systematic approach to adjust performance within a passive control framework has yet to be developed, as each method relies on

    Updated: 2021-01-15
  • Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias
    arXiv.cs.NE Pub Date : 2021-01-14
    Axel Laborieux; Maxence Ernoult; Benjamin Scellier; Yoshua Bengio; Julie Grollier; Damien Querlioz

    Equilibrium Propagation (EP) is a biologically-inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the locality in space of its learning rule, fosters the design of energy-efficient hardware dedicated to learning. In practice, however, EP does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient

    Updated: 2021-01-15
  • A threshold search based memetic algorithm for the disjunctively constrained knapsack problem
    arXiv.cs.NE Pub Date : 2021-01-12
    Zequn Wei; Jin-Kao Hao

    The disjunctively constrained knapsack problem (DCKP) consists in packing a subset of pairwise compatible items into a capacity-constrained knapsack such that the total profit of the selected items is maximized while the knapsack capacity is satisfied. The DCKP has numerous applications but is computationally challenging (NP-hard). In this work, we present a threshold search based memetic algorithm for

    Updated: 2021-01-14
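The problem statement can be made concrete with a small feasibility-and-profit checker for candidate DCKP solutions (an illustrative sketch of the problem only, not the paper's memetic algorithm):

```python
def dckp_value(items, conflicts, capacity, chosen):
    """Evaluate a DCKP solution: selected items must be pairwise
    compatible and fit the knapsack; return total profit, or None
    if the solution is infeasible."""
    chosen = set(chosen)
    for a, b in conflicts:
        if a in chosen and b in chosen:
            return None                      # disjunctive constraint violated
    weight = sum(items[i][0] for i in chosen)
    if weight > capacity:
        return None                          # capacity constraint violated
    return sum(items[i][1] for i in chosen)

items = {0: (3, 10), 1: (4, 12), 2: (2, 7)}  # item: (weight, profit)
assert dckp_value(items, [(0, 1)], 6, [0, 2]) == 17
assert dckp_value(items, [(0, 1)], 6, [0, 1]) is None
```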
  • Self-Training Pre-Trained Language Models for Zero- and Few-Shot Multi-Dialectal Arabic Sequence Labeling
    arXiv.cs.NE Pub Date : 2021-01-12
    Muhammad Khalifa; Muhammad Abdul-Mageed; Khaled Shaalan

    A sufficient amount of annotated data is required to fine-tune pre-trained language models for downstream tasks. Unfortunately, attaining labeled data can be costly, especially for multiple language varieties/dialects. We propose to self-train pre-trained language models in zero- and few-shot scenarios to improve the performance on data-scarce dialects using only resources from data-rich ones. We demonstrate

    Updated: 2021-01-14
  • An Evolutionary Game Model for Understanding Fraud in Consumption Taxes
    arXiv.cs.NE Pub Date : 2021-01-12
    M. Chica; J. Hernandez; C. Manrique-de-Lara-Peñate; R. Chiong

    This paper presents a computational evolutionary game model to study and understand fraud dynamics in the consumption tax system. Players are cooperators if they correctly declare their value added tax (VAT), and are defectors otherwise. Each player's payoff is influenced by the amount evaded and the subjective probability of being inspected by tax authorities. Since transactions between companies

    Updated: 2021-01-13
  • Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks
    arXiv.cs.NE Pub Date : 2021-01-12
    Karina Vasquez; Yeshwanth Venkatesha; Abhiroop Bhattacharjee; Abhishek Moitra; Priyadarshini Panda

    As neural networks gain widespread adoption in embedded devices, there is a need for model compression techniques to facilitate deployment in resource-constrained environments. Quantization is one of the go-to methods yielding state-of-the-art model compression. Most approaches take a fully trained model, apply different heuristics to determine the optimal bit-precision for different layers of the

    Updated: 2021-01-13
  • Training Deep Architectures Without End-to-End Backpropagation: A Brief Survey
    arXiv.cs.NE Pub Date : 2021-01-09
    Shiyu Duan; Jose C. Principe

    This tutorial paper surveys training alternatives to end-to-end backpropagation (E2EBP) -- the de facto standard for training deep architectures. Modular training refers to strictly local training without both the forward and the backward pass, i.e., dividing a deep architecture into several nonoverlapping modules and training them separately without any end-to-end operation. Between the fully global

    Updated: 2021-01-12
  • Evolving Reinforcement Learning Algorithms
    arXiv.cs.NE Pub Date : 2021-01-08
    John D. Co-Reyes; Yingjie Miao; Daiyi Peng; Esteban Real; Sergey Levine; Quoc V. Le; Honglak Lee; Aleksandra Faust

    We propose a method for meta-learning reinforcement learning algorithms by searching over the space of computational graphs which compute the loss function for a value-based model-free RL agent to optimize. The learned algorithms are domain-agnostic and can generalize to new environments not seen during training. Our method can both learn from scratch and bootstrap off known existing algorithms, like

    Updated: 2021-01-12
  • A Reinforcement Learning Based Encoder-Decoder Framework for Learning Stock Trading Rules
    arXiv.cs.NE Pub Date : 2021-01-08
    Mehran Taghian; Ahmad Asadi; Reza Safabakhsh

    A wide variety of deep reinforcement learning (DRL) models have recently been proposed to learn profitable investment strategies. The rules learned by these models outperform previous strategies, especially in high frequency trading environments. However, it has been shown that the quality of the features extracted from a long-term sequence of raw instrument prices greatly affects the performance

    Updated: 2021-01-12
  • Machine learning approach for quantum non-Markovian noise classification
    arXiv.cs.NE Pub Date : 2021-01-08
    Stefano Martina; Stefano Gherardini; Filippo Caruso

    In this paper, machine learning and artificial neural network models are proposed for quantum noise classification in stochastic quantum dynamics. For this purpose, we train and then validate support vector machine, multi-layer perceptron and recurrent neural network models, with different complexity and accuracy, to solve supervised binary classification problems. By exploiting the quantum random

    Updated: 2021-01-12
  • Manifold Interpolation for Large-Scale Multi-Objective Optimization via Generative Adversarial Networks
    arXiv.cs.NE Pub Date : 2021-01-08
    Zhenzhong Wang; Haokai Hong; Kai Ye; Min Jiang; Kay Chen Tan

    Large-scale multiobjective optimization problems (LSMOPs) are characterized as involving hundreds or even thousands of decision variables and multiple conflicting objectives. An excellent algorithm for solving LSMOPs should find Pareto-optimal solutions with diversity and escape from local optima in the large-scale search space. Previous research has shown that these optimal solutions are uniformly

    Updated: 2021-01-11
  • When does the Physarum Solver Distinguish the Shortest Path from other Paths: the Transition Point and its Applications
    arXiv.cs.NE Pub Date : 2021-01-08
    Yusheng Huang (Institute of Fundamental and Frontier Science, University of Electronic Science and Technology of China, Chengdu, China); Dong Chu (Institute of Fundamental and Frontier Science, University of Electronic Science and Technology of China, Chengdu, China); Joel Weijia Lai (Science and Math Cluster, Singapore University of Technology and Design); Yong Deng (Institute of Fundamental and Frontier Science)

    Physarum solver, also called the physarum polycephalum inspired algorithm (PPA), is a newly developed bio-inspired algorithm that has an inherent ability to find the shortest path in a given graph. Recent research has proposed methods to develop this algorithm further by accelerating the original PPA (OPPA)'s path-finding process. However, when does the PPA ascertain that the shortest path has been

    Updated: 2021-01-11
  • Infinite-dimensional Folded-in-time Deep Neural Networks
    arXiv.cs.NE Pub Date : 2021-01-08
    Florian Stelzer (Institute of Mathematics, Technische Universität Berlin, Germany; Department of Mathematics, Humboldt-Universität zu Berlin, Germany); Serhiy Yanchuk (Institute of Mathematics, Technische Universität Berlin, Germany)

    The method recently introduced in arXiv:2011.10115 realizes a deep neural network with just a single nonlinear element and delayed feedback. It is applicable for the description of physically implemented neural networks. In this work, we present an infinite-dimensional generalization, which allows for a more rigorous mathematical analysis and a higher flexibility in choosing the weight functions. Precisely

    Updated: 2021-01-11
  • Neural Storage: A New Paradigm of Elastic Memory
    arXiv.cs.NE Pub Date : 2021-01-07
    Prabuddha Chakraborty; Swarup Bhunia

    Storage and retrieval of data in computer memory plays a major role in system performance. Traditionally, computer memory organization is static, i.e., it does not change based on application-specific characteristics of memory access behaviour during system operation. Specifically, the association of a data block with a search pattern (or cues), as well as the granularity of stored data, do

    Updated: 2021-01-11
  • Adaptive Immunity for Software: Towards Autonomous Self-healing Systems
    arXiv.cs.NE Pub Date : 2021-01-07
    Moeen Ali Naqvi; Merve Astekin; Sehrish Malik; Leon Moonen

    Testing and code reviews are known techniques to improve the quality and robustness of software. Unfortunately, the complexity of modern software systems makes it impossible to anticipate all possible problems that can occur at runtime, which limits what issues can be found using testing and reviews. Thus, it is of interest to consider autonomous self-healing software systems, which can automatically

    Updated: 2021-01-08
  • Active learning for object detection in high-resolution satellite images
    arXiv.cs.NE Pub Date : 2021-01-07
    Alex Goupilleau; Tugdual Ceillier; Marie-Caroline Corbineau

    In machine learning, the term active learning refers to techniques that aim at selecting the most useful data to label from a large pool of unlabelled examples. While supervised deep learning techniques have been shown to be increasingly efficient in many applications, they require a huge number of labelled examples to reach operational performance. Therefore, the labelling effort linked to the creation

    Updated: 2021-01-08
  • Drift anticipation with forgetting to improve evolving fuzzy system
    arXiv.cs.NE Pub Date : 2021-01-07
    Clément Leroy (INTUIDOC); Eric Anquetil (INTUIDOC); Nathalie Girard (INTUIDOC)

    Working with a non-stationary stream of data requires the analysis system to evolve its model (the parameters as well as the structure) over time. In particular, concept drifts can occur, which makes it necessary to forget knowledge that has become obsolete. However, forgetting is subject to the stability-plasticity dilemma: increasing forgetting improves the reactivity of adapting to

    Updated: 2021-01-08
  • DICE: Deep Significance Clustering for Outcome-Aware Stratification
    arXiv.cs.NE Pub Date : 2021-01-07
    Yufang Huang; Kelly M. Axsom; John Lee; Lakshminarayanan Subramanian; Yiye Zhang

    We present deep significance clustering (DICE), a framework for jointly performing representation learning and clustering for "outcome-aware" stratification. DICE is intended to generate cluster membership that may be used to categorize a population by individual risk level for a targeted outcome. Following the representation learning and clustering steps, we embed the objective function in DICE with

    Updated: 2021-01-08
  • Infinitely Wide Tensor Networks as Gaussian Process
    arXiv.cs.NE Pub Date : 2021-01-07
    Erdong Guo; David Draper

    A Gaussian Process is a non-parametric prior which can intuitively be understood as a distribution on function space. It is known that by introducing an appropriate prior on the weights of a neural network, a Gaussian Process can be obtained by taking the infinite-width limit of Bayesian neural networks from a Bayesian perspective. In this paper, we explore infinitely wide Tensor Networks and

    Updated: 2021-01-08
  • Can Transfer Neuroevolution Tractably Solve Your Differential Equations?
    arXiv.cs.NE Pub Date : 2021-01-06
    Jian Cheng Wong; Abhishek Gupta; Yew-Soon Ong

    This paper introduces neuroevolution for solving differential equations. The solution is obtained through optimizing a deep neural network whose loss function is defined by the residual terms from the differential equations. Recent studies have focused on learning such physics-informed neural networks through stochastic gradient descent (SGD) variants, yet they face the difficulty of obtaining an accurate

    Updated: 2021-01-07
  • The Shapley Value of Classifiers in Ensemble Games
    arXiv.cs.NE Pub Date : 2021-01-06
    Benedek Rozemberczki; Rik Sarkar

    How do we decide the fair value of individual classifiers in an ensemble model? We introduce a new class of transferable utility cooperative games to answer this question. The players in ensemble games are pre-trained binary classifiers that collaborate in an ensemble to correctly label points from a dataset. We design Troupe, a scalable algorithm that designates payoffs to individual models based on

    Updated: 2021-01-07
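For intuition, Shapley values for a tiny ensemble can be computed exactly by averaging marginal contributions over all orderings; the toy characteristic function below is an assumption for illustration, and Troupe itself is a scalable approximation rather than this brute-force computation:

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings (feasible only for small games)."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = []
        for p in order:
            before = value(coalition)
            coalition.append(p)
            phi[p] += value(coalition) - before
    return {p: v / len(perms) for p, v in phi.items()}

# toy characteristic function: a majority vote of three identical
# weak classifiers succeeds once at least two participate
value = lambda coalition: 1.0 if len(coalition) >= 2 else 0.0
phi = shapley(["a", "b", "c"], value)
assert abs(sum(phi.values()) - 1.0) < 1e-9  # efficiency axiom
```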
  • Constrained optimisation of preliminary spacecraft configurations under the design-for-demise paradigm
    arXiv.cs.NE Pub Date : 2020-12-27
    Mirko Trisolini; Hugh G. Lewis; Camilla Colombo

    In the past few years, the interest towards the implementation of design-for-demise measures has increased steadily. Most mid-sized satellites currently launched and already in orbit fail to comply with the casualty risk threshold of 0.0001. Therefore, satellites manufacturers and mission operators need to perform a disposal through a controlled re-entry, which has a higher cost and increased complexity

    Updated: 2021-01-06
  • Computing Cliques and Cavities in Networks
    arXiv.cs.NE Pub Date : 2021-01-03
    Dinghua Shi; Zhifeng Chen; Xiang Sun; Qinghua Chen; Yang Lou; Guanrong Chen

    Complex networks contain complete subgraphs, such as nodes, edges, and triangles, referred to as cliques of different orders. Notably, cavities formed by higher-order cliques have been found to play an important role in brain function. Since searching for the maximum clique in a large network is an NP-complete problem, we propose using k-core decomposition to determine the computability of a given
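The role of k-cores here rests on a simple fact: every clique on k nodes lies inside the (k-1)-core, so peeling low-degree nodes shrinks the clique search space without losing any clique of that order. A minimal k-core peeling sketch on a toy graph (illustrative, not the paper's implementation):

```python
def k_core(adj, k):
    """Return the node set of the k-core by iteratively peeling
    every node whose degree is below k (adj: node -> set of neighbours)."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) < k:
                for v in adj.pop(u):              # remove u and its edges
                    adj[v].discard(u)
                changed = True
    return set(adj)

# Toy graph: a 4-clique {0, 1, 2, 3} with a pendant path 3-4-5.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

print(k_core(adj, 3))   # only the 4-clique survives the peeling
```

Any 4-clique must sit inside the 3-core, so a maximum-clique search can safely restrict itself to the peeled graph.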

    Updated: 2021-01-05
  • Regularization-based Continual Learning for Anomaly Detection in Discrete Manufacturing
    arXiv.cs.NE Pub Date : 2021-01-02
    Benjamin Maschler; Thi Thu Huong Pham; Michael Weyrich

    The early and robust detection of anomalies occurring in discrete manufacturing processes allows operators to prevent harm, e.g. defects in production machinery or products. While current approaches for data-driven anomaly detection provide good results on the exact processes they were trained on, they often lack the ability to flexibly adapt to changes, e.g. in products. Continual learning promises
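The abstract does not name a specific regularizer, but a standard regularization-based continual-learning penalty is the elastic-weight-consolidation (EWC) style quadratic anchor, which penalizes drift in the weights that mattered for the previous task. A hypothetical sketch, with all numbers illustrative:

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """EWC-style penalty: weights that were important for the previous
    task (large Fisher value) are anchored near their old values."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

theta_old = np.array([1.0, -2.0, 0.5])   # weights after the first process
fisher    = np.array([10.0, 0.1, 1.0])   # per-weight importance estimates
theta     = np.array([1.2,  0.0, 0.5])   # weights while adapting to a change

print(ewc_penalty(theta, theta_old, fisher))
```

The penalty is added to the new task's loss, so adaptation is free to move unimportant weights (small Fisher value) while protecting important ones.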

    Updated: 2021-01-05
  • The Bayesian Method of Tensor Networks
    arXiv.cs.NE Pub Date : 2021-01-01
    Erdong Guo; David Draper

    Bayesian learning is a powerful framework that combines external information about the data (background knowledge) with internal information (the training data) in a logically consistent way for inference and prediction. By Bayes' rule, the external information (the prior distribution) and the internal information (the training-data likelihood) are combined coherently, and the posterior distribution
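The prior/likelihood combination can be made concrete with the conjugate Beta-Bernoulli model, where the posterior update has a closed form; a small worked example (not tied to tensor networks):

```python
def posterior(a, b, k, n):
    """Beta(a, b) prior + k successes in n Bernoulli trials
    -> Beta(a + k, b + n - k) posterior (conjugate update)."""
    return a + k, b + n - k

a, b = 2, 2        # prior (external information): weakly favours p near 0.5
k, n = 9, 10       # data (internal information): 9 successes in 10 trials

a_post, b_post = posterior(a, b, k, n)
mean = a_post / (a_post + b_post)
print(a_post, b_post, mean)   # posterior Beta(11, 3), mean 11/14 ≈ 0.786
```

The posterior mean sits between the prior mean (0.5) and the sample frequency (0.9), illustrating the coherent blending of the two information sources.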

    Updated: 2021-01-05
  • Generative Deep Learning for Virtuosic Classical Music: Generative Adversarial Networks as Renowned Composers
    arXiv.cs.NE Pub Date : 2021-01-01
    Daniel Szelogowski

    Current AI-generated music lacks fundamental principles of good compositional technique. By narrowing down implementation issues, both programmatic and musical, we can build a better understanding of which parameters are necessary for a generated composition to be nearly indistinguishable from that of a master composer.

    Updated: 2021-01-05
  • ECG-Based Driver Stress Levels Detection System Using Hyperparameter Optimization
    arXiv.cs.NE Pub Date : 2021-01-01
    Mohammad Naim Rastgoo; Bahareh Nakisa; Andry Rakotonirainy; Frederic Maire; Vinod Chandran

    Stress and driving are a dangerous combination that can lead to crashes, as evidenced by the large number of road traffic crashes involving stress. Motivated by the need to address the significant costs of driver stress, it is essential to build a practical system that can classify driver stress levels with high accuracy. However, the performance of an accurate driver stress level classification
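Hyperparameter optimization in its simplest form is random search over a configuration space against a validation objective; a minimal sketch in which the objective is a made-up stand-in for the cross-validated accuracy of a stress-level classifier (the search space and score are illustrative, not the paper's):

```python
import random

random.seed(0)

# A small hypothetical hyperparameter space for a classifier.
space = {"lr": [1e-3, 1e-2, 1e-1], "depth": [2, 4, 8], "units": [16, 32, 64]}

def objective(cfg):
    """Stand-in for validation accuracy; peaks at lr=1e-2, depth=4, units=32."""
    return (-(cfg["lr"] - 1e-2) ** 2
            - 0.01 * abs(cfg["depth"] - 4)
            - 0.001 * abs(cfg["units"] - 32))

# Random search: sample configurations, keep the best one seen.
best_cfg, best_score = None, float("-inf")
for _ in range(20):
    cfg = {k: random.choice(v) for k, v in space.items()}
    score = objective(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score

print(best_cfg, best_score)
```

In practice the objective would be an actual cross-validation run, and random search is often replaced by Bayesian or evolutionary optimizers when evaluations are expensive.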

    Updated: 2021-01-05
  • Ensembles of Localised Models for Time Series Forecasting
    arXiv.cs.NE Pub Date : 2020-12-30
    Rakshitha Godahewa; Kasun Bandara; Geoffrey I. Webb; Slawek Smyl; Christoph Bergmeir

    With large quantities of data typically available nowadays, forecasting models that are trained across sets of time series, known as Global Forecasting Models (GFM), are regularly outperforming traditional univariate forecasting models that work on isolated series. As GFMs usually share the same set of parameters across all time series, they often have the problem of not being localised enough to a

    Updated: 2021-01-01
  • Meta Learning Backpropagation And Improving It
    arXiv.cs.NE Pub Date : 2020-12-29
    Louis Kirsch; Jürgen Schmidhuber

    Many concepts have been proposed for meta learning with neural networks (NNs), e.g., NNs that learn to control fast weights, hypernetworks, learned learning rules, and meta recurrent neural networks (Meta RNNs). Our Variable Shared Meta Learning (VS-ML) unifies the above and demonstrates that simple weight-sharing and sparsity in an NN are sufficient to express powerful learning algorithms. A simple

    Updated: 2021-01-01
  • Emergent Symbols through Binding in External Memory
    arXiv.cs.NE Pub Date : 2020-12-29
    Taylor W. Webb; Ishan Sinha; Jonathan D. Cohen

    A key aspect of human intelligence is the ability to infer abstract rules directly from high-dimensional sensory data, and to do so given only a limited amount of training experience. Deep neural network algorithms have proven to be a powerful tool for learning directly from high-dimensional data, but currently lack this capacity for data-efficient induction of abstract rules, leading some to argue

    Updated: 2021-01-01
  • Byzantine-Resilient Non-Convex Stochastic Gradient Descent
    arXiv.cs.NE Pub Date : 2020-12-28
    Zeyuan Allen-Zhu; Faeze Ebrahimian; Jerry Li; Dan Alistarh

    We study adversary-resilient stochastic distributed optimization, in which $m$ machines can independently compute stochastic gradients, and cooperate to jointly optimize over their local objective functions. However, an $\alpha$-fraction of the machines are $\textit{Byzantine}$, in that they may behave in arbitrary, adversarial ways. We consider a variant of this procedure in the challenging $\textit{non-convex}$
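A classical way to tolerate Byzantine machines (not necessarily the variant studied in this paper) is to replace gradient averaging with a robust aggregator such as the coordinate-wise median, which a single adversarial worker cannot move arbitrarily:

```python
import numpy as np

def robust_aggregate(grads):
    """Coordinate-wise median of the workers' gradients: a standard
    Byzantine-resilient alternative to plain averaging."""
    return np.median(np.stack(grads), axis=0)

# Three honest workers report the true gradient [1, -2] plus small noise;
# one Byzantine worker reports garbage.
honest = [np.array([1.01, -2.00]),
          np.array([1.00, -1.99]),
          np.array([0.99, -2.01])]
byzantine = [np.array([1e6, -1e6])]

agg = robust_aggregate(honest + byzantine)
print(agg)   # near [1, -2]; a plain mean would be dragged to ~[250000, -250000]
```

The median stays within the honest workers' range as long as they form a majority per coordinate, which is exactly the guarantee a plain mean lacks when an alpha-fraction of machines is adversarial.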

    Updated: 2020-12-29
Contents have been reproduced by permission of the publishers.