Current journal: arXiv - CS - Machine Learning
  • Model-based controlled learning of MDP policies with an application to lost-sales inventory control
    arXiv.cs.LG Pub Date : 2020-11-30
    Willem van Jaarsveld

    Recent literature established that neural networks can represent good MDP policies across a range of stochastic dynamic models in supply chain and logistics. To overcome limitations of the model-free algorithms typically employed to learn/find such neural network policies, a model-based algorithm is proposed that incorporates variance reduction techniques. For the classical lost sales inventory model

    Updated: 2020-12-01
  • Learning by Passing Tests, with Application to Neural Architecture Search
    arXiv.cs.LG Pub Date : 2020-11-30
    Xuefeng Du; Pengtao Xie

    Learning through tests is a broadly used methodology in human learning and shows great effectiveness in improving learning outcomes: a sequence of tests is given with increasing levels of difficulty; the learner takes these tests to identify his or her weak points and continuously addresses these weak points to successfully pass the tests. We are interested in investigating whether this powerful

  • Inductive Biases for Deep Learning of Higher-Level Cognition
    arXiv.cs.LG Pub Date : 2020-11-30
    Anirudh Goyal; Yoshua Bengio

    A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics). If that hypothesis were correct, we could more easily both understand our own intelligence and build intelligent machines. Just like in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains

  • Graph convolutions that can finally model local structure
    arXiv.cs.LG Pub Date : 2020-11-30
    Rémy Brossard; Oriel Frigo; David Dehaene

    Despite quick progress in the last few years, recent studies have shown that modern graph neural networks can still fail at very simple tasks, like detecting small cycles. This hints at the fact that current networks fail to catch information about the local structure, which is problematic if the downstream task heavily relies on graph substructure analysis, as in the context of chemistry. We propose

  • General Invertible Transformations for Flow-based Generative Modeling
    arXiv.cs.LG Pub Date : 2020-11-30
    Jakub M. Tomczak

    In this paper, we present a new class of invertible transformations. We indicate that many well-known invertible transformations in reversible logic and reversible neural networks could be derived from our proposition. Next, we propose two new coupling layers that are important building blocks of flow-based generative models. In the preliminary experiments on toy digit data, we present how these new

  • Combinatorial Learning of Graph Edit Distance via Dynamic Embedding
    arXiv.cs.LG Pub Date : 2020-11-30
    Runzhong Wang; Tianqi Zhang; Tianshu Yu; Junchi Yan; Xiaokang Yang

    Graph Edit Distance (GED) is a popular similarity measure for pairwise graphs, and it also refers to the recovery of the edit path from the source graph to the target graph. The traditional A* algorithm suffers from scalability issues due to its exhaustive nature, and its search heuristics heavily rely on human prior knowledge. This paper presents a hybrid approach by combining the interpretability of traditional

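As generic background on the A* search this entry contrasts with (not the paper's GED-specific heuristics), a minimal sketch of A* on a 4-connected grid with a Manhattan-distance heuristic; all names here are illustrative:

```python
import heapq

def astar_grid(start, goal, blocked, width, height):
    """A* shortest path (number of moves) on a 4-connected grid with an
    admissible Manhattan-distance heuristic. Returns None if unreachable."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was found already
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < width and 0 <= ny < height and nxt not in blocked:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```

The exhaustive frontier expansion visible in the loop is exactly what limits scalability on large search spaces such as GED edit paths.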
  • RealCause: Realistic Causal Inference Benchmarking
    arXiv.cs.LG Pub Date : 2020-11-30
    Brady Neal; Chin-Wei Huang; Sunand Raghupathi

    There are many different causal effect estimators in causal inference. However, it is unclear how to choose between these estimators because there is no ground-truth for causal effects. A commonly used option is to simulate synthetic data, where the ground-truth is known. However, the best causal estimators on synthetic data are unlikely to be the best causal estimators on realistic data. An ideal

  • Depression Status Estimation by Deep Learning based Hybrid Multi-Modal Fusion Model
    arXiv.cs.LG Pub Date : 2020-11-30
    Hrithwik Shalu; Harikrishnan P; Hari Sankar CN; Akash Das; Saptarshi Majumder; Arnhav Datar; Subin Mathew MS; Anugyan Das; Juned Kadiwala

    Preliminary detection of mild depression could immensely help in effective treatment of the common mental health disorder. Due to the lack of proper awareness and the ample mix of stigmas and misconceptions present within the society, mental health status estimation has become a truly difficult task. Due to the immense variations in character level traits from person to person, traditional deep learning

  • KST-GCN: A Knowledge-Driven Spatial-Temporal Graph Convolutional Network for Traffic Forecasting
    arXiv.cs.LG Pub Date : 2020-11-26
    Jiawei Zhu; Xin Han; Hanhan Deng; Chao Tao; Ling Zhao; Lin Tao; Haifeng Li

    When considering the spatial and temporal features of traffic, capturing the impacts of various external factors on travel is an important step towards achieving accurate traffic forecasting. The impacts of external factors on the traffic flow have complex correlations. However, existing studies seldom consider external factors or neglect the effect of the complex correlations among external factors

  • BinPlay: A Binary Latent Autoencoder for Generative Replay Continual Learning
    arXiv.cs.LG Pub Date : 2020-11-25
    Kamil Deja; Paweł Wawrzyński; Daniel Marczak; Wojciech Masarczyk; Tomasz Trzciński

    We introduce a binary latent space autoencoder architecture to rehearse training samples for the continual learning of neural networks. The ability to extend the knowledge of a model with new data without forgetting previously learned samples is a fundamental requirement in continual learning. Existing solutions address it by either replaying past data from memory, which is unsustainable with growing

  • Handling Noisy Labels via One-Step Abductive Multi-Target Learning
    arXiv.cs.LG Pub Date : 2020-11-25
    Yongquan Yang; Yiming Yang; Jie Chen; Jiayi Zheng; Zhongxi Zheng

    Learning from noisy labels is an important concern because of the lack of accurate ground-truth labels in plenty of real-world scenarios. In practice, various approaches for this concern first make corrections corresponding to potentially noisy-labeled instances, and then update the predictive model with information from the corrections made. However, in specific areas, such as medical histopathology whole

  • Autonomous Graph Mining Algorithm Search with Best Speed/Accuracy Trade-off
    arXiv.cs.LG Pub Date : 2020-11-26
    Minji Yoon; Théophile Gervet; Bryan Hooi; Christos Faloutsos

    Graph data is ubiquitous in academia and industry, from social networks to bioinformatics. The pervasiveness of graphs today has raised the demand for algorithms that can answer various questions: Which products would a user like to purchase given her order list? Which users are buying fake followers to increase their public reputation? Myriads of new graph mining algorithms are proposed every year

  • Comparative Analysis of Extreme Verification Latency Learning Algorithms
    arXiv.cs.LG Pub Date : 2020-11-26
    Muhammad Umer; Robi Polikar

    One of the more challenging real-world problems in computational intelligence is to learn from non-stationary streaming data, also known as concept drift. Perhaps an even more challenging version of this scenario is when -- following a small set of initial labeled data -- the data stream consists of unlabeled data only. Such a scenario is typically referred to as learning in initially labeled nonstationary

  • Prior Flow Variational Autoencoder: A density estimation model for Non-Intrusive Load Monitoring
    arXiv.cs.LG Pub Date : 2020-11-30
    Luis Felipe M. O. Henriques; Eduardo Morgan; Sergio Colcher; Ruy Luiz Milidiú

    Non-Intrusive Load Monitoring (NILM) is a computational technique to estimate appliance-by-appliance power loads from the whole consumption measured by a single meter. In this paper, we propose a conditional density estimation model, based on deep neural networks, that joins a Conditional Variational Autoencoder with a Conditional Invertible Normalizing Flow model to estimate the individual appliance's

  • Explaining by Removing: A Unified Framework for Model Explanation
    arXiv.cs.LG Pub Date : 2020-11-21
    Ian Covert; Scott Lundberg; Su-In Lee

    Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another. We establish a new class of methods, removal-based explanations, that are based on the principle of simulating feature removal to quantify each feature's influence. These methods vary in several respects, so we develop a framework

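The feature-removal principle this entry describes can be pictured with a minimal occlusion-style sketch: score each feature by how much the model's output changes when that feature alone is replaced by a baseline value. The toy model and baseline below are invented for illustration, not taken from the paper:

```python
def removal_importance(f, x, baseline):
    """Score each feature by the drop in f(x) when that feature alone is
    replaced by its baseline value -- one simple member of the
    removal-based explanation family (occlusion-style attribution)."""
    full = f(x)
    scores = []
    for i in range(len(x)):
        x_removed = list(x)
        x_removed[i] = baseline[i]  # "remove" feature i
        scores.append(full - f(x_removed))
    return scores

# toy linear model: the importance scores recover the weighted inputs
scores = removal_importance(lambda v: 3 * v[0] + v[1] - 2 * v[2],
                            [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```

Methods in the removal-based family differ mainly in how "removal" is simulated (baseline, marginal, or conditional) and how the per-feature effects are aggregated.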
  • Doubly Stochastic Subspace Clustering
    arXiv.cs.LG Pub Date : 2020-11-30
    Derek Lim; René Vidal; Benjamin D. Haeffele

    Many state-of-the-art subspace clustering methods follow a two-step process by first constructing an affinity matrix between data points and then applying spectral clustering to this affinity. Most of the research into these methods focuses on the first step of generating the affinity matrix, which often exploits the self-expressive property of linear subspaces, with little consideration typically

  • Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research
    arXiv.cs.LG Pub Date : 2020-11-20
    Johan S. Obando-Ceron; Pablo Samuel Castro

    Since the introduction of DQN, a vast majority of reinforcement learning research has focused on reinforcement learning with deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of

  • A Review of Recent Advances of Binary Neural Networks for Edge Computing
    arXiv.cs.LG Pub Date : 2020-11-24
    Wenyu Zhao; Teli Ma; Xuan Gong; Baochang Zhang; David Doermann

    Edge computing is promising to become one of the next hottest topics in artificial intelligence because it benefits various evolving domains such as real-time unmanned aerial systems, industrial applications, and the demand for privacy protection. This paper reviews recent advances in binary neural network (BNN) and 1-bit CNN technologies that are well suited for front-end, edge-based computing.

  • Discovering Causal Structure with Reproducing-Kernel Hilbert Space $ε$-Machines
    arXiv.cs.LG Pub Date : 2020-11-23
    Nicolas Brodu; James P. Crutchfield

    We merge computational mechanics' definition of causal states (predictively-equivalent histories) with reproducing-kernel Hilbert space (RKHS) representation inference. The result is a widely-applicable method that infers causal structure directly from observations of a system's behaviors whether they are over discrete or continuous events or time. A structural representation -- a finite- or infinite-state

  • Advancements of federated learning towards privacy preservation: from federated learning to split learning
    arXiv.cs.LG Pub Date : 2020-11-25
    Chandra Thapa; M. A. P. Chamikara; Seyit A. Camtepe

    In the distributed collaborative machine learning (DCML) paradigm, federated learning (FL) recently attracted much attention due to its applications in health, finance, and the latest innovations such as industry 4.0 and smart vehicles. FL provides privacy-by-design. It trains a machine learning model collaboratively over several distributed clients (ranging from two to millions) such as mobile phones

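The collaborative training over distributed clients that this entry describes can be sketched with federated averaging (FedAvg), one standard FL algorithm: clients train locally on private data, and a server averages the resulting parameters. The toy 1-D linear model, client data, and hyperparameters below are invented for illustration:

```python
def local_sgd(w, data, lr=0.01, epochs=5):
    """One client's local training: plain SGD on squared error for y = w*x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def fedavg_round(w_global, client_datasets):
    """One round: every client trains locally from the global model,
    then the server averages the returned parameters."""
    local_ws = [local_sgd(w_global, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

# two clients, each holding data consistent with the true model w = 2;
# raw samples never leave the clients -- only parameters are exchanged
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
```

Privacy-by-design here means only model parameters cross the network; the further split-learning variants discussed in the paper partition the model itself across parties.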
  • Bringing AI To Edge: From Deep Learning's Perspective
    arXiv.cs.LG Pub Date : 2020-11-25
    Di Liu; Hao Kong; Xiangzhong Luo; Weichen Liu; Ravi Subramaniam

    Edge computing and artificial intelligence (AI), especially deep learning nowadays, are gradually intersecting to build a novel system, called edge intelligence. However, the development of edge intelligence systems encounters some challenges, and one of these challenges is the computational gap between computation-intensive deep learning algorithms and less-capable edge systems. Due to

  • Data-Free Model Extraction
    arXiv.cs.LG Pub Date : 2020-11-30
    Jean-Baptiste Truong; Pratyush Maini; Robert Walls; Nicolas Papernot

    Current model extraction attacks assume that the adversary has access to a surrogate dataset with characteristics similar to the proprietary data used to train the victim model. This requirement precludes the use of existing model extraction techniques on valuable models, such as those trained on rare or hard to acquire datasets. In contrast, we propose data-free model extraction methods that do not

  • Binary Classification: Counterbalancing Class Imbalance by Applying Regression Models in Combination with One-Sided Label Shifts
    arXiv.cs.LG Pub Date : 2020-11-30
    Peter Bellmann; Heinke Hihn; Daniel A. Braun; Friedhelm Schwenker

    In many real-world pattern recognition scenarios, such as in medical applications, the corresponding classification tasks can be of an imbalanced nature. In the current study, we focus on binary, imbalanced classification tasks, i.e., binary classification tasks in which one of the two classes is under-represented (minority class) in comparison to the other class (majority class). In the literature

  • Probabilistic Load Forecasting Based on Adaptive Online Learning
    arXiv.cs.LG Pub Date : 2020-11-30
    Verónica Álvarez; Santiago Mazuelas; José A. Lozano

    Load forecasting is crucial for multiple energy management tasks such as scheduling generation capacity, planning supply and demand, and minimizing energy trade costs. Such relevance has increased even more in recent years due to the integration of renewable energies, electric cars, and microgrids. Conventional load forecasting techniques obtain single-value load forecasts by exploiting consumption

  • On Initial Pools for Deep Active Learning
    arXiv.cs.LG Pub Date : 2020-11-30
    Akshay L Chandra; Sai Vikas Desai; Chaitanya Devaguptapu; Vineeth N Balasubramanian

    Active Learning (AL) techniques aim to minimize the training data required to train a model for a given task. Pool-based AL techniques start with a small initial labeled pool and then iteratively pick batches of the most informative samples for labeling. Generally, the initial pool is sampled randomly and labeled to seed the AL iterations. While recent studies have focused on evaluating the robustness

  • KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization
    arXiv.cs.LG Pub Date : 2020-11-30
    Het Shah; Avishree Khare; Neelay Shah; Khizir Siddiqui

    In recent years, the growing size of neural networks has led to a vast amount of research concerning compression techniques to mitigate the drawbacks of such large sizes. Most of these research works can be categorized into three broad families: Knowledge Distillation, Pruning, and Quantization. While there has been steady research in this domain, adoption and commercial usage of the proposed techniques

  • Can neural networks learn persistent homology features?
    arXiv.cs.LG Pub Date : 2020-11-30
    Guido Montúfar; Nina Otter; Yuguang Wang

    Topological data analysis uses tools from topology -- the mathematical area that studies shapes -- to create representations of data. In particular, in persistent homology, one studies one-parameter families of spaces associated with data, and persistence diagrams describe the lifetime of topological invariants, such as connected components or holes, across the one-parameter family. In many applications

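As standard background on the persistence diagrams this entry mentions (not the paper's method), 0-dimensional persistence of a function on a path graph can be computed with a union-find sweep: components are born at local minima and die, by the elder rule, when an edge merges them into an older component:

```python
def zero_dim_persistence(values):
    """0-dimensional persistence pairs for the sublevel-set filtration of a
    function sampled on a path graph (e.g. a time series). Vertices enter
    in increasing order of value; when an edge merges two components, the
    component with the larger birth value dies (elder rule). Pairs with
    birth == death are kept for simplicity."""
    parent, birth = {}, {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []
    for v in sorted(range(len(values)), key=lambda i: values[i]):
        parent[v], birth[v] = v, values[v]
        for u in (v - 1, v + 1):          # path-graph neighbors
            if u in parent:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                if birth[ru] > birth[rv]:
                    ru, rv = rv, ru       # make ru the elder component
                pairs.append((birth[rv], values[v]))  # younger dies now
                parent[rv] = ru
    pairs.append((min(values), float("inf")))  # global component never dies
    return sorted(pairs)
```

For values [0, 3, 1, 2], the local minimum at value 1 yields the pair (1, 3): born at 1, killed when the peak at 3 merges it into the component of the global minimum.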
  • Robust Ultra-wideband Range Error Mitigation with Deep Learning at the Edge
    arXiv.cs.LG Pub Date : 2020-11-30
    Simone Angarano; Vittorio Mazzia; Francesco Salvetti; Giovanni Fantin; Marcello Chiaberge

    Ultra-wideband (UWB) is the state-of-the-art and most popular technology for wireless localization. Nevertheless, precise ranging and localization in non-line-of-sight (NLoS) conditions is still an open research topic. Indeed, multipath effects, reflections, refractions and complexity of the indoor radio environment can easily introduce a positive bias in the ranging measurement, resulting in highly

  • TSSRGCN: Temporal Spectral Spatial Retrieval Graph Convolutional Network for Traffic Flow Forecasting
    arXiv.cs.LG Pub Date : 2020-11-30
    Xu Chen; Yuanxing Zhang; Lun Du; Zheng Fang; Yi Ren; Kaigui Bian; Kunqing Xie

    Traffic flow forecasting is of great significance for improving the efficiency of transportation systems and preventing emergencies. Due to the high non-linearity and intricate evolutionary patterns of short-term and long-term traffic flow, existing methods often fail to take full advantage of spatial-temporal information, especially the various temporal patterns with different period shifting and

  • Optimizing the Neural Architecture of Reinforcement Learning Agents
    arXiv.cs.LG Pub Date : 2020-11-30
    N. Mazyavkina; S. Moustafa; I. Trofimov; E. Burnaev

    Reinforcement learning (RL) has enjoyed significant progress over the last years. One of the most important steps forward was the wide application of neural networks. However, the architectures of these neural networks are typically constructed manually. In this work, we study recently proposed neural architecture search (NAS) methods for optimizing the architecture of RL agents. We carry out experiments on

  • RegFlow: Probabilistic Flow-based Regression for Future Prediction
    arXiv.cs.LG Pub Date : 2020-11-30
    Maciej Zięba; Marcin Przewięźlikowski; Marek Śmieja; Jacek Tabor; Tomasz Trzcinski; Przemysław Spurek

    Predicting future states or actions of a given system remains a fundamental, yet unsolved challenge of intelligence, especially in the scope of complex and non-deterministic scenarios, such as modeling behavior of humans. Existing approaches provide results under strong assumptions concerning unimodality of future states, or, at best, assuming specific probability distributions that often poorly fit

  • Incremental Learning via Rate Reduction
    arXiv.cs.LG Pub Date : 2020-11-30
    Ziyang Wu; Christina Baek; Chong You; Yi Ma

    Current deep learning architectures suffer from catastrophic forgetting, a failure to retain knowledge of previously learned classes when incrementally trained on new classes. The fundamental roadblock faced by deep learning methods is that deep learning models are optimized as "black boxes," making it difficult to properly adjust the model parameters to preserve knowledge about previously seen data

  • Robust and Private Learning of Halfspaces
    arXiv.cs.LG Pub Date : 2020-11-30
    Badih Ghazi; Ravi Kumar; Pasin Manurangsi; Thao Nguyen

    In this work, we study the trade-off between differential privacy and adversarial robustness under L2-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement

  • Where Should We Begin? A Low-Level Exploration of Weight Initialization Impact on Quantized Behaviour of Deep Neural Networks
    arXiv.cs.LG Pub Date : 2020-11-30
    Stone Yun; Alexander Wong

    With the proliferation of deep convolutional neural network (CNN) algorithms for mobile processing, limited precision quantization has become an essential tool for CNN efficiency. Consequently, various works have sought to design fixed precision quantization algorithms and quantization-focused optimization techniques that minimize quantization induced performance degradation. However, there is little

  • Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning
    arXiv.cs.LG Pub Date : 2020-11-30
    Farhad Farokhi

    We use gradient sparsification to reduce the adverse effect of differential privacy noise on performance of private machine learning models. To this aim, we employ compressed sensing and additive Laplace noise to evaluate differentially-private gradients. Noisy privacy-preserving gradients are used to perform stochastic gradient descent for training machine learning models. Sparsification, achieved

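One way to picture combining sparsification with Laplace noise is the sketch below. It keeps the top-k gradient coordinates rather than using the paper's compressed-sensing construction, and the noise scale is not calibrated to any sensitivity bound, so it is illustrative only and carries no formal DP guarantee:

```python
import math
import random

def laplace(scale, rng):
    """Sample Laplace(0, scale) by inverse transform (stdlib random has
    no Laplace sampler)."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_sparse_gradient(grad, k, scale, rng):
    """Zero out all but the k largest-magnitude coordinates, then add
    Laplace noise to the survivors. Sparsifying first means noise is paid
    only on k coordinates instead of all of them."""
    top = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    out = [0.0] * len(grad)
    for i in top:
        out[i] = grad[i] + laplace(scale, rng)
    return out

rng = random.Random(0)
g = private_sparse_gradient([5.0, -0.1, 3.0, 0.05], k=2, scale=0.01, rng=rng)
```

The noisy sparse gradient would then drive an otherwise ordinary SGD update.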
  • A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models
    arXiv.cs.LG Pub Date : 2020-11-30
    Jeong-Hoe Ku; JiHun Oh; YoungYoon Lee; Gaurav Pooniwala; SangJeong Lee

    This paper aims to provide a selective survey about the knowledge distillation (KD) framework for researchers and practitioners to take advantage of it for developing new optimized models in the deep neural network field. To this end, we give a brief overview of knowledge distillation and some related works, including learning using privileged information (LUPI) and generalized distillation (GD). Even though

  • Feature Learning in Infinite-Width Neural Networks
    arXiv.cs.LG Pub Date : 2020-11-30
    Greg Yang; Edward J. Hu

    As its width tends to infinity, a deep neural network's behavior under gradient descent can become simplified and predictable (e.g. given by the Neural Tangent Kernel (NTK)), if it is parametrized appropriately (e.g. the NTK parametrization). However, we show that the standard and NTK parametrizations of a neural network do not admit infinite-width limits that can learn features, which is crucial for

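As background on the parametrization contrast this entry discusses (standard definitions from the NTK literature, not taken from the paper itself): for a layer with fan-in $n_l$, the two parametrizations place the $1/\sqrt{n_l}$ width scaling differently.

```latex
% Standard parametrization: the width scaling lives in the initialization
h^{l+1} = W^{l} x^{l}, \qquad W^{l}_{ij} \sim \mathcal{N}\!\bigl(0, \tfrac{1}{n_l}\bigr)

% NTK parametrization: the same scaling appears as an explicit factor
h^{l+1} = \tfrac{1}{\sqrt{n_l}}\, W^{l} x^{l}, \qquad W^{l}_{ij} \sim \mathcal{N}(0, 1)
```

Both give the same distribution of preactivations at initialization, but under gradient descent the effective per-weight learning rates scale differently with width, which is what pins the infinite-width NTK limit to a fixed kernel.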
  • Soft-Robust Algorithms for Handling Model Misspecification
    arXiv.cs.LG Pub Date : 2020-11-30
    Elita A. Lobo; Mohammad Ghavamzadeh; Marek Petrik

    In reinforcement learning, robust policies for high-stakes decision-making problems with limited data are usually computed by optimizing the percentile criterion, which minimizes the probability of a catastrophic failure. Unfortunately, such policies are typically overly conservative as the percentile criterion is non-convex, difficult to optimize, and ignores the mean performance. To overcome these

  • Value Function Based Performance Optimization of Deep Learning Workloads
    arXiv.cs.LG Pub Date : 2020-11-30
    Benoit Steiner; Chris Cummins; Horace He; Hugh Leather

    As machine learning techniques become ubiquitous, the efficiency of neural network implementations is becoming correspondingly paramount. Frameworks, such as Halide and TVM, separate out the algorithmic representation of the network from the schedule that determines its implementation. Finding good schedules, however, remains extremely challenging. We model this scheduling problem as a sequence of

  • Kinetics-Informed Neural Networks
    arXiv.cs.LG Pub Date : 2020-11-30
    Gabriel S. Gusmão; Adhika P. Retnanto; Shashwati C. da Cunha; Andrew J. Medford

    Chemical kinetics provides the phenomenological framework for the disentanglement of reaction mechanisms, the optimization of reaction performance, and the rational design of chemical processes. Here, we utilize feed-forward artificial neural networks as basis functions for the construction of surrogate models to solve ordinary differential equations (ODEs) that describe microkinetic models (MKMs). We

  • Scaling *down* Deep Learning
    arXiv.cs.LG Pub Date : 2020-11-29
    Sam Greydanus

    Though deep learning models have taken on commercial and political relevance, many aspects of their training and operation remain poorly understood. This has sparked interest in "science of deep learning" projects, many of which are run at scale and require enormous amounts of time, money, and electricity. But how much of this research really needs to occur at scale? In this paper, we introduce MNIST-1D:

  • Architectural Adversarial Robustness: The Case for Deep Pursuit
    arXiv.cs.LG Pub Date : 2020-11-29
    George Cazenavette; Calvin Murdock; Simon Lucey

    Despite their unmatched performance, deep neural networks remain susceptible to targeted attacks by nearly imperceptible levels of adversarial noise. While the underlying cause of this sensitivity is not well understood, theoretical analyses can be simplified by reframing each layer of a feed-forward network as an approximate solution to a sparse coding problem. Iterative solutions using basis pursuit

  • Improving Neural Network with Uniform Sparse Connectivity
    arXiv.cs.LG Pub Date : 2020-11-29
    Weijun Luo

    Neural networks form the foundation of deep learning and numerous AI applications. Classical neural networks are fully connected, expensive to train, and prone to overfitting. Sparse networks tend to have convoluted structure search, suboptimal performance, and limited usage. We propose the novel uniform sparse network (USN) with even and sparse connectivity within each layer. USN has one striking property

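The "even and sparse connectivity within each layer" idea can be pictured with a simple mask construction. This is one plausible illustration of uniform sparsity; the paper's actual USN construction may differ:

```python
def uniform_sparse_mask(n_out, n_in, k):
    """Binary connectivity mask in which every output unit connects to
    exactly k inputs: unit j takes k consecutive inputs starting at offset
    j*k (mod n_in), so connections are spread evenly across the layer
    rather than placed at random."""
    mask = [[0] * n_in for _ in range(n_out)]
    for j in range(n_out):
        for t in range(k):
            mask[j][(j * k + t) % n_in] = 1
    return mask
```

When n_out * k is a multiple of n_in, every input also receives the same number of connections, so the sparsity is uniform in both directions; a dense weight matrix multiplied elementwise by this mask gives the sparse layer.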
  • Self-supervised Visual Reinforcement Learning with Object-centric Representations
    arXiv.cs.LG Pub Date : 2020-11-29
    Andrii Zadaianchuk; Maximilian Seitzer; Georg Martius

    Autonomous agents need large repertoires of skills to act reasonably on new tasks that they have not seen before. However, acquiring these skills using only a stream of high-dimensional, unstructured, and unlabeled observations is a tricky challenge for any autonomous agent. Previous methods have used variational autoencoders to encode a scene into a low-dimensional vector that can be used as a goal

  • Offline Reinforcement Learning Hands-On
    arXiv.cs.LG Pub Date : 2020-11-29
    Louis Monier; Jakub Kmec; Alexandre Laterre; Thomas Pierrot; Valentin Courgeau; Olivier Sigaud; Karim Beguir

    Offline Reinforcement Learning (RL) aims to turn large datasets into powerful decision-making engines without any online interactions with the environment. This great promise has motivated a large amount of research that hopes to replicate the success RL has experienced in simulation settings. This work aims to reflect upon these efforts from a practitioner viewpoint. We start by discussing the

  • Predicting Regional Locust Swarm Distribution with Recurrent Neural Networks
    arXiv.cs.LG Pub Date : 2020-11-29
    Hadia Mohmmed Osman Ahmed Samil; Annabelle Martin; Arnav Kumar Jain; Susan Amin; Samira Ebrahimi Kahou

    Locust infestation of some regions of the world, including Africa, Asia, and the Middle East, has become a concerning issue that can affect the health and the lives of millions of people. In this respect, there have been attempts to resolve or reduce the severity of this problem via detection and monitoring of locust breeding areas using satellites and sensors, or the use of chemicals to prevent the formation

  • A smartphone based multi input workflow for non-invasive estimation of haemoglobin levels using machine learning techniques
    arXiv.cs.LG Pub Date : 2020-11-29
    Sarah; S. Sidhartha Narayan; Irfaan Arif; Hrithwik Shalu; Juned Kadiwala

    We suggest a low-cost, non-invasive healthcare system that measures haemoglobin levels in patients and can be used as a preliminary diagnostic test for anaemia. A combination of image processing, machine learning and deep learning techniques is employed to develop predictive models to measure haemoglobin levels. This is achieved through the color analysis of the fingernail beds, palpebral conjunctiva

  • A Targeted Universal Attack on Graph Convolutional Network
    arXiv.cs.LG Pub Date : 2020-11-29
    Jiazhu Dai; Weifeng Zhu; Xiangfeng Luo

    Graph-structured data exist in numerous applications in real life. As a state-of-the-art graph neural network, the graph convolutional network (GCN) plays an important role in processing graph-structured data. However, a recent study reported that GCNs are also vulnerable to adversarial attacks, which means that GCN models may suffer malicious attacks with unnoticeable modifications of the data. Among

  • Optimal Mixture Weights for Off-Policy Evaluation with Multiple Behavior Policies
    arXiv.cs.LG Pub Date : 2020-11-29
    Jinlin Lai; Lixin Zou; Jiaxing Song

    Off-policy evaluation is a key component of reinforcement learning which evaluates a target policy with offline data collected from behavior policies. It is a crucial step towards safe reinforcement learning and has been used in advertisement, recommender systems and many other applications. In these applications, sometimes the offline data is collected from multiple behavior policies. Previous works

  • FROCC: Fast Random projection-based One-Class Classification
    arXiv.cs.LG Pub Date : 2020-11-29
    Arindam Bhattacharya; Sumanth Varambally; Amitabha Bagchi; Srikanta Bedathur

    We present Fast Random projection-based One-Class Classification (FROCC), an extremely efficient method for one-class classification. Our method is based on a simple idea of transforming the training data by projecting it onto a set of random unit vectors that are chosen uniformly and independently from the unit sphere, and bounding the regions based on separation of the data. FROCC can be naturally

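The projection idea this entry describes can be sketched in a simplified form: project training data onto random unit directions and remember the occupied range per direction; a test point is an inlier only if it falls inside every range. The published FROCC additionally splits each projection's range into separated intervals, a refinement omitted here:

```python
import math
import random

def unit_vector(dim, rng):
    """Random direction: a normalized standard-Gaussian sample is uniform
    on the unit sphere."""
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class SimpleFROCC:
    """Simplified sketch of random-projection one-class classification:
    one [min, max] interval per random direction (no interval splitting)."""
    def __init__(self, m=25, seed=0):
        self.m, self.rng = m, random.Random(seed)

    def fit(self, X):
        dim = len(X[0])
        self.dirs = [unit_vector(dim, self.rng) for _ in range(self.m)]
        self.intervals = [(min(dot(w, x) for x in X),
                           max(dot(w, x) for x in X)) for w in self.dirs]
        return self

    def predict(self, x):
        for w, (lo, hi) in zip(self.dirs, self.intervals):
            p = dot(w, x)
            if p < lo or p > hi:
                return -1  # outside some direction's range: outlier
        return 1  # inside every range: inlier
```

Training and prediction are both linear scans over m projections, which is the source of the method's speed.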
  • Active Output Selection Strategies for Multiple Learning Regression Models
    arXiv.cs.LG Pub Date : 2020-11-29
    Adrian Prochaska; Julien Pillas; Bernard Bäker

    Active learning shows promise to decrease test bench time for model-based drivability calibration. This paper presents a new strategy for active output selection, which suits the needs of calibration tasks. The strategy is actively learning multiple outputs in the same input space. It chooses the output model with the highest cross-validation error as leading. The presented method is applied to three

  • Minimax Sample Complexity for Turn-based Stochastic Game
    arXiv.cs.LG Pub Date : 2020-11-29
    Qiwen Cui; Lin F. Yang

    The empirical success of multi-agent reinforcement learning is encouraging, but few theoretical guarantees have been established. In this work, we prove that the plug-in solver approach, probably the most natural reinforcement learning algorithm, achieves minimax sample complexity for turn-based stochastic games (TBSG). Specifically, we plan in an empirical TBSG by utilizing a 'simulator' that allows

  • Distilled Thompson Sampling: Practical and Efficient Thompson Sampling via Imitation Learning
    arXiv.cs.LG Pub Date : 2020-11-29
    Hongseok Namkoong; Samuel Daulton; Eytan Bakshy

    Thompson sampling (TS) has emerged as a robust technique for contextual bandit problems. However, TS requires posterior inference and optimization for action generation, prohibiting its use in many internet applications where latency and ease of deployment are of concern. We propose a novel imitation-learning-based algorithm that distills a TS policy into an explicit policy representation by performing

    Updated: 2020-12-01
  • Importance Weight Estimation and Generalization in Domain Adaptation under Label Shift
    arXiv.cs.LG Pub Date : 2020-11-29
    Kamyar Azizzadenesheli

    We study generalization under label shift in domain adaptation where the learner has access to labeled samples from the source domain but unlabeled samples from the target domain. Prior works deploy label classifiers and introduce various methods to estimate the importance weights from source to target domains. They use these estimates in importance weighted empirical risk minimization to learn classifiers
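    One common weight estimator in this line of work is black-box shift estimation (BBSE), which recovers the label-shift weights by inverting a source-domain confusion matrix. A minimal sketch of that estimator follows; the abstract surveys various methods, so treating BBSE as representative is an assumption here:

```python
import numpy as np

def label_shift_weights(conf_matrix, target_pred_dist):
    """BBSE-style importance weights under label shift.

    conf_matrix[i, j]  = P_source(classifier predicts i, true label is j),
    target_pred_dist[i] = P_target(classifier predicts i).
    Solving C w = q yields w[j] ≈ p_target(y=j) / p_source(y=j)."""
    w = np.linalg.solve(conf_matrix, target_pred_dist)
    return np.clip(w, 0.0, None)   # weights are nonnegative by definition
```

    The resulting weights are then plugged into importance-weighted empirical risk minimization, as described in the abstract.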

    Updated: 2020-12-01
  • Monte Carlo Tree Search for a single target search game on a 2-D lattice
    arXiv.cs.LG Pub Date : 2020-11-29
    Elana Kozak; Scott Hottovy

    Monte Carlo Tree Search (MCTS) is a branch of stochastic modeling that utilizes decision trees for optimization, mostly applied to artificial intelligence (AI) game players. This project imagines a game in which an AI player searches for a stationary target within a 2-D lattice. We analyze its behavior with different target distributions and compare its efficiency to the Levy Flight Search, a model

    Updated: 2020-12-01
  • Curvature Regularization to Prevent Distortion in Graph Embedding
    arXiv.cs.LG Pub Date : 2020-11-28
    Hongbin Pei; Bingzhe Wei; Kevin Chen-Chuan Chang; Chunxu Zhang; Bo Yang

    Recent research on graph embedding has achieved success in various applications. Most graph embedding methods preserve the proximity in a graph into a manifold in an embedding space. We identify an important but neglected problem with this proximity-preserving strategy: graph topology patterns, while preserved well into an embedding manifold by preserving proximity, may distort in the ambient embedding

    Updated: 2020-12-01
  • Uncertainty Quantification in Deep Learning through Stochastic Maximum Principle
    arXiv.cs.LG Pub Date : 2020-11-28
    Richard Archibald; Feng Bao; Yanzhao Cao; He Zhang

    We develop a probabilistic machine learning method, which formulates a class of stochastic neural networks as a stochastic optimal control problem. An efficient stochastic gradient descent algorithm is introduced under the stochastic maximum principle framework. Convergence analysis for stochastic gradient descent optimization and numerical experiments for applications of stochastic neural networks

    Updated: 2020-12-01
  • Short-Term Load Forecasting using Bi-directional Sequential Models and Feature Engineering for Small Datasets
    arXiv.cs.LG Pub Date : 2020-11-28
    Abdul Wahab; Muhammad Anas Tahir; Naveed Iqbal; Faisal Shafait; Syed Muhammad Raza Kazmi

    Electricity load forecasting enables grid operators to optimally implement the smart grid's most essential features, such as demand response and energy efficiency. Electricity demand profiles can vary drastically from one region to another on diurnal, seasonal and yearly scales. Hence, to devise a load forecasting technique that can yield the best estimates on diverse datasets, especially when the

    Updated: 2020-12-01
  • Risk-Monotonicity via Distributional Robustness
    arXiv.cs.LG Pub Date : 2020-11-28
    Zakaria Mhammedi; Hisham Husain

    Acquisition of data is a difficult task in most applications of machine learning (ML), and it is only natural that one hopes and expects lower population risk (better performance) with increasing data points. It turns out, somewhat surprisingly, that this is not the case even for the most standard algorithms such as the Empirical Risk Minimizer (ERM). Non-monotonic behaviour of the risk and instability

    Updated: 2020-12-01
  • Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules
    arXiv.cs.LG Pub Date : 2020-11-28
    Johannes Klicpera; Shankari Giri; Johannes T. Margraf; Stephan Günnemann

    Many important tasks in chemistry revolve around molecules during reactions. This requires predictions far from equilibrium, while most recent work in machine learning for molecules has focused on equilibrium or near-equilibrium states. In this paper we aim to extend this scope in three ways. First, we propose the DimeNet++ model, which is 8x faster and 10% more accurate than the original

    Updated: 2020-12-01
Contents have been reproduced by permission of the publishers.