Current journal: Machine Learning
  • Statistical hierarchical clustering algorithm for outlier detection in evolving data streams
    Mach. Learn. (IF 2.672) Pub Date : 2020-09-04
    Dalibor Krleža, Boris Vrdoljak, Mario Brčić

    Anomaly detection is a hard data analysis process that requires the constant creation and improvement of data analysis algorithms. Using traditional clustering algorithms to analyse data streams is impossible due to processing power and memory issues. To solve this, the complexity of traditional clustering algorithms had to be reduced, which led to the creation of sequential clustering algorithms. The usual

    Updated: 2020-09-05
  • Imbalanced regression and extreme value prediction
    Mach. Learn. (IF 2.672) Pub Date : 2020-09-04
    Rita P. Ribeiro, Nuno Moniz

    Research in imbalanced domain learning has almost exclusively focused on solving classification tasks for accurate prediction of cases labelled with a rare class. Approaches for addressing such problems in regression tasks are still scarce due to two main factors. First, standard regression tasks assume each domain value as equally important. Second, standard evaluation metrics focus on assessing the

    Updated: 2020-09-05
  • Ada-boundary: accelerating DNN training via adaptive boundary batch selection
    Mach. Learn. (IF 2.672) Pub Date : 2020-09-04
    Hwanjun Song, Sundong Kim, Minseok Kim, Jae-Gil Lee

    Neural networks converge faster with help from a smart batch selection strategy. In this regard, we propose Ada-Boundary, a novel and simple adaptive batch selection algorithm that constructs an effective mini-batch according to the learning progress of the model. Our key idea is to exploit confusing samples for which the model cannot predict labels with high confidence. Thus, samples near the current
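    A minimal sketch of boundary-focused batch selection in the spirit described above — ranking samples by their prediction margin and taking the most confusing ones — assuming a model that outputs class probabilities; Ada-Boundary's actual sampling scheme is more elaborate:

```python
import numpy as np

def boundary_batch(probs, batch_size):
    """Select indices of samples closest to the decision boundary.

    probs: (n_samples, n_classes) predicted class probabilities.
    A sample is 'confusing' when its top two class probabilities
    are nearly equal, i.e. the prediction margin is small.
    """
    part = np.sort(probs, axis=1)
    margin = part[:, -1] - part[:, -2]      # top-1 minus top-2 probability
    return np.argsort(margin)[:batch_size]  # smallest margins first

# toy usage: 3-class predictions for 6 samples
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(3), size=6)
print(boundary_batch(p, batch_size=2))
```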

    Updated: 2020-09-05
  • Skew Gaussian processes for classification
    Mach. Learn. (IF 2.672) Pub Date : 2020-09-04
    Alessio Benavoli, Dario Azzimonti, Dario Piga

    Gaussian processes (GPs) are distributions over functions, which provide a Bayesian nonparametric approach to regression and classification. In spite of their success, GPs have limited use in some applications; for example, in some cases a distribution symmetric with respect to its mean is an unreasonable model. This implies, for instance, that the mean and the median coincide, while the mean and median

    Updated: 2020-09-05
  • A decision-theoretic approach for model interpretability in Bayesian framework
    Mach. Learn. (IF 2.672) Pub Date : 2020-09-04
    Homayun Afrabandpey, Tomi Peltola, Juho Piironen, Aki Vehtari, Samuel Kaski

    A salient approach to interpretable machine learning is to restrict modeling to simple models. In the Bayesian framework, this can be pursued by restricting the model structure and prior to favor interpretable models. Fundamentally, however, interpretability is about users’ preferences, not the data generation mechanism; it is more natural to formulate interpretability as a utility function. In this

    Updated: 2020-09-05
  • Weak approximation of transformed stochastic gradient MCMC
    Mach. Learn. (IF 2.672) Pub Date : 2020-09-04
    Soma Yokoi, Takuma Otsuka, Issei Sato

    Stochastic gradient Langevin dynamics (SGLD) is a computationally efficient sampler for Bayesian posterior inference given a large-scale dataset and a complex model. Although SGLD is designed for unbounded random variables, practical models often incorporate variables within a bounded domain, such as non-negative values or a finite interval. The use of variable transformation is a typical way to handle such
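    To make the transformation idea concrete, here is a minimal full-gradient Langevin sketch (not the paper's SGLD analysis) that samples a non-negative Gamma variable through the unbounded reparameterization \(\theta = e^x\); the Jacobian term is folded into the transformed log-density:

```python
import numpy as np

# Langevin sampling of a Gamma(a, b) variable theta > 0 via theta = exp(x).
# Transformed log-density: log pi(x) = a*x - b*exp(x) (Jacobian term included).
a, b = 2.0, 1.0
eps, n_steps = 1e-3, 50_000
rng = np.random.default_rng(0)

x = 0.0
samples = []
for _ in range(n_steps):
    grad = a - b * np.exp(x)              # d/dx log pi(x)
    x += 0.5 * eps * grad + np.sqrt(eps) * rng.standard_normal()
    samples.append(np.exp(x))             # map back to the constrained domain

print(np.mean(samples))  # should approach the Gamma mean a/b = 2.0
```

    SGLD proper would replace grad with a mini-batch estimate of the gradient; the weak-approximation question studied in the paper is how such discretized, transformed schemes deviate from the target posterior.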

    Updated: 2020-09-05
  • Co-eye: a multi-resolution ensemble classifier for symbolically approximated time series
    Mach. Learn. (IF 2.672) Pub Date : 2020-08-26
    Zahraa S. Abdallah, Mohamed Medhat Gaber

    Time series classification (TSC) is a challenging task that has attracted many researchers in the last few years. One main challenge in TSC is the diversity of domains where time series data come from. Thus, there is no “one model that fits all” in TSC. Some algorithms are very accurate in classifying a specific type of time series when the whole series is considered, while some only target the existence/non-existence
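    Co-eye operates on symbolically approximated series; a minimal sketch of the standard SAX pipeline it builds on (z-normalization, piecewise aggregate approximation, Gaussian breakpoints) — the multi-resolution ensemble itself is not shown:

```python
import numpy as np
from scipy.stats import norm

def sax(series, n_segments, alphabet_size):
    """Symbolic Aggregate approXimation of a 1-D time series."""
    s = (series - series.mean()) / series.std()   # z-normalize
    paa = s.reshape(n_segments, -1).mean(axis=1)  # piecewise aggregate means
    # breakpoints split N(0,1) into equiprobable regions
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, paa)   # region index per segment
    return "".join(chr(ord("a") + k) for k in symbols)

t = np.linspace(0, 2 * np.pi, 64)
print(sax(np.sin(t), n_segments=8, alphabet_size=4))  # an 8-letter word over a..d
```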

    Updated: 2020-08-27
  • Bonsai: diverse and shallow trees for extreme multi-label classification
    Mach. Learn. (IF 2.672) Pub Date : 2020-08-23
    Sujay Khandagale, Han Xiao, Rohit Babbar

    Extreme multi-label classification (XMC) refers to supervised multi-label learning involving hundreds of thousands or even millions of labels. In this paper, we develop a suite of algorithms, called Bonsai, which generalizes the notion of label representation in XMC, and partitions the labels in the representation space to learn shallow trees. We show three concrete realizations of this label representation

    Updated: 2020-08-24
  • Ensembles of extremely randomized predictive clustering trees for predicting structured outputs
    Mach. Learn. (IF 2.672) Pub Date : 2020-08-17
    Dragi Kocev, Michelangelo Ceci, Tomaž Stepišnik

    We address the task of learning ensembles of predictive models for structured output prediction (SOP). We focus on three SOP tasks: multi-target regression (MTR), multi-label classification (MLC) and hierarchical multi-label classification (HMC). In contrast to standard classification and regression, where the output is a single (discrete or continuous) variable, in SOP the output is a data structure—a

    Updated: 2020-08-18
  • Interpretable clustering: an optimization approach
    Mach. Learn. (IF 2.672) Pub Date : 2020-08-16
    Dimitris Bertsimas, Agni Orfanoudaki, Holly Wiberg

    State-of-the-art clustering algorithms provide little insight into the rationale for cluster membership, limiting their interpretability. In complex real-world applications, the latter poses a barrier to machine learning adoption when experts are asked to provide detailed explanations of their algorithms’ recommendations. We present a new unsupervised learning method that leverages Mixed Integer Optimization

    Updated: 2020-08-17
  • Learning representations from dendrograms
    Mach. Learn. (IF 2.672) Pub Date : 2020-08-16
    Morteza Haghir Chehreghani, Mostafa Haghir Chehreghani

    We propose unsupervised representation learning and feature extraction from dendrograms. The commonly used Minimax distance measures correspond to building a dendrogram with the single-linkage criterion, with specific forms of a level function and a distance function defined over it. Therefore, we extend this method to arbitrary dendrograms. We develop a generalized framework wherein different distance

    Updated: 2020-08-17
  • Using error decay prediction to overcome practical issues of deep active learning for named entity recognition
    Mach. Learn. (IF 2.672) Pub Date : 2020-08-05
    Haw-Shiuan Chang, Shankar Vembu, Sunil Mohan, Rheeya Uppaal, Andrew McCallum

    Existing deep active learning algorithms achieve impressive sampling efficiency on natural language processing tasks. However, they exhibit several weaknesses in practice, including (a) inability to use uncertainty sampling with black-box models, (b) lack of robustness to labeling noise, and (c) lack of transparency. In response, we propose a transparent batch active sampling framework by estimating

    Updated: 2020-08-06
  • Predicting rice phenotypes with meta and multi-target learning
    Mach. Learn. (IF 2.672) Pub Date : 2020-08-02
    Oghenejokpeme I. Orhobor, Nickolai N. Alexandrov, Ross D. King

    The features in some machine learning datasets can naturally be divided into groups. This is the case with genomic data, where features can be grouped by chromosome. In many applications it is common for these groupings to be ignored, as interactions may exist between features belonging to different groups. However, including a group that does not influence a response introduces noise when fitting

    Updated: 2020-08-03
  • Node classification over bipartite graphs through projection
    Mach. Learn. (IF 2.672) Pub Date : 2020-07-28
    Marija Stankova, Stiene Praet, David Martens, Foster Provost

    Many real-world large datasets correspond to bipartite graph data settings—think for example of users rating movies or people visiting locations. Although there has been some prior work on data analysis with such bigraphs, no general network-oriented methodology has been proposed yet to perform node classification. In this paper we propose a three-stage classification framework that effectively deals
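    As an illustration of the projection idea (our own minimal weighting, not necessarily the authors' scheme), a bigraph of users and movies can be collapsed onto the user side by counting shared neighbours:

```python
import numpy as np

# Bipartite adjacency: rows = users (target nodes), columns = movies.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])

# Projection onto users: W[i, j] = number of movies users i and j share.
W = A @ A.T
np.fill_diagonal(W, 0)   # drop self-loops
print(W)
```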

    Updated: 2020-07-29
  • Unsupervised representation learning with Minimax distance measures
    Mach. Learn. (IF 2.672) Pub Date : 2020-07-28
    Morteza Haghir Chehreghani

    We investigate the use of Minimax distances to extract in a nonparametric way the features that capture the unknown underlying patterns and structures in the data. We develop a general-purpose and computationally efficient framework to employ Minimax distances with many machine learning methods that perform on numerical data. We study both computing the pairwise Minimax distances for all pairs of objects
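    The Minimax distance between two points is the smallest possible value, over all connecting paths, of the largest edge on the path. A direct \(O(n^3)\) sketch via a min-max closure (in practice it can be computed more efficiently on a minimum spanning tree):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def minimax_distances(X):
    """All-pairs Minimax distances: minimize, over paths, the largest hop."""
    D = squareform(pdist(X))   # pairwise Euclidean distances
    M = D.copy()
    for k in range(len(X)):    # Floyd-Warshall on the (min, max) semiring
        M = np.minimum(M, np.maximum(M[:, k:k+1], M[k:k+1, :]))
    return M

X = np.array([[0.0], [1.0], [2.1], [10.0]])
print(minimax_distances(X))    # the 0-2 distance drops to 1.1 via point 1
```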

    Updated: 2020-07-29
  • Embedding-based Silhouette community detection
    Mach. Learn. (IF 2.672) Pub Date : 2020-07-27
    Blaž Škrlj, Jan Kralj, Nada Lavrač

    Mining complex data in the form of networks is of increasing interest in many scientific disciplines. Network communities correspond to densely connected subnetworks, and often represent key functional parts of real-world systems. This paper proposes the embedding-based Silhouette community detection (SCD), an approach for detecting communities, based on clustering of network node embeddings, i.e.

    Updated: 2020-07-27
  • The voice of optimization
    Mach. Learn. (IF 2.672) Pub Date : 2020-07-19
    Dimitris Bertsimas, Bartolomeo Stellato

    We introduce the idea that using optimal classification trees (OCTs) and optimal classification trees with hyperplanes (OCT-Hs), interpretable machine learning algorithms developed by Bertsimas and Dunn (Mach Learn 106(7):1039–1082, 2017), we are able to obtain insight into the strategy behind the optimal solution in continuous and mixed-integer convex optimization problems as a function of key parameters

    Updated: 2020-07-20
  • Reflections on reciprocity in research.
    Mach. Learn. (IF 2.672) Pub Date : 2020-07-16
    Peter A Flach

    Updated: 2020-07-16
  • Double random forest
    Mach. Learn. (IF 2.672) Pub Date : 2020-07-02
    Sunwoo Han, Hyunjoong Kim, Yung-Seop Lee

    Random forest (RF) is one of the most popular parallel ensemble methods, using decision trees as classifiers. One of the hyper-parameters to choose when fitting an RF is the nodesize, which determines the individual tree size. In this paper, we begin with the observation that for many data sets (34 out of 58), the best RF prediction accuracy is achieved when the trees are grown fully by minimizing
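    A minimal scikit-learn sketch of this observation, with min_samples_leaf playing the role of nodesize (leaf size 1 grows each tree fully); the paper's Double RF procedure itself is not implemented here:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
for leaf in (1, 5, 25):   # leaf=1 grows each tree fully
    rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=leaf,
                                random_state=0)
    acc = cross_val_score(rf, X, y, cv=5).mean()
    print(f"min_samples_leaf={leaf}: accuracy={acc:.3f}")
```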

    Updated: 2020-07-03
  • Propositionalization and embeddings: two sides of the same coin.
    Mach. Learn. (IF 2.672) Pub Date : 2020-06-28
    Nada Lavrač, Blaž Škrlj, Marko Robnik-Šikonja

    Data preprocessing is an important component of machine learning pipelines, which requires ample time and resources. An integral part of preprocessing is data transformation into the format required by a given learning algorithm. This paper outlines some of the modern data processing techniques used in relational learning that enable data fusion from different input data types and formats into a single

    Updated: 2020-06-28
  • An empirical analysis of binary transformation strategies and base algorithms for multi-label learning
    Mach. Learn. (IF 2.672) Pub Date : 2020-06-10
    Adriano Rivolli, Jesse Read, Carlos Soares, Bernhard Pfahringer, André C. P. L. F. de Carvalho

    Investigating strategies that are able to efficiently deal with multi-label classification tasks is a current research topic in machine learning. Many methods have been proposed, making the selection of the most suitable strategy a challenging issue. From this premise, this paper presents an extensive empirical analysis of the binary transformation strategies and base algorithms for multi-label learning
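    The simplest binary transformation, binary relevance, fits one independent base classifier per label; a minimal scikit-learn sketch of this one strategy among those the paper compares:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=200, n_labels=3, random_state=0)
# Binary relevance: an independent logistic regression per label column.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(br.predict(X[:2]))   # one 0/1 prediction per label
```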

    Updated: 2020-06-10
  • Correction to: Efficient feature selection using shrinkage estimators
    Mach. Learn. (IF 2.672) Pub Date : 2020-06-04
    Konstantinos Sechidis, Laura Azzimonti, Adam Pocock, Giorgio Corani, James Weatherall, Gavin Brown

    There was a mistake in the proof of the optimal shrinkage intensity for our estimator presented in Section 3.1.

    Updated: 2020-06-04
  • Correction to: Robust classification via MOM minimization
    Mach. Learn. (IF 2.672) Pub Date : 2020-06-03
    Guillaume Lecué, Matthieu Lerasle, Timothée Mathieu

    There is a mistake in one of the authors’ names (in both online and print versions of the article): it should be Timothée Mathieu instead of Timlothée Mathieu.

    Updated: 2020-06-03
  • Anomaly detection with inexact labels
    Mach. Learn. (IF 2.672) Pub Date : 2020-05-31
    Tomoharu Iwata, Machiko Toyoda, Shotaro Tora, Naonori Ueda

    We propose a supervised anomaly detection method for data with inexact anomaly labels, where each label, which is assigned to a set of instances, indicates that at least one instance in the set is anomalous. Although many anomaly detection methods have been proposed, they cannot handle inexact anomaly labels. To measure the performance with inexact anomaly labels, we define the inexact AUC, which is

    Updated: 2020-05-31
  • Transfer learning by mapping and revising boosted relational dependency networks
    Mach. Learn. (IF 2.672) Pub Date : 2020-05-11
    Rodrigo Azevedo Santos, Aline Paes, Gerson Zaverucha

    Statistical machine learning algorithms usually assume the availability of data of considerable size to train the models. However, they would fail in addressing domains where data is difficult or expensive to obtain. Transfer learning has emerged to address this problem of learning from scarce data by relying on a model learned in a source domain, where data is easy to obtain, as a starting point

    Updated: 2020-05-11
  • Robust classification via MOM minimization
    Mach. Learn. (IF 2.672) Pub Date : 2020-04-27
    Guillaume Lecué, Matthieu Lerasle, Timothée Mathieu

    We present an extension of Chervonenkis and Vapnik’s classical empirical risk minimization (ERM) where the empirical risk is replaced by a median-of-means (MOM) estimator of the risk. The resulting new estimators are called MOM minimizers. While ERM is sensitive to corruption of the dataset for many classical loss functions used in classification, we show that MOM minimizers behave well in theory,
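    The median-of-means estimator at the heart of MOM minimizers replaces the empirical mean of the losses with the median of block-wise means, which a few corrupted points cannot move; a minimal sketch:

```python
import numpy as np

def median_of_means(losses, n_blocks):
    """MOM estimate of the risk: median of block-wise empirical means."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(losses))
    blocks = np.array_split(losses[idx], n_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(1)
losses = rng.normal(1.0, 1.0, size=1000)
losses[:10] = 1e6                      # a few corrupted points
print(losses.mean())                   # ruined by the outliers (~10001)
print(median_of_means(losses, 20))     # still close to the true risk 1.0
```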

    Updated: 2020-04-27
  • Engineering problems in machine learning systems
    Mach. Learn. (IF 2.672) Pub Date : 2020-04-23
    Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae

    Fatal accidents are a major issue hindering the wide acceptance of safety-critical systems that employ machine learning and deep learning models, such as automated driving vehicles. In order to use machine learning in a safety-critical system, it is necessary to demonstrate the safety and security of the system through engineering processes. However, thus far, no such widely accepted engineering concepts

    Updated: 2020-04-24
  • Learning from positive and unlabeled data: a survey
    Mach. Learn. (IF 2.672) Pub Date : 2020-04-02
    Jessa Bekker, Jesse Davis

    Learning from positive and unlabeled data or PU learning is the setting where a learner only has access to positive examples and unlabeled data. The assumption is that the unlabeled data can contain both positive and negative examples. This setting has attracted increasing interest within the machine learning literature as this type of data naturally arises in applications such as medical diagnosis
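    A minimal sketch of one classic recipe covered by such surveys (Elkan and Noto's), assuming labels are selected completely at random (SCAR): train a positive-vs-unlabeled classifier, estimate the label frequency c on the labeled positives, and rescale:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pu_scores(X, s, X_test):
    """Elkan-Noto PU scoring: s = 1 for labeled positives, 0 for unlabeled.

    Under SCAR, p(y=1|x) = p(s=1|x) / c, where c = p(s=1|y=1) is
    estimated as the mean classifier score on the labeled positives.
    """
    clf = LogisticRegression(max_iter=1000).fit(X, s)
    c = clf.predict_proba(X[s == 1])[:, 1].mean()
    return np.clip(clf.predict_proba(X_test)[:, 1] / c, 0, 1)

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)
s = y * (rng.random(300) < 0.3)        # only 30% of positives get labeled
print(pu_scores(X, s, X[:5]).round(2))
```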

    Updated: 2020-04-22
  • Classification using proximity catch digraphs
    Mach. Learn. (IF 2.672) Pub Date : 2020-03-31
    Artür Manukyan, Elvan Ceyhan

    We employ random geometric digraphs to construct semi-parametric classifiers. These data-random digraphs belong to parameterized random digraph families called proximity catch digraphs (PCDs). A related geometric digraph family, class cover catch digraph (CCCD), has been used to solve the class cover problem by using its approximate minimum dominating set and has shown relatively good performance in the

    Updated: 2020-04-22
  • Discovering subjectively interesting multigraph patterns
    Mach. Learn. (IF 2.672) Pub Date : 2020-03-16
    Sarang Kapoor, Dhish Kumar Saxena, Matthijs van Leeuwen

    Over the past decade, network analysis has attracted substantial interest because of its potential to solve many real-world problems. This paper lays the conceptual foundation for an application in aviation by focusing on the discovery of patterns in multigraphs (graphs in which multiple edges can be present between vertices). Our main contributions are twofold. Firstly, we propose a novel subjective

    Updated: 2020-04-22
  • Detecting anomalous packets in network transfers: investigations using PCA, autoencoder and isolation forest in TCP
    Mach. Learn. (IF 2.672) Pub Date : 2020-03-12
    Mariam Kiran, Cong Wang, George Papadimitriou, Anirban Mandal, Ewa Deelman

    Large-scale scientific workflows rely heavily on high-performance file transfers. These transfers require strict quality parameters such as guaranteed bandwidth, no packet loss and no data duplication. For successful file transfers, methods such as predetermined thresholds and statistical analysis are needed to detect abnormal patterns. Network administrators routinely monitor and analyze network
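    Of the three detectors investigated, isolation forest is the quickest to demonstrate; minimal scikit-learn usage on synthetic transfer features (illustrative only, not the paper's TCP feature set):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 4))     # e.g. per-packet statistics
anomalies = rng.normal(6, 1, size=(10, 4))   # injected abnormal transfers
X = np.vstack([normal, anomalies])

iso = IsolationForest(n_estimators=100, contamination=0.02,
                      random_state=0).fit(X)
flags = iso.predict(X)                       # -1 = anomaly, +1 = normal
print((flags[-10:] == -1).mean())            # fraction of injected anomalies caught
```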

    Updated: 2020-04-22
  • Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-04
    Yongchan Kwon, Wonyoung Kim, Masashi Sugiyama, Myunghee Cho Paik

    We consider the problem of learning a binary classifier from only positive and unlabeled observations (called PU learning). Recent studies in PU learning have shown superior performance theoretically and empirically. However, most existing algorithms may not be suitable for large-scale datasets because they face repeated computations of a large Gram matrix or require massive hyperparameter optimization

    Updated: 2020-04-22
  • Gradient descent optimizes over-parameterized deep ReLU networks
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-23
    Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu

    We study the problem of training deep fully connected neural networks with the Rectified Linear Unit (ReLU) activation function and cross entropy loss function for binary classification using gradient descent. We show that with proper random weight initialization, gradient descent can find the global minima of the training loss for an over-parameterized deep ReLU network, under certain assumptions on the

    Updated: 2020-04-22
  • Rank minimization on tensor ring: an efficient approach for tensor decomposition and completion
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-04
    Longhao Yuan, Chao Li, Jianting Cao, Qibin Zhao

    In recent studies, tensor ring decomposition (TRD) has become a promising model for tensor completion. However, TRD suffers from the rank selection problem due to the undetermined multilinear rank. For tensor decomposition with missing entries, the sub-optimal rank selection of traditional methods leads to the overfitting/underfitting problem. In this paper, we first explore the latent space of the

    Updated: 2020-04-22
  • Multi-label optimal margin distribution machine
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-10
    Zhi-Hao Tan, Peng Tan, Yuan Jiang, Zhi-Hua Zhou

    Multi-label support vector machine (Rank-SVM) is a classic and effective algorithm for multi-label classification. The pivotal idea is to maximize the minimum margin of label pairs, which is extended from SVM. However, recent studies disclosed that maximizing the minimum margin does not necessarily lead to better generalization performance, and instead, it is more crucial to optimize the margin distribution

    Updated: 2020-04-22
  • Joint consensus and diversity for multi-view semi-supervised classification
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-07
    Wenzhang Zhuge, Chenping Hou, Shaoliang Peng, Dongyun Yi

    As data can be acquired in an ever-increasing number of ways, multi-view data is becoming more and more available. Considering the high price of labeling data in many machine learning applications, we focus on multi-view semi-supervised classification problem. To address this problem, in this paper, we propose a method called joint consensus and diversity for multi-view semi-supervised classification

    Updated: 2020-04-22
  • Handling concept drift via model reuse
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-10
    Peng Zhao, Le-Wen Cai, Zhi-Hua Zhou

    In many real-world applications, data are often collected in the form of a stream, and thus the underlying distribution usually changes over time, which is referred to as concept drift in the literature. We propose a novel and effective approach to handle concept drift via model reuse, that is, reusing models trained on previous data to tackle the changes. Each model is associated with a weight representing its

    Updated: 2020-04-22
  • Communication-efficient distributed multi-task learning with matrix sparsity regularization
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-07
    Qiang Zhou, Yu Chen, Sinno Jialin Pan

    This work focuses on distributed optimization for multi-task learning with matrix sparsity regularization. We propose a fast communication-efficient distributed optimization method for solving the problem. With the proposed method, training data of different tasks can be geo-distributed over different local machines, and the tasks can be learned jointly through the matrix sparsity regularization without

    Updated: 2020-04-22
  • Few-shot learning with adaptively initialized task optimizer: a practical meta-learning approach
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-10
    Han-Jia Ye, Xiang-Rong Sheng, De-Chuan Zhan

    Considering the data collection and labeling cost in real-world applications, training a model with limited examples is an essential problem in machine learning, visual recognition, etc. Directly training a model on such few-shot learning (FSL) tasks falls into an over-fitting dilemma, so one turns to an effective task-level inductive bias as a key form of supervision. By treating the few-shot task as

    Updated: 2020-04-22
  • Skill-based curiosity for intrinsically motivated reinforcement learning
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-10
    Nicolas Bougie, Ryutaro Ichise

    Reinforcement learning methods rely on rewards provided by the environment that are extrinsic to the agent. However, many real-world scenarios involve sparse or delayed rewards. In such cases, the agent can develop its own intrinsic reward function, called curiosity, to enable it to explore its environment in the quest for new skills. We propose a novel end-to-end curiosity mechanism for deep reinforcement

    Updated: 2020-04-22
  • Classification with costly features as a sequential decision-making problem
    Mach. Learn. (IF 2.672) Pub Date : 2020-02-28
    Jaromír Janisch, Tomáš Pevný, Viliam Lisý

    This work focuses on a specific classification problem, where the information about a sample is not readily available but has to be acquired for a cost, and there is a per-sample budget. Inspired by real-world use-cases, we analyze average and hard variations of a directly specified budget. We postulate the problem in its explicit formulation and then convert it into an equivalent MDP that can be

    Updated: 2020-04-22
  • Joint maximization of accuracy and information for learning the structure of a Bayesian network classifier
    Mach. Learn. (IF 2.672) Pub Date : 2020-02-28
    Dan Halbersberg, Maydan Wienreb, Boaz Lerner

    Although recent studies have shown that a Bayesian network classifier (BNC) that maximizes the classification accuracy (i.e., minimizes the 0/1 loss function) is a powerful tool in both knowledge representation and classification, this classifier: (1) focuses on the majority class and, therefore, misclassifies minority classes; (2) is usually uninformative about the distribution of misclassifications;

    Updated: 2020-04-22
  • Scalable Bayesian preference learning for crowds
    Mach. Learn. (IF 2.672) Pub Date : 2020-02-06
    Edwin Simpson, Iryna Gurevych

    We propose a scalable Bayesian preference learning method for jointly predicting the preferences of individuals as well as the consensus of a crowd from pairwise labels. People’s opinions often differ greatly, making it difficult to predict their preferences from small amounts of personal data. Individual biases also make it harder to infer the consensus of a crowd when there are few labels per item

    Updated: 2020-04-22
  • A survey on semi-supervised learning
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-15
    Jesper E. van Engelen, Holger H. Hoos

    Semi-supervised learning is the branch of machine learning concerned with using labelled as well as unlabelled data to perform certain learning tasks. Conceptually situated between supervised and unsupervised learning, it permits harnessing the large amounts of unlabelled data available in many use cases in combination with typically smaller sets of labelled data. In recent years, research in this
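    One of the oldest wrapper methods in this landscape is self-training: iteratively pseudo-label the unlabeled points the current model is confident about. A minimal sketch with scikit-learn's SelfTrainingClassifier, where unlabeled points are marked -1:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=400, random_state=0)
y_semi = y.copy()
rng = np.random.default_rng(0)
y_semi[rng.random(400) < 0.9] = -1   # hide 90% of the labels

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000),
                               threshold=0.9).fit(X, y_semi)
print(model.score(X, y))             # evaluated against the full ground truth
```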

    Updated: 2020-04-22
  • Predictive spreadsheet autocompletion with constraints
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-25
    Samuel Kolb, Stefano Teso, Anton Dries, Luc De Raedt

    Spreadsheets are arguably the most accessible data-analysis tool and are used by millions of people. Despite the fact that they lie at the core of most business practices, working with spreadsheets can be error prone, usage of formulas requires training and, crucially, spreadsheet users do not have access to state-of-the-art analysis techniques offered by machine learning. To tackle these issues, we

    Updated: 2020-04-22
  • Online Bayesian max-margin subspace learning for multi-view classification and regression
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-25
    Jia He, Changying Du, Fuzhen Zhuang, Xin Yin, Qing He, Guoping Long

    Multi-view data have become increasingly popular in many real-world applications where data are generated from different information channels or different views such as image + text, audio + video, and webpage + link data. Recent decades have witnessed a number of studies devoted to multi-view learning algorithms, especially the predictive latent subspace learning approaches which aim at obtaining a

    Updated: 2020-04-22
  • A bad arm existence checking problem: How to utilize asymmetric problem structure?
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-30
    Koji Tabata, Atsuyoshi Nakamura, Junya Honda, Tamiki Komatsuzaki

    We study a bad arm existence checking problem in a stochastic K-armed bandit setting, in which a player’s task is to judge whether a positive arm exists or all the arms are negative among given K arms by drawing as few arms as possible. Here, an arm is positive if the expected loss suffered by drawing it is at least a given threshold \(\theta_U\), and it is negative if that

    Updated: 2020-03-02
  • An evaluation of machine-learning for predicting phenotype: studies in yeast, rice, and wheat
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-23
    Nastasiya F. Grinberg, Oghenejokpeme I. Orhobor, Ross D. King

    In phenotype prediction the physical characteristics of an organism are predicted from knowledge of its genotype and environment. Such studies, often called genome-wide association studies, are of great societal importance, as they are central to medicine, crop-breeding, etc. We investigated three phenotype prediction problems: one simple and clean (yeast), and the other

    Updated: 2020-03-02
  • Sparse hierarchical regression with polynomials
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-24
    Dimitris Bertsimas, Bart Van Parys

    We present a novel method for sparse polynomial regression. We are interested in the degree-r polynomial that depends on at most k inputs, counts at most \(\ell\) monomial terms, and minimizes the sum of the squares of its prediction errors. Such highly structured sparse regression was denoted by Bach (Advances in neural information processing systems, pp 105–112, 2009) as sparse hierarchical regression

    Updated: 2020-01-24
  • Improving coordination in small-scale multi-agent deep reinforcement learning through memory-driven communication
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-23
    Emanuele Pesce, Giovanni Montana

    Deep reinforcement learning algorithms have recently been used to train multiple interacting agents in a centralised manner whilst keeping their execution decentralised. When the agents can only acquire partial observations and are faced with tasks requiring coordination and synchronisation skills, inter-agent communication plays an essential role. In this work, we propose a framework for

    Updated: 2020-01-23
  • Sum–product graphical models
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-27
    Mattia Desana, Christoph Schnörr

    This paper introduces a probabilistic architecture called sum–product graphical model (SPGM). SPGMs represent a class of probability distributions that combines, for the first time, the semantics of probabilistic graphical models (GMs) with the evaluation efficiency of sum–product networks (SPNs): Like SPNs, SPGMs always enable tractable inference using a class of models that incorporate context specific

    Updated: 2020-01-17
  • Analysis of Hannan consistent selection for Monte Carlo tree search in simultaneous move games
    Mach. Learn. (IF 2.672) Pub Date : 2019-07-25
    Vojtěch Kovařík, Viliam Lisý

    Hannan consistency, or no external regret, is a key concept for learning in games. An action selection algorithm is Hannan consistent (HC) if its performance is eventually as good as selecting the best fixed action in hindsight. If both players in a zero-sum normal form game use a Hannan consistent algorithm, their average behavior converges to a Nash equilibrium of the game. A similar result

    Updated: 2020-01-17
  • Provable accelerated gradient method for nonconvex low rank optimization
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-26
    Huan Li, Zhouchen Lin

    Optimization over low rank matrices has broad applications in machine learning. For large-scale problems, an attractive heuristic is to factorize the low rank matrix into a product of two much smaller matrices. In this paper, we study the nonconvex problem \(\min_{\mathbf{U}\in\mathbb{R}^{n\times r}} g(\mathbf{U})=f(\mathbf{U}\mathbf{U}^T)\) under the assumptions that \(f(\mathbf{X})\) is restricted
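    The factorization heuristic in one toy case: minimizing \(g(\mathbf{U})=f(\mathbf{U}\mathbf{U}^T)\) with \(f(\mathbf{X})=\frac{1}{2}\Vert\mathbf{X}-\mathbf{M}\Vert_F^2\) by plain gradient descent (the paper adds acceleration and a restart scheme on top of this):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 3
V = rng.normal(size=(n, r)) / np.sqrt(n)
M = V @ V.T                          # rank-r PSD target matrix

U = 0.1 * rng.normal(size=(n, r))    # small random initialization
lr = 0.05
for _ in range(5000):
    U -= lr * 2 * (U @ U.T - M) @ U  # gradient of 0.5*||UU^T - M||_F^2

print(np.linalg.norm(U @ U.T - M))   # should be near zero: the global minimum
```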

    Updated: 2020-01-17
  • Rankboost+: an improvement to Rankboost
    Mach. Learn. (IF 2.672) Pub Date : 2019-08-12
    Harold Connamacher, Nikil Pancha, Rui Liu, Soumya Ray

    Rankboost is a well-known algorithm that iteratively creates and aggregates a collection of “weak rankers” to build an effective ranking procedure. Initial work on Rankboost proposed two variants. One variant, which we call Rb-d and which is designed for the scenario where all weak rankers have the binary range \(\{0,1\}\), has good theoretical properties, but does not perform well in practice

    Updated: 2020-01-17
  • Combining Bayesian optimization and Lipschitz optimization
    Mach. Learn. (IF 2.672) Pub Date : 2019-08-22
    Mohamed Osama Ahmed, Sharan Vaswani, Mark Schmidt

    Bayesian optimization and Lipschitz optimization are alternative techniques for optimizing black-box functions, each exploiting a different form of prior knowledge about the function. In this work, we explore strategies to combine these techniques for better global optimization. In particular, we propose ways to use the Lipschitz continuity assumption within traditional BO algorithms, which

    Updated: 2020-01-17
  • Kappa Updated Ensemble for drifting data stream mining
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-02
    Alberto Cano, Bartosz Krawczyk

    Learning from data streams in the presence of concept drift is among the biggest challenges of contemporary machine learning. Algorithms designed for such scenarios must take into account the potentially unbounded size of the data, its constantly changing nature, and the requirement for real-time processing. Ensemble approaches for data stream mining have gained significant popularity, due to their

    Updated: 2020-01-17
  • Conditional density estimation and simulation through optimal transport
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-13
    Esteban G. Tabak, Giulio Trigila, Wenjun Zhao

    A methodology to estimate from samples the probability density of a random variable x conditional to the values of a set of covariates \(\{z_{l}\}\) is proposed. The methodology relies on a data-driven formulation of the Wasserstein barycenter, posed as a minimax problem in terms of the conditional map carrying each sample point to the barycenter and a potential characterizing the inverse of this map

    Updated: 2020-01-13
  • High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-07
    Tianbao Yang, Lijun Zhang, Qihang Lin, Shenghuo Zhu, Rong Jin

    Learning from large-scale and high-dimensional data still remains a computationally challenging problem, though it has received increasing interest recently. To address this issue, randomized reduction methods have been developed by either reducing the dimensionality or reducing the number of training instances to obtain a small sketch of the original data. In this paper, we focus on recovering a high-dimensional

    Updated: 2020-01-07
  • A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-04
    Kshitij Khare, Sang-Yun Oh, Syed Rahman, Bala Rajaratnam

    Covariance estimation for high-dimensional datasets is a fundamental problem in machine learning, and has numerous applications. In these high-dimensional settings the number of features or variables p is typically larger than the sample size n. A popular way of tackling this challenge is to induce sparsity in the covariance matrix, its inverse or a relevant transformation. In many applications

    Updated: 2020-01-04
  • Covariance-based dissimilarity measures applied to clustering wide-sense stationary ergodic processes
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-26
    Qidi Peng, Nan Rao, Ran Zhao

    We introduce a new unsupervised learning problem: clustering wide-sense stationary ergodic stochastic processes. A covariance-based dissimilarity measure together with asymptotically consistent algorithms is designed for clustering offline and online datasets, respectively. We also suggest a formal criterion on the efficiency of dissimilarity measures, and discuss an approach to improve the efficiency

    Updated: 2020-01-04
Contents have been reproduced by permission of the publishers.