Current journal: Machine Learning
  • Double random forest
    Mach. Learn. (IF 2.672) Pub Date : 2020-07-02
    Sunwoo Han, Hyunjoong Kim, Yung-Seop Lee

    Random forest (RF) is one of the most popular parallel ensemble methods, using decision trees as classifiers. One of the hyper-parameters to tune when fitting an RF is the nodesize, which determines the size of the individual trees. In this paper, we begin with the observation that for many data sets (34 out of 58), the best RF prediction accuracy is achieved when the trees are grown fully by minimizing

    Updated: 2020-07-03
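
    The paper's double random forest itself is not in scikit-learn, but the nodesize observation above is easy to reproduce: a minimal sketch, assuming scikit-learn's min_samples_leaf as the tree-size control (nodesize = 1 grows trees fully).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Compare fully grown trees (min_samples_leaf=1) against larger nodesizes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
for nodesize in (1, 5, 25):
    rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=nodesize,
                                random_state=0)
    acc = cross_val_score(rf, X, y, cv=5).mean()
    print(f"min_samples_leaf={nodesize}: cv accuracy={acc:.3f}")
```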
  • Propositionalization and embeddings: two sides of the same coin
    Mach. Learn. (IF 2.672) Pub Date : 2020-06-28
    Nada Lavrač, Blaž Škrlj, Marko Robnik-Šikonja

    Data preprocessing is an important component of machine learning pipelines, which requires ample time and resources. An integral part of preprocessing is data transformation into the format required by a given learning algorithm. This paper outlines some of the modern data processing techniques used in relational learning that enable data fusion from different input data types and formats into a single

    Updated: 2020-06-28
  • An empirical analysis of binary transformation strategies and base algorithms for multi-label learning
    Mach. Learn. (IF 2.672) Pub Date : 2020-06-10
    Adriano Rivolli, Jesse Read, Carlos Soares, Bernhard Pfahringer, André C. P. L. F. de Carvalho

    Investigating strategies that are able to efficiently deal with multi-label classification tasks is a current research topic in machine learning. Many methods have been proposed, making the selection of the most suitable strategy a challenging issue. From this premise, this paper presents an extensive empirical analysis of the binary transformation strategies and base algorithms for multi-label learning

    Updated: 2020-06-10
  • Correction to: Efficient feature selection using shrinkage estimators
    Mach. Learn. (IF 2.672) Pub Date : 2020-06-04
    Konstantinos Sechidis, Laura Azzimonti, Adam Pocock, Giorgio Corani, James Weatherall, Gavin Brown

    There was a mistake in the proof of the optimal shrinkage intensity for our estimator presented in Section 3.1.

    Updated: 2020-06-04
  • Correction to: Robust classification via MOM minimization
    Mach. Learn. (IF 2.672) Pub Date : 2020-06-03
    Guillaume Lecué, Matthieu Lerasle, Timothée Mathieu

    There is a mistake in one of the authors’ names (in both online and print versions of the article): it should be Timothée Mathieu instead of Timlothée Mathieu.

    Updated: 2020-06-03
  • Anomaly detection with inexact labels
    Mach. Learn. (IF 2.672) Pub Date : 2020-05-31
    Tomoharu Iwata, Machiko Toyoda, Shotaro Tora, Naonori Ueda

    We propose a supervised anomaly detection method for data with inexact anomaly labels, where each label, which is assigned to a set of instances, indicates that at least one instance in the set is anomalous. Although many anomaly detection methods have been proposed, they cannot handle inexact anomaly labels. To measure the performance with inexact anomaly labels, we define the inexact AUC, which is

    Updated: 2020-05-31
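
    A minimal sketch of the set-level ("inexact") labelling described above: a set is labelled anomalous when at least one member is, so a natural baseline scores each set by its maximum instance score. The instance scorer and the plain AUC evaluation here are illustrative stand-ins, not the paper's method or its inexact AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
instance_score = lambda S: np.abs(S).sum(axis=1)  # stand-in anomaly scorer

normal_sets = [rng.normal(size=(10, 2)) for _ in range(50)]
anom_sets = [np.vstack([rng.normal(size=(9, 2)),        # 9 normal members
                        rng.normal(5.0, 1.0, (1, 2))])  # 1 hidden anomaly
             for _ in range(50)]

# A set is anomalous if at least one member is, so score a set by its max.
scores = [instance_score(S).max() for S in normal_sets + anom_sets]
labels = [0] * 50 + [1] * 50
print("set-level AUC:", roc_auc_score(labels, scores))
```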
  • Transfer learning by mapping and revising boosted relational dependency networks
    Mach. Learn. (IF 2.672) Pub Date : 2020-05-11
    Rodrigo Azevedo Santos, Aline Paes, Gerson Zaverucha

    Statistical machine learning algorithms usually assume the availability of data of considerable size to train the models. However, they fail in domains where data is difficult or expensive to obtain. Transfer learning has emerged to address this problem of learning from scarce data by relying on a model learned in a source domain, where data is easy to obtain, as a starting point

    Updated: 2020-05-11
  • Robust classification via MOM minimization
    Mach. Learn. (IF 2.672) Pub Date : 2020-04-27
    Guillaume Lecué, Matthieu Lerasle, Timothée Mathieu

    We present an extension of Chervonenkis and Vapnik’s classical empirical risk minimization (ERM) where the empirical risk is replaced by a median-of-means (MOM) estimator of the risk. The resulting new estimators are called MOM minimizers. While ERM is sensitive to corruption of the dataset for many classical loss functions used in classification, we show that MOM minimizers behave well in theory,

    Updated: 2020-04-27
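
    A minimal sketch of the median-of-means idea named above, assuming per-sample losses are already available: split the samples into blocks, average within each block, and take the median of the block means. Outliers that drag the empirical mean barely move the MOM estimate.

```python
import numpy as np

def mom_risk(losses, n_blocks=11):
    """Median-of-means estimate of the risk from per-sample losses."""
    rng = np.random.default_rng(0)
    blocks = np.array_split(rng.permutation(np.asarray(losses)), n_blocks)
    return np.median([b.mean() for b in blocks])

# Five corrupted samples wreck the empirical mean but barely move MOM.
losses = np.concatenate([np.random.default_rng(1).normal(1.0, 0.1, 500),
                         np.full(5, 100.0)])
print("empirical mean:", losses.mean())     # ~1.98, dragged by corruption
print("MOM estimate:  ", mom_risk(losses))  # ~1.0
```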
  • Engineering problems in machine learning systems
    Mach. Learn. (IF 2.672) Pub Date : 2020-04-23
    Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae

    Fatal accidents are a major issue hindering the wide acceptance of safety-critical systems that employ machine learning and deep learning models, such as automated driving vehicles. In order to use machine learning in a safety-critical system, it is necessary to demonstrate the safety and security of the system through engineering processes. However, thus far, no such widely accepted engineering concepts

    Updated: 2020-04-24
  • Learning from positive and unlabeled data: a survey
    Mach. Learn. (IF 2.672) Pub Date : 2020-04-02
    Jessa Bekker, Jesse Davis

    Learning from positive and unlabeled data, or PU learning, is the setting where a learner only has access to positive examples and unlabeled data. The assumption is that the unlabeled data can contain both positive and negative examples. This setting has attracted increasing interest within the machine learning literature as this type of data naturally arises in applications such as medical diagnosis

    Updated: 2020-04-22
  • Classification using proximity catch digraphs
    Mach. Learn. (IF 2.672) Pub Date : 2020-03-31
    Artür Manukyan, Elvan Ceyhan

    We employ random geometric digraphs to construct semi-parametric classifiers. These data-random digraphs belong to parameterized random digraph families called proximity catch digraphs (PCDs). A related geometric digraph family, class cover catch digraph (CCCD), has been used to solve the class cover problem by using its approximate minimum dominating set and showed relatively good performance in the

    Updated: 2020-04-22
  • Discovering subjectively interesting multigraph patterns
    Mach. Learn. (IF 2.672) Pub Date : 2020-03-16
    Sarang Kapoor, Dhish Kumar Saxena, Matthijs van Leeuwen

    Over the past decade, network analysis has attracted substantial interest because of its potential to solve many real-world problems. This paper lays the conceptual foundation for an application in aviation, through focusing on the discovery of patterns in multigraphs (graphs in which multiple edges can be present between vertices). Our main contributions are twofold. Firstly, we propose a novel subjective

    Updated: 2020-04-22
  • Detecting anomalous packets in network transfers: investigations using PCA, autoencoder and isolation forest in TCP
    Mach. Learn. (IF 2.672) Pub Date : 2020-03-12
    Mariam Kiran, Cong Wang, George Papadimitriou, Anirban Mandal, Ewa Deelman

    Large-scale scientific workflows rely heavily on high-performance file transfers. These transfers require strict quality guarantees, such as guaranteed bandwidth and no packet loss or data duplication. To achieve successful file transfers, techniques such as predetermined thresholds and statistical analysis are needed to detect abnormal patterns. Network administrators routinely monitor and analyze network

    Updated: 2020-04-22
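
    Of the three detectors named in the title, isolation forest is the quickest to sketch. Below is a toy version on synthetic two-feature "transfer statistics" (throughput, loss rate); the features and values are hypothetical, not the paper's TCP measurements.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic (throughput MB/s, packet-loss rate) pairs: mostly healthy
# transfers, plus a few degraded ones.
normal = rng.normal([100.0, 0.01], [5.0, 0.005], size=(500, 2))
anomalous = rng.normal([40.0, 0.2], [5.0, 0.05], size=(10, 2))
X = np.vstack([normal, anomalous])

clf = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = clf.predict(X)  # +1 = normal, -1 = anomaly
print("flagged indices:", np.where(labels == -1)[0])
```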
  • Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-04
    Yongchan Kwon, Wonyoung Kim, Masashi Sugiyama, Myunghee Cho Paik

    We consider the problem of learning a binary classifier from only positive and unlabeled observations (called PU learning). Recent studies in PU learning have shown superior performance theoretically and empirically. However, most existing algorithms may not be suitable for large-scale datasets because they face repeated computations of a large Gram matrix or require massive hyperparameter optimization

    Updated: 2020-04-22
  • Gradient descent optimizes over-parameterized deep ReLU networks
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-23
    Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu

    We study the problem of training deep fully connected neural networks with the Rectified Linear Unit (ReLU) activation function and the cross-entropy loss function for binary classification using gradient descent. We show that with proper random weight initialization, gradient descent can find the global minima of the training loss for an over-parameterized deep ReLU network, under certain assumptions on the

    Updated: 2020-04-22
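
    A toy instance of the regime the abstract describes: a wide (over-parameterized) one-hidden-layer ReLU network trained by plain gradient descent on the logistic loss with random binary labels. Only the first layer is trained here, a common simplification in such analyses; this is an illustrative sketch, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, width = 50, 10, 1000                    # width >> n: over-parameterized
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)           # random binary labels
W = rng.normal(size=(d, width)) / np.sqrt(d)  # random first-layer init
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)  # fixed second layer

lr = 0.5
for step in range(500):
    H = np.maximum(X @ W, 0.0)                # hidden ReLU activations
    out = H @ a
    loss = np.log1p(np.exp(-y * out)).mean()  # logistic loss
    g_out = -y / (1.0 + np.exp(y * out)) / n  # d loss / d out
    W -= lr * X.T @ ((g_out[:, None] * a) * (H > 0))  # backprop, ReLU gate
print("final training loss:", loss)           # driven close to zero
```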
  • Rank minimization on tensor ring: an efficient approach for tensor decomposition and completion
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-04
    Longhao Yuan, Chao Li, Jianting Cao, Qibin Zhao

    In recent studies, tensor ring decomposition (TRD) has become a promising model for tensor completion. However, TRD suffers from the rank selection problem due to the undetermined multilinear rank. For tensor decomposition with missing entries, the sub-optimal rank selection of traditional methods leads to the overfitting/underfitting problem. In this paper, we first explore the latent space of the

    Updated: 2020-04-22
  • Multi-label optimal margin distribution machine
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-10
    Zhi-Hao Tan, Peng Tan, Yuan Jiang, Zhi-Hua Zhou

    Multi-label support vector machine (Rank-SVM) is a classic and effective algorithm for multi-label classification. The pivotal idea is to maximize the minimum margin of label pairs, which is extended from SVM. However, recent studies disclosed that maximizing the minimum margin does not necessarily lead to better generalization performance, and instead, it is more crucial to optimize the margin distribution

    Updated: 2020-04-22
  • Joint consensus and diversity for multi-view semi-supervised classification
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-07
    Wenzhang Zhuge, Chenping Hou, Shaoliang Peng, Dongyun Yi

    As data can be acquired in an ever-increasing number of ways, multi-view data is becoming more and more available. Considering the high price of labeling data in many machine learning applications, we focus on multi-view semi-supervised classification problem. To address this problem, in this paper, we propose a method called joint consensus and diversity for multi-view semi-supervised classification

    Updated: 2020-04-22
  • Handling concept drift via model reuse
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-10
    Peng Zhao, Le-Wen Cai, Zhi-Hua Zhou

    In many real-world applications, data are often collected in the form of a stream, and thus the distribution usually changes in nature, which is referred to as concept drift in the literature. We propose a novel and effective approach to handle concept drift via model reuse, that is, reusing models trained on previous data to tackle the changes. Each model is associated with a weight representing its

    Updated: 2020-04-22
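
    A generic sketch of weighted model reuse, assuming binary models with outputs in {-1, +1} and a simple multiplicative-weights update; the paper's actual reuse and weighting scheme differs. Models that err on recent stream items lose weight, so the ensemble drifts toward whichever previous model matches the current concept.

```python
import numpy as np

def reuse_predict(models, weights, x):
    """Weighted vote of previously trained models (labels in {-1, +1})."""
    return np.sign(np.dot(weights, [m(x) for m in models]))

def update_weights(models, weights, x, y, eta=0.5):
    """Down-weight models that erred on the new stream item (x, y)."""
    errs = np.array([m(x) != y for m in models], dtype=float)
    weights = weights * np.exp(-eta * errs)
    return weights / weights.sum()

# Two hypothetical previously trained models; the stream has drifted to a
# concept that model_b captures and model_a does not.
model_a = lambda x: 1 if x[0] > 0 else -1
model_b = lambda x: 1 if x[1] > 0 else -1
models, weights = [model_a, model_b], np.array([0.5, 0.5])
for x, y in [(np.array([-1.0, 2.0]), 1), (np.array([1.0, 3.0]), 1),
             (np.array([-2.0, 1.0]), 1)]:
    weights = update_weights(models, weights, x, y)
print("weights:", weights)  # mass shifts to model_b
print("prediction:", reuse_predict(models, weights, np.array([-1.0, 1.0])))
```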
  • Communication-efficient distributed multi-task learning with matrix sparsity regularization
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-07
    Qiang Zhou, Yu Chen, Sinno Jialin Pan

    This work focuses on distributed optimization for multi-task learning with matrix sparsity regularization. We propose a fast communication-efficient distributed optimization method for solving the problem. With the proposed method, training data of different tasks can be geo-distributed over different local machines, and the tasks can be learned jointly through the matrix sparsity regularization without

    Updated: 2020-04-22
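
    The snippet above does not pin down the regularizer, so this sketch assumes the common row-sparsity (\(\ell_{2,1}\)) choice of matrix sparsity regularization, whose proximal step is a row-wise soft threshold: feature rows shared weakly across tasks are zeroed out jointly.

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of lam * sum_i ||W_i||_2: row-wise soft threshold."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))

W = np.random.default_rng(0).normal(size=(5, 3))  # 5 features x 3 tasks
print(prox_l21(W, lam=1.0))  # weak rows are zeroed across all tasks at once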
  • Few-shot learning with adaptively initialized task optimizer: a practical meta-learning approach
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-10
    Han-Jia Ye, Xiang-Rong Sheng, De-Chuan Zhan

    Considering the data collection and labeling cost in real-world applications, training a model with limited examples is an essential problem in machine learning, visual recognition, etc. Directly training a model on such few-shot learning (FSL) tasks risks over-fitting, so an effective task-level inductive bias becomes a key form of supervision. By treating the few-shot task as

    Updated: 2020-04-22
  • Skill-based curiosity for intrinsically motivated reinforcement learning
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-10
    Nicolas Bougie, Ryutaro Ichise

    Reinforcement learning methods rely on rewards provided by the environment that are extrinsic to the agent. However, many real-world scenarios involve sparse or delayed rewards. In such cases, the agent can develop its own intrinsic reward function, called curiosity, to enable it to explore its environment in the quest for new skills. We propose a novel end-to-end curiosity mechanism for deep reinforcement

    Updated: 2020-04-22
  • Classification with costly features as a sequential decision-making problem
    Mach. Learn. (IF 2.672) Pub Date : 2020-02-28
    Jaromír Janisch, Tomáš Pevný, Viliam Lisý

    This work focuses on a specific classification problem, where the information about a sample is not readily available, but has to be acquired for a cost, and there is a per-sample budget. Inspired by real-world use-cases, we analyze average and hard variations of a directly specified budget. We postulate the problem in its explicit formulation and then convert it into an equivalent MDP, that can be

    Updated: 2020-04-22
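
    A toy version of the sequential view above, with a fixed greedy acquisition rule instead of the learned MDP policy: features have prices, and the agent reveals the cheapest unobserved features until its per-sample budget runs out, then classifies. All names, costs, and the classifier are hypothetical.

```python
# Hypothetical per-feature acquisition costs.
costs = {"age": 1.0, "blood_test": 5.0, "mri": 20.0}

def classify_with_budget(sample, budget, predict):
    """Reveal the cheapest unobserved features until the budget is spent,
    then classify from whatever was bought."""
    observed = {}
    for name in sorted(costs, key=costs.get):  # cheapest first
        if costs[name] > budget:
            break
        budget -= costs[name]
        observed[name] = sample[name]
    return predict(observed), observed

# Dummy classifier over the acquired features.
predict = lambda obs: int(obs.get("blood_test", 0.0) > 0.5)
sample = {"age": 0.3, "blood_test": 0.9, "mri": 0.1}
print(classify_with_budget(sample, budget=7.0, predict=predict))
```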
  • Joint maximization of accuracy and information for learning the structure of a Bayesian network classifier
    Mach. Learn. (IF 2.672) Pub Date : 2020-02-28
    Dan Halbersberg, Maydan Wienreb, Boaz Lerner

    Although recent studies have shown that a Bayesian network classifier (BNC) that maximizes the classification accuracy (i.e., minimizes the 0/1 loss function) is a powerful tool in both knowledge representation and classification, this classifier: (1) focuses on the majority class and, therefore, misclassifies minority classes; (2) is usually uninformative about the distribution of misclassifications;

    Updated: 2020-04-22
  • Scalable Bayesian preference learning for crowds
    Mach. Learn. (IF 2.672) Pub Date : 2020-02-06
    Edwin Simpson, Iryna Gurevych

    We propose a scalable Bayesian preference learning method for jointly predicting the preferences of individuals as well as the consensus of a crowd from pairwise labels. People's opinions often differ greatly, making it difficult to predict their preferences from small amounts of personal data. Individual biases also make it harder to infer the consensus of a crowd when there are few labels per item

    Updated: 2020-04-22
  • A survey on semi-supervised learning
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-15
    Jesper E. van Engelen, Holger H. Hoos

    Semi-supervised learning is the branch of machine learning concerned with using labelled as well as unlabelled data to perform certain learning tasks. Conceptually situated between supervised and unsupervised learning, it permits harnessing the large amounts of unlabelled data available in many use cases in combination with typically smaller sets of labelled data. In recent years, research in this

    Updated: 2020-04-22
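
    One classic baseline from the family this survey covers is self-training, where a supervised learner iteratively labels its most confident unlabelled points. A minimal sketch with scikit-learn, hiding 90% of the labels (by convention, -1 marks an unlabelled point):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.9] = -1  # hide 90% of the labels

clf = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
print("accuracy against the full labels:", clf.score(X, y))
```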
  • Predictive spreadsheet autocompletion with constraints
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-25
    Samuel Kolb, Stefano Teso, Anton Dries, Luc De Raedt

    Spreadsheets are arguably the most accessible data-analysis tool and are used by millions of people. Despite the fact that they lie at the core of most business practices, working with spreadsheets can be error prone, usage of formulas requires training and, crucially, spreadsheet users do not have access to state-of-the-art analysis techniques offered by machine learning. To tackle these issues, we

    Updated: 2020-04-22
  • Online Bayesian max-margin subspace learning for multi-view classification and regression
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-25
    Jia He, Changying Du, Fuzhen Zhuang, Xin Yin, Qing He, Guoping Long

    Multi-view data have become increasingly popular in many real-world applications where data are generated from different information channels or different views, such as image + text, audio + video, and webpage + link data. Recent decades have witnessed a number of studies devoted to multi-view learning algorithms, especially the predictive latent subspace learning approaches which aim at obtaining a

    Updated: 2020-04-22
  • A bad arm existence checking problem: How to utilize asymmetric problem structure?
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-30
    Koji Tabata, Atsuyoshi Nakamura, Junya Honda, Tamiki Komatsuzaki

    We study a bad arm existence checking problem in a stochastic K-armed bandit setting, in which a player's task is to judge whether a positive arm exists or all the arms are negative among given K arms by drawing as few arms as possible. Here, an arm is positive if the expected loss suffered by drawing it is at least a given threshold \(\theta_U\), and it is negative if that

    Updated: 2020-03-02
  • An evaluation of machine-learning for predicting phenotype: studies in yeast, rice, and wheat
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-23
    Nastasiya F. Grinberg, Oghenejokpeme I. Orhobor, Ross D. King

    In phenotype prediction the physical characteristics of an organism are predicted from knowledge of its genotype and environment. Such studies, often called genome-wide association studies, are of great societal importance, being central to medicine, crop-breeding, etc. We investigated three phenotype prediction problems: one simple and clean (yeast), and the other

    Updated: 2020-03-02
  • Sparse hierarchical regression with polynomials
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-24
    Dimitris Bertsimas, Bart Van Parys

    We present a novel method for sparse polynomial regression. We are interested in the degree-r polynomial that depends on at most k inputs, counts at most \(\ell\) monomial terms, and minimizes the sum of the squares of its prediction errors. Such highly structured sparse regression was denoted by Bach (Advances in neural information processing systems, pp 105–112, 2009) as sparse hierarchical regression

    Updated: 2020-01-24
  • Improving coordination in small-scale multi-agent deep reinforcement learning through memory-driven communication
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-23
    Emanuele Pesce, Giovanni Montana

    Deep reinforcement learning algorithms have recently been used to train multiple interacting agents in a centralised manner whilst keeping their execution decentralised. When the agents can only acquire partial observations and are faced with tasks requiring coordination and synchronisation skills, inter-agent communication plays an essential role. In this work, we propose a framework for

    Updated: 2020-01-23
  • Sum–product graphical models
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-27
    Mattia Desana, Christoph Schnörr

    This paper introduces a probabilistic architecture called sum–product graphical model (SPGM). SPGMs represent a class of probability distributions that combines, for the first time, the semantics of probabilistic graphical models (GMs) with the evaluation efficiency of sum–product networks (SPNs): Like SPNs, SPGMs always enable tractable inference using a class of models that incorporate context specific

    Updated: 2020-01-17
  • Analysis of Hannan consistent selection for Monte Carlo tree search in simultaneous move games
    Mach. Learn. (IF 2.672) Pub Date : 2019-07-25
    Vojtěch Kovařík, Viliam Lisý

    Hannan consistency, or no external regret, is a key concept for learning in games. An action selection algorithm is Hannan consistent (HC) if its performance is eventually as good as selecting the best fixed action in hindsight. If both players in a zero-sum normal form game use a Hannan consistent algorithm, their average behavior converges to a Nash equilibrium of the game. A similar result

    Updated: 2020-01-17
  • Provable accelerated gradient method for nonconvex low rank optimization
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-26
    Huan Li, Zhouchen Lin

    Optimization over low rank matrices has broad applications in machine learning. For large-scale problems, an attractive heuristic is to factorize the low rank matrix into a product of two much smaller matrices. In this paper, we study the nonconvex problem \(\min_{\mathbf{U}\in\mathbb{R}^{n\times r}} g(\mathbf{U})=f(\mathbf{U}\mathbf{U}^T)\) under the assumptions that \(f(\mathbf{X})\) is restricted

    Updated: 2020-01-17
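
    A minimal sketch of the factorization heuristic described above (plain gradient descent, not the paper's accelerated method): minimize \(g(\mathbf{U})=f(\mathbf{U}\mathbf{U}^T)\) directly over the small factor \(\mathbf{U}\), here with \(f\) the squared Frobenius error to an observed rank-r PSD matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 3
A = rng.normal(size=(n, r))
X_true = A @ A.T                        # observed PSD matrix of rank r
U = rng.normal(scale=0.1, size=(n, r))  # small random initialization

step = 0.002
for _ in range(5000):
    grad_f = U @ U.T - X_true           # gradient of f(X) = ||X - X_true||_F^2 / 2
    U -= step * 2.0 * grad_f @ U        # chain rule: grad g(U) = (grad_f + grad_f^T) U
print("recovery error:", np.linalg.norm(U @ U.T - X_true))
```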
  • Rankboost+: an improvement to Rankboost
    Mach. Learn. (IF 2.672) Pub Date : 2019-08-12
    Harold Connamacher, Nikil Pancha, Rui Liu, Soumya Ray

    Rankboost is a well-known algorithm that iteratively creates and aggregates a collection of “weak rankers” to build an effective ranking procedure. Initial work on Rankboost proposed two variants. One variant, which we call Rb-d and which is designed for the scenario where all weak rankers have the binary range \(\{0,1\}\), has good theoretical properties but does not perform well in practice

    Updated: 2020-01-17
  • Combining Bayesian optimization and Lipschitz optimization
    Mach. Learn. (IF 2.672) Pub Date : 2019-08-22
    Mohamed Osama Ahmed, Sharan Vaswani, Mark Schmidt

    Bayesian optimization and Lipschitz optimization are alternative techniques for optimizing black-box functions; each exploits a different form of prior about the function. In this work, we explore strategies to combine these techniques for better global optimization. In particular, we propose ways to use the Lipschitz continuity assumption within traditional BO algorithms, which

    Updated: 2020-01-17
  • Kappa Updated Ensemble for drifting data stream mining
    Mach. Learn. (IF 2.672) Pub Date : 2019-10-02
    Alberto Cano, Bartosz Krawczyk

    Learning from data streams in the presence of concept drift is among the biggest challenges of contemporary machine learning. Algorithms designed for such scenarios must take into account the potentially unbounded size of the data, its constantly changing nature, and the requirement for real-time processing. Ensemble approaches for data stream mining have gained significant popularity, due to their

    Updated: 2020-01-17
  • Conditional density estimation and simulation through optimal transport
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-13
    Esteban G. Tabak, Giulio Trigila, Wenjun Zhao

    A methodology is proposed to estimate from samples the probability density of a random variable x conditional on the values of a set of covariates \(\{z_{l}\}\). The methodology relies on a data-driven formulation of the Wasserstein barycenter, posed as a minimax problem in terms of the conditional map carrying each sample point to the barycenter and a potential characterizing the inverse of this map

    Updated: 2020-01-13
  • High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-07
    Tianbao Yang, Lijun Zhang, Qihang Lin, Shenghuo Zhu, Rong Jin

    Learning from large-scale and high-dimensional data still remains a computationally challenging problem, though it has received increasing interest recently. To address this issue, randomized reduction methods have been developed by either reducing the dimensionality or reducing the number of training instances to obtain a small sketch of the original data. In this paper, we focus on recovering a high-dimensional

    Updated: 2020-01-07
  • Learning higher-order logic programs
    Mach. Learn. (IF 2.672) Pub Date : 2019-12-03
    Andrew Cropper, Rolf Morel, Stephen Muggleton

    A key feature of inductive logic programming is its ability to learn first-order programs, which are intrinsically more expressive than propositional programs. In this paper, we introduce techniques to learn higher-order programs. Specifically, we extend meta-interpretive learning (MIL) to support learning higher-order programs by allowing for higher-order definitions to be used as background

    Updated: 2020-01-04
  • A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-04
    Kshitij Khare, Sang-Yun Oh, Syed Rahman, Bala Rajaratnam

    Covariance estimation for high-dimensional datasets is a fundamental problem in machine learning, and has numerous applications. In these high-dimensional settings the number of features or variables p is typically larger than the sample size n. A popular way of tackling this challenge is to induce sparsity in the covariance matrix, its inverse or a relevant transformation. In many applications

    Updated: 2020-01-04
  • Covariance-based dissimilarity measures applied to clustering wide-sense stationary ergodic processes
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-26
    Qidi Peng, Nan Rao, Ran Zhao

    We introduce a new unsupervised learning problem: clustering wide-sense stationary ergodic stochastic processes. A covariance-based dissimilarity measure together with asymptotically consistent algorithms is designed for clustering offline and online datasets, respectively. We also suggest a formal criterion on the efficiency of dissimilarity measures, and discuss an approach to improve the efficiency

    Updated: 2020-01-04
  • 2D compressed learning: support matrix machine with bilinear random projections
    Mach. Learn. (IF 2.672) Pub Date : 2019-05-23
    Di Ma, Songcan Chen

    Support matrix machine (SMM) is an efficient matrix classification method that can leverage the structure information within the matrix to improve the classification performance. However, its computational and storage costs are still expensive for high-dimensional data. To address these problems, in this paper, we consider a 2D compressed learning paradigm to learn the SMM classifier in some compressed

    Updated: 2020-01-04
  • The kernel Kalman rule
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-18
    Gregor H. W. Gebhardt, Andras Kupcsik, Gerhard Neumann

    Enabling robots to act in unstructured and unknown environments requires versatile state estimation techniques. While traditional state estimation methods require known models and make strong assumptions about the dynamics, such versatile techniques should be able to deal with high dimensional observations and non-linear, unknown system dynamics. The recent framework for nonparametric inference

    Updated: 2020-01-04
  • Speculate-correct error bounds for k-nearest neighbor classifiers
    Mach. Learn. (IF 2.672) Pub Date : 2019-06-18
    Eric Bax, Lingjie Weng, Xu Tian

    We introduce the speculate-correct method to derive error bounds for local classifiers. Using it, we show that k-nearest neighbor classifiers, in spite of their famously fractured decision boundaries, have exponential error bounds with \(O(\sqrt{(k + \ln n)/n})\) range around an estimate of generalization error for n in-sample examples.

    Updated: 2020-01-04
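
    To get a feel for the \(O(\sqrt{(k + \ln n)/n})\) range quoted above, the following evaluates it numerically (constant factor set to 1 purely for illustration):

```python
import math

# Width of the error-bound range around the generalization-error estimate
# for k-nearest neighbors, up to the hidden constant.
k = 5
for n in (10**3, 10**5, 10**7):
    print(f"n={n:>8}: sqrt((k + ln n)/n) = {math.sqrt((k + math.log(n)) / n):.4f}")
```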
  • Logical reduction of metarules
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-20
    Andrew Cropper, Sophie Tourret

    Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use

    Updated: 2020-01-04
  • Inductive general game playing
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-18
    Andrew Cropper, Richard Evans, Mark Law

    General game playing (GGP) is a framework for evaluating an agent’s general intelligence across a wide range of tasks. In the GGP competition, an agent is given the rules of a game (described as a logic program) that it has never seen before. The task is for the agent to play the game, thus generating game traces. The winner of the GGP competition is the agent that gets the best total score over all

    Updated: 2020-01-04
  • Constructing generative logical models for optimisation problems using domain knowledge
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-13
    Ashwin Srinivasan, Lovekesh Vig, Gautam Shroff

    In this paper we seek to identify data instances with a low value of some objective (or cost) function. Normally posed as optimisation problems, our interest is in problems that have the following characteristics: (a) optimal, or even near-optimal solutions are very rare; (b) it is expensive to obtain the value of the objective function for large numbers of data instances; and (c) there is domain knowledge

    Updated: 2020-01-04
  • On some graph-based two-sample tests for high dimension, low sample size data
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-13
    Soham Sarkar, Rahul Biswas, Anil K. Ghosh

    Testing for equality of two high-dimensional distributions is a challenging problem, and this becomes even more challenging when the sample size is small. Over the last few decades, several graph-based two-sample tests have been proposed in the literature, which can be used for data of arbitrary dimensions. Most of these test statistics are computed using pairwise Euclidean distances among

    Updated: 2020-01-04
  • Active deep Q-learning with demonstration
    Mach. Learn. (IF 2.672) Pub Date : 2019-11-08
    Si-An Chen, Voot Tangkaratt, Hsuan-Tien Lin, Masashi Sugiyama

    Reinforcement learning (RL) is a machine learning technique aiming to learn how to take actions in an environment so as to maximize some kind of reward. Recent research has shown that although the learning efficiency of RL can be improved with expert demonstration, it usually takes considerable effort to obtain enough demonstrations. This effort prevents training decent RL agents with expert demonstration

    Updated: 2020-01-04
  • Asymptotically optimal algorithms for budgeted multiple play bandits
    Mach. Learn. (IF 2.672) Pub Date : 2019-05-16
    Alex Luedtke, Emilie Kaufmann, Antoine Chambaz

    We study a generalization of the multi-armed bandit problem with multiple plays where there is a cost associated with pulling each arm and the agent has a budget at each time that dictates how much she can expect to spend. We derive an asymptotic regret lower bound for any uniformly efficient algorithm in our setting. We then study a variant of Thompson sampling for Bernoulli rewards and a

    Updated: 2020-01-04
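
    The budgeted, multiple-play machinery is the paper's contribution; the base sampler it builds on, Thompson sampling for Bernoulli rewards with Beta posteriors, is easy to sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])  # unknown Bernoulli arm means
alpha, beta = np.ones(3), np.ones(3)    # Beta(1, 1) prior per arm

for t in range(2000):
    arm = int(np.argmax(rng.beta(alpha, beta)))  # sample each posterior, play the best
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))  # concentrates on arm 2
```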
  • Model-based kernel sum rule: kernel Bayesian inference with probabilistic models
    Mach. Learn. (IF 2.672) Pub Date : 2020-01-02
    Yu Nishiyama, Motonobu Kanagawa, Arthur Gretton, Kenji Fukumizu

    Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned from data in a nonparametric manner. Various algorithms of kernel Bayesian inference have been developed by combining kernelized basic probabilistic operations such as the kernel sum rule and kernel Bayes’ rule. However, the

    Updated: 2020-01-02
  • Improved graph-based SFA: information preservation complements the slowness principle
    Mach. Learn. (IF 2.672) Pub Date : 2019-12-26
    Alberto N. Escalante-B., Laurenz Wiskott

    Slow feature analysis (SFA) is an unsupervised learning algorithm that extracts slowly varying features from a multi-dimensional time series. SFA has been extended to supervised learning (classification and regression) by an algorithm called graph-based SFA (GSFA). GSFA relies on a particular graph structure to extract features that preserve label similarities. Processing of high dimensional input

    Updated: 2019-12-26
  • On cognitive preferences and the plausibility of rule-based models
    Mach. Learn. (IF 2.672) Pub Date : 2019-12-24
    Johannes Fürnkranz, Tomáš Kliegr, Heiko Paulheim

    It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question this latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly

    Updated: 2019-12-24
  • Distributed block-diagonal approximation methods for regularized empirical risk minimization
    Mach. Learn. (IF 2.672) Pub Date : 2019-12-18
    Ching-pei Lee, Kai-Wei Chang

    In recent years, there is a growing need to train machine learning models on a huge volume of data. Therefore, designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified description

    Updated: 2019-12-18
  • Exploiting causality in gene network reconstruction based on graph embedding
    Mach. Learn. (IF 2.672) Pub Date : 2019-12-03
    Gianvito Pio, Michelangelo Ceci, Francesca Prisciandaro, Donato Malerba

    Gene network reconstruction is a bioinformatics task that aims at modelling the complex regulatory activities that may occur among genes. This task is typically solved by means of link prediction methods that analyze gene expression data. However, the reconstructed networks often suffer from a high amount of false positive edges, which are actually the result of indirect regulation activities due to

    Updated: 2019-12-03
  • A greedy feature selection algorithm for Big Data of high dimensionality
    Mach. Learn. (IF 2.672) Pub Date : 2019-03-25
    Ioannis Tsamardinos, Giorgos Borboudakis, Pavlos Katsogridakis, Polyvios Pratikakis, Vassilis Christophides

    We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for feature selection (FS) for Big Data of high dimensionality. PFBP partitions the data matrix both in terms of rows as well as columns. By employing the concepts of p-values of conditional independence tests and meta-analysis techniques, PFBP relies only on computations local to a partition while minimizing communication costs

    Updated: 2019-11-01
  • Bootstrapping the out-of-sample predictions for efficient and accurate cross-validation
    Mach. Learn. (IF 2.672) Pub Date : 2018-11-06
    Ioannis Tsamardinos, Elissavet Greasidou, Giorgos Borboudakis

    Cross-Validation (CV), and out-of-sample performance-estimation protocols in general, are often employed both for (a) selecting the optimal combination of algorithms and values of hyper-parameters (called a configuration) for producing the final predictive model, and (b) estimating the predictive performance of the final model. However, the cross-validated performance of the best configuration is optimistically

    Updated: 2019-11-01
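
    A simplified sketch of the bootstrap idea described above (the paper's protocol has more to it): pool each configuration's out-of-sample correctness indicators, repeatedly pick the winning configuration on a bootstrap sample of rows, and score it on the left-out rows. The pooled matrix here is synthetic, with every configuration's true accuracy equal to 0.6, so the naive "best configuration" estimate is visibly optimistic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_configs = 200, 20
# Hypothetical pooled out-of-sample results: True = correct prediction;
# every configuration actually has the same 0.6 true accuracy.
correct = rng.random((n, n_configs)) < 0.6

scores = []
for _ in range(500):
    idx = rng.integers(0, n, n)                 # bootstrap sample of rows
    out = np.setdiff1d(np.arange(n), idx)       # out-of-bootstrap rows
    best = correct[idx].mean(axis=0).argmax()   # pick "best" config in-bootstrap
    scores.append(correct[out, best].mean())    # evaluate it out-of-bootstrap
print("naive estimate:", correct.mean(axis=0).max())      # optimistic
print("bias-corrected estimate:", float(np.mean(scores)))  # close to 0.6
```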
  • Preserving differential privacy in convolutional deep belief networks
    Mach. Learn. (IF 2.672) Pub Date : 2017-10-01
    NhatHai Phan, Xintao Wu, Dejing Dou

    The remarkable development of deep learning in the medicine and healthcare domain presents obvious privacy issues when deep neural networks are built on users' personal and highly sensitive data, e.g., clinical records, user profiles, biomedical images, etc. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private

    Updated: 2019-11-01
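
    The paper develops a private convolutional deep belief network; as a generic illustration of gradient perturbation in the same spirit (a DP-SGD-style clip-and-noise step, not the paper's mechanism):

```python
import numpy as np

def private_grad(per_example_grads, clip=1.0, noise_mult=1.0, rng=None):
    """Clip each example gradient to L2 norm `clip`, average, add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    g = np.asarray(per_example_grads, dtype=float)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip)          # per-example clipping
    noise = rng.normal(0.0, noise_mult * clip / len(g), g.shape[1])
    return g.mean(axis=0) + noise

batch = np.random.default_rng(1).normal(size=(32, 10))  # 32 per-example gradients
print(private_grad(batch)[:3])
```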
Contents have been reproduced by permission of the publishers.