Current journal: Machine Learning
  • Learning from positive and unlabeled data: a survey
    Mach. Learn. (IF 2.809) Pub Date : 2020-04-02
    Jessa Bekker, Jesse Davis

    Abstract Learning from positive and unlabeled data, or PU learning, is the setting where a learner only has access to positive examples and unlabeled data. The assumption is that the unlabeled data can contain both positive and negative examples. This setting has attracted increasing interest within the machine learning literature as this type of data naturally arises in applications such as medical

    Updated: 2020-04-03
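
    To make the PU setting concrete, here is a minimal sketch (not drawn from the survey) of the classic non-traditional-classifier idea: a model trained to separate labeled positives from unlabeled examples is rescaled by an estimated label frequency c = P(labeled | positive). The synthetic dataset and the 30% labeling rate below are illustrative assumptions.

    ```python
    # Hedged sketch of PU learning via a labeled-vs-unlabeled classifier;
    # the dataset and the 30% labeling rate are made up for illustration.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=5000, weights=[0.7], random_state=0)
    rng = np.random.default_rng(0)
    s = ((y == 1) & (rng.random(len(y)) < 0.3)).astype(int)  # only ~30% of positives carry a label

    g = LogisticRegression(max_iter=1000).fit(X, s)          # models P(s=1 | x)
    c = g.predict_proba(X[s == 1])[:, 1].mean()              # crude estimate of c = P(s=1 | y=1)
    p_pos = np.clip(g.predict_proba(X)[:, 1] / c, 0.0, 1.0)  # P(y=1 | x) ≈ P(s=1 | x) / c
    print("estimated positive rate:", round(float(p_pos.mean()), 3), "true:", round(float(y.mean()), 3))
    ```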
  • Classification using proximity catch digraphs
    Mach. Learn. (IF 2.809) Pub Date : 2020-03-31
    Artür Manukyan, Elvan Ceyhan

    Abstract We employ random geometric digraphs to construct semi-parametric classifiers. These data-random digraphs belong to parameterized random digraph families called proximity catch digraphs (PCDs). A related geometric digraph family, class cover catch digraph (CCCD), has been used to solve the class cover problem by using its approximate minimum dominating set and showed relatively good performance

    Updated: 2020-04-01
  • Discovering subjectively interesting multigraph patterns
    Mach. Learn. (IF 2.809) Pub Date : 2020-03-16
    Sarang Kapoor, Dhish Kumar Saxena, Matthijs van Leeuwen

    Abstract Over the past decade, network analysis has attracted substantial interest because of its potential to solve many real-world problems. This paper lays the conceptual foundation for an application in aviation, through focusing on the discovery of patterns in multigraphs (graphs in which multiple edges can be present between vertices). Our main contributions are twofold. Firstly, we propose a

    Updated: 2020-03-19
  • Detecting anomalous packets in network transfers: investigations using PCA, autoencoder and isolation forest in TCP
    Mach. Learn. (IF 2.809) Pub Date : 2020-03-12
    Mariam Kiran, Cong Wang, George Papadimitriou, Anirban Mandal, Ewa Deelman

    Abstract Large-scale scientific workflows rely heavily on high-performance file transfers. These transfers require strict quality parameters such as guaranteed bandwidth, no packet loss, and no data duplication. To achieve successful file transfers, methods such as predetermined thresholds and statistical analysis are needed to detect abnormal patterns. Network administrators routinely monitor and

    Updated: 2020-03-12
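
    As a toy illustration of one of the detectors named in the title (not the authors' pipeline), an isolation forest can flag unusual transfer records; the two features and their distributions below are invented for the example.

    ```python
    # Hedged sketch: isolation forest on synthetic transfer statistics.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    normal = rng.normal(loc=[100.0, 0.0], scale=[10.0, 0.5], size=(1000, 2))  # [throughput, retransmits]
    anomalous = rng.normal(loc=[20.0, 8.0], scale=[5.0, 1.0], size=(20, 2))
    X = np.vstack([normal, anomalous])

    clf = IsolationForest(contamination=0.02, random_state=0).fit(X)
    flags = clf.predict(X)                        # -1 marks suspected anomalies
    print("flagged:", int((flags == -1).sum()), "of", len(X))
    ```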
  • Gradient descent optimizes over-parameterized deep ReLU networks
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-23
    Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu

    Abstract We study the problem of training deep fully connected neural networks with Rectified Linear Unit (ReLU) activation function and cross entropy loss function for binary classification using gradient descent. We show that with proper random weight initialization, gradient descent can find the global minima of the training loss for an over-parameterized deep ReLU network, under certain assumption

    Updated: 2020-03-12
  • Multi-label optimal margin distribution machine
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-10
    Zhi-Hao Tan, Peng Tan, Yuan Jiang, Zhi-Hua Zhou

    Abstract Multi-label support vector machine (Rank-SVM) is a classic and effective algorithm for multi-label classification. The pivotal idea is to maximize the minimum margin of label pairs, which is extended from SVM. However, recent studies disclosed that maximizing the minimum margin does not necessarily lead to better generalization performance, and instead, it is more crucial to optimize the margin

    Updated: 2020-03-12
  • Joint consensus and diversity for multi-view semi-supervised classification
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-07
    Wenzhang Zhuge, Chenping Hou, Shaoliang Peng, Dongyun Yi

    Abstract As data can be acquired in an ever-increasing number of ways, multi-view data is becoming more and more available. Considering the high price of labeling data in many machine learning applications, we focus on multi-view semi-supervised classification problem. To address this problem, in this paper, we propose a method called joint consensus and diversity for multi-view semi-supervised classification

    Updated: 2020-03-12
  • Handling concept drift via model reuse
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-10
    Peng Zhao, Le-Wen Cai, Zhi-Hua Zhou

    Abstract In many real-world applications, data are often collected in the form of a stream, and thus the distribution usually changes in nature, which is referred to as concept drift in the literature. We propose a novel and effective approach to handle concept drift via model reuse, that is, reusing models trained on previous data to tackle the changes. Each model is associated with a weight representing

    Updated: 2020-03-12
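
    The reuse-with-weights idea can be pictured with a generic weighted-experts scheme (my sketch, not the paper's algorithm): keep models fitted on earlier chunks of the stream and multiplicatively down-weight those that err on the current chunk. The drifting stream below is synthetic.

    ```python
    # Hedged sketch of model reuse under concept drift with multiplicative weights.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def drifting_stream(n_chunks=6, chunk=300, seed=0):
        rng = np.random.default_rng(seed)
        for i in range(n_chunks):
            w = np.array([1.0, -1.0]) if i < 3 else np.array([-1.0, 1.0])  # drift halfway
            X = rng.normal(size=(chunk, 2))
            yield X, (X @ w > 0).astype(int)

    models, weights, eta = [], [], 2.0
    for X, y in drifting_stream():
        if models:
            votes = np.array([m.predict(X) for m in models])
            pred = (np.average(votes, axis=0, weights=weights) > 0.5).astype(int)
            print("chunk accuracy:", round(float((pred == y).mean()), 3))
            errs = np.array([(m.predict(X) != y).mean() for m in models])
            weights = list(np.array(weights) * np.exp(-eta * errs))      # penalize stale models
        models.append(SGDClassifier(random_state=0).fit(X, y))           # keep every past model
        weights.append(max(weights) if weights else 1.0)
    ```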
  • Communication-efficient distributed multi-task learning with matrix sparsity regularization
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-07
    Qiang Zhou, Yu Chen, Sinno Jialin Pan

    Abstract This work focuses on distributed optimization for multi-task learning with matrix sparsity regularization. We propose a fast communication-efficient distributed optimization method for solving the problem. With the proposed method, training data of different tasks can be geo-distributed over different local machines, and the tasks can be learned jointly through the matrix sparsity regularization

    Updated: 2020-03-12
  • Few-shot learning with adaptively initialized task optimizer: a practical meta-learning approach
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-10
    Han-Jia Ye, Xiang-Rong Sheng, De-Chuan Zhan

    Abstract Considering the data collection and labeling cost in real-world applications, training a model with limited examples is an essential problem in machine learning, visual recognition, etc. Directly training a model on such few-shot learning (FSL) tasks falls into the over-fitting dilemma, which calls for an effective task-level inductive bias as a key form of supervision. By treating the few-shot

    Updated: 2020-03-12
  • Skill-based curiosity for intrinsically motivated reinforcement learning
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-10
    Nicolas Bougie, Ryutaro Ichise

    Abstract Reinforcement learning methods rely on rewards provided by the environment that are extrinsic to the agent. However, many real-world scenarios involve sparse or delayed rewards. In such cases, the agent can develop its own intrinsic reward function, called curiosity, to enable it to explore its environment in the quest for new skills. We propose a novel end-to-end curiosity mechanism for

    Updated: 2020-03-12
  • Classification with costly features as a sequential decision-making problem
    Mach. Learn. (IF 2.809) Pub Date : 2020-02-28
    Jaromír Janisch, Tomáš Pevný, Viliam Lisý

    Abstract This work focuses on a specific classification problem, where the information about a sample is not readily available, but has to be acquired for a cost, and there is a per-sample budget. Inspired by real-world use-cases, we analyze average and hard variations of a directly specified budget. We postulate the problem in its explicit formulation and then convert it into an equivalent MDP, that

    Updated: 2020-03-02
  • Joint maximization of accuracy and information for learning the structure of a Bayesian network classifier
    Mach. Learn. (IF 2.809) Pub Date : 2020-02-28
    Dan Halbersberg, Maydan Wienreb, Boaz Lerner

    Abstract Although recent studies have shown that a Bayesian network classifier (BNC) that maximizes the classification accuracy (i.e., minimizes the 0/1 loss function) is a powerful tool in both knowledge representation and classification, this classifier: (1) focuses on the majority class and, therefore, misclassifies minority classes; (2) is usually uninformative about the distribution of misclassifications;

    Updated: 2020-03-02
  • Predictive spreadsheet autocompletion with constraints
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-25
    Samuel Kolb, Stefano Teso, Anton Dries, Luc De Raedt

    Abstract Spreadsheets are arguably the most accessible data-analysis tool and are used by millions of people. Despite the fact that they lie at the core of most business practices, working with spreadsheets can be error prone, usage of formulas requires training and, crucially, spreadsheet users do not have access to state-of-the-art analysis techniques offered by machine learning. To tackle these

    Updated: 2020-03-02
  • Online Bayesian max-margin subspace learning for multi-view classification and regression
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-25
    Jia He, Changying Du, Fuzhen Zhuang, Xin Yin, Qing He, Guoping Long

    Abstract Multi-view data have become increasingly popular in many real-world applications where data are generated from different information channels or different views such as image + text, audio + video, and webpage + link data. Recent decades have witnessed a number of studies devoted to multi-view learning algorithms, especially the predictive latent subspace learning approaches, which aim at obtaining

    Updated: 2020-03-02
  • A bad arm existence checking problem: How to utilize asymmetric problem structure?
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-30
    Koji Tabata, Atsuyoshi Nakamura, Junya Honda, Tamiki Komatsuzaki

    Abstract We study a bad arm existence checking problem in a stochastic K-armed bandit setting, in which a player’s task is to judge whether a positive arm exists or all the arms are negative among the given K arms, using as few arm draws as possible. Here, an arm is positive if the expected loss suffered by drawing it is at least a given threshold \(\theta _U\), and it is negative if that

    Updated: 2020-03-02
  • An evaluation of machine-learning for predicting phenotype: studies in yeast, rice, and wheat
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-23
    Nastasiya F. Grinberg, Oghenejokpeme I. Orhobor, Ross D. King

    Abstract In phenotype prediction the physical characteristics of an organism are predicted from knowledge of its genotype and environment. Such studies, often called genome-wide association studies, are of the highest societal importance, as they are of central importance to medicine, crop-breeding, etc. We investigated three phenotype prediction problems: one simple and clean (yeast), and the other

    Updated: 2020-03-02
  • Scalable Bayesian preference learning for crowds
    Mach. Learn. (IF 2.809) Pub Date : 2020-02-06
    Edwin Simpson, Iryna Gurevych

    Abstract We propose a scalable Bayesian preference learning method for jointly predicting the preferences of individuals as well as the consensus of a crowd from pairwise labels. People’s opinions often differ greatly, making it difficult to predict their preferences from small amounts of personal data. Individual biases also make it harder to infer the consensus of a crowd when there are few labels

    Updated: 2020-02-07
  • Sparse hierarchical regression with polynomials
    Mach. Learn. (IF 2.809) Pub Date : 2020-01-24
    Dimitris Bertsimas, Bart Van Parys

    Abstract We present a novel method for sparse polynomial regression. We are interested in that degree r polynomial which depends on at most k inputs, counting at most \(\ell\) monomial terms, and minimizes the sum of the squares of its prediction errors. Such highly structured sparse regression was denoted by Bach (Advances in neural information processing systems, pp 105–112, 2009) as sparse hierarchical

    Updated: 2020-01-26
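
    Read literally, the objective described above can be transcribed (my notation, not the paper's) as selecting a degree-\(r\) polynomial \(p\) that uses at most \(k\) of the inputs and counts at most \(\ell\) monomial terms:

    \[
    \min_{p}\; \sum_{i=1}^{n} \bigl(y_i - p(x_i)\bigr)^2
    \quad \text{s.t.} \quad
    \deg(p) \le r,\;\;
    |\mathrm{vars}(p)| \le k,\;\;
    |\mathrm{monomials}(p)| \le \ell .
    \]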
  • Improving coordination in small-scale multi-agent deep reinforcement learning through memory-driven communication
    Mach. Learn. (IF 2.809) Pub Date : 2020-01-23
    Emanuele Pesce, Giovanni Montana

    Abstract Deep reinforcement learning algorithms have recently been used to train multiple interacting agents in a centralised manner whilst keeping their execution decentralised. When the agents can only acquire partial observations and are faced with tasks requiring coordination and synchronisation skills, inter-agent communication plays an essential role. In this work, we propose a framework for

    Updated: 2020-01-23
  • Sum–product graphical models
    Mach. Learn. (IF 2.809) Pub Date : 2019-06-27
    Mattia Desana, Christoph Schnörr

    This paper introduces a probabilistic architecture called sum–product graphical model (SPGM). SPGMs represent a class of probability distributions that combines, for the first time, the semantics of probabilistic graphical models (GMs) with the evaluation efficiency of sum–product networks (SPNs): Like SPNs, SPGMs always enable tractable inference using a class of models that incorporate context specific

    Updated: 2020-01-17
  • Analysis of Hannan consistent selection for Monte Carlo tree search in simultaneous move games
    Mach. Learn. (IF 2.809) Pub Date : 2019-07-25
    Vojtěch Kovařík, Viliam Lisý

    Abstract Hannan consistency, or no external regret, is a key concept for learning in games. An action selection algorithm is Hannan consistent (HC) if its performance is eventually as good as selecting the best fixed action in hindsight. If both players in a zero-sum normal form game use a Hannan consistent algorithm, their average behavior converges to a Nash equilibrium of the game. A similar result

    Updated: 2020-01-17
  • Provable accelerated gradient method for nonconvex low rank optimization
    Mach. Learn. (IF 2.809) Pub Date : 2019-06-26
    Huan Li, Zhouchen Lin

    Optimization over low rank matrices has broad applications in machine learning. For large-scale problems, an attractive heuristic is to factorize the low rank matrix to a product of two much smaller matrices. In this paper, we study the nonconvex problem \(\min _{\mathbf {U}\in \mathbb {R}^{n\times r}} g(\mathbf {U})=f(\mathbf {U}\mathbf {U}^T)\) under the assumptions that \(f(\mathbf {X})\) is restricted

    Updated: 2020-01-17
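
    The factorization heuristic in the abstract (optimize \(g(\mathbf {U})=f(\mathbf {U}\mathbf {U}^T)\) directly over the small factor) can be sketched with plain gradient descent for the toy choice \(f(X)=\tfrac{1}{2}\Vert X-M\Vert _F^2\); this is a generic illustration, not the paper's accelerated method.

    ```python
    # Hedged sketch: gradient descent on U for f(U U^T), with f(X) = 0.5 * ||X - M||_F^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 50, 3
    A = rng.normal(size=(n, r))
    M = A @ A.T                                   # ground-truth rank-r PSD matrix

    U = rng.normal(scale=0.1, size=(n, r))
    lr = 1e-3
    for _ in range(2000):
        R = U @ U.T - M                           # gradient of f at X = U U^T
        U -= lr * (2 * R @ U)                     # chain rule: grad_U f(U U^T) = (R + R^T) U = 2 R U
    print("relative error:", float(np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)))
    ```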
  • Rankboost+: an improvement to Rankboost
    Mach. Learn. (IF 2.809) Pub Date : 2019-08-12
    Harold Connamacher, Nikil Pancha, Rui Liu, Soumya Ray

    Abstract Rankboost is a well-known algorithm that iteratively creates and aggregates a collection of “weak rankers” to build an effective ranking procedure. Initial work on Rankboost proposed two variants. One variant, that we call Rb-d and which is designed for the scenario where all weak rankers have the binary range \(\{0,1\}\), has good theoretical properties, but does not perform well in practice

    Updated: 2020-01-17
  • Combining Bayesian optimization and Lipschitz optimization
    Mach. Learn. (IF 2.809) Pub Date : 2019-08-22
    Mohamed Osama Ahmed, Sharan Vaswani, Mark Schmidt

    Abstract Bayesian optimization and Lipschitz optimization have developed alternative techniques for optimizing black-box functions. They each exploit a different form of prior about the function. In this work, we explore strategies to combine these techniques for better global optimization. In particular, we propose ways to use the Lipschitz continuity assumption within traditional BO algorithms, which

    Updated: 2020-01-17
  • Kappa Updated Ensemble for drifting data stream mining
    Mach. Learn. (IF 2.809) Pub Date : 2019-10-02
    Alberto Cano, Bartosz Krawczyk

    Learning from data streams in the presence of concept drift is among the biggest challenges of contemporary machine learning. Algorithms designed for such scenarios must take into account the potentially unbounded size of the data, its constantly changing nature, and the requirement for real-time processing. Ensemble approaches for data stream mining have gained significant popularity, due to their

    Updated: 2020-01-17
  • Conditional density estimation and simulation through optimal transport
    Mach. Learn. (IF 2.809) Pub Date : 2020-01-13
    Esteban G. Tabak, Giulio Trigila, Wenjun Zhao

    Abstract A methodology to estimate from samples the probability density of a random variable x conditional to the values of a set of covariates \(\{z_{l}\}\) is proposed. The methodology relies on a data-driven formulation of the Wasserstein barycenter, posed as a minimax problem in terms of the conditional map carrying each sample point to the barycenter and a potential characterizing the inverse

    Updated: 2020-01-14
  • Model-based kernel sum rule: kernel Bayesian inference with probabilistic models
    Mach. Learn. (IF 2.809) Pub Date : 2020-01-02
    Yu Nishiyama, Motonobu Kanagawa, Arthur Gretton, Kenji Fukumizu

    Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned from data in a nonparametric manner. Various algorithms of kernel Bayesian inference have been developed by combining kernelized basic probabilistic operations such as the kernel sum rule and kernel Bayes’ rule. However, the

    Updated: 2020-01-04
  • Improved graph-based SFA: information preservation complements the slowness principle
    Mach. Learn. (IF 2.809) Pub Date : 2019-12-26
    Alberto N. Escalante-B., Laurenz Wiskott

    Slow feature analysis (SFA) is an unsupervised learning algorithm that extracts slowly varying features from a multi-dimensional time series. SFA has been extended to supervised learning (classification and regression) by an algorithm called graph-based SFA (GSFA). GSFA relies on a particular graph structure to extract features that preserve label similarities. Processing of high dimensional input

    Updated: 2020-01-04
  • On cognitive preferences and the plausibility of rule-based models
    Mach. Learn. (IF 2.809) Pub Date : 2019-12-24
    Johannes Fürnkranz, Tomáš Kliegr, Heiko Paulheim

    It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question this latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly

    Updated: 2020-01-04
  • Distributed block-diagonal approximation methods for regularized empirical risk minimization
    Mach. Learn. (IF 2.809) Pub Date : 2019-12-18
    Ching-pei Lee, Kai-Wei Chang

    Abstract In recent years, there is a growing need to train machine learning models on a huge volume of data. Therefore, designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified

    Updated: 2020-01-04
  • Learning higher-order logic programs
    Mach. Learn. (IF 2.809) Pub Date : 2019-12-03
    Andrew Cropper, Rolf Morel, Stephen Muggleton

    Abstract A key feature of inductive logic programming is its ability to learn first-order programs, which are intrinsically more expressive than propositional programs. In this paper, we introduce techniques to learn higher-order programs. Specifically, we extend meta-interpretive learning (MIL) to support learning higher-order programs by allowing for higher-order definitions to be used as background

    Updated: 2020-01-04
  • Exploiting causality in gene network reconstruction based on graph embedding
    Mach. Learn. (IF 2.809) Pub Date : 2019-12-03
    Gianvito Pio, Michelangelo Ceci, Francesca Prisciandaro, Donato Malerba

    Gene network reconstruction is a bioinformatics task that aims at modelling the complex regulatory activities that may occur among genes. This task is typically solved by means of link prediction methods that analyze gene expression data. However, the reconstructed networks often suffer from a high amount of false positive edges, which are actually the result of indirect regulation activities due to

    Updated: 2020-01-04
  • A scalable sparse Cholesky based approach for learning high-dimensional covariance matrices in ordered data
    Mach. Learn. (IF 2.809) Pub Date : 2019-06-04
    Kshitij Khare, Sang-Yun Oh, Syed Rahman, Bala Rajaratnam

    Abstract Covariance estimation for high-dimensional datasets is a fundamental problem in machine learning, and has numerous applications. In these high-dimensional settings the number of features or variables p is typically larger than the sample size n. A popular way of tackling this challenge is to induce sparsity in the covariance matrix, its inverse or a relevant transformation. In many applications

    Updated: 2020-01-04
  • Covariance-based dissimilarity measures applied to clustering wide-sense stationary ergodic processes
    Mach. Learn. (IF 2.809) Pub Date : 2019-06-26
    Qidi Peng, Nan Rao, Ran Zhao

    We introduce a new unsupervised learning problem: clustering wide-sense stationary ergodic stochastic processes. A covariance-based dissimilarity measure together with asymptotically consistent algorithms is designed for clustering offline and online datasets, respectively. We also suggest a formal criterion on the efficiency of dissimilarity measures, and discuss an approach to improve the efficiency

    Updated: 2020-01-04
  • 2D compressed learning: support matrix machine with bilinear random projections
    Mach. Learn. (IF 2.809) Pub Date : 2019-05-23
    Di Ma, Songcan Chen

    Support matrix machine (SMM) is an efficient matrix classification method that can leverage the structure information within the matrix to improve the classification performance. However, its computational and storage costs are still expensive for high-dimensional data. To address these problems, in this paper, we consider a 2D compressed learning paradigm to learn the SMM classifier in some compressed

    Updated: 2020-01-04
  • The kernel Kalman rule
    Mach. Learn. (IF 2.809) Pub Date : 2019-06-18
    Gregor H. W. Gebhardt, Andras Kupcsik, Gerhard Neumann

    Abstract Enabling robots to act in unstructured and unknown environments requires versatile state estimation techniques. While traditional state estimation methods require known models and make strong assumptions about the dynamics, such versatile techniques should be able to deal with high dimensional observations and non-linear, unknown system dynamics. The recent framework for nonparametric inference

    Updated: 2020-01-04
  • Speculate-correct error bounds for k-nearest neighbor classifiers
    Mach. Learn. (IF 2.809) Pub Date : 2019-06-18
    Eric Bax, Lingjie Weng, Xu Tian

    Abstract We introduce the speculate-correct method to derive error bounds for local classifiers. Using it, we show that k-nearest neighbor classifiers, in spite of their famously fractured decision boundaries, have exponential error bounds with \(\hbox {O} \left( \sqrt{(k + \ln n)/n} \right) \) range around an estimate of generalization error for n in-sample examples.

    Updated: 2020-01-04
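
    For a feel of the quoted \(\hbox {O} \left( \sqrt{(k + \ln n)/n} \right) \) range (constants omitted, so this only shows how it scales with n):

    ```python
    # How the bound range shrinks with the sample size n (constants dropped).
    import math

    k = 5
    for n in (1_000, 10_000, 100_000):
        print(n, round(math.sqrt((k + math.log(n)) / n), 4))
    ```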
  • Logical reduction of metarules
    Mach. Learn. (IF 2.809) Pub Date : 2019-11-20
    Andrew Cropper, Sophie Tourret

    Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use

    Updated: 2020-01-04
  • Inductive general game playing
    Mach. Learn. (IF 2.809) Pub Date : 2019-11-18
    Andrew Cropper, Richard Evans, Mark Law

    General game playing (GGP) is a framework for evaluating an agent’s general intelligence across a wide range of tasks. In the GGP competition, an agent is given the rules of a game (described as a logic program) that it has never seen before. The task is for the agent to play the game, thus generating game traces. The winner of the GGP competition is the agent that gets the best total score over all

    Updated: 2020-01-04
  • A survey on semi-supervised learning
    Mach. Learn. (IF 2.809) Pub Date : 2019-11-15
    Jesper E. van Engelen, Holger H. Hoos

    Abstract Semi-supervised learning is the branch of machine learning concerned with using labelled as well as unlabelled data to perform certain learning tasks. Conceptually situated between supervised and unsupervised learning, it permits harnessing the large amounts of unlabelled data available in many use cases in combination with typically smaller sets of labelled data. In recent years, research

    Updated: 2020-01-04
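
    As a small, generic illustration of the setting the survey covers (not an example taken from the survey itself), self-training wraps a base classifier and pseudo-labels the unlabelled points it is confident about:

    ```python
    # Hedged sketch: self-training on mostly-unlabelled data (-1 marks unlabelled).
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
    rng = np.random.default_rng(0)
    y_semi = y.copy()
    y_semi[rng.random(len(y)) < 0.95] = -1        # hide 95% of the labels

    model = SelfTrainingClassifier(SVC(probability=True, gamma="scale")).fit(X, y_semi)
    print("accuracy on all points:", round(float(model.score(X, y)), 3))
    ```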
  • Constructing generative logical models for optimisation problems using domain knowledge
    Mach. Learn. (IF 2.809) Pub Date : 2019-11-13
    Ashwin Srinivasan, Lovekesh Vig, Gautam Shroff

    In this paper we seek to identify data instances with a low value of some objective (or cost) function. Normally posed as optimisation problems, our interest is in problems that have the following characteristics: (a) optimal, or even near-optimal solutions are very rare; (b) it is expensive to obtain the value of the objective function for large numbers of data instances; and (c) there is domain knowledge

    Updated: 2020-01-04
  • On some graph-based two-sample tests for high dimension, low sample size data
    Mach. Learn. (IF 2.809) Pub Date : 2019-11-13
    Soham Sarkar, Rahul Biswas, Anil K. Ghosh

    Abstract Testing for equality of two high-dimensional distributions is a challenging problem, and this becomes even more challenging when the sample size is small. Over the last few decades, several graph-based two-sample tests have been proposed in the literature, which can be used for data of arbitrary dimensions. Most of these test statistics are computed using pairwise Euclidean distances among

    Updated: 2020-01-04
  • Active deep Q-learning with demonstration
    Mach. Learn. (IF 2.809) Pub Date : 2019-11-08
    Si-An Chen, Voot Tangkaratt, Hsuan-Tien Lin, Masashi Sugiyama

    Reinforcement learning (RL) is a machine learning technique aiming to learn how to take actions in an environment to maximize some kind of reward. Recent research has shown that although the learning efficiency of RL can be improved with expert demonstration, it usually takes considerable effort to obtain enough demonstrations. This effort prevents training decent RL agents with expert demonstration

    Updated: 2020-01-04
  • Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric
    Mach. Learn. (IF 2.809) Pub Date : 2019-11-04
    Yongchan Kwon, Wonyoung Kim, Masashi Sugiyama, Myunghee Cho Paik

    We consider the problem of learning a binary classifier from only positive and unlabeled observations (called PU learning). Recent studies in PU learning have shown superior performance theoretically and empirically. However, most existing algorithms may not be suitable for large-scale datasets because they face repeated computations of a large Gram matrix or require massive hyperparameter optimization

    Updated: 2020-01-04
  • Rank minimization on tensor ring: an efficient approach for tensor decomposition and completion
    Mach. Learn. (IF 2.809) Pub Date : 2019-11-04
    Longhao Yuan, Chao Li, Jianting Cao, Qibin Zhao

    In recent studies, tensor ring decomposition (TRD) has become a promising model for tensor completion. However, TRD suffers from the rank selection problem due to the undetermined multilinear rank. For tensor decomposition with missing entries, the sub-optimal rank selection of traditional methods leads to the overfitting/underfitting problem. In this paper, we first explore the latent space of the

    Updated: 2020-01-04
  • Asymptotically optimal algorithms for budgeted multiple play bandits
    Mach. Learn. (IF 2.809) Pub Date : 2019-05-16
    Alex Luedtke, Emilie Kaufmann, Antoine Chambaz

    Abstract We study a generalization of the multi-armed bandit problem with multiple plays where there is a cost associated with pulling each arm and the agent has a budget at each time that dictates how much she can expect to spend. We derive an asymptotic regret lower bound for any uniformly efficient algorithm in our setting. We then study a variant of Thompson sampling for Bernoulli rewards and a

    Updated: 2020-01-04
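
    The Thompson-sampling ingredient mentioned in the abstract is easy to picture in its plain, unbudgeted Bernoulli form (this sketch ignores the per-pull costs and budget that the paper actually studies):

    ```python
    # Hedged sketch: textbook Thompson sampling for Bernoulli arms (no budget or costs).
    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.5, 0.65])
    alpha, beta = np.ones(3), np.ones(3)          # Beta(1, 1) priors

    for _ in range(5000):
        theta = rng.beta(alpha, beta)             # one posterior sample per arm
        arm = int(np.argmax(theta))
        reward = rng.random() < true_means[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward

    print("pulls per arm:", (alpha + beta - 2).astype(int))
    print("posterior means:", np.round(alpha / (alpha + beta), 3))
    ```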
  • A greedy feature selection algorithm for Big Data of high dimensionality.
    Mach. Learn. (IF 2.809) Pub Date : 2019-03-25
    Ioannis Tsamardinos, Giorgos Borboudakis, Pavlos Katsogridakis, Polyvios Pratikakis, Vassilis Christophides

    We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for feature selection (FS) for Big Data of high dimensionality. PFBP partitions the data matrix both in terms of rows as well as columns. By employing the concepts of p-values of conditional independence tests and meta-analysis techniques, PFBP relies only on computations local to a partition while minimizing communication costs

    Updated: 2019-11-01
  • Bootstrapping the out-of-sample predictions for efficient and accurate cross-validation.
    Mach. Learn. (IF 2.809) Pub Date : 2018-11-06
    Ioannis Tsamardinos, Elissavet Greasidou, Giorgos Borboudakis

    Cross-Validation (CV), and out-of-sample performance-estimation protocols in general, are often employed both for (a) selecting the optimal combination of algorithms and values of hyper-parameters (called a configuration) for producing the final predictive model, and (b) estimating the predictive performance of the final model. However, the cross-validated performance of the best configuration is optimistically

    Updated: 2019-11-01
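
    The optimism the abstract refers to is easy to reproduce: on pure-noise labels, the best of many cross-validated configurations scores above its average. The toy setup below only demonstrates the problem, not the paper's bootstrap-based correction.

    ```python
    # Hedged sketch: selection optimism when picking the best configuration by CV.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 20))
    y = rng.integers(0, 2, size=60)               # labels carry no signal, so true accuracy is 0.5

    scores = [cross_val_score(LogisticRegression(C=c, max_iter=2000), X, y, cv=5).mean()
              for c in np.logspace(-3, 3, 30)]
    print("mean CV accuracy:", round(float(np.mean(scores)), 3))
    print("best CV accuracy:", round(float(max(scores)), 3))   # biased upward by the selection
    ```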
  • Boosted Multivariate Trees for Longitudinal Data.
    Mach. Learn. (IF 2.809) Pub Date : 2017-12-19
    Amol Pande, Liang Li, Jeevanantham Rajeswaran, John Ehrlinger, Udaya B Kogalur, Eugene H Blackstone, Hemant Ishwaran

    Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically utilizing P-splines

    Updated: 2019-11-01
  • Preserving differential privacy in convolutional deep belief networks.
    Mach. Learn. (IF 2.809) Pub Date : 2017-10-01
    NhatHai Phan, Xintao Wu, Dejing Dou

    The remarkable development of deep learning in medicine and healthcare domain presents obvious privacy issues, when deep neural networks are built on users' personal and highly sensitive data, e.g., clinical records, user profiles, biomedical images, etc. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private

    Updated: 2019-11-01
  • Learning Classification Models of Cognitive Conditions from Subtle Behaviors in the Digital Clock Drawing Test.
    Mach. Learn. (IF 2.809) Pub Date : 2016-04-09
    William Souillard-Mandar, Randall Davis, Cynthia Rudin, Rhoda Au, David J Libon, Rodney Swenson, Catherine C Price, Melissa Lamar, Dana L Penney

    The Clock Drawing Test - a simple pencil and paper test - has been used for more than 50 years as a screening tool to differentiate normal individuals from those with cognitive impairment, and has proven useful in helping to diagnose cognitive dysfunction associated with neurological disorders such as Alzheimer's disease, Parkinson's disease, and other dementias and conditions. We have been administering

    Updated: 2019-11-01
  • The Effect of Splitting on Random Forests.
    Mach. Learn. (IF 2.809) Pub Date : 2015-04-01
    Hemant Ishwaran

    The effect of a splitting rule on random forests (RF) is systematically studied for regression and classification problems. A class of weighted splitting rules, which includes as special cases CART weighted variance splitting and Gini index splitting, are studied in detail and shown to possess a unique adaptive property to signal and noise. We show for noisy variables that weighted splitting favors

    Updated: 2019-11-01
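
    The effect of the splitting rule can be probed with an off-the-shelf forest, which exposes only a couple of criteria; this is a generic illustration on synthetic data, not the paper's weighted-splitting analysis.

    ```python
    # Hedged sketch: comparing Gini and entropy splitting in a standard random forest.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                               random_state=0)
    for criterion in ("gini", "entropy"):
        acc = cross_val_score(RandomForestClassifier(criterion=criterion, random_state=0),
                              X, y, cv=5).mean()
        print(criterion, round(float(acc), 3))
    ```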
  • A unified statistical approach to non-negative matrix factorization and probabilistic latent semantic indexing.
    Mach. Learn. (IF 2.809) Pub Date : 2015-03-31
    Karthik Devarajan, Guoli Wang, Nader Ebrahimi

    Non-negative matrix factorization (NMF) is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into the product of two nonnegative matrices, W and H, such that V ∼ W H. It has been shown to have a parts-based, sparse representation of the data. NMF has been successfully applied in a variety of areas such as natural language processing, neuroscience, information

    Updated: 2019-11-01
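
    The factorization itself (V ≈ W H with nonnegative factors) is one line with an off-the-shelf solver; the example below is generic and does not implement the unified statistical approach the paper develops.

    ```python
    # Hedged sketch: generic NMF of a random nonnegative matrix V into W and H.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    V = rng.random((100, 40))                     # nonnegative data matrix

    model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(V)                    # 100 x 5
    H = model.components_                         # 5 x 40
    print("reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 3))
    ```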
  • Detecting Inappropriate Access to Electronic Health Records Using Collaborative Filtering.
    Mach. Learn. (IF 2.809) Pub Date : 2014-04-01
    Aditya Krishna Menon, Xiaoqian Jiang, Jihoon Kim, Jaideep Vaidya, Lucila Ohno-Machado

    Many healthcare facilities enforce security on their electronic health records (EHRs) through a corrective mechanism: some staff nominally have almost unrestricted access to the records, but there is a strict ex post facto audit process for inappropriate accesses, i.e., accesses that violate the facility's security and privacy policies. This process is inefficient, as each suspicious access has to

    Updated: 2019-11-01
  • Differential privacy based on importance weighting.
    Mach. Learn. (IF 2.809) Pub Date : 2014-02-01
    Zhanglong Ji, Charles Elkan

    This paper analyzes a novel method for publishing data while still protecting privacy. The method is based on computing weights that make an existing dataset, for which there are no confidentiality issues, analogous to the dataset that must be kept private. The existing dataset may be genuine but public already, or it may be synthetic. The weights are importance sampling weights, but to protect privacy

    Updated: 2019-11-01
  • Using random forests to diagnose aviation turbulence.
    Mach. Learn. (IF 2.809) Pub Date : 2014-01-01
    John K Williams

    Atmospheric turbulence poses a significant hazard to aviation, with severe encounters costing airlines millions of dollars per year in compensation, aircraft damage, and delays due to required post-event inspections and repairs. Moreover, attempts to avoid turbulent airspace cause flight delays and en route deviations that increase air traffic controller workload, disrupt schedules of air crews and

    Updated: 2019-11-01
  • Enhancing understanding and improving prediction of severe weather through spatiotemporal relational learning.
    Mach. Learn. (IF 2.809) Pub Date : 2014-01-01
    Amy McGovern, David J Gagne, John K Williams, Rodger A Brown, Jeffrey B Basara

    Severe weather, including tornadoes, thunderstorms, wind, and hail annually cause significant loss of life and property. We are developing spatiotemporal machine learning techniques that will enable meteorologists to improve the prediction of these events by improving their understanding of the fundamental causes of the phenomena and by building skillful empirical predictive models. In this paper,

    Updated: 2019-11-01
  • Informing sequential clinical decision-making through reinforcement learning: an empirical study.
    Mach. Learn. (IF 2.809) Pub Date : 2011-07-30
    Susan M Shortreed, Eric Laber, Daniel J Lizotte, T Scott Stroup, Joelle Pineau, Susan A Murphy

    This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the

    Updated: 2019-11-01
  • Ensemble Clustering using Semidefinite Programming with Applications.
    Mach. Learn. (IF 2.809) Pub Date : 2010-05-01
    Vikas Singh, Lopamudra Mukherjee, Jiming Peng, Jinhui Xu

    In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better

    Updated: 2019-11-01
Contents have been reproduced by permission of the publishers.