-
Stress detection with encoding physiological signals and convolutional neural network Mach. Learn. (IF 7.5) Pub Date : 2024-03-15 Michela Quadrini, Antonino Capuccio, Denise Falcone, Sebastian Daberdaku, Alessandro Blanda, Luca Bellanova, Gianluca Gerard
-
Glacier: guided locally constrained counterfactual explanations for time series classification Mach. Learn. (IF 7.5) Pub Date : 2024-03-13 Zhendong Wang, Isak Samsten, Ioanna Miliou, Rami Mochaourab, Panagiotis Papapetrou
-
Neural network relief: a pruning algorithm based on neural activity Mach. Learn. (IF 7.5) Pub Date : 2024-03-05
Abstract Current deep neural networks (DNNs) are overparameterized and use most of their neuronal connections during inference for each task. The human brain, however, developed specialized regions for different tasks and performs inference with a small fraction of its neuronal connections. We propose an iterative pruning strategy introducing a simple importance-score metric that deactivates unimportant …
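The truncated abstract sketches an iterative, importance-score-driven pruning loop. As a rough illustration only — plain weight magnitude stands in for the paper's activity-based score, and `prune_step` is a hypothetical name, not from the paper — one pruning step might look like:

```python
def prune_step(weights, active, frac=0.2):
    """Deactivate the lowest-importance fraction of the still-active weights.

    Importance here is plain magnitude |w|, a stand-in for the paper's
    activity-based score. Repeated calls give an iterative pruning schedule.
    """
    scored = sorted((abs(w), i) for i, w in enumerate(weights) if active[i])
    k = int(len(scored) * frac)          # how many connections to deactivate
    for _, i in scored[:k]:
        active[i] = False
    return active

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
active = [True] * len(weights)
prune_step(weights, active, frac=0.5)    # removes the 3 smallest-|w| entries
```

In a real DNN the mask would sit on layer weight tensors and the network would be fine-tuned between pruning steps.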
-
Tackle balancing constraints in semi-supervised ordinal regression Mach. Learn. (IF 7.5) Pub Date : 2024-03-04 Chenkang Zhang, Heng Huang, Bin Gu
-
An encoding approach for stable change point detection Mach. Learn. (IF 7.5) Pub Date : 2024-02-28 Xiaodong Wang, Fushing Hsieh
-
Fair and green hyperparameter optimization via multi-objective and multiple information source Bayesian optimization Mach. Learn. (IF 7.5) Pub Date : 2024-02-28
Abstract It has recently been remarked that focusing only on accuracy when searching for optimal Machine Learning models amplifies biases contained in the data, leading to unfair predictions and decision support. In response, multi-objective hyperparameter optimization has been proposed to search for Machine Learning models that offer Pareto-efficient trade-offs between accuracy and fairness.
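The abstract's notion of Pareto-efficient accuracy/fairness trade-offs can be made concrete with a minimal non-dominated filter (a generic sketch, not the paper's Bayesian optimization machinery; both objectives are treated as maximized and the point tuples are invented for illustration):

```python
def pareto_front(points):
    """Return the non-dominated points when every objective is maximised.

    A point p is dominated if some other point q is at least as good in
    every objective (and is a different point).
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (accuracy, fairness) pairs for five hypothetical model configurations
candidates = [(0.9, 0.2), (0.8, 0.5), (0.7, 0.7), (0.6, 0.6), (0.85, 0.4)]
pareto_front(candidates)  # (0.6, 0.6) is dominated by (0.7, 0.7)
```

A multi-objective optimizer would then search along this front rather than return a single "best" model.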
-
Dynamic datasets and market environments for financial reinforcement learning Mach. Learn. (IF 7.5) Pub Date : 2024-02-26 Xiao-Yang Liu, Ziyi Xia, Hongyang Yang, Jiechao Gao, Daochen Zha, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo
-
Re-attentive experience replay in off-policy reinforcement learning Mach. Learn. (IF 7.5) Pub Date : 2024-02-22 Wei Wei, Da Wang, Lin Li, Jiye Liang
-
Reinforcement learning tutor better supported lower performers in a math task Mach. Learn. (IF 7.5) Pub Date : 2024-02-09
Abstract Resource limitations make it challenging to provide all students with one of the most effective educational interventions: personalized instruction. Reinforcement learning could be a pivotal tool for decreasing the development costs and enhancing the effectiveness of intelligent tutoring software that aims to provide the right support, at the right time, to a student. Here we illustrate that deep …
-
Goal exploration augmentation via pre-trained skills for sparse-reward long-horizon goal-conditioned reinforcement learning Mach. Learn. (IF 7.5) Pub Date : 2024-02-05 Lisheng Wu, Ke Chen
-
Goal-conditioned offline reinforcement learning through state space partitioning Mach. Learn. (IF 7.5) Pub Date : 2024-02-05 Mianchu Wang, Yue Jin, Giovanni Montana
-
Differentially private Riemannian optimization Mach. Learn. (IF 7.5) Pub Date : 2024-02-01 Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
-
DPQ: dynamic pseudo-mean mixed-precision quantization for pruned neural network Mach. Learn. (IF 7.5) Pub Date : 2024-01-31 Songwen Pei, Jiyao Wang, Bingxue Zhang, Wei Qin, Hai Xue, Xiaochun Ye, Mingsong Chen
-
Reduced implication-bias logic loss for neuro-symbolic learning Mach. Learn. (IF 7.5) Pub Date : 2024-01-30 Hao-Yuan He, Wang-Zhou Dai, Ming Li
-
Recurrent segmentation meets block models in temporal networks Mach. Learn. (IF 7.5) Pub Date : 2024-01-30 Chamalee Wickrama Arachchi, Nikolaj Tatti
-
DPG: a model to build feature subspace against adversarial patch attack Mach. Learn. (IF 7.5) Pub Date : 2024-01-24
Abstract Adversarial patch attacks in the physical world are a major threat to the application of deep learning. However, current research on adversarial patch defense focuses on image pre-processing defenses, which have been shown to reduce the classification accuracy of clean images and to be unable to defend against physically realizable attacks. In this paper, we propose …
-
Paf-tracker: a novel pre-frame auxiliary and fusion visual tracker Mach. Learn. (IF 7.5) Pub Date : 2024-01-24 Wei Liang, Derui Ding, Hui Yu
-
Hybrid approaches to optimization and machine learning methods: a systematic literature review Mach. Learn. (IF 7.5) Pub Date : 2024-01-24 Beatriz Flamia Azevedo, Ana Maria A. C. Rocha, Ana I. Pereira
-
Utilising energy function and variational inference training for learning a graph neural network architecture Mach. Learn. (IF 7.5) Pub Date : 2024-01-23 Gayathri Girish, Deepak Mishra, Subrahamanian K. S. Moosath
-
DynamiSE: dynamic signed network embedding for link prediction Mach. Learn. (IF 7.5) Pub Date : 2024-01-23 Haiting Sun, Peng Tian, Yun Xiong, Yao Zhang, Yali Xiang, Xing Jia, Haofen Wang
-
Empirical analysis of performance assessment for imbalanced classification Mach. Learn. (IF 7.5) Pub Date : 2024-01-23 Jean-Gabriel Gaudreault, Paula Branco
-
Survey on extreme learning machines for outlier detection Mach. Learn. (IF 7.5) Pub Date : 2024-01-23 Rasoul Kiani, Wei Jin, Victor S. Sheng
-
Distributed and explainable GHSOM for anomaly detection in sensor networks Mach. Learn. (IF 7.5) Pub Date : 2024-01-22 Paolo Mignone, Roberto Corizzo, Michelangelo Ceci
-
A neural meta model for predicting winter wheat crop yield Mach. Learn. (IF 7.5) Pub Date : 2024-01-22 Yogesh Bansal, David Lillis, M.-Tahar Kechadi
-
Mental stress detection from ultra-short heart rate variability using explainable graph convolutional network with network pruning and quantisation Mach. Learn. (IF 7.5) Pub Date : 2024-01-22
Abstract This study introduces a novel pruning approach based on explainable graph convolutional networks that strategically combines pruning and quantisation, aimed at tackling the complexities of existing machine learning and deep learning models for stress detection using ultra-short heart rate variability analysis. These complexities often impede the practical implementation of such models …
-
Event causality extraction through external event knowledge learning and polyhedral word embedding Mach. Learn. (IF 7.5) Pub Date : 2024-01-22
Abstract Extracting causal relations between events from text is vital in natural language processing. Existing methods, which explore the text only shallowly, usually target causal connection words but neglect implicit causal cues. Furthermore, most of them represent words based solely on contextual semantics, without explicitly considering information related to causality. All of these factors contribute …
-
Margin distribution and structural diversity guided ensemble pruning Mach. Learn. (IF 7.5) Pub Date : 2024-01-18
Abstract Ensemble methods that train and combine multiple learners have always been among the state-of-the-art learning methods, and ensemble pruning aims at generating a smaller-sized ensemble with even better generalization performance. Abundant ensemble pruning methods that use evaluation criteria such as diversity or margin together with validation error have been proposed. However, as these evaluation …
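As a point of reference for what ensemble pruning does, here is a plain backward-elimination baseline driven only by validation accuracy; the margin-distribution and structural-diversity criteria the abstract proposes are not reproduced, and all names and data are illustrative:

```python
def ensemble_accuracy(member_probs, labels):
    """Accuracy of the averaged-probability ensemble at threshold 0.5."""
    hits = 0
    for i, label in enumerate(labels):
        avg = sum(p[i] for p in member_probs) / len(member_probs)
        hits += int(int(avg >= 0.5) == label)
    return hits / len(labels)

def greedy_prune(member_probs, labels, target_size):
    """Repeatedly drop the member whose removal best preserves validation
    accuracy -- a simple backward-elimination baseline for ensemble pruning."""
    kept = list(range(len(member_probs)))
    while len(kept) > target_size:
        drop = max(kept, key=lambda j: ensemble_accuracy(
            [member_probs[i] for i in kept if i != j], labels))
        kept.remove(drop)
    return kept

# predicted positive-class probabilities of 3 members on 4 validation samples
probs = [[0.9, 0.1, 0.8, 0.9], [0.6, 0.4, 0.4, 0.7], [0.2, 0.8, 0.3, 0.4]]
labels = [1, 0, 1, 1]
greedy_prune(probs, labels, target_size=2)
```

Criteria like margin or diversity replace the accuracy score in `greedy_prune`; the abstract's point is that validation accuracy alone is not a reliable guide.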
-
Learning sample-aware threshold for semi-supervised learning Mach. Learn. (IF 7.5) Pub Date : 2024-01-18 Qi Wei, Lei Feng, Haoliang Sun, Ren Wang, Rundong He, Yilong Yin
-
Communication-efficient clustered federated learning via model distance Mach. Learn. (IF 7.5) Pub Date : 2024-01-17 Mao Zhang, Tie Zhang, Yifei Cheng, Changcun Bao, Haoyu Cao, Deqiang Jiang, Linli Xu
-
On-the-fly image-level oversampling for imbalanced datasets of manufacturing defects Mach. Learn. (IF 7.5) Pub Date : 2024-01-17 Spyros Theodoropoulos, Patrik Zajec, Jože M. Rožanec, Dimosthenis Kyriazis, Panayiotis Tsanakas
-
Understanding imbalanced data: XAI & interpretable ML framework Mach. Learn. (IF 7.5) Pub Date : 2024-01-16
Abstract There is a gap between current methods that explain deep learning models working on imbalanced image data and the needs of the imbalanced learning community. Existing methods that explain imbalanced data are geared toward binary classification, single-layer machine learning models, and low-dimensional data. Current eXplainable Artificial Intelligence (XAI) techniques for vision data mainly …
-
On the effects of biased quantum random numbers on the initialization of artificial neural networks Mach. Learn. (IF 7.5) Pub Date : 2024-01-16 Raoul Heese, Moritz Wolter, Sascha Mücke, Lukas Franken, Nico Piatkowski
-
OT-net: a reusable neural optimal transport solver Mach. Learn. (IF 7.5) Pub Date : 2024-01-16 Zezeng Li, Shenghao Li, Lianbao Jin, Na Lei, Zhongxuan Luo
-
PANACEA: a neural model ensemble for cyber-threat detection Mach. Learn. (IF 7.5) Pub Date : 2024-01-12 Malik AL-Essa, Giuseppina Andresini, Annalisa Appice, Donato Malerba
-
Hierarchical U-net with re-parameterization technique for spatio-temporal weather forecasting Mach. Learn. (IF 7.5) Pub Date : 2024-01-12
Abstract Due to the considerable computational demands of physics-based numerical weather prediction, especially when modeling fine-grained spatio-temporal atmospheric phenomena, deep learning methods offer an advantageous approach by leveraging specialized computing devices to accelerate training and significantly reduce computational costs. Consequently, the application of deep learning methods has …
-
Compositional scene modeling with global object-centric representations Mach. Learn. (IF 7.5) Pub Date : 2024-01-11
Abstract The appearance of the same object may vary in different scene images due to occlusions between objects. Humans can quickly identify the same object, even if occlusions exist, by completing the occluded parts based on its complete canonical image in memory. Achieving this ability is still challenging for existing models, especially in the unsupervised learning setting. Inspired by such …
-
Tracking treatment effect heterogeneity in evolving environments Mach. Learn. (IF 7.5) Pub Date : 2024-01-11 Tian Qin, Long-Fei Li, Tian-Zuo Wang, Zhi-Hua Zhou
-
Explaining neural networks without access to training data Mach. Learn. (IF 7.5) Pub Date : 2024-01-10 Sascha Marton, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, Heiner Stuckenschmidt
-
Principled diverse counterfactuals in multilinear models Mach. Learn. (IF 7.5) Pub Date : 2024-01-10
Abstract Machine learning (ML) applications have automated numerous real-life tasks, improving both private and public life. However, the black-box nature of many state-of-the-art models poses the challenge of model verification: how can one be sure that the algorithm bases its decisions on the proper criteria, or that it does not discriminate against certain minority groups? In this paper we propose …
-
Nrat: towards adversarial training with inherent label noise Mach. Learn. (IF 7.5) Pub Date : 2024-01-10
Abstract Adversarial training (AT) has been widely recognized as the most effective defense against adversarial attacks on deep neural networks, and it is formulated as a min-max optimization problem. Most AT algorithms are geared towards research-oriented datasets such as MNIST, CIFAR10, etc., where the labels are generally correct. However, noisy labels, e.g., mislabelling, are inevitable in real-world …
-
Style spectroscope: improve interpretability and controllability through Fourier analysis Mach. Learn. (IF 7.5) Pub Date : 2024-01-09 Zhiyu Jin, Xuli Shen, Bin Li, Xiangyang Xue
-
Structural causal models reveal confounder bias in linear program modelling Mach. Learn. (IF 7.5) Pub Date : 2024-01-09
Abstract Recent years have been marked by extensive research on adversarial attacks, especially on deep neural networks. With this work we pose and investigate the question of whether the phenomenon might be more general in nature, that is, whether adversarial-style attacks exist outside classical classification tasks. Specifically, we investigate optimization problems as they constitute a fundamental …
-
Better schedules for low precision training of deep neural networks Mach. Learn. (IF 7.5) Pub Date : 2024-01-08
Abstract Low precision training can significantly reduce the computational overhead of training deep neural networks (DNNs). Though many such techniques exist, cyclic precision training (CPT), which dynamically adjusts precision throughout training according to a cyclic schedule, achieves particularly impressive improvements in training efficiency, while actually improving DNN performance. Existing …
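The cyclic schedule that CPT refers to can be illustrated with a simple triangle-wave bit-width schedule — a generic sketch under assumed defaults (`period`, 4-to-8-bit range), not the paper's exact schedule:

```python
def cyclic_precision(step, period=100, low=4, high=8):
    """Bit-width for a given training step under a triangle-wave cycle.

    Precision rises from `low` to `high` over the first half of each cycle
    and falls back over the second half, analogous to cyclical learning
    rates. Defaults are illustrative, not taken from the CPT paper.
    """
    phase = (step % period) / period        # position in the cycle, [0, 1)
    tri = 1.0 - abs(2.0 * phase - 1.0)      # triangle wave in [0, 1]
    return low + round((high - low) * tri)

[cyclic_precision(s) for s in (0, 25, 50, 75)]  # bits across one cycle
```

A training loop would quantize weights and activations to `cyclic_precision(step)` bits before each forward/backward pass.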
-
Exploiting counter-examples for active learning with partial labels Mach. Learn. (IF 7.5) Pub Date : 2024-01-08
Abstract This paper studies a new problem, active learning with partial labels (ALPL). In this setting, an oracle annotates the query samples with partial labels, relaxing the oracle from the demanding accurate labeling process. To address ALPL, we first build an intuitive baseline that can be seamlessly incorporated into existing AL frameworks. Though effective, this baseline is still susceptible …
-
Learning de-biased regression trees and forests from complex samples Mach. Learn. (IF 7.5) Pub Date : 2024-01-08 Malte Nalenz, Julian Rodemann, Thomas Augustin
-
Fast deep mixtures of Gaussian process experts Mach. Learn. (IF 7.5) Pub Date : 2024-01-08 Clement Etienam, Kody J. H. Law, Sara Wade, Vitaly Zankin
-
Task-decoupled interactive embedding network for object detection Mach. Learn. (IF 7.5) Pub Date : 2024-01-05 Mai Liu, Jichao Jiao, Ning Li, Min Pang
-
No regret sample selection with noisy labels Mach. Learn. (IF 7.5) Pub Date : 2024-01-05 Heon Song, Nariaki Mitsuo, Seiichi Uchida, Daiki Suehiro
-
GS2P: a generative pre-trained learning to rank model with over-parameterization for web-scale search Mach. Learn. (IF 7.5) Pub Date : 2024-01-05 Yuchen Li, Haoyi Xiong, Linghe Kong, Jiang Bian, Shuaiqiang Wang, Guihai Chen, Dawei Yin
-
Entity recognition based on heterogeneous graph reasoning of visual region and text candidate Mach. Learn. (IF 7.5) Pub Date : 2024-01-05 Xinzhi Wang, Nengjun Zhu, Jiahao Li, Yudong Chang, Zhennan Li
-
Persistence B-spline grids: stable vector representation of persistence diagrams based on data fitting Mach. Learn. (IF 7.5) Pub Date : 2024-01-03 Zhetong Dong, Hongwei Lin, Chi Zhou, Ben Zhang, Gengchen Li
-
Sanitized clustering against confounding bias Mach. Learn. (IF 7.5) Pub Date : 2023-12-27 Yinghua Yao, Yuangang Pan, Jing Li, Ivor W. Tsang, Xin Yao
-
Bayesian tensor factorisations for time series of counts Mach. Learn. (IF 7.5) Pub Date : 2023-12-27 Zhongzhen Wang, Petros Dellaportas, Ioannis Kosmidis
-
Active learning algorithm through the lens of rejection arguments Mach. Learn. (IF 7.5) Pub Date : 2023-12-26 Christophe Denis, Mohamed Hebiri, Boris Ndjia Njike, Xavier Siebert
-
Scalable variable selection for two-view learning tasks with projection operators Mach. Learn. (IF 7.5) Pub Date : 2023-12-22
Abstract In this paper we propose a novel variable selection method for two-view settings, or for vector-valued supervised learning problems. Our framework can handle extremely large-scale selection tasks, where the number of data samples can reach into the millions. In a nutshell, our method performs variable selection by iteratively selecting variables that are highly correlated with the output variables …
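The idea of selecting variables highly correlated with the outputs can be sketched with a naive correlation filter — pure illustration with invented names and data; the paper's projection-operator machinery, which iteratively accounts for already-selected variables, is not reproduced:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_by_correlation(columns, y, k):
    """Rank input columns by |corr(column, y)| and keep the top k --
    a simple filter-style stand-in for the paper's method."""
    ranked = sorted(range(len(columns)),
                    key=lambda j: -abs(pearson(columns[j], y)))
    return sorted(ranked[:k])

cols = [[1, 2, 3, 4], [4, 3, 2, 1], [1, 1, 2, 1]]  # three candidate variables
y = [1, 2, 3, 4]                                    # output variable
select_by_correlation(cols, y, k=2)
```

For vector-valued outputs one would aggregate the score over output columns; the correlation filter scales linearly in the number of samples, which is what makes this family of methods attractive at the sizes the abstract mentions.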
-
logicDT: a procedure for identifying response-associated interactions between binary predictors Mach. Learn. (IF 7.5) Pub Date : 2023-12-22 Michael Lau, Tamara Schikowski, Holger Schwender
-
Chinese character recognition with radical-structured stroke trees Mach. Learn. (IF 7.5) Pub Date : 2023-12-22 Haiyang Yu, Jingye Chen, Bin Li, Xiangyang Xue
-
Deep doubly robust outcome weighted learning Mach. Learn. (IF 7.5) Pub Date : 2023-12-22 Xiaotong Jiang, Xin Zhou, Michael R. Kosorok
-
Generation, augmentation, and alignment: a pseudo-source domain based method for source-free domain adaptation Mach. Learn. (IF 7.5) Pub Date : 2023-12-20 Yuntao Du, Haiyang Yang, Mingcai Chen, Hongtao Luo, Juan Jiang, Yi Xin, Chongjun Wang
-
Hybrid acceleration techniques for the physics-informed neural networks: a comparative analysis Mach. Learn. (IF 7.5) Pub Date : 2023-12-20 Fedor Buzaev, Jiexing Gao, Ivan Chuprov, Evgeniy Kazakov