-
Facilitating interaction between partial differential equation-based dynamics and unknown dynamics for regional wind speed prediction Neural Netw. (IF 7.8) Pub Date : 2024-03-11 Shidong Chen, Baoquan Zhang, Xutao Li, Yunming Ye, Kenghong Lin
Regional wind speed prediction is an important spatiotemporal prediction problem which is crucial for optimizing wind power utilization. Nevertheless, the complex dynamics of wind speed pose a formidable challenge to prediction tasks. The evolving dynamics of wind could be governed by underlying physical principles that can be described by partial differential equations (PDEs). This study proposes a
-
FE-Net: Feature enhancement segmentation network Neural Netw. (IF 7.8) Pub Date : 2024-03-11 Zhangyan Zhao, Xiaoming Chen, Jingjing Cao, Qiangwei Zhao, Wenxi Liu
Semantic segmentation is one of the directions in image research. It aims to obtain the contours of objects of interest, facilitating subsequent engineering tasks such as measurement and feature selection. However, existing segmentation methods still lack precision at class edges, particularly in multi-class mixed regions. To this end, we present the Feature Enhancement Network (FE-Net), a novel approach
-
Source-free unsupervised domain adaptation: A survey Neural Netw. (IF 7.8) Pub Date : 2024-03-11 Yuqi Fang, Pew-Thian Yap, Weili Lin, Hongtu Zhu, Mingxia Liu
Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches highly depend on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission cost, and computation burden
-
Efficient learning of Scale-Adaptive Nearly Affine Invariant Networks Neural Netw. (IF 7.8) Pub Date : 2024-03-11 Zhengyang Shen, Yeqing Qiu, Jialun Liu, Lingshen He, Zhouchen Lin
Recent research has demonstrated the significance of incorporating invariance into neural networks. However, existing methods require direct sampling over the entire transformation set, which is notably computationally taxing for large groups like the affine group. In this study, we propose a more efficient approach by addressing the invariances of the subgroups within a larger group. For tackling affine invariance
-
DWSSA: Alleviating over-smoothness for deep Graph Neural Networks Neural Netw. (IF 7.8) Pub Date : 2024-03-06 Qirong Zhang, Jin Li, Qingqing Ye, Yuxi Lin, Xinlong Chen, Yang-Geng Fu
Graph Neural Networks (GNNs) have demonstrated great potential in achieving outstanding performance in various graph-related tasks, e.g., graph classification and link prediction. However, most of them suffer from the following issue: shallow networks capture very limited knowledge. Prior works design deep GNNs with more layers to solve the issue, which however introduces a new challenge, i.e., the
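As background on the over-smoothing issue this abstract targets (a generic illustration, not the paper's DWSSA method): stacking many mean-aggregation layers drives all node features toward a common value. A minimal NumPy sketch with illustrative names:

```python
import numpy as np

def mean_aggregate(features, adj):
    """One round of mean-neighbour aggregation with self-loops."""
    a = adj + np.eye(adj.shape[0])        # add self-loops
    a = a / a.sum(axis=1, keepdims=True)  # row-normalize (random-walk matrix)
    return a @ features

# A 4-node path graph with random 3-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.default_rng(0).normal(size=(4, 3))

spread_before = x.std(axis=0).mean()
for _ in range(20):                       # emulate a 20-layer stack
    x = mean_aggregate(x, adj)
spread_after = x.std(axis=0).mean()
# spread_after is far smaller: the node features have nearly collapsed
```

The spread across nodes shrinks with every layer, which is exactly the limited discriminative power that deep-GNN designs try to counteract.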
-
Generalization analysis of deep CNNs under maximum correntropy criterion Neural Netw. (IF 7.8) Pub Date : 2024-03-05 Yingqiao Zhang, Zhiying Fang, Jun Fan
Convolutional neural networks (CNNs) have gained immense popularity in recent years, finding their utility in diverse fields such as image recognition, natural language processing, and bio-informatics. Despite the remarkable progress made in deep learning theory, most studies on CNNs, especially in regression tasks, tend to heavily rely on the least squares loss function. However, there are situations
-
Adaptive Relation-Aware Network for zero-shot classification Neural Netw. (IF 7.8) Pub Date : 2024-03-05 Xun Zhang, Yang Liu, Yuhao Dang, Xinbo Gao, Jungong Han, Ling Shao
Supervised learning-based image classification in computer vision relies on visual samples containing a large amount of labeled information. Considering that it is labor-intensive to collect and label images and construct datasets manually, Zero-Shot Learning (ZSL) achieves knowledge transfer from seen categories to unseen categories by mining auxiliary information, which reduces the dependence on
-
Attributed Multi-Order Graph Convolutional Network for Heterogeneous Graphs Neural Netw. (IF 7.8) Pub Date : 2024-03-04 Zhaoliang Chen, Zhihao Wu, Luying Zhong, Claudia Plant, Shiping Wang, Wenzhong Guo
Heterogeneous graph neural networks play a crucial role in discovering discriminative node embeddings and relations from multi-relational networks. One of the key challenges in heterogeneous graph learning lies in designing learnable meta-paths, which significantly impact the quality of learned embeddings. In this paper, we propose an Attributed Multi-Order Graph Convolutional Network (AMOGCN), which automatically
-
Multi-view graph pooling with coarsened graph disentanglement Neural Netw. (IF 7.8) Pub Date : 2024-03-04 Zidong Wang, Huilong Fan
Multi-view graph pooling utilizes information from multiple perspectives to generate a coarsened graph, exhibiting superior performance in graph-level tasks. However, existing methods mainly focus on the types of multi-view information to improve graph pooling operations, lacking explicit control over the pooling process and theoretical analysis of the relationships between views. In this paper, we
-
A self-supervised network for image denoising and watermark removal Neural Netw. (IF 7.8) Pub Date : 2024-03-01 Chunwei Tian, Jingyu Xiao, Bob Zhang, Wangmeng Zuo, Yudong Zhang, Chia-Wen Lin
In image watermark removal, popular methods depend on given reference non-watermark images in a supervised way. However, reference non-watermark images are difficult to obtain in the real world. At the same time, they often suffer from the influence of noise when captured by digital devices. To resolve these issues, in this paper, we present a self-supervised network for image denoising and watermark
-
Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks Neural Netw. (IF 7.8) Pub Date : 2024-03-01 Demetrio Labate, Ji Shi
The expressive power of deep neural networks is manifested by their remarkable ability to approximate multivariate functions in a way that appears to overcome the curse of dimensionality. This ability is exemplified by their success in solving high-dimensional problems where traditional numerical solvers fail due to their limitations in accurately representing high-dimensional structures. To provide
-
An Inductive Reasoning Model based on Interpretable Logical Rules over temporal knowledge graph Neural Netw. (IF 7.8) Pub Date : 2024-02-29 Xin Mei, Libin Yang, Zuowei Jiang, Xiaoyan Cai, Dehong Gao, Junwei Han, Shirui Pan
Extrapolating future events based on historical information in temporal knowledge graphs (TKGs) holds significant research value and practical applications. In this field, the methods currently utilized can be classified as either embedding-based or logical rule-based. Embedding-based methods depend on learned entity and relation embeddings for prediction, but they suffer from the lack of interpretability
-
ARPruning: An automatic channel pruning based on attention map ranking Neural Netw. (IF 7.8) Pub Date : 2024-02-29 Tongtong Yuan, Zulin Li, Bo Liu, Yinan Tang, Yujia Liu
Structured pruning is a representative model compression technology for convolutional neural networks (CNNs), aiming to prune some less important filters or channels of CNNs. Most recent structured pruning methods have established some criteria to measure the importance of filters, which are mainly based on the magnitude of weights or other parameters in CNNs. However, these judgment criteria lack
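The magnitude-based criterion that this abstract contrasts with can be illustrated by ranking a layer's filters by their L1 norm, a standard baseline; the helper name below is ours, not the paper's:

```python
import numpy as np

def l1_filter_ranking(conv_weight):
    """Rank a conv layer's output channels by the L1 norm of their filters.

    conv_weight: (out_channels, in_channels, kH, kW).
    Returns channel indices from least to most important.
    """
    scores = np.abs(conv_weight).sum(axis=(1, 2, 3))
    return np.argsort(scores)

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 3, 3, 3))
w[5] *= 0.01                       # shrink channel 5: it becomes least important
order = l1_filter_ranking(w)       # channel 5 ranks lowest
pruned = w[np.sort(order[2:])]     # drop the 2 lowest-scoring channels
```

Attention-map ranking, as proposed in the paper, replaces this weight-magnitude score with one derived from activation statistics.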
-
MuLAN: Multi-level attention-enhanced matching network for few-shot knowledge graph completion Neural Netw. (IF 7.8) Pub Date : 2024-02-29 Qianyu Li, Bozheng Feng, Xiaoli Tang, Han Yu, Hengjie Song
Recent years have witnessed increasing interest in the few-shot knowledge graph completion due to its potential to augment the coverage of few-shot relations in knowledge graphs. Existing methods often use the one-hop neighbors of the entity to enhance its embedding and match the query instance and support set at the instance level. However, such methods cannot handle inter-neighbor interaction, local
-
Tolerant Self-Distillation for image classification Neural Netw. (IF 7.8) Pub Date : 2024-02-28 Mushui Liu, Yunlong Yu, Zhong Ji, Jungong Han, Zhongfei Zhang
Deep neural networks tend to suffer from the overfitting issue when the training data are insufficient. In this paper, we introduce two metrics from the intra-class distribution of correctly predicted and incorrectly predicted samples to provide a new perspective on the overfitting issue. Based on these metrics, we propose a knowledge distillation approach that does not require pretraining a teacher model in advance, named Tolerant
-
Spatial multi-attention conditional neural processes Neural Netw. (IF 7.8) Pub Date : 2024-02-28 Li-Li Bao, Jiang-She Zhang, Chun-Xia Zhang
Spatial prediction tasks are challenging when observed samples are sparse and prediction samples are abundant. Gaussian processes (GPs) are commonly used in spatial prediction tasks and have the advantage of measuring the uncertainty of the interpolation result. However, as the sample size increases, GPs suffer from significant overhead. Standard neural networks (NNs) provide a powerful and scalable
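The overhead the abstract refers to is the linear solve against the n-by-n kernel matrix in exact GP prediction, which scales as O(n³) in the training size. A bare-bones sketch with an RBF kernel (illustrative, not the paper's model):

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Exact GP regression; the solve against the n x n kernel is O(n^3)."""
    k = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    mean = k_star @ np.linalg.solve(k, y_train)   # posterior mean
    v = np.linalg.solve(k, k_star.T)
    var = 1.0 - np.sum(k_star * v.T, axis=1)      # posterior variance
    return mean, var

x_train = np.linspace(0.0, 3.0, 20)
y_train = np.sin(x_train)
mean, var = gp_predict(x_train, y_train, np.array([1.5]))
# mean[0] is close to sin(1.5) and var[0] is small: dense data, confident GP
```

The posterior variance is the uncertainty measure GPs are valued for; neural process models aim to keep it while avoiding the cubic cost.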
-
Weisfeiler–Lehman goes dynamic: An analysis of the expressive power of Graph Neural Networks for attributed and dynamic graphs Neural Netw. (IF 7.8) Pub Date : 2024-02-28 Silvia Beddar-Wiesing, Giuseppe Alessio D’Inverno, Caterina Graziani, Veronica Lachi, Alice Moallemy-Oureh, Franco Scarselli, Josephine Maria Thomas
Graph Neural Networks (GNNs) are a large class of relational models for graph processing. Recent theoretical studies on the expressive power of GNNs have focused on two issues. On the one hand, it has been proven that GNNs are as powerful as the Weisfeiler–Lehman test (1-WL) in their ability to distinguish graphs. Moreover, it has been shown that the equivalence enforced by 1-WL equals unfolding equivalence
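For readers unfamiliar with the 1-WL test referenced here, colour refinement can be sketched in a few lines; it separates a triangle from a 3-node path but cannot separate a 6-cycle from two disjoint triangles (both are 2-regular):

```python
def wl_refine(adj, rounds=3):
    """1-WL colour refinement; adj maps each node to its neighbour list.

    Colours are nested tuples, so the returned sorted multiset is
    directly comparable across graphs.
    """
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                  for v in adj}
    return sorted(colors.values())

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_refine(triangle) != wl_refine(path))      # True: distinguished
print(wl_refine(c6) == wl_refine(two_triangles))   # True: not distinguished
```

The second pair is the classic witness that 1-WL, and hence standard message-passing GNNs, cannot tell all non-isomorphic graphs apart.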
-
Communication-efficient distributed cubic Newton with compressed lazy Hessian Neural Netw. (IF 7.8) Pub Date : 2024-02-27 Zhen Zhang, Keqin Che, Shaofu Yang, Wenying Xu
Recently, second-order distributed optimization algorithms have become a research hotspot in distributed learning, due to their faster convergence rate than first-order algorithms. However, second-order algorithms often suffer from a serious communication bottleneck. To overcome this challenge, we propose communication-efficient second-order distributed optimization algorithms in the parameter-server
-
Structure-aware contrastive hashing for unsupervised cross-modal retrieval Neural Netw. (IF 7.8) Pub Date : 2024-02-27 Jinrong Cui, Zhipeng He, Qiong Huang, Yulu Fu, Yuting Li, Jie Wen
Cross-modal hashing has attracted a lot of attention and achieved remarkable success in large-scale cross-media similarity retrieval applications because of its superior computational efficiency and low storage overhead. However, constructing similarity relationship among samples in cross-modal unsupervised hashing is challenging because of the lack of manual annotation. Most existing unsupervised
-
A Comprehensive Survey on Deep Graph Representation Learning Neural Netw. (IF 7.8) Pub Date : 2024-02-27 Wei Ju, Zheng Fang, Yiyang Gu, Zequn Liu, Qingqing Long, Ziyue Qiao, Yifang Qin, Jianhao Shen, Fang Sun, Zhiping Xiao, Junwei Yang, Jingyang Yuan, Yusheng Zhao, Yifan Wang, Xiao Luo, Ming Zhang
Graph representation learning aims to effectively encode high-dimensional sparse graph-structured data into low-dimensional dense vectors, which is a fundamental task that has been widely studied in a range of fields, including machine learning and data mining. Classic graph embedding methods follow the basic idea that the embedding vectors of interconnected nodes in the graph can still maintain a
-
Multi-level multilingual semantic alignment for zero-shot cross-lingual transfer learning Neural Netw. (IF 7.8) Pub Date : 2024-02-27 Anchun Gui, Han Xiao
Recently, cross-lingual transfer learning has attracted extensive attention from both academia and industry. Previous studies usually focus only on the single-level alignment (e.g., word-level, sentence-level), based on pre-trained language models. However, it leads to suboptimal performance in downstream tasks of the low-resource language due to the missing correlation of hierarchical semantic information
-
Graph-based social relation inference with multi-level conditional attention Neural Netw. (IF 7.8) Pub Date : 2024-02-27 Xiaotian Yu, Hanling Yi, Qie Tang, Kun Huang, Wenze Hu, Shiliang Zhang, Xiaoyu Wang
Social relation inference intrinsically requires high-level semantic understanding. In order to accurately infer relations of persons in images, one needs not only to understand scenes and objects in images, but also to adaptively attend to important clues. Unlike prior works of classifying social relations using attention on detected objects, we propose a MUlti-level Conditional Attention (MUCA) mechanism
-
On the approximation of bi-Lipschitz maps by invertible neural networks Neural Netw. (IF 7.8) Pub Date : 2024-02-24 Bangti Jin, Zehui Zhou, Jun Zou
Invertible neural networks (INNs) represent an important class of deep neural network architectures that have been widely used in applications. The universal approximation properties of INNs have been established recently. However, the approximation rate of INNs is largely missing. In this work, we provide an analysis of the capacity of a class of coupling-based INNs to approximate bi-Lipschitz continuous
-
A regularized orthogonal activated inverse-learning neural network for regression and classification with outliers Neural Netw. (IF 7.8) Pub Date : 2024-02-24 Zhijun Zhang, Yating Song, Tao Chen, Jie He
A novel regularized orthogonal activated inverse-learning (ROAIL) neural network is proposed and investigated for reducing the impact of outliers in regression and classification fields. The proposed ROAIL network does not require extensive iterative computations. Instead, it can achieve the desired results with a single step of computation, allowing for the efficient acquisition of network weights
-
Multi-node knowledge graph assisted distributed fault detection for large-scale industrial processes based on graph attention network and bidirectional LSTMs Neural Netw. (IF 7.8) Pub Date : 2024-02-24 Qing Li, Yangfan Wang, Jie Dong, Chi Zhang, Kaixiang Peng
Modern industrial processes are characterized by their large scale, multiple operation units, and strongly coupled subsystems. Fault detection of large-scale processes is still a challenging problem, especially for tandem plant-wide processes in fields such as water treatment. In this paper, a novel distributed graph attention network-bidirectional long short-term memory (D-GATBLSTM)
-
TCDformer: A transformer framework for non-stationary time series forecasting based on trend and change-point detection Neural Netw. (IF 7.8) Pub Date : 2024-02-23 Jiashan Wan, Na Xia, Yutao Yin, Xulei Pan, Jin Hu, Jun Yi
Although time series prediction models based on Transformer architecture have achieved significant advances, concerns have arisen regarding their performance with non-stationary real-world data. Traditional methods often use stabilization techniques to boost predictability, but this often results in the loss of non-stationarity, notably underperforming when tackling major events in practical applications
-
Mode combinability: Exploring convex combinations of permutation aligned models Neural Netw. (IF 7.8) Pub Date : 2024-02-23 Adrián Csiszárik, Melinda F. Kiss, Péter Kőrösi-Szabó, Márton Muntag, Gergely Papp, Dániel Varga
We explore element-wise convex combinations of two permutation-aligned neural network parameter vectors of the same size. We conduct extensive experiments by examining various distributions of such model combinations parametrized by elements of the hypercube and its vicinity. Our findings reveal that broad regions of the hypercube form surfaces of low loss values, indicating that the notion of linear mode
-
Towards a unified framework for graph-based multi-view clustering Neural Netw. (IF 7.8) Pub Date : 2024-02-23 F. Dornaika, S. El Hajjar
Recently, clustering data collected from various sources has become a hot topic in real-world applications. The most common methods for multi-view clustering can be divided into several categories: Spectral clustering algorithms, subspace multi-view clustering algorithms, matrix factorization approaches, and kernel methods. Despite the high performance of these methods, they directly fuse all similarity
-
Hierarchical matching and reasoning for multi-query image retrieval Neural Netw. (IF 7.8) Pub Date : 2024-02-22 Zhong Ji, Zhihao Li, Yan Zhang, Haoran Wang, Yanwei Pang, Xuelong Li
As a promising field, Multi-Query Image Retrieval (MQIR) aims at searching for the semantically relevant image given multiple region-specific text queries. Existing works mainly focus on a single-level similarity between image regions and text queries, which neglect the hierarchical guidance of multi-level similarities and result in incomplete alignments. Besides, the high-level semantic correlations
-
How to evaluate uncertainty estimates in machine learning for regression? Neural Netw. (IF 7.8) Pub Date : 2024-02-22 Laurens Sluijterman, Eric Cator, Tom Heskes
As neural networks become more popular, the need for accompanying uncertainty estimates increases. There are currently two main approaches to test the quality of these estimates. Most methods output a density. They can be compared by evaluating their log-likelihood on a test set. Other methods output a prediction interval directly. These methods are often tested by examining the fraction of test points
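The interval-based evaluation mentioned at the end is usually reported as Prediction Interval Coverage Probability (PICP), the fraction of test targets falling inside their intervals. A minimal sketch:

```python
import numpy as np

def picp(y_true, lower, upper):
    """Fraction of targets covered by their prediction intervals."""
    return np.mean((y_true >= lower) & (y_true <= upper))

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)                       # standard-normal targets
lo, hi = np.full_like(y, -1.645), np.full_like(y, 1.645)
cov = picp(y, lo, hi)                             # approximately 0.90
```

A well-calibrated 90% interval should achieve coverage near 0.90, as this synthetic check illustrates; the paper's question is how informative such checks really are.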
-
SecureNet: Proactive intellectual property protection and model security defense for DNNs based on backdoor learning Neural Netw. (IF 7.8) Pub Date : 2024-02-21 Peihao Li, Jie Huang, Huaqing Wu, Zeping Zhang, Chunyang Qi
With the widespread application of deep neural networks (DNNs), the risk of privacy breaches against DNN models is constantly on the rise, resulting in an increasing need for intellectual property (IP) protection for such models. Although neural network watermarking techniques are widely used to safeguard the IP of DNNs, they can only achieve passive protection and cannot actively prevent unauthorized
-
Robust noise-aware algorithm for randomized neural network and its convergence properties Neural Netw. (IF 7.8) Pub Date : 2024-02-21 Yuqi Xiao, Muideen Adegoke, Chi-Sing Leung, Kwok Wa Leung
The concept of randomized neural networks (RNNs), such as the random vector functional link network (RVFL) and extreme learning machine (ELM), is a widely accepted and efficient network method for constructing single-hidden layer feedforward networks (SLFNs). Due to its exceptional approximation capabilities, RNN is being extensively used in various fields. While the RNN concept has shown great promise
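The one-shot training that makes RVFL/ELM-style networks efficient, random hidden weights followed by a single least-squares solve for the output layer, can be sketched as follows (illustrative names; standard ELM without the paper's noise-aware modifications):

```python
import numpy as np

def train_elm(x, y, hidden=50, seed=0):
    """ELM-style training: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(x.shape[1], hidden))     # fixed random input weights
    b = rng.normal(size=hidden)
    h = np.tanh(x @ w + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(h, y, rcond=None)  # single least-squares solve
    return w, b, beta

def elm_predict(x, w, b, beta):
    return np.tanh(x @ w + b) @ beta

x = np.linspace(-2.0, 2.0, 200)[:, None]
y = np.sin(3.0 * x[:, 0])
w, b, beta = train_elm(x, y)
err = np.mean((elm_predict(x, w, b, beta) - y) ** 2)
# err is small even though no iterative training was performed
```

Only the output weights are learned, which is why a single linear solve suffices; robustness to noise in that solve is what the paper addresses.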
-
Enhancing adversarial attacks with resize-invariant and logical ensemble Neural Netw. (IF 7.8) Pub Date : 2024-02-20 Yanling Shao, Yuzhi Zhang, Wenyong Dong, Qikun Zhang, Pingping Shan, Junying Guo, Hairui Xu
In black-box scenarios, most transfer-based attacks usually improve the transferability of adversarial examples by optimizing the gradient calculation of the input image. Unfortunately, since the gradient information is only calculated and optimized for each pixel point in the image individually, the generated adversarial examples tend to overfit the local model and have poor transferability to the
-
Delay-dependent Lurie–Postnikov type Lyapunov–Krasovskii functionals for stability analysis of discrete-time delayed neural networks Neural Netw. (IF 7.8) Pub Date : 2024-02-20 Ke-You Xie, Chuan-Ke Zhang, Sangmoon Lee, Yong He, Yajuan Liu
This paper addresses the influence of time-varying delay and nonlinear activation functions with sector restrictions on the stability of discrete-time neural networks. Compared to previous works that mainly focus on the delay information, this paper exploits information about the nonlinear activation functions to complement the analysis technique based on the Lyapunov–Krasovskii functional
-
A feature refinement and adaptive generative adversarial network for thermal infrared image colorization Neural Netw. (IF 7.8) Pub Date : 2024-02-17 Yu Chen, Weida Zhan, Yichun Jiang, Depeng Zhu, Xiaoyu Xu, Ziqiang Hao, Jin Li, Jinxin Guo
Colorizing thermal infrared images poses a significant challenge as current methods struggle with issues such as unrealistic color saturation and limited texture. To address these challenges, we propose the Feature Refinement and Adaptive Generative Adversarial Network (FRAGAN). Our approach enhances the detailed, semantic, and contextual capabilities of image coloring by combining multi-level interactions
-
Higher-order neurodynamical equation for simplex prediction Neural Netw. (IF 7.8) Pub Date : 2024-02-17 Zhihui Wang, Jianrui Chen, Maoguo Gong, Zhongshi Shao
It is demonstrated that higher-order patterns beyond pairwise relations can significantly enhance the learning capability of existing graph-based models, and the simplex is one of the primary forms for graphically representing higher-order patterns. Predicting unknown (disappeared) simplices in real-world complex networks can provide us with deeper insights, thereby assisting us in making better decisions
-
Towards a better negative sampling strategy for dynamic graphs Neural Netw. (IF 7.8) Pub Date : 2024-02-17 Kuang Gao, Chuang Liu, Jia Wu, Bo Du, Wenbin Hu
As dynamic graphs have become indispensable in numerous fields due to their capacity to represent evolving relationships over time, there has been a concomitant increase in the development of Temporal Graph Neural Networks (TGNNs). When training TGNNs for dynamic graph link prediction, the commonly used negative sampling method often produces starkly contrasting samples, which can lead the model to
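The commonly used negative sampling this abstract critiques can be sketched as uniformly corrupting the destination node of each positive edge (a generic baseline, not the paper's proposal):

```python
import random

def sample_negatives(edges, num_nodes, k=1, seed=0):
    """Uniform negative sampling: corrupt each positive edge's destination,
    rejecting candidates that collide with an observed edge."""
    rng = random.Random(seed)
    observed = set(edges)
    negatives = []
    for src, dst in edges:
        for _ in range(k):
            neg = rng.randrange(num_nodes)
            while (src, neg) in observed:
                neg = rng.randrange(num_nodes)
            negatives.append((src, neg))
    return negatives

pos = [(0, 1), (1, 2), (2, 3)]
neg = sample_negatives(pos, num_nodes=10)
# every sampled pair avoids the observed positives
```

Because such uniform negatives are usually far from any true edge, they are easy to classify, which is the "starkly contrasting samples" problem the abstract describes.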
-
Triplet-constrained deep hashing for chest X-ray image retrieval in COVID-19 assessment Neural Netw. (IF 7.8) Pub Date : 2024-02-16 Linmin Wang, Qianqian Wang, Xiaochuan Wang, Yunling Ma, Limei Zhang, Mingxia Liu
Radiology images of the chest, such as computed tomography scans and X-rays, have been prominently used in computer-aided COVID-19 analysis. Learning-based radiology image retrieval has attracted increasing attention recently, which generally involves image feature extraction and finding matches in extensive image databases based on query images. Many deep hashing methods have been developed for chest
-
CGO-ensemble: Chaos game optimization algorithm-based fusion of deep neural networks for accurate Mpox detection Neural Netw. (IF 7.8) Pub Date : 2024-02-16 Sohaib Asif, Ming Zhao, Yangfan Li, Fengxiao Tang, Yusen Zhu
The rising global incidence of human Mpox cases necessitates prompt and accurate identification for effective disease control. Previous studies have predominantly delved into traditional ensemble methods for detection; in contrast, we introduce a novel approach by leveraging a metaheuristic-based ensemble framework. In this research, we present an innovative CGO-Ensemble framework designed to elevate the accuracy
-
Methodology based on spiking neural networks for univariate time-series forecasting Neural Netw. (IF 7.8) Pub Date : 2024-02-16 Sergio Lucas, Eva Portillo
Spiking Neural Networks (SNN) are recognized as well-suited for processing spatiotemporal information with ultra-low energy consumption. However, proposals based on SNN for classification tasks are more common than for forecasting problems. In this sense, this paper presents a new general training methodology for univariate time-series forecasting based on SNN. The methodology is focused on one-step
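As background on the spiking neurons underlying SNNs (generic leaky integrate-and-fire dynamics, not the paper's training methodology):

```python
import numpy as np

def lif_simulate(current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire: leak toward rest, integrate input, spike."""
    v, spikes = 0.0, []
    for i in current:
        v += dt / tau * (-v + i)      # leaky integration step
        if v >= v_thresh:             # threshold crossing -> binary spike
            spikes.append(1)
            v = v_reset               # hard reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

spikes = lif_simulate(np.full(100, 1.5))  # constant supra-threshold drive
# the neuron fires regularly, roughly every 11 steps with these constants
```

Information is carried by the sparse binary spike train rather than dense activations, which is the source of the ultra-low energy consumption mentioned above.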
-
Graph Neural Network contextual embedding for Deep Learning on tabular data Neural Netw. (IF 7.8) Pub Date : 2024-02-16 Mario Villaizán-Vallelado, Matteo Salvatori, Belén Carro, Antonio Javier Sanchez-Esguevillas
All industries are trying to leverage Artificial Intelligence (AI) based on their existing big data, which is available in so-called tabular form, where each record is composed of a number of heterogeneous continuous and categorical columns, also known as features. Deep Learning (DL) has constituted a major breakthrough for AI in fields related to human skills like natural language processing, but its
-
Efficient spiking neural network design via neural architecture search Neural Netw. (IF 7.8) Pub Date : 2024-02-16 Jiaqi Yan, Qianhui Liu, Malu Zhang, Lang Feng, De Ma, Haizhou Li, Gang Pan
Spiking neural networks (SNNs) are brain-inspired models that utilize discrete and sparse spikes to transmit information, thus having the property of energy efficiency. Recent advances in learning algorithms have greatly improved SNN performance due to the automation of feature engineering. While the choice of neural architecture plays a significant role in deep learning, the current SNN architectures
-
Fading memory as inductive bias in residual recurrent networks Neural Netw. (IF 7.8) Pub Date : 2024-02-15 Igor Dubinin, Felix Effenberger
Residual connections have been proposed as an architecture-based inductive bias to mitigate the problem of exploding and vanishing gradients and to increase task performance in both feed-forward and recurrent networks (RNNs) when trained with the backpropagation algorithm. Yet, little is known about how residual connections in RNNs influence their dynamics and fading memory properties. Here, we introduce
-
Extended Dynamic Mode Decomposition with Invertible Dictionary Learning Neural Netw. (IF 7.8) Pub Date : 2024-02-15 Yuhong Jin, Lei Hou, Shun Zhong
The Koopman operator has received attention for providing a potentially global linearization representation of the nonlinear dynamical system. To estimate or control the original system, the invertibility problem is introduced into data-driven modeling, i.e., the observables are required to reconstruct the original system's states. Existing methods cannot solve this problem perfectly. Only
-
SCMEA: A stacked co-enhanced model for entity alignment based on multi-aspect information fusion and bidirectional contrastive learning Neural Netw. (IF 7.8) Pub Date : 2024-02-15 Yunfeng Zhou, Cui Zhu, Wenjun Zhu, Hongyang Li
Entity alignment refers to discovering the entity pairs with the same realistic meaning in different knowledge graphs. This technology is of great significance for completing and fusing knowledge graphs. Recently, methods based on knowledge representation learning have achieved remarkable achievements in entity alignment. However, most existing approaches do not mine hidden information in the knowledge
-
Noncompact uniform universal approximation Neural Netw. (IF 7.8) Pub Date : 2024-02-15 Teun D.H. van Nuland
The universal approximation theorem is generalised to uniform convergence on the (noncompact) input space ℝⁿ. All continuous functions that vanish at infinity can be uniformly approximated by neural networks with one hidden layer, for all activation functions that are continuous, nonpolynomial, and asymptotically polynomial at ±∞. When the activation function is moreover bounded, we exactly determine which functions can be uniformly
-
Defense against adversarial attacks based on color space transformation Neural Netw. (IF 7.8) Pub Date : 2024-02-14 Haoyu Wang, Chunhua Wu, Kangfeng Zheng
Deep Learning algorithms have achieved state-of-the-art performance in various important tasks. However, recent studies have found that an elaborate perturbation may cause a network to misclassify, which is known as an adversarial attack. Based on current research, it is suggested that adversarial examples cannot be eliminated completely. Consequently, it is always possible to determine an attack that
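A canonical example of the "elaborate perturbation" described here is the Fast Gradient Sign Method (FGSM): a single epsilon-bounded step along the sign of the input gradient. On a toy linear scorer the label flip is easy to verify (illustrative numbers, not from the paper):

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: one eps-bounded step along sign(grad)."""
    return x + eps * np.sign(grad)

# Toy linear classifier: predict class 1 iff w @ x > 0.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 0.2])        # w @ x = -0.1 -> class 0
x_adv = fgsm(x, w, eps=0.2)          # for a linear score, grad w.r.t. x is w
# w @ x_adv = 0.6 -> the predicted class flips within an L-inf budget of 0.2
```

Input transformations such as the colour-space changes proposed in the paper aim to disrupt exactly this kind of gradient-aligned perturbation.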
-
Hebbian dreaming for small datasets Neural Netw. (IF 7.8) Pub Date : 2024-02-12 Elena Agliari, Francesco Alemanno, Miriam Aquaro, Adriano Barra, Fabrizio Durante, Ido Kanter
The dreaming Hopfield model constitutes a generalization of the Hebbian paradigm for neural networks, that is able to perform on-line learning when “awake” and also to account for off-line “sleeping” mechanisms. The latter have been shown to enhance storing in such a way that, in the long sleep-time limit, this model can reach the maximal storage capacity achievable by networks equipped with symmetric
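The Hebbian paradigm that the dreaming model generalizes can be sketched with the classic Hopfield construction: outer-product weights and sign-threshold recall (standard textbook version, without the "sleeping" mechanism):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebb rule: superposition of outer products, zero self-coupling."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=10):
    """Synchronous sign-threshold updates from a noisy cue."""
    for _ in range(steps):
        state = np.sign(w @ state)
    return state

rng = np.random.default_rng(0)
p = rng.choice([-1.0, 1.0], size=(3, 100))  # 3 random +-1 patterns, N = 100
w = hebbian_weights(p)
cue = p[0].copy()
cue[:10] *= -1                              # corrupt 10 of 100 bits
overlap = np.mean(recall(w, cue) == p[0])   # close to 1: pattern retrieved
```

With a load this far below capacity, the corrupted cue falls back into the stored pattern's basin of attraction; "dreaming" mechanisms extend how many patterns can be stored before such recall breaks down.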
-
A comprehensive and reliable feature attribution method: Double-sided remove and reconstruct (DoRaR) Neural Netw. (IF 7.8) Pub Date : 2024-02-10 Dong Qin, George T. Amariucai, Daji Qiao, Yong Guan, Shen Fu
The limited transparency of the inner decision-making mechanism in deep neural networks (DNN) and other machine learning (ML) models has hindered their application in several domains. In order to tackle this issue, feature attribution methods have been developed to identify the crucial features that heavily influence decisions made by these black box models. However, many feature attribution methods
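A simple member of the remove-style attribution family discussed here is occlusion: score each feature by the model-output drop when it is masked (a generic baseline, not the DoRaR method itself):

```python
import numpy as np

def occlusion_attribution(model, x, baseline=0.0):
    """Attribute each feature by the score drop when it is masked out."""
    base_score = model(x)
    drops = []
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline
        drops.append(base_score - model(x_masked))
    return np.array(drops)

model = lambda v: 3.0 * v[0] + 0.1 * v[1]   # toy linear "black box"
attr = occlusion_attribution(model, np.array([1.0, 1.0]))
# attr is approximately [3.0, 0.1]: feature 0 dominates the decision
```

Single-sided removal like this can mislead when features interact, which is the kind of weakness that motivates double-sided remove-and-reconstruct evaluation.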
-
Confounder balancing in adversarial domain adaptation for pre-trained large models fine-tuning Neural Netw. (IF 7.8) Pub Date : 2024-02-10 Shuoran Jiang, Qingcai Chen, Yang Xiang, Youcheng Pan, Xiangping Wu, Yukang Lin
The excellent generalization, contextual learning, and emergence abilities of pre-trained large models (PLMs) allow them to handle specific tasks without direct training data, making them better foundation models for adversarial domain adaptation (ADA) methods that transfer knowledge learned from the source domain to target domains. However, existing ADA methods fail to account for the confounder properly
-
DDK: Dynamic structure pruning based on differentiable search and recursive knowledge distillation for BERT Neural Netw. (IF 7.8) Pub Date : 2024-02-09 Zhou Zhang, Yang Lu, Tengfei Wang, Xing Wei, Zhen Wei
Large-scale pre-trained models, such as BERT, have demonstrated outstanding performance in Natural Language Processing (NLP). Nevertheless, the high number of parameters in these models has increased the demand for hardware storage and computational resources while posing a challenge for their practical deployment. In this article, we propose a combined method of model pruning and knowledge distillation
-
Fast multi-view clustering via correntropy-based orthogonal concept factorization Neural Netw. (IF 7.8) Pub Date : 2024-02-09 Jinghan Wu, Ben Yang, Zhiyuan Xue, Xuetao Zhang, Zhiping Lin, Badong Chen
Owing to its ability to handle negative data and promising clustering performance, concept factorization (CF), an improved version of non-negative matrix factorization, has been incorporated into multi-view clustering recently. Nevertheless, existing CF-based multi-view clustering methods still have the following issues: 1) they directly conduct factorization in the original data space, which means
-
A universal multi-source domain adaptation method with unsupervised clustering for mechanical fault diagnosis under incomplete data Neural Netw. (IF 7.8) Pub Date : 2024-02-08 Jinghui Tian, Dongying Han, Hamid Reza Karimi, Yu Zhang, Peiming Shi
Recently, due to the difficulty of collecting condition data covering all mechanical fault types in industrial scenarios, the fault diagnosis problem under incomplete data, where no prior information about the target is available, is receiving increasing attention. The existing open-set or universal domain adaptation (DA) diagnosis methods typically treat private fault samples in the target as a generalized
-
One-step Bayesian example-dependent cost classification: The OsC-MLP method Neural Netw. (IF 7.8) Pub Date : 2024-02-08 Javier Mediavilla-Relaño, Marcelino Lázaro
Example-dependent cost classification problems are those where the decision costs depend not only on the true and the attributed classes but also on the sample features. Discriminative algorithms that carry out such classification tasks must take this dependence into account. In some applications, the decision costs are known for the training set but not in production, which complicates the problem
-
Node-personalized multi-graph convolutional networks for recommendation Neural Netw. (IF 7.8) Pub Date : 2024-02-08 Tiantian Zhou, Hailiang Ye, Feilong Cao
Graph neural networks have revealed powerful potential in ranking recommendation. Existing methods based on bipartite graphs for ranking recommendation mainly focus on homogeneous graphs and usually treat user and item nodes as the same kind of nodes; however, the user–item bipartite graph is inherently heterogeneous. Additionally, various types of nodes have varying effects on recommendations, and a good
-
Priors-assisted dehazing network with attention supervision and detail preservation Neural Netw. (IF 7.8) Pub Date : 2024-02-06 Weichao Yi, Liquan Dong, Ming Liu, Mei Hui, Lingqin Kong, Yuejin Zhao
Single image dehazing is a challenging computer vision task that underpins higher-level applications such as object detection, navigation, and positioning systems. Recently, most existing dehazing methods have followed a “black box” recovery paradigm that obtains the haze-free image from its corresponding hazy input by network learning. Unfortunately, these algorithms ignore the effective utilization of relevant
-
Unsupervised distribution-aware keypoints generation from 3D point clouds Neural Netw. (IF 7.8) Pub Date : 2024-02-06 Yiqi Wu, Xingye Chen, Xuan Huang, Kelin Song, Dejun Zhang
Keypoints extraction from 3D objects is a fundamental task in point cloud processing. The ideal keypoints should be an ordered and well-aligned set of points that effectively reflect the shape and structure of the object. To this end, this paper proposes an unsupervised 3D point cloud keypoints generation network with the consideration of the probability distribution of keypoints and spatial distribution
-
Cross-modality interaction for few-shot multispectral object detection with semantic knowledge Neural Netw. (IF 7.8) Pub Date : 2024-02-05 Lian Huang, Zongju Peng, Fen Chen, Shaosheng Dai, Ziqiang He, Kesheng Liu
Multispectral object detection (MOD), which incorporates additional information from thermal images into object detection (OD) to robustly cope with complex illumination conditions, has garnered significant attention. However, existing MOD methods always demand a considerable amount of annotated data for training. Inspired by the concept of few-shot learning, we propose a novel task called few-shot
-
PSA-GNN: An augmented GNN framework with priori subgraph knowledge Neural Netw. (IF 7.8) Pub Date : 2024-02-04 Guotong Xue, Ming Zhong, Tieyun Qian, Jianxin Li
Graph neural networks have become the primary graph representation learning paradigm, in which nodes update their embeddings by aggregating messages from their neighbors iteratively. However, current message-passing-based GNNs insufficiently exploit higher-order subgraph information beyond first-order neighbors. In contrast, long-standing graph research has investigated various subgraphs
-
Bayesian hypernetwork collaborates with time-difference evolutional network for temporal knowledge prediction Neural Netw. (IF 7.8) Pub Date : 2024-02-01 Pengpeng Shao, Jianhua Tao, Dawei Zhang
A Temporal Knowledge Graph (TKG) is a sequence of Knowledge Graphs (KGs) attached with time information, in which each KG contains the facts that co-occur at the same timestamp. Temporal knowledge prediction (TKP) aims to predict future events given observed historical KGs in TKGs, which is essential for many applications to provide intelligent analysis services. However, most existing TKP methods