-
Multimodal fusion for large-scale traffic prediction with heterogeneous retentive networks Inform. Fusion (IF 14.7) Pub Date : 2024-09-13 Yimo Yan, Songyi Cui, Jiahui Liu, Yaping Zhao, Bodong Zhou, Yong-Hong Kuo
Traffic speed prediction is a critical challenge in transportation research due to the complex spatiotemporal dynamics of urban mobility. This study proposes a novel framework for fusing diverse data modalities to enhance short-term traffic speed forecasting accuracy. We introduce the Heterogeneous Retentive Network (H-RetNet), which integrates multisource urban data into high-dimensional representations
-
Scalable data fusion via a scale-based hierarchical framework: Adapting to multi-source and multi-scale scenarios Inform. Fusion (IF 14.7) Pub Date : 2024-09-12 Xiaoyan Zhang, Jiajia Lin
Multi-source information fusion addresses challenges in integrating and transforming complementary data from diverse sources to facilitate unified information representation for centralized knowledge discovery. However, traditional methods face difficulties when applied to multi-scale data, where optimal scale selection can effectively resolve these issues but typically lack the advantage of identifying
-
Tensor-based unsupervised feature selection for error-robust handling of unbalanced incomplete multi-view data Inform. Fusion (IF 14.7) Pub Date : 2024-09-12 Xuanhao Yang, Hangjun Che, Man-Fai Leung
Recent advancements in multi-view unsupervised feature selection (MUFS) have been notable, yet two primary challenges persist. First, real-world datasets frequently consist of unbalanced incomplete multi-view data, a scenario not adequately addressed by current MUFS methodologies. Second, the inherent complexity and heterogeneity of multi-view data often introduce significant noise, an aspect largely
-
High performance RGB-Thermal Video Object Detection via hybrid fusion with progressive interaction and temporal-modal difference Inform. Fusion (IF 14.7) Pub Date : 2024-09-12 Qishun Wang, Zhengzheng Tu, Chenglong Li, Jin Tang
RGB-Thermal Video Object Detection (RGBT VOD) aims to localize and classify predefined objects in visible- and thermal-spectrum videos. The key issue in RGBT VOD lies in integrating multi-modal information effectively to improve detection performance. Current multi-modal fusion methods predominantly employ middle fusion strategies, but the inherent modal difference directly influences the effect of
-
Evolving intra- and inter-session graph fusion for next item recommendation Inform. Fusion (IF 14.7) Pub Date : 2024-09-10 Jain-Wun Su, Chiao-Ting Chen, De-Ren Toh, Szu-Hao Huang
Next-item recommendation aims to predict users’ subsequent behaviors using their historical sequence data. However, sessions are often anonymous, short, and time-varying, making it challenging to capture accurate and evolving item representations. Existing methods using static graphs may fail to model the evolving semantics of items over time. To address this problem, we propose the Evolving Intra-session
-
Competitive resource allocation on a network considering opinion dynamics with self-confidence evolution Inform. Fusion (IF 14.7) Pub Date : 2024-09-10 Xia Chen, Zhaogang Ding, Yuan Gao, Hengjie Zhang, Yucheng Dong
The formation of public opinion is typically influenced by different stakeholders, such as governments and firms. Recently, various real-world problems related to the management of public opinion have emerged, necessitating stakeholders to strategically allocate resources on networks to achieve their objectives. To address this, it is imperative to consider the dynamics of opinion formation. Notably
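The opinion-formation dynamics this entry builds on are commonly modeled with DeGroot-style trust-weighted averaging. A minimal sketch of that baseline (a generic illustration with a hypothetical trust matrix, not the paper's model, which additionally evolves self-confidence and allocates resources):

```python
import numpy as np

def degroot_step(opinions, W):
    """One DeGroot update: each agent replaces its opinion with a
    trust-weighted average of all opinions (rows of W sum to 1)."""
    return W @ opinions

# Three agents with a row-stochastic trust matrix (hypothetical values)
W = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.5, 0.2],
              [0.1, 0.1, 0.8]])
x = np.array([1.0, 0.0, 0.5])  # initial opinions

# Iterating drives the opinions toward a common consensus value
for _ in range(100):
    x = degroot_step(x, W)
```

Because `W` is row-stochastic with positive diagonal, repeated averaging contracts the opinion spread toward a consensus weighted by each agent's network influence.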
-
Unsupervised multi-view graph representation learning with dual weight-net Inform. Fusion (IF 14.7) Pub Date : 2024-09-10 Yujie Mo, Heng Tao Shen, Xiaofeng Zhu
Unsupervised multi-view graph representation learning (UMGRL) aims to capture the complex relationships in the multi-view graph without human annotations, so it has been widely applied in real-world applications. However, existing UMGRL methods still face the following issues: (i) Previous UMGRL methods tend to overlook the importance of nodes with different influences and the importance of graphs
-
STSNet: A cross-spatial resolution multi-modal remote sensing deep fusion network for high resolution land-cover segmentation Inform. Fusion (IF 14.7) Pub Date : 2024-09-08 Beibei Yu, Jiayi Li, Xin Huang
Recently, deep learning models have found extensive application in high-resolution land-cover segmentation research. However, most current research still suffers from issues such as insufficient utilization of multi-modal information, which limits further improvement in high-resolution land-cover segmentation accuracy. Moreover, differences in the size and spatial resolution of multi-modal datasets
-
SFGCN: Synergetic fusion-based graph convolutional networks approach for link prediction in social networks Inform. Fusion (IF 14.7) Pub Date : 2024-09-07 Sang-Woong Lee, Jawad Tanveer, Amir Masoud Rahmani, Hamid Alinejad-Rokny, Parisa Khoshvaght, Gholamreza Zare, Pegah Malekpour Alamdari, Mehdi Hosseinzadeh
Accurate Link Prediction (LP) in Social Networks (SNs) is crucial for various practical applications, such as recommendation systems and network security. However, traditional techniques often struggle to capture the intricate and multidimensional nature of these networks. This paper presents a novel approach, the Synergetic Fusion-based Graph Convolutional Networks (SFGCN), designed to enhance LP
-
Hyper-relational interaction modeling in multi-modal trajectory prediction for intelligent connected vehicles in smart cities Inform. Fusion (IF 14.7) Pub Date : 2024-09-07 Yuhuan Lu, Wei Wang, Rufan Bai, Shengwei Zhou, Lalit Garg, Ali Kashif Bashir, Weiwei Jiang, Xiping Hu
Trajectory prediction of surrounding traffic participants is vital for the driving safety of Intelligent Connected Vehicles (ICVs). It has been enabled by the availability of multi-sensor information collected by ICVs. For accurately predicting the future movements of traffic agents, it is crucial to subtly model the inter-agent interaction. However, existing works focus on the correlations
-
Less is more: A closer look at semantic-based few-shot learning Inform. Fusion (IF 14.7) Pub Date : 2024-09-07 Chunpeng Zhou, Zhi Yu, Xilu Yuan, Sheng Zhou, Jiajun Bu, Haishuai Wang
Few-shot Learning (FSL) aims to learn and distinguish new categories from a scant number of available samples, presenting a significant challenge in the realm of deep learning. Recent researchers have sought to leverage the additional semantic or linguistic information of scarce categories with a pre-trained language model to facilitate learning, thus partially alleviating the problem of insufficient
-
New trends of adversarial machine learning for data fusion and intelligent system Inform. Fusion (IF 14.7) Pub Date : 2024-09-06 Weiping Ding, Zheng Zhang, Luis Martínez, Yu Huang, Zehong (Jimmy) Cao, Jun Liu, Abhirup Banerjee
-
Zero-shot sim-to-real transfer using Siamese-Q-Based reinforcement learning Inform. Fusion (IF 14.7) Pub Date : 2024-09-06 Zhenyu Zhang, Shaorong Xie, Han Zhang, Xiangfeng Luo, Hang Yu
To address real-world decision problems in reinforcement learning, it is common to first train a policy in a simulator for safety. Unfortunately, the sim-real gap hinders effective simulation-to-real transfer without substantial training data. However, collecting real samples of complex tasks is often impractical, and the sample inefficiency of reinforcement learning exacerbates the simulation-to-real
-
Multimodal manifold learning using kernel interpolation along geodesic paths Inform. Fusion (IF 14.7) Pub Date : 2024-09-06 Ori Katz, Roy R. Lederman, Ronen Talmon
In this paper, we present a new spectral analysis and a low-dimensional embedding of two aligned multimodal datasets. Our approach combines manifold learning with the Riemannian geometry of symmetric and positive-definite (SPD) matrices. Manifold learning typically includes the spectral analysis of a single kernel matrix corresponding to a single dataset or a concatenation of several datasets. Here
-
A survey on occupancy perception for autonomous driving: The information fusion perspective Inform. Fusion (IF 14.7) Pub Date : 2024-09-05 Huaiyuan Xu, Junliang Chen, Shiyu Meng, Yi Wang, Lap-Pui Chau
3D occupancy perception technology aims to observe and understand dense 3D environments for autonomous vehicles. Owing to its comprehensive perception capability, this technology is emerging as a trend in autonomous driving perception systems, and is attracting significant attention from both industry and academia. Similar to traditional bird’s-eye view (BEV) perception, 3D occupancy perception has
-
Detecting Android malware: A multimodal fusion method with fine-grained feature Inform. Fusion (IF 14.7) Pub Date : 2024-09-05 Xun Li, Lei Liu, Yuzhou Liu, Huaxiao Liu
Context: Recently, many studies have been proposed to address the threat posed by Android malware. However, the continuous evolution of malware poses challenges to the task of representing application features in current detection methods. Objective: This paper introduces a novel Android malware detection approach based on the source code and binary code of software by leveraging large pre-trained
-
Scene understanding method utilizing global visual and spatial interaction features for safety production Inform. Fusion (IF 14.7) Pub Date : 2024-09-04 Fuqi Ma, Bo Wang, Xuzhu Dong, Min Li, Hengrui Ma, Rong Jia, Amar Jain
Risk identification in power operations is crucial for both personal safety and power production. Existing risk identification methods mainly use target detection models to identify common risks, but overlook the scene specificity of risk occurrence, for example, not wearing a safety harness or not wearing insulated gloves. Since most methods for detecting safety gear make sense only under specific scene
-
A Contemporary Survey on Multisource Information Fusion for Smart Sustainable Cities: Emerging Trends and Persistent Challenges Inform. Fusion (IF 14.7) Pub Date : 2024-09-04 Houda Orchi, Abdoulaye Baniré Diallo, Halima Elbiaze, Essaid Sabir, Mohamed Sadik
The emergence of smart sustainable cities has unveiled a wealth of data sources, each contributing to a vast array of urban applications. At the heart of managing this plethora of data is multisource information fusion (MSIF), a sophisticated approach that not only improves the quality of data collected from myriad sources, including sensors, satellites, social media, and citizen-generated content
-
MMIF-INet: Multimodal medical image fusion by invertible network Inform. Fusion (IF 14.7) Pub Date : 2024-09-04 Dan He, Weisheng Li, Guofen Wang, Yuping Huang, Shiqiang Liu
Multimodal medical image fusion (MMIF) technology aims to generate fused images that comprehensively reflect the information of tissues, organs, and metabolism, thereby assisting medical diagnosis and enhancing the reliability of clinical diagnosis. However, most approaches suffer from information loss during feature extraction and fusion, and rarely explore how to directly process multichannel data
-
Triple disentangled representation learning for multimodal affective analysis Inform. Fusion (IF 14.7) Pub Date : 2024-09-03 Ying Zhou, Xuefeng Liang, Han Chen, Yin Zhao, Xin Chen, Lida Yu
In multimodal affective analysis (MAA) tasks, the presence of heterogeneity among different modalities has propelled the exploration of the disentanglement methods as a pivotal area. Many emerging studies focus on disentangling the modality-invariant and modality-specific representations from input data and then fusing them for prediction. However, our study shows that modality-specific representations
-
Frontiers and developments of data augmentation for image: From unlearnable to learnable Inform. Fusion (IF 14.7) Pub Date : 2024-09-03 Gan Lin, JinZhe Jiang, Jing Bai, YaWen Su, ZengHui Su, HongShuo Liu
Data augmentation is a crucial technique for expanding training datasets, effectively alleviating the overfitting issue that arises from limited training data in deep learning models. This paper takes a fresh perspective and offers a scholarly exploration of image data augmentation, following a logical progression from unlearnable to learnable methods. The paper begins by providing a brief overview
-
Divergence-guided disentanglement of view-common and view-unique representations for multi-view data Inform. Fusion (IF 14.7) Pub Date : 2024-09-02 Mingfei Lu, Qi Zhang, Badong Chen
In the field of multi-view learning (MVL), it is crucial to extract both common (consistent) and unique (complementary) information across different views. While the focus has traditionally been on acquiring common information, there has been a recent shift towards exploring unique information as well. However, developing an MVL model that can simultaneously capture both common and unique information
-
Generative AIBIM: An automatic and intelligent structural design pipeline integrating BIM and generative AI Inform. Fusion (IF 14.7) Pub Date : 2024-09-01 Zhili He, Yu-Hsing Wang, Jian Zhang
AI-based intelligent structural design represents a transformative approach that addresses the inefficiencies inherent in traditional structural design practices. This paper innovates the existing AI-based design frameworks from four aspects and proposes Generative AIBIM: an automatic and intelligent structural design pipeline that integrates Building Information Modeling (BIM) and generative AI. First
-
Integrating imprecise data in generative models using interval-valued Variational Autoencoders Inform. Fusion (IF 14.7) Pub Date : 2024-08-31 Luciano Sánchez, Nahuel Costa, Inés Couso, Olivier Strauss
Variational Autoencoders (VAEs) enable the integration of diverse data sources into a unified latent representation, facilitating the fusion of information from various inputs and the creation of disentangled representations that separate different factors of variation in the data. Traditional VAEs, however, are limited by assuming a single prior distribution for latent variables, which restricts their
-
Model compression techniques in biometrics applications: A survey Inform. Fusion (IF 14.7) Pub Date : 2024-08-31 Eduarda Caldeira, Pedro C. Neto, Marco Huber, Naser Damer, Ana F. Sequeira
The development of deep learning algorithms has extensively empowered humanity’s task automatization capacity. However, the huge improvement in the performance of these models is highly correlated with their increasing level of complexity, limiting their usefulness in human-oriented applications, which are usually deployed in resource-constrained devices. This led to the development of compression
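One of the compression families such a survey typically covers is unstructured magnitude pruning, which zeroes the smallest-magnitude weights. A generic sketch (an illustration of the standard technique, not taken from the survey; the array values are hypothetical):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of
    entries (unstructured pruning; ties at the threshold are also zeroed)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.05, -0.8],
              [0.30, -0.02]])
pruned = magnitude_prune(w, 0.5)  # keeps the two largest-magnitude weights
```

In practice the pruned model is then fine-tuned to recover accuracy, which is where the complexity-vs-performance trade-off discussed in the abstract arises.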
-
SSTtrack: A unified hyperspectral video tracking framework via modeling spectral-spatial-temporal conditions Inform. Fusion (IF 14.7) Pub Date : 2024-08-29 Yuzeng Chen, Qiangqiang Yuan, Yuqi Tang, Yi Xiao, Jiang He, Te Han, Zhenqi Liu, Liangpei Zhang
Hyperspectral video contains rich spectral, spatial, and temporal conditions that are crucial for capturing complex object variations and overcoming the inherent limitations (e.g., multi-device imaging, modality alignment, and finite spectral bands) of regular RGB and multi-modal video tracking. However, existing hyperspectral tracking methods frequently encounter issues including data anxiety, band
-
DDBFusion: A unified image decomposition and fusion framework based on dual decomposition and Bézier curves Inform. Fusion (IF 14.7) Pub Date : 2024-08-28 Zeyang Zhang, Hui Li, Tianyang Xu, Xiao-Jun Wu, Josef Kittler
Existing image fusion algorithms mostly concentrate on the design of network architectures and loss functions, using unified feature extraction strategies while neglecting the division of redundant and effective information. However, for complementary information, a unified feature extractor may not be appropriate. Thus, this paper presents a unified image fusion algorithm based on Bézier curves image
-
A systematic literature review of low-cost 3D mapping solutions Inform. Fusion (IF 14.7) Pub Date : 2024-08-28 Jesús Balado, Raissa Garozzo, Lukas Winiwarter, Sofia Tilon
In "low-cost" solutions, ensuring economic accessibility and democratizing the availability of emerging technologies stand as pivotal considerations. This study undertakes a systematic literature review of low-cost 3D mapping solutions. Leveraging SCOPUS as the primary database, a comprehensive bibliometric analysis encompassing 1380 publications was conducted, subsequently narrowing the focus to 87
-
Temporal-spatial-fusion-based risk assessment on the adjacent building during deep excavation Inform. Fusion (IF 14.7) Pub Date : 2024-08-27 Yue Pan, Xiaojing Zhou, Jin-Jian Chen, Yi Hong
Foundation pit excavation will inevitably cause uneven ground settlement to pose potential risks to adjacent structures and infrastructures. To better perceive the risk status of adjacent buildings using multi-source information, a temporal-spatial-fusion-based risk assessment (TSFRA) model under the consideration of uncertainty and causality is developed by integrating FAHP (Fuzzy Analytic Hierarchy
-
Enhancing consistency with the fusion of paralleled decoders for text generation Inform. Fusion (IF 14.7) Pub Date : 2024-08-26 Yaolin Li, Heyan Huang, Yu Bai, Yang Gao
Generating coherent and consistent long text is an important but challenging task. Despite the recent success of planning-based methods in maintaining consistency and modeling long-distance coherence for text, the existing generative model still suffers from the inconsistency problem among prompt, plan, and target text. In this paper, we propose a novel generative model MDFUT, which leverages an autoregressive
-
Multi-view human pose and shape estimation via mesh-aligned voxel interpolation Inform. Fusion (IF 14.7) Pub Date : 2024-08-26 Yixuan Zhang, Jiguang Zhang, Shibiao Xu, Jun Xiao
Although multi-view human pose and shape regression methods can use information from other views for complementing and correcting, existing ones still do not take full advantage of the multi-view setup and are thus far from efficiently aligning and merging features across views. To tackle these problems, we propose a multi-view framework where features from all views
-
IoT-FAR: A multi-sensor fusion approach for IoT-based firefighting activity recognition Inform. Fusion (IF 14.7) Pub Date : 2024-08-25 Xiaoqing Chai, Boon Giin Lee, Chenhang Hu, Matthew Pike, David Chieng, Renjie Wu, Wan-Young Chung
Inadequate training poses a significant risk of injury among young firefighters. Although Human Activity Recognition (HAR) algorithms have shown potential in monitoring and evaluating performance, most existing studies focus on daily activities and have difficulty distinguishing complex firefighting tasks. This study introduces the Internet of things (IoT)-based wearable firefighting activity recognition
-
Breaking through clouds: A hierarchical fusion network empowered by dual-domain cross-modality interactive attention for cloud-free image reconstruction Inform. Fusion (IF 14.7) Pub Date : 2024-08-22 Congyu Li, Shutao Li, Xinxin Liu
Cloud obscuration undermines the availability of optical images for continuous monitoring in earth observation. Fusing features from synthetic aperture radar (SAR) has been recognized as a feasible strategy to guide the reconstruction of corrupted signals in cloud-contaminated regions. However, due to the different imaging mechanisms and reflection characteristics, the substantial domain gap between
-
Deep evidential fusion with uncertainty quantification and reliability learning for multimodal medical image segmentation Inform. Fusion (IF 14.7) Pub Date : 2024-08-22 Ling Huang, Su Ruan, Pierre Decazes, Thierry Denœux
Single-modality medical images generally do not contain enough information to reach an accurate and reliable diagnosis. For this reason, physicians commonly rely on multimodal medical images for comprehensive diagnostic assessments. This study introduces a deep evidential fusion framework designed for segmenting multimodal medical images, leveraging the Dempster–Shafer theory of evidence in conjunction
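The Dempster–Shafer combination this framework builds on can be illustrated with a generic two-source fusion of mass functions. A minimal sketch of the classical rule (the 'tumor'/'healthy' labels and mass values are hypothetical, and this is not the paper's network, which additionally learns source reliability):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses over intersecting focal elements
    and renormalize by (1 - conflict)."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b  # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two imaging modalities assign belief mass to each hypothesis,
# with some mass left on the whole frame (ignorance).
T, H = frozenset({"tumor"}), frozenset({"healthy"})
theta = T | H
m_ct  = {T: 0.6, H: 0.1, theta: 0.3}
m_pet = {T: 0.5, H: 0.2, theta: 0.3}
fused = dempster_combine(m_ct, m_pet)
```

Agreement between the two sources concentrates mass on the "tumor" hypothesis, while the residual mass on `theta` quantifies the remaining uncertainty.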
-
Learning deformable hypothesis sampling for patchmatch multi-view stereo in the wild Inform. Fusion (IF 14.7) Pub Date : 2024-08-22 Yao Guo, Xianwei Zheng, Hongjie Li, Linxi Huan, Jiayi Ma, Jianya Gong
The learnable PatchMatch formulations have recently made progress in Multi-View Stereo (MVS). However, their performance often degrades under complex wild scenes. In this study, we observe that the degradation is mainly caused by the noisy depth hypothesis sampling during iterations of PatchMatch MVS: (i) Within a single iteration, the method mixes all information of a fixed-shape region in the spatial
-
Diffusion-based diverse audio captioning with retrieval-guided Langevin dynamics Inform. Fusion (IF 14.7) Pub Date : 2024-08-21 Yonggang Zhu, Aidong Men, Li Xiao
Audio captioning, a comprehensive task of audio understanding, aims to provide a natural-language description of an audio clip. Beyond accuracy, diversity is also a critical requirement for this task. Human-produced captions possess rich variability due to the ambiguity of audio semantics (such as insects buzzing and electrical humming making similar sounds) and the existence of subjective judgments
-
Terrain detection and segmentation for autonomous vehicle navigation: A state-of-the-art systematic review Inform. Fusion (IF 14.7) Pub Date : 2024-08-19 Md Mohsin Kabir, Jamin Rahman Jim, Zoltán Istenes
This review comprehensively investigates the current state and emerging trends of autonomous vehicle terrain detection and segmentation. By systematically reviewing literature from various databases, this study outlines the evolution of detection and segmentation techniques from traditional computer vision methods to advanced machine learning and deep learning approaches. It identifies critical technological
-
MREIFlow: Unsupervised dense and time-continuous optical flow estimation from image and event data Inform. Fusion (IF 14.7) Pub Date : 2024-08-19 Jianlang Hu, Chi Guo, Yarong Luo, Zihao Mao
We focus on exploring an unsupervised learning-based model which can take advantage of a single image and events to estimate dense and time-continuous optical flow. We propose a multi-scale optical flow recurrent estimation network, called MREIFlow, which mainly consists of a triplet feature encoder, a feature fusion subnetwork, and a flow iterative decoder. The triplet feature encoder is capable of
-
Fusion-based modeling of an intelligent algorithm for enhanced object detection using a Deep Learning Approach on radar and camera data Inform. Fusion (IF 14.7) Pub Date : 2024-08-18 Yuwen Wu
Object detection, the process of detecting and classifying objects within a given environment, forms the foundational element. Multisensory fusion incorporates data from diverse sensors, such as radar and cameras, to improve the reliability and accuracy of detection. Radar and camera data fusion further refines this process by integrating the unique strengths of both technologies, leveraging the radar's
-
FedCCL: Federated dual-clustered feature contrast under domain heterogeneity Inform. Fusion (IF 14.7) Pub Date : 2024-08-17 Yu Qiao, Huy Q. Le, Mengchun Zhang, Apurba Adhikary, Chaoning Zhang, Choong Seon Hong
Federated learning (FL) facilitates a privacy-preserving neural network training paradigm through collaboration between edge clients and a central server. One significant challenge is that the distributed data is not independently and identically distributed (non-IID), typically including both intra-domain and inter-domain heterogeneity. However, recent research is limited to simply using averaged
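The averaged aggregation the abstract refers to is typically FedAvg-style parameter averaging weighted by local dataset size. A minimal server-side sketch (a generic baseline, an assumption for illustration, not FedCCL's dual-clustered contrastive method; the parameter vectors and sizes are hypothetical):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: dataset-size-weighted average of client
    parameter vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)            # (n_clients, n_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

clients = [np.array([1.0, 2.0]),
           np.array([3.0, 4.0]),
           np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_w = fedavg(clients, sizes)  # -> array([4., 5.])
```

Under non-IID data, this plain average can drift away from any single client's optimum, which is the limitation that motivates feature-contrast methods like the one in this entry.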
-
PMANet: Malicious URL detection via post-trained language model guided multi-level feature attention network Inform. Fusion (IF 14.7) Pub Date : 2024-08-17 Ruitong Liu, Yanbin Wang, Haitao Xu, Zhan Qin, Fan Zhang, Yiwei Liu, Zheng Cao
The expansion of the Internet has led to the widespread proliferation of malicious URLs, becoming a primary vector for cyber threats. Detecting malicious URLs is now essential for improving network security. The technological revolution spurred by pre-trained language models holds great promise for advancing the detection of malicious URLs. However, current research applying these models to URLs fails
-
Automatic movie genre classification & emotion recognition via a BiProjection Multimodal Transformer Inform. Fusion (IF 14.7) Pub Date : 2024-08-16 Diego Aarón Moreno-Galván, Roberto López-Santillán, Luis Carlos González-Gurrola, Manuel Montes-Y-Gómez, Fernando Sánchez-Vega, Adrián Pastor López-Monroy
Analyzing, manipulating, and comprehending data from multiple sources (e.g., websites, software applications, files, or databases) and of diverse modalities (e.g., video, images, audio and text) has become increasingly important in many domains. Despite recent advances, there are still several challenges to be addressed, such as: the combination of modalities of very diverse nature, the optimal
-
LFDT-Fusion: A latent feature-guided diffusion Transformer model for general image fusion Inform. Fusion (IF 14.7) Pub Date : 2024-08-16 Bo Yang, Zhaohui Jiang, Dong Pan, Haoyang Yu, Gui Gui, Weihua Gui
For image fusion tasks, it is inefficient for the diffusion model to iterate multiple times on the original resolution image for feature mapping. To address this issue, this paper proposes an efficient latent feature-guided diffusion model for general image fusion. The model consists of a pixel space autoencoder and a compact Transformer-based diffusion network. Specifically, the pixel space autoencoder
-
Exploring adversarial deep learning for fusion in multi-color channel skin detection applications Inform. Fusion (IF 14.7) Pub Date : 2024-08-14 Mohammed Chyad, B.B. Zaidan, A.A. Zaidan, Hossein Pilehkouhi, Roqia Aalaa, Sarah Qahtan, Hassan A. Alsattar, Dragan Pamucar, Vladimir Simic
Deep learning, a robust framework for complex learning, outperforms previous machine learning approaches and finds widespread use. However, security vulnerabilities, especially in fusion in multi-color channel skin detection applications using adversarial machine learning (AML) and generative adversarial networks (GANs), lead to misclassifications. Researchers are actively exploring AML's and GANs'
-
Image segmentation review: Theoretical background and recent advances Inform. Fusion (IF 14.7) Pub Date : 2024-08-14 Khushmeen Kaur Brar, Bhawna Goyal, Ayush Dogra, Mohammed Ahmed Mustafa, Rana Majumdar, Ahmed Alkhayyat, Vinay Kukreja
Image segmentation is a significant topic in image refining and automated image analysis, with relevance to, for instance, object recognition, diagnostic imaging, machine perception, surveillance cameras, satellite imaging, and image compression. This technology has become an essential component of image assessment as it facilitates the depiction, taxonomy, and conception of the subject
-
An endogenous and continual learning approach to personalize individual semantics to support linguistic consensus reaching Inform. Fusion (IF 14.7) Pub Date : 2024-08-14 Yuzhu Wu, Zhaojin Li, Yuan Gao, Francisco Chiclana, Xia Chen, Yucheng Dong
In linguistic group decision making, it is known that decision makers are individualized in understanding the meanings of words, i.e., decision makers have personalized individual semantics (PISs) in the representation of linguistic preferences. Since individuals influence each other mutually in the consensus reaching process, PISs will accordingly change. This suggests that there is an updating process
-
DNIM: Deep-sea netting intelligent enhancement and exposure monitoring using bio-vision Inform. Fusion (IF 14.7) Pub Date : 2024-08-14 Shunmin An, Lihong Xu, Zhichao Deng, Huapeng Zhang
Intelligent monitoring of deep-sea nets is affected by light attenuation, light scattering, and the limited dynamic range of the camera, which can cause color shift, low visibility, and over/underexposure in monitoring, resulting in reduced farming efficiency and misdetection of biological behavior. Therefore, we propose a method for underwater net tank scene enhancement and exposure using multicolor
-
Managing flexible linguistic expressions with subjective preferences and objective information in group decision-making: A perspective based on personalized individual semantics Inform. Fusion (IF 14.7) Pub Date : 2024-08-12 Shitao Zhang, Hao Tian, Lei Hu, Muhammet Deveci, Xiaodi Liu
Flexible linguistic expression (FLE) is a complex linguistic representation that almost generalizes all distributed representations, posing the following challenges for the personalized individual semantics (PIS) issues in group decision-making (GDM) with FLEs. (1) The available measurement methods for FLEs are limited and suffer from some unreasonable or poorly discriminated deficiencies. (2) The
-
Multi-modal and multi-criteria conflict analysis model based on deep learning and dominance-based rough sets: Application to clinical non-parallel decision problems Inform. Fusion (IF 14.7) Pub Date : 2024-08-12 Xiaoli Chu, Bingzhen Sun, Xiaodong Chu, Lu Wang, Kun Bao, Nanguan Chen
The non-parallel disease progression and curative effect are the difficulties of clinical diagnosis and treatment decisions. Experts (doctors) constantly summarize these non-parallel phenomena for more accurate diagnosis and treatment. In order to discover the mechanism of clinical non-parallel decision-making, this paper constructs a multi-modal and multi-criteria conflict analysis method based on
-
Multi-view clustering via high-order bipartite graph fusion Inform. Fusion (IF 14.7) Pub Date : 2024-08-10 Zihua Zhao, Ting Wang, Haonan Xin, Rong Wang, Feiping Nie
Multi-view clustering is widely applied in engineering and scientific research. It helps reveal the underlying structures and correlations behind complex multi-view data. Graph-based multi-view clustering stands as a prominent research frontier within the multi-view clustering field, yet faces persistent challenges. Firstly, the initial input graphs typically constructed for each view yield sparse clustering
-
An ordinal–cardinal consensus model for three-way large-scale group decision-making considering co-opetition relations Inform. Fusion (IF 14.7) Pub Date : 2024-08-10 Kaixin Gong, Weimin Ma, Mark Goh
In large-scale group decision-making (LSGDM), the complexity of the decision-making environment highlights the significance of co-opetition in reaching a consensus among decision-makers (DMs) with diverse knowledge. Thus, this paper proposes an ordinal–cardinal consensus model for three-way LSGDM that considers co-opetition relationships among DMs. Specifically, we first improve the ordinal consensus
-
CSWin-UNet: Transformer UNet with cross-shaped windows for medical image segmentation Inform. Fusion (IF 14.7) Pub Date : 2024-08-10 Xiao Liu, Peng Gao, Tao Yu, Fei Wang, Ru-Yue Yuan
Deep learning, especially convolutional neural networks (CNNs) and Transformer architectures, has become the focus of extensive research in medical image segmentation, achieving impressive results. However, CNNs come with inductive biases that limit their effectiveness in more complex, varied segmentation scenarios. Conversely, while Transformer-based methods excel at capturing global and long-range
-
Adapting the segment anything model for multi-modal retinal anomaly detection and localization Inform. Fusion (IF 14.7) Pub Date : 2024-08-08 Jingtao Li, Ting Chen, Xinyu Wang, Yanfei Zhong, Xuan Xiao
The fusion of optical coherence tomography (OCT) and fundus modality information can provide a comprehensive diagnosis for retinal artery occlusion (RAO) disease, where OCT provides the cross-sectional examination of the fundus image. Given multi-modal retinal images, an anomaly diagnosis model can discriminate RAO without the need for real diseased samples. Despite this, previous studies have only
-
DGGI: Deep Generative Gradient Inversion with diffusion model Inform. Fusion (IF 14.7) Pub Date : 2024-08-08 Liwen Wu, Zhizhi Liu, Bin Pu, Kang Wei, Hangcheng Cao, Shaowen Yao
Federated learning is a privacy-preserving distributed framework that facilitates information fusion and sharing among different clients, enabling the training of a global model without exposing raw data. However, gradient inversion attacks, which can reconstruct training data from shared gradients, pose a significant threat. Prior attack approaches have demonstrated the efficacy of gradient inversion
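The core vulnerability can be seen analytically in a toy single-layer case (a hypothetical setup for illustration, not the paper's diffusion-based method): for a linear layer, the shared weight gradient factorizes as an outer product of the output error and the input, so the input can be read off exactly from the gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Private" client sample and a toy linear layer y = W x + b.
x_true = rng.normal(size=4)          # the data the attacker wants
W = rng.normal(size=(3, 4))
b = np.zeros(3)
target = rng.normal(size=3)

# Client computes gradients of an MSE loss and shares them (one FL step).
y = W @ x_true + b
delta = 2 * (y - target)             # dL/dy
grad_W = np.outer(delta, x_true)     # dL/dW = delta x^T
grad_b = delta                       # dL/db = delta

# Attacker: row i of grad_W equals delta_i * x, and grad_b reveals delta_i,
# so dividing recovers the private input exactly from shared gradients.
i = np.argmax(np.abs(grad_b))
x_rec = grad_W[i] / grad_b[i]
```

Deeper networks break this closed-form trick, which is why practical attacks instead optimize a candidate input so that its gradients match the shared ones.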
-
HTTPS: Heterogeneous Transfer learning for spliT Prediction System evaluated on healthcare data Inform. Fusion (IF 14.7) Pub Date : 2024-08-08 Jia-Hao Syu, Marcin Fojcik, Rafał Cupek, Jerry Chun-Wei Lin
The Internet of Medical Things (IoMT) facilitates revolutionary developments in healthcare services, recognized as smart healthcare. By collecting big healthcare data and utilizing artificial intelligence algorithms, P4-medicine can be realized in intelligent diagnosis, risk analysis, and health management. As more data is collected, privacy and security become imperatives in healthcare research, and split
-
Rotating machinery fault diagnosis method based on multi-level fusion framework of multi-sensor information Inform. Fusion (IF 14.7) Pub Date : 2024-08-08 Xiangqu Xiao, Chaoshun Li, Hongxiang He, Jie Huang, Tian Yu
High-precision fault diagnosis of rotating machinery plays an important role in industrial systems. Rotating machinery today often carries multiple sensors to monitor equipment condition, so fusing the data from these sensors is important for fault diagnosis. Most current multi-sensor fusion fault diagnosis methods are single-level, which cannot fully utilize the effective
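As a minimal illustration of what "levels" of fusion mean here (toy signals with a hypothetical 40 Hz fault frequency; not the paper's pipeline), the sketch below contrasts data-level fusion (averaging raw sensor signals before feature extraction) with feature-level fusion (extracting a spectral feature per sensor, then combining):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two vibration "sensors" observing the same machine state (toy signals).
t = np.linspace(0, 1, 256, endpoint=False)
fault_freq = 40.0                     # hypothetical fault signature
s1 = np.sin(2 * np.pi * fault_freq * t) + 0.8 * rng.normal(size=t.size)
s2 = np.sin(2 * np.pi * fault_freq * t) + 0.8 * rng.normal(size=t.size)

# Data-level fusion: combine raw signals, then extract features.
fused = (s1 + s2) / 2

def dominant_freq(sig):
    """Feature: frequency bin with the largest spectral magnitude."""
    spec = np.abs(np.fft.rfft(sig))
    spec[0] = 0.0                     # ignore the DC component
    return np.fft.rfftfreq(sig.size, d=t[1] - t[0])[np.argmax(spec)]

# Feature-level fusion: extract per-sensor features, then combine them.
feat = (dominant_freq(s1) + dominant_freq(s2)) / 2
```

Multi-level frameworks combine both (and decision-level fusion on top), so that noise suppressed at one level does not mask evidence available at another.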
-
Hierarchical spatio-temporal graph ODE networks for traffic forecasting Inform. Fusion (IF 14.7) Pub Date : 2024-08-06 Tao Xu, Jiaming Deng, Ruolin Ma, Zixiang Zhang, Yingying Zhao, Zhilong Zhao, Juntao Zhang
Recently, many traffic forecasting methods have been proposed to improve people’s daily lives. Although these works have achieved good predictive performance, they have three fundamental limitations. (i) The regional features of traffic flow are not fully utilized; (ii) The discretized spatiotemporal features of traffic flow do not facilitate a full understanding of the actual traffic dynamic features;
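The continuous-time view that graph ODE models take, as opposed to discretized snapshots, can be sketched minimally (the 4-node road graph and plain diffusion dynamics below are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

# Tiny road graph: 4 sensors in a line; Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A

def integrate(x0, t_end, dt=0.01):
    """Euler integration of the graph diffusion ODE dx/dt = -L x,
    i.e. speeds evolve continuously along road-network edges."""
    x = x0.copy()
    for _ in range(int(t_end / dt)):
        x = x + dt * (-L @ x)
    return x

x0 = np.array([60.0, 20.0, 20.0, 20.0])  # one fast sensor, three congested
x1 = integrate(x0, t_end=5.0)
# Diffusion conserves the total and smooths differences across sensors.
```

Graph ODE networks replace the fixed `-L @ x` dynamics with a learned function and integrate it with an ODE solver, so predictions are defined at arbitrary continuous times rather than fixed discrete steps.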
-
FC-HGNN: A heterogeneous graph neural network based on brain functional connectivity for mental disorder identification Inform. Fusion (IF 14.7) Pub Date : 2024-08-06 Yuheng Gu, Shoubo Peng, Yaqin Li, Linlin Gao, Yihong Dong
Rapid and accurate diagnosis of mental disorders has long been an essential challenge in clinical medicine. Due to the advantage in addressing non-Euclidean structures, graph neural networks have been increasingly used to study brain networks. Among the existing methods, the population graph models have achieved high predictive accuracy by considering intersubject relationships but weak interpretability
-
An image information fusion based simple diffusion network leveraging the segment anything model for guided attention on thermal images producing colorized pedestrian masks Inform. Fusion (IF 14.7) Pub Date : 2024-08-05 Suranjan Goswami, Satish Kumar Singh
Because of the synergistic characteristics of the data they provide, thermal imagers are becoming increasingly common as secondary data-collection modules complementing classical optical imagers. Their application is especially beneficial when ambient elements like light, smoke, or other particulates need to be handled. As a result, fusion-based detection technology, like assistive driving systems
-
Data-driven stock forecasting models based on neural networks: A review Inform. Fusion (IF 14.7) Pub Date : 2024-08-05 Wuzhida Bao, Yuting Cao, Yin Yang, Hangjun Che, Junjian Huang, Shiping Wen
As a core branch of financial forecasting, stock forecasting plays a crucial role for financial analysts, investors, and policymakers in managing risks and optimizing investment strategies, significantly enhancing the efficiency and effectiveness of economic decision-making. With the rapid development of information technology and computer science, data-driven neural network technologies have increasingly