Federated fusion learning with attention mechanism for multi-client medical image analysis
Information Fusion (IF 18.6), Pub Date: 2024-03-18, DOI: 10.1016/j.inffus.2024.102364
Muhammad Irfan, Khalid Mahmood Malik, Khan Muhammad

Federated Learning (FL) has gained significant attention because of its potential for privacy-preserving distributed learning. However, statistical heterogeneity and label scarcity remain major issues in multi-client scenarios. Furthermore, existing FL algorithms do not account for the unique data distribution of each client, which is essential for enhancing the overall performance of a model. In this study, a new FL algorithm called federated fusion learning (FFL) is proposed to address the challenges of statistical heterogeneity and label scarcity in multi-client data fusion. In FFL, each device computes its data distribution parameters and sends them to the server in a single communication round. The server then uses the received distribution parameters to generate synthetic data for each client and combines them into a larger dataset. FFL comprises three modules that identify the latent space of each client, intelligently fuse the various modalities at the server, and construct the global weights. The study also introduces a novel refinement mechanism that embeds the training images into this latent space, so that the latent spaces of different clients can be fused using a custom multi-client latent space fusion (MCLSF) module. The attention mechanism used in FFL enables effective multi-client fusion by leveraging the inherent correlations between the different data modalities. Ten datasets sourced from MedMNIST, encompassing diverse imaging modalities such as X-rays, computed tomography (CT), and optical coherence tomography (OCT), were considered. In addition, detailed ablation studies were conducted on multiple client configurations and non-independent and identically distributed (non-IID) setups to verify the generalizability of the proposed approach. The results demonstrate the enhanced generalizability and overall performance of the global model compared with existing state-of-the-art methods, confirming its broader applicability and effectiveness.
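
The abstract outlines FFL's single-round protocol (clients share data-distribution parameters, the server synthesizes data from them) and an attention-based multi-client latent space fusion (MCLSF). A minimal Python sketch of both ideas follows; it assumes per-class Gaussian statistics over latent embeddings and a plain scaled dot-product attention, neither of which is specified in the abstract, and all function names are illustrative rather than taken from the paper.

```python
import numpy as np

def client_distribution_parameters(latents, labels):
    """Client side (illustrative): summarize the local latent space as
    per-class (mean, covariance) pairs, sent to the server once."""
    params = {}
    for c in np.unique(labels):
        z = latents[labels == c]
        params[int(c)] = (z.mean(axis=0), np.cov(z, rowvar=False))
    return params

def server_synthesize(all_client_params, samples_per_class=100, seed=0):
    """Server side (illustrative): draw synthetic latent samples from every
    client's parameters and pool them into one larger labelled set."""
    rng = np.random.default_rng(seed)
    synth_z, synth_y = [], []
    for params in all_client_params:                    # one dict per client
        for c, (mu, cov) in params.items():
            synth_z.append(rng.multivariate_normal(mu, cov, samples_per_class))
            synth_y.append(np.full(samples_per_class, c))
    return np.concatenate(synth_z), np.concatenate(synth_y)

def attention_fuse(client_latents):
    """Toy MCLSF-style fusion: one prototype vector per client, mixed with
    scaled dot-product self-attention (an assumed stand-in for the module)."""
    P = np.stack([z.mean(axis=0) for z in client_latents])   # (K, d) prototypes
    scores = P @ P.T / np.sqrt(P.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)             # row-wise softmax
    return weights @ P                                        # fused prototypes
```

Under this reading, each client uploads only a handful of distribution statistics rather than raw images, which is how the single communication round and the privacy-preservation claim fit together.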

Updated: 2024-03-18