Graph-Based Region and Boundary Aggregation for Biomedical Image Segmentation
IEEE Transactions on Medical Imaging (IF 10.6) Pub Date: 2021-11-01, DOI: 10.1109/tmi.2021.3123567
Yanda Meng 1, Hongrun Zhang 1, Yitian Zhao 2, Xiaoyun Yang 3, Yihong Qiao 4, Ian J. C. MacCormick 5, Xiaowei Huang 6, Yalin Zheng 1

Segmentation is a fundamental task in biomedical image analysis. Unlike existing region-based dense pixel classification methods or boundary-based polygon regression methods, we build a novel graph neural network (GNN) based deep learning framework with multiple graph reasoning modules to explicitly leverage both region and boundary features in an end-to-end manner. The mechanism extracts discriminative region and boundary features, referred to as initialized region and boundary node embeddings, using a proposed Attention Enhancement Module (AEM). The weighted links between cross-domain nodes (region and boundary feature domains) in each graph are defined in a data-dependent way, which retains both global and local cross-node relationships. The iterative message aggregation and node update mechanism enhances the interaction between each graph reasoning module's global semantic information and local spatial characteristics. In particular, our model can concurrently perform region and boundary feature reasoning and aggregation at several different feature levels, thanks to the proposed multi-level feature node embeddings in the parallel graph reasoning modules. Experiments on two types of challenging datasets demonstrate that our method outperforms state-of-the-art approaches for segmentation of polyps in colonoscopy images and of the optic disc and optic cup in colour fundus images. The trained models will be made available at: https://github.com/smallmax00/Graph_Region_Boudnary
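The graph reasoning described in the abstract pairs region and boundary node embeddings and lets them exchange messages over data-dependent weighted links, with iterative node updates. The PyTorch sketch below is a minimal illustration of that idea only, assuming both node sets share a common channel dimension; the module name `CrossDomainGraphReasoning`, the dot-product affinity, and the residual update rule are hypothetical choices and do not reproduce the paper's AEM or its multi-level parallel modules.

```python
# Hypothetical sketch of cross-domain (region <-> boundary) graph reasoning.
# The exact architecture in the paper is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossDomainGraphReasoning(nn.Module):
    """Iteratively exchanges messages between region and boundary node embeddings.

    Edge weights are data-dependent: a softmax over pairwise similarities between
    the two node sets, recomputed at every message-passing step.
    """

    def __init__(self, channels: int, num_steps: int = 3):
        super().__init__()
        self.num_steps = num_steps
        # Per-domain update functions applied after message aggregation.
        self.update_region = nn.Linear(2 * channels, channels)
        self.update_boundary = nn.Linear(2 * channels, channels)

    def forward(self, region: torch.Tensor, boundary: torch.Tensor):
        # region:   (B, N_r, C) region node embeddings
        # boundary: (B, N_b, C) boundary node embeddings
        for _ in range(self.num_steps):
            # Data-dependent cross-domain affinity (B, N_r, N_b).
            affinity = torch.bmm(region, boundary.transpose(1, 2))
            w_r2b = F.softmax(affinity, dim=1)   # weights for boundary <- region
            w_b2r = F.softmax(affinity, dim=2)   # weights for region <- boundary

            # Message aggregation across domains.
            msg_to_region = torch.bmm(w_b2r, boundary)                  # (B, N_r, C)
            msg_to_boundary = torch.bmm(w_r2b.transpose(1, 2), region)  # (B, N_b, C)

            # Node update: concatenate the node's own state with the aggregated
            # message, project back to C channels, keep a residual connection.
            region = region + F.relu(
                self.update_region(torch.cat([region, msg_to_region], dim=-1)))
            boundary = boundary + F.relu(
                self.update_boundary(torch.cat([boundary, msg_to_boundary], dim=-1)))
        return region, boundary


if __name__ == "__main__":
    # Toy shapes only; in the paper the node embeddings would come from the AEM
    # applied to multi-level encoder features.
    reasoner = CrossDomainGraphReasoning(channels=64, num_steps=3)
    region_nodes = torch.randn(2, 196, 64)
    boundary_nodes = torch.randn(2, 64, 64)
    r_out, b_out = reasoner(region_nodes, boundary_nodes)
    print(r_out.shape, b_out.shape)  # torch.Size([2, 196, 64]) torch.Size([2, 64, 64])
```

Normalising the affinity matrix along opposite axes gives each node a weighted view of the other domain, which is one simple way to realise data-dependent cross-domain links; the residual update keeps the original embedding accessible across message-passing steps.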

Updated: 2021-11-01