One-Pass Multi-Task Networks With Cross-Task Guided Attention for Brain Tumor Segmentation
IEEE Transactions on Image Processing (IF 10.8). Pub Date: 2020-02-19. DOI: 10.1109/tip.2020.2973510
Chenhong Zhou, Changxing Ding, Xinchao Wang, Zhentai Lu, Dacheng Tao

Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue by running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and ignores the correlation among the models. To address these flaws of the MC approach, in this paper we propose a lightweight deep model, the One-pass Multi-task Network (OM-Net), which handles class imbalance better than MC while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features, as well as task-specific parameters to learn discriminative features. Second, to optimize OM-Net more effectively, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. Following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments are conducted to demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and the BraTS 2017 online validation set. Using these proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code is publicly available at https://github.com/chenhong-zhou/OM-Net.
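The abstract describes CGA as recalibrating channel-wise feature responses using category-specific statistics derived from the previous task's prediction. The paper's actual module involves learned parameters; the following is only a minimal numpy sketch of that idea under simplifying assumptions (the function name, the soft-weighted pooling, and the parameter-free sigmoid gate are illustrative choices, not the authors' implementation):

```python
import numpy as np

def cross_task_guided_attention(features, prev_prediction):
    """Hypothetical sketch of CGA-style channel recalibration.

    features:        (C, H, W) feature map of the current task
    prev_prediction: (H, W) soft foreground probability from the previous task
    """
    # Category-specific statistic: average each channel over the region
    # the previous task predicts as foreground (soft-weighted pooling).
    weights = prev_prediction / (prev_prediction.sum() + 1e-6)
    stats = (features * weights).sum(axis=(1, 2))          # shape (C,)

    # Parameter-free sigmoid gate; the real module would use a small
    # learned bottleneck (e.g. two fully connected layers) here.
    gate = 1.0 / (1.0 + np.exp(-stats))                    # shape (C,)

    # Recalibrate channel-wise responses.
    return features * gate[:, None, None]
```

A channel whose activations are strong inside the previously predicted tumor region receives a gate value near 1 and is preserved, while channels that respond mostly outside that region are suppressed.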

Updated: 2020-02-19