Whole Slide Images based Cancer Survival Prediction using Attention Guided Deep Multiple Instance Learning Networks
arXiv - CS - Computer Vision and Pattern Recognition, Pub Date: 2020-09-23, DOI: arxiv-2009.11169
Jiawen Yao, Xinliang Zhu, Jitendra Jonnagaddala, Nicholas Hawkins, Junzhou Huang

Traditional image-based survival prediction models rely on discriminative patch labeling, which makes them difficult to scale to large datasets. Recent studies have shown that the Multiple Instance Learning (MIL) framework is useful for histopathological image classification when no patch-level annotations are available. Unlike current image-based survival models that are limited to key patches or clusters derived from Whole Slide Images (WSIs), we propose Deep Attention Multiple Instance Survival Learning (DeepAttnMISL), which introduces both a siamese MI-FCN and attention-based MIL pooling to efficiently learn imaging features from WSIs and then aggregate WSI-level information to the patient level. Attention-based aggregation is more flexible and adaptive than the aggregation techniques used in recent survival models. We evaluated our method on two large whole-slide-image cancer datasets, and the results suggest that the proposed approach is more effective, better suited to large datasets, and more interpretable in locating the important patterns and features that contribute to accurate cancer survival prediction. The proposed framework can also be used to assess an individual patient's risk and thus assist in delivering personalized medicine. Code is available at https://github.com/uta-smile/DeepAttnMISL_MEDIA.
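The attention-based MIL pooling step mentioned above can be sketched roughly as follows: a minimal PyTorch illustration of attention pooling over a bag of patch (or cluster) embeddings from one patient. Class and parameter names, feature dimensions, and the absence of a gating branch are illustrative assumptions, not the authors' released implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-weighted pooling of instance embeddings into one bag-level vector
    (hypothetical sketch; dimensions are placeholders)."""
    def __init__(self, in_dim=64, hidden_dim=32):
        super().__init__()
        # small MLP that scores each instance embedding
        self.attention = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h):
        # h: (num_instances, in_dim) embeddings for one patient's WSI patches/clusters
        a = self.attention(h)            # (num_instances, 1) unnormalized scores
        a = torch.softmax(a, dim=0)      # attention weights summing to 1 over the bag
        z = torch.sum(a * h, dim=0)      # (in_dim,) patient-level representation
        return z, a.squeeze(-1)

if __name__ == "__main__":
    bag = torch.randn(10, 64)            # toy bag: 10 instance embeddings
    pool = AttentionMILPooling()
    patient_repr, weights = pool(bag)
    print(patient_repr.shape, weights.shape)  # torch.Size([64]) torch.Size([10])
```

In a survival setting, the pooled patient-level vector would then feed a risk head (e.g., a Cox-style score), and the learned attention weights indicate which regions of the WSI drive the prediction, which is the source of the interpretability claimed in the abstract.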

Updated: 2020-09-24