Show, tell and summarise: learning to generate and summarise radiology findings from medical images
Neural Computing and Applications (IF 6) Pub Date: 2021-04-05, DOI: 10.1007/s00521-021-05943-6
Sonit Singh, Sarvnaz Karimi, Kevin Ho-Shon, Len Hamey

Radiology plays a vital role in health care by imaging the human body for the diagnosis, monitoring, and treatment of medical problems. In radiology practice, radiologists routinely examine medical images such as chest X-rays and describe their findings in the form of radiology reports. However, reading medical images and summarising their insights is time-consuming, tedious, and error-prone, and often represents a bottleneck in the clinical diagnosis process. A computer-aided diagnosis system that can automatically generate radiology reports from medical images could significantly reduce workload and diagnostic errors, speed up the clinical workflow, and help alleviate the shortage of radiologists. Existing research on radiology report generation focuses on generating the concatenation of the findings and impression sections, and it ignores important differences between normal and abnormal radiology reports. The text of normal and abnormal reports differs in style, and it is difficult for a single model to learn both styles while also learning to transition from findings to impression. To alleviate these challenges, we propose a Show, Tell and Summarise model that first generates findings from chest X-rays and then summarises them to produce the impression section. The proposed work generates the findings and impression sections separately, overcoming this limitation of previous research. We also use separate models for generating normal and abnormal radiology reports, which provides a true picture of the model's performance. Experimental results on the publicly available IU-CXR dataset show the effectiveness of the proposed model. Finally, we highlight limitations of current radiology report generation research and present recommendations for future work.
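
The abstract describes a two-stage pipeline: an image-to-text stage that generates the findings, followed by a text-to-text stage that summarises those findings into the impression. The sketch below illustrates that decomposition only; the ResNet-18 encoder, LSTM decoders, dimensions, and module names are illustrative assumptions and not the authors' exact architecture (the paper would likewise train separate instances of this pipeline for normal and abnormal reports).

```python
# A minimal sketch of the "generate findings, then summarise" pipeline.
# All architectural choices here (ResNet-18 encoder, single-layer LSTMs,
# teacher forcing) are assumptions for illustration.
import torch
import torch.nn as nn
import torchvision.models as models


class FindingsGenerator(nn.Module):
    """Stage 1 ("Show and Tell"): encode a chest X-ray, decode a findings sequence."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)                 # image encoder (assumed)
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])
        self.img_proj = nn.Linear(512, embed_dim)           # project image features to embedding size
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, findings_tokens):
        feats = self.encoder(images).flatten(1)             # (B, 512)
        img_emb = self.img_proj(feats).unsqueeze(1)         # (B, 1, E)
        tok_emb = self.embed(findings_tokens)               # (B, T, E)
        inputs = torch.cat([img_emb, tok_emb], dim=1)       # image feature fed as the first step
        hidden, _ = self.decoder(inputs)
        return self.out(hidden[:, 1:, :])                   # (B, T, vocab) logits for findings tokens


class ImpressionSummariser(nn.Module):
    """Stage 2 ("Summarise"): encode the generated findings, decode the impression."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, findings_tokens, impression_tokens):
        _, state = self.encoder(self.embed(findings_tokens))      # compress findings into the LSTM state
        hidden, _ = self.decoder(self.embed(impression_tokens), state)
        return self.out(hidden)                                    # (B, T', vocab) logits for impression tokens


if __name__ == "__main__":
    vocab = 1000
    xray = torch.randn(2, 3, 224, 224)                 # a batch of two chest X-rays
    findings = torch.randint(0, vocab, (2, 20))        # tokenised findings (teacher forcing)
    impression = torch.randint(0, vocab, (2, 8))       # tokenised impression

    stage1 = FindingsGenerator(vocab)
    stage2 = ImpressionSummariser(vocab)
    print(stage1(xray, findings).shape)                # torch.Size([2, 20, 1000])
    print(stage2(findings, impression).shape)          # torch.Size([2, 8, 1000])
```

At inference time, stage 1 would decode findings token by token from the image, and stage 2 would then take those generated findings as input to produce the impression, mirroring the separation of sections described above.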



Updated: 2021-04-06