Deep learning-based anatomical site classification for upper gastrointestinal endoscopy.
International Journal of Computer Assisted Radiology and Surgery (IF 2.3) Pub Date: 2020-05-06, DOI: 10.1007/s11548-020-02148-5
Qi He, Sophia Bano, Omer F Ahmad, Bo Yang, Xin Chen, Pietro Valdastri, Laurence B Lovat, Danail Stoyanov, Siyang Zuo
PURPOSE Upper gastrointestinal (GI) endoscopic image documentation provides an efficient, low-cost means of quality control for endoscopic reporting. The task is challenging for computer-assisted techniques, however, because different anatomical sites can appear very similar, and the appearance of the same site can vary widely and inconsistently across patients. Following the British and modified Japanese guidelines, we therefore propose a set of oesophagogastroduodenoscopy (EGD) images to be captured routinely and evaluate its suitability for deep learning-based classification methods.

METHODS We establish a novel EGD image dataset that standardises upper GI endoscopy into a series of steps, following the landmarks proposed in the guidelines, with annotations provided by an expert clinician. To show that the proposed landmarks can be discriminated well enough to support the generation of an automated endoscopic report, we train several deep learning-based classification models on the annotated images.

RESULTS We report results on a clinical dataset of 211 patients (3704 EGD images in total) acquired during routine upper GI endoscopic examinations. The labels predicted by our method agree closely with the ground truth provided by human experts. We also observe the limitations of the current static image classification scheme for EGD image classification.

CONCLUSION Our study presents a framework for developing automated EGD reports using deep learning. We demonstrate that our method is a feasible approach to EGD image classification, show qualitatively how it performs on our dataset, and indicate how it can lead towards improved performance.
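The abstract does not specify the architectures or training setup used in the paper. As an illustrative sketch only, a common transfer-learning baseline for multi-class anatomical site classification of EGD frames could look like the following; the number of landmark classes, the folder-per-class dataset layout (`egd_dataset/train/<site_name>/`), and all hyperparameters are assumptions, not details from the paper.

```python
# Illustrative sketch only: a generic transfer-learning baseline for classifying
# EGD frames into anatomical sites with torchvision. Dataset layout, class count,
# and hyperparameters are hypothetical and not taken from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_SITES = 11  # hypothetical number of landmark classes; set to match the dataset

# Standard ImageNet preprocessing, since we start from ImageNet-pretrained weights.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Expects a folder-per-class layout, e.g. egd_dataset/train/<site_name>/*.jpg
train_ds = datasets.ImageFolder("egd_dataset/train", transform=train_tf)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Replace the ImageNet classifier head with one sized to the anatomical sites.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_SITES)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1}: loss {running_loss / len(train_ds):.4f}")
```

Fine-tuning an ImageNet-pretrained backbone is a standard choice when only a few thousand labelled images are available, as is the case for the 3704-image dataset described above; the paper itself may use different or additional models.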

Updated: 2020-05-06