Learned optical flow for intra-operative tracking of the retinal fundus.
International Journal of Computer Assisted Radiology and Surgery (IF 3). Pub Date: 2020-04-22. DOI: 10.1007/s11548-020-02160-9
Claudio S. Ravasio, Theodoros Pissas, Edward Bloch, Blanca Flores, Sepehr Jalali, Danail Stoyanov, Jorge M. Cardoso, Lyndon Da Cruz, Christos Bergeles

PURPOSE: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network that densely predicts semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools.

METHODS: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generating large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground-truth points on a benchmark of challenging real intra-operative clips, drawn from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases.

RESULTS: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions that occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos.

CONCLUSIONS: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system that can assist with intra-operative tracking of moving surgical targets even when occluded.
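The tracking step the abstract describes — propagating sparse points of interest through a sequence of dense flow fields — can be sketched independently of the network itself. The following is a minimal NumPy illustration of that propagation, not the authors' implementation; the function names and the bilinear-sampling details are assumptions for the sketch:

```python
import numpy as np

def sample_flow(flow, points):
    """Bilinearly sample a dense flow field of shape (H, W, 2) at
    sub-pixel point locations given as an (N, 2) array of (x, y)."""
    h, w, _ = flow.shape
    x = np.clip(points[:, 0], 0, w - 1.001)
    y = np.clip(points[:, 1], 0, h - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0
    # Weighted average of the four neighbouring flow vectors.
    return (flow[y0, x0] * ((1 - wx) * (1 - wy))[:, None]
            + flow[y0, x1] * (wx * (1 - wy))[:, None]
            + flow[y1, x0] * ((1 - wx) * wy)[:, None]
            + flow[y1, x1] * (wx * wy)[:, None])

def track_points(points, flows):
    """Propagate (N, 2) points through a sequence of dense flow
    fields, one field per frame pair; returns the full trajectory."""
    trajectory = [points]
    for flow in flows:
        points = points + sample_flow(flow, points)
        trajectory.append(points)
    return trajectory
```

Because the network is trained to inpaint flow under tool occlusion, the sampled flow at an occluded point would still carry a plausible retinal motion estimate, which is what lets this simple propagation keep tracking through occlusions.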

Updated: 2020-04-23