Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration.
International Journal of Computer Assisted Radiology and Surgery (IF 2.3). Pub Date: 2020-04-24. DOI: 10.1007/s11548-020-02162-7
Robert B. Grupp 1, Mathias Unberath 1, Cong Gao 1, Rachel A. Hegeman 2, Ryan J. Murphy 3, Clayton P. Alexander 4, Yoshito Otake 5, Benjamin A. McArthur 6,7, Mehran Armand 2,4,8, Russell H. Taylor 1
PURPOSE: Fluoroscopy is the standard imaging modality used to guide hip surgery and is therefore a natural sensor for computer-assisted navigation. In order to efficiently solve the complex registration problems presented during navigation, human-assisted annotations of the intraoperative image are typically required. This manual initialization interferes with the surgical workflow and diminishes any advantages gained from navigation. In this paper, we propose a method for fully automatic registration using anatomical annotations produced by a neural network.

METHODS: Neural networks are trained to simultaneously segment anatomy and identify landmarks in fluoroscopy. Training data are obtained using a computationally intensive, intraoperatively incompatible, 2D/3D registration of the pelvis and each femur. Ground truth 2D segmentation labels and anatomical landmark locations are established using projected 3D annotations. Intraoperative registration couples a traditional intensity-based strategy with annotations inferred by the network and requires no human assistance.

RESULTS: Ground truth segmentation labels and anatomical landmarks were obtained in 366 fluoroscopic images across 6 cadaveric specimens. In a leave-one-subject-out experiment, networks trained on these data obtained mean Dice coefficients for the left and right hemipelves and the left and right femurs of 0.86, 0.87, 0.90, and 0.84, respectively. The mean 2D landmark localization error was 5.0 mm. The pelvis was registered within [Formula: see text] for 86% of the images when using the proposed intraoperative approach, with an average runtime of 7 s. In comparison, an intensity-only approach without manual initialization registered the pelvis to [Formula: see text] in only 18% of images.

CONCLUSIONS: We have created the first accurately annotated, non-synthetic dataset of hip fluoroscopy. By using these annotations as training data for neural networks, state-of-the-art performance in fluoroscopic segmentation and landmark localization was achieved. Integrating these annotations allows for robust, fully automatic, and efficient intraoperative registration during fluoroscopic navigation of the hip.
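As background for two quantities reported in the abstract — ground-truth 2D landmarks obtained by projecting 3D annotations into the fluoroscopic image, and segmentation quality measured by the Dice coefficient — a minimal NumPy sketch. This is illustrative only: the function names and the simple pinhole camera model are assumptions, not the authors' code.

```python
import numpy as np

def project_landmark(K, R, t, p3d):
    """Project a 3D landmark into 2D pixel coordinates using a
    pinhole model: homogeneous image point = K @ (R @ p3d + t).
    (Assumed model for illustration; not the paper's implementation.)"""
    uvw = K @ (R @ p3d + t)
    return uvw[:2] / uvw[2]

def dice_coefficient(pred, gt):
    """Dice overlap between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect overlap
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

A Dice value of 1.0 indicates perfect overlap between predicted and ground-truth masks, 0.0 indicates no overlap; the abstract's per-structure scores (0.84 to 0.90) are means of this quantity over held-out images.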

Updated: 2020-04-24