F3RNet: full-resolution residual registration network for deformable image registration
International Journal of Computer Assisted Radiology and Surgery (IF 3) Pub Date: 2021-05-03, DOI: 10.1007/s11548-021-02359-4
Zhe Xu 1,2, Jie Luo 2, Jiangpeng Yan 1, Xiu Li 1, Jagadeesan Jayender 2

Purpose

Deformable image registration (DIR) is essential for many image-guided therapies. Recently, deep learning approaches have gained substantial popularity and success in DIR. Most of these approaches use the so-called mono-stream high-to-low, low-to-high network structure and can achieve satisfactory overall registration results. However, accurate alignment of severely deformed local regions, which is crucial for pinpointing surgical targets, is often overlooked. Consequently, these approaches are not sensitive to some hard-to-align regions, e.g., deformed liver lobes in intra-patient registration.

Methods

We propose a novel unsupervised registration network, namely full-resolution residual registration network (F3RNet), for deformable registration of severely deformed organs. The proposed method combines two parallel processing streams in a residual learning fashion. One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration. The other stream learns the deep multi-scale residual representations to obtain robust recognition. We also factorize the 3D convolution to reduce the training parameters and enhance network efficiency.
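
To make the two ideas in the Methods concrete, below is a minimal PyTorch sketch of (a) a factorized 3D convolution and (b) one stage of a parallel full-resolution/multi-scale residual design. The module names, the particular factorization (3×3×1 followed by 1×1×3), and the fuse-by-addition scheme are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch (PyTorch assumed); names and factorization scheme are
# illustrative assumptions, not the authors' exact F3RNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedConv3d(nn.Module):
    """Assumed factorization: a 3x3x3 kernel is replaced by a 3x3x1 conv
    followed by a 1x1x3 conv, cutting weights from ~27*C^2 to ~12*C^2
    (ignoring biases) when input and output channels are both C."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv_xy = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 1), padding=(1, 1, 0))
        self.conv_z = nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, 3), padding=(0, 0, 1))

    def forward(self, x):
        return self.conv_z(F.relu(self.conv_xy(x)))


class TwoStreamResidualStage(nn.Module):
    """One stage of the parallel-stream idea: a full-resolution stream
    (voxel-level detail) is refined by residuals from a downsampled
    multi-scale stream (robust coarse context)."""

    def __init__(self, ch):
        super().__init__()
        self.full_res = FactorizedConv3d(ch, ch)
        self.multi_scale = FactorizedConv3d(ch, ch)

    def forward(self, full, coarse):
        full = full + F.relu(self.full_res(full))          # residual refinement
        coarse = coarse + F.relu(self.multi_scale(coarse)) # coarse residual
        # Fuse: upsample the coarse features to full resolution and add.
        up = F.interpolate(coarse, size=full.shape[2:], mode="trilinear",
                           align_corners=False)
        return full + up, coarse


if __name__ == "__main__":
    full = torch.randn(1, 8, 32, 32, 32)    # full-resolution feature map
    coarse = torch.randn(1, 8, 16, 16, 16)  # downsampled stream
    fused, coarse_out = TwoStreamResidualStage(8)(full, coarse)
    print(fused.shape)  # torch.Size([1, 8, 32, 32, 32])
```

In a sketch like this, the additive fusion keeps the residual-learning flavor described above: the full-resolution stream preserves voxel-level detail while the coarse stream contributes robust context at low cost.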

Results

We validate the proposed method on a clinically acquired intra-patient abdominal CT-MRI dataset and a public inspiratory and expiratory thorax CT dataset. Experiments on both multimodal and unimodal registration demonstrate promising results compared to state-of-the-art approaches.

Conclusion

By combining the high-resolution information and multi-scale representations in a highly interactive residual learning fashion, the proposed F3RNet can achieve accurate overall and local registration. The runtime for registering a pair of images is less than 3 s on a GPU. In future work, we will investigate how to cost-effectively process high-resolution information and fuse multi-scale representations.




Updated: 2021-05-03