Learning ultrasound rendering from cross-sectional model slices for simulated training
International Journal of Computer Assisted Radiology and Surgery (IF 2.3). Pub Date: 2021-04-08, DOI: 10.1007/s11548-021-02349-6
Lin Zhang, Tiziano Portenier, Orcun Goksel

Purpose

Given the high level of expertise required for navigating and interpreting ultrasound images, computational simulations can facilitate the training of such skills in virtual reality. Ray-tracing based simulations can generate realistic ultrasound images; however, the computational constraints of interactive use typically force a compromise in image quality.

Methods

We propose herein to bypass any rendering and simulation process at interactive time by conducting such simulations during a non-time-critical offline stage and then learning the image translation from cross-sectional model slices to the simulated frames. We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme, both of which substantially improve image quality without increasing the number of network parameters. Integral attenuation maps derived from cross-sectional model slices, texture-friendly strided convolutions, and the provision of stochastic noise and input maps to intermediate layers to preserve locality are all shown herein to greatly facilitate this translation task.
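To make these ingredients concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a helper that accumulates per-tissue attenuation along the beam direction into an integral attenuation map, and one generator stage that downsamples with a strided convolution while re-feeding the input maps and fresh Gaussian noise at that resolution. Tissue labels, attenuation coefficients, and layer sizes are illustrative assumptions, and what exactly makes a convolution "texture-friendly" is paper-specific; a plain strided convolution stands in here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def integral_attenuation(label_slice, coeffs, pixel_cm, freq_mhz):
    """Cumulative attenuation (dB) along the axial (row) direction of a
    cross-sectional tissue-label slice; rows are assumed to run from the
    transducer towards depth. `coeffs` maps label -> dB/cm/MHz."""
    per_pixel_db = coeffs[label_slice] * pixel_cm * freq_mhz
    return torch.cumsum(per_pixel_db, dim=-2)

class FeedBlock(nn.Module):
    """One generator stage: a strided convolution for downsampling, with
    the input maps and per-pixel Gaussian noise re-fed at this resolution
    so that local correspondence to the input slice is preserved."""
    def __init__(self, in_ch, map_ch, out_ch, stride=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + map_ch + 1, out_ch, 3,
                              stride=stride, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, feat, maps):
        # Resize the input maps to the current feature resolution.
        maps = F.interpolate(maps, size=feat.shape[-2:], mode="nearest")
        noise = torch.randn(feat.size(0), 1, *feat.shape[-2:],
                            device=feat.device)
        return self.act(self.conv(torch.cat([feat, maps, noise], dim=1)))

# Usage with a toy 4-tissue slice; coefficients are illustrative only.
labels = torch.randint(0, 4, (1, 1, 256, 256))
coeffs = torch.tensor([0.0, 0.54, 0.63, 5.0])
atten = integral_attenuation(labels, coeffs, pixel_cm=0.05, freq_mhz=5.0)
maps = torch.cat([labels.float(), atten], dim=1)   # (1, 2, 256, 256)
out = FeedBlock(in_ch=2, map_ch=2, out_ch=32)(maps, maps)
```

Re-feeding the maps and noise at every scale, rather than only at the first layer, is what lets deep layers retain pixel-accurate tissue boundaries while still synthesizing stochastic speckle texture.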

Results

Across several quality metrics, the proposed method, with only tissue maps as input, is shown to provide results comparable or superior to a state-of-the-art method that additionally requires low-quality ultrasound renderings as input. An extensive ablation study demonstrates the need for and benefit of each individual contribution in this work, based on qualitative examples and quantitative ultrasound similarity metrics. To that end, an error metric based on local histogram statistics is proposed and demonstrated for visualizing local dissimilarities between ultrasound images.
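The abstract does not specify the exact statistics used; the NumPy sketch below illustrates the general idea of such a local metric, using the L1 distance between normalized patch histograms as a stand-in. Patch size, bin count, and the assumption of intensities in [0, 1] are all hypothetical choices.

```python
import numpy as np

def local_histogram_error(img_a, img_b, patch=32, bins=32):
    """Per-patch dissimilarity map between two images of equal shape,
    with intensities assumed normalized to [0, 1]: compare the intensity
    histograms of corresponding patches via their mean L1 distance."""
    h, w = img_a.shape
    rows, cols = h // patch, w // patch
    err = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * patch, (i + 1) * patch),
                  slice(j * patch, (j + 1) * patch))
            ha, _ = np.histogram(img_a[sl], bins=bins,
                                 range=(0.0, 1.0), density=True)
            hb, _ = np.histogram(img_b[sl], bins=bins,
                                 range=(0.0, 1.0), density=True)
            err[i, j] = np.abs(ha - hb).mean()
    return err
```

The resulting low-resolution error map can be upsampled and overlaid on the images to highlight regions where the local speckle statistics of the two images diverge.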

Conclusion

A deep-learning-based direct transformation from interactive tissue slices to the likeness of high-quality renderings obviates any complex rendering process at run time. This could enable extremely realistic ultrasound simulation on consumer hardware by moving the time-intensive processes to a one-time, offline preprocessing stage for data preparation, which can be performed on dedicated high-end hardware.



Updated: 2021-04-09