Deep Learning for Ultrasound Image Formation: CUBDL Evaluation Framework and Open Datasets.
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control (IF 3.6). Pub Date: 2021-11-23. DOI: 10.1109/tuffc.2021.3094849
Dongwoon Hyun, Alycen Wiacek, Sobhan Goudarzi, Sven Rothlübbers, Amir Asif, Klaus Eickel, Yonina C. Eldar, Jiaqi Huang, Massimo Mischi, Hassan Rivaz, David Sinden, Ruud J. G. van Sloun, Hannah Strohm, Muyinatu A. Lediju Bell

Deep learning for ultrasound image formation is rapidly garnering research support and attention, quickly rising as the latest frontier in the field, with much promise to balance both image quality and display speed. Despite this promise, one challenge with identifying optimal solutions is the absence of unified evaluation methods and datasets that are not specific to a single research group. This article introduces the largest known international database of ultrasound channel data and describes the associated evaluation methods that were initially developed for the Challenge on Ultrasound Beamforming with Deep Learning (CUBDL), which was offered as a component of the 2020 IEEE International Ultrasonics Symposium. We summarize the challenge results and present qualitative and quantitative assessments using both the initially closed CUBDL evaluation test dataset (which was crowd-sourced from multiple groups around the world) and additional in vivo breast ultrasound data contributed after the challenge was completed. As an example quantitative assessment, single plane wave images from the CUBDL Task 1 dataset produced a mean generalized contrast-to-noise ratio (gCNR) of 0.67 and a mean lateral resolution of 0.42 mm when formed with delay-and-sum beamforming, compared with a mean gCNR as high as 0.81 and a mean lateral resolution as low as 0.32 mm when formed with the networks submitted by the challenge winners. We also describe contributed CUBDL data that may be used to train future networks. The compiled database includes a total of 576 image acquisition sequences. We additionally introduce a neural-network-based global sound speed estimator implementation that was necessary to fairly evaluate the results obtained with this international database. The CUBDL evaluation methods, evaluation code, network weights from the challenge winners, and all datasets described herein are publicly available (visit https://cubdl.jhu.edu for details).
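For readers unfamiliar with the headline metric, gCNR is conventionally defined as one minus the overlap between the pixel-amplitude histograms of a target region and a background region, which bounds it between 0 (indistinguishable regions) and 1 (perfectly separable regions). The sketch below is a minimal NumPy illustration of that definition, not the official CUBDL evaluation code; the function name `gcnr`, the histogram binning, and the choice of a shared bin support are our own assumptions for illustration.

```python
import numpy as np

def gcnr(roi_target, roi_background, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the
    pixel-amplitude histograms of the target and background regions."""
    # Pool both regions so the two histograms share the same bin edges.
    samples = np.concatenate([roi_target.ravel(), roi_background.ravel()])
    edges = np.linspace(samples.min(), samples.max(), bins + 1)
    p_target, _ = np.histogram(roi_target, bins=edges, density=True)
    p_background, _ = np.histogram(roi_background, bins=edges, density=True)
    # Approximate the overlap integral of the two densities.
    bin_width = edges[1] - edges[0]
    overlap = np.sum(np.minimum(p_target, p_background)) * bin_width
    return 1.0 - overlap
```

In practice the regions would be extracted from the beamformed (or network-produced) image around a lesion or anechoic target and its surrounding speckle; on log-compressed B-mode data, fixed bin edges over the display dynamic range are often used instead of data-driven edges.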

Last updated: 2021-07-05