Dense Recurrent Neural Networks for Accelerated MRI: History-Cognizant Unrolling of Optimization Algorithms
IEEE Journal of Selected Topics in Signal Processing (IF 8.7). Pub Date: 2020-10-01. DOI: 10.1109/jstsp.2020.3003170
Seyed Amir Hossein Hosseini¹, Burhaneddin Yaman¹, Steen Moeller², Mingyi Hong³, Mehmet Akçakaya¹

Inverse problems for accelerated MRI typically incorporate domain-specific knowledge about the forward encoding operator in a regularized reconstruction framework. Recently, physics-driven deep learning (DL) methods have been proposed that use neural networks for data-driven regularization. These methods unroll iterative optimization algorithms that minimize the inverse problem objective function, alternating between domain-specific data-consistency updates and data-driven regularization via neural networks. The whole unrolled network is then trained end-to-end to learn its parameters. Because data-consistency updates reduce to simple gradient descent steps, proximal gradient descent (PGD) is a common choice for unrolling physics-driven DL reconstruction methods. However, PGD has a slow convergence rate, necessitating more unrolled iterations, which leads to memory issues during training and slower reconstruction at test time. Inspired by efficient variants of PGD that exploit a history of previous iterates, in this article we propose a history-cognizant unrolling of the optimization algorithm with dense connections across iterations for improved performance. In our approach, each gradient descent step is calculated at a trainable combination of the outputs of all previous regularization units. We also apply this idea to unrolling variable splitting methods with quadratic relaxation. Our results on reconstruction of the fastMRI knee dataset show that the proposed history-cognizant approach reduces residual aliasing artifacts compared to its conventional unrolled counterpart, without requiring extra computational power or increasing reconstruction time.
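The core idea of the abstract can be sketched in a few lines: in a conventional unrolled PGD network, iteration k takes a gradient (data-consistency) step from the previous regularizer output alone, whereas the history-cognizant variant takes it from a trainable weighted combination of *all* previous regularizer outputs. The sketch below is a minimal, illustrative implementation, not the authors' code: it uses a real-valued least-squares forward model in place of the multi-coil MRI encoding operator, soft-thresholding as a stand-in for the learned CNN regularizer, and fixed uniform combination weights where the paper's network would learn them end-to-end.

```python
import numpy as np

def soft_threshold(x, lam):
    # Stand-in for the learned regularization unit (a CNN in the paper);
    # soft-thresholding is the proximal operator of an l1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def history_cognizant_pgd(A, y, n_iters=10, lam=0.01, weights=None):
    """Unrolled PGD with dense connections across iterations.

    At iteration k, the gradient step is evaluated at a weighted
    combination of the outputs of ALL previous regularization units.
    `weights[k]` holds the combination coefficients for iteration k
    (trainable in the paper); uniform averaging is used as a placeholder.
    """
    m, n = A.shape
    eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size <= 1/L (Lipschitz bound)
    history = [np.zeros(n)]                # outputs of regularization units so far
    for k in range(n_iters):
        w = (weights[k] if weights is not None
             else np.ones(len(history)) / len(history))
        z = sum(wi * h for wi, h in zip(w, history))   # dense combination
        grad_step = z - eta * A.T @ (A @ z - y)        # data-consistency step
        history.append(soft_threshold(grad_step, lam)) # regularization unit
    return history[-1]

# Toy usage on a small compressed-sensing-style problem (hypothetical data):
np.random.seed(0)
n, m = 50, 30
A = np.random.randn(m, n) / np.sqrt(m)   # random forward operator
x_true = np.zeros(n)
x_true[[3, 10, 20]] = [1.0, -0.5, 0.8]   # sparse ground truth
y = A @ x_true                           # noiseless measurements
x_hat = history_cognizant_pgd(A, y, n_iters=50, lam=0.005)
```

Setting `weights[k]` to place all mass on the most recent entry of `history` recovers conventional unrolled PGD, which makes the dense-connection variant a strict generalization; in the paper these coefficients are learned jointly with the regularizer parameters during end-to-end training.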
