Memory-efficient Learning for Large-scale Computational Imaging
IEEE Transactions on Computational Imaging (IF 4.2), Pub Date: 2020-01-01, DOI: 10.1109/tci.2020.3025735
Michael Kellman, Kevin Zhang, Eric Markley, Jon Tamir, Emrah Bostan, Michael Lustig, Laura Waller

Critical aspects of computational imaging systems, such as experimental design and image priors, can be optimized through deep networks formed by the unrolled iterations of classical physics-based reconstructions. Termed physics-based networks, they incorporate both the known physics of the system via its forward model, and the power of deep learning via data-driven training. However, for realistic large-scale physics-based networks, computing gradients via backpropagation is infeasible due to the memory limitations of graphics processing units. In this work, we propose a memory-efficient learning procedure that exploits the reversibility of the network's layers to enable physics-based learning for large-scale computational imaging systems. We demonstrate our method on a compressed sensing example, as well as two large-scale real-world systems: 3D multi-channel magnetic resonance imaging and super-resolution optical microscopy.
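The key idea above is that if each unrolled layer is invertible, the backward pass can *recompute* intermediate activations from the network output instead of caching them all, trading a little compute for a large memory saving. The following is a minimal sketch of that principle, not the paper's exact procedure: it uses a hypothetical additive coupling layer (RevNet-style) as a stand-in for one unrolled reconstruction iteration, and verifies that the layer inputs can be exactly reconstructed from the outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupling_forward(x1, x2, A, B):
    # Additive coupling layer: invertible by construction.
    y1 = x1 + A @ x2
    y2 = x2 + B @ y1
    return y1, y2

def coupling_inverse(y1, y2, A, B):
    # Exact inverse: recovers the layer's input from its output.
    x2 = y2 - B @ y1
    x1 = y1 - A @ x2
    return x1, x2

n, depth = 4, 3
params = [(rng.standard_normal((n, n)) * 0.1,
           rng.standard_normal((n, n)) * 0.1) for _ in range(depth)]

x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
x1_in, x2_in = x1.copy(), x2.copy()   # kept only to verify reconstruction

# Forward pass: a memory-efficient scheme would store only the FINAL
# (x1, x2), not the per-layer activation tape.
for A, B in params:
    x1, x2 = coupling_forward(x1, x2, A, B)

# Backward sweep: invert each layer to recover its input, which is what
# allows per-layer gradients to be formed without stored activations.
y1, y2 = x1, x2
for A, B in reversed(params):
    y1, y2 = coupling_inverse(y1, y2, A, B)

print(np.allclose(y1, x1_in), np.allclose(y2, x2_in))  # both True
```

In a full implementation, each inversion step would be followed by a local vector-Jacobian product to accumulate parameter gradients, so peak memory stays constant in network depth.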

Updated: 2020-01-01