CNF+CT: Context Network Fusion of Cascade Trained Convolutional Neural Networks for Image Super-Resolution
IEEE Transactions on Computational Imaging ( IF 5.4 ) Pub Date : 2020-01-01 , DOI: 10.1109/tci.2019.2956874
Haoyu Ren , Mostafa El-Khamy , Jungwon Lee

A novel cascade learning framework is introduced to incrementally train deeper and more accurate convolutional neural networks. The proposed cascade learning facilitates the training of deep, efficient networks with plain convolutional neural network (CNN) architectures as well as with residual network (ResNet) architectures. This is demonstrated on the problem of image super-resolution (SR). We show that cascade-trained (CT) SR CNNs and CT-ResNets can achieve state-of-the-art results with fewer network parameters. To further improve the network's efficiency, we propose a cascade trimming strategy that progressively reduces the network size, trimming a group of layers at a time while preserving the network's discriminative ability. We propose context network fusion (CNF) as a method to combine features from an ensemble of networks through context fusion layers. We show that CNF of an ensemble of CT SR networks can yield a network with better efficiency and accuracy than other fusion methods. CNF can also be trained with the proposed edge-aware loss function to obtain sharper edges and improve perceptual image quality. Experiments on benchmark datasets show that our proposed deep convolutional networks achieve state-of-the-art accuracy and are much faster than existing deep super-resolution networks.
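The abstract does not spell out the fusion-layer design or the exact form of the edge-aware loss; those details are in the full paper. As a rough sketch of the two ideas, the snippet below models a context fusion layer as channel-wise concatenation of the ensemble's feature maps followed by a learned 1×1 convolution (expressed here as a matrix product), and models an edge-aware loss as a pixel loss plus a penalty on gradient differences. All names (`context_fusion`, `edge_aware_loss`, the weight `lam`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def context_fusion(feature_maps, weights, bias):
    """Sketch of a context fusion layer (assumed design, not the paper's code).

    feature_maps: list of (C, H, W) arrays, one per ensemble network.
    weights: (C_out, N*C) matrix acting as a 1x1 convolution over the
             concatenated channels; bias: (C_out,) vector.
    """
    x = np.concatenate(feature_maps, axis=0)          # (N*C, H, W)
    c, h, w = x.shape
    fused = weights @ x.reshape(c, h * w) + bias[:, None]
    return fused.reshape(-1, h, w)                    # (C_out, H, W)

def edge_aware_loss(sr, hr, lam=0.1):
    """Sketch of an edge-aware loss (assumed form): pixel MSE plus a
    penalty on horizontal/vertical gradient differences, so the network
    is rewarded for reproducing the high-resolution image's edges."""
    pix = np.mean((sr - hr) ** 2)
    gx = np.mean((np.diff(sr, axis=-1) - np.diff(hr, axis=-1)) ** 2)
    gy = np.mean((np.diff(sr, axis=-2) - np.diff(hr, axis=-2)) ** 2)
    return pix + lam * (gx + gy)

# Toy usage: fuse two 4-channel feature maps down to 8 output channels.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 8, 8)) for _ in range(2)]
w, b = rng.standard_normal((8, 8)), np.zeros(8)
fused = context_fusion(feats, w, b)                   # shape (8, 8, 8)
hr = rng.standard_normal((1, 16, 16))
```

The 1×1-convolution view of fusion is what lets CNF learn per-channel weights for each ensemble member's contribution rather than simply averaging outputs, which is the intuition behind the claim that CNF outperforms other fusion methods.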

Updated: 2020-01-01