Densely connected network with improved pyramidal bottleneck residual units for super-resolution
Journal of Visual Communication and Image Representation ( IF 2.6 ) Pub Date : 2020-12-24 , DOI: 10.1016/j.jvcir.2020.102963
Feilong Cao , Baijie Chen

Recent studies have shown that super-resolution can be significantly improved by using deep convolutional neural networks. Although applying a larger number of convolution kernels can extract more features, increasing the number of feature maps dramatically increases the number of training parameters and the time complexity. To balance the workload among all units while maintaining appropriate time complexity, this paper proposes a new network structure for super-resolution. To make full use of context information, the operations of division (S) and fusion (C) are added to the pyramidal bottleneck residual units, and dense connections are used. The proposed network includes a preliminary feature-extraction net, seven residual units with dense connections, a 1×1 convolution layer after each residual unit, and a deconvolution layer. Experimental results show that the proposed network outperforms most existing methods.
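The dense-connection data flow described above (each unit receives the concatenation of all earlier feature maps, followed by a 1×1 convolution that restores the channel count) can be sketched as follows. This is a minimal numpy illustration under stated assumptions, not the authors' implementation: the residual unit here is a placeholder (a single channel-mixing map plus skip connection) standing in for the improved pyramidal bottleneck unit, and the weights are random since the paper's trained parameters are not given.

```python
import numpy as np

def conv1x1(x, w):
    # A 1x1 convolution is a per-pixel linear map over channels.
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.tensordot(w, x, axes=([1], [0]))

def residual_unit(x, w):
    # Placeholder for one pyramidal bottleneck residual unit
    # (hypothetical simplification: one channel map + skip connection).
    return x + conv1x1(x, w)

def densely_connected_stack(x, n_units=7, seed=0):
    # Dense connections: each unit sees the concatenation (fusion, C)
    # of all previous outputs; the trailing 1x1 conv after each unit
    # reduces the grown channel dimension back to the original count.
    rng = np.random.default_rng(seed)
    c = x.shape[0]
    feats = [x]
    for _ in range(n_units):
        cat = np.concatenate(feats, axis=0)          # fuse earlier features
        w_reduce = rng.standard_normal((c, cat.shape[0])) * 0.01
        reduced = conv1x1(cat, w_reduce)             # 1x1 conv after the unit
        w_unit = rng.standard_normal((c, c)) * 0.01
        feats.append(residual_unit(reduced, w_unit))
    return feats[-1]
```

Note that without the 1×1 reduction the channel count would grow linearly with depth (as in a plain DenseNet), which is exactly the parameter blow-up the abstract argues against; the per-unit reduction keeps every unit's workload balanced.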



Updated: 2021-01-01