Smoothed dilated convolutions for improved dense prediction
Data Mining and Knowledge Discovery (IF 2.8), Pub Date: 2021-05-12, DOI: 10.1007/s10618-021-00765-5
Zhengyang Wang, Shuiwang Ji

Dilated convolutions, also known as atrous convolutions, have been widely explored in deep convolutional neural networks (DCNNs) for various dense prediction tasks. However, dilated convolutions suffer from gridding artifacts, which hamper performance. In this work, we propose two simple yet effective degridding methods by studying a decomposition of dilated convolutions. Unlike existing models, which explore solutions by focusing on a block of cascaded dilated convolutional layers, our methods address the gridding artifacts by smoothing the dilated convolution itself. In addition, we point out that the two degridding approaches are intrinsically related, and we define separable and shared (SS) operations, which generalize the proposed methods. We further explore SS operations in view of operations on graphs and propose the SS output layer, which is able to smooth an entire DCNN by replacing only the output layer. We evaluate our degridding methods and the SS output layer thoroughly, and visualize the smoothing effect through effective receptive field analysis. Results show that our degridding methods yield consistent improvements in performance on dense prediction tasks while adding negligible numbers of extra training parameters. Moreover, the SS output layer improves performance by 3.3% while containing only 9% of the training parameters of the original output layer.
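The core idea can be illustrated in one dimension: a dilated convolution only reads inputs at multiples of the dilation rate, which is the source of the gridding artifacts, and one of the degridding strategies described above amounts to smoothing the input with a small shared filter before the dilated convolution is applied. The sketch below is illustrative only, not the paper's implementation; the helper names and the choice of smoothing filter `s` are hypothetical.

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """1-D dilated convolution with 'valid' padding:
    out[i] = sum_j w[j] * x[i + j * rate].
    Each output sees only inputs spaced `rate` apart (the gridding pattern)."""
    k = len(w)
    n = len(x) - (k - 1) * rate
    return np.array([np.dot(w, x[i : i + k * rate : rate]) for i in range(n)])

def smoothed_dilated_conv1d(x, w, rate, s):
    """Hypothetical degridding sketch: smooth the input with a shared
    stride-1 filter s before the dilated convolution, so the effective
    receptive field no longer skips positions between the dilated taps."""
    xs = np.convolve(x, s, mode="same")  # shared smoothing filter (assumption)
    return dilated_conv1d(xs, w, rate)
```

For example, with `w = [1, 1, 1]` and `rate = 2`, `dilated_conv1d` sums every second input, leaving gaps in the receptive field; passing a small averaging filter such as `s = [0.25, 0.5, 0.25]` to `smoothed_dilated_conv1d` fills those gaps at the cost of a few extra parameters, mirroring the paper's observation that degridding adds a negligible number of weights.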




Updated: 2021-05-12