Adaptive convolution kernel for artificial neural networks
Journal of Visual Communication and Image Representation ( IF 2.6 ) Pub Date : 2021-01-02 , DOI: 10.1016/j.jvcir.2020.103015
F. Boray Tek , İlker Çam , Deniz Karlı

Many deep neural networks are built from stacked convolutional layers with fixed, single-size (often 3 × 3) kernels. This paper describes a method for learning the size of convolutional kernels, allowing kernels of varying sizes within a single layer. The method uses a differentiable, and therefore backpropagation-trainable, Gaussian envelope that can grow or shrink within a base grid. Our experiments compared the proposed adaptive layers to ordinary convolutional layers in a simple two-layer network, a deeper residual network, and a U-Net architecture. Results on popular image classification datasets such as MNIST, MNIST-CLUTTERED, CIFAR-10, Fashion, and "Faces in the Wild" showed that adaptive kernels can provide statistically significant improvements over ordinary convolutional kernels. A segmentation experiment on the Oxford-Pets dataset demonstrated that replacing ordinary convolutional layers in a U-shaped network with 7 × 7 adaptive layers can improve its learning performance and ability to generalize.
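The core idea (a learnable Gaussian envelope that modulates a fixed base kernel grid, making the effective kernel size trainable by backpropagation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-channel `log_sigma` parameterization and the initialization are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv2d(nn.Module):
    """Sketch of a conv layer whose weights are modulated by a learnable
    Gaussian envelope, so the effective kernel can grow or shrink inside
    a fixed base grid (here 7x7). Hypothetical parameterization: one
    width parameter per output channel; the paper's exact scheme may differ."""

    def __init__(self, in_ch, out_ch, base_size=7):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, base_size, base_size) * 0.1)
        # Learnable log-width per output channel; sigma = exp(log_sigma)
        # stays positive and is differentiable, so backprop can widen or
        # narrow each kernel's envelope.
        self.log_sigma = nn.Parameter(torch.zeros(out_ch))
        # Squared distance of each grid cell from the kernel center.
        coords = torch.arange(base_size, dtype=torch.float32) - (base_size - 1) / 2
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("r2", xx ** 2 + yy ** 2)

    def forward(self, x):
        sigma = self.log_sigma.exp().view(-1, 1, 1, 1)          # (out_ch,1,1,1)
        envelope = torch.exp(-self.r2 / (2 * sigma ** 2))       # (out_ch,1,k,k)
        kernel = self.weight * envelope                         # windowed weights
        return F.conv2d(x, kernel, padding=self.weight.shape[-1] // 2)
```

A small sigma suppresses the outer rings of the 7 × 7 grid, behaving like a smaller kernel; a large sigma lets the full grid contribute. Because the envelope is a smooth function of `log_sigma`, the effective size is learned jointly with the weights by ordinary gradient descent.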



Updated: 2021-01-13