Symmetrical filters in convolutional neural networks
International Journal of Machine Learning and Cybernetics ( IF 5.6 ) Pub Date : 2021-04-19 , DOI: 10.1007/s13042-021-01290-z
Gregory Dzhezyan , Hubert Cecotti

Symmetry is present in nature and science. In image processing, kernels for spatial filtering often possess some symmetry (e.g., Sobel operators, Gaussian and Laplacian kernels). Convolutional layers in artificial feed-forward neural networks have typically learned the kernel weights without any constraint. We propose to investigate the impact of a symmetry constraint in convolutional layers for image classification tasks, taking our inspiration from the processes involved in the primary visual cortex and from common image processing techniques. The goal is to determine whether it is necessary to learn each weight of a filter independently, to what extent symmetry constraints can be enforced on the filters throughout the training process by modifying the weight update performed during the backpropagation algorithm, and how this affects performance. The symmetry constraint reduces the number of free parameters in the network while achieving near-identical performance. We address the following cases: x/y-axis symmetry, point reflection, and anti-point reflection. The performance is evaluated on four databases of images representing handwritten digits. The results support the conclusion that while unconstrained weights offer more freedom to the model, the symmetry constraint provides a similar level of performance while substantially decreasing the number of free parameters in the model. Such an approach can be valuable in phase-sensitive applications that require a linear-phase property throughout the feature extraction process.
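The paper's exact weight-update rule is not reproduced in this abstract, but the four constraint classes it names can be illustrated by projecting a square kernel onto the corresponding symmetric subspace after each gradient step. A minimal NumPy sketch follows; the `symmetrize` helper and its mode names are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def symmetrize(kernel, mode):
    """Project a square kernel onto one of the symmetry classes
    named in the abstract (hypothetical helper, for illustration)."""
    if mode == "y-axis":       # left-right mirror: w(i, j) = w(i, -j)
        return 0.5 * (kernel + np.fliplr(kernel))
    if mode == "x-axis":       # up-down mirror: w(i, j) = w(-i, j)
        return 0.5 * (kernel + np.flipud(kernel))
    if mode == "point":        # point reflection: w(i, j) = w(-i, -j)
        return 0.5 * (kernel + np.rot90(kernel, 2))
    if mode == "anti-point":   # anti-point reflection: w(i, j) = -w(-i, -j)
        return 0.5 * (kernel - np.rot90(kernel, 2))
    raise ValueError(f"unknown mode: {mode}")

# One way to enforce the constraint during training: after each
# backpropagation update, re-project the kernel onto the subspace.
k = np.random.randn(3, 3)
k_sym = symmetrize(k, "point")
assert np.allclose(k_sym, np.rot90(k_sym, 2))  # point symmetry holds
```

Because each projection averages weight pairs, a constrained k x k filter carries roughly half the free parameters of an unconstrained one, which is consistent with the parameter reduction the abstract reports.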



