Denoising Autoencoders for Overgeneralization in Neural Networks.
IEEE Transactions on Pattern Analysis and Machine Intelligence ( IF 23.6 ) Pub Date : 2019-04-09 , DOI: 10.1109/tpami.2019.2909876
Giacomo Spigler

Despite recent developments that have allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization: they partition the full input space into the fixed set of target classes used during training. It is thus possible for novel inputs, belonging to categories unknown during training or even completely unrecognizable to humans, to fool the system into classifying them as one of the known classes, often with high confidence. This problem can create security risks in critical applications, and is closely linked to open set recognition and 1-class recognition. This paper presents a novel way to compute a confidence score using the reconstruction error of denoising autoencoders and shows how it can correctly identify the regions of the input space close to the training distribution. The proposed solution is tested on benchmarks of ‘fooling’, open set recognition and 1-class recognition constructed from the MNIST and Fashion-MNIST datasets.
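The core idea in the abstract — score an input's confidence by how well a denoising autoencoder trained on in-distribution data can reconstruct it — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses a tiny single-hidden-layer autoencoder on synthetic 20-dimensional data standing in for MNIST, and the `exp(-k * error)` mapping from reconstruction error to a (0, 1] confidence score is an assumed choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for in-distribution data (hypothetical, not the
# paper's benchmarks): samples cluster near a fixed template pattern.
d = 20
template = rng.uniform(0.0, 1.0, size=d)

def sample_in_dist(n):
    return np.clip(template + 0.05 * rng.standard_normal((n, d)), 0.0, 1.0)

X = sample_in_dist(500)

# Tiny single-hidden-layer denoising autoencoder (sigmoid units),
# trained by gradient descent to reconstruct clean inputs from
# noise-corrupted ones.
h = 8
W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, d)); b2 = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    noisy = X + 0.1 * rng.standard_normal(X.shape)   # input corruption
    H = sigmoid(noisy @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Squared-error gradient, backpropagated through both layers.
    dY = (Y - X) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * noisy.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

def reconstruction_error(x):
    H = sigmoid(x @ W1 + b1)
    return np.mean((sigmoid(H @ W2 + b2) - x) ** 2, axis=-1)

def confidence(x):
    # Assumed mapping: high score near the training distribution,
    # low score for inputs the autoencoder cannot reconstruct.
    return np.exp(-10.0 * reconstruction_error(x))

in_dist = sample_in_dist(100)
fooling = rng.uniform(0.0, 1.0, (100, d))  # unstructured 'fooling' inputs
print(confidence(in_dist).mean(), confidence(fooling).mean())
```

Inputs near the training distribution reconstruct well and receive scores near 1, while random inputs reconstruct poorly and are scored low, which is what lets the score reject fooling and open-set inputs that a closed-set classifier would confidently mislabel.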

Updated: 2020-03-06