Latent feature decentralization loss for one-class anomaly detection
IEEE Access ( IF 3.4 ) Pub Date : 2020-01-01 , DOI: 10.1109/access.2020.3022646
Eungi Hong , Yoonsik Choe

Anomaly detection is essential for many real-world applications, such as video surveillance, disease diagnosis, and visual inspection. With the development of neural networks, many networks have been applied to anomaly detection by learning the distribution of normal data. However, they struggle to distinguish abnormalities when the normal and abnormal images are not significantly different. To mitigate this problem, we propose a novel loss function for one-class anomaly detection: decentralization loss. The main goal of the proposed method is to make the latent features of the encoder disperse over the manifold space, so that the decoder generates images similar to those of the normal class for any input. To this end, a decentralization term based on a dispersion measure of the latent vectors is added to the existing mean-squared-error loss. To obtain a solution that generalizes across datasets, we restrict the latent space by designing the decentralization loss term from an upper bound of the dispersion measure. As intended, a model trained with the proposed decentralization loss disperses the latent vectors over the manifold space and generates near-constant images. Consequently, the reconstruction error increases when the given test image is unknown. Experiments conducted on various datasets verify that the proposed loss improves detection performance by about 1% while reducing training time by 48%, without any structural changes to the conventional autoencoder.
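The abstract does not give the exact form of the dispersion measure, but the overall objective — mean-squared reconstruction error plus a term that rewards latent-vector spread — can be sketched as follows. This is a minimal illustrative sketch, assuming a simple dispersion measure (mean squared distance of each latent vector from the batch mean) and a hypothetical weighting hyperparameter `lam`; the paper's actual term is derived from an upper bound of its dispersion measure.

```python
def decentralization_loss(x, x_hat, z, lam=0.1):
    """Sketch of an MSE + latent-dispersion objective (not the paper's exact form).

    x, x_hat : batch of flattened images and their reconstructions (lists of lists)
    z        : batch of latent vectors from the encoder (list of lists)
    lam      : hypothetical weight on the dispersion term
    """
    # Mean-squared reconstruction error over all pixels in the batch.
    n = len(x) * len(x[0])
    mse = sum((a - b) ** 2
              for row_a, row_b in zip(x, x_hat)
              for a, b in zip(row_a, row_b)) / n

    # Dispersion: mean squared distance of each latent vector from the batch mean.
    k = len(z[0])
    mean = [sum(v[j] for v in z) / len(z) for j in range(k)]
    dispersion = sum(sum((v[j] - mean[j]) ** 2 for j in range(k)) for v in z) / len(z)

    # Subtracting the dispersion term rewards spread-out latent vectors,
    # so minimizing the loss encourages decentralization.
    return mse - lam * dispersion
```

Under this sketch, a batch whose latent vectors are spread around the mean yields a lower loss than one whose latent vectors have collapsed to a single point, for the same reconstruction error — which is the behavior the decentralization term is meant to induce.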

Updated: 2020-01-01