Investigation of out-of-distribution detection across various models and training methodologies
Neural Networks (IF 7.8) Pub Date: 2024-04-04, DOI: 10.1016/j.neunet.2024.106288
Byung Chun Kim, Byungro Kim, Yoonsuk Hyun

Machine learning-based algorithms demonstrate impressive performance across numerous fields; however, they still suffer from certain limitations. Even sophisticated and accurate algorithms often make erroneous predictions when applied to datasets whose distribution differs from that of the training set. Out-of-distribution (OOD) detection, which distinguishes data drawn from a different distribution than the training set, is a critical research area for overcoming these limitations and building more reliable algorithms. The OOD problem, particularly for image data, has been studied extensively. However, recently developed OOD methods do not fulfill the expectation that OOD performance increases as in-distribution classification accuracy improves. Our research presents a comprehensive study of OOD detection performance across multiple models and training methodologies to verify this phenomenon. Specifically, we evaluate various pre-trained models popular in computer vision with both older and newer OOD detection methods. The experimental results highlight the performance disparities among existing OOD methods. Based on these observations, we introduce Trimmed Rank with Inverse softMax probability (TRIM), a remarkably simple yet effective method for model weights obtained with newly developed training methods. Owing to its promising results, the proposed method could serve as a potential tool for enhancing OOD detection performance. The OOD performance of TRIM aligns closely with a model's in-distribution accuracy and may bridge efforts to improve in-distribution accuracy with the ability to distinguish OOD data.
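The abstract does not specify TRIM's implementation, so the sketch below illustrates only the general setup it builds on: an OOD detector assigns each input a scalar score from a classifier's outputs and thresholds it. This is the classic maximum-softmax-probability (MSP) baseline, one of the "older" methods such studies compare against — not the authors' TRIM method; the function names and example logits are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_ood_score(logits):
    """MSP baseline: score = 1 - max softmax probability.
    Higher score = the model is less confident = more OOD-like."""
    return 1.0 - softmax(logits).max(axis=-1)

# A peaked prediction looks in-distribution; a flat one looks OOD.
in_dist_logits = np.array([[9.0, 0.5, 0.2]])   # confident
ood_logits = np.array([[1.1, 1.0, 0.9]])       # uncertain

print(msp_ood_score(in_dist_logits)[0] < msp_ood_score(ood_logits)[0])
```

In practice a threshold on this score is chosen on held-out in-distribution data (e.g. at 95% true-positive rate), and detectors are compared by metrics such as AUROC and FPR95; the paper's finding is that newer training recipes break the expected link between such scores and classification accuracy, which TRIM aims to restore.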

Updated: 2024-04-04