Unbiased Multiple Instance Learning for Weakly Supervised Video Anomaly Detection
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2023-03-22 , DOI: arxiv-2303.12369
Hui Lv, Zhongqi Yue, Qianru Sun, Bin Luo, Zhen Cui, Hanwang Zhang

Weakly Supervised Video Anomaly Detection (WSVAD) is challenging because the binary anomaly label is given only at the video level, while the output requires snippet-level predictions. Multiple Instance Learning (MIL) is therefore the prevailing approach in WSVAD. However, MIL is known to suffer from many false alarms: the snippet-level detector is easily biased towards abnormal snippets with simple context, gets confused by normal snippets sharing the same bias, and misses anomalies with different patterns. To this end, we propose a new MIL framework, Unbiased MIL (UMIL), which learns unbiased anomaly features that improve WSVAD. At each MIL training iteration, we use the current detector to divide the samples into two groups with different context biases: the most confident abnormal/normal snippets and the remaining ambiguous ones. Then, by seeking the features that are invariant across the two groups, we can remove the varying context biases. Extensive experiments on the UCF-Crime and TAD benchmarks demonstrate the effectiveness of our UMIL. Our code is provided at https://github.com/ktr-hubrt/UMIL.
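The per-iteration grouping step described in the abstract can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: the threshold values `hi` and `lo` and the function name are assumptions, and the subsequent invariant-feature learning step is not shown.

```python
import numpy as np

def split_by_confidence(scores, hi=0.8, lo=0.2):
    """Split snippet anomaly scores into a confident group
    (clearly abnormal, score >= hi, or clearly normal, score <= lo)
    and an ambiguous group (everything in between).

    Returns (confident_indices, ambiguous_indices).
    Thresholds are hypothetical, for illustration only.
    """
    scores = np.asarray(scores, dtype=float)
    confident = (scores >= hi) | (scores <= lo)
    return np.where(confident)[0], np.where(~confident)[0]

# Toy example: six snippet scores from a hypothetical detector.
conf_idx, ambig_idx = split_by_confidence([0.95, 0.1, 0.5, 0.85, 0.4, 0.05])
# conf_idx  -> snippets 0, 1, 3, 5 (confident abnormal/normal)
# ambig_idx -> snippets 2, 4 (ambiguous, context-biased differently)
```

In the UMIL framework, features that remain invariant across these two differently biased groups are then sought, so that context-specific biases cancel out.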

Updated: 2023-03-23