Classifier-agnostic saliency map extraction
Computer Vision and Image Understanding ( IF 4.3 ) Pub Date : 2020-04-21 , DOI: 10.1016/j.cviu.2020.102969
Konrad Zolna , Krzysztof J. Geras , Kyunghyun Cho

Currently available methods for extracting saliency maps identify the parts of the input that are most important to a specific fixed classifier. We show that this strong dependence on a given classifier hinders their performance. To address this problem, we propose classifier-agnostic saliency map extraction, which finds all parts of the image that any classifier could use, not just the one given in advance. We observe that the proposed approach extracts higher-quality saliency maps than prior work while being conceptually simple and easy to implement. The method sets a new state-of-the-art result for the localization task on the ImageNet data, outperforming all existing weakly-supervised localization techniques, despite not using the ground-truth labels at inference time. The code reproducing the results is available at https://github.com/kondiz/casme.


