What Do Different Evaluation Metrics Tell Us About Saliency Models?
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8), Pub Date: 2018-03-13, DOI: 10.1109/tpami.2018.2815601
Zoya Bylinskii, Tilke Judd, Aude Oliva, Antonio Torralba, Fredo Durand

How best to evaluate a saliency model's ability to predict where humans look in images is an open research question. The choice of evaluation metric depends on how saliency is defined and how the ground truth is represented. Metrics differ in how they rank saliency models, and this results from how false positives and false negatives are treated, whether viewing biases are accounted for, whether spatial deviations are factored in, and how the saliency maps are pre-processed. In this paper, we provide an analysis of 8 different evaluation metrics and their properties. With the help of systematic experiments and visualizations of metric computations, we add interpretability to saliency scores and more transparency to the evaluation of saliency models. Building on the differences in metric properties and behaviors, we make recommendations for metric selection under specific assumptions and for specific applications.
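To make the abstract's trade-offs concrete, here is a minimal NumPy sketch of two standard fixation-prediction metrics from this family: NSS (Normalized Scanpath Saliency), which scores z-normalized saliency at discrete fixation locations, and CC (Pearson correlation), which compares the saliency map against a continuous fixation density map. The array shapes and the toy data are illustrative, not from the paper.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels.

    saliency_map: 2-D float array (model prediction).
    fixation_map: 2-D binary array (1 at human fixation locations).
    """
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return s[fixation_map.astype(bool)].mean()

def cc(saliency_map, fixation_density):
    """Pearson linear correlation between a saliency map and a
    (typically Gaussian-blurred) fixation density map."""
    a = saliency_map - saliency_map.mean()
    b = fixation_density - fixation_density.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# Toy example: a map that peaks where the fixation falls scores well above
# chance (NSS = 0 corresponds to chance-level prediction).
rng = np.random.default_rng(0)
sal = rng.random((32, 32))
fix = np.zeros((32, 32))
fix[10, 10] = 1
sal[10, 10] = sal.max() + 1.0  # boost saliency at the fixated pixel
print(nss(sal, fix))
```

The two metrics illustrate the paper's point about ground-truth representation: NSS consumes a discrete fixation map and so is sensitive to false negatives at exact fixation points, while CC consumes a smoothed density map and therefore tolerates small spatial deviations.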

Updated: 2024-08-22