Model-Agnostic Explainability for Visual Search
arXiv - CS - Human-Computer Interaction. Pub Date: 2021-02-28, DOI: arxiv-2103.00370
Mark Hamilton, Scott Lundberg, Lei Zhang, Stephanie Fu, William T. Freeman

What makes two images similar? We propose new approaches to generate model-agnostic explanations for image similarity, search, and retrieval. In particular, we extend Class Activation Maps (CAMs), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME) to the domain of image retrieval and search. These approaches enable black- and grey-box model introspection and can help diagnose errors and understand the rationale behind a model's similarity judgments. Furthermore, we extend these approaches to extract a full pairwise correspondence between the query and retrieved image pixels, an approach we call "joint interpretations". Formally, we show that joint search interpretations arise from projecting Harsanyi dividends, and that this approach generalizes Shapley values and the Shapley-Taylor indices. We introduce a fast kernel-based method for estimating Shapley-Taylor indices and empirically show that these game-theoretic measures yield more consistent explanations for image similarity architectures.
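To make the idea of a model-agnostic similarity explanation concrete, here is a minimal LIME-style sketch, not the authors' implementation: it masks grid cells of the query image, scores each perturbed query against the retrieved image with a black-box similarity function, and fits a weighted linear surrogate whose coefficients form a per-cell importance map. The `similarity` function, the grid size, and the kernel width are illustrative assumptions standing in for an arbitrary retrieval model.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity(query, retrieved):
    """Black-box similarity score: cosine similarity of flattened images.
    Stands in for any retrieval model's scoring function."""
    q, r = query.ravel(), retrieved.ravel()
    return float(q @ r / (np.linalg.norm(q) * np.linalg.norm(r) + 1e-8))

def lime_similarity_map(query, retrieved, grid=4, n_samples=200):
    """LIME-style importance map for a similarity judgment: randomly
    mask grid cells of the query, score each perturbed query against
    the retrieved image, then fit a kernel-weighted linear model from
    cell on/off indicators to the observed similarity scores."""
    h, w = query.shape
    ch, cw = h // grid, w // grid
    masks = rng.integers(0, 2, size=(n_samples, grid * grid))
    scores = np.empty(n_samples)
    for i, m in enumerate(masks):
        perturbed = query.copy()
        for cell, keep in enumerate(m):
            if not keep:  # zero out this grid cell of the query
                r, c = divmod(cell, grid)
                perturbed[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw] = 0.0
        scores[i] = similarity(perturbed, retrieved)
    # Exponential kernel: samples closer to the unmasked image weigh more.
    dist = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)
    # Weighted least squares via sqrt-weight scaling; column of ones = bias.
    X = np.c_[np.ones(n_samples), masks]
    sw = np.sqrt(weights)
    coef = np.linalg.lstsq(X * sw[:, None], scores * sw, rcond=None)[0]
    return coef[1:].reshape(grid, grid)  # per-cell importance heatmap

# Toy usage: both images share one bright patch, so the grid cell
# covering that patch should carry the largest importance.
query = np.zeros((32, 32)); query[8:16, 8:16] = 1.0
retrieved = np.zeros((32, 32)); retrieved[8:16, 8:16] = 1.0
heat = lime_similarity_map(query, retrieved)
```

A SHAP-style variant would replace the random-mask linear fit with Shapley value estimation over the same grid-cell players; the paper's "joint interpretations" go further by attributing to pairs of query and retrieved cells via Harsanyi dividends.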

Updated: 2021-03-02