Open cross-domain visual search

https://doi.org/10.1016/j.cviu.2020.103045
Open access under a Creative Commons license

Highlights

  • We open visual category search to multiple domains.

  • We introduce domain-specific prototype learners to map visual inputs to a common semantic space where search occurs.

  • We propose three open scenarios to search from and within multiple domains simultaneously.

  • We compare to three well-established sketch-based search tasks in the closed setting and achieve competitive results.

Abstract

This paper addresses cross-domain visual search, where visual queries retrieve category samples from a different domain. For example, we may want to sketch an airplane and retrieve photographs of airplanes. Despite considerable progress, the search typically occurs in a closed setting between two pre-defined domains. In this paper, we take a step towards an open setting where multiple visual domains are available. This notably translates into a search between any pair of domains, from a combination of domains, or within multiple domains. We introduce a simple – yet effective – approach. We formulate the search as a mapping from every visual domain to a common semantic space, where categories are represented by hyperspherical prototypes. Open cross-domain visual search is then performed by searching in the common semantic space, regardless of which domains are used as source or target. Domains are combined in the common space to search from or within multiple domains simultaneously. Training every domain-specific mapping function separately enables efficient scaling to any number of domains without affecting search performance. We empirically illustrate our capability to perform open cross-domain visual search in three different scenarios. Our approach is competitive with respect to existing closed settings, where we obtain state-of-the-art results on several benchmarks for three sketch-based search tasks.
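The retrieval mechanism the abstract describes can be sketched compactly: each domain gets its own mapping function onto a shared unit hypersphere where every category is a fixed prototype, and search is nearest-neighbour ranking by cosine similarity in that common space, regardless of which domains serve as source or target. The following is a minimal illustrative sketch, not the authors' implementation; the linear encoders, random prototypes, and dimensions are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_categories = 8, 4

# Fixed hyperspherical class prototypes shared by all domains
# (assumption: random unit vectors stand in for learned prototypes).
prototypes = rng.normal(size=(n_categories, dim))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def make_encoder(in_dim):
    """Stand-in for a trained domain-specific mapping function.

    Maps inputs of a given domain onto the common unit hypersphere.
    """
    W = rng.normal(size=(in_dim, dim))
    def encode(x):
        z = x @ W
        return z / np.linalg.norm(z, axis=1, keepdims=True)
    return encode

encode_sketch = make_encoder(16)  # e.g. the sketch domain
encode_photo = make_encoder(32)   # e.g. the photo domain

# Embed a sketch query and a photo gallery into the common space.
query = encode_sketch(rng.normal(size=(1, 16)))
gallery = encode_photo(rng.normal(size=(10, 32)))

# Cross-domain search: rank gallery items by cosine similarity.
# Because both domains live on the same hypersphere, the same ranking
# works for any source/target pair, or for a gallery that mixes domains.
ranking = np.argsort(-(gallery @ query.T).ravel())
print(ranking)  # gallery indices, best match first
```

Because each encoder is trained independently against the same fixed prototypes, adding a new domain only requires training one new mapping function; the existing ones, and hence the search, are untouched.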
