Cross-domain representation learning by domain-migration generative adversarial network for sketch based image retrieval
Journal of Visual Communication and Image Representation ( IF 2.6 ) Pub Date : 2020-05-26 , DOI: 10.1016/j.jvcir.2020.102835
Cong Bai , Jian Chen , Qing Ma , Pengyi Hao , Shengyong Chen

Sketch-based image retrieval (SBIR), which uses free-hand sketches to search for images containing similar objects or scenes, is attracting increasing attention, as sketches have become easier to produce with the development of touch devices. However, this task is difficult because of the large differences between sketches and images. In this paper, we propose a cross-domain representation learning framework to reduce these differences for SBIR. The framework aims to translate sketches into images using information learned in both the sketch domain and the image domain through the proposed domain-migration generative adversarial network (DMGAN). Furthermore, to reduce the representation gap between the generated images and natural images, a similarity learning network (SLN) is also proposed, with a newly designed loss function that incorporates semantic information. Extensive experiments have been conducted from different aspects, including comparisons with state-of-the-art methods. The results show that the proposed DMGAN and SLN are effective for SBIR.
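The abstract does not include code, but the two objectives it describes can be illustrated with a minimal numerical sketch: an adversarial (binary cross-entropy) loss of the kind used to train a GAN generator/discriminator pair such as DMGAN, and a triplet-style similarity loss that pulls generated-image features toward same-category natural-image features while pushing them away from other categories, as a stand-in for the SLN's semantic loss. All function names, feature values, and the margin value below are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-12):
    # Standard binary cross-entropy, the usual GAN adversarial objective:
    # the discriminator minimizes it on real/fake labels, the generator
    # minimizes it with the labels flipped.
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def triplet_similarity_loss(anchor, positive, negative, margin=0.2):
    # Semantic similarity term (illustrative): the anchor is a generated-image
    # feature, the positive a natural image of the same category, the
    # negative a natural image of a different category.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return float(max(0.0, d_pos - d_neg + margin))

# Toy 2-D features for one generated image and two natural images.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # same category, close by
negative = np.array([0.0, 1.0])   # different category, far away

# Discriminator scores on two generated samples, labeled as "real" (1.0)
# from the generator's point of view.
adv = bce_loss(np.array([0.8, 0.3]), np.array([1.0, 1.0]))
sim = triplet_similarity_loss(anchor, positive, negative)
```

In a real training loop these two terms would be weighted and summed into a single objective; here the triplet loss evaluates to zero because the positive is already much closer to the anchor than the negative is.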




Updated: 2020-05-26