Universal Face Photo-Sketch Style Transfer via Multiview Domain Translation.
IEEE Transactions on Image Processing (IF 10.6), Pub Date: 2020-08-19, DOI: 10.1109/tip.2020.3016502
Chunlei Peng , Nannan Wang , Jie Li , Xinbo Gao

Face photo-sketch style transfer aims to convert a representation of a face from the photo (or sketch) domain to the sketch (respectively, photo) domain while preserving the character of the subject. It has wide-ranging applications in law enforcement, forensic investigation, and digital entertainment. However, conventional face photo-sketch synthesis methods usually require training images from both the source domain and the target domain, and are limited in that they cannot be applied to universal conditions where collecting training images in the source domain that match the style of the test image is impractical. This problem entails two major challenges: 1) designing an effective and robust domain translation model for the universal situation in which images of the source domain needed for training are unavailable, and 2) preserving the facial character while performing a transfer to the style of an entire image collection in the target domain. To this end, we present a novel universal face photo-sketch style transfer method that does not need any image from the source domain for training. The regression relationship between an input test image and the entire training image collection in the target domain is inferred via a deep domain translation framework, in which a domain-wise adaptation term and a local consistency adaptation term are developed. To improve the robustness of the style transfer process, we propose a multiview domain translation method that flexibly leverages a convolutional neural network representation together with hand-crafted features in an optimal way. Qualitative and quantitative comparisons are provided for universal unconstrained conditions of unavailable training images from the source domain, demonstrating the effectiveness and superiority of our method for universal face photo-sketch style transfer.
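The multiview regression idea described in the abstract can be illustrated with a minimal sketch: represent a test image in two feature "views" (a hand-crafted descriptor and a learned embedding), measure similarity to every image in the target-domain collection in both views, and synthesize the output as a similarity-weighted combination of the collection. Everything below is an illustrative assumption, not the authors' implementation: the gradient-histogram descriptor is a toy stand-in for hand-crafted features, and a fixed random projection stands in for a CNN embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def handcrafted_features(img):
    # Toy hand-crafted descriptor: histogram of gradient magnitudes.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=16, range=(0.0, mag.max() + 1e-8))
    return hist / (hist.sum() + 1e-8)

def embedding_features(img, proj):
    # Stand-in for a CNN representation: a fixed random projection
    # of the flattened pixels, squashed through tanh.
    return np.tanh(proj @ img.ravel())

def multiview_regression(test_img, collection, proj, alpha=0.5):
    # Weight each target-domain image by its similarity to the test
    # image under BOTH views (alpha balances the two views), then
    # synthesize the output as the weighted average of the collection.
    h_t = handcrafted_features(test_img)
    c_t = embedding_features(test_img, proj)
    weights = []
    for img in collection:
        d = (alpha * np.linalg.norm(h_t - handcrafted_features(img))
             + (1 - alpha) * np.linalg.norm(c_t - embedding_features(img, proj)))
        weights.append(np.exp(-d))
    w = np.array(weights)
    w /= w.sum()
    return np.tensordot(w, np.stack(collection), axes=1)

# Demo with random 8x8 "images" standing in for a sketch collection.
collection = [rng.random((8, 8)) for _ in range(5)]
test_img = rng.random((8, 8))
proj = rng.standard_normal((32, 64))  # 64 = 8*8 flattened pixels
out = multiview_regression(test_img, collection, proj)
assert out.shape == (8, 8)
```

Because the output is a convex combination of collection images, its values stay within the range of the collection, which loosely mirrors the idea of transferring the style of an entire target-domain collection rather than a single exemplar. The paper's actual framework additionally involves the domain-wise and local consistency adaptation terms, which are not modeled here.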

Updated: 2020-08-28