Spatial-Aware Texture Transformer for High-Fidelity Garment Transfer
IEEE Transactions on Image Processing (IF 10.6) | Pub Date: 2021-08-30 | DOI: 10.1109/tip.2021.3107235
Ting Liu, Jianfeng Zhang, Xuecheng Nie, Yunchao Wei, Shikui Wei, Yao Zhao, Jiashi Feng

Garment transfer aims to transfer a desired garment from a model image to a target person, and has attracted a great deal of attention due to its wide range of potential applications. However, since the model and target person are often captured from different views and with different body shapes and poses, realistic garment transfer faces the following challenges that have not been well addressed: 1) deforming the garment; 2) inferring unobserved appearance; 3) preserving fine texture details. To tackle these challenges, we propose a novel SPatial-Aware Texture Transformer (SPATT) model. Different from existing models, SPATT establishes correspondence and infers unobserved clothing appearance by leveraging the spatial prior information of UV space. Specifically, the source image is transformed into a partial UV texture map guided by the extracted dense pose. To better infer the unseen appearance from the seen regions, we first propose a novel coordinate-prior map that defines the spatial relationships between coordinates in the UV texture map, and design an algorithm to compute it. Based on the proposed coordinate-prior map, we present a novel spatial-aware texture generation network to complete the partial UV texture. In the second stage, we first transform the completed UV texture to fit the target person. To polish details and improve realism, we introduce a refinement generative network conditioned on the warped image and the source input. Experiments show that, compared with existing frameworks, the proposed framework generates more realistic images with better-preserved texture details. Furthermore, difficult cases in which the two persons differ greatly in pose and view are also handled well by SPATT.
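The abstract does not spell out how the coordinate-prior map is computed, so the following is only a minimal illustrative sketch, assuming the prior attaches to each texel its normalized (u, v) position together with its distance to the nearest visible texel. The function name `coordinate_prior_map` and the three-channel layout are hypothetical choices for illustration, not the paper's actual algorithm.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def coordinate_prior_map(visible_mask: np.ndarray) -> np.ndarray:
    """Toy coordinate prior for an H x W UV texture map.

    Channels: normalized u, normalized v, and normalized Euclidean
    distance from each texel to the nearest visible (seen) texel.
    """
    h, w = visible_mask.shape
    v, u = np.meshgrid(np.linspace(0.0, 1.0, h),
                       np.linspace(0.0, 1.0, w), indexing="ij")
    # distance_transform_edt measures distance to the nearest zero,
    # so pass the *invisible* mask: visible texels become the zeros.
    dist = distance_transform_edt(~visible_mask.astype(bool))
    if dist.max() > 0:
        dist = dist / dist.max()
    return np.stack([u, v, dist], axis=0)  # shape (3, H, W)

# Example: a 4x4 texture where only the top-left 2x2 block is visible.
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
print(coordinate_prior_map(mask).shape)  # (3, 4, 4)
```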
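As a second illustration, here is a hedged skeleton of the two-stage pipeline the abstract describes: stage one completes the partial UV texture conditioned on the coordinate prior, and stage two warps the completed texture onto the target person's dense-pose UV lookup and refines the result against the source input. The tiny convolutional placeholders, the module name `SPATTPipeline`, and the use of `grid_sample` for the UV-to-image warp are assumptions made to keep the sketch runnable; they do not reproduce the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPATTPipeline(nn.Module):
    """Hypothetical two-stage skeleton of the pipeline in the abstract."""

    def __init__(self):
        super().__init__()
        # Stage 1 placeholder: spatial-aware texture generation network.
        # Input: partial UV texture (3) + visibility mask (1) + coordinate prior (3).
        self.completion_net = nn.Sequential(
            nn.Conv2d(7, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
        # Stage 2 placeholder: refinement network conditioned on the
        # warped image (3) concatenated with the source image (3).
        self.refine_net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, partial_uv, vis_mask, coord_prior,
                target_uv_grid, source_img):
        # Stage 1: complete the partial UV texture.
        x = torch.cat([partial_uv, vis_mask, coord_prior], dim=1)
        full_uv = self.completion_net(x)
        # Stage 2: warp the completed texture onto the target person via
        # the target's dense-pose UV grid (values in [-1, 1]), then refine.
        warped = F.grid_sample(full_uv, target_uv_grid, align_corners=False)
        return self.refine_net(torch.cat([warped, source_img], dim=1))

# Usage with random tensors of plausible shapes.
n, h, w = 1, 64, 64
model = SPATTPipeline()
out = model(torch.randn(n, 3, h, w), torch.randn(n, 1, h, w),
            torch.randn(n, 3, h, w), torch.rand(n, h, w, 2) * 2 - 1,
            torch.randn(n, 3, h, w))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```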

Last updated: 2021-09-07