PoNA: Pose-Guided Non-Local Attention for Human Pose Transfer
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2020-10-13, DOI: 10.1109/tip.2020.3029455
Kun Li , Jinsong Zhang , Yebin Liu , Yu-Kun Lai , Qionghai Dai

Human pose transfer, which aims at transferring the appearance of a given person to a target pose, is very challenging and important in many applications. Previous work ignores the guidance of pose features or only uses local attention mechanism, leading to implausible and blurry results. We propose a new human pose transfer method using a generative adversarial network (GAN) with simplified cascaded blocks. In each block, we propose a pose-guided non-local attention (PoNA) mechanism with a long-range dependency scheme to select more important regions of image features to transfer. We also design pre-posed image-guided pose feature update and post-posed pose-guided image feature update to better utilize the pose and image features. Our network is simple, stable, and easy to train. Quantitative and qualitative results on Market-1501 and DeepFashion datasets show the efficacy and efficiency of our model. Compared with state-of-the-art methods, our model generates sharper and more realistic images with rich details, while having fewer parameters and faster speed. Furthermore, our generated images can help to alleviate data insufficiency for person re-identification.
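To make the core idea concrete, the sketch below shows one way a pose-guided non-local attention step could look: queries are derived from pose features while keys and values come from image features, so the target pose selects which image regions to transfer via a full (long-range) attention map over all spatial positions. This is a hypothetical illustration, not the paper's implementation; the function name, the linear projections `w_q`/`w_k`/`w_v`, and the residual update are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pose_guided_nonlocal_attention(img_feat, pose_feat, w_q, w_k, w_v):
    """Hypothetical sketch of a pose-guided non-local attention block.

    img_feat:  (N, C) image features, flattened over N spatial positions
    pose_feat: (N, C) pose features at the same positions
    w_q, w_k, w_v: (C, C) projection matrices (assumed, for illustration)

    Pose-derived queries attend over image-derived keys, so the pose
    guides which image regions are selected; the (N, N) attention map
    captures long-range dependencies between all position pairs.
    """
    q = pose_feat @ w_q                              # (N, C) queries from pose
    k = img_feat @ w_k                               # (N, C) keys from image
    v = img_feat @ w_v                               # (N, C) values from image
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]))    # (N, N), rows sum to 1
    return img_feat + attn @ v                       # residual update (assumption)
```

In a full model this block would sit inside each cascaded generator block, with the abstract's "pre-posed image-guided pose feature update" refining `pose_feat` before attention and the "post-posed pose-guided image feature update" refining the returned image features afterward.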

Updated: 2020-10-26