1 Tianjin University 2 Cardiff University
* Corresponding author
Person image synthesis, e.g., pose transfer, is a challenging problem due to large pose variation and occlusion. Existing methods have difficulty predicting reasonable content for invisible regions and fail to decouple the shape and style of clothing, which limits their application to person image editing. In this paper, we propose PISE, a novel two-stage generative model for Person Image Synthesis and Editing, which can generate realistic person images with desired poses, textures, or semantic layouts. For human pose transfer, a parsing generator first synthesizes a human parsing map aligned with the target pose to represent the shape of clothing, and an image generator then produces the final image. To decouple the shape and style of clothing, we propose joint global and local per-region encoding and normalization, which predicts a reasonable clothing style for invisible regions. We further propose spatial-aware normalization to retain the spatial context of the source image. Qualitative and quantitative experiments demonstrate the superiority of our model on human pose transfer, and results on texture transfer and region editing show that our model also applies to person image editing.
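The per-region idea above can be illustrated with a minimal sketch: normalize features independently within each semantic region of the parsing map, then modulate each region with its own style code. This is a simplified NumPy illustration, not the paper's actual module; the function name, the `styles` dictionary, and the affine (gamma, beta) style representation are assumptions for demonstration.

```python
import numpy as np

def per_region_normalize(feat, parsing, styles, eps=1e-5):
    """Region-wise normalization (illustrative sketch, not the paper's code).

    For each semantic region given by the parsing map, normalize the
    features over that region's pixels, then modulate them with a
    per-region style (gamma, beta).

    feat:    (C, H, W) feature map
    parsing: (H, W) integer semantic labels
    styles:  dict mapping label -> (gamma, beta), each of shape (C,)
    """
    out = np.zeros_like(feat)
    for label, (gamma, beta) in styles.items():
        mask = parsing == label           # (H, W) boolean region mask
        if not mask.any():
            continue                      # region absent in this image
        region = feat[:, mask]            # (C, N) pixels of this region
        mean = region.mean(axis=1, keepdims=True)
        std = region.std(axis=1, keepdims=True)
        normed = (region - mean) / (std + eps)
        # inject the region's style as an affine modulation
        out[:, mask] = gamma[:, None] * normed + beta[:, None]
    return out
```

Because normalization statistics are computed per region rather than over the whole feature map, swapping one region's (gamma, beta) changes only that region's style, which is what makes region-level editing possible.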
Figure 1. Method overview.
Figure 2. Results compared with state-of-the-art methods.
Figure 3. Results of texture transfer for dress (left) and pants (right) using our method.
Figure 4. Results of region editing.
Jinsong Zhang, Kun Li, Yu-Kun Lai, Jingyu Yang. "PISE: Person Image Synthesis and Editing with Decoupled GAN". In Proc. Computer Vision and Pattern Recognition (CVPR), 2021.
@inproceedings{PISE,
  author    = {Zhang, Jinsong and Li, Kun and Lai, Yu-Kun and Yang, Jingyu},
  title     = {{PISE}: Person Image Synthesis and Editing with Decoupled GAN},
  booktitle = {Proc. Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021},
}