CVPR 2022

High-Fidelity Human Avatars from a Single RGB Camera

 

Hao Zhao1, Jinsong Zhang1, Yu-Kun Lai2, Zerong Zheng3, Yingdi Xie4, Yebin Liu3, Kun Li1*

1 Tianjin University   2 Cardiff University   3 Tsinghua University   4 VRC Inc.  

  * Corresponding author

 

[Code] [Dataset] [Paper] [Supp]

 

Abstract

In this paper, we propose a coarse-to-fine framework to reconstruct a personalized high-fidelity human avatar from a monocular video. To deal with the misalignment caused by changing poses and shapes across frames, we design a dynamic surface network that recovers pose-dependent surface deformations, which helps to decouple the shape and texture of the person. To cope with the complexity of textures and generate photo-realistic results, we propose a reference-based neural rendering network and exploit a bottom-up sharpening-guided fine-tuning strategy to obtain detailed textures. Our framework also enables photo-realistic novel view/pose synthesis and shape editing applications. Experimental results on both a public dataset and our collected dataset demonstrate that our method outperforms state-of-the-art methods. The code and dataset will be made available for research purposes.
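The coarse-to-fine idea described above can be sketched as a two-stage pipeline: a coarse parametric body estimate, refined by pose-dependent surface offsets. The sketch below is purely illustrative; the body model, the offset network, and all dimensions are stand-ins, not the authors' actual implementation.

```python
import numpy as np

def coarse_body(pose, shape, n_verts=6890):
    # Stand-in for a parametric body model (e.g. SMPL-like): maps
    # low-dimensional pose/shape parameters to a coarse vertex set.
    rng = np.random.default_rng(0)
    basis = rng.standard_normal((n_verts, 3, pose.size + shape.size))
    params = np.concatenate([pose, shape])
    return basis @ params  # (n_verts, 3)

def pose_dependent_offsets(pose, n_verts=6890, hidden=32):
    # Stand-in "dynamic surface network": a tiny MLP mapping the pose
    # to per-vertex displacements that refine the coarse surface.
    rng = np.random.default_rng(1)
    w1 = rng.standard_normal((hidden, pose.size)) * 0.01
    w2 = rng.standard_normal((n_verts * 3, hidden)) * 0.01
    h = np.tanh(w1 @ pose)
    return (w2 @ h).reshape(n_verts, 3)

def reconstruct(pose, shape):
    # Coarse-to-fine: coarse body estimate plus pose-dependent
    # deformation; a neural rendering stage would follow for texture.
    verts = coarse_body(pose, shape)
    return verts + pose_dependent_offsets(pose)

pose, shape = np.zeros(72), np.zeros(10)
verts = reconstruct(pose, shape)
print(verts.shape)  # (6890, 3)
```

Because the offsets depend on the pose, the fine surface changes per frame even when the shape parameters are fixed, which is what lets shape and texture be decoupled across frames.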


Method

 

 

Fig 1. Method overview.

 


Demo

 

 


Results

 

 

 

 


Application

 

 



Technical Paper

 


Citation

Hao Zhao, Jinsong Zhang, Yu-Kun Lai, Zerong Zheng, Yingdi Xie, Yebin Liu, Kun Li. "High-Fidelity Human Avatars from a Single RGB Camera". CVPR 2022.

 

@inproceedings{zhao2022avatar,
  author    = {Hao Zhao and Jinsong Zhang and Yu-Kun Lai and Zerong Zheng and Yingdi Xie and Yebin Liu and Kun Li},
  title     = {High-Fidelity Human Avatars from a Single RGB Camera},
  booktitle = {CVPR},
  year      = {2022},
}