arXiv 2023

R²Human: Real-Time 3D Human Appearance Rendering
from a Single Image


Qiao Feng1#, Yuanwang Yang1#, Yu-Kun Lai2, Kun Li1*

1 Tianjin University   2 Cardiff University  

  # Equal contribution   * Corresponding author


[Code] [arXiv]


Abstract

Reconstructing 3D human appearance from a single image is crucial for achieving holographic communication and immersive social experiences. However, this remains a challenge for existing methods, which typically rely on multi-camera setups or are limited to offline operations. In this paper, we propose R²Human, the first approach for real-time inference and rendering of photorealistic 3D human appearance from a single image. The core of our approach is to combine the strengths of implicit texture fields and explicit neural rendering with our novel representation, namely Z-map. Based on this, we present an end-to-end network that performs high-fidelity color reconstruction of visible areas and provides reliable color inference for occluded regions. To further enhance the 3D perception ability of our network, we leverage the Fourier occupancy field to reconstruct a detailed 3D geometry, which serves as a prior for the texture field generation and provides a sampling surface in the rendering stage. Experiments show that our end-to-end method achieves state-of-the-art performance on both synthetic data and challenging real-world images and even outperforms many offline methods. The source code will be available for research purposes.
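The Z-map described above can be pictured as a per-pixel depth image rendered from the reconstructed surface, whose unprojected 3D points are used to query the implicit texture field at exactly one sample per pixel. The following is a minimal sketch of that sampling step, not the authors' implementation: the camera intrinsics, the `texture_field` stand-in function, and all names are assumptions for illustration.

```python
import numpy as np

def unproject_zmap(z_map, fx, fy, cx, cy):
    """Lift each pixel (u, v) with depth z to a 3D point in camera space."""
    h, w = z_map.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    x = (u - cx) * z_map / fx
    y = (v - cy) * z_map / fy
    return np.stack([x, y, z_map], axis=-1)  # (H, W, 3) points

def texture_field(points):
    """Placeholder for a learned implicit texture field f: R^3 -> RGB.
    In the paper this would be a neural network; here it is a dummy
    smooth function mapping 3D points to colors in [0, 1]."""
    return 0.5 * (np.sin(points) + 1.0)

def render_from_zmap(z_map, fx=500.0, fy=500.0, cx=32.0, cy=32.0):
    """Render an RGB image by querying the texture field once per pixel,
    at the surface depth stored in the Z-map (hypothetical intrinsics)."""
    points = unproject_zmap(z_map, fx, fy, cx, cy)
    return texture_field(points)  # (H, W, 3) image

if __name__ == "__main__":
    z = np.full((64, 64), 2.0)  # toy Z-map: a flat surface at depth 2
    img = render_from_zmap(z)
    print(img.shape)
```

Because the Z-map fixes one surface point per pixel, the texture field is evaluated O(H×W) times rather than once per ray sample, which is the intuition behind the real-time claim; the learned components are of course far richer than this toy.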


Method


Fig 1. Method overview.



Demo



Technical Paper



Citation

Qiao Feng, Yuanwang Yang, Yu-Kun Lai, Kun Li. "R²Human: Real-Time 3D Human Appearance Rendering from a Single Image". arXiv preprint arXiv:2312.05826, 2023.


@article{feng2023r2human,
  author  = {Qiao Feng and Yuanwang Yang and Yu-Kun Lai and Kun Li},
  title   = {R²Human: Real-Time 3D Human Appearance Rendering from a Single Image},
  journal = {arXiv preprint arXiv:2312.05826},
  year    = {2023},
}