ISMAR 2024

R²Human: Real-Time 3D Human Appearance Rendering
from a Single Image

 

Yuanwang Yang¹#, Qiao Feng¹#, Yu-Kun Lai², Kun Li¹*

¹ Tianjin University   ² Cardiff University

  # Equal contribution   * Corresponding author

 

[arXiv] [Code]

 

Abstract

Rendering 3D human appearance from a single image in real time is crucial for holographic communication and immersive VR/AR. Existing methods either rely on multi-camera setups or are constrained to offline operation. In this paper, we propose R²Human, the first approach for real-time inference and rendering of photorealistic 3D human appearance from a single image. The core of our approach is to combine the strengths of implicit texture fields and explicit neural rendering with our novel representation, namely the Z-map. Based on this, we present an end-to-end network that performs high-fidelity color reconstruction of visible regions and reliable color inference for occluded regions. To further enhance the 3D perception ability of our network, we leverage the Fourier occupancy field as a prior for generating the texture field and providing a sampling surface in the rendering stage. We also propose a consistency loss and a spatial fusion strategy to ensure multi-view coherence. Experimental results show that our method outperforms state-of-the-art methods on both synthetic data and challenging real-world images, while running in real time.


Method

 

 

Fig 1. Method overview.
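
Below is a minimal PyTorch sketch of the pipeline summarized in Fig 1 and the abstract: pixel-aligned features from the source image are queried by an implicit texture field at surface points provided by the geometry prior, with the Z-map supplying each target pixel's depth, and a 2D neural renderer produces the final image. The module names, network sizes, and the exact way the Z-map and reprojected coordinates are formed are illustrative assumptions, not the released implementation.

```python
# Illustrative sketch only; all module names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextureFieldMLP(nn.Module):
    """Implicit texture field: pixel-aligned feature + Z-map depth -> color feature."""
    def __init__(self, feat_dim=64, out_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 128), nn.ReLU(inplace=True),
            nn.Linear(128, out_dim),
        )

    def forward(self, feat, z):
        # feat: (B, N, C) pixel-aligned features, z: (B, N, 1) Z-map depths
        return self.mlp(torch.cat([feat, z], dim=-1))


class R2HumanSketch(nn.Module):
    def __init__(self, feat_dim=64, tex_dim=32):
        super().__init__()
        # Placeholder image encoder producing a pixel-aligned feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        self.texture_field = TextureFieldMLP(feat_dim, tex_dim)
        # Placeholder 2D neural renderer turning sampled texture features into RGB.
        self.renderer = nn.Sequential(
            nn.Conv2d(tex_dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, src_img, uv, z):
        """
        src_img: (B, 3, H, W) input image.
        uv:      (B, H', W', 2) source-view coordinates in [-1, 1] of the surface
                 points visible from the target view (e.g. a surface extracted
                 from a Fourier-occupancy-field geometry prior).
        z:       (B, H', W', 1) the Z-map: per-target-pixel depth of those points.
        """
        feat = self.encoder(src_img)                           # (B, C, H, W)
        sampled = F.grid_sample(feat, uv, align_corners=True)  # (B, C, H', W')
        B, C, Hp, Wp = sampled.shape
        sampled = sampled.permute(0, 2, 3, 1).reshape(B, Hp * Wp, C)
        tex = self.texture_field(sampled, z.reshape(B, Hp * Wp, 1))
        tex = tex.reshape(B, Hp, Wp, -1).permute(0, 3, 1, 2)   # (B, tex_dim, H', W')
        return self.renderer(tex)                              # (B, 3, H', W')


# Example usage with random placeholder inputs.
model = R2HumanSketch()
img = torch.rand(1, 3, 512, 512)
uv = torch.rand(1, 256, 256, 2) * 2 - 1   # placeholder reprojected coordinates
zmap = torch.rand(1, 256, 256, 1)         # placeholder Z-map
out = model(img, uv, zmap)                # (1, 3, 256, 256)
```

In this sketch, querying the texture field only at the prior surface (rather than densely along rays) is what keeps the per-frame cost low enough for real-time rendering; the consistency loss and spatial fusion strategy mentioned in the abstract operate across views during training and are omitted here.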

 


Demo

 

 


Technical Paper

 


Citation

Yuanwang Yang, Qiao Feng, Yu-Kun Lai, Kun Li. "R²Human: Real-Time 3D Human Appearance Rendering from a Single Image". 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2024.

 

@inproceedings{yang2024r2human,
  author    = {Yuanwang Yang and Qiao Feng and Yu-Kun Lai and Kun Li},
  title     = {R²Human: Real-Time 3D Human Appearance Rendering from a Single Image},
  booktitle = {2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
  year      = {2024}
}