TVCG 2025

SpeechAct: Towards Generating Whole-body Motion from Speech

 

Jinsong Zhang1#, Minjie Zhu1#, Yuxiang Zhang2, Zerong Zheng3, Yebin Liu2, Kun Li1*

1 Tianjin University   2 Tsinghua University   3 NNKosmos Technology  

  # Equal contribution   * Corresponding author

 

[Code] [Paper]

 

 

Abstract

Whole-body motion generation from speech audio is crucial for computer graphics and immersive VR/AR. Prior methods struggle to produce natural and diverse whole-body motions from speech. In this paper, we introduce a novel method, named SpeechAct, based on a hybrid point representation and contrastive motion learning, to boost realism and diversity in motion generation. Our hybrid point representation combines the advantages of a keypoint representation and surface points of a 3D body model: it is easy to learn and helps to achieve smooth and natural motion generation from speech audio. We design a VQ-VAE to learn a motion codebook using our hybrid representation, and then regress the motion from the input audio using a translation model. To boost diversity in motion generation, we propose a contrastive motion learning method based on the intuitive idea that the generated motion should differ from the motions of other audios and other speakers. We collect negative samples from other audio inputs and other speakers using our translation model. With these negative samples, we pull the current motion away from them using a contrastive loss to produce more distinctive representations. In addition, we include a face generator to produce deterministic face motion, owing to the strong correlation between face movements and speech audio. Experimental results validate the superior performance of our model. The code is available at https://github.com/Zhangjinso/SpeechAct.
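The contrastive motion learning idea above can be sketched as an InfoNCE-style objective: the embedding of the current generated motion is pulled away from embeddings of motions produced for other audios and other speakers. The function below is a minimal NumPy sketch under assumed 1-D embedding vectors; the function name, temperature value, and loss form are illustrative, not the paper's exact implementation.

```python
import numpy as np

def contrastive_motion_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    anchor:    embedding of the currently generated motion
    positive:  embedding it should stay close to (e.g. the target motion)
    negatives: embeddings of motions from other audios / other speakers
    """
    def cos(a, b):
        # cosine similarity between two embedding vectors
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = np.exp(cos(anchor, positive) / temperature)
    neg = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    # Loss is small when the anchor is close to the positive and
    # far from every negative sample.
    return -np.log(pos / (pos + neg))
```

As a sanity check, an anchor aligned with its positive and opposed to the negatives yields a much smaller loss than the reverse pairing, which is exactly the "pull away from other audios and speakers" behavior described above.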


Method

 

 

Fig. 1. Overview of our framework.

 


Demo

 

 


Application

 

 



Technical Paper

 


Citation

Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Zerong Zheng, Yebin Liu, Kun Li, "SpeechAct: Towards Generating Whole-body Motion from Speech," in IEEE Transactions on Visualization and Computer Graphics, 2025.

 

@article{zhang2023speech,
  author  = {Jinsong Zhang and Minjie Zhu and Yuxiang Zhang and Zerong Zheng and Yebin Liu and Kun Li},
  title   = {SpeechAct: Towards Generating Whole-body Motion from Speech},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2025},
}