JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing
1JD.Com, Inc., {wangqili, wudajiang, huangjunshi1, lvjun}@jd.com
2The University of Hong Kong, xuzihang@connect.hku.hk
Significant progress has been made in talking-face video generation; however, achieving precise lip-audio synchronization and high visual quality when editing lip shapes from input audio remains challenging. This paper introduces JoyGen, a novel two-stage framework for talking-face generation, comprising audio-driven lip motion generation and visual appearance synthesis. In the first stage, a 3D reconstruction model and an audio2motion model predict identity and expression coefficients, respectively. In the second stage, integrating audio features with a facial depth map provides comprehensive supervision for precise lip-audio synchronization during facial generation. Additionally, we construct a Chinese talking-face dataset containing 130 hours of high-quality video. JoyGen is trained on the open-source HDTF dataset together with our curated dataset. Experimental results demonstrate that our method achieves superior lip-audio synchronization and visual quality.
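The sketch below illustrates how such a two-stage pipeline could be wired together, following the description above. The module names (reconstructor, audio2motion, renderer, generator) and their interfaces are illustrative assumptions for exposition, not the released JoyGen API.

```python
# Minimal sketch of the two-stage pipeline described above.
# All module names and signatures are assumptions for illustration,
# not the actual JoyGen implementation.
import torch.nn as nn


class TalkingFacePipeline(nn.Module):
    def __init__(self, reconstructor, audio2motion, renderer, generator):
        super().__init__()
        self.reconstructor = reconstructor  # 3D face reconstruction -> identity coefficients
        self.audio2motion = audio2motion    # audio features -> expression coefficients
        self.renderer = renderer            # 3DMM coefficients -> facial depth map
        self.generator = generator          # depth map + audio features -> edited frame

    def forward(self, frame, audio_feat):
        # Stage 1: audio-driven lip motion in 3DMM coefficient space.
        id_coeff = self.reconstructor(frame)            # identity from the reference frame
        exp_coeff = self.audio2motion(audio_feat)       # expression driven by the input audio
        depth_map = self.renderer(id_coeff, exp_coeff)  # depth of the predicted face geometry

        # Stage 2: visual appearance synthesis supervised by depth and audio.
        return self.generator(frame, depth_map, audio_feat)
```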
Most publicly available talking-face datasets focus predominantly on English-speaking scenarios. To promote applications in Chinese-speaking contexts, we have constructed a high-definition talking-face dataset of Chinese-language videos. The newly collected dataset comprises approximately 1.1k videos with a total duration of approximately 130 hours.
@misc{wang2025joygenaudiodriven3ddepthaware,
title={JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing},
author={Qili Wang and Dajiang Wu and Zihang Xu and Junshi Huang and Jun Lv},
year={2025},
eprint={2501.01798},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2501.01798},
}