JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing

Qili Wang1 Dajiang Wu1 Zihang Xu2 Junshi Huang1 Jun Lv1

1JD.Com, Inc., {wangqili, wudajiang, huangjunshi1, lvjun}@jd.com

2The University of Hong Kong, xuzihang@connect.hku.hk

[Paper: https://arxiv.org/abs/2501.01798]          [GitHub]

Abstract

Significant progress has been made in talking-face video generation research; however, precise lip-audio synchronization and high visual quality remain challenging when editing lip shapes based on input audio. This paper introduces JoyGen, a novel two-stage framework for talking-face generation, comprising audio-driven lip motion generation and visual appearance synthesis. In the first stage, a 3D reconstruction model and an audio2motion model predict identity and expression coefficients, respectively. In the second stage, by integrating audio features with a facial depth map, we provide comprehensive supervision for precise lip-audio synchronization during facial generation. Additionally, we constructed a Chinese talking-face dataset containing 130 hours of high-quality video. JoyGen is trained on the open-source HDTF dataset and our curated dataset. Experimental results demonstrate the superior lip-audio synchronization and visual quality achieved by our method.
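
The two-stage data flow described above can be summarized in a short sketch. This is only an illustration of the pipeline structure, assuming the stages compose as the abstract describes; every module name below (`recon3d`, `audio2motion`, `depth_renderer`, `generator`) is a hypothetical placeholder, not the released JoyGen API.

```python
# Minimal sketch of the two-stage data flow described in the abstract.
# All module names are hypothetical placeholders, not the released JoyGen API.

def edit_talking_face(frames, audio, recon3d, audio2motion, depth_renderer, generator):
    """Stage 1: predict 3D face coefficients; Stage 2: depth-guided synthesis."""
    # Stage 1a: identity coefficients recovered from the input frames
    # by a 3D face reconstruction model.
    identity = recon3d(frames)
    # Stage 1b: per-frame expression coefficients predicted from the
    # driving audio by the audio2motion model.
    expression = audio2motion(audio)
    # Render a facial depth map from the combined identity/expression
    # coefficients; it provides geometric guidance for synthesis.
    depth = depth_renderer(identity, expression)
    # Stage 2: synthesize the edited frames, conditioning the generator on
    # both the audio and the depth map for precise lip-audio sync.
    return generator(frames, audio, depth)
```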

Dataset

Most publicly available talking-face datasets focus predominantly on English-speaking scenarios. To promote applications in Chinese-speaking contexts, we have constructed a high-definition talking-face dataset of Chinese-language videos. Our newly collected dataset comprises approximately 1.1k videos with a total length of approximately 130 hours.

Statistics of our newly curated Chinese talking-face dataset. The dataset has an approximately equal ratio of male and female speakers, with varying video lengths and frame rates, and includes high-resolution video frames.
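
As a rough sanity check on the figures above, 130 hours spread over roughly 1.1k clips works out to about seven minutes of footage per video:

```python
# Back-of-the-envelope check of the dataset figures reported above.
num_videos = 1100                      # ~1.1k clips
total_minutes = 130 * 60               # ~130 hours of video
print(f"avg clip length ~ {total_minutes / num_videos:.1f} min")  # ~7.1 min
```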

Results

Audio-Driven (Chinese)

Audio-Driven (English)

Quantitative Results

Quantitative evaluation results on the HDTF dataset and our collected dataset
The distribution curves of LSE-D and LSE-C scores on the HDTF dataset
The distribution curves of LSE-D and LSE-C scores on our collected dataset
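
LSE-D (distance, lower is better) and LSE-C (confidence, higher is better) are the SyncNet-based lip-sync metrics popularized by the Wav2Lip evaluation protocol. The sketch below shows one common way to compute them from pre-extracted, temporally aligned SyncNet audio/video embeddings; the offset window size and the median-minus-minimum confidence definition are assumptions based on that protocol, not code from this project.

```python
# Sketch of SyncNet-style LSE-D / LSE-C scoring over pre-extracted embeddings.
# audio_emb, video_emb: (T, D) per-window embeddings (assumed already aligned).
import numpy as np

def lse_scores(audio_emb, video_emb, max_offset=15):
    """Return (LSE-D, LSE-C): mean minimum distance and mean confidence."""
    T = min(len(audio_emb), len(video_emb))
    min_dists, confidences = [], []
    for t in range(T):
        # Distances from video window t to audio windows at nearby offsets.
        lo, hi = max(0, t - max_offset), min(T, t + max_offset + 1)
        d = np.linalg.norm(audio_emb[lo:hi] - video_emb[t], axis=1)
        min_dists.append(d.min())
        # Confidence: median distance minus the minimum; a sharper sync
        # peak relative to the off-sync distances yields a larger gap.
        confidences.append(np.median(d) - d.min())
    return float(np.mean(min_dists)), float(np.mean(confidences))
```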

Qualitative Results


BibTeX

@misc{wang2025joygenaudiodriven3ddepthaware,
  title={JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing},
  author={Qili Wang and Dajiang Wu and Zihang Xu and Junshi Huang and Jun Lv},
  year={2025},
  eprint={2501.01798},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2501.01798},
}