[SIGGRAPH 2024] Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance

Project: sites.google.com/view/media2face
Arxiv: arxiv.org/abs/2401.15687
The synthesis of 3D facial animations from speech has garnered considerable attention. Due to the scarcity of high-quality 4D facial data and abundant, well-annotated multi-modality labels, previous methods often suffer from limited realism and a lack of flexible conditioning. We address this challenge in three steps. First, we introduce the Generalized Neural Parametric Facial Asset (GNPFA), an efficient variational auto-encoder that maps facial geometry and images to a highly generalized expression latent space, decoupling expressions from identities. Second, we use GNPFA to extract high-quality expressions and accurate head poses from a large collection of videos, yielding M2F-D, a large, diverse, scan-level co-speech 3D facial animation dataset with well-annotated emotion and style labels. Finally, we propose Media2Face, a diffusion model in the GNPFA latent space for co-speech facial animation generation that accepts rich multi-modality guidance from audio, text, and images. Extensive experiments demonstrate that our model not only achieves high fidelity in facial animation synthesis but also broadens the scope of expressiveness and style adaptability in 3D facial animation.
Qingcheng Zhao, Pengyu Long, Qixuan Zhang, Dafei Qin, Han Liang, Longwen Zhang, Yingliang Zhang, Lan Xu, Jingyi Yu,
Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance,
Proc. of SIGGRAPH 2024 Conference.
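
For readers who want a concrete picture of the pipeline the abstract describes, below is a minimal PyTorch sketch of its two components: a GNPFA-style decoder that maps expression latents back to per-frame facial geometry, and a latent diffusion model that denoises a sequence of expression latents under multi-modality conditioning. Every module name, dimension, and the simplified DDIM-style sampler here are assumptions for illustration only; this is not the authors' released code or exact architecture.

```python
# Hypothetical sketch of a latent-diffusion co-speech animation pipeline.
# Names (GNPFADecoder, LatentDenoiser), sizes, and the sampler are assumptions.
import torch
import torch.nn as nn


class GNPFADecoder(nn.Module):
    """Stand-in for the GNPFA VAE decoder: expression latent -> per-frame geometry."""
    def __init__(self, latent_dim=128, num_vertices=5023):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )

    def forward(self, z):                      # z: (B, T, latent_dim)
        B, T, _ = z.shape
        return self.net(z).view(B, T, -1, 3)   # (B, T, V, 3) vertex positions


class LatentDenoiser(nn.Module):
    """Transformer denoiser over sequences of expression latents, conditioned on
    fused audio/text/image features by simple concatenation (for brevity)."""
    def __init__(self, latent_dim=128, cond_dim=768, width=256, layers=4):
        super().__init__()
        self.in_proj = nn.Linear(latent_dim + cond_dim + 1, width)
        enc_layer = nn.TransformerEncoderLayer(width, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.out_proj = nn.Linear(width, latent_dim)

    def forward(self, z_t, t, cond):
        # z_t: noisy latents (B, T, latent_dim); t: (B,); cond: (B, T, cond_dim)
        t_emb = t.float()[:, None, None].expand(-1, z_t.size(1), 1) / 1000.0
        x = torch.cat([z_t, cond, t_emb], dim=-1)
        return self.out_proj(self.backbone(self.in_proj(x)))


@torch.no_grad()
def sample(denoiser, decoder, cond, steps=50, latent_dim=128):
    """Simplified deterministic DDIM-style sampling, then GNPFA decoding."""
    B, T, _ = cond.shape
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas_cum = torch.cumprod(1.0 - betas, dim=0)
    z = torch.randn(B, T, latent_dim)
    for i in reversed(range(steps)):
        t = torch.full((B,), i)
        eps = denoiser(z, t, cond)                         # predicted noise
        a = alphas_cum[i]
        a_prev = alphas_cum[i - 1] if i > 0 else torch.tensor(1.0)
        z0 = (z - (1 - a).sqrt() * eps) / a.sqrt()         # predicted clean latent
        z = a_prev.sqrt() * z0 + (1 - a_prev).sqrt() * eps # step toward t-1
    return decoder(z)                                      # (B, T, V, 3) animation


if __name__ == "__main__":
    cond = torch.randn(1, 30, 768)   # stand-in for fused audio/text/image features
    anim = sample(LatentDenoiser(), GNPFADecoder(), cond)
    print(anim.shape)                # torch.Size([1, 30, 5023, 3])
```

The key design point illustrated is that diffusion happens in the compact GNPFA expression space rather than directly on mesh vertices, so conditioning signals only need to steer a low-dimensional latent sequence and the decoder guarantees plausible geometry.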

Comments: 3

  • @fxguide (a month ago)

    Great work, really impressive.

  • @Inferencer (a month ago)

    Fantastic work! Can we expect a July release?

  • @sahinerdem5496 (a month ago)

    There is no eye work.