
Audio-Driven Facial Animation with Deep Learning: A Survey.

Jiang, D., Chang, J., You, L., Bian, S., Kosk, R. and Maguire, G., 2024. Audio-Driven Facial Animation with Deep Learning: A Survey. Information, 15 (11), 675.

Full text available as:

PDF (Open Access Article): information-15-00675.pdf - Published Version (1 MB)
Available under License Creative Commons Attribution.

DOI: 10.3390/info15110675

Abstract

Audio-driven facial animation is a rapidly evolving field that aims to generate realistic facial expressions and lip movements synchronized with a given audio input. This survey provides a comprehensive review of deep learning techniques applied to audio-driven facial animation, with a focus on both audio-driven facial image animation and audio-driven facial mesh animation. These approaches employ deep learning to map audio inputs directly onto 3D facial meshes or 2D images, enabling the creation of highly realistic and synchronized animations. This survey also explores evaluation metrics, available datasets, and the challenges that remain, such as disentangling lip synchronization and emotions, generalization across speakers, and dataset limitations. Lastly, we discuss future directions, including multi-modal integration, personalized models, and facial attribute modification in animations, all of which are critical for the continued development and application of this technology.
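For illustration only, and not as a method taken from the survey, the sketch below shows one way the audio-to-mesh mapping mentioned in the abstract could be set up: a small PyTorch regressor that turns a window of audio features into per-vertex offsets applied to a neutral 3D face. Every module, dimension, and variable name here (AudioToMesh, audio_dim, num_vertices, etc.) is an assumption chosen for the example.

import torch
import torch.nn as nn

class AudioToMesh(nn.Module):
    """Illustrative audio-to-mesh regressor: maps a window of audio
    features (e.g. MFCCs) to per-vertex offsets of a neutral face mesh.
    All dimensions are placeholders, not taken from any surveyed method."""

    def __init__(self, audio_dim=26, hidden=256, num_vertices=5023):
        super().__init__()
        # Temporal encoder over the audio-feature window.
        self.encoder = nn.GRU(input_size=audio_dim, hidden_size=hidden,
                              num_layers=2, batch_first=True)
        # Decoder regresses a 3D offset for every mesh vertex.
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_vertices * 3),
        )
        self.num_vertices = num_vertices

    def forward(self, audio_window):
        # audio_window: (batch, window_length, audio_dim)
        _, h = self.encoder(audio_window)      # h: (num_layers, batch, hidden)
        offsets = self.decoder(h[-1])          # (batch, num_vertices * 3)
        return offsets.view(-1, self.num_vertices, 3)

# Usage sketch: displace a (dummy) neutral mesh with offsets predicted
# from one window of (dummy) audio features.
model = AudioToMesh()
audio = torch.randn(1, 16, 26)                 # placeholder audio-feature window
neutral_mesh = torch.zeros(1, 5023, 3)         # placeholder neutral face vertices
animated_mesh = neutral_mesh + model(audio)
print(animated_mesh.shape)                     # torch.Size([1, 5023, 3])

Image-based (2D) approaches surveyed in the article follow the same idea at a high level, but decode to pixels or keypoints rather than mesh vertices.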

Item Type: Article
ISSN: 2078-2489
Uncontrolled Keywords: deep learning; audio processing; talking head; face generation
Group: Faculty of Media & Communication
ID Code: 40586
Deposited By: Symplectic RT2
Deposited On: 09 Dec 2024 15:54
Last Modified: 09 Dec 2024 15:54
