Fan, Y., Tian, F., Tang, X. and Cheng, H., 2020. Facial expression animation through action units transfer in latent space. Computer Animation and Virtual Worlds, 31 (4-5), e1946.
Full text available as:
PDF: Facial expression animation through action units transfer.pdf - Accepted Version, 1MB. Available under License Creative Commons Attribution Non-commercial.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1002/cav.1946
Abstract
Automatic animation synthesis has attracted much attention from the community. Because most existing methods handle only a small number of discrete expressions rather than continuous ones, the integrity and realism of their facial expressions are often compromised. In addition, easy manipulation with simple inputs and unsupervised processing, although important for automatic facial expression animation applications, has received relatively little attention. To address these issues, we propose an unsupervised, continuous, automatic facial expression animation approach based on action unit (AU) transfer in the latent space of generative adversarial networks. The expression descriptor, represented as an AU vector, is transferred onto the input image without labeled image pairs, without expression annotations, and without further network training. We also propose a new approach to quickly generate the input image's latent code and to cluster the boundaries of different AU attributes from their latent codes. Two latent-code operators, vector addition and continuous interpolation, are used to simulate facial expression animation along these boundaries in the latent space. Experiments show that the proposed approach is effective for facial expression translation and animation synthesis.
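The two latent-code operators named in the abstract, vector addition and continuous interpolation, can be sketched as follows. This is a minimal illustration only: the latent dimensionality, the AU boundary direction, and all variable names here are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 512-dimensional latent codes; the paper's actual GAN
# latent space and learned AU boundary vectors are not given here.
rng = np.random.default_rng(0)
z_source = rng.standard_normal(512)      # latent code of the input face
au_direction = rng.standard_normal(512)  # assumed direction for one AU attribute

def transfer_au(z, direction, strength=1.0):
    # Operator 1: vector addition -- push the latent code across the
    # boundary separating the AU attribute, scaled by `strength`.
    return z + strength * direction

def interpolate(z_a, z_b, t):
    # Operator 2: continuous interpolation -- blend linearly between
    # two latent codes to animate a smooth expression transition.
    return (1.0 - t) * z_a + t * z_b

z_target = transfer_au(z_source, au_direction, strength=2.0)
# Five interpolated codes from neutral to target expression; decoding
# each through the GAN generator would yield the animation frames.
frames = [interpolate(z_source, z_target, t) for t in np.linspace(0.0, 1.0, 5)]
```

Decoding each interpolated code with the generator produces an in-between frame, which is how a continuous animation arises from two endpoint codes.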
Item Type: Article
ISSN: 1546-4261
Additional Information: Funding: Open Project Program of State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (Grant Number: VRLAB2020C01); National Key Research and Development Plan of China (Grant Number: 2017YFB1002804)
Uncontrolled Keywords: action units; facial animation; facial expression; generative adversarial networks; latent code encoding
Group: Faculty of Science & Technology
ID Code: 34628
Deposited By: Symplectic RT2
Deposited On: 28 Sep 2020 16:21
Last Modified: 14 Mar 2022 14:24