Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View.

Liu, C., Chen, W., Ward, J. and Takahashi, N., 2016. Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View. Scientific Reports, 6, 31001.

Full text available as:

srep31001.pdf - Published Version
Available under License Creative Commons Attribution.


DOI: 10.1038/srep31001


Prior research based on static images has found only limited improvement in recognising previously learnt faces in a new expression, even after several different facial expressions of those faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were presented either in short video clips or as still images. To assess the effect of exposure to expression variation, each face was learnt through either a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression after exposure to a single expression, whereas faces learnt from stills showed poorer generalisation after exposure to either one or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both the benefits and the limitations of exposure to moving expressions for expression-invariant face recognition.

Item Type: Article
Uncontrolled Keywords: Emotion; Human behaviour; Perception
Group: Faculty of Science & Technology
ID Code: 24488
Deposited By: Symplectic RT2
Deposited On: 10 Aug 2016 10:31
Last Modified: 14 Mar 2022 13:57
