Liu, C., Chen, W., Ward, J. and Takahashi, N., 2016. Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View. Scientific Reports, 6, p. 31001.
Full text available as:
PDF (open access article): srep31001.pdf, Published Version.
Available under a Creative Commons Attribution licence.
Prior research based on static images has found limited improvement in recognising previously learnt faces in a new expression, even after several different facial expressions of those faces had been shown during the learning session. We investigated whether the non-rigid motion of facial expressions facilitates the learning process. In Experiment 1, participants remembered faces presented either in short video clips or as still images. To assess the effect of exposure to expression variation, each face was learnt through either a single expression or three different expressions. Experiment 2 examined whether learning faces from video clips generalises more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression after exposure to a single expression, whereas faces learnt from stills showed poorer generalisation after exposure to either one or three expressions. However, although recognition performance was superior for faces learnt from video clips, dynamic facial expression did not produce better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both the benefits and the limitations of exposure to moving expressions for expression-invariant face recognition.
Uncontrolled Keywords: Emotion; Human behaviour; Perception
Group: Faculty of Science & Technology
Deposited On: 10 Aug 2016 10:31
Last Modified: 06 Sep 2016 12:23