Tian, L., Muszynski, M., Lai, C., Moore, J., Kostoulas, T., Lombardo, P., Pun, T. and Chanel, G., 2017. Recognizing Induced Emotions of Movie Audiences: Are Induced and Perceived Emotions the Same? In: 7th International Conference on Affective Computing and Intelligent Interaction (ACII2017), 23-26 October 2017, San Antonio, Texas, USA.
Full text available as: ACII2017.pdf (Accepted Version, 459kB), available under a Creative Commons Attribution Non-commercial No Derivatives licence.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
Abstract
Predicting the emotional response of movie audiences to affective movie content is a challenging task in affective computing. Previous work has focused on using audiovisual movie content to predict movie induced emotions. However, the relationship between the audience's perceptions of the affective movie content (perceived emotions) and the emotions evoked in the audience (induced emotions) remains unexplored. In this work, we address the relationship between perceived and induced emotions in movies, and identify features and modelling approaches effective for predicting movie induced emotions. First, we extend the LIRIS-ACCEDE database by annotating perceived emotions in a crowd-sourced manner, and find that perceived and induced emotions are not always consistent. Second, we show that dialogue events and aesthetic highlights are effective predictors of movie induced emotions. In addition to movie based features, we also study physiological and behavioural measurements of audiences. Our experiments show that induced emotion recognition can benefit from including temporal context and from including multimodal information. Our study bridges the gap between affective content analysis and induced emotion prediction.
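To illustrate the kind of pipeline the abstract describes, the sketch below shows multimodal fusion with temporal context for continuous induced-emotion prediction. This is a minimal illustrative example only, not the authors' implementation: the feature sets, window size, dimensions, and the ridge-regression model are all placeholder assumptions standing in for the paper's features and models.

```python
# Illustrative sketch (assumption, not the paper's method): late fusion of
# movie-based and physiological features plus a sliding temporal-context
# window, regressing a continuous induced-emotion label such as valence.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_segments = 500                                   # hypothetical number of movie segments
movie_feats = rng.normal(size=(n_segments, 32))    # placeholder audiovisual descriptors
physio_feats = rng.normal(size=(n_segments, 8))    # placeholder physiological/behavioural stats
valence = rng.normal(size=n_segments)              # placeholder induced-emotion labels

# Multimodal fusion: concatenate per-segment features from both modalities.
fused = np.hstack([movie_feats, physio_feats])

# Temporal context: stack each segment with its W preceding segments.
W = 4
ctx = np.hstack([np.roll(fused, shift=k, axis=0) for k in range(W + 1)])
ctx, y = ctx[W:], valence[W:]                      # drop segments without a full history

X_train, X_test, y_train, y_test = train_test_split(
    ctx, y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```

With real features and labels, comparing this fused, context-augmented input against single-modality or single-segment baselines would mirror the comparison the abstract reports.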
| Item Type | Conference or Workshop Item (Lecture) |
|---|---|
| Group | Faculty of Science & Technology |
| ID Code | 29836 |
| Deposited By | Symplectic RT2 |
| Deposited On | 09 Oct 2017 15:09 |
| Last Modified | 14 Mar 2022 14:07 |