Using Facial Gestures to Drive Narrative in VR

Mavridou, I., Hamedi, M., Fatoorechi, M., Archer, J., Cleal, A., Balaguer-Ballester, E., Seiss, E. and Nduka, C., 2017. Using Facial Gestures to Drive Narrative in VR. In: ACM Spatial User Interfaces (SUI'17), 16-17 October 2017, Brighton, England, 152 - 152.

Full text available as:

DEMO_VR manipulations3.pdf - Accepted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.

504kB

Official URL: https://dl.acm.org/citation.cfm?id=3131277

DOI: 10.1145/3131277.3134366

Abstract

We developed an exploratory VR environment in which spatial features and narratives can be manipulated in real time by the user's facial and head gestures. We use the Faceteq prototype, exhibited in 2017, as the interactive interface. Faceteq is a wearable technology that can be fitted to commercial HMDs to measure facial expressions and biometric responses. The Faceteq project was founded with the aim of providing an additional human-centred tool for affective human-computer interaction. The proposed demo will exhibit the hardware and its functionality in real time.

Item Type:Conference or Workshop Item (Paper)
Uncontrolled Keywords:Virtual Reality; Facial Expression; Emotion; EMG; Affective Computing; VR
Group:Faculty of Management
ID Code:29887
Deposited By: Unnamed user with email symplectic@symplectic
Deposited On:23 Oct 2017 12:07
Last Modified:23 Oct 2017 12:07
