Deng, S., Jiang, N., Chang, J., Guo, S. and Zhang, J. J., 2017. Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation. International Journal of Human-Computer Studies, 105, 68-80.
Full text available as: PDF (accepted manuscript.pdf), Accepted Version, 2MB, available under a Creative Commons Attribution-NonCommercial-NoDerivatives licence.
DOI: 10.1016/j.ijhcs.2017.04.002
Abstract
Multimodal interactions provide users with more natural ways to manipulate virtual 3D objects than traditional input methods. An emerging approach is gaze modulated pointing, which enables users to perform object selection and manipulation in a virtual space conveniently through a combination of gaze and other interaction techniques (e.g., mid-air gestures). As gaze modulated pointing uses different sensors to track and detect user behaviours, its performance relies on the user's perception of the exact spatial mapping between the virtual space and the physical space. An underexplored issue is that when the spatial mapping differs from the user's perception, manipulation errors (e.g., out-of-boundary errors, proximity errors) may occur. Therefore, in gaze modulated pointing, as gaze can introduce misalignment of the spatial mapping, it may lead to the user's misperception of the virtual environment and consequently to manipulation errors. This paper provides a clear definition of the problem through a thorough investigation of its causes and specifies the conditions under which it occurs, which is further validated in the experiment. It also proposes three methods (Scaling, Magnet and Dual-gaze) to address the problem and examines them in a comparative study involving 20 participants and 1,040 runs. The results show that all three methods improved manipulation performance with regard to the defined problem, with Magnet and Dual-gaze delivering better performance than Scaling. This finding could inform a more robust multimodal interface design supported by both eye tracking and mid-air gesture control without losing efficiency and stability.
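The abstract only names the Scaling, Magnet and Dual-gaze methods without detailing them. As an illustrative aid, the sketch below shows one plausible way gaze modulated pointing and a "Magnet"-style snapping correction might be realised: the gaze fixation supplies a coarse anchor and the hand supplies a fine, damped offset. All function names, the gain value and the snapping heuristic here are assumptions for illustration, not the paper's implementation.

```python
import math

def gaze_modulated_cursor(gaze_xy, hand_offset_xy, gain=0.3):
    """Combine a gaze anchor with a damped mid-air hand offset to place
    the cursor (illustrative sketch; gain value is an assumption)."""
    return (gaze_xy[0] + gain * hand_offset_xy[0],
            gaze_xy[1] + gain * hand_offset_xy[1])

def magnet_snap(cursor_xy, targets, radius=0.05):
    """'Magnet'-style correction (assumed behaviour, inferred from the name):
    snap the cursor to the nearest target centre when it falls within a
    snapping radius, countering small gaze-induced misalignments."""
    nearest = min(targets, key=lambda t: math.dist(cursor_xy, t))
    return nearest if math.dist(cursor_xy, nearest) <= radius else cursor_xy

# Example: gaze anchors near an object; a small hand drift still snaps to it.
cursor = gaze_modulated_cursor((0.50, 0.50), (0.06, -0.02))
print(magnet_snap(cursor, targets=[(0.52, 0.50), (0.80, 0.20)]))
```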
| Item Type | Article |
|---|---|
| ISSN | 1071-5819 |
| Uncontrolled Keywords | Eye tracking; Mid-air gesture; 3D interaction; Spatial misperception; Multimodal interfaces; Virtual reality |
| Group | Faculty of Media & Communication |
| ID Code | 29272 |
| Deposited By | Symplectic RT2 |
| Deposited On | 30 May 2017 08:54 |
| Last Modified | 14 Mar 2022 14:04 |