
Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation.

Deng, S., Jiang, N., Chang, J., Guo, S. and Zhang, J. J., 2017. Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation. International Journal of Human-Computer Studies, 105, 68-80.

Full text available as:

accepted manuscript.pdf - Accepted Version (PDF, 2MB)
Available under License Creative Commons Attribution Non-commercial No Derivatives.

DOI: 10.1016/j.ijhcs.2017.04.002

Abstract

Multimodal interactions provide users with more natural ways to manipulate virtual 3D objects than traditional input methods. An emerging approach is gaze modulated pointing, which enables users to select and manipulate objects in a virtual space conveniently through a combination of gaze and other interaction techniques (e.g., mid-air gestures). Because gaze modulated pointing uses different sensors to track and detect user behaviours, its performance relies on the user's perception of the exact spatial mapping between the virtual space and the physical space. An underexplored issue is that, when the spatial mapping differs from the user's perception of it, manipulation errors (e.g., out-of-boundary errors, proximity errors) may occur. In gaze modulated pointing, gaze can introduce misalignment of the spatial mapping, which may lead to the user's misperception of the virtual environment and consequently to manipulation errors. This paper provides a clear definition of the problem through a thorough investigation of its causes, specifies the conditions under which it occurs, and validates these in an experiment. It also proposes three methods (Scaling, Magnet and Dual-gaze) to address the problem and examines them in a comparative study involving 20 participants and 1040 runs. The results show that all three methods improved manipulation performance with regard to the defined problem, with Magnet and Dual-gaze delivering better performance than Scaling. This finding can inform a more robust multimodal interface design supported by both eye tracking and mid-air gesture control without losing efficiency or stability.
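The paper's own implementations of Scaling, Magnet and Dual-gaze are described in the full text; the minimal Python sketch below illustrates only the general idea of a Magnet-style correction, snapping a gaze-and-gesture-derived pointer to the nearest object centre to compensate for the misalignment described above. All names, coordinates and thresholds here are hypothetical, not the authors' parameters.

    import math

    # Hypothetical sketch (not the paper's implementation): a gaze-induced
    # offset between the perceived and actual spatial mapping can push a
    # selection point just outside an object's boundary; a Magnet-style
    # correction snaps the point back to the nearest target centre.

    def distance(p, q):
        """Euclidean distance between two 2D points."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def magnet_correction(pointer, targets, snap_radius=0.05):
        """Snap the pointer to the nearest target centre within snap_radius.

        pointer:     (x, y) position derived from gaze + gesture input
        targets:     list of (x, y) object centres in the virtual scene
        snap_radius: maximum snapping distance (hypothetical value)
        """
        nearest = min(targets, key=lambda t: distance(pointer, t))
        if distance(pointer, nearest) <= snap_radius:
            return nearest      # correct the gaze-induced offset
        return pointer          # otherwise leave the pointer untouched

    # Example: a small gaze offset places the pointer just off the intended
    # target; the correction recovers the selection.
    targets = [(0.20, 0.30), (0.70, 0.60)]
    raw_pointer = (0.23, 0.32)  # intended target plus misalignment error
    print(magnet_correction(raw_pointer, targets))  # -> (0.2, 0.3)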

Item Type: Article
ISSN: 1071-5819
Uncontrolled Keywords: Eye tracking; Mid-air gesture; 3D interaction; Spatial misperception; Multimodal interfaces; Virtual reality
Group: Faculty of Media & Communication
ID Code: 29272
Deposited By: Symplectic RT2
Deposited On: 30 May 2017 08:54
Last Modified: 14 Mar 2022 14:04
