Bruno, A., Tliba, M. and Coltekin, A., 2021. A deep learning saliency model for exploring viewers' dwell-time distributions over Areas Of Interest on webcam-based eye-tracking data. In: 43rd European Conference on Visual Perception (ECVP) 2021, 22-27 August 2021, Online, 224-225.
Full text available as:
PDF (Poster)
ECVP_2021_Bruno_Tliba_Coltekin.pdf - Published Version. Available under License Creative Commons Attribution Non-commercial. 4MB

PDF (PowerPoint)
ECVP_2021_Bruno_Tliba_Cöltekin.pdf - Published Version. Available under License Creative Commons Attribution Non-commercial. 9MB
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
Official URL: https://ecvp2021.org/
DOI: 10.1177/03010066211059887
Abstract
Visual saliency is a common computational method for detecting attention-drawing regions in images, reflecting the top-down and bottom-up processes of visual attention. Computer vision algorithms generate saliency maps, which are often validated through eye-tracking sessions with human participants in controlled labs. However, due to the COVID-19 pandemic, such experimental sessions have been difficult to run. Thus, new webcam-based tools, powered by developments in machine learning, come into play to help track on-screen eye movements. Claimed error rates of recent webcam eye trackers can be as low as 1.05°, comparable to sophisticated infrared-based eye trackers, opening new paths to explore. Using webcams allows reaching a broader participant pool and collecting data over different experiments (e.g., free viewing or task-driven). In our work, we collect webcam eye-tracking data over a collection of images with 2-4 salient objects against a homogeneous background. The objects within the images represent our AOIs (areas of interest). We have two main goals: (a) check how eye movements over the AOIs vary across all spatial permutations of the same AOIs in a given image; (b) for a given image containing N objects, extract correlations between viewers' dwell times over the N AOIs and the corresponding AOIs' saliency maps. We will show relationships between viewers' dwell time over each AOI throughout all N! spatial permutations and the variance of each AOI's salient pixels. Based on this relationship, object-oriented saliency models could eventually be used to predict dwell-time distributions over the AOIs of a given image.
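The correlation described in goal (b) can be illustrated with a minimal sketch. All values and function names below are hypothetical assumptions for illustration only, not the authors' code or data: we take a mean dwell time per AOI (averaged across spatial permutations) and the variance of the saliency-map pixels inside each AOI, and compute a plain Pearson correlation between the two.

```python
# Hypothetical sketch of the dwell-time vs. AOI-saliency-variance analysis
# described in the abstract. Data values are illustrative, not from the study.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Assumed mean dwell time (seconds) per AOI, averaged over all N! spatial
# permutations of the image (illustrative values for N = 4 objects).
dwell_times = [1.8, 1.2, 0.9, 0.6]

# Assumed variance of the saliency-map pixel values inside each AOI
# (illustrative values).
saliency_variance = [0.042, 0.031, 0.018, 0.011]

r = pearson(dwell_times, saliency_variance)
print(f"Pearson r between dwell time and AOI saliency variance: {r:.3f}")
```

In practice, dwell times would come from fixation data mapped onto AOI bounding boxes, and the saliency statistics from a model-generated saliency map; a library routine such as `scipy.stats.pearsonr` would also report a p-value.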
Item Type: Conference or Workshop Item (Paper)
ISSN: 0301-0066
Group: Faculty of Science & Technology
ID Code: 36538
Deposited By: Symplectic RT2
Deposited On: 24 Jan 2022 14:22
Last Modified: 14 Mar 2022 14:32