Audio-Visual Resource Allocation for Bimodal Virtual Environments.

Doukakis, E., Debattista, K., Harvey, C., Bashford-Rogers, T. and Chalmers, A., 2018. Audio-Visual Resource Allocation for Bimodal Virtual Environments. Computer Graphics Forum, 37 (1), 172-183.

Full text available as:

AVRABVE_pp.pdf - Accepted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.


DOI: 10.1111/cgf.13258


Fidelity is of key importance if virtual environments are to be used as authentic representations of real environments. However, simulating the multitude of senses that comprise the human sensory system is computationally challenging. With limited computational resources, it is essential to distribute them carefully in order to deliver the best possible perceptual experience. This paper investigates this balance of resources across multiple scenarios in which combined audio-visual stimulation is delivered to the user. A subjective experiment was undertaken in which participants (N=35) allocated five fixed resource budgets across graphics and acoustic stimuli; increasing the quality of one stimulus decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget increases, an approximately balanced distribution of resources between graphics and acoustics is preferred. Based on the results, an audio-visual quality prediction model is proposed and successfully validated against previously untested budgets and an untested scenario.
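The zero-sum trade-off described in the abstract can be sketched in a few lines. This is purely an illustration, not the paper's model: the function name `allocate_budget`, the starting graphics bias of 0.7, and the linear interpolation toward an even split are all hypothetical placeholders for the trend the study reports (graphics-heavy allocation at low budgets, roughly balanced allocation at high budgets).

```python
def allocate_budget(total_budget, max_budget, base_graphics_bias=0.7):
    """Split a computational budget between graphics and audio (toy sketch).

    base_graphics_bias is a hypothetical graphics share at the smallest
    budget; the share is linearly interpolated toward 0.5 (an even split)
    as total_budget approaches max_budget, mirroring the reported trend.
    """
    frac = total_budget / max_budget  # position within the budget range, 0..1
    graphics_share = base_graphics_bias + (0.5 - base_graphics_bias) * frac
    graphics = graphics_share * total_budget
    audio = total_budget - graphics   # zero-sum: the remainder goes to audio
    return graphics, audio

# At a low budget the split favours graphics; at the full budget it is balanced.
g_low, a_low = allocate_budget(20, 100)
g_high, a_high = allocate_budget(100, 100)
```

Under these assumptions, a 20-unit budget yields a graphics-heavy split while the full 100-unit budget yields an even one, matching the qualitative finding; the actual prediction model in the paper is fitted to the experimental data rather than interpolated linearly.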

Item Type: Article
Additional Information: Engineering and Physical Sciences Research Council. Grant Number: EP/K014056/1
Uncontrolled Keywords: Multi-Modal; Cross-Modal; Bi-Modal; Sound; Graphics
Group: Faculty of Science & Technology
ID Code: 29532
Deposited By: Symplectic RT2
Deposited On: 26 Jul 2017 08:46
Last Modified: 14 Mar 2022 14:06

