Hulusic, V., Debattista, K., Aggarwal, V. and Chalmers, A., 2011. Maintaining frame rate perception in interactive environments by exploiting audio-visual cross-modal interaction. The Visual Computer, 27 (1), 57 - 66.
Full text available as: PDF — hulusic2011maintaining.pdf (Accepted Version, 806 kB), available under a Creative Commons Attribution licence.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1007/s00371-010-0514-2
Abstract
The entertainment industry, primarily the video games industry, continues to dictate the development and performance requirements of graphics hardware and computer graphics algorithms. However, despite the enormous progress in the last few years, it is still not possible to meet some of the industry’s demands, in particular high-fidelity rendering of complex scenes in real time on a single desktop machine. The realisation that sound/music and other senses are important to entertainment led to an investigation of alternative methods, such as cross-modal interaction, in order to try to achieve the goal of “realism in real-time”. In this paper we investigate the cross-modal interaction between vision and audition for reducing the amount of computation required to compute visuals, by introducing movement-related sound effects. Additionally, we look at the effect of camera movement speed on temporal visual perception. Our results indicate that slow animations are perceived as smoother than fast animations. Furthermore, introducing the sound effect of footsteps to walking animations further increased the perceived smoothness of the animation. This has the consequence that, for certain conditions, the number of frames that need to be rendered each second can be reduced, saving valuable computation time, without the viewer being aware of this reduction. The results presented are another step towards a full understanding of auditory-visual cross-modal interaction and its importance in helping achieve “realism in real-time”.
| Item Type: | Article |
|---|---|
| ISSN: | 0178-2789 |
| Uncontrolled Keywords: | cross-modal; perception; high-fidelity rendering |
| Group: | Faculty of Science & Technology |
| ID Code: | 30364 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 15 Feb 2018 16:40 |
| Last Modified: | 14 Mar 2022 14:09 |