Multi-Modal Perception for Selective Rendering.

Harvey, C., Debattista, K., Bashford-Rogers, T. and Chalmers, A., 2016. Multi-Modal Perception for Selective Rendering. Computer Graphics Forum: the international journal of the Eurographics Association, 36 (1), 172-183.

Full text: Multi-Modal Perception for Selective Rendering.pdf (PDF, Accepted Version, 39MB). Available under License Creative Commons Attribution Non-commercial No Derivatives.

DOI: 10.1111/cgf.12793

Abstract

A major challenge in generating high-fidelity virtual environments (VEs) is providing realism at interactive rates. High-fidelity simulation of light and sound remains unachievable in real time because such physical accuracy is very computationally demanding. Only recently has visual perception been exploited in high-fidelity rendering to improve performance: parts of the scene that are not currently being attended to by the viewer can be rendered at a much lower quality without the difference being perceived. This paper investigates the effect of spatialised directional sound on a viewer's visual attention to rendered images. These perceptual effects are exploited in selective rendering pipelines through the use of multi-modal maps. The multi-modal maps are evaluated through psychophysical experiments to examine their applicability to selective rendering algorithms with a series of fixed-cost rendering functions, and are found to perform significantly better than image saliency maps naively applied to multi-modal virtual environments.
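As a rough illustration of the idea, the Python sketch below combines a visual saliency map with a Gaussian attention boost around the on-screen position of a directional sound source, then distributes a fixed sample budget in proportion to the combined map. The function names, the linear blend, and the sigma and audio_weight parameters are all assumptions made for illustration; this is a minimal sketch of the concept, not the paper's actual multi-modal map construction.

    import numpy as np

    def multimodal_map(saliency, sound_px, sigma=80.0, audio_weight=0.5):
        # saliency: H x W visual saliency map with values in [0, 1]
        # sound_px: (row, col) pixel where the spatialised sound appears to originate
        # sigma / audio_weight: hypothetical spread and blend parameters
        h, w = saliency.shape
        rows, cols = np.mgrid[0:h, 0:w]
        d2 = (rows - sound_px[0]) ** 2 + (cols - sound_px[1]) ** 2
        audio = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian boost around the sound source
        combined = (1.0 - audio_weight) * saliency + audio_weight * audio
        return combined / combined.max()          # normalise back to [0, 1]

    def allocate_samples(importance, total_samples):
        # Fixed-cost budget: assign per-pixel ray samples in proportion to
        # importance, with at least one sample per pixel.
        weights = importance / importance.sum()
        return np.maximum(1, np.round(weights * total_samples)).astype(int)

    # Example: a 4-samples-per-pixel average budget, concentrated where
    # attention is expected (here with a random stand-in saliency map).
    saliency = np.random.rand(256, 256)
    spp = allocate_samples(multimodal_map(saliency, (120, 200)),
                           total_samples=4 * saliency.size)

Under such a fixed-cost scheme, setting audio_weight to zero recovers a purely visual saliency allocation, which is the naive baseline the abstract reports the multi-modal maps outperforming.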

Item Type: Article
ISSN: 1467-8659
Uncontrolled Keywords: multi-modal; saliency; sound; graphics; selective rendering
Group: Faculty of Science & Technology
ID Code: 27386
Deposited By: Symplectic RT2
Deposited On: 01 Mar 2017 12:17
Last Modified: 14 Mar 2022 14:03
