Wang, P., Yuan, C., Guo, J., Yang, X., Li, H., Stephenson, I., Chang, J. and Cao, Y., 2025. Taming High-Resolution Auxiliary G-Buffers for Deep Supersampling of Rendered Content. IEEE Transactions on Visualization and Computer Graphics, 31 (12), 10609-10623.
Full text available as:
PDF: Taming_High_Resolution_Auxiliary_G_buffers_for_Deep_Supersampling_of_Rendered_Content_nocolor.pdf (Accepted Version, 28MB). Available under License Creative Commons Attribution Non-commercial.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material. |
DOI: 10.1109/TVCG.2025.3609456
Abstract
High-resolution images come with rich color information and texture detail, but with the rapid evolution of display devices and rendering technologies, high-resolution real-time rendering faces a significant computational overhead. The current mainstream solution is to render at a lower resolution and then upsample to the target resolution with supersampling techniques. However, while many prior supersampling approaches have attempted to exploit rich rendered data such as color, depth, and motion vectors at low resolution, there has been little discussion of how to harness the high-frequency information that is readily available in the high-resolution (HR) G-buffers of modern renderers. In this article, we investigate how to fully leverage information from HR G-buffers to maximize the visual quality of supersampling results. We propose a neural network for real-time supersampling of rendered content built on several core designs: a gated G-buffers encoder, a G-buffers attended encoder, and a reflection-aware loss. These designs are tailored to use HR G-buffers effectively, enabling faithful recovery of a variety of high-frequency scene details from low-resolution, highly aliased inputs. Furthermore, a simple occlusion-aware blender is proposed to efficiently rectify dis-occluded features in the warped previous frame, allowing us to better exploit history information and improve temporal stability. Experiments show that our method, with its strong ability to harness HR G-buffer information, significantly improves the visual fidelity of high-resolution reconstructions over previous state-of-the-art methods, even for challenging 4×4 upsampling, while remaining compute-efficient.
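The two ideas the abstract names most concretely — gating low-resolution color features with information from the HR G-buffers, and blending warped history with the current frame under an occlusion mask — can be illustrated with a minimal numeric sketch. This is not the authors' architecture: the per-pixel linear gate (`w`, `b`), the feature shapes, and both function names are assumptions made purely for illustration; the paper's actual encoders are learned convolutional networks.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_gbuffer_fusion(color_feats, gbuffer_feats, w, b):
    """Modulate low-res color features with a gate derived from HR
    G-buffer features (hypothetical per-pixel linear gate, not the
    paper's learned gated encoder).

    color_feats:   (H, W, C)  upsampled color features
    gbuffer_feats: (H, W, Cg) high-resolution G-buffer features
    w, b:          (Cg, C), (C,) gate parameters
    """
    gate = sigmoid(gbuffer_feats @ w + b)  # (H, W, C), values in (0, 1)
    return color_feats * gate

def occlusion_aware_blend(current, warped_prev, occlusion_mask):
    """Fall back to the current frame wherever the warped history is
    dis-occluded (mask = 1 marks invalid history pixels)."""
    return occlusion_mask * current + (1.0 - occlusion_mask) * warped_prev
```

With a zero-initialized gate (`sigmoid(0) = 0.5`) the fusion halves the color features, and with a mask of all ones the blender returns the current frame unchanged; a learned network would of course produce spatially varying gates and masks.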
| Item Type: | Article |
|---|---|
| ISSN: | 1077-2626 |
| Uncontrolled Keywords: | Image reconstruction; History; Rendering (computer graphics); Real-time systems; Logic gates; Electronic mail; Superresolution; Image resolution; Image color analysis; Feature extraction; Rendering; supersampling; upscaling |
| Group: | Faculty of Media, Science and Technology |
| ID Code: | 41604 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 02 Dec 2025 13:54 |
| Last Modified: | 02 Dec 2025 13:54 |