
Semantic modeling of indoor scenes with support inference from a single photograph.

Nie, Y., Chang, J., Chaudhry, E., Guo, S., Smart, A. and Zhang, J. J., 2018. Semantic modeling of indoor scenes with support inference from a single photograph. Computer Animation and Virtual Worlds, 29 (3-4), e1825.

Full text available as:

CASA2018_revised_Yinyu.pdf - Accepted Version (PDF, 6MB)
Available under License Creative Commons Attribution Non-commercial No Derivatives.
DOI: 10.1002/cav.1825

Abstract

We present an automatic approach for the semantic modeling of indoor scenes from a single photograph, without relying on depth sensors. Instead of handcrafted features, we guide indoor scene modeling with feature maps extracted by fully convolutional networks. Three parallel fully convolutional networks are adopted to generate object instance masks, a depth map, and an edge map of the room layout. Based on these high-level features, support relationships between indoor objects can be efficiently inferred in a data-driven manner. Constrained by the support context, a global-to-local model matching strategy retrieves the whole indoor scene. We demonstrate that the proposed method efficiently retrieves indoor objects, even when they are heavily occluded. This approach enables efficient semantic-based scene editing.
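To make the pipeline described in the abstract concrete, the sketch below shows how three parallel fully convolutional branches and a simple support-inference step could be wired together in PyTorch. This is not the authors' implementation: the branch architectures, the class count, and the depth/adjacency heuristic in infer_support are illustrative assumptions, and the paper's data-driven support inference and global-to-local model matching are not reproduced here.

    import torch
    import torch.nn as nn


    def fcn_branch(out_channels):
        # A toy fully convolutional branch: RGB image in, dense map out.
        return nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, kernel_size=1),
        )


    class SceneFeatureNet(nn.Module):
        # Three parallel FCN branches, as described in the abstract:
        # object instance masks, a depth map, and a room-layout edge map.
        def __init__(self, num_object_classes=40):  # class count is an assumption
            super().__init__()
            self.mask_branch = fcn_branch(num_object_classes)
            self.depth_branch = fcn_branch(1)
            self.edge_branch = fcn_branch(1)

        def forward(self, image):
            return {
                "masks": self.mask_branch(image),   # per-class object scores
                "depth": self.depth_branch(image),  # dense depth estimate
                "edges": self.edge_branch(image),   # room-layout edge map
            }


    def infer_support(masks, depth, depth_tol=0.3):
        # Illustrative heuristic only: object A is taken to be supported by
        # object B if B's pixels lie directly below A's lowest pixels and the
        # two regions have similar depth. The paper instead infers support
        # relationships in a data-driven manner.
        labels_map = masks.argmax(dim=0)                  # (H, W) hard label map
        relations = []
        for a in labels_map.unique().tolist():
            ys, xs = torch.nonzero(labels_map == a, as_tuple=True)
            row_below = min(int(ys.max()) + 1, labels_map.shape[0] - 1)
            for b in labels_map[row_below, xs].unique().tolist():
                if b == a:
                    continue
                depth_a = depth[0, ys, xs].mean()
                depth_b = depth[0, labels_map == b].mean()
                if torch.abs(depth_a - depth_b) < depth_tol:
                    relations.append((a, b))              # "a is supported by b"
        return relations


    # Usage on a single RGB photograph (random weights, so the outputs are
    # meaningless until the branches are trained):
    net = SceneFeatureNet()
    image = torch.rand(1, 3, 240, 320)
    feats = net(image)
    support = infer_support(feats["masks"][0], feats["depth"][0])

The retrieved support pairs would then constrain which 3D models are matched globally to the scene and locally to each object, which is the stage the sketch leaves out.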

Item Type: Article
ISSN: 1546-4261
Uncontrolled Keywords: fully convolutional network; indoor scene reconstruction; semantic modeling; support inference
Group: Faculty of Media & Communication
ID Code: 30856
Deposited By: Symplectic RT2
Deposited On: 15 Jun 2018 09:46
Last Modified: 31 May 2023 15:44
