Struct2Hair: A hair shape descriptor for hairstyle modeling.

Zhang, W., Nie, Y., Guo, S., Chang, J., Zhang, J. J. and Tong, R., 2022. Struct2Hair: A hair shape descriptor for hairstyle modeling. Computer Animation and Virtual Worlds, e2128. (In Press)

Full text available as:

Struct2Hair_CASA2022.pdf - Accepted Version
Available under License Creative Commons Attribution Non-commercial.


DOI: 10.1002/cav.2128


In recent years, it has become possible to extract hair information for hair reconstruction from multiple cameras or a monocular camera. Using a single image as input avoids the high-cost setup and complex calibration required by multi-view reconstruction. Taking advantage of an extendible hairstyle database, this paper introduces Struct2Hair, a novel single-view hair modeling approach based on a hair shape descriptor (HSD). The HSD is defined as a fundamental structure-aware feature, which is a combination of critical shapes in a hairstyle. A complete dataset of critical hair shapes is constructed from a known database of three-dimensional (3D) hair models. We first analyze the input two-dimensional (2D) image to automatically extract the orientation information and a 2D hair sketch. The extracted information is then used to retrieve the corresponding critical shapes with optimization to build a robust HSD. Finally, the HSD constructs a weighted 3D hair orientation field to guide full-head hair model generation. Owing to the HSD, our method can preserve local geometric features of hair while retaining the overall shape of the hairstyle globally, which benefits further hair editing and stylization.
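The first stage of the pipeline, extracting per-pixel orientation information from the input 2D image, is commonly implemented in single-view hair modeling with a bank of oriented filters. The paper does not specify its filter design, so the following is only a minimal sketch of that general idea, assuming a Gabor filter bank and a per-pixel argmax over filter responses; all function names and parameter values here are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, ksize=9, sigma=2.0, lambd=4.0):
    # Oriented Gabor kernel: responds strongly to strand-like
    # intensity patterns aligned with angle theta (radians).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope times a cosine carrier perpendicular to theta.
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * yr / lambd)

def orientation_map(gray, n_orient=8):
    # For each pixel, keep the orientation whose filter gives the
    # strongest absolute response -- a dense 2D orientation field.
    thetas = np.linspace(0.0, np.pi, n_orient, endpoint=False)
    responses = np.stack(
        [np.abs(convolve(gray, gabor_kernel(t))) for t in thetas]
    )
    return thetas[np.argmax(responses, axis=0)]
```

In a full system, such an orientation map (together with the traced 2D hair sketch) would drive the retrieval of critical shapes and, ultimately, the weighted 3D orientation field that guides strand growth.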

Item Type: Article
Uncontrolled Keywords: data-driven modeling; hair shape descriptor; hairstyle modeling
Group: Faculty of Media & Communication
ID Code: 37986
Deposited By: Symplectic RT2
Deposited On: 12 Jan 2023 12:29
Last Modified: 20 Nov 2023 01:08

