“DeepMVSHair: Deep Hair Modeling from Sparse Views” by Kuang, Chen, Fu, Zhou and Zheng


Conference:

    SIGGRAPH Asia 2022
Type(s):

    Technical Papers
Title:

    DeepMVSHair: Deep Hair Modeling from Sparse Views

Session/Category Title:   Technical Papers Fast-Forward


Presenter(s)/Author(s):

    Kuang, Chen, Fu, Zhou and Zheng

Abstract:


    We present DeepMVSHair, the first deep learning-based method for multi-view hair strand reconstruction. The key component of our pipeline is HairMVSNet, a differentiable neural architecture that implicitly represents a spatial hair structure as a continuous 3D hair-growing direction field. Specifically, given a 3D query point, we decide its occupancy (whether it lies inside the hair volume) and its growing direction from observed 2D structure features. With the query point’s pixel-aligned features from each input view, we utilize a view-aware transformer encoder to aggregate anisotropic structure features into an integrated representation, which is decoded to yield the 3D occupancy and direction at the query point. HairMVSNet effectively gathers multi-view hair structure features and preserves high-frequency details based on this implicit representation. Guided by HairMVSNet, our hair-growing algorithm produces results faithful to the input multi-view images. We further propose a novel image-guided multi-view strand deformation algorithm to enrich modeling details. Extensive experiments show that the results of our sparse-view method are comparable to those of state-of-the-art dense multi-view methods and significantly better than those of existing single-view and sparse-view methods. In addition, our method is an order of magnitude faster than previous multi-view hair modeling methods.
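
    The abstract describes a per-point query: project a 3D point into each view, sample pixel-aligned 2D structure features, fuse them with a view-aware transformer encoder, and decode occupancy plus growing direction. Below is a minimal PyTorch-style sketch of that idea; all module names, feature dimensions, and the pinhole projection model are assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class HairQueryNet(nn.Module):
    """Sketch of a HairMVSNet-style query: aggregate per-view pixel-aligned
    features for 3D points and decode an occupancy logit plus a growing
    direction. Hypothetical names and dimensions; not the authors' code."""

    def __init__(self, feat_dim=64, num_heads=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads,
                                           batch_first=True)
        self.view_encoder = nn.TransformerEncoder(layer, num_layers)
        self.occ_head = nn.Linear(feat_dim, 1)   # inside-hair-volume logit
        self.dir_head = nn.Linear(feat_dim, 3)   # unnormalized 3D direction

    def forward(self, feat_maps, cams, points):
        # feat_maps: (V, C, H, W) 2D structure feature maps, one per input view
        # cams:      (V, 3, 4)    world-to-pixel projection matrices
        # points:    (N, 3)       3D query points
        H, W = feat_maps.shape[-2:]
        ones = torch.ones_like(points[:, :1])
        homog = torch.cat([points, ones], dim=-1)                     # (N, 4)
        pix = torch.einsum('vij,nj->vni', cams, homog)                # (V, N, 3)
        uv = pix[..., :2] / pix[..., 2:].clamp(min=1e-6)              # pixel coords
        scale = uv.new_tensor([W - 1, H - 1])
        grid = (uv / scale * 2.0 - 1.0).unsqueeze(2)                  # (V, N, 1, 2) in [-1, 1]
        sampled = F.grid_sample(feat_maps, grid, align_corners=True)  # (V, C, N, 1)
        tokens = sampled.squeeze(-1).permute(2, 0, 1)                 # (N, V, C): one token per view
        fused = self.view_encoder(tokens).mean(dim=1)                 # view-aware aggregation -> (N, C)
        occupancy = self.occ_head(fused)                              # (N, 1)
        direction = F.normalize(self.dir_head(fused), dim=-1)         # (N, 3), unit growing direction
        return occupancy, direction


# Example query on random data (4 views, 64-channel feature maps, 1000 points):
net = HairQueryNet()
occ, direc = net(torch.randn(4, 64, 128, 128), torch.randn(4, 3, 4), torch.rand(1000, 3))

    In the pipeline described above, such a query network would be evaluated throughout the volume, and the hair-growing algorithm would then trace strands along the predicted direction field.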


Overview Page:



Submit a story:

If you would like to submit a story about this presentation, please contact us: historyarchives@siggraph.org