“Ultra-High Resolution SVBRDF Recovery from a Single Image” by Guo, Lai, Tu, Tao, Zou, et al. …
Title:
- Ultra-High Resolution SVBRDF Recovery from a Single Image
Presenter(s)/Author(s):
- Guo, Lai, Tu, Tao, Zou, et al.
Abstract:
Existing convolutional neural networks have achieved great success in recovering spatially-varying bidirectional reflectance distribution function (SVBRDF) maps from a single image. However, they mainly focus on handling low-resolution (e.g., 256×256) inputs. Ultra-high resolution (UHR) material maps, although widely adopted in many graphics and vision applications, are notoriously difficult for existing networks to acquire because: 1) finite computational resources bound the input receptive fields and output resolutions; 2) convolutional layers operate locally and lack the ability to capture long-range structural dependencies in UHR images. In this work, we propose an implicit neural reflectance model and a divide-and-conquer solution to address these two challenges simultaneously. We first crop an input UHR image into a set of low-resolution patches, each of which is processed by a local feature extractor (LFE) to extract important details. To fully exploit long-range spatial dependencies and ensure global coherence, we incorporate a global feature extractor (GFE) and several coordinate-aware feature assembly (CAFA) modules into our pipeline. The GFE contains several lightweight material vision transformers (MVT) that have a global receptive field at each scale and can infer long-range relationships in the material. After decoding the globally coherent feature maps assembled by the CAFA modules, the proposed end-to-end method generates UHR SVBRDF maps from a single image with fine spatial details and consistent global structures. Extensive experiments on both synthetic and real-world data verify the superiority of the proposed method.
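
To make the described pipeline concrete, the sketch below mirrors its structure in PyTorch: a per-patch local feature extractor, a lightweight transformer-based global branch standing in for the GFE/MVT, and a coordinate-aware fusion step (standing in for CAFA) before decoding an SVBRDF patch. This is a minimal illustration only; the module internals, channel counts, patching scheme, and the 10-channel map layout are assumptions, not the authors' implementation.

```python
# Hedged sketch of the divide-and-conquer idea from the abstract (PyTorch).
# All design details below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalFeatureExtractor(nn.Module):
    """Stand-in LFE: a small CNN applied independently to each low-res patch."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class GlobalFeatureExtractor(nn.Module):
    """Stand-in GFE: a lightweight transformer encoder over a downsampled view,
    approximating a global receptive field over the whole UHR input."""
    def __init__(self, feat_ch=64, global_size=32):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(global_size)
        self.proj = nn.Conv2d(3, feat_ch, 1)
        layer = nn.TransformerEncoderLayer(d_model=feat_ch, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, full_image):
        g = self.proj(self.pool(full_image))          # B x C x Hg x Wg
        b, c, h, w = g.shape
        tokens = g.flatten(2).transpose(1, 2)          # B x (Hg*Wg) x C
        return self.encoder(tokens).transpose(1, 2).view(b, c, h, w)


class CoordAwareAssembly(nn.Module):
    """Stand-in for CAFA: samples the global feature at the patch's normalized
    coordinates and fuses it with the patch's local feature."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, 1)

    def forward(self, local_feat, global_feat, patch_grid):
        # patch_grid: B x Hp x Wp x 2 normalized (x, y) coords of this patch
        sampled = F.grid_sample(global_feat, patch_grid, align_corners=False)
        return self.fuse(torch.cat([local_feat, sampled], dim=1))


if __name__ == "__main__":
    full = torch.rand(1, 3, 1024, 1024)    # stand-in "UHR" image
    patch = full[:, :, :256, :256]         # top-left low-resolution patch
    # Sampling grid covering the patch's location in the full image ([-1, 1] coords).
    ys = torch.linspace(-1.0, -0.5, 256)
    xs = torch.linspace(-1.0, -0.5, 256)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([xx, yy], dim=-1)[None]

    lfe, gfe, cafa = LocalFeatureExtractor(), GlobalFeatureExtractor(), CoordAwareAssembly()
    decoder = nn.Conv2d(64, 10, 1)  # e.g. diffuse(3)+normal(3)+roughness(1)+specular(3)
    fused = cafa(lfe(patch), gfe(full), grid)
    svbrdf_patch = decoder(fused)
    print(svbrdf_patch.shape)       # torch.Size([1, 10, 256, 256])
```

In this reading, global coherence comes from every patch querying the same globally attended feature map at its own coordinates, so neighboring patches decode against a shared context rather than in isolation.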


