“Point2Mesh: A Self-prior for Deformable Meshes” by Hanocka, Metzer, Giryes, and Cohen-Or

  • ©Rana Hanocka, Gal Metzer, Raja Giryes, and Daniel Cohen-Or

Title:

    Point2Mesh: A Self-prior for Deformable Meshes

Session/Category Title: Geometric Deep Learning


Abstract:


    In this paper, we introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud. Instead of explicitly specifying a prior that encodes the expected shape properties, the prior is defined automatically using the input point cloud, which we refer to as a self-prior. The self-prior encapsulates recurring geometric repetitions from a single shape within the weights of a deep neural network. We optimize the network weights to deform an initial mesh to shrink-wrap a single input point cloud. This explicitly considers the entire reconstructed shape, since shared local kernels are calculated to fit the overall object. The convolutional kernels are optimized globally across the entire shape, which inherently encourages local-scale geometric self-similarity across the shape surface. We show that shrink-wrapping a point cloud with a self-prior converges to a desirable solution, whereas a prescribed smoothness prior often becomes trapped in undesirable local minima. While the performance of traditional reconstruction approaches degrades in non-ideal conditions that are often present in real-world scanning, i.e., unoriented normals, noise, and missing (low-density) parts, Point2Mesh is robust to such non-ideal conditions. We demonstrate the performance of Point2Mesh on a large variety of shapes with varying complexity.
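
    The abstract describes optimizing the weights of a network, rather than the mesh vertices directly, so that an initial mesh shrink-wraps the input point cloud. The following is a minimal PyTorch sketch of that idea only, not the authors' implementation: the per-vertex MLP (DisplacementNet) and the naive vertex-to-point chamfer distance are hypothetical stand-ins, whereas the paper's actual self-prior uses mesh convolutions with shared kernels and measures distance to points sampled on the mesh surface. All names and hyperparameters below are illustrative assumptions.

    # Minimal shrink-wrap sketch (assumptions noted above), not the Point2Mesh code.
    import torch
    import torch.nn as nn

    def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        """Symmetric chamfer distance between point sets of shape (N, 3) and (M, 3)."""
        d = torch.cdist(a, b)                          # pairwise distances, shape (N, M)
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

    class DisplacementNet(nn.Module):
        """Stand-in for the self-prior network: maps initial vertices to displaced vertices."""
        def __init__(self, hidden: int = 256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, verts: torch.Tensor) -> torch.Tensor:
            return verts + self.mlp(verts)             # deformed vertex positions

    def shrink_wrap(init_verts: torch.Tensor, point_cloud: torch.Tensor,
                    iters: int = 1000, lr: float = 1e-3) -> torch.Tensor:
        """Optimize the network weights so the deformed mesh vertices fit the point cloud."""
        net = DisplacementNet()
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            deformed = net(init_verts)
            loss = chamfer_distance(deformed, point_cloud)
            loss.backward()
            opt.step()
        return net(init_verts).detach()

    # Example usage with random placeholder data:
    #   init_verts = torch.rand(500, 3)      # vertices of a coarse initial mesh
    #   cloud = torch.rand(2000, 3)          # input scan
    #   reconstructed_verts = shrink_wrap(init_verts, cloud)

    Because the loss is driven through one shared set of network weights fit to the whole shape, this setup reflects the abstract's point that the kernels are optimized globally across the entire object.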

