Given several parts of a single shape, how can we “stitch” them together so as to reconstruct the shape? Alternatively, how can we remove a fixed part from a given shape? What if each part deforms independently? Crucially, how can we solve these tasks without requiring a correspondence?
The problem of solving rigid or non-rigid shape puzzles is extremely hard (see Litany et al. [2016]). It involves several sub-problems, including shape discretization, matching, and generation, and has been at the heart of applied data sciences for decades. We propose to address this task by side-stepping the need to solve many of these sub-problems. To do so, we resort to a recent family of computational techniques revolving around the computation of the Laplacian eigenvalues of the given shapes.
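To make this building block concrete, the following is a minimal sketch (not the code used in this work) of how the Laplacian eigenvalues of a triangle mesh can be computed with the standard cotangent discretization; the function names and the use of SciPy's shift-invert eigensolver are our own illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cotangent_laplacian(verts, faces):
    """Stiffness matrix W and lumped mass matrix M of the cotangent Laplacian.

    verts: (n, 3) float array of vertex positions
    faces: (m, 3) int array of triangle indices
    """
    n = verts.shape[0]
    I, J, V = [], [], []
    areas = np.zeros(n)
    for tri in faces:
        p = verts[tri]
        # Edge vector e[k] is opposite to vertex tri[k] in the triangle.
        e = np.array([p[2] - p[1], p[0] - p[2], p[1] - p[0]])
        double_area = np.linalg.norm(np.cross(e[0], -e[2]))
        areas[tri] += double_area / 6.0            # barycentric (lumped) mass
        for k in range(3):
            i, j = tri[(k + 1) % 3], tri[(k + 2) % 3]
            # Half-cotangent of the angle at vertex tri[k], opposite edge (i, j).
            w = 0.5 * np.dot(e[(k + 1) % 3], -e[(k + 2) % 3]) / double_area
            I += [i, j, i, j]; J += [j, i, i, j]; V += [-w, -w, w, w]
    W = sp.csr_matrix((V, (I, J)), shape=(n, n))   # positive semi-definite
    M = sp.diags(areas)
    return W, M

def laplacian_spectrum(verts, faces, k=30):
    """Smallest k eigenvalues of the generalized problem W * phi = lambda * M * phi."""
    W, M = cotangent_laplacian(verts, faces)
    evals = spla.eigsh(W, k=k, M=M, sigma=-1e-8, which='LM',
                       return_eigenvectors=False)
    return np.sort(evals)
```

For a watertight mesh the first eigenvalue is numerically zero and the remaining ones grow with frequency; in practice the spectrum is usually truncated to the first few dozen values and possibly normalized to factor out global scale.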
It has long been known that the shape of an object determines the spectrum of its Laplacian, which encodes rich geometric information about the shape itself. Recent works (see Cosmo et al. [2019], Rampini et al. [2019]) have shown that the inverse problem of recovering a shape from its Laplacian spectrum is approachable in practice. We claim that solving a non-rigid shape puzzle can be phrased entirely in terms of the shapes' spectra: we devise a learning scheme that performs set operations directly on these spectra and then, in a second stage, reconstructs the desired shape from the estimated spectrum.
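As a purely illustrative sketch of what a set operation on spectra can look like, the snippet below trains a small network to map the eigenvalue sequences of two parts to the eigenvalue sequence of their union; the architecture, the choice of k = 30 eigenvalues, and the plain MSE loss are hypothetical placeholders rather than the model proposed in this work.

```python
import torch
import torch.nn as nn

class SpectralUnion(nn.Module):
    """Toy regressor from the spectra of two parts to the spectrum of their union."""
    def __init__(self, k=30, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * k, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, k),
        )

    def forward(self, evals_a, evals_b):
        # Concatenate the two (batch, k) eigenvalue sequences and regress
        # the (batch, k) eigenvalues of the union shape.
        return self.net(torch.cat([evals_a, evals_b], dim=-1))

# Training-step sketch: supervise with the spectra of known full shapes.
model = SpectralUnion(k=30)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(evals_a, evals_b, evals_union):
    opt.zero_grad()
    loss = loss_fn(model(evals_a, evals_b), evals_union)
    loss.backward()
    opt.step()
    return loss.item()
```

A real model would likely need the operation to be symmetric in its two inputs (e.g., via a permutation-invariant encoder) and the spectra to be normalized against scale and truncation effects; these choices are omitted here for brevity.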
Learning to perform set operations in the space of eigenvalues enables an isometry-invariant, correspondence-free formulation of the problem. Since no explicit correspondence between the different parts is required, we can operate on heterogeneous datasets (e.g., point clouds together with meshes) at varying discretization quality, resulting in an especially robust pipeline that can be adopted in several practical settings.