Abstract

In this thesis, we advocate that Computer-Aided Engineering could benefit from a Geometric Deep Learning revolution, much as Deep Learning revolutionized Computer Vision. To this end, we consider a variety of Computer-Aided Engineering problems, including physics simulation, design optimization, shape parameterization, and shape reconstruction. For each of these problems, we develop novel algorithms that use Geometric Deep Learning to improve the capabilities of existing systems.

First, we demonstrate how Geometric Deep Learning architectures can be used to learn to emulate physics simulations. Specifically, we design a neural architecture that, given a 3D surface mesh as input, directly regresses physical quantities of interest defined over the mesh surface. The key to making our approach practical is re-meshing the original shape using a polycube map, which makes it possible to perform computations efficiently on Graphics Processing Units. This yields a speed-up of two orders of magnitude over conventional physics simulators with little loss in accuracy: our main motivation is to provide lightweight performance feedback that improves interactivity in the early stages of design.

Furthermore, being a neural network, our physics emulator is naturally differentiable with respect to its input geometry parameters, which allows us to solve shape design problems through gradient descent. The resulting algorithm outperforms state-of-the-art methods by 5 to 20% on 2D optimization tasks and, unlike existing methods, can also be used to optimize raw 3D geometry. This could empower designers and engineers to improve the performance of a given design automatically, i.e., without requiring specific knowledge of the physics of the problem they are trying to solve.

To perform shape optimization robustly, we develop novel parametric representations for 3D surface meshes that can serve as strong priors during the optimization process. To this end, we introduce a differentiable way to produce explicit surface mesh representations from Neural Signed Distance Functions. Our key insight is that, by reasoning about how perturbations of the implicit field affect local surface geometry, one can ultimately differentiate the 3D locations of surface samples with respect to the underlying neural implicit field. The resulting surface mesh parameterizations can handle topology changes, something that is not feasible with currently available techniques.

Finally, we propose a pipeline for reconstructing and editing 3D shapes from line drawings that leverages our end-to-end differentiable surface mesh representation. When integrated into a user interface that provides camera parameters for the sketches, our latent parameterization can be exploited to refine a 3D mesh so that its projections match the external contours outlined in the sketch. We show that this is crucial for making our approach robust to the domain gap, and that it also enables shape refinement from single pen strokes. This system could allow engineers and designers to translate legacy 2D sketches into real-world 3D models that are readily usable for downstream tasks such as physics simulation or fabrication, or to interact with and modify 3D geometry in the most natural way possible, i.e., with a pen stroke.
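To make the gradient-based design loop concrete, the following is a minimal sketch, assuming only a differentiable emulator that maps shape parameters to a scalar performance measure such as drag. The randomly initialized MLP, the 16-dimensional parameter vector, and the optimizer settings are illustrative placeholders, not the mesh-based architecture described in the thesis.

```python
import torch

# Hypothetical stand-in for the trained physics emulator: any differentiable
# map from shape parameters to a scalar quantity of interest (e.g., drag).
emulator = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

shape_params = torch.zeros(16, requires_grad=True)  # initial design
optimizer = torch.optim.Adam([shape_params], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    predicted_drag = emulator(shape_params).squeeze()  # fast surrogate forward pass
    predicted_drag.backward()                          # gradients w.r.t. the geometry
    optimizer.step()                                   # descend toward lower drag
```

Because the emulator replaces the simulator in the inner loop, each design update costs one forward and one backward pass rather than a full simulation.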
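The differentiation of surface samples with respect to the implicit field admits a compact first-order summary: for a sample v on the zero level set of a signed distance field f(z, .), dv/dz = -n(v) * df/dz(v), where the normal n(v) is the spatial gradient of the field at v. Below is a hedged PyTorch sketch of this mechanism under that assumption; the tiny MLP standing in for a trained neural SDF and the function names are hypothetical.

```python
import torch

# Placeholder neural SDF: maps a latent code z and 3D points x to signed
# distances. A trained network would be used in practice.
class NeuralSDF(torch.nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim + 3, 64), torch.nn.Softplus(),
            torch.nn.Linear(64, 1),
        )

    def forward(self, z, x):
        zx = torch.cat([z.expand(x.shape[0], -1), x], dim=-1)
        return self.net(zx).squeeze(-1)

def attach_surface_samples(sdf, z, v):
    """Make surface samples v (e.g., Marching Cubes vertices, which are not
    differentiable by themselves) differentiable w.r.t. the latent code z,
    via the identity dv/dz = -n(v) * df/dz(v)."""
    v = v.detach().requires_grad_(True)
    f = sdf(z, v)
    n = torch.autograd.grad(f.sum(), v)[0].detach()  # field gradient = normal
    # Same value as v on the surface (where f = 0), but gradients now flow
    # back to z through the -n * f(z, v) term.
    return v.detach() - n * sdf(z, v.detach()).unsqueeze(-1)

# Illustrative usage: any loss on the samples back-propagates to z.
z = torch.zeros(8, requires_grad=True)
sdf = NeuralSDF()
verts = torch.rand(100, 3)            # stand-in for extracted mesh vertices
samples = attach_surface_samples(sdf, z, verts)
loss = samples.pow(2).sum()           # placeholder objective on the surface
loss.backward()                       # gradients reach the latent code z
```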
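Likewise, the contour-matching refinement can be sketched as a one-sided chamfer loss between projected surface samples and points sampled along the drawn contour. The camera matrix P, the contour_uv array, and the omission of occluding-contour selection (in practice only silhouette samples should be penalized) are simplifying assumptions for illustration.

```python
import torch

def silhouette_loss(verts, P, contour_uv):
    """One-sided chamfer loss between projected surface samples and a sketched
    contour. P: hypothetical 3x4 camera projection matrix from the UI;
    contour_uv: (M, 2) points sampled along the drawn external contour."""
    ones = torch.ones(verts.shape[0], 1)
    uv = torch.cat([verts, ones], dim=-1) @ P.T   # homogeneous projection
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    d = torch.cdist(uv, contour_uv)               # pairwise 2D distances
    return d.min(dim=1).values.mean()             # pull samples onto the contour
```

Minimizing this loss over the latent code, via the differentiable surface samples from the previous sketch, deforms the underlying shape, including through topology changes, until its projection matches the sketch.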
