
Easy Geometric Modeling Based On Deep Learning

Posted on: 2022-09-01
Degree: Doctor
Type: Dissertation
Country: China
Candidate: D Du
Full Text: PDF
GTID: 1488306323980169
Subject: Computational Mathematics
Abstract/Summary:
With the steady demand for in-depth perception of the physical world, computer-aided geometry processing has always been a central topic in 3D computer graphics. Generating geometric models according to defined rules is the basis for various downstream applications such as rendering, animation, shape analysis, and manufacturing. Meanwhile, the rise of ubiquitous design and manufacturing concepts, together with the rapid development of 3D printing, has motivated users without design or manufacturing backgrounds to customize their own 3D models. Although various commercial software packages have been developed for modeling 3D shapes, they expect users to possess considerable expertise in 3D modeling, and designing models with them can be laborious and time-consuming. To simplify the 3D modeling process, traditional geometric reconstruction methods turn to scan-based and multi-view reconstruction, which first acquires point clouds of the surfaces, then registers them together, and finally reconstructs surface meshes. However, these methods rely heavily on specific hardware and controlled environments, which are unaffordable for most novice users. Helping users easily create their desired shapes has therefore become an urgent topic in recent 3D geometric modeling research.

For novice users, the interactive modeling process is expected to be easy, intuitive, and robust: the system should help users model highly detailed 3D shapes from minimal input. While drawing a 2D sketch with a mouse or taking a photo with a mobile phone is accessible to most users, the underlying information is too coarse to specify a detailed 3D shape, and traditional optimization-based methods fail when there is large ambiguity between the inputs and the underlying shapes. In this thesis, we design three deep learning-based models that learn the mapping between 2D pictures and 3D shapes from large collections of data. We then carefully design an interactive modeling interface for each model, which significantly reduces the manual effort needed to create high-quality meshes:

1) Deep sketch-based modeling of organic objects. Organic objects, such as animal heads, are often rich in shape and detail. To design such models, we propose a coarse-to-fine "view-surface" joint learning framework for mesh generation, which first generates a coarse shape and then enhances its local geometric details. Specifically, the input sketch's shape features are extracted with an image encoder, and a graph convolutional neural network then guides the deformation of a template mesh to obtain an initial model. Afterward, the vertex map corresponding to the initial mesh is obtained with a differentiable mesh renderer, and a vertex displacement map reflecting the geometric details is generated with an image synthesis network. Finally, the vertex displacements are back-projected onto the initial mesh, and the surface details are further optimized with another graph convolutional neural network. Viewpoint-based detail generation and surface-based detail refinement are performed iteratively to obtain high-quality results. In addition, we contribute the largest dataset of animal heads, together with corresponding sketches, to date.

2) Deep sketch-based modeling of man-made objects. Unlike organic models, man-made objects can be decomposed into independent parts but exhibit complex structures. In this thesis, we propose a bottom-up "part-structure" framework for mesh generation, which first generates each part with implicit learning and then regresses each part's scale and position with a multi-layer perceptron. Whenever the user sketches a part stroke, the corresponding part mesh is generated and presented in real time. After the user completes the sketch, the system automatically learns the structural relationships among the parts and generates the assembled result, which the user can further modify to obtain the desired shape. Compared with previous systems, ours achieves high-quality geometry by focusing on part generation and significantly accelerates interactive modeling by avoiding time-consuming manual part placement.

3) Deep single-view modeling of general shapes. The input sketch is sometimes extremely coarse owing to the user's limited drawing skill, preventing most existing models from generating satisfying shapes. We propose a single-view 3D reconstruction model that jointly learns voxel, point cloud, and implicit field reconstruction in an end-to-end manner. The proposed model not only produces accurate modeling results but also improves inference efficiency.

3D shapes are highly diverse and difficult to obtain, so available datasets are scarce. Nevertheless, high-quality 3D models can be obtained efficiently with deep learning by incorporating class-specific domain knowledge. Through extensive experiments, we verify the feasibility and effectiveness of 3D modeling for novices. The proposed systems and corresponding datasets are made public to promote research on deep learning-based geometric modeling.
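The coarse-to-fine "view-surface" pipeline of the first contribution can be sketched as the following toy NumPy program. All tensor shapes, layer sizes, and the use of random linear maps in place of the trained image encoder, graph convolutional network, and displacement-map network are assumptions for illustration only; the real system also involves a differentiable renderer and iterative refinement that are omitted here.

```python
import numpy as np

# Hypothetical, untrained stand-ins for the learned components described in
# the abstract. Shapes and weights are arbitrary illustration choices.
rng = np.random.default_rng(0)

def encode_sketch(sketch):
    """Stand-in for the image encoder: project the flattened sketch to a feature vector."""
    W = rng.standard_normal((sketch.size, 64)) * 0.01
    return sketch.ravel() @ W            # (64,) shape feature

def deform_template(template_verts, feat):
    """Stand-in for the graph-CNN stage: offset template vertices from the feature."""
    W = rng.standard_normal((64, 3)) * 0.01
    return template_verts + feat @ W     # coarse mesh vertices

def refine(coarse_verts, feat):
    """Stand-in for the displacement-map stage: add a small predicted detail offset."""
    W = rng.standard_normal((64, 3)) * 0.001
    return coarse_verts + feat @ W       # detailed mesh vertices

sketch = rng.random((32, 32))                 # toy 2D sketch image
template = rng.standard_normal((100, 3))      # toy template mesh, 100 vertices

feat = encode_sketch(sketch)
coarse = deform_template(template, feat)      # coarse stage
detailed = refine(coarse, feat)               # fine stage

print(coarse.shape, detailed.shape)           # both (100, 3)
```

The point of the sketch is the two-stage structure: the coarse stage moves the template toward the overall shape, and the fine stage only adds a small per-vertex displacement on top, mirroring the coarse-to-fine decomposition described above.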
Keywords/Search Tags:Sketch-based modeling, Image-based modeling, Interactive modeling, Animal head modeling, Assembly-based modeling, Deep learning