
Sketch-based 3D Model Retrieval Based On Deep Learning

Posted on: 2020-04-27
Degree: Master
Type: Thesis
Country: China
Candidate: M J Wang
Full Text: PDF
GTID: 2428330596468277
Subject: Computer software and theory
Abstract/Summary:
In recent years, with the development of virtual reality technology and the popularity of 3D printing, 3D model data has grown rapidly and become more complex and diverse, making the demand for 3D model retrieval technology increasingly urgent. How to retrieve a required 3D model easily and efficiently is a long-standing problem in computer vision. With the development of smart touch-screen technology and the popularity of related devices, hand-drawn sketches can be easily captured on devices such as mobile phones. Compared with other 3D model retrieval methods, sketch-based 3D model retrieval has the advantages of simplicity and convenience, and has therefore become a research hotspot.

There are obvious differences in visual appearance between hand-drawn sketches and 3D models, so it is difficult to directly match their similarity. This thesis uses views of the 3D model as an intermediary to gradually reduce the gap between sketches and 3D models. At the same time, it introduces a deep learning mechanism and proposes a sketch-based 3D model retrieval method based on deep learning. The major contributions are as follows:

(1) A retrieval framework based on a shared semantic space is proposed. The framework consists of three layers of conversion: (a) conversion of the data dimension, in which a 3D model is rendered into 2D views so that both sketches and 3D models have 2D representations; (b) conversion of the visual dimension, in which sketches are translated into views and views into sketches; (c) conversion into a shared semantic feature space, so that sketches and 3D model views lie in the same space, where samples of the same class are aggregated and samples of different classes are separated.

(2) A method for constructing the visual shared space based on a GAN is proposed. The dataset is expanded through the mutual translation of sketches and 3D model views, and this translation also reduces the visual gap between sketches and 3D model views.

(3) A method for constructing the shared semantic space based on metric learning is proposed. The networks are first pre-trained on the translated sketch and view data; the shared semantic space is then constructed and fine-tuned using the pre-trained networks. The thesis also discusses the output dimension of the shared semantic space and the issue of weight sharing.

Based on the above work, the standard datasets SHREC2013 and SHREC2014 are selected as benchmarks for evaluation. The experiments show that the retrieval results are significantly improved over existing methods and still show clear advantages over the current best methods, demonstrating the effectiveness of the proposed approach.
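The abstract does not specify which GAN variant realizes the mutual sketch-to-view translation in contribution (2), so the following is only a minimal sketch under the assumption of an unpaired, CycleGAN-style setup: one generator per direction, one discriminator per domain, and an adversarial plus cycle-consistency objective. All network shapes, names (`G_sv`, `G_vs`, `D_view`, `D_sketch`), and loss weights are illustrative, not the thesis's actual architecture.

```python
# Illustrative sketch only: unpaired GAN-based translation between sketches and
# rendered 3D-model views (assumed CycleGAN-style; the thesis abstract does not
# name the variant). Inputs are assumed to be single-channel images.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.InstanceNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class Generator(nn.Module):
    """Tiny encoder-decoder standing in for a full translation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            ConvBlock(1, 32), ConvBlock(32, 64), ConvBlock(64, 32),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """PatchGAN-style discriminator scoring local realism of an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


G_sv, G_vs = Generator(), Generator()            # sketch -> view, view -> sketch
D_view, D_sketch = Discriminator(), Discriminator()
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()


def generator_step(sketch, view, lambda_cyc=10.0):
    """One illustrative generator update: adversarial + cycle-consistency terms."""
    fake_view, fake_sketch = G_sv(sketch), G_vs(view)
    # Fool the discriminators in both domains (least-squares adversarial loss).
    pred_v, pred_s = D_view(fake_view), D_sketch(fake_sketch)
    loss_adv = (adv_loss(pred_v, torch.ones_like(pred_v)) +
                adv_loss(pred_s, torch.ones_like(pred_s)))
    # Translating there and back again should reconstruct the original input.
    loss_cyc = (cyc_loss(G_vs(fake_view), sketch) +
                cyc_loss(G_sv(fake_sketch), view))
    return loss_adv + lambda_cyc * loss_cyc
```

In this reading, the translated images serve two purposes stated in the abstract: they enlarge the training set, and they narrow the visual gap between the sketch and view domains before the shared semantic space is learned.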
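For contribution (3), the abstract names metric learning, pre-training, the output dimension, and weight sharing, but not the exact loss or backbone. The sketch below therefore assumes a triplet loss over two CNN branches; `embed_dim` and `share_weights` are hypothetical parameters standing in for the output-dimension and weight-sharing choices the thesis discusses, and the ResNet-18 backbone is only a placeholder.

```python
# Illustrative sketch only: a shared semantic space learned with metric learning.
# Same-class sketch/view embeddings are pulled together, different classes pushed
# apart. Triplet loss and ResNet-18 are assumptions, not the thesis's method.
# Inputs are assumed to be 3-channel images.
import torch
import torch.nn as nn
import torchvision.models as models


def make_backbone(embed_dim: int) -> nn.Module:
    """ResNet-18 feature extractor with a projection head into the shared space."""
    net = models.resnet18(weights=None)      # pre-trained weights would be loaded here
    net.fc = nn.Linear(net.fc.in_features, embed_dim)
    return net


class SharedSemanticSpace(nn.Module):
    def __init__(self, embed_dim: int = 128, share_weights: bool = False):
        super().__init__()
        self.sketch_branch = make_backbone(embed_dim)
        # Weight sharing: reuse one network for both modalities; otherwise keep
        # an independent branch for the rendered 3D-model views.
        self.view_branch = self.sketch_branch if share_weights else make_backbone(embed_dim)

    def forward(self, sketch, view):
        f_s = nn.functional.normalize(self.sketch_branch(sketch), dim=1)
        f_v = nn.functional.normalize(self.view_branch(view), dim=1)
        return f_s, f_v


model = SharedSemanticSpace(embed_dim=128, share_weights=False)
triplet = nn.TripletMarginLoss(margin=0.2)


def training_step(sketch, pos_view, neg_view):
    """Anchor sketch, a view from the same class, and a view from another class."""
    anchor, positive = model(sketch, pos_view)
    _, negative = model(sketch, neg_view)
    return triplet(anchor, positive, negative)
```

At retrieval time a query sketch would be embedded once and compared, for example by Euclidean or cosine distance, against the embeddings of all 3D-model views; varying `embed_dim` and toggling `share_weights` corresponds to the ablation questions the abstract raises.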
Keywords/Search Tags: 3D model retrieval, Deep learning, Convolutional neural network, Shared semantic space, Metric learning