
Research On Data-driven 3D Hair Modeling Techniques

Posted on: 2020-05-13    Degree: Doctor    Type: Dissertation
Country: China    Candidate: M Zhang    Full Text: PDF
GTID: 1368330572996514    Subject: Computer Science and Technology
Abstract/Summary:
Hair plays a crucial role in depicting digital characters. In today's emerging VR and AR applications, modeling 3D hair that closely resembles real-world inputs is a demanding task. Due to the wide variety of hairstyles and their intricate geometric structures, it is very challenging to model complete and realistic-looking hair. Traditional manual hair modeling methods, which require professional skills and tedious labor, are not suitable for non-professional users. Recently, image-based hair modeling has received wide attention from researchers because the input information is easy to obtain. Early methods relied on complex capture setups in controlled environments to achieve compelling reconstruction results, but are difficult to adopt in practical applications. State-of-the-art image-based hair modeling techniques use synthetic hairstyle models as prior structural guidance for hair reconstruction, to compensate for the information missing from the image input. The lightweight demands on the quality and quantity of input images make data-driven image-based hair modeling methods more user-friendly and more suitable for consumer-level applications. However, these data-driven methods commonly require large storage for an ever-growing hairstyle database to cover the variety of real-world hairstyles, as well as considerable time in the stages of best-matching candidate search and subsequent refinement.

In this dissertation, we propose three data-driven approaches for 3D hair modeling. We take full advantage of a limited database of hairstyles to efficiently create complete, high-quality strand-level 3D hair models that resemble the hair in all input images. The efficacy of our techniques is demonstrated on a variety of complex hairstyles. Specifically, the three approaches are summarized below:

· We introduce a novel four-view image-based hair modeling method. Given four hair images taken from the front, back, left, and right views as input, we first estimate the rough 3D shape of the hair observed in the input using a predefined database of 3D hair models, then synthesize a hair texture on the surface of that shape using a patch-based method, from which the hair growth direction information is calculated and used to construct a 3D direction field in the hair volume. Finally, we grow hair strands from the scalp, following the direction field, to produce the 3D hair model, which closely resembles the hair in all input images. Our method does not require that all input images come from the same hair, enabling an effective way to create compelling hair models from images of considerably different hairstyles at different views.

· We introduce a fully automatic, data-driven approach to model hair geometry and compute a complete strand-level 3D hair model that closely resembles the input from a single RGB-D camera. Our method heavily exploits the geometric cues contained in the depth channel and leverages exemplars in a 3D hair database for high-fidelity hair synthesis. The core of our method is a local-similarity-based search and synthesis algorithm that simultaneously reasons about the hair geometry, strand connectivity, strand orientation, and the structural plausibility of the hair.

· We introduce Hair-GANs, an architecture of generative adversarial networks, to recover the 3D hair structure from a single image. The goal of our networks is to build a parametric transformation from 2D hair maps to the 3D hair structure. The 3D hair structure is represented as a 3D volumetric field which encodes both the occupancy and the orientation information of the hair strands. Given a single hair image, we first align it to our defined 3D head model and extract a 2D orientation map and a confidence map, along with a bust depth map, to feed into our Hair-GANs. With our generator network, we compute the 3D volumetric field as the structural guidance for the final hair synthesis. The modeling results not only resemble the hair in the input image but also possess many vivid details in other views.
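The first approach grows strands from the scalp by following a 3D direction field defined over the hair volume. A minimal sketch of that tracing step, assuming a voxelized field of unit direction vectors over the unit cube and simple Euler integration (the grid resolution, step size, and nearest-voxel lookup here are illustrative assumptions, not the dissertation's actual implementation):

```python
import numpy as np

def grow_strand(root, direction_field, grid_res, step=0.125, max_steps=100):
    """Trace one hair strand from a scalp root point by Euler integration
    through a voxelized 3D direction field.

    direction_field: (R, R, R, 3) array of direction vectors defined over
    the unit cube [0, 1]^3 (zero vectors mark empty space)."""
    strand = [np.asarray(root, dtype=float)]
    p = strand[0].copy()
    for _ in range(max_steps):
        # Nearest-voxel lookup of the growth direction at the current point.
        i, j, k = np.clip((p * grid_res).astype(int), 0, grid_res - 1)
        d = direction_field[i, j, k]
        n = np.linalg.norm(d)
        if n < 1e-6:                          # left the hair volume
            break
        p = p + step * d / n                  # one Euler step along the field
        if np.any(p < 0.0) or np.any(p > 1.0):  # stepped outside the cube
            break
        strand.append(p.copy())
    return np.array(strand)

# Toy field: every voxel points straight down (-z), as if the hair hangs.
res = 16
field = np.zeros((res, res, res, 3))
field[..., 2] = -1.0
strand = grow_strand(root=[0.5, 0.5, 0.875], direction_field=field,
                     grid_res=res)
```

In the real pipeline the field would come from the synthesized hair texture's growth directions, and tracing would stop at the hair volume's boundary rather than the unit cube.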
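The second approach centers on a local-similarity search against database exemplars. As an illustration of the matching step only, here is a brute-force k-nearest-neighbor search over patch descriptors under squared Euclidean distance; the descriptor contents and the synthesis step that follows are beyond this sketch:

```python
import numpy as np

def best_matching_patches(query_patches, exemplar_patches, k=3):
    """For each query patch descriptor, return the indices of the k most
    similar exemplar patches (smallest squared Euclidean distance)."""
    q = np.asarray(query_patches, dtype=float)[:, None, :]   # (nq, 1, d)
    e = np.asarray(exemplar_patches, dtype=float)[None, :, :]  # (1, ne, d)
    d2 = ((q - e) ** 2).sum(axis=-1)                         # (nq, ne)
    return np.argsort(d2, axis=1)[:, :k]

# Two query descriptors matched against three exemplar descriptors.
idx = best_matching_patches([[0.0, 0.0], [1.0, 1.0]],
                            [[0.0, 0.1], [1.0, 0.9], [5.0, 5.0]], k=2)
```

A practical system would replace the brute-force scan with an approximate nearest-neighbor index, since the exemplar set spans an entire hair database.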
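The third approach represents hair as a volumetric field encoding occupancy and strand orientation. One hypothetical way to rasterize strand polylines into such a field, with channel 0 as occupancy and channels 1-3 as the mean unit tangent per voxel (the channel layout and averaging are assumptions for illustration, not the Hair-GANs encoding itself):

```python
import numpy as np

def strands_to_volume(strands, res=32):
    """Rasterize strand polylines (points in [0, 1]^3) into a
    (res, res, res, 4) field: channel 0 = occupancy, channels 1:4 =
    mean unit tangent of the strand segments crossing each voxel."""
    vol = np.zeros((res, res, res, 4))
    counts = np.zeros((res, res, res))
    for strand in strands:
        tangents = np.diff(strand, axis=0)            # per-segment vectors
        midpoints = (strand[:-1] + strand[1:]) / 2.0  # per-segment centers
        for m, t in zip(midpoints, tangents):
            n = np.linalg.norm(t)
            if n < 1e-9:
                continue
            i, j, k = np.clip((m * res).astype(int), 0, res - 1)
            vol[i, j, k, 0] = 1.0          # mark the voxel as occupied
            vol[i, j, k, 1:] += t / n      # accumulate the unit tangent
            counts[i, j, k] += 1.0
    occ = counts > 0
    vol[occ, 1:] /= counts[occ][:, None]   # average tangents per voxel
    return vol

# A single straight strand running along +x through the volume's center.
pts = np.stack([np.linspace(0.1, 0.9, 9),
                np.full(9, 0.5), np.full(9, 0.5)], axis=1)
vol = strands_to_volume([pts], res=32)
```

The generator network then predicts a field of this kind from the 2D input maps, and strand growing (as in the first approach) turns the field back into geometry.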
Keywords/Search Tags:3D hair modeling, data-driven approach, patch-based synthesis algorithm, generative adversarial networks