
Research On Efficient Rendering Technology Based On Deep Learning

Posted on: 2021-03-11    Degree: Master    Type: Thesis
Country: China    Candidate: M T Li    Full Text: PDF
GTID: 2428330647451048    Subject: Computer Science and Technology
Abstract/Summary:
Rendering technology, a sub-field of computer graphics, is mainly responsible for converting 3D scenes (including geometry, materials, and illumination) into 2D images. Achieving photorealistic rendering requires carefully modeling the scene and solving a complex high-dimensional integral equation by the Monte Carlo method, which usually consumes large amounts of computing resources and labor. Recently, deep learning has become an important technology and has brought new breakthroughs to many fields. In this thesis, we explore how to improve the efficiency of graphics rendering with deep learning through two specific problems: image reconstruction for gradient-domain rendering and participating media appearance editing.

· Image reconstruction for gradient-domain rendering. Monte Carlo rendering is flexible and general but typically suffers from high variance and slow convergence. Gradient-domain rendering alleviates this problem by additionally sampling gradients and performing image-space Poisson reconstruction. To improve the quality and performance of the reconstruction, we propose a novel and practical deep-learning-based approach in this thesis. The core of the proposed approach is replacing the Poisson solver with a multi-branch auto-encoder, which learns end-to-end a mapping from a noisy input image and its corresponding gradients to a high-quality image. Once trained, the network is fast to evaluate and requires no manual parameter tweaking. Because preparing reference images for training is difficult, we train the network in a completely unsupervised manner: the loss function is defined as an energy function comprising a data fidelity term and a gradient fidelity term. To further reduce the noise of the reconstructed image, the loss function is reinforced with a regularizer constructed from auxiliary features. We conduct comprehensive experiments to validate the effectiveness of the proposed method; the reconstruction takes far less than one second on a recent GPU (e.g., a GTX 1080Ti).

· Participating media appearance editing. Participating media improve the realism of virtual scenes; however, editing their appearance is challenging. To solve this problem, we propose a style-transfer-based editing method in this thesis, which allows the participating media to be modified indirectly through a 2D image. Unlike existing methods that require cumbersome iterative optimizations, the proposed method leverages a novel deep-learning-based framework. Its core is a stylizing kernel predictor that extracts multi-scale feature maps from a 2D style image and predicts a group of stylizing kernels as a highly non-linear combination of the feature maps; each group of stylizing kernels represents a specific style. A volume auto-encoder is jointly trained with the stylizing kernel predictor to transform a density volume into an albedo volume based on these stylizing kernels. Since the auto-encoder does not encode any style information, once trained it can generate different albedo volumes. Additionally, a multi-scale loss function including a histogram loss and a total variation loss is used to learn plausible color features and maintain the fidelity of the stylized volume. Through comprehensive experiments, we validate the effectiveness of the proposed method and show its superiority by comparing it against state-of-the-art methods.
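The unsupervised energy used to train the reconstruction network can be illustrated with a small sketch. The following is a minimal, hypothetical Python version assuming grayscale images stored as nested lists and forward finite differences for the image gradients; the function names and the weight `alpha` are illustrative, not the thesis's actual code, and the auxiliary-feature regularizer mentioned above would simply add a third weighted term.

```python
# Sketch of the unsupervised energy for gradient-domain reconstruction:
# a data fidelity term (match the noisy base image) plus a gradient
# fidelity term (match the sampled gradients). Names/weights are made up.

def forward_gradients(img):
    """Forward-difference gradients (dx, dy) of a 2D grayscale image."""
    h, w = len(img), len(img[0])
    dx = [[img[y][x + 1] - img[y][x] if x + 1 < w else 0.0
           for x in range(w)] for y in range(h)]
    dy = [[img[y + 1][x] - img[y][x] if y + 1 < h else 0.0
           for x in range(w)] for y in range(h)]
    return dx, dy

def l2(a, b):
    """Sum of squared differences between two 2D arrays."""
    return sum((row_a[x] - row_b[x]) ** 2
               for row_a, row_b in zip(a, b) for x in range(len(row_a)))

def energy(recon, noisy, gx, gy, alpha=0.2):
    """Energy = alpha * data fidelity + gradient fidelity.

    recon  : reconstructed image (the network's output)
    noisy  : noisy Monte Carlo base image (data fidelity target)
    gx, gy : sampled image-space gradients (gradient fidelity target)
    """
    rx, ry = forward_gradients(recon)
    return alpha * l2(recon, noisy) + l2(rx, gx) + l2(ry, gy)
```

If the reconstruction equals the noisy image and its finite differences match the sampled gradients, the energy is zero; during training the network output is driven toward a compromise between the two targets, which is why no reference images are needed.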
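The stylizing-kernel idea for participating media can likewise be sketched under strong simplifying assumptions: here the predicted kernels are reduced to per-channel 1×1×1 weights with a bias, whereas the thesis predicts them from a 2D style image with a trained network. All values and names below are hypothetical; the point is only that the same style-agnostic transform restyles one density volume differently per kernel group.

```python
# Sketch: one density-to-albedo transform reused with different
# style-specific "stylizing kernels" (simplified to per-channel RGB
# weights plus a bias). Kernel values here are made up for illustration.

def apply_stylizing_kernels(density, kernels, bias):
    """Map a scalar density volume [z][y][x] to an RGB albedo volume.

    kernels : (kr, kg, kb) per-channel weights (simplified 1x1x1 kernels)
    bias    : (br, bg, bb) per-channel offsets
    """
    def clamp01(v):
        return min(1.0, max(0.0, v))
    return [[[tuple(clamp01(k * d + b) for k, b in zip(kernels, bias))
              for d in row] for row in slab] for slab in density]

density = [[[0.0, 0.5], [1.0, 0.25]]]  # a single 2x2 density slice

# Two hypothetical styles: because the transform encodes no style of its
# own, swapping the kernel group restyles the same density volume.
warm = apply_stylizing_kernels(density, (0.9, 0.5, 0.1), (0.1, 0.1, 0.0))
cool = apply_stylizing_kernels(density, (0.1, 0.4, 0.9), (0.0, 0.1, 0.1))
```

In the actual method the kernels are multi-scale and highly non-linear in the style image's feature maps, and the density-to-albedo mapping is a jointly trained volume auto-encoder rather than a per-voxel affine map, but the division of labor is the same: style lives in the kernels, content in the volume transform.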
Keywords/Search Tags:Deep Learning, Gradient-Domain Rendering, Participating Media, Image Reconstruction, Style Transfer