Realistic rendering aims to simulate the complex visual effects of the real world through computer algorithms. High-frequency lighting effects are essential to a convincing sense of realism, but rendering them at high quality usually requires sophisticated computation and long rendering times. Simulating high-frequency lighting effects efficiently is therefore a central problem in rendering research. With the recent development of deep learning, introducing neural networks into rendering has become a popular research topic, owing to their powerful nonlinear representation capabilities and GPU-friendly inference. In this paper, we focus on two widely used approaches for which high-frequency lighting effects are difficult to render efficiently: environment lighting shading with basis functions, and gradient-domain rendering. Building on neural network techniques, we study how to preserve high-frequency lighting effects in both settings more efficiently.

Basis functions are pervasive in realistic rendering, not only because they compactly represent spherical functions, but also because they offer computational properties that simplify complex light transport. However, every commonly used family of basis functions has limitations: under a limited storage budget, none of them can preserve high-frequency lighting details. Researchers have therefore long sought a new set of basis functions with all the desired properties, yet finding such a 'perfect' set appears intractable with traditional mathematical tools. In this paper, we design a set of neural basis functions that are implicitly represented by neural networks and learned in a data-driven manner. Our neural basis functions are capable of all-frequency representation at a very high compression rate (0.39%); wavelets need an order of magnitude more storage to reach the same quality. They also support efficient operations, including spherical rotation, double product integrals, and triple product integrals, and achieve real-time performance thanks to our lightweight computational networks.

Gradient-domain rendering greatly improves the convergence of Monte Carlo rendering by evaluating each pixel's intensity together with the gradients to its neighboring pixels, then performing image reconstruction. However, in gradient-domain volumetric rendering, which simulates natural phenomena such as fog, traditional image reconstruction methods often give unsatisfactory results and fail to recover the high-frequency details hidden inside the volume, because they were designed for surface-based rendering. To improve image reconstruction for gradient-domain volumetric rendering, we propose a new unsupervised-learning reconstruction method based on the characteristics of volumetric rendering. We design a new multi-branch neural network architecture, introduce volume features, and improve the loss function used in previous work. Our method preserves more high-frequency detail in local regions while smoothing noise at the global scale, and achieves the best quality in comparison with existing methods.

Basis functions and gradient-domain rendering are both widely used in rendering. Addressing their shared difficulty in preserving high-frequency signals, this paper proposes two efficient and practical neural-network-based methods that better maintain high-frequency details in rendering.
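The double product integral mentioned above is one of the computational properties that make basis representations attractive: for any orthonormal basis, the integral of a product of two functions reduces to a dot product of their coefficient vectors. A minimal 1D sketch of this identity, using normalized Legendre polynomials on [-1, 1] as a stand-in for a spherical basis (the basis choice and all names here are illustrative, not the paper's neural basis):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Orthonormal Legendre basis on [-1, 1]: phi_n(x) = sqrt((2n+1)/2) * P_n(x)
N = 8                 # number of basis functions (illustrative)
x, w = leggauss(64)   # Gauss-Legendre quadrature nodes and weights

def basis(n, x):
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt((2 * n + 1) / 2) * legval(x, c)

def project(f):
    # Coefficients c_n = integral of f(x) * phi_n(x) over [-1, 1]
    return np.array([np.sum(w * f(x) * basis(n, x)) for n in range(N)])

f = lambda t: t**2
g = lambda t: t**2 + 1.0

cf, cg = project(f), project(g)
direct = np.sum(w * f(x) * g(x))   # double product integral by quadrature
via_coeffs = cf @ cg               # dot product of coefficient vectors
print(direct, via_coeffs)          # both ≈ 16/15, since f and g lie in the basis span
```

The same reduction is what lets precomputed-radiance-transfer-style shading evaluate lighting-times-transfer integrals per pixel at interactive rates; the triple product integral generalizes this but requires basis-dependent tripling coefficients.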
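The reconstruction step in gradient-domain rendering combines noisy per-pixel intensity estimates with noisy gradient estimates, classically by solving a screened Poisson problem in the least-squares sense. The sketch below shows that classical baseline in 1D (not the learned multi-branch reconstruction proposed in this paper); the function name, `alpha` weight, and toy signal are assumptions for illustration:

```python
import numpy as np

def screened_poisson_1d(primal, grad, alpha=1.0):
    """Reconstruct a signal u from noisy pixel estimates (primal) and
    noisy forward-difference gradient estimates (grad) by solving
    min_u ||u - primal||^2 + alpha * ||D u - grad||^2 in least squares."""
    n = len(primal)
    # Forward-difference operator D of shape (n-1, n): (D u)[i] = u[i+1] - u[i]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    A = np.vstack([np.eye(n), np.sqrt(alpha) * D])
    b = np.concatenate([primal, np.sqrt(alpha) * grad])
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

# Toy example: ramp signal, noisy primal estimates, clean gradients
rng = np.random.default_rng(0)
true = np.linspace(0.0, 1.0, 32)
primal = true + 0.1 * rng.standard_normal(32)
grad = np.diff(true)                         # exact gradients
recon = screened_poisson_1d(primal, grad, alpha=50.0)
print(np.mean((recon - true) ** 2) < np.mean((primal - true) ** 2))
```

Because the difference operator annihilates constants, gradients alone leave the overall offset undetermined; the primal term anchors it. Surface-oriented reconstructions of this kind smooth away in-volume detail, which is the failure mode our volumetric method targets.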