
An Illumination Estimation Method Based On Monocular Image For Augmented Reality System

Posted on: 2019-06-02
Degree: Master
Type: Thesis
Country: China
Candidate: L Chen
Full Text: PDF
GTID: 2428330545463336
Subject: Computer application technology
Abstract/Summary:
Augmented Reality (AR) is a new human-computer interaction technology. With the help of computer technology, it adds virtual information to reality. The virtual-real fused display enhances the human sensory experience, so that users can deepen their understanding of the real world. At present, Augmented Reality technology plays an important role in entertainment, medicine, construction, education, industry, the military and other fields. One of its key features is the mixture of the virtual and the real: the rendered virtual objects and the real environment achieve a matched, fused display with spatio-temporal consistency in visual effect, specifically including fusion at multiple levels such as content, spatial geometry, time and lighting.

State-of-the-art Augmented Reality systems focus on solving the spatial geometric consistency of virtual and real objects. However, there is still no systematic technical solution to the problem of illumination consistency, which has an even more significant effect on visual quality. In an ideal AR system, the virtual objects and the real objects share the same real illumination. Therefore, correct estimation of the real lighting conditions helps achieve illumination consistency between virtual and real objects, which improves the realism of the virtual objects. At present, there is little research on illumination estimation; most methods for obtaining illumination parameters rely on measurement, and illumination estimation methods for monocular images are lacking. Until now, the problem of illumination consistency in AR systems has not been well solved, and complete, mature illumination estimation frameworks or systems based on monocular images are missing in particular. Aiming at this problem, an illumination estimation method based on a monocular image is studied in this paper. Moreover, combining this method, an Augmented Reality system with illumination consistency is proposed at the end of this
paper.

Firstly, a light source detection and segmentation algorithm based on deep learning is proposed. The purpose of this algorithm is to analyze and understand the light sources present in the real scene and to segment them in the image. First, a fully convolutional neural network is constructed. Using deconvolution layers, the feature maps of different convolution layers are fused to obtain a more robust output. Finally, the light source analysis result is passed to a fully connected conditional random field for optimization. The light source analysis algorithm accepts input images of any scale. The output not only retains the spatial information of the image but also recovers finer details of the light sources. The prediction for each pixel is obtained end to end, which provides a basis for research on illumination consistency frameworks in Augmented Reality systems.

Secondly, a monocular depth estimation method based on a multi-scale neural network is proposed. The global structure of the scene is first estimated by a coarse-scale network; a fine-scale network then refines the estimated depth map using the local features of the lower layers. During training, the whole network considers the difference between the estimated depth and the ground truth at each pixel to recover the depth information of the scene. Experimental results show that the proposed depth estimation algorithm has a smaller average estimation error and clearer transitions between different objects. It achieves real-time estimation of the depth map from a monocular image without any prior, which benefits the study of 3D light source reconstruction for Augmented Reality systems.

Finally, an illumination consistency framework for AR systems is developed. As a two-dimensional picture lacks three-dimensional information, this part first reconstructs the 3D light sources in the
real world from their 2D positions and the corresponding depth information, together with the camera intrinsic parameters. Secondly, according to different edge types classified by their photometric characteristics, an improved iteratively weighted Grey-Edge algorithm is applied to estimate the illumination chromaticity. By combining the inverse of the camera response curve, the light source brightness can be calculated. Next, the complete illumination parameters are used to illuminate the virtual object and realize the rendering. Finally, the illumination consistency is verified on an AR device, the Microsoft HoloLens.

The illumination estimation method proposed in this paper needs no additional markers or special photographic equipment, nor any geometric prior of the scene. It can reconstruct in 3D, and recover the illumination intensity of, multiple light sources of multiple kinds in the scene from a single monocular image. The time complexity of this method basically meets the requirement of real-time interaction, so it is suitable for dynamic scenes. This provides a convenient, practical and reliable illumination estimation method for Augmented Reality systems. The research achievements of this paper enrich the virtual-real fusion methods of Augmented Reality systems. The constructed illumination consistency framework improves the realism of virtual objects superimposed on the real world; that is, the rendering quality of the composite virtual-real scene is effectively improved.
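The fusion of feature maps from different convolution layers, as described for the light source segmentation network, can be sketched as follows. This is a minimal numpy illustration of the idea (upsampling a coarse, semantically rich map and combining it element-wise with a finer, higher-resolution map), not the thesis's actual network; the shapes and the nearest-neighbour upsampling are illustrative stand-ins for a learned deconvolution layer.

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map,
    standing in for a learned deconvolution (transposed convolution)."""
    return np.repeat(np.repeat(fmap, 2, axis=0), 2, axis=1)

def fuse(coarse, fine):
    """FCN-style skip fusion: upsample the coarse (H/2, W/2, C) map and
    add it element-wise to the fine (H, W, C) map."""
    up = upsample2x(coarse)
    assert up.shape == fine.shape
    return up + fine

# Example: a 4x4 single-channel fine map fused with a 2x2 coarse map.
fine = np.ones((4, 4, 1))
coarse = np.arange(4, dtype=float).reshape(2, 2, 1)
fused = fuse(coarse, fine)
print(fused.shape)  # (4, 4, 1)
```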
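The reconstruction of a 3D light source from its 2D image position, estimated depth and the camera intrinsics follows standard pinhole back-projection. A sketch (the intrinsic values and the pixel coordinates below are made up for illustration):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth `depth` into camera coordinates
    using pinhole intrinsics (focal lengths fx, fy; principal point cx, cy).
    Returns the 3D point (X, Y, Z) in the camera frame."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    Z = depth
    return np.array([X, Y, Z])

# Hypothetical intrinsics and a detected light-source pixel at depth 2 m.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
p = backproject(420.0, 140.0, 2.0, fx, fy, cx, cy)
print(p)  # [ 0.4 -0.4  2. ]
```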
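The Grey-Edge hypothesis estimates the illuminant chromaticity from the Minkowski norm of the image's spatial derivatives. The thesis's improved, edge-type-weighted iterative variant is not reproduced here; the sketch below is only the basic, unweighted Grey-Edge estimator on which it builds.

```python
import numpy as np

def grey_edge(img, p=6):
    """Basic Grey-Edge illuminant estimation: the illuminant colour is taken
    proportional to the Minkowski p-norm of the spatial derivatives of each
    colour channel. img: float array of shape (H, W, 3); returns a unit-norm
    RGB estimate of the illuminant."""
    e = np.zeros(3)
    for c in range(3):
        gy, gx = np.gradient(img[:, :, c])
        mag = np.sqrt(gx ** 2 + gy ** 2)
        e[c] = (mag ** p).mean() ** (1.0 / p)
    return e / np.linalg.norm(e)

# Synthetic check: a grey texture tinted by a reddish illuminant should
# yield an estimate proportional to that illuminant.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
illum = np.array([0.8, 0.5, 0.3])
img = base[:, :, None] * illum[None, None, :]
print(grey_edge(img))  # ≈ illum / ||illum||
```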
Keywords/Search Tags:Augmented Reality, Illumination Consistency, Light Source Detection, Depth Estimation