
The Study Of Real-time Fusion Of Multi-video And 3D Scene

Posted on: 2017-12-11
Degree: Master
Type: Thesis
Country: China
Candidate: Y F Shi
Full Text: PDF
GTID: 2370330518990095
Subject: Cartography and Geographic Information System
Abstract/Summary:
With the development of computer hardware and software and the growth of 3D processing capability, virtual reality has advanced rapidly in recent years. More and more related technologies now touch people's daily lives, and the field has been pushed forward from both the virtual and the real side; video-augmented virtual environments are an important research direction within it.

Traditional video-augmented virtual environment techniques usually embed videos into the 3D scene simply as billboards, without considering the spatial position and orientation of the videos. The cameras and their videos therefore lack geographic information support, and the rendered fusion is of poor quality. Moreover, compared with an ordinary 3D scene, a video-augmented virtual environment must load both 3D models and video data simultaneously, which consumes more system resources; the display quality easily degrades and frames are dropped, seriously affecting the fluency and realism of the scene.

With the emergence and development of video GIS, the spatial information of videos has become increasingly important. Based on geospatial information, we implement real-time fusion and display of multiple videos with a 3D scene. This approach not only retains the advantages of augmented virtuality but also provides accurate and rich geographic information, making the scene easier to observe and understand. To address the display problem, we optimize the way videos are loaded and improve the speed and efficiency of scene display. Our research and contributions are as follows:

(1) Real-time fusion of multiple videos with a 3D scene. From the internal and external parameters of each camera, we construct a frustum that represents the camera's visible range. Following the principle of the shadow-map algorithm, we determine the occlusion relations of buildings to compute the actual video coverage. We then use projective texture mapping and shader programs to map the video texture, and update the videos and the scene together to achieve real-time fusion. On this basis, real-time fusion of multiple videos is implemented using multi-pass rendering.

(2) Optimization methods that improve the display efficiency of the fused scene. We build an octree spatial index over the cameras, filter cameras against the view frustum using this index, and clip cameras and their videos that fall entirely outside the view. For cameras intersecting the view, we rasterize the camera frustum, compute the proportion of visible pixels in screen space, and decide whether to clip the camera and its video according to that proportion.

(3) An application system and comparative experiments. On the basis of the theoretical work, we design and develop a real-time multi-video and 3D scene fusion system and demonstrate the fusion results in practical application scenarios. Comparative experiments verify the effectiveness of the optimization methods. The results show that, compared with the 3D models, video data is the major factor limiting the speed and efficiency of scene display: the more videos loaded into the scene, the lower the frame rate. To address this, we optimize the way videos are loaded. The contrast experiments show that view-dependent camera-index filtering combined with fine spatial clipping considerably improves the fluency and realism of the scene without affecting display quality. The optimization achieves the expected goal and ensures the practicality and effectiveness of the real-time fusion system.
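The frustum construction in contribution (1) can be sketched as follows. This is a minimal illustration, not code from the thesis: it assumes a pinhole intrinsic matrix K and a world-to-camera pose (R, t), and back-projects the four image corners to the near and far clip planes to obtain the eight world-space frustum corners.

```python
import numpy as np

def frustum_corners(K, R, t, width, height, near, far):
    """Compute the 8 world-space corners of a camera's view frustum.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and translation
    (x_cam = R @ x_world + t); near/far: clip distances along the view axis.
    """
    K_inv = np.linalg.inv(K)
    # The four image corners in homogeneous pixel coordinates.
    pix = np.array([[0, 0, 1], [width, 0, 1],
                    [width, height, 1], [0, height, 1]], dtype=float).T
    rays = K_inv @ pix                      # back-projected rays at depth z = 1
    corners = []
    for d in (near, far):                   # scale rays onto the two clip planes
        cam_pts = rays * d                  # corner points on the near/far plane
        world = R.T @ (cam_pts - t.reshape(3, 1))  # camera frame -> world frame
        corners.append(world.T)
    return np.vstack(corners)               # shape (8, 3): near corners, then far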
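The shadow-map-style visibility test combined with projective texture mapping can be illustrated on the CPU side. The sketch below is an assumption-laden simplification of what a fragment shader would do: it projects a world point through the video camera's view-projection matrix, converts the result to texture coordinates, and compares the point's depth against a depth map rendered from the camera. The function name, the epsilon bias, and the depth convention are illustrative choices, not taken from the thesis (real shaders also typically flip the v axis and use a bias matrix).

```python
import numpy as np

def sample_video_texture(world_pt, view_proj, depth_map, eps=1e-3):
    """Decide whether a surface point receives the projected video texture.

    world_pt: 3-vector; view_proj: 4x4 view-projection matrix of the video
    camera; depth_map: HxW depth buffer rendered from that camera (the
    shadow map). Returns (u, v) texture coordinates in [0, 1] if the point
    lies inside the camera frustum and is not occluded, else None.
    """
    p = view_proj @ np.append(world_pt, 1.0)
    if p[3] <= 0:                        # behind the camera
        return None
    ndc = p[:3] / p[3]                   # normalised device coordinates
    if np.any(np.abs(ndc) > 1):          # outside the camera frustum
        return None
    u, v = (ndc[0] + 1) / 2, (ndc[1] + 1) / 2   # NDC -> texture coordinates
    h, w = depth_map.shape
    px = min(int(u * w), w - 1)
    py = min(int(v * h), h - 1)
    depth = (ndc[2] + 1) / 2             # depth in [0, 1], matching the map
    if depth > depth_map[py, px] + eps:  # occluded by nearer geometry
        return None
    return u, v
```

Points that pass both tests sample the current video frame at (u, v); all other fragments keep the scene's base texture, which is exactly how the shadow map separates the actual video coverage from occluded surfaces.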
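The fine clipping step of contribution (2) can also be sketched. As a simplification of rasterizing the whole frustum, the code below approximates a camera's screen-space footprint by its projected bounding box, computes the fraction of that box inside the viewport, and applies a hypothetical threshold to decide whether the camera and its video are worth loading; the threshold value and function names are illustrative assumptions.

```python
def visible_pixel_proportion(footprint_min, footprint_max, screen_w, screen_h):
    """Approximate a camera frustum's screen-space visibility.

    footprint_min / footprint_max: (x, y) corners of the frustum's projected
    bounding box in pixels. Returns the fraction of that box lying inside the
    viewport; 0.0 means the camera is entirely off-screen and can be clipped.
    """
    x0, y0 = footprint_min
    x1, y1 = footprint_max
    area = max(x1 - x0, 0) * max(y1 - y0, 0)
    if area == 0:
        return 0.0
    # Intersect the footprint with the viewport rectangle.
    ix0, iy0 = max(x0, 0), max(y0, 0)
    ix1, iy1 = min(x1, screen_w), min(y1, screen_h)
    inter = max(ix1 - ix0, 0) * max(iy1 - iy0, 0)
    return inter / area

def should_clip(proportion, threshold=0.05):
    """Hypothetical rule: skip cameras whose visible share is negligible."""
    return proportion < threshold
```

In the full pipeline this per-camera test runs only on the candidates that survive the coarse octree filter, so the expensive video decoding and texture upload are reserved for cameras that actually contribute visible pixels.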
Keywords: Virtual Geographic Environment, Video Fusion, Spatial Clipping, Spatial Index