
Research And Implementation Of Video Fusion Over Full Space

Posted on: 2012-04-26
Degree: Master
Type: Thesis
Country: China
Candidate: M Chen
Full Text: PDF
GTID: 2298330335968611
Subject: Education Technology
Abstract/Summary:
With the development of computer graphics, digital image processing, and computer vision, creating virtual environments has become easy and efficient. At the same time, owing to the declining production cost of interaction and display devices, Augmented Reality has been widely applied in many fields, including medical treatment, machinery, annotation and visualization, robotics, entertainment, and the military.

In practical Augmented Reality applications, we usually build a 3D model of a large-scale scene over full space. Such virtual environments provide users with immersion, imagination, and interaction during walkthrough. However, a virtual environment is not equal to the real world; in particular, its scenes are not updated frequently. To increase the authenticity and dynamism of the environment, we can integrate videos from the real world into the virtual environment, through which we can observe what events are occurring in the region covered by the model. In our experiments, the videos are captured by cameras with fixed PTZ settings, mounted along roads, on buildings, or in other places. The quality of the fusion of video and the 3D model directly determines the user experience, so the fusion of video and 3D environments is an issue worthy of study.

The technology for fusing video with a 3D model consists of the following parts: first, the principle and implementation of extracting straight lines from video based on the structural characteristics of the scene; second, the principle and implementation of camera calibration based on 2D-to-3D line or point correspondences; third, the principle and implementation of projective texture mapping; and finally, the design and implementation of a prototype system for the fusion of video and models. The system was developed on Microsoft Visual C++ 6.0, using the Multigen Paradigm Vega 3.7 application programming interface, the Intel Open Source Computer Vision Library (OpenCV) 1.0, and the Open Graphics Library (OpenGL) 1.0. We achieve the goal that a video sequence captured by a single camera is the input of the system, and the output is the rendered display of the fusion of that video sequence with the 3D models.
Keywords/Search Tags:Augmented Reality, Fusion of video and models over full space, Line extraction, Camera calibration, Projective texture mapping