
Mixed Reality Framework Based On Visual Positioning

Posted on: 2020-04-14    Degree: Master    Type: Thesis
Country: China    Candidate: X L Gao    Full Text: PDF
GTID: 2428330602951871    Subject: Circuits and Systems
Abstract/Summary:
In recent years, with the development of science and technology, various new forms of human-computer interaction have begun to appear in people's daily lives, and mixed reality technology is expected to change the basic way in which people interact with the outside world. This thesis focuses on a mixed reality framework based on visual positioning. It adopts the feature-point-based ORB-SLAM algorithm as the underlying camera localization algorithm, improves the image feature point handling and camera pose estimation in the visual odometry, implements three-dimensional registration of virtual objects within the mixed reality framework, and completes the virtual-real fusion rendering of the three-dimensional scene. Several commonly used SLAM algorithms and 3D rendering techniques are also studied. The main research results are as follows:

(1) Improved the image feature point extraction and matching algorithm in the ORB-SLAM system. To extract, in real time, feature points that are uniformly distributed, appropriate in number, and comparable in response from the images captured by the camera, this thesis first adopts a multi-threshold FAST corner extraction algorithm: the FAST threshold is adjusted block by block according to the texture richness of each image block, so that the response values of corners from different blocks remain comparable and an ample, uniformly distributed set of FAST corners is extracted. A quadtree-based Shi-Tomasi corner filtering algorithm is then applied to retain an accurate set of corners. In addition, a lookup-table-based similarity measurement shifts much of the computation in the feature point matching process to a precomputed table, which improves the real-time performance of feature matching in the visual odometry.
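As a minimal illustration of the block-wise, multi-threshold FAST idea described above, the following C++/OpenCV sketch derives a per-block FAST threshold from the block's grey-level standard deviation as a stand-in for "texture richness". The block size, threshold bounds, and scaling factor are illustrative assumptions, not the parameters used in the thesis.

```cpp
// Illustrative block-wise FAST extraction with a per-block threshold derived
// from local texture richness (approximated here by the grey-level standard
// deviation of the block). Block size, threshold bounds, and the scaling
// factor are assumptions for illustration, not the thesis's parameters.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <algorithm>
#include <vector>

std::vector<cv::KeyPoint> MultiThresholdFAST(const cv::Mat& gray, int block = 40)
{
    std::vector<cv::KeyPoint> all;
    for (int y = 0; y < gray.rows; y += block) {
        for (int x = 0; x < gray.cols; x += block) {
            cv::Rect cell(x, y,
                          std::min(block, gray.cols - x),
                          std::min(block, gray.rows - y));
            cv::Mat patch = gray(cell);

            // Texture richness proxy: intensity standard deviation of the block.
            cv::Scalar mean, stddev;
            cv::meanStdDev(patch, mean, stddev);

            // Richer texture -> higher threshold, flat regions -> lower threshold,
            // so corner responses stay comparable across blocks.
            int thr = std::max(7, std::min(20, static_cast<int>(stddev[0] * 0.5)));

            std::vector<cv::KeyPoint> kps;
            cv::FAST(patch, kps, thr, /*nonmaxSuppression=*/true);
            for (cv::KeyPoint& kp : kps) {   // shift back to full-image coordinates
                kp.pt.x += static_cast<float>(x);
                kp.pt.y += static_cast<float>(y);
                all.push_back(kp);
            }
        }
    }
    return all;
}
```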
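The lookup-table-based matching can be sketched in a similar spirit. ORB descriptors are 256-bit binary strings compared by Hamming distance, and the sketch below replaces per-bit counting with a precomputed 8-bit population-count table; it illustrates the general lookup-table idea only, and the thesis's actual table layout and similarity measure may differ.

```cpp
// Illustrative lookup-table Hamming distance for 256-bit (32-byte) ORB
// descriptors: per-match bit counting is replaced by lookups into a table of
// byte population counts built once in advance. The table layout is an
// assumption for illustration, not the thesis's exact implementation.
#include <opencv2/core.hpp>
#include <array>
#include <cstdint>

// One-time table: number of set bits for every possible byte value.
static const std::array<uint8_t, 256> kPopcountLUT = [] {
    std::array<uint8_t, 256> t{};
    for (int v = 0; v < 256; ++v) {
        int c = 0;
        for (int b = v; b != 0; b >>= 1) c += b & 1;
        t[v] = static_cast<uint8_t>(c);
    }
    return t;
}();

// Hamming distance between two ORB descriptors stored as 1x32 CV_8U Mat rows.
int HammingLUT(const cv::Mat& a, const cv::Mat& b)
{
    const uint8_t* pa = a.ptr<uint8_t>();
    const uint8_t* pb = b.ptr<uint8_t>();
    int dist = 0;
    for (int i = 0; i < 32; ++i)
        dist += kPopcountLUT[pa[i] ^ pb[i]];  // table lookup instead of a bit loop
    return dist;
}
```

A matcher would then call HammingLUT for every candidate descriptor pair and keep the lowest-distance match, typically subject to a distance threshold or ratio test.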
(2) Improved the camera pose estimation algorithm in the ORB-SLAM system. ORB-SLAM uses the EPnP algorithm to solve the 3D-2D camera pose estimation problem. Because EPnP first computes the coordinates of the three-dimensional space points in the current camera coordinate system and thereby converts the problem into a 3D-3D pose estimation problem, its computational efficiency suffers. The RANSAC-AP3P algorithm adopted in this thesis avoids this conversion: it recovers the three-dimensional motion of the camera directly from the geometric relationship between two-dimensional image point coordinates in image space and three-dimensional point coordinates in world coordinates (see the sketch below). Experimental results show that RANSAC-AP3P outperforms EPnP in real-time performance, while the camera pose error remains within the same order of magnitude.

(3) Combined the visual positioning algorithm with 3D rendering technology to construct a mixed reality framework. The real-time camera pose obtained from the improved ORB-SLAM algorithm drives the virtual-real fusion rendering of the 3D scene, yielding a mixed reality framework based on visual positioning. The three-dimensional model of a virtual object is created in model space, pre-placed in the world coordinate system, transformed into the current camera coordinate system according to the real-time pose of the real camera, and projected into two-dimensional screen space. A depth-buffer-based triangle rasterization technique produces the two-dimensional projection of the three-dimensional model at the current viewpoint, which is then superimposed on the real scene image to present the virtual-real fusion rendering effect.
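For contribution (2), a minimal sketch of RANSAC-AP3P pose estimation can be written on top of OpenCV's solvePnPRansac with the SOLVEPNP_AP3P minimal solver. The RANSAC parameters (iteration count, reprojection threshold, confidence) are illustrative, and the 3D map points, 2D observations, intrinsics, and distortion coefficients are assumed to come from the SLAM front end.

```cpp
// Illustrative 3D-2D pose estimation with AP3P inside RANSAC, written on top
// of OpenCV's solvePnPRansac. The map points, image observations, intrinsic
// matrix K, and distortion coefficients are assumed to come from the SLAM
// front end; the RANSAC parameters below are illustrative, not the thesis's.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

bool EstimatePoseRansacAP3P(const std::vector<cv::Point3f>& mapPoints,   // 3D points, world frame
                            const std::vector<cv::Point2f>& imagePoints, // matched 2D observations
                            const cv::Mat& K, const cv::Mat& distCoeffs,
                            cv::Mat& rvec, cv::Mat& tvec,
                            std::vector<int>& inliers)
{
    // AP3P works directly on the geometry between 2D image points and 3D world
    // points, so no intermediate camera-frame 3D reconstruction (as in EPnP) is
    // needed; RANSAC discards outlier matches and keeps the consensus pose.
    return cv::solvePnPRansac(mapPoints, imagePoints, K, distCoeffs,
                              rvec, tvec,
                              /*useExtrinsicGuess=*/false,
                              /*iterationsCount=*/100,
                              /*reprojectionError=*/3.0f,
                              /*confidence=*/0.99,
                              inliers,
                              cv::SOLVEPNP_AP3P);
}
```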
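For contribution (3), the per-vertex part of the model-to-world-to-camera-to-screen pipeline can be sketched as below. The model placement T_wm, the tracked pose T_cw, and the intrinsic matrix K are assumed inputs, and depth-buffered triangle rasterization itself is omitted; only the coordinate transformation and pinhole projection are shown.

```cpp
// Illustrative per-vertex transformation for the virtual-real fusion pipeline:
// model space -> world (pre-placement T_wm) -> camera (tracked pose T_cw) ->
// pixel coordinates via the pinhole intrinsics K. T_wm, T_cw, and K are
// assumed inputs; depth-buffered triangle rasterization itself is omitted.
#include <opencv2/core.hpp>

cv::Point2f ProjectVertex(const cv::Point3f& p_model,
                          const cv::Matx44f& T_wm,  // model -> world placement
                          const cv::Matx44f& T_cw,  // world -> camera, from ORB-SLAM tracking
                          const cv::Matx33f& K)     // pinhole intrinsic matrix
{
    // Homogeneous coordinates: model -> world -> camera.
    cv::Vec4f p(p_model.x, p_model.y, p_model.z, 1.0f);
    cv::Vec4f p_cam = T_cw * (T_wm * p);

    // Pinhole projection (assumes the point lies in front of the camera,
    // p_cam[2] > 0); p_cam[2] is the depth a z-buffer test would compare
    // before the projected model is overlaid on the real camera image.
    float u = K(0, 0) * p_cam[0] / p_cam[2] + K(0, 2);
    float v = K(1, 1) * p_cam[1] / p_cam[2] + K(1, 2);
    return cv::Point2f(u, v);
}
```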
Keywords/Search Tags: Image Processing, SLAM, Mixed Reality, Visual Positioning