
Omni-Vision Based Simultaneous Localization And Mapping For Mobile Robot

Posted on: 2009-10-17
Degree: Master
Type: Thesis
Country: China
Candidate: J Y Xu
Full Text: PDF
GTID: 2178360242476669
Subject: Control theory and control engineering
Abstract/Summary:
A mobile robot must be able to explore unknown areas, build a map of the environment, and localize itself within that map if it is to be fully autonomous. Simultaneous localization and mapping (SLAM) is a key technique in autonomous navigation and has become an active research topic in mobile robotics. Although mature SLAM solutions exist for laser and sonar sensors, vision-based SLAM (vSLAM) is widely regarded as having broader applicability and greater research value: visual sensors are inexpensive and capture information-rich images, allowing vSLAM to draw on modern computer vision techniques and high-performance image processing.

Most current vSLAM research relies on stereo or monocular vision with conventional perspective cameras. However, the restricted field of view of perspective cameras limits the continuity of feature tracking and localization. An omni-vision system provides a full 360° panoramic view and has a wide range of applications in robot navigation, video surveillance, and multimedia conferencing. It retains the advantages of conventional perspective cameras while its wide field of view captures rich, complete environment information, compensating well for the above deficiency.

This thesis explores the combination of omni-vision and vSLAM and proposes a systematic solution, termed "Omni-vSLAM". Several difficult problems arise in this work: extracting features from catadioptric images with severe distortion, constructing an effective measurement model, and accurately analyzing system uncertainty. The proposed solution extracts color regions as visual landmarks, then builds the measurement model and localizes the landmarks by analyzing the imaging principle and the localization uncertainty of omni-vision.
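The landmark-localization idea above can be sketched for a central catadioptric omni-camera. This is a minimal sketch, not the thesis's exact model: it assumes the mirror's optical axis is vertical and projects to a known image center `(cx, cy)`, so a landmark's bearing is the polar angle of its image point, and it uses a hypothetical calibration constant `k` that maps radial pixel distance to range.

```python
import math

def landmark_bearing(u, v, cx, cy):
    """Bearing (rad) of a landmark relative to the robot heading,
    assuming a central catadioptric camera whose optical axis
    projects to the image center (cx, cy)."""
    return math.atan2(v - cy, u - cx)

def landmark_range(u, v, cx, cy, k):
    """Rough range estimate from the radial pixel distance of the
    landmark's image point; k is a hypothetical calibration constant
    (not from the thesis) mapping pixel radius to metric distance."""
    radius = math.hypot(u - cx, v - cy)
    return k * radius
```

Real catadioptric mirrors map radius to range non-linearly, so a lookup table or calibrated mirror profile would replace the constant `k` in practice.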
It then updates the robot pose and the map simultaneously with an extended Kalman filter (EKF). To cope with the strong non-linear distortion of catadioptric images and the heavy computational cost of the measurement model, it applies feature filtering and an equivalent transformation of the measurements, respectively, which improve both localization accuracy and algorithmic efficiency.

Finally, this thesis implements the omni-vision-based vSLAM experiment on the self-developed Frontier-II mobile robot. In the experiment, the robot recognizes artificial landmarks of specific colors to build the map and continuously relocalize itself. The experimental results show that the proposed solution is not only feasible and effective but also robust and reliable. In conclusion, omni-vision enables vSLAM to improve the continuity of object tracking, the efficiency of map building, and localization accuracy at the same time.
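The EKF pose update described above can be sketched for a single range-bearing observation of a known landmark. This is a minimal sketch under a generic range-bearing measurement model, with only the robot pose `[x, y, theta]` in the state; the thesis's actual omni-vision measurement model and its joint pose-plus-map state differ.

```python
import numpy as np

def ekf_update(x, P, z, landmark, R):
    """One EKF measurement update for a robot pose x = [px, py, theta]
    observing a known landmark (lx, ly) with reading z = [range, bearing].
    P is the 3x3 pose covariance, R the 2x2 measurement noise."""
    px, py, th = x
    lx, ly = landmark
    dx, dy = lx - px, ly - py
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    # Predicted measurement from the current pose estimate.
    z_hat = np.array([r, np.arctan2(dy, dx) - th])
    # Jacobian of the measurement function w.r.t. the robot pose.
    H = np.array([[-dx / r, -dy / r,  0.0],
                  [ dy / q, -dx / q, -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

In full EKF-SLAM the state vector also stacks every landmark position, so `H` gains non-zero columns for the observed landmark and `P` covers the whole joint state; the update step itself is structurally identical.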
Keywords/Search Tags: mobile robot, Simultaneous Localization and Mapping (SLAM), uncertainty, extended Kalman filter (EKF), omni-vision