
Omni-vision Based Simultaneous Localization And Mapping Research For Mobile Robot

Posted on: 2012-05-11    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y B Wu    Full Text: PDF
GTID: 1268330377459277    Subject: Control theory and control engineering
Abstract/Summary:
If a mobile robot relies only on odometry to estimate its location in an unknown environment, the accumulated error grows ever larger over time. To localize the robot accurately, external sensors such as vision or laser sensors must be used to extract landmarks from the surrounding environment, build a map from those landmarks, and use the map to correct the robot's location estimate. Conversely, to build an accurate map of the environment, the robot must know its exact location. This coupled problem is called simultaneous localization and mapping (SLAM); it encompasses robot localization, feature extraction, and map building, and requires breakthroughs in three aspects: real-time performance, robustness, and accuracy.

Because vision sensors provide abundant information with short sampling periods, they have been widely applied to mobile robot navigation in recent years. Current vision-based SLAM technology is mainly built on conventional vision sensors. However, a conventional vision sensor has a narrow field of view: it can observe only about a 60° range of the environment in the forward direction, so its ability to continuously observe and track visual landmarks is limited. An omni-vision sensor has a 360° sensing range, so visual landmarks stay longer within its field of view, which enhances the ability to continuously observe and track them. In this paper, an omni-vision based simultaneous localization and mapping method for mobile robots is studied.

First, the SLAM perception model of the mobile robot is established. According to the imaging principle of the omni-vision system, the projected location of each visual landmark on the ground plane is obtained.
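The ground-plane projection step can be sketched as follows. This is a minimal illustration only: it assumes a calibrated omni-vision sensor mounted at a known height above the ground, whose calibration maps an image point to an azimuth angle and a depression angle below the horizontal. The function name and parameters are hypothetical, not taken from the dissertation.

```python
import math

def ground_projection(azimuth, depression, sensor_height):
    """Project a visual landmark seen by an omni-vision sensor onto the
    ground plane, in the robot-centred coordinate frame.

    azimuth    -- bearing of the viewing ray, in radians
    depression -- angle of the viewing ray below the horizontal, in radians
    sensor_height -- height of the sensor's viewpoint above the ground
    """
    if depression <= 0:
        # A ray at or above the horizon never meets the ground plane.
        raise ValueError("ray does not intersect the ground plane")
    # Horizontal range to the point where the ray meets the ground.
    r = sensor_height / math.tan(depression)
    return (r * math.cos(azimuth), r * math.sin(azimuth))

x, y = ground_projection(math.radians(30), math.radians(20), 0.8)
```

Once two such observations of the same landmark are available from two robot poses, the parallax-based stereo step described below recovers its global position.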
Using the three-dimensional measurement method of binocular stereo vision based on the parallax principle, together with the robot poses at which the two omnidirectional images were captured, the landmark location in the global coordinate system is obtained.

Second, the feature matching criteria are improved. The original matching algorithm produces a large number of incorrect matches, so it is improved as follows: 1) if two or more feature points extracted from one omnidirectional image are matched to the same feature point extracted from the other omnidirectional image, the Euclidean distances of their feature descriptors are compared, only the matching pair with the minimum descriptor distance is kept, and the other candidate matches are deleted; 2) angular and length constraints are used to remove the remaining mismatches: the absolute angle change of each successfully matched feature pair between the two image coordinate systems is compared with the average absolute angle change of all matched pairs, and any pair that differs strongly from the average is removed; likewise, the average Euclidean distance of all successfully matched pairs is computed, and any pair that differs strongly from the average is removed. Experimental results show that the improved algorithm raises the matching accuracy, eliminates the influence of mismatches on SLAM, and enhances the robustness of the SLAM system.

Third, by combining the omni-vision based feature extraction method with the EKF and FastSLAM algorithms, an omni-vision based simultaneous localization and mapping algorithm is introduced. The presented method uses an improved SURF algorithm to extract visual landmarks, localizes the landmark positions according to the observation model, and then updates the robot pose and map information with the EKF or FastSLAM algorithm.
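The two-stage match filter described above can be sketched as follows. The data layout, threshold names, and threshold values are assumptions for illustration; the dissertation does not specify them.

```python
import math

def filter_matches(matches, angle_tol=0.3, length_tol=0.5):
    """Remove mismatches between two omnidirectional images.

    matches: list of dicts with keys
       'p1', 'p2'  -- (x, y) feature coordinates in the two images
       'idx2'      -- index of the matched feature in the second image
       'desc_dist' -- Euclidean distance between the SURF descriptors
    Returns the matches surviving the uniqueness, angle and length tests.
    """
    # 1) If several features in image 1 match the same feature in image 2,
    #    keep only the pair with the minimum descriptor distance.
    best = {}
    for m in matches:
        k = m['idx2']
        if k not in best or m['desc_dist'] < best[k]['desc_dist']:
            best[k] = m
    unique = list(best.values())

    # 2) Angle and length consistency: compare each pair's absolute angle
    #    change and displacement length with the averages over all pairs.
    def angle(m):
        return math.atan2(m['p2'][1] - m['p1'][1], m['p2'][0] - m['p1'][0])

    def length(m):
        return math.hypot(m['p2'][0] - m['p1'][0], m['p2'][1] - m['p1'][1])

    mean_angle = sum(abs(angle(m)) for m in unique) / len(unique)
    mean_len = sum(length(m) for m in unique) / len(unique)
    return [m for m in unique
            if abs(abs(angle(m)) - mean_angle) <= angle_tol
            and abs(length(m) - mean_len) <= length_tol * mean_len]
```

A pair whose displacement angle or length deviates strongly from the consensus of all matches is discarded, which is the intuition behind both constraints in the improved criteria.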
Simulation results show the superiority of the omni-vision sensor over the conventional vision sensor, and the feasibility of the algorithm is verified in a real robot experiment designed in this paper.

Finally, a feature map database is established. As time goes on, the database holds more and more features, so matching the feature points of the current image against the whole database becomes very time-consuming; this makes real-time feature matching difficult and can even lead to a computational explosion. Therefore, the feature map database is structured into sub-maps, and a cost function is used to select one sub-map to match against the current panoramic image. This ensures that the robot can obtain enough visual landmarks while enhancing the real-time performance of SLAM.
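The sub-map selection step could look like the sketch below. The abstract does not define the cost function, so the form used here (a weighted trade-off between the robot's distance to a sub-map's centre and the number of features that would have to be matched) and all names and weights are assumptions for illustration.

```python
import math

def select_submap(submaps, robot_pose, w_dist=1.0, w_size=0.01):
    """Pick the sub-map to match against the current panoramic image.

    submaps: list of dicts with 'center' (x, y) and 'features' (list)
    robot_pose: (x, y, theta) in the global frame
    The cost prefers sub-maps near the robot (likely to contain visible
    landmarks) and penalises large sub-maps (expensive to match).
    """
    rx, ry, _ = robot_pose

    def cost(sm):
        d = math.hypot(sm['center'][0] - rx, sm['center'][1] - ry)
        return w_dist * d + w_size * len(sm['features'])

    return min(submaps, key=cost)
```

Restricting matching to a single selected sub-map keeps the per-frame matching cost roughly constant instead of growing with the total map size.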
Keywords/Search Tags: mobile robot, extended Kalman filter, simultaneous localization and mapping, particle filter, omnidirectional vision, feature extraction