Vision-based Mobile Robot Self-localization And Object’s Position And Attitude Measurement

Posted on: 2017-04-16  Degree: Master  Type: Thesis
Country: China  Candidate: J B Feng  Full Text: PDF
GTID: 2308330485482557  Subject: Control engineering
Abstract/Summary:
In recent years, the vision system, as an important subsystem of the robot system, has received more and more attention. In a robot system, the vision system usually plays the role of perception; it is irreplaceable, especially for perceiving the environment and the objects the robot interacts with. In this thesis, vision-based mobile robot self-localization and object position and attitude measurement are studied. The major work is as follows (illustrative sketches of the projection, filtering, and triangulation steps appear after this summary):

Firstly, the thesis analyzes the background and significance of vision-based mobile robot self-localization and object position and attitude measurement, and reviews the current state of industrial manipulators and of vision-based self-localization and pose measurement. The main work and framework of the thesis are then given.

Secondly, the thesis introduces the pinhole camera model, and calibration of the camera's intrinsic and extrinsic parameters is studied.

Thirdly, the thesis presents the process of vision-based mobile robot self-localization. The camera captures an image of an artificial marker encoded with a BCH (Bose-Chaudhuri-Hocquenghem) code. The marker is recognized by its shape and its ID is decoded. The pose of the marker relative to the robot is calculated with the RPP (Robust Planar Pose) algorithm. Given the positions of the markers in the world coordinate system, the position of the mobile robot is estimated with an Extended Kalman Filter.

Fourthly, vision-based object position and attitude measurement is studied. The hand-eye system is calibrated, and a spatial circle is reconstructed from two images taken from different views: each camera and the circle's image in it define an elliptical cone, and the circle is the intersection of the two cones, which lies on a degenerate quadric surface. However, the error in the circle plane's normal vector computed this way is large, so a circle reconstruction algorithm based on the circle constraint is proposed. The algorithm exploits the projection process by which the circle maps to an ellipse to compute the circle's normal vector, while the circle center is reconstructed by binocular stereo vision. The circle plane is then determined from the normal vector and the center, the spatial circle is recovered as the intersection of this plane with an elliptical cone, and the object's position and attitude are obtained from the parameters of the spatial circle.

Finally, conclusions are drawn and recommendations for future work are given.
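As a concrete illustration of the pinhole projection underlying the camera calibration discussed in the second part, the following minimal Python sketch projects a 3-D world point into the image plane. All numeric values (focal lengths, principal point, camera pose) are illustrative assumptions, not calibration results from the thesis.

    import numpy as np

    # Assumed intrinsic parameters (focal lengths and principal point, in pixels).
    fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])          # intrinsic matrix

    # Assumed extrinsic parameters: camera axes aligned with the world frame,
    # scene 1 m in front of the camera along the optical axis.
    R = np.eye(3)
    t = np.array([0.0, 0.0, 1.0])

    X_world = np.array([0.10, 0.05, 0.0])    # a 3-D point in the world frame (metres)
    X_cam = R @ X_world + t                  # world frame -> camera frame
    x_img = K @ X_cam                        # camera frame -> homogeneous pixel coordinates
    u, v = x_img[:2] / x_img[2]              # perspective division
    print(f"pixel coordinates: ({u:.1f}, {v:.1f})")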
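The marker-based self-localization step combines odometry prediction with marker observations in an Extended Kalman Filter. The sketch below shows one predict/update cycle for a planar robot pose (x, y, theta), where the measurement is the marker position expressed in the robot frame, as an RPP-style pose estimate would provide. The state layout, motion model, and noise values are assumptions for illustration, not the thesis's actual filter.

    import numpy as np

    def ekf_predict(x, P, v, w, dt, Q):
        """Propagate the pose (x, y, theta) with a unicycle motion model."""
        px, py, th = x
        x_pred = np.array([px + v * dt * np.cos(th),
                           py + v * dt * np.sin(th),
                           th + w * dt])
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        return x_pred, F @ P @ F.T + Q

    def ekf_update(x, P, z, marker_world, R_meas):
        """Correct the pose with a marker observed in the robot frame."""
        px, py, th = x
        c, s = np.cos(th), np.sin(th)
        dx, dy = marker_world[0] - px, marker_world[1] - py
        h = np.array([ c * dx + s * dy,           # expected marker position
                      -s * dx + c * dy])          # expressed in the robot frame
        H = np.array([[-c, -s, -s * dx + c * dy],
                      [ s, -c, -c * dx - s * dy]])
        S = H @ P @ H.T + R_meas
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - h), (np.eye(3) - K @ H) @ P

    # One illustrative cycle with made-up numbers.
    x, P = np.zeros(3), np.eye(3) * 0.1
    Q, R_meas = np.eye(3) * 0.01, np.eye(2) * 0.02
    x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
    marker_world = np.array([2.0, 1.0])           # known marker position (world frame)
    z = np.array([1.93, 0.98])                    # marker observed from the robot (robot frame)
    x, P = ekf_update(x, P, z, marker_world, R_meas)
    print("estimated pose:", x)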
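In the fourth part the circle center is reconstructed by binocular stereo vision, i.e. by triangulating the center's image in the two views. A standard linear (DLT) triangulation sketch is given below; the projection matrices and pixel coordinates are placeholders, not data from the thesis.

    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation of one point from two views.

        P1, P2 : 3x4 camera projection matrices K [R | t].
        uv1, uv2 : pixel coordinates of the same point in each image.
        """
        u1, v1 = uv1
        u2, v2 = uv2
        A = np.vstack([u1 * P1[2] - P1[0],
                       v1 * P1[2] - P1[1],
                       u2 * P2[2] - P2[0],
                       v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                       # dehomogenize

    # Placeholder projection matrices: identical intrinsics, second camera
    # translated 0.2 m along the x axis (a simple stereo baseline).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

    uv1 = (400.0, 280.0)                          # circle-center image in camera 1
    uv2 = (240.0, 280.0)                          # circle-center image in camera 2
    print("reconstructed centre:", triangulate(P1, P2, uv1, uv2))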
Keywords/Search Tags: Machine Vision, Mobile Robot, Self-localization, Position and Attitude Measurement, Extended Kalman Filter, Circle Reconstruction