
Research On Intelligent Car Localization Based On Multi-information Fusion

Posted on: 2022-05-24
Degree: Master
Type: Thesis
Country: China
Candidate: S Li
Full Text: PDF
GTID: 2518306512963699
Subject: Master of Engineering

Abstract/Summary:
Visual SLAM (Simultaneous Localization and Mapping) is widely used for autonomous robot localization and mapping. However, under illumination changes, texture changes, and fast motion, visual SLAM produces inaccurate pose estimates. An IMU sensor is therefore introduced into the visual SLAM system to improve localization accuracy and robustness. Even so, if the visual information fails, the IMU drifts rapidly, and when the mobile robot moves at constant velocity or undergoes pure rotation, the visual-inertial odometry cannot obtain sufficient acceleration excitation, which produces four unobservable directions and an inaccurate scale factor. A wheel odometer is introduced to address the weak observability that arises when visual-inertial odometry is transplanted onto a mobile robot, as well as the IMU drift caused by visual failure. The specific research is as follows.

First, a hardware platform for an intelligent-car mobile robot with a monocular camera, IMU, wheel odometer, and lidar is built. The monocular odometry estimates the camera pose in real time with the sparse direct method (semi-direct visual odometry, SVO), and the monocular scale is recovered through the IMU to output position information. The position from the monocular camera, the acceleration and yaw angle from the IMU, and the velocity from the wheel odometer are fused in a loosely coupled manner by an Extended Kalman Filter. Experiments under illumination changes verify the accuracy of the monocular vision algorithm and the localization accuracy of the intelligent car.

Second, the Extended Kalman Filter scale-recovery method is improved, and a visual-inertial odometry based on the semi-direct method for monocular vision is completed. The accuracy of the improved algorithm is verified on the EuRoC dataset. The coupling of vision, IMU, and wheel odometer runs on a Raspberry Pi, which resolves the unobservable directions and weakly observable scale of the visual-inertial navigation system (VINS).

Finally, in an indoor environment, the intelligent car with the hybrid structure collects data about the surrounding environment, and the accuracy of the multi-information fusion algorithm is verified. The experimental results show that the improved multi-sensor fusion algorithm can still localize accurately after visual failure, and is more effective than the Extended Kalman Filter multi-sensor loosely coupled algorithm.
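The abstract does not give the filter equations, so the following is only a minimal sketch of a loosely coupled Extended Kalman Filter of the kind described above, assuming a planar state (x, y, yaw), a prediction step driven by wheel-odometer speed and IMU yaw rate, and an update step from the visual pose estimate. All class, function, and parameter names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

class LooselyCoupledEKF:
    """Minimal planar EKF with state [x, y, yaw].

    Prediction uses wheel-odometer speed and IMU yaw rate;
    the update corrects the state with a visual pose estimate.
    """

    def __init__(self):
        self.x = np.zeros(3)                  # state: [x, y, yaw]
        self.P = np.eye(3) * 0.1              # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])  # process noise (per step, assumed)
        self.R = np.diag([0.05, 0.05, 0.02])  # visual measurement noise (assumed)

    def predict(self, v, omega, dt):
        """Propagate with a unicycle model (v from wheel odometer, omega from IMU)."""
        x, y, yaw = self.x
        self.x = np.array([
            x + v * np.cos(yaw) * dt,
            y + v * np.sin(yaw) * dt,
            wrap_angle(yaw + omega * dt),
        ])
        # Jacobian of the motion model with respect to the state
        F = np.array([
            [1.0, 0.0, -v * np.sin(yaw) * dt],
            [0.0, 1.0,  v * np.cos(yaw) * dt],
            [0.0, 0.0,  1.0],
        ])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        """Correct with a visual pose measurement z = [x, y, yaw]."""
        H = np.eye(3)                      # the measurement observes the full state
        r = z - H @ self.x                 # innovation
        r[2] = wrap_angle(r[2])            # keep the yaw residual bounded
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ r
        self.x[2] = wrap_angle(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P

# Illustrative usage: predict at the wheel/IMU rate, update when a visual pose arrives.
ekf = LooselyCoupledEKF()
ekf.predict(v=0.3, omega=0.05, dt=0.02)
ekf.update(z=np.array([0.006, 0.0, 0.001]))
print(ekf.x)
```

In a loosely coupled scheme like the one the abstract describes, the visual front end, IMU, and wheel odometer each produce their own estimates, and only these outputs are fused in the filter; this keeps the fusion simple, but it is also why the filter can keep localizing on wheel/IMU prediction alone when the visual estimate fails.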
Keywords/Search Tags:Multi-Information fusion, Vision SLAM, Pose estimation, Vision inertial system, Mobile robot localization