
Research On Indoor Positioning Algorithm Based On Information Fusion Of Inertial And Vision Sensors

Posted on: 2020-01-26
Degree: Master
Type: Thesis
Country: China
Candidate: N N Li
Full Text: PDF
GTID: 2518306518964799
Subject: Information and Communication Engineering
Abstract/Summary:
With the growing demand for location-based services, indoor positioning has become a research focus for scholars worldwide, and studies have shown that multi-sensor information fusion can improve positioning accuracy. Considering the development trends in indoor positioning and the respective advantages and disadvantages of visual and inertial positioning, this paper proposes the following three fusion positioning algorithms, each built on a different fusion structure:

The first algorithm uses an extreme learning machine (ELM) to fuse inertial and visual information for indoor positioning. In the visual positioning method based on a single-layer ELM, image blur detection is introduced to address the large errors that arise when the captured images are blurred. At the same time, the inertial information is corrected by static visual feedback. Based on the characteristics of the inertial data, a zero-velocity correction method with joint constraints is proposed to effectively control the error accumulation of the inertial positioning system. A second ELM layer then fuses the visual positioning results produced by the single-layer ELM with the zero-velocity-corrected inertial positioning results, and the fused results are compared with the improved inertial and visual positioning results. Experiments show that the proposed method outperforms both the improved inertial positioning method and the visual positioning method, improving both positioning accuracy and stability.

The second algorithm is a sub-regional indoor positioning method based on a multi-layer particle swarm optimization ELM (PSO-ELM). On top of the first-layer PSO-ELM regression model, a second-layer classification model and a third-layer regression model are added. The algorithm divides the whole area into a normal region and a corner region, and separately trains the corner region, where large positioning errors occur. This effectively reduces the positioning error in the corner region and improves the overall positioning accuracy. Comparison experiments show that the proposed algorithm maintains good positioning accuracy and strong robustness even under external interference.

The third algorithm is based on an online sequential extreme learning machine (OS-ELM) and interval-valued intuitionistic fuzzy multi-attribute decision making, and adopts a centralized fusion scheme. The collected inertial and visual sensor data are preprocessed into feature vectors, and a training set containing the feature vectors and target output positions is built. An initial positioning model is obtained by training the OS-ELM on these samples; by sequentially learning new data with different matching step sizes, OS-ELM positioning models adapted to the various step sizes are established. In the positioning phase, the matching step size is adjusted adaptively according to the feature point matching results, and the relative displacement between frames is computed. The interval-valued intuitionistic fuzzy multi-attribute decision algorithm determines the final displacement for each frame, and a turning detection algorithm is introduced to reduce positioning errors during turns. Experimental results show that the algorithm has clear advantages in time efficiency, and its positioning accuracy meets the demands of most location-based services.
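To illustrate the two-layer fusion structure described for the first algorithm, the sketch below implements a basic ELM regressor and uses it twice: a first layer maps visual features to a visual position estimate, and a second layer fuses that estimate with a zero-velocity-corrected inertial estimate. This is a minimal sketch under stated assumptions, not the thesis implementation: the feature dimensions, the simulated data, and names such as ELMRegressor, visual_feat, and inertial_est are illustrative.

```python
# Minimal two-layer ELM fusion sketch (illustrative data shapes, not the thesis code).
import numpy as np


class ELMRegressor:
    """Basic extreme learning machine: random hidden layer + least-squares output weights."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        n_features = X.shape[1]
        # Hidden-layer weights and biases are drawn randomly and never trained.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights are solved in closed form via the Moore-Penrose pseudo-inverse.
        self.beta = np.linalg.pinv(H) @ Y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activation


# ---- Training phase (hypothetical data) --------------------------------------
n_train = 500
visual_feat = np.random.rand(n_train, 32)   # image feature vectors (assumed dimension)
true_pos = np.random.rand(n_train, 2)       # 2-D reference positions

# Layer 1: visual features -> visual position estimate.
elm_visual = ELMRegressor(n_hidden=128).fit(visual_feat, true_pos)
visual_est = elm_visual.predict(visual_feat)

# The zero-velocity-corrected inertial estimates would come from the inertial
# pipeline; here they are simulated as the true positions plus drift-like noise.
inertial_est = true_pos + np.random.normal(scale=0.3, size=true_pos.shape)

# Layer 2: fuse the visual and inertial position estimates into the final position.
fusion_in = np.hstack([visual_est, inertial_est])
elm_fusion = ELMRegressor(n_hidden=64, seed=1).fit(fusion_in, true_pos)

# ---- Positioning phase --------------------------------------------------------
fused_pos = elm_fusion.predict(fusion_in[:5])
print(fused_pos)
```

In this arrangement the second-layer ELM learns how much to trust each source: when the visual estimate degrades (e.g. blurred images) the fused output can lean on the inertial estimate, and vice versa, which mirrors the complementary roles the abstract assigns to the two sensors.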
Keywords/Search Tags: Indoor Positioning, Information Fusion, Inertial Data, Visual Feature