Autonomous Localization of a Robot Based on Three-Dimensional Scene Perception Integrating Ultrasound and Kinect

Posted on: 2019-03-20
Degree: Master
Type: Thesis
Country: China
Candidate: X Fan
Full Text: PDF
GTID: 2348330545499386
Subject: Control Science and Engineering
Abstract/Summary:
To address the scene perception problem of robots operating in strong-radiation environments, a piezoelectric-transducer scene perception system for in-air detection is constructed. To overcome the insufficient data and low resolution of piezoelectric transducers, a scene sensing scheme that fuses ultrasonic array data with Kinect data is adopted: local features of the target are extracted from the ultrasonic echo data and matched against the scene information acquired by a low-cost Kinect sensor, so that the robot can be positioned within the scene.

This paper first reviews the research status of mobile robot positioning based on three-dimensional scene information. It then examines the sound field characteristics of two-dimensional ultrasonic arrays in air, drawing on the propagation characteristics of ultrasonic waves in air and the relationship between a single array element and a two-dimensional array. An ultrasonic transducer with a diameter of 10 mm and a center frequency of 40 kHz was selected, and an ultrasonic array device for in-air detection was improved. The array platform consists of 40 transceiver transducers: the transmitting array is a 1 x 8 arrangement of elements and the receiving array is a 4 x 8 arrangement. At the transmitting end, an FPGA triggers the excitation signal, which reaches the transmitters through a circuit containing amplification and filtering modules; the receiving-end circuit amplifies, conditions, and filters the echo signal. The host computer preprocesses the echo signals, extracts local features from the data, and matches the extracted target features against the scene feature information.

For scene perception, a depth image of the scene is first acquired by the Kinect and converted into point cloud data; the three-dimensional scene is then reconstructed by stitching the individual point clouds together. Experiments show that relatively clear three-dimensional scene data can be obtained, from which a scene information database is established. Parallel acceleration is added to the data processing: identical batch operations are executed in parallel to optimize throughput. At the same time, feature information is extracted from the ultrasonic echo data and matched against the scene information database acquired by the Kinect to obtain the relative position of the ultrasonic array in the scene, achieving feature-level fusion of the two sensors' data.

In this way, the mobile robot is positioned by the ultrasonic array sensor and the Kinect sensor simultaneously. Because the relative pose between the ultrasonic array and the Kinect is known, the ultrasonic echo information can be matched against the 3D point cloud database in a data fusion stage based on local feature matching, and the robot's position is thereby obtained.
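The abstract does not spell out how the echoes of the 4 x 8 receiving array are combined; a common choice for such planar arrays is delay-and-sum beamforming. The sketch below assumes a 10 mm element pitch (matching the transducer diameter), a 400 kHz sampling rate, and a speed of sound of 343 m/s; all three values are illustrative, not taken from the thesis.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degC (assumed)
FS = 400e3               # sampling rate in Hz, assumed 10x the 40 kHz carrier
PITCH = 0.01             # element spacing in m, assumed equal to the 10 mm diameter

def delay_and_sum(echoes, azimuth, elevation):
    """Steer a 4 x 8 receiving array toward (azimuth, elevation), in radians.

    echoes: array of shape (4, 8, n_samples), one echo trace per element.
    Returns the beamformed trace of length n_samples.
    """
    rows, cols, n = echoes.shape
    # Unit vector of the steering direction.
    u = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])
    out = np.zeros(n)
    for r in range(rows):
        for c in range(cols):
            # Element position in the array plane (z = 0).
            pos = np.array([c * PITCH, r * PITCH, 0.0])
            # Geometric delay relative to the array origin, in samples;
            # advance each trace so echoes from direction u align in time.
            delay = int(round(np.dot(pos, u) / SPEED_OF_SOUND * FS))
            out += np.roll(echoes[r, c], -delay)
    return out / (rows * cols)
```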
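Local feature extraction from the echo data is likewise only named, not specified. A minimal sketch, assuming envelope detection with a Hilbert transform followed by peak picking, which yields a time-of-flight range and an amplitude per detected echo; the peak threshold and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def echo_features(trace, fs=400e3, c=343.0):
    """Extract simple local features (range, amplitude) from one echo trace."""
    envelope = np.abs(hilbert(trace))                 # demodulate the 40 kHz echo
    peaks, _ = find_peaks(envelope, height=0.2 * envelope.max())
    features = []
    for p in peaks:
        tof = p / fs                                  # round-trip time of flight (s)
        features.append({
            "range_m": tof * c / 2.0,                 # round trip -> one-way distance
            "amplitude": float(envelope[p]),
        })
    return features
```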
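Converting a Kinect depth image into point cloud data is a standard pinhole back-projection. The sketch below uses nominal Kinect v1 depth intrinsics (the fx, fy, cx, cy values are assumptions; a real system would use calibrated ones) and a small helper that stitches clouds given per-frame poses, e.g. from registration.

```python
import numpy as np

# Nominal Kinect v1 depth-camera intrinsics (assumed; calibrate in practice).
FX, FY = 580.0, 580.0
CX, CY = 320.0, 240.0

def depth_to_point_cloud(depth_mm):
    """Back-project a 480 x 640 depth image (millimetres) to an N x 3 cloud (metres)."""
    h, w = depth_mm.shape
    v, u = np.mgrid[0:h, 0:w]                   # pixel row/column grids
    z = depth_mm.astype(np.float64) / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                   # drop invalid (zero-depth) pixels

def stitch(clouds, poses):
    """Merge per-frame clouds given their (R, t) poses, e.g. from registration."""
    merged = [cloud @ R.T + t for cloud, (R, t) in zip(clouds, poses)]
    return np.vstack(merged)
```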
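The batch-parallel optimization described above maps naturally onto a worker pool. A minimal sketch using Python's multiprocessing, assuming the per-item operation (here the hypothetical echo_features from the earlier sketch) is picklable; the process count is an assumption.

```python
from multiprocessing import Pool

def process_batch(items, worker, processes=4):
    """Run the same operation over every item of a batch in parallel."""
    with Pool(processes=processes) as pool:
        return pool.map(worker, items)

# Example (run under an `if __name__ == "__main__":` guard):
# features_per_trace = process_batch(all_traces, echo_features)
```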
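The final matching and positioning step is described only at a high level. One plausible reading, sketched here, is multilateration: once ultrasonic range features have been matched to landmark points in the Kinect scene database, the array position follows from nonlinear least squares. The landmark/range representation is an assumption, not the thesis's stated method.

```python
import numpy as np
from scipy.optimize import least_squares

def localize(landmarks, ranges, x0=np.zeros(3)):
    """Estimate the ultrasonic array's position from one-way ranges to
    scene landmarks matched in the Kinect point-cloud database."""
    landmarks = np.asarray(landmarks)   # K x 3 matched landmark positions
    ranges = np.asarray(ranges)         # K measured one-way ranges (m)

    def residuals(p):
        # Difference between predicted and measured range to each landmark.
        return np.linalg.norm(landmarks - p, axis=1) - ranges

    sol = least_squares(residuals, x0)
    return sol.x                        # estimated (x, y, z) of the array
```

Since the relative pose between the array and the Kinect is known, the robot's pose would then follow by composing this estimate with that fixed transform.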
Keywords/Search Tags: Ultrasound array, Scene awareness, Autonomous positioning, Feature matching