
Key Technology Research On Predictive Display-based Teleoperation In Unknown Environment

Posted on: 2019-03-14
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H Hu
Full Text: PDF
GTID: 1318330542498651
Subject: Mechanical and electrical engineering
Abstract/Summary:
Teleoperation is an important way to control a remote robot by combining the superior decision-making ability of a human operator with the superior execution ability of the robot. Under direct teleoperation with delayed visual feedback, human operators tend to adopt an inefficient "move-and-wait" strategy in place of natural continuous motion. Predictive-display-based teleoperation (PD) is an effective solution, especially under large time delays, because it provides real-time visual feedback that helps the operator make decisions. PD relies on a 3D model of the robot and its working environment. In a structured environment, the 3D model of the working environment can be built in advance with computer graphics; without prior knowledge of the working environment, however, the model cannot be constructed beforehand. This thesis therefore studies PD methods for unstructured or unknown environments: it constructs a 3D model of the working environment online from captured video images and then renders the predicted view from this model, which compensates for the delayed visual feedback. The main research contents are as follows:

(1) An algorithm for automatically constructing the initial map is presented. Without any human intervention, the algorithm builds the environment model from two images that are selected automatically according to their baseline. In image matching and localization, a combination of a global image descriptor and local image descriptors is introduced to improve search efficiency. Meanwhile, a camera pose estimation algorithm derived from model-matching-based image tracking is incorporated into the camera pose prediction step, which reduces tracking failures and increases the robustness of the system. The whole algorithm runs in two threads to guarantee real-time tracking and localization, because the costly process of map reconstruction and optimization
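The two-thread split described in (1) can be sketched as below. This is a minimal illustrative skeleton, not the thesis's actual implementation: the class and method names, the every-third-frame keyframe rule, and the stand-in pose/map-point values are all assumptions; the point is only that the latency-sensitive tracking loop never waits on the costly mapping loop.

```python
import queue
import threading

class TwoThreadSLAM:
    """Minimal sketch of the two-thread split: a fast tracking loop
    hands selected keyframes to a slow mapping/optimization loop."""

    def __init__(self):
        self.keyframe_queue = queue.Queue()
        self.map_points = []   # grown only by the mapping thread
        self.trajectory = []   # grown only by the tracking thread

    def track(self, frames):
        """Lightweight, latency-sensitive loop: estimate a pose per frame
        and forward every third frame as a keyframe (toy selection rule)."""
        for i, frame in enumerate(frames):
            self.trajectory.append(("pose", i))   # stand-in for pose estimation
            if i % 3 == 0:                        # illustrative keyframe rule
                self.keyframe_queue.put(frame)
        self.keyframe_queue.put(None)             # sentinel: no more frames

    def build_map(self):
        """Costly loop: runs concurrently and never blocks tracking."""
        while True:
            kf = self.keyframe_queue.get()
            if kf is None:
                break
            # stand-in for triangulation + local map optimization
            self.map_points.append(("point_from", kf))

    def run(self, frames):
        mapper = threading.Thread(target=self.build_map)
        mapper.start()
        self.track(frames)    # tracking runs in the calling thread
        mapper.join()
        return len(self.trajectory), len(self.map_points)
```

With ten input frames, every frame yields a pose while only the selected keyframes reach the mapping thread, mirroring the separation of concerns described above.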
is separated from the latency-sensitive tracking and localization part. Finally, a SLAM (Simultaneous Localization and Mapping) system is built on the ROS (Robot Operating System) platform; images captured by the camera are sent to the SLAM software as ROS messages.

(2) A robust optimization method is proposed so that the SLAM system can work continuously in a large environment. To improve map quality and obtain a consistent map, loop detection and correction are performed by matching similar past images using a multi-layer vocabulary tree of visual words. To increase robustness for sustained tracking and localization, a multi-map localization and mapping method is put forward: when tracking is detected to be lost, it automatically constructs a new map and continues tracking in it. Using multiple small maps instead of one large map also copes with the error accumulation that usually renders a single large map invalid. Finally, the multi-map software is implemented with four threads: a tracking and localization thread, a single-map mapping thread, a loop-closing thread, and a multi-map management thread.

(3) A sensor fusion method is proposed to optimize the pose estimate and recover the scale of the map by fusing IMU measurements into the monocular visual SLAM system. The fusion algorithm is built on extended Kalman filter theory and serves three purposes: optimizing the camera pose, computing the map scale, and compensating the calibration error between the camera and the IMU. The filter uses the camera pose estimated by SLAM as the measurement in the update step; it is combined with the camera pose propagated from IMU measurements to improve the accuracy of the pose estimate. Because the initial values affect the stability of the Kalman filter, an initialization method based on pre-integration theory is presented. The estimated initial values include
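The predict/update pattern of the fusion in (3) can be illustrated with a deliberately simplified one-dimensional Kalman filter: the state is propagated from integrated inertial data (predict) and corrected by the position reported by visual SLAM (update). The thesis's actual filter operates on the full camera pose, IMU biases, map scale, and camera-IMU calibration; the scalar state and all noise values below are assumptions for illustration only.

```python
class ScalarPoseFilter:
    """Toy 1-D Kalman filter showing the visual-inertial fusion pattern:
    IMU data drives the prediction, the SLAM pose corrects it."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.x = x0   # estimated position
        self.p = p0   # estimate variance
        self.q = q    # process noise (IMU integration drift), assumed value
        self.r = r    # measurement noise (SLAM pose uncertainty), assumed value

    def predict(self, velocity, dt):
        """Propagate the state from integrated IMU measurements."""
        self.x += velocity * dt
        self.p += self.q

    def update(self, slam_position):
        """Correct the prediction with the SLAM pose measurement."""
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (slam_position - self.x)
        self.p *= (1.0 - k)
        return self.x
```

After one predict/update cycle the estimate lies between the IMU-propagated prediction and the SLAM measurement, weighted by their respective uncertainties, which is exactly the "linear combination" behavior described above.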
the IMU biases, the initial scale, and the direction of gravity. Another advantage of this sensor fusion method is that the camera can still be tracked and localized from IMU measurements when visual information is lacking, for example when too few features are detected.

(4) A surface reconstruction method for scattered point clouds is presented. The method takes the 3D map points produced by SLAM as input and extracts the surface of the scene. First, a tetrahedral mesh is generated by a discretization method based on 3D Delaunay rules. Second, an online incremental update method for the tetrahedralization is proposed using an event-driven model, so that the mesh grows continuously as the map is updated and extended. Finally, a smooth surface is extracted from the tetrahedral mesh: the extraction is formulated on a graph structure and solved with graph-cut theory. The reconstruction results are verified in various environments.

(5) A PD method is proposed that uses multiple texture images captured from positions close to the predicted viewpoint. The predicted image is obtained by projecting the four closest images onto the 3D model of the scene and fusing them, and it assists the human operator in teleoperating the robot. The overall architecture of PD teleoperation is then described, and a prototype system is built with a client-server architecture. The platform is evaluated with predicted views rendered from online reconstructions in several different types of environment. Moreover, another factor that increases the difficulty of teleoperation, the number of degrees of freedom, is analyzed. The experiments give further evidence that PD can be an effective way to compensate for the effects of time delay.
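The view-selection step in (5), picking the captured images closest to the predicted viewpoint and weighting them for fusion, might be sketched as follows. The function name and the inverse-distance blending rule are assumptions for illustration; the thesis projects the selected images onto the reconstructed 3D model before fusing, which is omitted here.

```python
import math

def select_texture_views(predicted_pos, view_positions, k=4):
    """Return indices of the k capture positions closest to the predicted
    viewpoint, plus normalized inverse-distance blend weights (assumed rule)."""
    dists = [(math.dist(predicted_pos, p), i)
             for i, p in enumerate(view_positions)]
    dists.sort()                 # nearest capture positions first
    chosen = dists[:k]
    # inverse-distance weights; small epsilon avoids division by zero
    raw = [1.0 / (d + 1e-6) for d, _ in chosen]
    total = sum(raw)
    weights = [w / total for w in raw]
    indices = [i for _, i in chosen]
    return indices, weights
```

A nearby capture thus dominates the blend while farther ones contribute progressively less, matching the intuition that images taken close to the predicted view carry the most reliable texture.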
Keywords/Search Tags: Predictive display, Teleoperation, Simultaneous localization and mapping, Sensor fusion, 3D reconstruction