
Research Of Robot Visual Servoing Control Method With Constraints

Posted on: 2013-11-10    Degree: Doctor    Type: Dissertation
Country: China    Candidate: T T Wang    Full Text: PDF
GTID: 1228330395468211    Subject: Control theory and control engineering
Abstract/Summary:
Traditional image-based visual servoing (IBVS) normally uses a set of image points as the image features, and the controller is designed with a proportional control law to achieve local convergence from the current visual features to the desired ones. This approach needs no object model, is robust to camera modeling and hand-eye calibration errors, and generally gives satisfactory control results. Its drawbacks, however, are also well known: only local asymptotic stability, camera retreat, and image singularities, and above all poor constraint handling; most of the literature does not address the constraints explicitly.

In practice, three kinds of system constraints need to be considered in image-based visual servoing: (1) actuator velocity constraints: the maximum actuator speed must be planned according to the mechanical structure of the robot so that the actuator velocities never exceed their physical limits; (2) object visibility constraints: the image feature trajectories must be planned according to the camera's field of view so that the feature points always stay in the image plane; (3) camera three-dimensional (3D) trajectory constraints: the 3D trajectory of the camera must be planned according to the robot workspace limitations to avoid unnecessary camera movements. Neglecting these constraints can degrade IBVS performance or even cause the visual servoing to fail.

This dissertation studies a monocular eye-in-hand robot system and proposes several constrained image-based visual servoing control methods.

Firstly, a parallel-distributed compensation (PDC)-based visual servoing control method is presented. The closed formulation of the transfer function of the Takagi-Sugeno (TS) fuzzy model is equivalent to the visual servoing convex polytopic model, where the weighting functions play the role of the membership functions of the antecedent sets, so the PDC method can be employed directly in the visual servoing controller design. Taking image coordinates as the image features, the actuator limitations and the visibility constraints are formulated as input and output constraints, respectively. The visual servoing control tasks are carried out by solving, off-line, a convex optimization problem involving linear matrix inequalities (LMIs), and any feasible solution of the LMIs guarantees closed-loop asymptotic stability.

Secondly, a quasi-min-max model predictive control (MPC)-based visual servoing control method is presented. Again taking image coordinates as the image features, the actuator limitations and the visibility constraints are formulated as input and output constraints, respectively. The control signals are computed on-line by solving a convex optimization problem involving LMIs over an infinite prediction horizon, and feasible solutions of the LMIs satisfy the Lyapunov stability conditions. Compared with the PDC-based method, the control signals of the quasi-min-max MPC-based method are computed in a receding-horizon fashion, which is robust to system noise. Moreover, the control signals are split into the current one and the future ones, and the number of LMIs depends only on the number of time-varying parameters in the image Jacobian matrix of each image point, which makes the method suitable for controlling a six-degree-of-freedom (DOF) robot system and allows it to solve several problems that are intractable for the traditional IBVS controller.
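As background (these are the standard IBVS relations, not quoted from the dissertation), the proportional control law and the point-feature image Jacobian (interaction matrix) referred to above are commonly written as

    e = s - s^*, \qquad v_c = -\lambda \, \hat{L}_s^{+} \, e,

    L_s = \begin{bmatrix} -1/Z & 0 & x/Z & xy & -(1+x^2) & y \\ 0 & -1/Z & y/Z & 1+y^2 & -xy & -x \end{bmatrix},

where v_c is the camera velocity screw, \lambda the proportional gain, (x, y) the normalized image coordinates of a feature point, Z its depth, and \hat{L}_s^{+} the pseudo-inverse of the estimated interaction matrix. The depth-dependent entries of L_s are the time-varying parameters around which the convex polytopic (LPV) models used in this work are built.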
Finally, taking the depth information together with the image coordinates of the feature points as the image features, a modified quasi-min-max MPC-based visual servoing control algorithm is presented, and the image Jacobian matrix associated with these image features is derived. Although the aforementioned quasi-min-max MPC-based method can plan the camera velocity screw and the feature trajectories in the 2D image plane, it cannot handle the camera's 3D trajectory explicitly; in particular, for a pure rotation around the optical axis, the camera retreat motion may even cause the visual servoing to fail. The modified algorithm not only formulates the actuator limitations as system input constraints, but also formulates the object visibility constraints and the camera 3D trajectory constraints as output constraints. While keeping the same good performance, the introduced depth constraints significantly improve the camera's 3D trajectory, making it easy to handle the visual servoing task of a π-radian pure rotation around the optical axis.
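As a rough illustration of how the three constraint classes can be imposed, the sketch below sets up a one-step constrained IBVS optimization in Python with CVXPY. It is only a simplified stand-in under stated assumptions, not the quasi-min-max LMI formulation developed in the dissertation; all numerical limits, the sampling period, and the depth-motion bound are invented for the example.

    # Illustrative sketch only: a one-step constrained IBVS quadratic program,
    # NOT the quasi-min-max LMI formulation of the dissertation. All numbers
    # and bounds are assumptions made for this example.
    import numpy as np
    import cvxpy as cp

    def interaction_matrix(x, y, Z):
        """Classical point-feature interaction matrix in normalized image coordinates."""
        return np.array([
            [-1.0 / Z, 0.0,      x / Z, x * y,        -(1.0 + x ** 2),  y],
            [0.0,      -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y,          -x],
        ])

    # Current / desired normalized coordinates of one feature point and its depth (assumed values).
    x, y, Z = 0.2, -0.1, 1.5
    x_d, y_d = 0.0, 0.0

    L = interaction_matrix(x, y, Z)
    e = np.array([x - x_d, y - y_d])   # image feature error
    lam, dt = 0.5, 0.04                # proportional gain and sampling period (assumed)

    v = cp.Variable(6)                 # camera velocity screw [vx, vy, vz, wx, wy, wz]

    # Follow the unconstrained proportional law (feature velocity = -lam * e)
    # as closely as the constraints allow.
    objective = cp.Minimize(cp.sum_squares(L @ v + lam * e))

    constraints = [
        cp.abs(v[:3]) <= 0.25,                           # (1) translational velocity limits [m/s]
        cp.abs(v[3:]) <= 0.50,                           # (1) rotational velocity limits [rad/s]
        cp.abs(dt * (L @ v) + np.array([x, y])) <= 0.4,  # (2) predicted feature stays in the field of view
        cp.abs(v[2]) <= 0.10,                            # (3) crude stand-in for a depth / 3D-trajectory bound
    ]
    cp.Problem(objective, constraints).solve()
    print("camera velocity screw:", v.value)

In the dissertation, the same three constraint classes are instead embedded in LMI conditions solved over an infinite prediction horizon, which is what provides the Lyapunov stability guarantees summarized above.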
Keywords/Search Tags: visual servoing, constraints, parallel-distributed compensation, quasi-min-max model predictive control, linear matrix inequalities, linear parameter varying (LPV) model, polytopic decomposition, tensor product model transformation, depth