
Integration of joint coupling for visually servoing a 5-DOF hybrid robot

Posted on: 2000-10-23
Degree: Ph.D
Type: Dissertation
University: Columbia University
Candidate: Oh, Paul Yu
Full Text: PDF
GTID: 1468390014965425
Subject: Engineering
Abstract/Summary:
The "big picture" goal for today's roboticists is to make tomorrow's robots work unassisted, and this requires an appropriate visual-servoing architecture. Much of the research toward this goal in the past decade has focused on designing controllers that rely exclusively on image data. By contrast, most robots on the shop floor are servoed kinematically with joint data. People, on the other hand, appear to coordinate their body motions using both image and kinematic data.

People, like robots, have joints of varying bandwidths and ranges of motion, and they display interesting behaviors when visually tracking moving targets. First, we engage several joints (eye pan, neck, torso, etc.) when tracking, with motions that suggest a kinematic joint coupling; for example, the eyes and neck typically pan in the same direction. Second, when tracking, the eyes lead (i.e., start panning before) the neck. With these behaviors, people track targets quite well despite large variations in target motions. This hints that integrating similar behaviors, by combining kinematic servoing in a visually servoed robot, may improve its tracking performance.

In an approach we call partitioning, both image and kinematic data are used to visually servo a 5-DOF robot by defining a joint coupling among the rotational and translational degrees of freedom in the underlying control architecture. Experiments qualitatively illustrate partitioning's ability to overcome limitations of conventional visually servoed tracking systems. Quantitative analysis reveals that a robot's fast-bandwidth joints physically serve as lead compensators when coupled to slower joints and thus improve tracking.
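The eye/neck coupling described above can be illustrated with a minimal toy model: two first-order joints of different bandwidths tracking a step target, where the slow "neck" joint is kinematically driven by the fast "eye" joint's angle. All bandwidths, gains, and time steps here are illustrative assumptions for the sketch, not the dissertation's actual controller or parameter values.

```python
# Toy sketch of kinematic joint coupling (not the dissertation's controller).
# The fast "eye" joint servos directly on the visual error; the slow "neck"
# joint is coupled to the eye angle, so the eye leads and the neck follows,
# eventually re-centering the eye in its range of motion.

DT = 0.01        # integration step, seconds (assumed)
EYE_BW = 10.0    # eye joint bandwidth, rad/s - fast (assumed)
NECK_BW = 1.0    # neck joint bandwidth, rad/s - slow (assumed)

def simulate(t_end, target=1.0):
    """Euler-integrate the coupled joints; return (eye, neck) angles at t_end."""
    eye = neck = 0.0
    for _ in range(int(t_end / DT)):
        gaze = eye + neck                     # total gaze angle
        eye += DT * EYE_BW * (target - gaze)  # eye servos on visual error
        neck += DT * NECK_BW * eye            # neck coupled to eye angle
    return eye, neck

eye_early, neck_early = simulate(0.2)   # early on, the eye leads the neck
eye_late, neck_late = simulate(5.0)     # later, the neck carries the angle
```

In this sketch the eye responds first (eye_early is much larger than neck_early), while at steady state the neck holds nearly the full target angle and the eye returns toward center, mirroring the eyes-lead-the-neck behavior and the lead-compensation effect the abstract attributes to fast joints coupled to slower ones.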
Keywords/Search Tags: Joint, Visually, Tracking