
Visual Recognition Of Human Pose For The Transfer-Care Assistant Robot

Posted on: 2020-11-21 | Degree: Master | Type: Thesis
Country: China | Candidate: S D Li
GTID: 2480306563468114 | Subject: Mechanical engineering
Abstract/Summary:
The world's aging population continues to rise, and traditional human caregiving can no longer meet today's needs. With the rapid development of intelligent robots, a number of intelligent transfer-care assistant robots have emerged. At present, only a few developed countries have mastered transfer-care nursing robot technology, while domestic research on transfer-care assistant robots remains limited and lacks work on the underlying algorithms. Taking the transfer-care assistant robot developed in-house at Hebei University of Technology as the experimental platform, a two-level neural network algorithm is proposed to meet the robot system's requirements for high accuracy and short-range adaptability in human posture detection. The algorithm achieves high accuracy and good adaptability in human posture recognition and provides a basis for the robot's human-robot interaction and safety.

Firstly, the experimental platform of the transfer-care assistant robot is introduced. The actual needs of users are clarified, the mechanical structure of the robot is presented, the holding part and the moving part are described, and the corresponding control structure is planned according to the robot's requirements for perception, action, and safety. To further ensure safety, the precision parameters of the robot are defined, the execution accuracy and recognition accuracy are analyzed, the relationship between accuracy and safety is discussed, and a standard for recognition accuracy is established.

Secondly, according to the requirements of human posture recognition for the transfer-care assistant robot, corresponding methods are proposed. Considering that depth maps contain many holes and much noise, image restoration methods are proposed to improve image quality. To reliably recognize the positions of human joints, a two-level cascade neural network is proposed that makes full use of RGB-D (RGB-depth) information. The first-level network estimates the pixel coordinates of the human joints in the color image; these coordinates are then transformed into the depth map, and joint heat maps are computed. A convolutional neural network is proposed as the second-level network: the depth image and the joint heat maps are fused and fed into it to estimate the global coordinates of the human joints.

Thirdly, based on the joint coordinates predicted in the previous step, a human underarm ROI (Region of Interest) is delineated, and the image ROI is segmented to obtain the underarm foreground. The underarm foreground is tracked, its boundary points are fitted with a convex hull, and the position of the underarm point is obtained. Finally, experiments on the accuracy of human pose recognition are carried out and the results are analyzed.
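The abstract mentions repairing holes and noise in the depth maps but does not specify the restoration method. The following is a minimal sketch of one common approach, nearest-valid-pixel hole filling followed by a small median filter, using NumPy and SciPy; the function name and parameters are illustrative, not the thesis's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def restore_depth(depth, median_size=3):
    """Generic depth-map restoration sketch (assumed, not the thesis's method):
    fill invalid (zero) pixels with the nearest valid depth value, then apply
    a small median filter to suppress speckle noise."""
    invalid = depth <= 0
    if invalid.any():
        # For each invalid pixel, get the indices of the nearest valid pixel.
        _, (rows, cols) = ndimage.distance_transform_edt(
            invalid, return_distances=True, return_indices=True)
        depth = depth[rows, cols]
    # Median filtering removes isolated depth spikes while largely preserving edges.
    return ndimage.median_filter(depth, size=median_size)
```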
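In the second step, the first-stage 2D joint predictions are mapped into the depth frame and rendered as joint heat maps, which are fused with the depth image as input to the second-level network. The sketch below shows one plausible way to render Gaussian heat maps and stack them with the depth channel; the Gaussian sigma and the channel ordering are assumptions, as the abstract gives no such details.

```python
import numpy as np

def render_joint_heatmaps(joints_uv, height, width, sigma=5.0):
    """Render one 2D Gaussian heat map per joint.
    `joints_uv`: (N, 2) array of (u, v) pixel coordinates already mapped from
    the color frame into the depth frame. `sigma` is an assumed value."""
    us = np.arange(width, dtype=np.float32)[None, None, :]   # (1, 1, W)
    vs = np.arange(height, dtype=np.float32)[None, :, None]  # (1, H, 1)
    u0 = joints_uv[:, 0].astype(np.float32)[:, None, None]   # (N, 1, 1)
    v0 = joints_uv[:, 1].astype(np.float32)[:, None, None]
    sq_dist = (us - u0) ** 2 + (vs - v0) ** 2
    return np.exp(-sq_dist / (2.0 * sigma ** 2))              # (N, H, W)

def fuse_depth_and_heatmaps(depth, heatmaps):
    """Stack the depth image and the joint heat maps along the channel axis,
    forming the (1 + N, H, W) input for the second-level network."""
    return np.concatenate([depth[None].astype(np.float32), heatmaps], axis=0)
```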
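The abstract states that the second-level convolutional network takes the fused depth and heat-map channels and outputs the global coordinates of the joints, but it does not describe the architecture. The PyTorch module below is an assumed stand-in (layer counts, channel widths, and the pooled regression head are all illustrative) that only shows the input/output contract: (1 + N) channels in, 3 x N coordinates out.

```python
import torch
import torch.nn as nn

class SecondLevelNet(nn.Module):
    """Assumed sketch of a second-level network: fused (1 + N)-channel input,
    3*N regressed global joint coordinates out. Not the thesis's architecture."""
    def __init__(self, num_joints):
        super().__init__()
        self.num_joints = num_joints
        in_channels = 1 + num_joints            # depth + one heat map per joint
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global pooling keeps it input-size agnostic
        )
        self.regressor = nn.Linear(128, 3 * num_joints)

    def forward(self, x):
        h = self.features(x).flatten(1)                      # (B, 128)
        return self.regressor(h).view(-1, self.num_joints, 3)  # (B, N, 3)
```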
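For the third step, the abstract describes delineating an underarm ROI from the predicted joints, segmenting the foreground, and fitting a convex hull to the boundary to locate the underarm point. The OpenCV sketch below illustrates that chain of operations; the Otsu thresholding and the choice of the hull vertex nearest the shoulder joint are assumptions, since the abstract does not state how the foreground is segmented or which hull point is kept.

```python
import cv2
import numpy as np

def underarm_point(depth_roi, shoulder_uv):
    """Illustrative underarm-point extraction inside a depth ROI.
    `depth_roi`: 8-bit single-channel ROI image; `shoulder_uv`: (u, v) of the
    shoulder joint in ROI coordinates. The segmentation rule is assumed (Otsu)."""
    # Assumed foreground segmentation; the thesis's actual rule is not given.
    _, mask = cv2.threshold(depth_roi, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(largest).reshape(-1, 2)   # boundary points of the hull
    # Assumed selection rule: keep the hull vertex nearest the shoulder joint.
    dists = np.linalg.norm(hull - np.asarray(shoulder_uv), axis=1)
    return tuple(hull[int(np.argmin(dists))])
```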
Keywords/Search Tags:Nursing Robot, Human Posture Recognition, RGB-D Image, Two-Stage Series Convolution Neural Network