
Research And Implementation Of Human-Computer Interaction System Based On Human Body Location And Tracking

Posted on: 2020-06-13
Degree: Master
Type: Thesis
Country: China
Candidate: G H An
Full Text: PDF
GTID: 2428330575963135
Subject: Engineering
Abstract/Summary:
With the rapid expansion of information technology and the development of virtual reality in recent years, artificial intelligence has attracted increasing attention from scholars, and human-computer interaction is one of its representative technologies. Research on human-computer interaction has become a worldwide hot topic; its main directions include voice recognition, gesture recognition, human body location, motion recognition and expression recognition. Considering the limitations of relying on a single recognition modality, this thesis integrates human body location and gesture recognition and designs an indoor human-computer interaction system that uses projection as the carrier. Among the many approaches to human-computer interaction, Kinect-based technology has become the first choice in many scenarios because of the sensor's low cost, the applicability of its depth and color images, and the large number of methods already developed for it. The focus of this thesis is to develop a human-computer interaction system supported by two techniques: human body location and tracking, and static and dynamic gesture recognition. The specific work of this thesis is as follows:

(1) Human body location and tracking method. The Kinect sensor is used to detect, locate and track the human body in depth images collected in the room. The first requirement is to obtain a clean depth region of the human body from the Kinect depth image. To this end, the thesis uses a support vector machine, median filtering, morphological denoising, nearest-neighbor partitioning of the depth region and the classical convex hull algorithm. First, a trained SVM performs human body detection to decide whether location should start (that is, whether someone is in the room); the detected person is assumed to be the one closest to the Kinect, and this detection result serves as the starting point of location. The depth image is then denoised, and the region of the detected person is segmented to obtain the depth region of the human body. Next, an improved convex hull algorithm determines the outline of the person, and the convex area of the upper part yields the approximate head region. Finally, by analyzing the head region, the center point of the human body along the horizontal axis of the depth map is established; this point is converted from image coordinates into the actual two-dimensional position of the body in the room, and continuous location over successive frames realizes tracking.
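To make the depth-processing pipeline of step (1) concrete, the following Python sketch (using OpenCV and NumPy, which the abstract does not name) shows one plausible way to filter the depth frame, segment the nearest depth band, take the convex hull, and back-project a head-based center column to a 2-D room position. The SVM presence check is omitted, and all thresholds, the focal length and the function name `locate_body` are illustrative assumptions rather than the thesis's actual implementation.

```python
import cv2
import numpy as np

def locate_body(depth_mm, band_mm=400, focal_px=365.0):
    """Illustrative version of step (1): denoise the Kinect depth frame,
    keep the depth band nearest the sensor, and estimate the body's
    2-D position in the room. Thresholds are placeholder values."""
    depth = cv2.medianBlur(depth_mm.astype(np.uint16), 5)      # median filtering
    valid = depth > 0                                          # 0 = no depth reading
    if not valid.any():
        return None
    nearest = int(depth[valid].min())                          # person assumed closest to the Kinect
    # Nearest-neighbour style segmentation: keep pixels within a band of the nearest depth.
    mask = ((depth >= nearest) & (depth <= nearest + band_mm)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # morphological denoising
    # Largest contour is taken as the body silhouette; its convex hull bounds the figure.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hull = cv2.convexHull(max(contours, key=cv2.contourArea))
    # The top of the hull approximates the head; its mean column gives the body's centre column.
    top_y = hull[:, 0, 1].min()
    head = hull[hull[:, 0, 1] < top_y + 40]                    # 40 px head band (assumed)
    u = float(head[:, 0, 0].mean())                            # image column of the centre point
    z = float(depth[mask > 0].mean()) / 1000.0                 # mean body depth in metres
    x = (u - depth.shape[1] / 2.0) * z / focal_px              # pinhole back-projection to lateral offset
    return x, z                                                # 2-D position on the floor plane
```

Tracking would then amount to repeating this estimate on each frame and associating the resulting positions over time, as the abstract describes.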
(2) Static and dynamic gesture recognition methods. The Kinect sensor acquires the color image, depth image and skeleton joints of the hand for static gesture recognition. First, the start and end of a gesture are determined from the location of the human body. The raw depth image is then denoised, and the hand depth region is segmented with the nearest-neighbor method. Hand features are obtained by combining the positions and angles of the five fingers, and static gesture recognition is finally performed from the finger positions. For dynamic gestures, the human skeleton joints are obtained through the Kinect, and a gesture-driven mouse function is realized by computing the line along which the arm points and its intersection with the projection surface, as sketched after the abstract.

(3) Realization of the human-computer interaction system. The thesis designs and implements a human-computer interaction system whose two core techniques are human body location and tracking and gesture recognition. The system is activated when a person enters the room and then locates and tracks that person; once the position is recognized, gesture recognition is enabled, and static and dynamic gestures replace part of the mouse and keyboard functions to interact with the projected content. The system is thus based on human body location and tracking and uses static and dynamic gesture recognition as its operating method. Experiments verify that both body location and gesture recognition are accurate and that switching between them is smooth and feasible.
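The gesture-mouse idea in step (2) can be summarized as a ray-plane intersection. The Python sketch below extends the elbow-to-hand direction, intersects it with the projection plane and maps the hit point to screen pixels; the calibration inputs (plane point and normal, wall axes and size) and the function name `arm_ray_to_cursor` are hypothetical, since the abstract does not specify how the projection surface is calibrated.

```python
import numpy as np

def arm_ray_to_cursor(elbow, hand, plane_point, plane_normal,
                      wall_origin, wall_x, wall_y,
                      wall_size_m=(2.0, 1.5), screen_px=(1920, 1080)):
    """Illustrative gesture-mouse mapping: extend the elbow->hand direction
    and intersect it with the projection plane. All 3-D inputs are assumed
    to be in the Kinect camera frame; wall_x / wall_y are unit vectors along
    the projected image, and wall_size_m is its physical size (assumed)."""
    elbow, hand = np.asarray(elbow, float), np.asarray(hand, float)
    d = hand - elbow                                  # pointing direction of the forearm
    n = np.asarray(plane_normal, float)
    denom = d @ n
    if abs(denom) < 1e-6:                             # arm parallel to the wall: no cursor
        return None
    t = (np.asarray(plane_point, float) - hand) @ n / denom
    if t < 0:                                         # pointing away from the wall
        return None
    hit = hand + t * d                                # 3-D intersection with the projection plane
    rel = hit - np.asarray(wall_origin, float)
    u = (rel @ np.asarray(wall_x, float)) / wall_size_m[0]   # fraction across the projected width
    v = (rel @ np.asarray(wall_y, float)) / wall_size_m[1]   # fraction down the projected height
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None                                   # pointing outside the projected area
    return int(u * screen_px[0]), int(v * screen_px[1])      # cursor position in pixels
```

In a full system, the elbow and hand joints would come from the Kinect skeleton stream, and click events could be triggered by static gestures, matching the division of labour described in steps (2) and (3).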
Keywords/Search Tags: Human-computer interaction, Kinect sensor, Human body location and tracking, Gesture recognition, Projection