
Virtual Assembly And Display Research Based On Behavior Interaction

Posted on: 2016-05-01    Degree: Master    Type: Thesis
Country: China    Candidate: L F Luo    Full Text: PDF
GTID: 2298330467979187    Subject: Human-computer interaction projects
Abstract/Summary:
With the continuous development of computer technology, virtual assembly technology has attracted increasing attention, yet a key problem remains: how to let the user interact with the virtual environment in a more natural way. Traditional interactive devices such as the mouse and keyboard mediate communication between human and computer through manual operation; owing to its lack of intelligence and convenience, this interactive mode can no longer meet users' requirements. Since the concept of natural human-computer interaction was proposed, researchers have studied more natural ways to interact with computers. Face recognition, gesture recognition, voice recognition, and related techniques, as important parts of natural human-computer interaction, have attracted great attention.

This work draws on human-computer interaction, computer vision, virtual reality, and gesture recognition. We develop a virtual interactive presentation system using the Kinect sensor and the virtual reality simulation software 3DVIA Studio. Through simple gestures and body actions, the user can perform virtual assembly and virtual display. The system is real-time and interactive, allowing the user to understand a product's functions and characteristics more intuitively.

The somatosensory interaction of the system mainly relies on gesture recognition based on the depth image and skeletal data obtained from the Kinect. In this paper, the position of the right-hand skeletal joint obtained from the Kinect is mapped to the computer screen coordinate system, so that moving the right hand replaces moving the mouse, and switching between an open right hand and a fist replaces the left mouse button. Hand actions thus simulate mouse operation as closely as possible. Other interactive actions are recognized from the relative positions of skeletal joints.

Gesture recognition comprises two parts: gesture segmentation and feature extraction. Hand gesture segmentation uses the depth image obtained by the Kinect.
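The hand-to-screen mapping described above can be sketched as a linear mapping from a fixed "interaction box" in front of the user onto the screen. This is a minimal illustration, not the thesis implementation: the screen resolution, box bounds, and joint coordinates are all assumed values.

```python
# Hypothetical sketch: mapping a Kinect right-hand joint to screen coordinates.
# All constants below are assumptions for illustration, not from the thesis.

SCREEN_W, SCREEN_H = 1920, 1080

# Kinect skeleton coordinates are in meters, centered on the sensor;
# y grows upward in sensor space but downward in screen space.
BOX_LEFT, BOX_RIGHT = -0.4, 0.4   # meters
BOX_TOP, BOX_BOTTOM = 0.4, -0.4   # meters

def hand_to_screen(hand_x, hand_y):
    """Linearly map a right-hand joint (x, y) in sensor space to pixel coords."""
    u = (hand_x - BOX_LEFT) / (BOX_RIGHT - BOX_LEFT)
    v = (BOX_TOP - hand_y) / (BOX_TOP - BOX_BOTTOM)  # flip the y axis
    # Clamp so the cursor never leaves the visible screen area.
    px = min(max(int(u * SCREEN_W), 0), SCREEN_W - 1)
    py = min(max(int(v * SCREEN_H), 0), SCREEN_H - 1)
    return px, py
```

A hand held at the sensor's center then lands at the middle of the screen, and the open-hand/fist state (detected separately from the depth image) would trigger the click.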
An adaptive dual-threshold segmentation based on the gray-level histogram is applied to segment the gesture, and median filtering is used to eliminate noise in the gesture image. For gesture feature extraction, we first extract the palm position using a morphological erosion operation. We then use an eight-neighborhood boundary-tracking algorithm to extract the gesture contour, and the Graham scan algorithm to compute the convex hull of the contour. After that, clustering and the K-curvature method are used to extract the fingertip positions. Finally, the extracted palm and fingertips are used to compute the number of fingers and the angles between them, so that different gestures can be classified.

This work achieves non-contact interaction between the user and the virtual scene, which is of great value for application innovation. Gesture recognition based on the depth image can identify common simple gestures under different lighting conditions. The adaptive dual-threshold segmentation method used in this paper is more effective for gesture segmentation than fixed-threshold methods. The feature extraction methods used here extract the palm and fingertips accurately. This study also provides a basis for further research on somatosensory interaction technology.
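The segmentation step can be illustrated with a toy version of dual-threshold depth segmentation followed by median filtering. The depth band, toy depth values, and the 1-D (rather than 2-D) median filter are simplifying assumptions for illustration only.

```python
# Illustrative sketch of dual-threshold depth segmentation and median filtering.
# Thresholds and the toy depth map are assumed values, not from the thesis.

def segment_hand(depth, near, far):
    """Binary mask: 1 where depth (mm) falls inside the hand's depth band."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]

def median3(row):
    """3-tap median filter to suppress isolated noise pixels in one mask row."""
    out = row[:]
    for i in range(1, len(row) - 1):
        out[i] = sorted(row[i - 1:i + 2])[1]
    return out
```

In an adaptive scheme the near/far band would be re-estimated per frame from the gray-level histogram of the depth image rather than fixed, which is what gives the method its robustness over fixed-threshold segmentation.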
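The K-curvature idea used for fingertip extraction can be sketched as follows: a contour point is a fingertip candidate when the angle between the vectors to its k-th neighbors along the contour is sharp. The contour points, the value of k, and the angle threshold below are assumptions for illustration.

```python
import math

# Hypothetical K-curvature sketch; parameters are not values from the thesis.

def k_curvature_angle(contour, i, k):
    """Angle (degrees) at contour[i] formed with the points k steps away."""
    n = len(contour)
    px, py = contour[i]
    ax, ay = contour[(i - k) % n]
    bx, by = contour[(i + k) % n]
    v1 = (ax - px, ay - py)
    v2 = (bx - px, by - py)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def fingertip_candidates(contour, k=2, max_angle=60.0):
    """Indices of contour points sharp enough to be fingertip candidates."""
    return [i for i in range(len(contour))
            if k_curvature_angle(contour, i, k) < max_angle]
```

On a real contour, clustering would then merge adjacent candidates into one fingertip per finger, and the angles between fingertip-to-palm vectors would distinguish the gestures.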
Keywords/Search Tags: Human-computer interaction, Virtual assembly, Kinect, Gesture recognition