
The Design And Implementation Of Android Dynamic Gesture For Space Separation

Posted on: 2020-08-21  Degree: Master  Type: Thesis
Country: China  Candidate: Q Bao  Full Text: PDF
GTID: 2428330575455039  Subject: Software engineering
Abstract/Summary:
Nowadays the demands placed on mobile applications are increasingly complex, and many user requirements are difficult to implement with traditional algorithms. With the development of deep learning, people in more and more fields have begun to use it to accomplish tasks that traditional algorithms struggle with. Applying deep learning to mobile application development makes it possible to meet very complex user requirements and to bring users an unprecedented interactive experience. This thesis develops an Android system for touchless ("space-separated") dynamic gestures based on deep convolutional neural networks (CNNs), a deep learning technique from the fields of image processing and object recognition. This kind of interaction breaks the limitations of past mobile app use, allowing users to carry out normal operations when they cannot, or prefer not to, touch the screen. It is also a topic rarely explored in mobile applications.

This thesis expounds the development background of the Android touchless dynamic gesture system and illustrates the operational advantages and convenience of dynamic gestures when using an Android device. The system uses an SSD object detection network for hand detection, together with an ECO model to classify actions. This layered design enables real-time dynamic gesture recognition in a resource-limited hardware environment. Finally, the app client invokes the models on a server and receives the recognition results, completing a simulation test of the algorithm pipeline, after which the overall algorithm is migrated to a low-power hardware platform.

The thesis uses two CNN models for real-time dynamic gesture recognition. We define a set of static gestures in advance, use them to train an SSD object detection model, and then use an ECO model for motion classification. Because of the power consumption requirements of the hardware platform, the action classification model, which requires a large amount of computation, stays dormant most of the time, while the lightweight hand detection model remains active. When the user performs no dynamic gesture, the action classifier is not triggered; when the user does perform one, the hand detection model detects the hand and temporarily wakes the action classification model to identify the dynamic gesture. This method achieves real-time dynamic gesture recognition on a mobile platform under tight limits on model computation.

Given the computing power of mobile terminal chips, the project proposes using these two CNN networks together, realizing accurate real-time dynamic gesture recognition on a hardware platform with limited computational capacity and bringing users an unprecedented operating experience: users can operate the phone normally even under constrained conditions. We finally achieved real-time classification of dynamic gestures at 30 fps in low-power scenarios, and the application prospects are broad.
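The dormant/wake scheme described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the detector and classifier callables are hypothetical placeholders standing in for the SSD hand detector and the ECO action classifier.

```python
class GestureRecognizer:
    """Two-stage gating sketch: a cheap detector runs on every frame,
    and the expensive classifier is woken only when a hand is seen."""

    def __init__(self, hand_detector, action_classifier):
        # Lightweight hand detector (stand-in for SSD): always active.
        self.hand_detector = hand_detector
        # Heavyweight action classifier (stand-in for ECO): dormant by default.
        self.action_classifier = action_classifier
        self.classifier_awake = False

    def process_frame(self, frame):
        """Return a gesture label, or None when no hand is present."""
        if self.hand_detector(frame):
            # Hand detected: temporarily wake the classifier for this gesture.
            self.classifier_awake = True
            return self.action_classifier(frame)
        # No hand: keep the expensive model dormant to save power.
        self.classifier_awake = False
        return None
```

The design keeps average per-frame cost close to that of the detector alone, since the classifier contributes only during the brief windows when a gesture is actually being performed.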
Keywords/Search Tags: deep convolutional neural network, SSD object detection, ECO, dynamic gesture for space separation, mobile application