Research And Deployment Of Sign Language Recognition Algorithm Based On Convolutional Neural Network

Posted on: 2022-08-12    Degree: Master    Type: Thesis
Country: China    Candidate: J L Zhao    Full Text: PDF
GTID: 2518306353976319    Subject: Information and Communication Engineering
Abstract/Summary:
Sign language is a form of body language used mainly by deaf-mute people to communicate. It expresses specific semantics through particular hand and arm movements and motion trajectories, supplemented by auxiliary cues such as facial expression and the position of the palms relative to the body, and it is read visually. Artificial intelligence algorithms now influence more and more aspects of daily life. If computer technology can be used to build a sign language recognition system that recognizes the gestures made by deaf-mute people and translates them into language that hearing people can understand, the communication barrier between deaf-mute people and hearing people can be removed, allowing deaf-mute people to integrate into society more easily. At the same time, the development of artificial intelligence algorithms has been shifting toward real-time data processing, with the focus gradually moving from the cloud to embedded devices. Deploying algorithms at the data end removes the dependence on the computing power of cloud servers, which saves network transmission cost, increases speed, and greatly reduces overall cost.

This thesis studies sign language recognition algorithms based on deep learning and completes the deployment of part of these algorithms on a self-made embedded hardware platform built around the RK3399:

First, for static sign language recognition, a high-resolution network is used to locate the hand region, skin-color segmentation is performed in a suitable color space, and a lightweight convolutional neural network is designed for sign language classification. A complete and efficient static sign language data acquisition system is also designed. Experimental verification is carried out on a static sign language data set and implemented on a GPU platform.

Second, for dynamic sign language recognition, the R-C3D network is improved: the C3D network used for feature extraction is replaced with a deeper network, and the proposal segment lengths and action decision thresholds in the temporal proposal subnet and the classification subnet are optimized. Experimental verification is carried out on the public THUMOS14 data set and a continuous dynamic sign language data set, again implemented on a GPU platform.

Third, to obtain a mobile platform on which the algorithms can be deployed, an embedded single board is built around the Rockchip RK3399. The schematic design, board fabrication, high-speed signal simulation, system image flashing, and boot-up debugging are completed. The platform runs Ubuntu 18.04 and is fitted with peripherals such as an LCD, keyboard and mouse, camera, and external storage.

Finally, for deployment on the mobile terminal, model conversion is performed with the MNN mobile inference framework, followed by network loading, data preprocessing, model inference, and result display, completing the deployment of the static sign language recognition algorithm on the platform.
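The abstract does not state which color space is used for the skin-color segmentation step in the static recognition pipeline. The sketch below is purely illustrative: it assumes YCrCb thresholding with OpenCV, a common choice for this technique, and the threshold values and file names are placeholders rather than the thesis's actual settings.

```python
# Hypothetical sketch of skin-color segmentation for hand-region extraction.
# The thesis does not specify the color space; YCrCb thresholding is assumed here.
import cv2
import numpy as np

def segment_skin(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels in a BGR image."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly cited Cr/Cb ranges for skin; the thresholds actually used in the
    # thesis are unknown and would need tuning on the sign language data set.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening/closing to suppress noise and fill small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

if __name__ == "__main__":
    frame = cv2.imread("hand_crop.jpg")          # hand region from the detector
    skin = segment_skin(frame)
    hand_only = cv2.bitwise_and(frame, frame, mask=skin)
    cv2.imwrite("hand_skin_segmented.jpg", hand_only)
```

The segmented hand image would then be fed to the lightweight classification network described in the first contribution.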
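For the mobile-terminal deployment step, the following is a minimal sketch of the MNN inference flow the abstract describes (model loading, data feeding, inference, result reading), using MNN's Python API for brevity. The model file name, input resolution (1x3x224x224), and class count (24) are assumptions, and the actual deployment on the RK3399 board may well use MNN's C++ runtime instead.

```python
# Minimal sketch of MNN inference: load a converted .mnn model, feed one
# preprocessed image, run a session, and read back the class scores.
# File name, input shape, and class count are illustrative assumptions.
import MNN
import numpy as np

interpreter = MNN.Interpreter("static_sign.mnn")   # model converted with MNNConvert
session = interpreter.createSession()
input_tensor = interpreter.getSessionInput(session)

# Dummy preprocessed input; in the real pipeline this would come from the
# camera frame after hand localization, skin segmentation, and normalization.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
tmp_input = MNN.Tensor((1, 3, 224, 224), MNN.Halide_Type_Float,
                       image, MNN.Tensor_DimensionType_Caffe)
input_tensor.copyFrom(tmp_input)

interpreter.runSession(session)

output_tensor = interpreter.getSessionOutput(session)
tmp_output = MNN.Tensor((1, 24), MNN.Halide_Type_Float,
                        np.zeros((1, 24), dtype=np.float32),
                        MNN.Tensor_DimensionType_Caffe)
output_tensor.copyToHostTensor(tmp_output)
scores = np.array(tmp_output.getData()).reshape(1, 24)
print("predicted class:", int(np.argmax(scores)))
```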
Keywords/Search Tags:Deep learning, Gesture recognition, 3D CNN, Embedded platform, Model deployment