As an important research direction in the field of human-computer interaction, action recognition has a wide range of applications in daily life. Limited by conditions such as lighting, viewing angle, privacy, and portability, traditional action recognition techniques based on cameras and wearable sensors are not applicable in some scenarios. Wi-Fi based action recognition, however, can effectively overcome these shortcomings. At present, Wi-Fi based action recognition methods that rely on manually defined features and traditional classification algorithms are not suitable for complex hand motion recognition tasks. Therefore, based on Wi-Fi Channel State Information (CSI), this thesis proposes a gesture recognition algorithm using deep transfer learning and an isolated sign language recognition algorithm combining CNN and LSTM. The main contributions include the following three points:

(1) To overcome the problem that existing methods extract incomplete hand motion segments, we propose a hand motion segment extraction algorithm based on a buffering technique. We first divide the denoised channel state information into several windows, and then compute the mean variance of the channel state information within each window. Finally, we extract the complete hand motion segment using the buffering technique together with a variance threshold.

(2) To improve the accuracy of Wi-Fi based gesture recognition, we propose a Wi-Fi based gesture recognition algorithm using deep transfer learning. We first convert Wi-Fi based gesture recognition into an image classification problem, and then use transfer learning methods to complete the recognition. In our experiments, the algorithm achieves 98% accuracy on our self-collected dataset, which is 5.7% higher than the state-of-the-art algorithm.

(3) To improve the accuracy of Wi-Fi based isolated sign language recognition, we propose a Wi-Fi based isolated sign language recognition algorithm combining CNN and LSTM. In this algorithm, we construct a deep neural network that combines the multilevel image features extracted by transfer learning with the sequential features extracted by LSTMs. In our experiments, this method improves the accuracy of isolated sign language recognition to 98.6%, which is 1.6% higher than SignFi.

In conclusion, our methods improve the accuracy and robustness of Wi-Fi based hand motion recognition and enrich the interaction between humans and smart devices.
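The window-variance-and-buffer segmentation described in contribution (1) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function name, the window size, the variance threshold, and the buffer length are all illustrative assumptions, and the CSI input is modeled as a plain 2-D array of denoised amplitudes (time samples by subcarriers).

```python
import numpy as np

def extract_motion_segment(csi, window_size=50, threshold=0.5, buffer_len=3):
    """Sketch of variance-based hand motion segment extraction.

    csi: 2-D array (time samples, subcarriers) of denoised CSI amplitudes.
    Returns (start, end) sample indices of the detected motion segment,
    or None if no window exceeds the variance threshold.
    All parameter defaults are illustrative assumptions.
    """
    n = csi.shape[0]

    # Step 1: split the stream into fixed-size windows and compute the
    # mean per-subcarrier variance inside each window.
    scores = []
    for start in range(0, n - window_size + 1, window_size):
        win = csi[start:start + window_size]
        scores.append(win.var(axis=0).mean())

    # Step 2: mark windows whose variance score exceeds the threshold
    # as "active" (i.e., containing hand motion).
    active = [s > threshold for s in scores]

    # Step 3: buffering — keep the segment open across up to
    # `buffer_len` consecutive quiet windows, so that brief pauses
    # inside a gesture do not split one motion into several fragments.
    seg_start = seg_end = None
    quiet = 0
    for i, is_active in enumerate(active):
        if is_active:
            if seg_start is None:
                seg_start = i
            seg_end = i
            quiet = 0
        elif seg_start is not None:
            quiet += 1
            if quiet > buffer_len:
                break  # too many quiet windows: the segment has ended
    if seg_start is None:
        return None
    return seg_start * window_size, (seg_end + 1) * window_size
```

The buffer in step 3 is what makes the extracted segment "complete": without it, a low-variance pause in the middle of a gesture would prematurely terminate the segment and yield the incomplete extractions that contribution (1) addresses.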