
Research On Activity Recognition Of Multi-modality Sensors Based On Deep Learning

Posted on: 2021-05-25
Degree: Master
Type: Thesis
Country: China
Candidate: Y D Sun
Full Text: PDF
GTID: 2428330629480150
Subject: Control engineering

Abstract/Summary:
In recent years, with the development of microelectronics technology and computer systems, multi-modality sensors have made substantial progress. They are now widely used in human-computer interaction and in smartphones, and research on activity recognition based on multi-modality sensors has attracted growing attention in academia. Activity recognition research explores the physical and psychological activities of the human body and uses science and technology to intervene in human activities. It has become an important focus in human-computer interaction, medical diagnosis, and smart homes. With the outstanding achievements of deep learning in natural language processing, object detection, and other fields, research on activity recognition has also developed rapidly. Traditional activity recognition methods extract features manually from the raw data, which is time-consuming. Moreover, sensor signals carry far less information than text or images, so maximizing the utility of the available data demands careful preprocessing and rich domain knowledge. A deep learning model, by contrast, can learn deep-level features automatically through the network instead of relying on manual extraction, improving both recognition accuracy and efficiency, and the attention mechanism can make fuller use of the sensor data. This thesis therefore introduces the attention mechanism and deep learning models into activity recognition with multi-modality sensors, focusing on applications in brain-computer interfaces and smartphone sensors. The main research results are as follows:

(1) For intent recognition from electroencephalography (EEG) signals in brain-computer interfaces, traditional methods are time-consuming and achieve low recognition rates. The thesis therefore analyzed and evaluated the EEG signals and proposed a dual-channel attention network model to recognize users' activity intentions. The model learns emotion features from the users' collected EEG signals, and its effectiveness was evaluated on the eegmmidb target dataset. The experimental results show that the model can provide users with effective emotion analysis and can also offer effective assistance to people with disabilities.

(2) For activity recognition based on smartphone sensors, existing methods likewise suffer from low recognition rates and difficulties with mobile deployment. The thesis therefore proposed a stacked recurrent self-attention network model, which learns deep-level internal features of smartphone sensor data and extracts more interactive information. Recognition accuracy on the two target datasets, WISDM and UCI HAR, reaches 98.63% and 98.37% respectively, and the core function of the self-attention mechanism was examined experimentally to verify the superiority of the proposed model. The model was also deployed to the Android mobile platform for real-time activity prediction, bringing convenience to people's daily lives.

(3) For hyperparameter tuning, the thesis employed an orthogonal array experiment method, which selects parameters faster than traditional methods and greatly reduces the tuning workload. This was the first time the orthogonal array method had been applied to parameter tuning in deep learning.
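The abstract describes the stacked recurrent self-attention model only at a high level. As a rough illustration of the core self-attention step over a window of sensor readings, here is a minimal NumPy sketch; the shapes, channel count, and random projection matrices are illustrative assumptions, not the thesis's actual architecture:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sensor window.

    x: (T, d) window of T time steps of d-channel sensor readings
    w_q, w_k, w_v: (d, d) projection matrices (learned during training)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # (T, T) pairwise interactions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over time steps
    return weights @ v                              # (T, d) attended features

rng = np.random.default_rng(0)
T, d = 128, 6  # e.g. 128 samples of 3-axis accelerometer + 3-axis gyroscope
x = rng.standard_normal((T, d))
w = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)  # (128, 6)
```

Because the attention weights for each time step span the whole window, every output feature can draw on every input sample, which is one way to read the abstract's claim that self-attention "extracts more interactive information" than a purely recurrent model.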
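The thesis does not detail which orthogonal array it used, but the idea can be sketched with the standard L9(3^4) Taguchi array: nine runs cover four three-level factors so that every pair of factor levels appears together equally often. The hyperparameter names and level values below are illustrative assumptions, not the thesis's settings:

```python
# Standard L9(3^4) orthogonal array (levels 0..2): 9 runs instead of
# the 3**4 = 81 runs a full grid search over four factors would need.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical hyperparameter levels (illustrative only).
factors = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size":    [32, 64, 128],
    "hidden_units":  [64, 128, 256],
    "dropout":       [0.1, 0.3, 0.5],
}

names = list(factors)
trials = [{n: factors[n][lvl] for n, lvl in zip(names, row)} for row in L9]

for t in trials:
    print(t)  # each dict is one training configuration to evaluate
```

Training the model once per row and comparing the mean score at each level of each factor gives a ranking of good levels from only nine runs, which is the source of the workload reduction the abstract claims.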
Keywords/Search Tags:Deep learning, Attention mechanism, Activity recognition, Electroencephalography(EEG), Smartphone sensor