Falls are the leading cause of injury and injury-related death among the elderly worldwide. According to data from the U.S. Centers for Disease Control and Prevention, more than 30% of adults over 65 in the United States fall at least once a year, and the associated annual medical costs exceed USD 30 billion. According to statistics, more than 40 million people over 60 years old fall each year in China. Falls not only harm the psychological well-being of the elderly but also increase medical costs and place a heavy burden on families and society. With the rapid development of information and communication technology and micro-electro-mechanical system (MEMS) technology, embedded devices integrating inertial sensors, cameras, and other sensors are becoming part of the daily life of the elderly. Sensing their activity data in real time, analyzing and monitoring the factors that may lead to falls, and thereby preventing fall injuries have become the trend and focus of fall detection technology for the elderly.

Existing fall detection methods are mainly based on single-modal data. For inertial-sensor-based fall detection, the activity and fall data collected in simulated environments differ considerably from real-world data, which makes such methods difficult to generalize and deploy. Vision-based fall detection is easily affected by lighting and occlusion and therefore struggles to meet the needs of fall detection in complex scenes. This thesis studies fall detection based on the fusion of visual and inertial sensor data, with the goal of improving detection accuracy and adaptability to complex environments. The main work and achievements of this thesis are as follows:

(1) On the basis of analyzing the characteristics of human daily activities, a visual model of human activity based on Star RGB is established, which strengthens activity features by highlighting changes in movement speed; a mechanical model of human activity based on inertial sensors is also established, providing a basis for extracting the dynamic characteristics of human falls.

(2) Combining the Star RGB representation with a residual network, a visual feature extraction network for human activity is constructed; a mechanical feature extraction network is then built by combining the discrete Fourier transform with a multi-layer perceptron; finally, the visual and dynamic features of human activity are fused through an attention mechanism.

(3) A human activity recognition framework based on the fusion of visual and dynamic features is designed. The L-Softmax loss function and a soft-voting mechanism are added to the multi-layer perceptron classifier to optimize classification, improving the accuracy and robustness of human activity detection and classification.

In addition, a prototype human fall detection system is developed on the basis of the above results, and experiments are carried out on the public UP-Fall dataset. The experimental results show that the system achieves an F1-Macro of 100% on the binary fall/non-fall task and 92% on 11 classes of fine-grained human activities. The experiments show that the system improves fall detection accuracy while maintaining good environmental adaptability.
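To make the fusion architecture summarized in (2) and (3) more concrete, the following PyTorch-style sketch shows one way a ResNet branch over Star RGB images and a DFT-plus-MLP branch over inertial windows could be combined through a simple gated attention and fed to a classifier. The module names, dimensions, and attention formulation are illustrative assumptions rather than the thesis implementation; the L-Softmax loss and soft-voting ensemble are omitted for brevity.

```python
# Hypothetical sketch of visual-inertial feature fusion; names and dimensions are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class InertialBranch(nn.Module):
    """DFT magnitude spectrum of accelerometer/gyroscope windows -> MLP features."""
    def __init__(self, window_len=128, channels=6, feat_dim=256):
        super().__init__()
        in_dim = channels * (window_len // 2 + 1)  # one-sided spectrum per channel
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim), nn.ReLU(),
        )

    def forward(self, x):                       # x: (batch, channels, window_len)
        spec = torch.fft.rfft(x, dim=-1).abs()  # discrete Fourier transform magnitudes
        return self.mlp(spec.flatten(1))


class FusionFallDetector(nn.Module):
    """ResNet features from Star RGB images fused with inertial features via attention."""
    def __init__(self, num_classes=11, feat_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        self.visual = backbone
        self.inertial = InertialBranch(feat_dim=feat_dim)
        # Gated attention: learn softmax weights over the two modalities.
        self.attn = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, star_rgb, imu):
        v = self.visual(star_rgb)                  # (batch, feat_dim) visual features
        m = self.inertial(imu)                     # (batch, feat_dim) inertial features
        w = self.attn(torch.cat([v, m], dim=-1))   # (batch, 2) modality weights
        fused = w[:, :1] * v + w[:, 1:] * m        # attention-weighted fusion
        return self.classifier(fused)
```

In such a design the attention weights let the classifier lean on the inertial branch when the visual signal is degraded (for example under poor lighting or occlusion), which is consistent with the environmental adaptability goal stated above.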