With population aging and the growing number of patients with disabilities, the pressure on the medical system keeps increasing. In this context, the widespread use of the Wheelchair Robot (WR) can provide better rehabilitation support to vulnerable groups and relieve the burden on health-care staff. However, during daily WR driving, elderly users are prone to collision accidents caused by misoperation in complex environments, imperfect WR obstacle-avoidance functions, and failures of the WR itself, and such accidents can inflict secondary injuries on these vulnerable groups. It is therefore of great significance to study intelligent obstacle-avoidance and safety-control strategies that ensure the safe use of the WR.

First, to avoid WR collision accidents caused by human misoperation, a human-machine fuzzy intelligent obstacle-avoidance method that takes the operation intention into account is studied. Multi-channel ultrasonic sensors are used to measure the distances to obstacles in the environment, and fuzzy inference rules are established between the obstacle-distance information and the safe driving direction of the WR, so that the WR can detour safely when it encounters obstacles while driving. On this basis, the concept of an operation weight is introduced, and fuzzy inference rules over the operation-weight parameter are established, so that during obstacle-avoidance driving the WR takes the driver's operation intention into account to a degree determined by the value of the operation weight. Experiments show an obstacle-avoidance success rate of 86.7% at an expected WR speed of 0.3 m/s and 80% at 0.4 m/s, which demonstrates the safety and effectiveness of the method.

Second, to meet the driver's personalized requirements on the WR driving state as well as the requirement for safe driving, a reinforcement-learning fuzzy obstacle-avoidance method based on human operation habits is proposed. The relationship between an individual's operation habits and his or her subjective satisfaction with the WR driving state is analyzed, a reward function based on individual operation habits is designed, and a human-robot-interaction reinforcement-learning model is established, so that the WR can adjust its driving state to satisfy the user. A hazard polynomial is then introduced to constrain the WR's self-learning process and to guarantee safety and operability during use. Extensive experiments confirm the effectiveness of the method: users no longer need to tune the WR's internal parameters themselves, large learning and trial-and-error costs are avoided, dangerous situations are prevented, and a learning rate of 0.1 is found to work best.

Finally, considering failures of the WR's own motors, a safety-control method based on a model reference adaptive system (MRAS) is proposed. Within the dynamic model of the WR, the controller parameters are adjusted to enhance the anti-interference capability of the system. Simulation results show that the adaptive safety-control method keeps the WR on its planned route, achieving the goal of safe control.
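To make the first method more concrete, the following is a minimal Python sketch of the fuzzy obstacle-avoidance idea and the operation-weight blending described above. The number of ultrasonic channels, the triangular membership function, the rule base, and the blending formula are illustrative assumptions made for this sketch only; they are not taken from the thesis.

```python
"""Minimal sketch of the human-machine fuzzy obstacle avoidance (assumptions:
three ultrasonic channels, triangular 'near' membership, weighted-rule
defuzzification, linear operation-weight blending; all values illustrative)."""


def near(d, full=0.5, zero=1.5):
    """Degree to which an obstacle at distance d (metres) is 'near'."""
    if d <= full:
        return 1.0
    if d >= zero:
        return 0.0
    return (zero - d) / (zero - full)


def fuzzy_steer(d_left, d_front, d_right):
    """Infer a safe steering command in [-1, 1] (negative = turn left).

    Illustrative rule base:
      obstacle near on the right -> steer left
      obstacle near on the left  -> steer right
      obstacle near in front     -> steer toward the freer side
    """
    n_l, n_f, n_r = near(d_left), near(d_front), near(d_right)
    rules = [
        (n_r, -1.0),                                # right blocked -> go left
        (n_l, +1.0),                                # left blocked  -> go right
        (n_f, +1.0 if d_left < d_right else -1.0),  # front blocked -> freer side
    ]
    total = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / total if total > 0 else 0.0


def blend_with_driver(steer_auto, steer_driver, operation_weight):
    """Operation weight in [0, 1]: how strongly the driver's intention counts."""
    return operation_weight * steer_driver + (1.0 - operation_weight) * steer_auto


if __name__ == "__main__":
    auto = fuzzy_steer(d_left=2.0, d_front=0.6, d_right=0.9)
    print("autonomous steer:", round(auto, 2))
    print("blended steer  :", round(blend_with_driver(auto, steer_driver=0.8,
                                                      operation_weight=0.4), 2))
```

With an operation weight close to 1 the driver's command dominates, and with a weight close to 0 the fuzzy detour behaviour dominates, which mirrors the idea of considering the operation intention to varying degrees.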
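The second method can be sketched in a similarly reduced form. The reward below combines a satisfaction term, which favours driving states close to the driver's habitual command, with an illustrative hazard polynomial that penalizes higher speed near obstacles; reducing the driving state to a single cruise speed, the quadratic hazard form, and the use of tabular Q-learning are assumptions made only for this sketch. Only the learning rate of 0.1 is taken from the reported experiments.

```python
"""Minimal sketch of reinforcement learning driven by operation habits
(assumptions: the driving state is a single cruise speed, the habit is the
driver's habitual speed, the hazard polynomial is quadratic in the gap below
a safe distance, and the agent is tabular Q-learning; only the learning rate
of 0.1 comes from the reported experiments)."""
import random

SPEEDS = [0.1, 0.2, 0.3, 0.4, 0.5]   # candidate cruise speeds (m/s)
ACTIONS = [-1, 0, +1]                # lower / keep / raise the speed index
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2    # learning rate 0.1 as reported best


def hazard(distance, speed, safe_dist=1.0):
    """Illustrative hazard polynomial: grows with speed and with the square
    of how far the obstacle is inside the safe distance."""
    gap = max(0.0, safe_dist - distance)
    return 4.0 * speed * gap ** 2


def reward(speed, habitual_speed, distance):
    """Satisfaction (closeness to the habitual speed) minus the hazard penalty."""
    return -(speed - habitual_speed) ** 2 - hazard(distance, speed)


def train(habitual_speed=0.3, d_obstacle=0.8, episodes=500, steps=20):
    """Tabular Q-learning over the speed index."""
    q = {(s, a): 0.0 for s in range(len(SPEEDS)) for a in ACTIONS}
    for _ in range(episodes):
        s = random.randrange(len(SPEEDS))
        for _ in range(steps):
            a = (random.choice(ACTIONS) if random.random() < EPS
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            s2 = min(max(s + a, 0), len(SPEEDS) - 1)
            r = reward(SPEEDS[s2], habitual_speed, d_obstacle)
            q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, x)] for x in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q


if __name__ == "__main__":
    for d in (1.5, 0.8):                 # obstacle far away vs. nearby
        q = train(d_obstacle=d)
        s = len(SPEEDS) // 2             # greedy rollout starting at 0.3 m/s
        for _ in range(10):
            a = max(ACTIONS, key=lambda x: q[(s, x)])
            s = min(max(s + a, 0), len(SPEEDS) - 1)
        print(f"obstacle at {d} m -> policy settles at {SPEEDS[s]} m/s")
```

With these illustrative numbers the learned policy typically stays at the habitual speed when obstacles are far away and settles one step below it when an obstacle is nearby, showing how the hazard term restrains the self-learning process.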
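For the MRAS-based safety control, the sketch below assumes the motor speed loop reduces to a first-order plant with an uncertain gain, uses a reference model with the same pole, and adapts a single feedforward gain with the classical MIT rule; a sudden halving of the plant gain stands in for a partial motor fault. This is one standard way to build an MRAS and is not necessarily the exact controller structure used in the thesis.

```python
"""Minimal sketch of model reference adaptive control (MRAS) for the WR motor
(assumptions: first-order motor speed model dy/dt = -a*y + k*u with an
uncertain gain k, a reference model sharing the pole -a, and a single
feedforward gain adapted by the MIT rule; a halved gain simulates a partial
motor fault; all numbers illustrative)."""


def simulate(t_end=40.0, dt=0.01, gamma=0.5):
    a, k0 = 2.0, 2.0        # reference model: dym/dt = -a*ym + k0*r
    k = 2.0                 # true (unknown) plant gain
    y = ym = 0.0            # plant output and reference-model output
    theta = 0.0             # adjustable controller gain, u = theta * r
    steps = int(t_end / dt)
    log = []
    for i in range(steps):
        t = i * dt
        if i == steps // 2:
            k *= 0.5        # simulated motor fault: gain drops by half
        r = 1.0 if int(t / 5) % 2 == 0 else -1.0   # square-wave speed reference
        u = theta * r
        e = y - ym
        theta += dt * (-gamma * e * ym)            # MIT adaptation rule
        y += dt * (-a * y + k * u)                 # Euler step of the plant
        ym += dt * (-a * ym + k0 * r)              # Euler step of the model
        log.append((t, r, y, ym, theta))
    return log


if __name__ == "__main__":
    for t, r, y, ym, theta in simulate()[::400]:   # print every 4 s
        print(f"t={t:5.1f}s  ref={r:+.1f}  y={y:+.3f}  ym={ym:+.3f}  theta={theta:.3f}")
```

Running the script shows the adjustable gain settling near 1 before the simulated fault and drifting toward 2 afterwards, so the plant output keeps following the reference model, which is the sense in which the WR keeps following its planned driving profile.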