In recent years, the brain-computer interface (BCI) has developed into a distinctive form of human-computer interaction, offering new ways of helping people with disabilities regain basic self-care skills. Brain control based on steady-state visual evoked potentials (SSVEP) has attracted wide attention from researchers at home and abroad because of its high information transfer rate and short training time, and it has been applied in clinical research on rehabilitation robots for people with disabilities. However, existing visually evoked brain control methods suffer from a weak connection between the stimulation paradigm and the environment, a lack of adaptivity in the decoding algorithm, and a single, inflexible control method. To address these shortcomings, this paper takes a robot as the control object and aims to realise brain-controlled robotic grasping. The research is carried out in three aspects: improving the visual evoked paradigm, enhancing the accuracy of the decoding algorithm, and improving the control strategy.

To address the inability of existing visual evoked paradigms to adapt to dynamic unstructured environments, this paper proposes a hybrid visual evoked paradigm modelled on the grasping targets, which improves the subjects' comprehension of the targets. The paradigm combines targets recognised by Yolov5 in the dynamic unstructured environment with a radial tessellation grid, establishing a correlation between the stimulus targets and the dynamic environment; this broadens the applicability of the visually evoked brain control system while strengthening the subjects' ability to focus attention. The 9-target hybrid visual evoked paradigm was tested with 15 subjects and decoded with the filter bank canonical correlation analysis (FBCCA) algorithm, achieving an average recognition accuracy of 90.06 ± 6.37% for a 3 s time window and a maximum accuracy of 97.22%. The experimental results show that the proposed hybrid visual evoked paradigm adequately evokes the corresponding EEG signals in the occipital region of the brain with a strong signal-to-noise ratio.
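The FBCCA decoding reported above follows the standard filter bank canonical correlation analysis procedure. The following is a minimal sketch of that procedure for reference, not the thesis's actual implementation: the 250 Hz sampling rate, five Chebyshev sub-bands, five harmonics, and the sub-band weights w_k = k^(-1.25) + 0.25 are common choices assumed here, and the stimulation frequencies and channel selection are placeholders.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250  # sampling rate in Hz (assumed, not specified in the abstract)

def reference_signals(freq, n_samples, n_harmonics=5, fs=FS):
    """Sine/cosine reference set for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)                 # (n_samples, 2 * n_harmonics)

def filter_bank(eeg, n_bands=5, fs=FS):
    """Chebyshev type-I band-pass sub-bands; band k spans roughly [8k, 90] Hz."""
    sub_bands = []
    for k in range(1, n_bands + 1):
        b, a = cheby1(4, 0.5, [8 * k / (fs / 2), 90 / (fs / 2)], btype='bandpass')
        sub_bands.append(filtfilt(b, a, eeg, axis=0))
    return sub_bands                              # list of (n_samples, n_channels)

def fbcca_score(eeg, freq, n_bands=5):
    """Weighted sum of squared CCA correlations over the filter bank."""
    y = reference_signals(freq, eeg.shape[0])
    score = 0.0
    for k, xk in enumerate(filter_bank(eeg, n_bands), start=1):
        u, v = CCA(n_components=1).fit_transform(xk, y)
        rho = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        score += (k ** -1.25 + 0.25) * rho ** 2   # sub-band weight w_k
    return score

def classify(eeg, stim_freqs):
    """Assign the epoch to the stimulation frequency with the highest score."""
    return stim_freqs[int(np.argmax([fbcca_score(eeg, f) for f in stim_freqs]))]
```

In use, classify would be applied to each occipital-channel epoch with the nine stimulation frequencies of the hybrid paradigm as candidates.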
To address the lack of adaptivity in traditional SSVEP decoding algorithms, a decoding method combining multivariate variational mode decomposition (MVMD) and a convolutional neural network (CNN) is proposed on the basis of the response characteristics of steady-state hybrid visual evoked potentials (SSHVEP). The method uses the adaptive decomposition obtained from MVMD as the input to the convolutional neural network for automatic feature extraction and pattern discrimination. Decoding the 9-target SSHVEP signals acquired from the 15 subjects achieved an average accuracy of 94.61 ± 4.63%, a 5% improvement over the conventional algorithm, and an average kappa coefficient of 0.94 ± 0.05. The experimental results show that the algorithm is robust while effectively improving the decoding accuracy of SSHVEP.

To address the single control method and inefficient execution in brain-computer fusion, a shared control strategy for robotic grasping is designed. The strategy combines the robot's autonomous grasping control with the subject's asynchronous brain control, with autonomous grasping as the primary mode: the motion of each joint is obtained from an inverse kinematics solution to control the robot, and the process is verified in the Gazebo simulation environment under Ubuntu. The asynchronous brain control method supplements this process to achieve accurate grasping control of the robot. Brain states are analysed with a brain network in which the tightness of node connections is assessed using clustering coefficients (sketched below), and a BP neural network then distinguishes the idle and task states, with an average accuracy of 87.55 ± 8.04% across the nine subjects.

To further verify the effectiveness of the proposed methods, an experimental platform for a brain-controlled robotic grasping service system based on the hybrid visual evoked paradigm, the decoding algorithm, and the control strategy was built, covering the overall structural design, hardware platform construction, and software system development. Experimental scenarios were designed and arranged to verify the feasibility of the system. The results show that the subjects could select and grasp the target objects through the designed brain-controlled robotic grasping service system: the grasping success rate reached 100%, and the recognition rate of control commands reached 89.36 ± 10.12%. The experimental results demonstrate the effectiveness of the brain-computer interaction method based on the hybrid visual evoked paradigm investigated in this paper.
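As an illustration of the brain-state detection step in the shared control strategy (assessing the tightness of node connections with clustering coefficients and then separating idle from task states), the following is a minimal sketch. It assumes a channel-by-channel connectivity matrix as input; the connectivity measure, the 0.3 threshold, the clustering-coefficient feature vector, and the network size are illustrative assumptions, and the BP neural network is approximated here by scikit-learn's backpropagation-trained MLPClassifier rather than the thesis's actual model.

```python
import numpy as np
import networkx as nx
from sklearn.neural_network import MLPClassifier

def clustering_features(connectivity, threshold=0.3):
    """Build a weighted brain network from a channel-by-channel connectivity
    matrix and return the per-node clustering coefficients as features."""
    adj = np.where(np.abs(connectivity) >= threshold, np.abs(connectivity), 0.0)
    np.fill_diagonal(adj, 0.0)                      # no self-connections
    graph = nx.from_numpy_array(adj)
    coeffs = nx.clustering(graph, weight='weight')  # node -> clustering coefficient
    return np.array([coeffs[n] for n in sorted(coeffs)])

def train_state_classifier(conn_matrices, labels):
    """labels: 0 = idle, 1 = task, one per connectivity matrix (epoch)."""
    features = np.stack([clustering_features(c) for c in conn_matrices])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(features, labels)
    return clf
```

In the asynchronous control mode, a window classified as the task state would trigger command decoding, while windows classified as idle would be ignored.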