
A Study On SSVEP-BCI Portable Voice-sounding Device For Deaf-mute

Posted on: 2021-05-23  Degree: Master  Type: Thesis
Country: China  Candidate: A D Zhao  Full Text: PDF
GTID: 2404330629950212  Subject: Instrumentation Engineering (Master of Engineering)
Abstract/Summary:
The brain-computer interface (BCI) is a system that converts the electroencephalogram (EEG) signals produced by the brain into digital commands through computer analysis and processing, providing a communication channel between the brain and the outside world that bypasses human muscles and peripheral nerves. With the rapid development of intelligent devices, people now communicate in many ways, but spoken language remains the most important. Hearing people communicate through conversation, while deaf-mute people communicate through sign language; communication between deaf-mute people and hearing people unfamiliar with sign language is therefore difficult, and a tool for such communication is urgently needed. A portable voice-sounding device can let a deaf-mute person produce speech like a hearing person, thus enabling communication between the two groups.

This thesis designs a portable voice-sounding device based on the steady-state visual evoked potential brain-computer interface (SSVEP-BCI), comprising a head-mounted visual stimulator, a portable EEG acquisition device, a portable signal-processing unit, and a mobile-phone voice APP. The design improves the portability of the device and allows the deaf-mute user to vocalize while in motion.

Using mixed reality (MR) technology, a head-mounted stimulus interface is built that overlays virtual stimulus control on the real environment, so that the user can observe the surroundings while vocalizing. MR is a further development of virtual reality that introduces virtual scene control into the real environment. Its imaging modes fall into two types: fixed-head (the stimulus moves with the head) and fixed-ground (the stimulus is anchored to the environment). To verify the effect of the two imaging modes on the voice-sounding device, a fixed-ground experimental paradigm and a fixed-head experimental paradigm are designed and compared in terms of classification accuracy, information transfer rate, and other metrics.

Because head movement cannot be avoided while the head-mounted visual stimulator is worn, head-motion artifacts contaminate the EEG. To address this, empirical mode decomposition combined with independent component analysis (EMD-ICA) is used to remove the head-motion artifacts; experimental results show that this method effectively removes the head-motion artifacts mixed into the SSVEP signal.

For feature classification of the EEG signals after artifact removal, canonical correlation analysis (CCA) is adopted. The SSVEP signals of the fixed-ground and fixed-head experimental paradigms are analyzed in the static and moving states, respectively. In the seated (static) state, there is no significant difference in SSVEP signal features between the two paradigms; in the moving state, the SSVEP features of the fixed-ground paradigm are stronger than those of the fixed-head paradigm. Based on these results, the fixed-ground paradigm was selected as the experimental paradigm of the voice-sounding device, further improving the accuracy of vocalization.
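In an EMD-ICA pipeline, EMD first decomposes each EEG channel into intrinsic mode functions (typically with a library such as PyEMD), and ICA then separates artifact components so they can be rejected before reconstruction. As an illustration of the ICA stage only, below is a minimal plain-NumPy FastICA sketch; the function names and the two-source synthetic demo are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def whiten(X):
    """Whiten signals X (components x samples) to unit covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    return W @ X

def fastica(X, n_iter=200, seed=0):
    """Deflation FastICA with tanh nonlinearity (illustrative sketch)."""
    Z = whiten(X)
    n = Z.shape[0]
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ Z
            g, gp = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
            # fixed-point update: E[Z g(w.Z)] - E[g'(w.Z)] w
            w_new = (Z * g).mean(axis=1) - gp.mean() * w
            # deflation: decorrelate from previously found components
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-8
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z  # recovered components (up to sign/scale/order)
```

In an artifact-removal setting, the components flagged as motion artifacts would be zeroed before mixing the remaining components back into channel space.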
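Standard CCA-based SSVEP classification correlates the multichannel EEG with sine-cosine reference templates at each candidate stimulus frequency and selects the frequency with the largest canonical correlation. A minimal sketch follows; the sampling rate, candidate frequencies, and harmonic count are illustrative assumptions, not the settings used in the thesis.

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between X (samples x channels)
    and reference Y (samples x features), via the QR/SVD route."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def make_reference(freq, t, n_harmonics=2):
    """Sine/cosine reference matrix for one stimulus frequency."""
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def classify_ssvep(X, freqs, fs):
    """Return the stimulus frequency whose reference correlates most with X."""
    t = np.arange(X.shape[0]) / fs
    rhos = [cca_corr(X, make_reference(f, t)) for f in freqs]
    return freqs[int(np.argmax(rhos))], rhos
```

In practice the candidate frequencies would be the flicker frequencies of the head-mounted stimulator, and `fs` the sampling rate of the portable acquisition device.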
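The information transfer rate used to compare the two paradigms is conventionally computed with Wolpaw's formula from the number of targets N, the classification accuracy P, and the time per selection. A sketch, with the numbers in the usage comment being illustrative rather than results from the thesis:

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw ITR: bits per selection, scaled to bits per minute.
    Illustrative helper; parameter values below are not thesis results."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    elif p <= 0.0:
        bits = 0.0
    return bits * 60.0 / trial_seconds

# e.g. 4 targets, perfect accuracy, 4 s per selection:
# itr_bits_per_min(4, 1.0, 4.0)  # -> 30.0 bits/min
```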
Keywords/Search Tags:Brain-computer interface, Steady-state visual evoked potential, Mixed reality, Assisted vocalization