
Exploiting Information For Continuous Prediction In EEG-Based Brain-Computer Interface

Posted on: 2008-01-25  Degree: Doctor  Type: Dissertation
Country: China  Candidate: X Y Zhu  Full Text: PDF
GTID: 1118360212498586  Subject: Circuits and Systems
Abstract/Summary:
Brain-computer interface (BCI) provides a new communication and control channel that does not depend on the brain's normal output channels of peripheral nerves and muscles. It offers a radically new communication option to people with neuromuscular impairments or other severe motor disabilities that prevent them from using conventional augmentative technologies.

Developing effective learning algorithms for continuous prediction from the electroencephalogram (EEG) signal is a challenging research issue in BCI. For continuous prediction, the trials in the training dataset are first divided into segments. However, the actual intention (label) at each time interval (segment) is unknown, which makes classifier training difficult. A further key issue is how to accumulate the predictions at individual time intervals across time so that the final label of the whole trial is recognized as early and as accurately as possible. We refer to these two problems as the "unlabeled problem" and the "accumulative problem". The main contributions of this thesis are novel Bayesian probabilistic models that address these two machine-learning issues and thereby improve the performance of BCI systems.

For the unlabeled problem, we design a novel probabilistic model under the Bayesian framework that treats the uncertain segment labels as hidden variables in a lower bound on the log posterior. The model parameters are estimated by maximizing this lower bound with an Expectation-Maximization (EM)-like algorithm under the MAP criterion. By handling the uncertainty of the segment labels properly, the proposed method makes full use of the incomplete data and improves classification accuracy.

For the accumulative problem, we present two accumulative classification methods. In the first, a GMM-based accumulative classifier is proposed.
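The idea of treating unknown segment labels as hidden variables can be illustrated with a minimal EM sketch. This is not the thesis's actual model: the MAP prior, the EEG feature extraction, and the exact bound are omitted, and the simple shared-variance Gaussian class models and the function name are illustrative assumptions.

```python
import numpy as np

def em_segment_labels(segments, n_iter=20):
    """EM sketch: treat unknown per-segment labels as hidden variables.

    segments : (n_segments, n_features) array of EEG feature vectors.
    Returns P(label = 1 | segment) for each segment and the fitted class means.
    """
    n, d = segments.shape
    # Initialise the two class means at the extremes of the first feature
    # (an illustrative choice, not the thesis's initialisation).
    mu = segments[[segments[:, 0].argmin(), segments[:, 0].argmax()]]
    var = segments.var() + 1e-6          # shared spherical variance
    prior = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each segment.
        sq = ((segments[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (n, 2)
        log_p = -0.5 * sq / var + np.log(prior)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class means and priors from the soft counts.
        nk = resp.sum(axis=0) + 1e-9
        mu = (resp.T @ segments) / nk[:, None]
        prior = nk / n
    return resp[:, 1], mu
```

Each E-step fills in a soft guess for the missing segment labels; each M-step re-fits the class models from those guesses, which is the sense in which the incomplete data are still fully used.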
The discriminative power (weight) of each time interval is estimated from the fitted GMM models and from the separability of the GMM classifier's outputs at that interval, respectively. In the second method, we propose an accumulative classification method from the classifier-combination (stacking) point of view, combining a Bayesian logistic model with the Fisher criterion.

Furthermore, to address the two key issues jointly, we derive an accumulative probabilistic model under the Bayesian framework. Its parameters are estimated by introducing two auxiliary distributions into a lower bound on the log posterior and maximizing this bound with the variational Bayesian method. The proposed method unifies the estimation of the discriminative power and of the classifier parameters in a single process, so that the two parts cooperate with each other and thereby improve system performance.

All the above methods are evaluated on three datasets from the BCI Competitions 2003 and 2005. The experimental results show that the averaged accuracies of our methods are among the best, and that the proposed methods achieve better performance while consuming less time.
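The accumulative idea, weighting each time interval by its discriminative power and deciding as soon as the accumulated evidence is strong enough, can be sketched as follows. This is a simplified illustration, not the thesis's GMM or variational model: the per-segment scores, the weights, and the threshold rule are assumptions for the sketch.

```python
def accumulate_prediction(seg_scores, weights, threshold=3.0):
    """Accumulate weighted per-segment evidence across time.

    seg_scores : per-segment log-odds in favour of class 1 (illustrative).
    weights    : discriminative power of each time interval.
    Returns (predicted label, index of the segment where the decision fell).
    """
    total = 0.0
    for t, (score, w) in enumerate(zip(seg_scores, weights)):
        total += w * score
        if abs(total) >= threshold:      # stop early once confident enough
            return int(total > 0), t
    # No early decision: fall back to the sign of the full accumulation.
    return int(total > 0), len(seg_scores) - 1
```

Intervals with higher discriminative power push the running total faster, so a well-weighted trial crosses the confidence threshold, and is labeled, earlier; this is the "as early and as accurately as possible" trade-off described above.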
Keywords/Search Tags: Brain-Computer