
Kernel based machine learning framework for neural decoding

Posted on: 2013-05-12
Degree: Ph.D.
Type: Dissertation
University: University of Florida
Candidate: Li, Lin
Full Text: PDF
GTID: 1458390008467099
Subject: Engineering
Abstract/Summary:
Brain-machine interfaces (BMIs) have attracted intensive attention as a promising technology to aid disabled humans. However, the neural system is a highly distributed, dynamic, and complex system containing millions of functionally interconnected neurons, and how best to interface it with human-engineered technology is a critical and challenging problem. These issues motivate our research in neural decoding, a significant step toward realizing useful BMIs. In this dissertation, we design a kernel-based machine learning framework to address a set of challenges in characterizing neural activity, decoding information about sensory or behavioral states, and controlling neural spatiotemporal patterns. Our contributions can be summarized as follows.

First, we propose a nonlinear adaptive spike train decoder based on the kernel least mean square (KLMS) algorithm applied directly in the space of spike trains. Instead of using a binned representation of spike trains, we transform the vector of spike times into a function in a reproducing kernel Hilbert space (RKHS), where the inner product of two sets of spike times is defined by the Schoenberg kernel. This kernel encapsulates the statistical description of the point process that generates the spike trains and bypasses the dimensionality-resolution trade-off of other spike representations. Simulation results indicate that the decoder has advantages in both computation time and accuracy when the application requires fine time resolution.

Secondly, the precise control of the firing pattern of a population of neurons via applied electrical stimulation is a challenge due to the sparseness of spiking responses and the plasticity of the neural system. We propose a multiple-input multiple-output (MIMO) adaptive inverse control scheme that operates on spike trains in an RKHS. The scheme uses an inverse controller to approximate the inverse of the neural circuit and exploits the precise timing of neural events through the Schoenberg kernel based decoding methodology introduced above. During operation, the adaptation of the controller minimizes a difference, defined in the spike train RKHS, between the system output and the target response, keeping the inverse controller close to the inverse of the current neural circuit and thereby enabling adaptation to neural perturbations. Results on a realistic synthetic neural circuit show that the Schoenberg kernel based inverse controller can successfully drive the elicited responses close to the original target responses even when significant perturbations occur.

Thirdly, spike train variability causes fluctuations in the neural decoder. Local field potentials (LFPs) are an alternative manifestation of neural activity, with a more conventional continuous-amplitude representation and longer spatiotemporal scales; they can be recorded simultaneously from the same electrode array and contain complementary information about stimuli or behavior. We propose a tensor-product-kernel based decoder for multiscale neural activity, which models the samples from each source individually and maps them onto the same RKHS, defined by the tensor product of the individual kernels for each source. A single linear model is then adapted, as before, to identify the nonlinear mapping from the multiscale neural responses to the stimuli. This enables decoding more complete and accurate information from heterogeneous multiscale neural activity, with only an implicit assumption of independence in their relationship. Decoding results from a rat sensory stimulation experiment show that the multiscale decoder outperforms decoders based on either type of neural activity alone. In addition, the multiscale decoding methodology is applied in the adaptive inverse control scheme: owing to the accuracy and robustness of the decoder, an open-loop control diagram is used to drive the spatiotemporal pattern of neural responses in the rat somatosensory cortex with microstimulation in order to emulate tactile sensation, with promising results.
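To make the tensor-product construction concrete, the following minimal Python sketch (not the dissertation's implementation) shows one way such a decoder could be assembled: a Schoenberg-type kernel, built here from a cross-intensity-style comparison of spike times, compares spike trains; a Gaussian kernel compares LFP feature vectors; and their product defines the joint kernel used by a simple KLMS-style update. The kernel forms, parameter values, and names (spike_kernel, lfp_kernel, MultiscaleKLMS) are illustrative assumptions.

    import numpy as np

    def spike_kernel(s1, s2, tau=0.01, sigma=1.0):
        """Schoenberg-type kernel on spike trains (NumPy arrays of spike times)."""
        def mci(a, b):
            # cross-intensity-style inner product between two spike trains
            if len(a) == 0 or len(b) == 0:
                return 0.0
            return np.exp(-np.abs(a[:, None] - b[None, :]) / tau).sum()
        d2 = mci(s1, s1) - 2.0 * mci(s1, s2) + mci(s2, s2)  # induced squared distance
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def lfp_kernel(x1, x2, sigma=1.0):
        """Gaussian kernel on LFP feature vectors (e.g., a windowed segment)."""
        return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))

    def multiscale_kernel(sample1, sample2):
        """Tensor-product kernel: each sample is a (spike_train, lfp_vector) pair,
        and the joint kernel is the product of the per-source kernels."""
        (s1, x1), (s2, x2) = sample1, sample2
        return spike_kernel(s1, s2) * lfp_kernel(x1, x2)

    class MultiscaleKLMS:
        """Kernel least-mean-square decoder adapted in the tensor-product RKHS."""
        def __init__(self, eta=0.1):
            self.eta, self.centers, self.coeffs = eta, [], []

        def predict(self, sample):
            return sum(a * multiscale_kernel(c, sample)
                       for a, c in zip(self.coeffs, self.centers))

        def update(self, sample, target):
            err = target - self.predict(sample)   # instantaneous decoding error
            self.centers.append(sample)           # grow the expansion by one center
            self.coeffs.append(self.eta * err)    # LMS-scaled coefficient
            return err

Used online, each incoming (spike train, LFP window) pair would be passed to update together with the concurrent stimulus value, and predict would give the decoded estimate; restricting the decoder to spike trains alone amounts to dropping lfp_kernel from the product.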
Finally, we quantify and comparatively validate the temporal functional connectivity between neurons by measuring the statistical dependence between their firing patterns. Temporal functional connectivity provides a quantifiable representation of the transient joint information in multi-channel neural activity, which is important for completely characterizing the neural state but is normally overlooked in temporal decoding because of its complexity. The functional connectivity pattern is represented by a graph/matrix, which is not a conventional input for machine learning algorithms, so we propose two approaches to decode stimulation information from the neural assembly pattern. The first uses graph theory to extract a topological feature vector as the model input, which makes conventional machine learning approaches applicable but discards the structural details. The second is a matrix kernel that maps the connectivity matrix into an RKHS and enables kernel-based machine learning approaches to operate directly on the connectivity matrix, bypassing the information reduction induced by feature extraction.
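Both routes can be sketched briefly in Python; again this is illustrative rather than the dissertation's code, and the threshold, the particular graph features, and the Gaussian-of-Frobenius-distance form of the matrix kernel are assumptions chosen for simplicity. The first function compresses a connectivity matrix into a topological feature vector for a conventional learner; the second defines a kernel that lets kernel machines operate on the matrices directly.

    import numpy as np

    def topology_features(W, thresh=0.5):
        """Approach 1: reduce a functional-connectivity matrix to a graph-theoretic
        feature vector (here node degrees plus overall graph density)."""
        A = (np.abs(W) > thresh).astype(float)   # threshold to a binary adjacency matrix
        np.fill_diagonal(A, 0.0)                 # ignore self-connections
        degrees = A.sum(axis=1)                  # per-neuron degree
        density = A.sum() / (A.shape[0] * (A.shape[0] - 1))
        return np.concatenate([degrees, [density]])

    def matrix_kernel(W1, W2, sigma=1.0):
        """Approach 2: a kernel acting directly on connectivity matrices, here a
        Gaussian of the Frobenius distance, so no feature extraction is needed."""
        d2 = np.sum((W1 - W2) ** 2)              # squared Frobenius distance
        return np.exp(-d2 / (2.0 * sigma ** 2))

The trade-off mirrors the one described above: the feature-vector route plugs into any conventional classifier but discards structural detail, whereas the matrix kernel keeps the full connectivity pattern at the cost of committing to kernel-based learners.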
Keywords/Search Tags: Neural, Machine, Kernel, Decoding, Decoder, Spike, RKHS, Results