
Some rigorous results on the neural coding problem

Posted on: 2004-02-08
Degree: Ph.D.
Type: Thesis
University: New York University
Candidate: Paninski, Liam Michael
Full Text: PDF
GTID: 2468390011973967
Subject: Biology
Abstract/Summary:
This thesis presents some new results on the neural coding problem: the problem of estimating the input-output relationship of the brain from finite physiological data. The thesis has three parts.

1. Estimation of information-theoretic quantities. We present some new results on the nonparametric estimation of entropy and mutual information. First, we analyze some of the most common estimators for these quantities, with two main negative implications: (1) information estimates obtained with these common techniques are likely contaminated by bias in a certain data regime, even if "bias-corrected" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods. We then introduce a novel estimator with much better properties. (A small numerical illustration of this bias appears after the abstract.)

2. Analysis of spike-triggered analysis techniques. We analyze the convergence properties of three such techniques. All of our results are obtained in the setting of a (possibly multidimensional) linear-nonlinear (LN) cascade model for stimulus-driven neural activity. We begin by giving exact rate-of-convergence results for the spike-triggered average and covariance methods. Because these first two methods converge only under extraneous conditions, we introduce an estimator for the LN model parameters that is designed to be consistent under completely general conditions. We provide an algorithm for computing this estimator, derive its rate of convergence, and demonstrate its applicability to real and simulated neural data. We close with brief analyses of three possible extensions of these results. (A sketch of the spike-triggered average follows below.)

3. Information-theoretic design of experiments. We discuss an idea for collecting data in an efficient manner. Our point of view is Bayesian and information-theoretic: on any given trial, we want to adaptively choose the input so that the mutual information between the (unknown) state of the system and the (stochastic) output is maximal, given any prior information (including data collected on previous trials). We prove a theorem that quantifies the effectiveness of this strategy and give a few illustrative examples comparing the performance of this adaptive technique to the more usual nonadaptive experimental design. (A toy adaptive loop follows below.)
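To make the bias claim of part 1 concrete, here is a minimal numerical sketch (not the thesis's own estimator): it compares the standard plug-in (maximum-likelihood) entropy estimate with the Miller-Madow bias correction on a uniform source whose alphabet size m exceeds the sample size N. The uniform source, alphabet size, and sample size are hypothetical choices made purely for illustration.

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (maximum-likelihood) entropy estimate, in bits."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def miller_madow(counts):
    """Miller-Madow bias-corrected entropy estimate, in bits."""
    m_hat = np.count_nonzero(counts)       # number of symbols actually observed
    n = counts.sum()
    # correction is (m_hat - 1) / (2 n) nats; divide by ln 2 for bits
    return plugin_entropy(counts) + (m_hat - 1) / (2 * n * np.log(2))

rng = np.random.default_rng(0)
m, n = 100, 50                             # alphabet size exceeds sample size
true_h = np.log2(m)                        # entropy of the uniform source
est_plug, est_mm = [], []
for _ in range(1000):
    counts = np.bincount(rng.integers(0, m, size=n), minlength=m)
    est_plug.append(plugin_entropy(counts))
    est_mm.append(miller_madow(counts))

print(f"true H       = {true_h:.2f} bits")
print(f"plug-in mean = {np.mean(est_plug):.2f} (bias {np.mean(est_plug) - true_h:+.2f})")
print(f"Miller-Madow = {np.mean(est_mm):.2f} (bias {np.mean(est_mm) - true_h:+.2f})")
```

Both estimates remain well below the true value when N is comparable to m, which is exactly the data regime in which the first part warns that "bias-corrected" estimates are still contaminated.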
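A minimal sketch of the spike-triggered average in the LN setting of part 2. The unit-norm filter, exponential nonlinearity, Gaussian white-noise stimulus, and Poisson spiking below are stand-ins chosen for illustration, not the models analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
d, t_len = 20, 100_000                     # filter length, number of time bins
k = np.sin(np.linspace(0, np.pi, d))       # hypothetical linear filter
k /= np.linalg.norm(k)

stim = rng.standard_normal((t_len, d))     # white Gaussian stimulus segments
drive = stim @ k                           # linear stage of the LN cascade
rate = np.exp(drive - 2.0)                 # hypothetical exponential nonlinearity
spikes = rng.poisson(rate)                 # Poisson spike counts per bin

# Spike-triggered average: spike-weighted mean stimulus, minus the raw mean
sta = (spikes @ stim) / spikes.sum() - stim.mean(axis=0)
sta /= np.linalg.norm(sta)
print("correlation of STA with true filter:", float(sta @ k))
```

With Gaussian stimuli like these, the STA recovers the filter direction; the thesis's point is that such conditions on the stimulus and nonlinearity are extraneous, and its own estimator is built to dispense with them.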
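Finally, a toy sketch of the adaptive infomax design of part 3 (not the thesis's construction): on each trial it picks the input that maximizes the mutual information between a gridded unknown parameter and a binary response under a hypothetical logistic response model, then updates the posterior by Bayes' rule. The model, grids, and trial count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

thetas = np.linspace(-3, 3, 121)           # grid over the unknown parameter
xs = np.linspace(-3, 3, 61)                # candidate inputs for each trial
posterior = np.full(thetas.size, 1 / thetas.size)   # uniform prior
theta_true = 0.7                           # hidden "state of the system"

# P(y = 1 | x, theta) under the hypothetical logistic response model
lik1 = sigmoid(xs[:, None] - thetas[None, :])

for trial in range(30):
    # expected info gain I(theta; y | x) = H(y|x) - E_theta[H(y|x,theta)]
    py1 = lik1 @ posterior
    h_y = -(py1 * np.log(py1) + (1 - py1) * np.log(1 - py1))
    h_y_theta = -(lik1 * np.log(lik1) + (1 - lik1) * np.log(1 - lik1)) @ posterior
    x = xs[np.argmax(h_y - h_y_theta)]     # infomax choice of input
    y = rng.random() < sigmoid(x - theta_true)   # observe the stochastic response
    like = sigmoid(x - thetas) if y else 1 - sigmoid(x - thetas)
    posterior *= like                      # Bayes update with the new datum
    posterior /= posterior.sum()

print(f"posterior mean {thetas @ posterior:.2f}, true value {theta_true}")
```

The loop concentrates trials where the posterior is most uncertain; replacing the argmax with inputs drawn uniformly at random reproduces, in miniature, the adaptive-versus-nonadaptive comparison the third part describes.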