Structure Learning in Locally Constant Gaussian Graphical Models

Posted on: 2015-05-07
Degree: Ph.D.
Type: Thesis
University: University of California, Davis
Candidate: Ganguly, Apratim
Full Text: PDF
GTID: 2478390020950134
Subject: Statistics
Abstract/Summary:
Zero entries in the inverse covariance matrix of a multivariate Gaussian random vector are in one-to-one correspondence with conditional independence between the corresponding pairs of components. A challenging aspect of sparse structure learning is the well-known "small n, large p" scenario. Several algorithms have been proposed to solve this problem; among the most popular are neighborhood selection using the lasso (Meinshausen and Bühlmann), a block-coordinate descent algorithm for estimating the covariance matrix (Banerjee et al.), and the graphical lasso (Friedman et al.).

In the first part of this thesis, an alternative methodology is proposed for Gaussian graphical models on manifolds, in which spatial information is judiciously incorporated into the estimation procedure. This line of work was initiated by Honorio et al. (2009), who proposed an extension of the coordinate descent approach, called the "coordinate direction descent" approach, that incorporates the local constancy property of spatial neighbors. However, Honorio et al. provide only an intuitive formalization and no theoretical investigation. Here I propose an algorithm that handles local geometry in Gaussian graphical models. It extends Meinshausen and Bühlmann's idea of successive regressions by using a different penalty: neighborhood information enters the penalty term, and the resulting method is called the neighborhood-fused lasso. I show by simulation, and prove theoretically, the asymptotic model selection consistency of the proposed method, and I establish convergence to the ground truth at rates faster than the standard ones when the local constancy assumption holds. This modification has numerous practical applications, e.g., in the analysis of MRI data and of 2-dimensional spatial manifold data, in order to study spatial aspects of the human brain or of moving objects.

In the second part of the thesis, I discuss smoothing techniques on Riemannian manifolds using local information. Estimation of smoothed diffusion tensors from diffusion-weighted magnetic resonance images (DW-MRI or DWI) of the human brain is usually a two-step procedure: the first step is a regression (linear or nonlinear) and the second step is a smoothing (isotropic or anisotropic). I extend smoothing ideas from Euclidean space to non-Euclidean space by running a conjugate gradient algorithm on the manifold of positive definite matrices. This method shows empirical evidence of better performance than the two-step smoothing procedure. This is collaborative work with Debashis Paul, Jie Peng and Owen Carmichael.
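As background for the baseline methods cited above, here is a minimal sketch of sparse precision-matrix estimation with the graphical lasso, using scikit-learn's GraphicalLasso. The data and the regularization level are arbitrary illustrative assumptions; this is a stand-in for the baselines, not the estimator studied in the thesis.

```python
# Minimal sketch: sparse precision estimation via the graphical lasso,
# one of the baseline methods named in the abstract. Data and alpha are
# arbitrary assumptions for illustration.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n, p = 50, 10                      # a small instance; real problems have p >> n
X = rng.standard_normal((n, p))   # placeholder data

model = GraphicalLasso(alpha=0.1)  # alpha controls sparsity of the precision matrix
model.fit(X)

# Zero entries of the estimated precision matrix correspond to estimated
# conditional independencies between the corresponding pairs of variables.
precision = model.precision_
print(np.round(precision, 2))
```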
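The abstract does not state the exact form of the neighborhood-fused penalty. A plausible reconstruction, combining Meinshausen and Bühlmann's nodewise regression with a fusion term over a set $\mathcal{N}$ of spatially neighboring predictor pairs, would read as follows; the fusion structure is my assumption, not a formula quoted from the thesis:

```latex
\hat{\beta}^{(i)} = \arg\min_{\beta \in \mathbb{R}^{p-1}}
  \frac{1}{2n}\, \bigl\| X_i - X_{\setminus i}\,\beta \bigr\|_2^2
  \;+\; \lambda_1 \|\beta\|_1
  \;+\; \lambda_2 \sum_{(j,k) \in \mathcal{N}} \bigl| \beta_j - \beta_k \bigr|
```

Here $\lambda_1$ enforces sparsity exactly as in the standard lasso, while $\lambda_2$ pulls the coefficients of spatially neighboring variables toward equality, encoding the local constancy property.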
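To make that concrete, the following is a minimal sketch of a single nodewise neighborhood-fused regression solved with CVXPY. The chain neighbor structure, the penalty weights, and the data are all illustrative assumptions; this is a sketch of the penalized-regression idea, not the thesis implementation.

```python
# Sketch: one nodewise regression with an assumed neighborhood-fused penalty.
# Solves: min (1/2n)||X_i - Z b||^2 + lam1*||b||_1 + lam2*sum |b_j - b_k|
# over assumed spatial neighbor pairs (j, k). Requires numpy and cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.standard_normal((n, p))    # placeholder data

i = 0                              # target node for this nodewise regression
y = X[:, i]
Z = np.delete(X, i, axis=1)        # remaining p-1 variables as predictors
q = p - 1

# Assumed spatial structure: variables on a line, neighbors are consecutive
# indices. A 2-D voxel grid would be encoded the same way with more pairs.
neighbors = [(j, j + 1) for j in range(q - 1)]

# Difference operator D with (D @ b)[m] = b[j] - b[k] for the m-th pair.
D = np.zeros((len(neighbors), q))
for m, (j, k) in enumerate(neighbors):
    D[m, j], D[m, k] = 1.0, -1.0

b = cp.Variable(q)
lam1, lam2 = 0.1, 0.1              # arbitrary illustrative penalty weights
objective = cp.Minimize(
    cp.sum_squares(y - Z @ b) / (2 * n)
    + lam1 * cp.norm1(b)           # sparsity, as in the standard lasso
    + lam2 * cp.norm1(D @ b)       # fusion of spatially neighboring coefficients
)
cp.Problem(objective).solve()

# Estimated neighborhood of node i: predictors with nonzero coefficients.
print(np.flatnonzero(np.abs(b.value) > 1e-6))
```

Repeating this regression for every node i and combining the estimated neighborhoods (by union or intersection, as in Meinshausen and Bühlmann) yields the estimated graph.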
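The second part of the thesis runs a conjugate gradient algorithm directly on the SPD manifold, which an abstract-level sketch cannot reproduce. As a simpler stand-in for the underlying idea, the sketch below performs log-Euclidean smoothing: map each tensor to the tangent space via the matrix logarithm, average over spatial neighbors there, and map back via the matrix exponential. This is a different and simpler technique than the one in the thesis, shown only to illustrate why smoothing should respect the manifold of positive definite matrices rather than act entrywise in Euclidean space.

```python
# Sketch: log-Euclidean smoothing of a 1-D line of diffusion tensors
# (3x3 SPD matrices). This is NOT the conjugate-gradient method on the
# SPD manifold described in the thesis; it only illustrates manifold-aware
# smoothing. Requires numpy and scipy.
import numpy as np
from scipy.linalg import expm, logm

def random_spd(rng, d=3):
    """Generate a random symmetric positive definite matrix."""
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

rng = np.random.default_rng(2)
tensors = [random_spd(rng) for _ in range(10)]   # placeholder "voxel" tensors

# Map to the tangent space: matrix logarithm of each SPD matrix.
logs = [logm(T).real for T in tensors]

# Smooth in the tangent space with a simple moving average over neighbors.
smoothed_logs = []
for i in range(len(logs)):
    window = logs[max(0, i - 1): i + 2]          # voxel and its line neighbors
    smoothed_logs.append(sum(window) / len(window))

# Map back to the manifold: the matrix exponential of a symmetric matrix is
# always SPD, so positive definiteness is preserved by construction.
smoothed = [expm(L) for L in smoothed_logs]

print(np.linalg.eigvalsh(smoothed[0]))           # all eigenvalues positive
```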
Keywords/Search Tags: Gaussian, Local, Smoothing