
Bayesian Analysis Of The Covariance Matrix

Posted on: 2020-01-10    Degree: Doctor    Type: Dissertation
Country: China    Candidate: C Y Song    Full Text: PDF
GTID: 1360330629480825    Subject: Statistics
Abstract/Summary:
Estimating an unknown covariance matrix has been an important problem for more than half a century. It has a wide range of modern applications, including astrophysics (Pope and Szapudi [1], Hamimeche and Lewis [2]), economics (Ledoit and Wolf [3]), the environmental sciences (Frei and Künsch [4], Eguchi et al. [5]), climatology (Guillot et al. [6]), and genetics (Schäfer and Strimmer [7]). Finding an unconstrained and statistically interpretable reparameterization of a covariance matrix is still an open problem in statistics [8]. In the modern high-dimensional data environment in particular, enforcing the positive-definiteness constraint can be computationally expensive. We carry out Bayesian analysis of the covariance matrix in the multivariate normal distribution, in hierarchical models, and in the multivariate one-way ANOVA model. We propose various priors for the covariance matrix and consider computation for these priors and posteriors, with the proposed new methods capable of handling large covariance matrices. The main contributions are as follows.

(1) Bayesian analysis of the covariance matrix of a multivariate normal distribution has received considerable attention over the last two decades. We propose a new class of priors for the covariance matrix that includes both inverse Wishart and reference priors as special cases. The main motivation for the new class is to have priors available, both subjective and objective, that do not "force eigenvalues apart," a common criticism of inverse Wishart and Jeffreys priors. Extensive comparison of these "shrinkage priors" with inverse Wishart and Jeffreys priors is undertaken, and the new priors appear to have considerably better performance. A number of curious facts about the new priors are also observed: for instance, the posterior distribution is proper with just one vector observation from the multivariate normal distribution, regardless of the dimension of the covariance matrix, and useful inference about features of the covariance matrix remains possible. Finally, a new MCMC algorithm is developed for this class of priors and is shown to be computationally effective for matrices of up to 100 dimensions.

(2) Hierarchical models are the workhorse of much of Bayesian analysis, yet there is uncertainty as to which priors to use for the hyperparameters. Formal approaches to objective Bayesian analysis, such as the Jeffreys-rule or reference prior approaches, are only implementable in simple hierarchical settings. It is thus common to use less formal approaches, such as carrying formal priors from non-hierarchical models over to hierarchical settings. This can be fraught with danger, however: non-hierarchical Jeffreys-rule priors for variances or covariance matrices yield improper posterior distributions when used at higher levels of a hierarchical model. Berger et al. [10] approached the choice of hyperpriors in normal hierarchical models through the frequentist notion of admissibility of the resulting estimators; hyperpriors "on the boundary of admissibility" are sensible choices for objective priors, being as diffuse as possible without producing inadmissible procedures. The admissibility (and propriety) properties of a number of priors were considered in that paper, but no overall recommendation of a specific prior was reached. We complete the story and propose a particular objective prior for use in all normal hierarchical models, based on considerations of admissibility, ease of implementation, and performance.

(3) The multivariate one-way ANOVA model is of substantial importance in contemporary statistical theory and application. It is heavily used in data fusion for analyzing data from different sources, allowing assessment of differences and correlations between groups. The model has an unknown overall mean and two unknown covariance matrices: the error covariance matrix and the random-effects covariance matrix. We study this problem from the Bayesian perspective, using various subjective and objective prior distributions. Typically, independent priors are assumed for the mean and each covariance matrix; that case is considered here, with the primary focus being the determination of when common objective priors yield proper posteriors. The main part of the work, however, studies a new class of dependent priors called "commutative priors," motivated from three directions: first, there are problems where it is most natural to place a prior on the (matrix-valued) "signal-to-noise ratio"; second, one often seeks dimension reduction, and the commutative priors substantially reduce the dimension of the unknowns; third, the commutative priors have excellent computational features. Interestingly, the commutative prior is also a conjugate prior. Propriety and moment existence are derived for both the priors and their posteriors. Moreover, a new and computationally effective MCMC algorithm is developed for the proposed commutative priors. Simulation and real data analysis show the potential advantages of the commutative priors.
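The inverse Wishart prior that the new shrinkage class generalizes is the standard conjugate choice for a normal covariance matrix. As a minimal sketch of that conjugate baseline only (not the dissertation's proposed priors), assuming a known zero mean and SciPy's `invwishart`, the posterior update is available in closed form:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Simulate n observations from N(0, Sigma_true) in p dimensions.
p, n = 3, 50
Sigma_true = np.array([[2.0, 0.5, 0.0],
                       [0.5, 1.0, 0.3],
                       [0.0, 0.3, 1.5]])
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)

# With a known zero mean, the inverse Wishart prior IW(nu0, Psi0) is
# conjugate: the posterior is IW(nu0 + n, Psi0 + S), S = sum_i x_i x_i^T.
nu0, Psi0 = p + 2, np.eye(p)      # weakly informative prior (illustrative choice)
S = X.T @ X
posterior = invwishart(df=nu0 + n, scale=Psi0 + S)

# Posterior mean as a point estimate of the covariance matrix:
# E[Sigma | X] = (Psi0 + S) / (nu0 + n - p - 1).
Sigma_hat = posterior.mean()
```

Draws from `posterior.rvs()` can then be used for Monte Carlo inference about functions of the covariance matrix, such as its eigenvalues, which is exactly where the eigenvalue-spreading behavior criticized in part (1) becomes visible.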
Keywords/Search Tags:Covariance matrix, Multivariate normal distribution, Hierarchical models, Multivariate one-way ANOVA model, Bayesian estimator, Objective priors, Commutative priors