
A comparison of alternative approximations to maximum likelihood estimation for hierarchical generalized linear models: The logistic-normal model case

Posted on: 2002-05-14
Degree: Ph.D
Type: Dissertation
University: Michigan State University
Candidate: Yosef, Matheos
Full Text: PDF
GTID: 1460390011496820
Subject: Education
Abstract/Summary:
Educational data often have a hierarchical structure (e.g., students are nested within clusters such as schools), and outcome variables are sometimes discrete (e.g., whether a student repeats a grade). In such cases, the outcome variable is usually related to the covariates and the cluster-level random effects through a hierarchical generalized linear model.

The (marginal) maximum likelihood (ML) estimation method is widely used to estimate the parameters of such models. To obtain the marginal likelihood that must be maximized, the random effects have to be integrated out of the joint distribution of the outcome and the random effects. In many cases this integration cannot be carried out in closed form, and several approaches have been used to approximate the integral. This dissertation compared four methods of integral approximation: two Laplace-based methods and two based on Gaussian (Gauss-Hermite) numerical integration.

Analytic and numerical comparisons show that, in the univariate random effects case, the second-order Laplace method (Laplace2) and the adaptive Gauss-Hermite method (AGH) with one quadrature point give the same result. The sixth-order Laplace approximation (Laplace6) has the same order of error as AGH with 4 to 6 quadrature points. Ordinary Gauss-Hermite (GH) quadrature required many more quadrature points (8 and 14, respectively) to give results similar to Laplace2 and Laplace6. The error of the Laplace6 approximation was smaller than that of Laplace2 by at least a factor of O(n^-1), where n is the cluster size.

Simulation studies using programs that implement the four methods (HLM, MIXOR, and SAS PROC NLMIXED) indicate that, in the univariate random effects case, all methods perform well when the cluster size is large. However, Laplace2 usually gives the most biased estimates and sometimes has the largest mean-squared error (MSE). For small cluster sizes, AGH performed best in terms of speed, MSE, and bias, while ordinary GH performed worst. In the multivariate (bivariate) random effects case, Laplace6 performed best in terms of bias and MSE, with ordinary GH following closely. The Laplace2 estimates had the largest biases, while the algorithm implementing AGH was computationally the slowest and required good starting parameter values.

Overall, Laplace2 is a simple and fast way to obtain estimates, especially for models with small random-effects variance. Laplace6 is much more accurate and quite fast but requires the derivation of cumbersome formulas. GH is simple but may need a fairly large number of quadrature points for accurate estimation, which makes it computationally inefficient in the multivariate random effects case. AGH combines the advantages of GH (simplicity of formula) and higher-order Laplace methods (accuracy with relatively few quadrature points), but it too becomes computationally inefficient as the dimension of the random effects increases. The Laplace-based methods suffer much less from this dimensionality problem.
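To make the quantity being approximated concrete, the sketch below is a minimal, illustrative implementation (not the dissertation's code; the toy data, parameter values, and function names are assumptions) of one cluster's marginal likelihood contribution in a random-intercept logistic-normal model, approximated by ordinary Gauss-Hermite and adaptive Gauss-Hermite quadrature. With a single quadrature point, the AGH routine reduces to a Laplace-type approximation, mirroring the equivalence noted above.

```python
# Minimal sketch (illustrative only): marginal likelihood contribution of one
# cluster in a random-intercept logistic-normal model,
#   y_ij ~ Bernoulli(expit(x_ij * beta + u_j)),  u_j ~ N(0, tau^2),
# approximated by ordinary Gauss-Hermite (GH) and adaptive Gauss-Hermite (AGH)
# quadrature. Toy data, parameter values, and names are assumptions.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize_scalar
from scipy.special import expit

def cluster_loglik(u, y, x, beta):
    """Conditional log-likelihood of one cluster's responses given random effect u."""
    p = expit(x * beta + u)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def gh_marginal(y, x, beta, tau, n_quad):
    """Ordinary GH: nodes centered at 0 and scaled only by the random-effect SD."""
    z, w = hermgauss(n_quad)                    # nodes/weights for weight exp(-z^2)
    u = np.sqrt(2.0) * tau * z                  # change of variables u = sqrt(2)*tau*z
    vals = np.array([np.exp(cluster_loglik(ui, y, x, beta)) for ui in u])
    return np.sum(w * vals) / np.sqrt(np.pi)

def agh_marginal(y, x, beta, tau, n_quad):
    """Adaptive GH: nodes recentered at the mode of the integrand and rescaled
    by its curvature; with n_quad = 1 this is a Laplace-type approximation."""
    # negative log of the integrand g(u) = lik(y | u) * N(u; 0, tau^2), up to a constant
    neg_log_g = lambda u: -(cluster_loglik(u, y, x, beta) - 0.5 * u**2 / tau**2)
    u_hat = minimize_scalar(neg_log_g).x        # mode of the integrand
    eps = 1e-4                                  # numerical second derivative at the mode
    curv = (neg_log_g(u_hat + eps) - 2 * neg_log_g(u_hat) + neg_log_g(u_hat - eps)) / eps**2
    sigma = 1.0 / np.sqrt(curv)                 # local scale at the mode
    z, w = hermgauss(n_quad)
    u = u_hat + np.sqrt(2.0) * sigma * z
    prior = np.exp(-0.5 * u**2 / tau**2) / (tau * np.sqrt(2.0 * np.pi))
    g = np.exp([cluster_loglik(ui, y, x, beta) for ui in u]) * prior
    # integral of g(u) du under the change of variables u = u_hat + sqrt(2)*sigma*z
    return np.sqrt(2.0) * sigma * np.sum(w * np.exp(z**2) * g)

# Toy cluster: 5 binary responses, one covariate, illustrative beta and tau.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
y = (rng.random(5) < expit(0.5 * x)).astype(float)
for q in (1, 4, 10):
    print(q, gh_marginal(y, x, 0.5, 1.0, q), agh_marginal(y, x, 0.5, 1.0, q))
```

Consistent with the comparison above, the AGH values typically stabilize with far fewer quadrature points than ordinary GH; the full marginal log-likelihood would sum such log contributions over clusters and be maximized over beta and tau.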
Keywords/Search Tags: Random effects, Hierarchical, Model, Case, Quite, Method, AGH, Likelihood