
Research on the Theory and Methods of Model Averaging

Posted on: 2019-06-18  Degree: Doctor  Type: Dissertation
Country: China  Candidate: F Xu  Full Text: PDF
GTID: 1360330563490952  Subject: Quantitative Economics
Abstract/Summary:
The common practice in empirical research is to select a proper model from the space of all candidate models. In this process, the uncertainty introduced by the model selection step is typically ignored, which leads to incorrect parameter inference and a dramatic drop in forecast accuracy. Researchers have proposed various model selection approaches and criteria, such as stepwise regression, AIC, BIC, cross-validation, the Lasso, and Mallows' Cp. But model selection still suffers from the uncertainty arising in the selection process, and it underestimates the true variance. Alternatively, model averaging, which accounts for model uncertainty, has gradually come into focus in statistics and econometrics. By combining models from the space of all candidate models with appropriate weights, this approach can reduce uncertainty, limit the loss of useful information, and avoid committing to a poorly performing model. Inference is then based on the whole universe of candidate models: we account not only for the uncertainty of the parameter estimate conditional on a given model, but also for the uncertainty of the parameter estimate across different models.

Frequentist Model Averaging (FMA) and Bayesian Model Averaging (BMA) are the two approaches to model averaging in the literature. Despite their similarities in spirit and objectives, the two techniques differ in their approach to inference. Compared with FMA, there is a huge literature on the use of BMA in statistics and, more recently, in economics, so the BMA toolkit is the larger of the two. However, FMA has received growing attention over the last decade. Building on earlier work, this dissertation systematically elaborates the model averaging approach, covering weight selection, asymptotic properties, and finite-sample investigation, and further studies several frontier issues. It mainly comprises the following parts.

First, we comprehensively summarize BMA estimation and inference, prior distributions on the parameter space, and priors on the model space. Within the BMA approach, we can compute the posterior inclusion probability (PIP) for a given variable, calculated as the sum of the posterior model probabilities of all models that include that variable. However, implementing BMA can be difficult for two reasons: (i) two types of priors (on parameters and on models) must be elicited for many models, which can be a complicated task; and (ii) the number of models under consideration is often so large that the computational burden of BMA can be prohibitive.

Second, we introduce dynamic model averaging (DMA), which allows not only the coefficients but also the entire forecasting model to change over time. We find that the DMA approach leads to substantial forecasting improvements over simple benchmark regressions and over more sophisticated approaches such as time-varying-coefficient models. To alleviate the computational burden of a large number of models, we use forgetting factors to describe the evolution of the error variance, so that a Kalman filter can update the state-space model in place of an MCMC algorithm.

Third, we investigate the two main Frequentist Model Averaging approaches: Mallows Model Averaging (MMA) and Jackknife Model Averaging (JMA). MMA selects forecast weights by minimizing a Mallows criterion, which is an asymptotically unbiased estimate of both the in-sample mean squared error (MSE) and the out-of-sample one-step-ahead mean squared forecast error (MSFE). Finite-sample simulations show that MMA forecasts have lower MSFE than other feasible forecasting methods, including equal weighting, BIC selection, AIC selection, Bates-Granger combination, predictive least squares, and Granger-Ramanathan weighting. Similarly, the JMA estimator selects weights by minimizing a leave-one-out cross-validation criterion, which improves estimation of an unknown conditional mean under non-nested model uncertainty in heteroskedastic error settings. This approach is asymptotically optimal in the sense of achieving the lowest possible expected squared error. Monte Carlo simulations show that JMA achieves significant efficiency gains over existing model selection and averaging methods in the presence of heteroskedasticity.

Finally, we introduce forecast combination with factor-augmented regression. The unobserved factor regressors are estimated by principal components of a large panel of N predictors over T periods. With these generated regressors, the Mallows and leave-h-out cross-validation criteria are asymptotically unbiased estimators of the one-step-ahead and multi-step-ahead mean squared forecast errors, respectively.
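The PIP computation described in the first part can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: it assumes the common BIC approximation to posterior model probabilities (rather than any particular prior elicitation) and uses simulated data with hypothetical variable names.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, K = 200, 4
X = rng.normal(size=(n, K))
# True model uses only x1 and x2; x3 and x4 are irrelevant.
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

def bic(y, Z):
    """BIC of an OLS fit with regressor matrix Z (intercept included in Z)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

# Enumerate all 2^K subsets and approximate P(M|y) proportional to exp(-BIC/2).
models, bics = [], []
for mask in itertools.product([0, 1], repeat=K):
    cols = [j for j in range(K) if mask[j]]
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    models.append(mask)
    bics.append(bic(y, Z))

bics = np.array(bics)
w = np.exp(-(bics - bics.min()) / 2)
post = w / w.sum()                     # posterior model probabilities

# PIP of variable j = sum of posterior probabilities of models containing j.
pip = np.array([sum(p for m, p in zip(models, post) if m[j]) for j in range(K)])
print(np.round(pip, 3))
```

The enumeration also makes the second implementation difficulty concrete: with K candidate regressors there are 2^K models, which is exactly why the computational burden becomes prohibitive for large K.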
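The DMA mechanism in the second part, with forgetting factors replacing MCMC, can be sketched for two candidate models. All numbers here (forgetting factors, the known observation variance, the structural break in the simulated data) are illustrative assumptions, not the dissertation's specification.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
x = rng.normal(size=T)
# The slope is active in the first regime and vanishes after t = 100.
y = np.where(np.arange(T) < 100, 1.5 * x, 0.0) + 0.3 * rng.normal(size=T)

# Model 1: intercept only; Model 2: intercept plus x_t.
Z = [np.column_stack([np.ones(T)]), np.column_stack([np.ones(T), x])]
lam, alpha = 0.99, 0.99          # forgetting factors: states, model probabilities
K = len(Z)
theta = [np.zeros(z.shape[1]) for z in Z]
P = [np.eye(z.shape[1]) for z in Z]
pi = np.full(K, 1 / K)
V = 0.3 ** 2                     # observation variance (assumed known here)
prob_hist = np.zeros((T, K))

for t in range(T):
    lik = np.zeros(K)
    for m in range(K):
        z = Z[m][t]
        Pp = P[m] / lam                      # forgetting-factor prediction step
        f = z @ theta[m]                     # one-step-ahead forecast
        S = z @ Pp @ z + V                   # forecast variance
        lik[m] = np.exp(-0.5 * (y[t] - f) ** 2 / S) / np.sqrt(2 * np.pi * S)
        k = Pp @ z / S                       # Kalman gain
        theta[m] = theta[m] + k * (y[t] - f)
        P[m] = Pp - np.outer(k, z @ Pp)
    pi = pi ** alpha                         # forgetting on model probabilities
    pi = pi / pi.sum()
    pi = pi * np.maximum(lik, 1e-300)
    pi = pi / pi.sum()
    prob_hist[t] = pi
```

Tracking `prob_hist` shows the point of DMA: the slope model dominates while the slope is active, and probability mass shifts after the break, with no MCMC step anywhere in the recursion.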
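The Mallows criterion from the third part can be sketched for two nested OLS models, minimizing over the weight by grid search. Estimating sigma-squared from the largest candidate model is a common convention assumed here; the data and model set are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=(n, 3))
y = x[:, 0] + 0.5 * x[:, 1] + 0.2 * x[:, 2] + rng.normal(size=n)

def fit(Z):
    """Return OLS fitted values and parameter count for regressor matrix Z."""
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Z @ beta, Z.shape[1]

# Two nested candidates: M1 uses x1 only; M2 uses x1..x3 (both with intercept).
mu1, k1 = fit(np.column_stack([np.ones(n), x[:, :1]]))
mu2, k2 = fit(np.column_stack([np.ones(n), x]))

# sigma^2 estimated from the largest model (a standard convention).
sigma2 = np.sum((y - mu2) ** 2) / (n - k2)

# Mallows criterion C(w) = ||y - mu(w)||^2 + 2 * sigma^2 * (w*k1 + (1-w)*k2),
# minimized over w in [0, 1] by grid search.
grid = np.linspace(0, 1, 1001)
crit = [np.sum((y - (w * mu1 + (1 - w) * mu2)) ** 2)
        + 2 * sigma2 * (w * k1 + (1 - w) * k2) for w in grid]
w_star = grid[int(np.argmin(crit))]
print(w_star)
```

The penalty term is what makes the criterion an unbiased MSE estimate: the averaged effective number of parameters is charged against the in-sample fit, so heavier weight on the small model is rewarded only when its fit justifies it.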
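The JMA criterion can be sketched the same way, using the standard hat-matrix shortcut for leave-one-out residuals so no model is actually refitted n times. The heteroskedastic design below is illustrative of the setting JMA targets, not the dissertation's simulation design.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=(n, 3))
# Heteroskedastic errors: the variance depends on x1.
y = x[:, 0] + 0.5 * x[:, 1] + (0.5 + np.abs(x[:, 0])) * rng.normal(size=n)

def loo_residuals(Z, y):
    """Leave-one-out OLS residuals via the shortcut e_i / (1 - h_ii)."""
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    e = y - H @ y
    return e / (1 - np.diag(H))

# Two candidate models; E stacks their jackknife residuals column-wise.
Z1 = np.column_stack([np.ones(n), x[:, :1]])
Z2 = np.column_stack([np.ones(n), x])
E = np.column_stack([loo_residuals(Z1, y), loo_residuals(Z2, y)])

# Cross-validation criterion CV(w) = ||E w||^2, minimized over the weight
# simplex (a grid suffices with two models).
grid = np.linspace(0, 1, 1001)
cv = [np.sum((E @ np.array([w, 1 - w])) ** 2) for w in grid]
w_star = grid[int(np.argmin(cv))]
print(w_star)
```

Because the criterion is built from out-of-sample residuals rather than a plug-in variance estimate, it adapts to heteroskedasticity automatically, which is the source of the efficiency gains reported above.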
Keywords/Search Tags: Bayesian Model Averaging, Dynamic Model Averaging, Frequentist Model Averaging, Mallows criterion, leave-one-out cross-validation