
Skew Normal Mixture Time Series Models

Posted on: 2011-10-26 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: W Xian | Full Text: PDF
GTID: 1100360332457224 | Subject: Probability theory and mathematical statistics
Abstract/Summary:
Mixture time series models have received considerable attention because of their flexibility in modeling. This thesis proposes two kinds of mixture time series models based on skew normal distributions: the skew normal mixture autoregressive model and the skew normal mixture autoregressive conditional heteroscedastic model, which are useful generalizations of mixture time series models based on normal distributions. The new models can accommodate multi-modality, asymmetry, time-varying variance and other characteristics exhibited by time series data. Before doing this we summarize in detail the definitions, properties, random number generation and estimation methods for two kinds of skew normal distributions. For the two proposed models we discuss stationarity conditions and parameter estimation via the EM algorithm, and we give a method for computing the standard errors of the estimators. We also consider the application of the latter model to VaR. In the following we introduce the main results of this thesis.

The skew normal distribution of type I, usually called simply the skew normal distribution in the literature, was proposed by O'Hagan and Leonard (1976) and Azzalini (1985, 1986). It has the density function f(y|ξ,σ²,λ) = (2/σ)φ((y−ξ)/σ)Φ(λ(y−ξ)/σ), where φ and Φ denote the standard normal density and distribution function. Denote it by Y~SN(ξ,σ²,λ); when ξ=0 and σ²=1 it is denoted by Y~SN(λ).

The skew normal distribution of type II is generally called the epsilon-skew-normal (ESN) distribution and was proposed by Mudholkar and Hutson (2000). The standardized distribution ESN(0,1,ε), or ESN(ε), has a density formed by joining two differently scaled normal density halves at the origin, with scales proportional to 1+ε and 1−ε, where −1<ε<1. If X~ESN(ε), then Y=ξ+σX~ESN(ξ,σ,ε).

Let the distribution function of X~SN(ξ,σ²,λ) be FSN(x|ξ,σ²,λ). The mixture autoregressive model based on the skew normal distribution of type I is defined by

F(Xt|Ft-1) = Σk αk FSN(Xt | βk0+βk1Xt-1+…+βkpkXt-pk, σk², λk),   (1)

where α1+…+αK=1 and αk>0, k=1,…,K. Here F(Xt|Ft-1) is the conditional cumulative distribution function of Xt given the past information, and Ft-1 is the σ-field generated by {Xt-1, Xt-2,…}.

Theorem 1. A necessary and sufficient condition for model (1) to be first-order stationary is that all roots of the equation 1-c1z-1-…-cpz-p=0 lie inside the unit circle, where ci = α1β1i+…+αKβKi (with βki=0 for i>pk) and p = max(p1,…,pK).

Theorem 2. Suppose that model (1) satisfies the first-order stationarity condition. A necessary and sufficient condition for model (1) to be second-order stationary is that all roots of the equation 1-C1z-1-…-Cpz-p=0 lie inside the unit circle, where the coefficients Cl, l=1,…,p, are explicit functions of the αk and βki expressed through (p-1)×(p-1) matrices A and A-1 given in the thesis.

We estimate the parameters by the EM algorithm. Let α=(α1,…,αK-1)T, σ²=(σ1²,…,σK²)T, λ=(λ1,…,λK)T, βk=(βk0,βk1,…,βkpk)T and θ=(αT,β1T,…,βKT,σ²T,λT)T. Let Z=(Z1,…,Zn) with Zt=(Z1t,…,ZKt)T, where Zkt equals 1 if Xt comes from the kth component and 0 otherwise. Model (1) can then be represented hierarchically in terms of these latent indicators, and the complete data log-likelihood function, ignoring additive constant terms, follows from this representation.

We have also considered two modifications of the EM algorithm: ECM (Expectation/Conditional Maximization) and ECME (Expectation/Conditional Maximization Either). From the fitted model we construct a matrix Ω (its explicit form, involving a truncation level L, is given in the thesis); Ω-1 is the asymptotic covariance matrix of the estimators, from which the standard errors are obtained. In practice we can choose L=10.
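As a concrete illustration of the E-step for a model of the form (1), the following Python sketch computes the posterior component probabilities (the conditional expectations of the Zkt) from the type I skew normal density. This is a minimal sketch, not code from the thesis: it assumes a common autoregressive order p for all components, and the names sn_pdf and e_step are illustrative.

```python
import numpy as np
from scipy.stats import norm

def sn_pdf(x, xi, sigma2, lam):
    """Type I skew normal density: (2/sigma) * phi(z) * Phi(lam * z), z = (x - xi)/sigma."""
    sigma = np.sqrt(sigma2)
    z = (x - xi) / sigma
    return (2.0 / sigma) * norm.pdf(z) * norm.cdf(lam * z)

def e_step(x, alpha, beta, sigma2, lam):
    """Posterior probabilities tau[t, k] that X_t comes from component k,
    for a K-component skew normal mixture AR model with common order p."""
    x = np.asarray(x, dtype=float)
    K, p = beta.shape[0], beta.shape[1] - 1
    tau = np.zeros((len(x), K))
    for t in range(p, len(x)):
        lags = x[t - p:t][::-1]                      # X_{t-1}, ..., X_{t-p}
        for k in range(K):
            mu_kt = beta[k, 0] + beta[k, 1:] @ lags  # component-k conditional mean
            tau[t, k] = alpha[k] * sn_pdf(x[t], mu_kt, sigma2[k], lam[k])
        tau[t] /= tau[t].sum()                       # normalize over the K components
    return tau[p:]
```

In a full EM iteration these responsibilities would then be plugged into the M-step (or the conditional maximization steps of ECM/ECME) to update α, the βk, the σk² and the λk.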
Let the density function of X~ESN(ξ,σ²,ε) be fESN(x|ξ,σ²,ε). The mixture autoregressive model based on the skew normal distribution of type II (model (2)) is analogous to model (1), with the kth component specified through fESN and a skewness parameter εk in place of FSN and λk, where again α1+…+αK=1 and αk>0, k=1,…,K. We also estimate its parameters by the EM algorithm. Let the order statistics of the observations X1,…,Xn be X(1),X(2),…,X(n), and let lk=l(X(1),…,X(n),μkt*), k=1,…,K, be a sequence of random integers such that X(lk)<μkt*≤X(lk+1); these quantities are used in the estimation of model (2).

The skew normal mixture autoregressive conditional heteroscedastic model (model (3)) combines mixture autoregression with conditionally heteroscedastic component variances; its kth component has autoregressive parameters φk0,φk1,…,φkpk and conditional variance parameters βk0,βk1,…,βkqk, subject to α1+…+αK=1, αk>0, k=1,…,K, βk0>0 and βki≥0, i=1,…,qk, k=1,…,K.

Analogous to model (1), we estimate the parameters by the EM algorithm. Let α=(α1,…,αK-1)T, θk=(φk0,φk1,…,φkpk)T (k=1,…,K), βk=(βk0,βk1,…,βkqk)T (k=1,…,K), λ=(λ1,…,λK)T and θ=(αT,θ1T,β1T,…,θKT,βKT,λT)T. Let Z=(Z1,…,Zn) with Zt=(Z1t,…,ZKt)T, where Zkt equals 1 if Xt comes from the kth component and 0 otherwise. Model (3) has a hierarchical representation similar to that of model (1), from which the complete data log-likelihood function, ignoring additive constant terms, is obtained. The ECM and ECME algorithms are used to estimate the parameters, and the standard errors of the estimators can be computed by the same method as for model (1).

Model (3) can be applied to VaR (Value-at-Risk). Replace Xt with the return rt, let dkt be the p-quantile of rt at time t under the kth component, and set dmin,t = min1≤k≤K dkt and dmax,t = max1≤k≤K dkt. Given a precision ε>0, the required VaR is found by bisection: (1) Let dm=(dmin+dmax)/2. If |F(dm)-p|<ε, then VaR=dm; otherwise go to (2). (2) If F(dm)<p, set dmin=dm; if F(dm)>p, set dmax=dm. Return to (1) and repeat, stopping when |F(dm)-p|<ε; the required VaR is then dm.
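The bisection search above translates directly into code. The following is a minimal Python sketch under stated assumptions: F is taken to be the fitted conditional mixture distribution function of rt, passed in as a callable, and the names var_bisection, eps and max_iter are illustrative rather than from the thesis.

```python
def var_bisection(F, p, d_min, d_max, eps=1e-6, max_iter=200):
    """Bisection search for the value d with F(d) close to p (the VaR at level p).

    F            : callable, fitted conditional CDF of the return r_t at time t
    p            : target quantile level
    d_min, d_max : initial bracket, e.g. the smallest and largest component p-quantiles
    eps          : precision, corresponding to the tolerance epsilon in the text
    """
    for _ in range(max_iter):
        d_m = 0.5 * (d_min + d_max)
        f_m = F(d_m)
        if abs(f_m - p) < eps:        # step (1): close enough, VaR found
            return d_m
        if f_m < p:                   # step (2): quantile lies to the right of d_m
            d_min = d_m
        else:                         # quantile lies to the left of d_m
            d_max = d_m
    return 0.5 * (d_min + d_max)      # best bracket midpoint after max_iter steps
```

Since the conditional mixture CDF is monotone increasing, the bracket [dmin, dmax] always contains the p-quantile and the search halves its length at every step.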
Keywords/Search Tags:Mixture models, Autoregressive, Autoregressive conditional heteroscedastic, Stationarity, EM estimation