
Statistical Inference Of Autoregressive Error-in-variables Models With Validation Data

Posted on: 2015-03-23    Degree: Master    Type: Thesis
Country: China    Candidate: K Li    Full Text: PDF
GTID: 2250330428496110
Subject: Probability theory and mathematical statistics

Abstract/Summary:
Consider the autoregressive error-in-variables (EV) model
$$Y_t = \beta Y_{t-1} + \varepsilon_t, \qquad t \in \mathbb{Z}, \tag{1}$$
where the true variables $Y_t$ are not observed directly; only surrogate variables $\{\tilde Y_t;\ t \in \mathbb{Z}\}$, contaminated by measurement errors $\{u_t\}$, are observable. Here $\beta \in (-1,1)$ is an unknown parameter and $\{\varepsilon_t\}$ is an independent and identically distributed (i.i.d.) innovation sequence with $E(\varepsilon_1) = 0$ and $E(\varepsilon_1^2) = \sigma^2$.

Autoregressive models are a popular choice in many disciplines for the modeling of time series. This is especially true of population dynamics, where AR(1) or AR(2) models are often employed with $Y$ equal to the log of population abundance. The distinguishing features of model (1) are that in the primary data both the response and the predictor are measured with error, and that they are dependent. Moreover, we specify neither a structural equation nor any distributional assumption for $Y_t$ given $\tilde Y_t$. In this setting we consider the estimation of $\beta$ and a test for it.

1. Estimation of model (1)

We first develop a parameter estimation method with the help of validation data: the estimator of $\beta$ is defined so as to incorporate the information contained in both the surrogate variables and the validation sample, and the proposed estimator is proved to be consistent.

In what follows we assume that, in addition to the primary data $\{\tilde Y_t\}_{t=1}^{N}$, $n$ validation observations $\{(Y_i, \tilde Y_i)\}_{i=N+1}^{N+n}$ are available. Let $u(\tilde y) = E(Y_t \mid \tilde Y_t = \tilde y)$; that is, we assume $\tilde Y_t$ provides all useful information for predicting the unknown $Y_t$. Model (1) can then be rewritten as
$$u(\tilde Y_t) = \beta\, u(\tilde Y_{t-1}) + e_t, \tag{2}$$
where $e_t = \varepsilon_t + [u(\tilde Y_t) - Y_t] - \beta\,[u(\tilde Y_{t-1}) - Y_{t-1}]$. Clearly, $E(e_t) = 0$.

By employing the validation data, $u(\tilde Y_{t-d})$ in (2), for $d = 0$ or $1$, can be estimated by the kernel regression estimator
$$\hat u(\tilde y) = \frac{\sum_{i=N+1}^{N+n} Y_i\, K\bigl((\tilde y - \tilde Y_i)/h_n\bigr)}{\sum_{i=N+1}^{N+n} K\bigl((\tilde y - \tilde Y_i)/h_n\bigr)},$$
where $K(\cdot)$ is a kernel function and $h_n$ is a bandwidth tending to zero. The estimator of $\beta$ is then defined as the minimizer of
$$S_{n,N}(\beta) = \sum_{t=2}^{N} \bigl[\hat u(\tilde Y_t) - \beta\, \hat u(\tilde Y_{t-1})\bigr]^2.$$
That is, the estimator, say $\hat\beta_{n,N}$, minimizing $S_{n,N}(\beta)$ is the solution to the equation
$$\sum_{t=2}^{N} \hat u(\tilde Y_{t-1})\bigl[\hat u(\tilde Y_t) - \beta\, \hat u(\tilde Y_{t-1})\bigr] = 0. \tag{3}$$
By solving (3), it is easy to obtain that
$$\hat\beta_{n,N} = \frac{\sum_{t=2}^{N} \hat u(\tilde Y_t)\, \hat u(\tilde Y_{t-1})}{\sum_{t=2}^{N} \hat u(\tilde Y_{t-1})^2}.$$

Let $\mathcal{G}_m$ be the class of all continuous functions $g$ on $\mathbb{R}$ such that the derivatives $g^{(i)}(z)$ are uniformly bounded for $0 \le i \le m$ ($m > 2$).
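The estimation procedure above can be sketched numerically. The sketch below assumes an additive measurement-error mechanism, a Gaussian (second-order, rather than $m$-th order) kernel, and illustrative values for $\beta$, the error scale, and the sample sizes $N$ and $n$; none of these choices comes from the thesis. In this additive-Gaussian construction the surrogacy condition $E(Y_t \mid \tilde Y_t, \tilde Y_{t-1}, \ldots) = E(Y_t \mid \tilde Y_t)$ holds only approximately, so the estimate lands near $\beta$ rather than recovering it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

beta_true = 0.5          # illustrative true parameter
N, n = 2000, 1000        # primary sample size N, validation sample size n

def simulate(T):
    """Latent AR(1): Y_t = beta*Y_{t-1} + eps_t; surrogate Ytil_t = Y_t + u_t
    (additive Gaussian measurement error is an assumption for this sketch)."""
    Y = np.zeros(T)
    for t in range(1, T):
        Y[t] = beta_true * Y[t - 1] + rng.normal()
    Ytil = Y + 0.2 * rng.normal(size=T)
    return Y, Ytil

Y_prim, Ytil_prim = simulate(N)   # primary data: only Ytil_prim is observed
Y_val, Ytil_val = simulate(n)     # validation data: both coordinates observed

h = n ** (-1.0 / 5)               # bandwidth of order n^{-1/(2+m)} with m = 3

def u_hat(y):
    """Nadaraya-Watson estimate of u(y) = E[Y | Ytil = y] from validation data."""
    w = np.exp(-0.5 * ((y[:, None] - Ytil_val[None, :]) / h) ** 2)
    return (w @ Y_val) / w.sum(axis=1)

U = u_hat(Ytil_prim)              # \hat u(Ytil_t) for the primary series

# Least-squares estimator minimizing S_{n,N}(beta)
beta_hat = np.sum(U[1:] * U[:-1]) / np.sum(U[:-1] ** 2)
print(float(beta_hat))
```

With these illustrative settings `beta_hat` comes out close to (and, because of the approximate surrogacy, slightly below) `beta_true`.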
In order to state our theorems, we introduce the following assumptions. All limit relations are as $n \to \infty$ unless stated otherwise.

(C1) $\{Y_t\}_{t \in \mathbb{N}}$ is a stationary time series from model (1), and $E[Y_t^4 \mid \tilde Y_t] < \infty$.

(C2) $u(\cdot) \in \mathcal{G}_m$, $m > 2$.

(C3) (i) $\{\tilde Y_t\}_{t \in \mathbb{N}}$ is a homogeneous Markov process with a stationary distribution, and there exist a nonnegative integrable function $M(x)$ and $0 < \rho < 1$ such that $\|P^s(x, \cdot) - \pi(\cdot)\| \le \rho^s M(x)$, where $P^s(x, \cdot)$ ($s \in \mathbb{R}_+$) denotes the transition probability function and $\pi$ is the invariant measure. (ii) There exists a positive constant sequence $\eta_n$ such that $nP\bigl(f_{\tilde Y}(\tilde Y_0) \le \eta_n\bigr) \to 0$, where $f_{\tilde Y}$ is the density function of $\tilde Y_0$ and $f_{\tilde Y} \in \mathcal{G}_m$.

(C4) (i) $K(\cdot)$ is a bounded kernel function of order $m$. (ii) $K(\cdot)$ is of bounded variation on $\mathbb{R}$.

(C5) (i) $n^{1/2} h_n^2 \eta_n^2 (\log\log n)^{-1/2} \to \infty$; (ii) $n^{1/2} h_n^{m+1} \to 0$ ($m > 2$); (iii) $h_n^{m-1} \eta_n^{-2} \to 0$.

(C6) $N/n \to \lambda$, where $\lambda$ is a nonnegative constant.

(C7) $E[\varepsilon_t^4 \mid \tilde Y_t] < \infty$.

Remark. The above conditions can generally be met. For condition (C5), one may take $h_n = a_1 n^{-1/(2+m)}$ and $\eta_n = a_2 n^{(2-m)/(4(2+m))} (\log\log n)^{1/2}$, where $a_1$ and $a_2$ are positive constants.

Theorem 1. Suppose that conditions (C1)–(C7) hold. If $\beta$ is the true value of the parameter, then $\hat\beta_{n,N} \to \beta$ in probability.

2. Test of model (1)

Another important problem considered in this section is testing the null hypothesis $H_0: \beta = \beta_0$. The asymptotic representation and the asymptotic normality of the estimator are also derived, respectively. The following theorem gives the asymptotic representation of $\hat\beta_{n,N}$.

Theorem 2. Suppose that conditions (C1)–(C7) hold. If $\beta$ is the true value of the parameter, then $\hat\beta_{n,N} - \beta$ admits an asymptotic linear representation (the explicit form is displayed in the full text).

By the asymptotic representation theorem, we derive the following asymptotic distribution of $\hat\beta_{n,N}$.

Theorem 3. Suppose that conditions (C1)–(C7) hold. If $\beta$ is the true value of the parameter, then $\hat\beta_{n,N}$ is asymptotically normal with asymptotic variance $A$, and $A$ can be estimated consistently by a plug-in estimator $\hat A$ (the explicit expressions are displayed in the full text).

Moreover, we develop a test statistic for the parameter $\beta$ and prove that its asymptotic distribution is a standard $\chi^2$.

Theorem 4. Suppose that conditions (C1)–(C7) hold.
Then, under the null hypothesis $H_0: \beta = \beta_0$, the test statistic converges in distribution to $\chi^2(1)$, where $\chi^2(1)$ denotes the chi-squared distribution with one degree of freedom.
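The bandwidth and threshold rates suggested in the remark after (C7) can be checked against condition (C5) with exact rational arithmetic. The sketch below tracks only the polynomial exponent of $n$ in each expression, taking $m = 3$ for concreteness; the slowly varying $\log\log n$ factors matter only when the exponent is zero.

```python
from fractions import Fraction as F

# Suggested choices: h_n = a1 * n^{-1/(2+m)},
#                    eta_n = a2 * n^{(2-m)/(4(2+m))} * (loglog n)^{1/2}.
m = 3
h = F(-1, 2 + m)                 # exponent of n in h_n
eta = F(2 - m, 4 * (2 + m))      # exponent of n in eta_n

e1 = F(1, 2) + 2 * h + 2 * eta   # (C5)(i):  n^{1/2} h_n^2 eta_n^2 -> need divergence
e2 = F(1, 2) + (m + 1) * h       # (C5)(ii): n^{1/2} h_n^{m+1}    -> need exponent < 0
e3 = (m - 1) * h - 2 * eta       # (C5)(iii): h_n^{m-1} eta_n^{-2} -> need exponent < 0

# For (i) the exponent of n is exactly 0, and the leftover (loglog n)^{1/2}
# factor (from eta_n^2 combined with (loglog n)^{-1/2}) diverges, so (i) holds;
# (ii) and (iii) hold because their exponents are strictly negative.
print(e1, e2, e3)
```

With $m = 3$ this gives exponents $0$, $-3/10$ and $-3/10$, confirming that the suggested rates satisfy (C5).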
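Theorem 4 yields a Wald-type chi-squared test of $H_0: \beta = \beta_0$. A minimal sketch follows, assuming the statistic takes the usual quadratic form $N(\hat\beta_{n,N} - \beta_0)^2 / \hat A$; the thesis displays the exact statistic and the estimator $\hat A$, which is simply taken as an input here.

```python
import math

def chi2_1_sf(x):
    """Survival function of chi-squared(1): P(X > x) = 1 - erf(sqrt(x/2))."""
    return 1.0 - math.erf(math.sqrt(x / 2.0))

def wald_test(beta_hat, beta0, A_hat, n):
    """Wald-type statistic for H0: beta = beta0, referred to chi-squared(1).

    The quadratic form n * (beta_hat - beta0)**2 / A_hat is an assumed shape;
    A_hat is a consistent estimate of the asymptotic variance from Theorem 3.
    """
    stat = n * (beta_hat - beta0) ** 2 / A_hat
    return stat, chi2_1_sf(stat)

# Illustrative numbers: a small deviation from beta0 yields a large p-value,
# so H0 is not rejected at the 5% level (chi2(1) critical value ~ 3.84).
stat, p = wald_test(beta_hat=0.52, beta0=0.5, A_hat=1.0, n=500)
print(stat, p)   # stat = 0.2, p ≈ 0.65
```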
Keywords/Search Tags: Asymptotic normality, Autoregression model, Error in variables, Validation data