
Empirical Likelihood Inference For Partial Linear Models With Dependent Errors

Posted on: 2011-06-25
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z X Yu
Full Text: PDF
GTID: 1100360305453562
Subject: Probability theory and mathematical statistics

Abstract/Summary:
Partial linear models, also known as semiparametric regression models, form an important class of statistical models developed since the 1980s. The central problem in studying a partial linear model is to construct estimators of the unknown parameter β and the unknown function g(·). When the errors ε_i are i.i.d. random variables, the most basic method for estimating β and g(·) is penalized least squares, given by Engle et al. (1986), Green et al. (1985), Shiau et al. (1986), among others. As research deepened, various estimation methods were used to obtain estimators of the unknown quantities, including the kernel method, spline method, series estimation, local linear estimation, two-stage estimation, and M-estimation. However, the independence assumption on the errors is not always appropriate in applications, especially for sequentially collected economic data, which often exhibit dependence in the errors. Recently, partial linear regression with serially correlated errors has attracted increasing attention from statisticians.

Owen (1988) first proposed the empirical likelihood method, and Owen (1988, 1990) studied its general properties systematically. Many research results indicate that the empirical likelihood method enjoys properties similar to those of the parametric likelihood introduced by Fisher, and it has notable advantages over classical and modern statistical methods: confidence regions constructed by empirical likelihood respect the range of the parameter, are invariant under transformations, and take shapes determined by the data; the method admits Bartlett correction; and it requires no pivotal quantity. Initially, Owen (1988) dealt with a class of nonparametric statistical problems, using the empirical likelihood ratio to construct nonparametric confidence intervals and tests. The empirical likelihood approach has since proved effective and adaptable, and this powerful statistical tool has been extended to a much wider range of statistical models.

In Chapter 2, we apply the empirical likelihood method to the statistical inference of the parameter in the partial linear model with negatively associated (NA) random errors. We construct the empirical likelihood ratio test statistic for the regression parameter in the model and, drawing on existing results for NA sequences, show that the statistic is asymptotically chi-square distributed at the true value of the parameter.

Consider the following typical partial linear model:

    y_i = x_i β + g(t_i) + e_i,  i = 1, …, n,

where the (x_i, t_i) are nonrandom design points, β is an unknown parameter, g(·) is an unknown function defined on a closed interval I of R, and {e_i, i ≥ 1} is a sequence of zero-mean stationary NA random errors with E e_1² = σ². Let

    x̃_i = x_i − Σ_{k=1}^n W_nk(t_i) x_k,  ỹ_i = y_i − Σ_{k=1}^n W_nk(t_i) y_k,

where the W_ni(·) (1 ≤ i ≤ n) are weight functions defined on I.
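To make the construction concrete, the following Python sketch carries out the partialling-out step. The Gaussian kernel, the fixed bandwidth h, and the scalar β are illustrative assumptions, since the abstract leaves the weights W_ni(·) generic.

```python
import numpy as np

def nw_weights(t, h):
    """Nadaraya-Watson weights: one illustrative choice of W_ni(.)."""
    d = (t[:, None] - t[None, :]) / h
    K = np.exp(-0.5 * d ** 2)        # Gaussian kernel (an assumption)
    return K / K.sum(axis=0)         # column i holds the weights at t_i

def el_scores(beta, x, t, y, h=0.3):
    """Auxiliary scores Z_i(beta) = x~_i * (y~_i - x~_i * beta)."""
    W = nw_weights(t, h)
    x_t = x - W.T @ x                # x~_i = x_i - sum_k W_nk(t_i) x_k
    y_t = y - W.T @ y                # y~_i likewise
    return x_t * (y_t - x_t * beta)
```

These scores Z_i(β) are the building blocks of the empirical likelihood ratio defined next.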
The (log) empirical likelihood ratio statistic is defined as

    LR_n(β) = −Σ_{i=1}^n log(1 + λ Z_i(β)),  Z_i(β) = x̃_i (ỹ_i − x̃_i β),

where λ = λ(β) satisfies

    Σ_{i=1}^n Z_i(β) / (1 + λ Z_i(β)) = 0.

We need the following assumptions:

Assumption 0.1 …, where (j_1, j_2, …, j_n) is any permutation of (1, 2, …, n).
Assumption 0.2 g(·) satisfies the first-order Lipschitz condition on I.
Assumption 0.3 max …
Assumption 0.4 (i) max …
Assumption 0.5 There exists … as u → +∞ uniformly for k ≥ 1, and the spectral density f(ω) of {e_i} satisfies …

The asymptotic property of −2 LR_n(β_0) is given by Theorem 0.1.

Theorem 0.1 Suppose that Assumptions 0.1-0.5 hold. Then −2 LR_n(β_0) converges in distribution to a chi-square distribution.

In Chapter 3, the empirical likelihood method is extended to partial linear models with fixed designs under m-dependent stationary errors. We construct the empirical likelihood ratio test statistic for the parameter. Since this statistic contains an unknown quantity, it is not directly usable; we therefore apply the blockwise empirical likelihood method to construct a new test statistic. Our results show that the new statistic is asymptotically chi-square distributed, so that confidence regions can be constructed accordingly.

Consider the following partial linear model:

    y_i = x_i^τ β + g(t_i) + e_i,  i = 1, …, n,

where y_i is the scalar response, x_i is the p × 1 vector of the i-th fixed design point, t_i is the i-th fixed scalar design point, β is a vector of unknown parameters to be estimated, g(·) is an unknown function defined on a closed interval I of R, and {e_i, 1 ≤ i ≤ n} are stationary m-dependent variables. The (log) empirical likelihood ratio statistic is defined as

    l(β) = 2 Σ_{i=1}^n log(1 + λ(β)^τ Z_i(β)),

where Z_i(β) denotes the auxiliary score of the i-th observation and λ(β) ∈ R^p is determined by

    Σ_{i=1}^n Z_i(β) / (1 + λ(β)^τ Z_i(β)) = 0.

Assume the following conditions:

Assumption 0.6 …
Assumption 0.7 …
Assumption 0.8 …
Assumption 0.9 …
Assumption 0.10 …, where (j_1, j_2, …, j_n) is any permutation of (1, 2, …, n) and ‖·‖ denotes the Euclidean norm.
Assumption 0.11 …
Assumption 0.12 …
Assumption 0.13 …
Assumption 0.14 Let σ_1 and σ_p denote the largest and smallest eigenvalues of A_0, respectively. There exist positive constants C_1 and C_2 such that C_1 ≤ σ_p ≤ σ_1 ≤ C_2.
Assumption 0.15 g(·) satisfies the first-order Lipschitz condition on I.

The first result in this chapter is as follows.

Theorem 0.2 Let β_0 be the true value of β. Suppose that Assumptions 0.6-0.15 hold. Then …

As A_0 and A are unknown, the result of Theorem 0.2 cannot be used in practice. We use the blockwise empirical likelihood to overcome this shortcoming of the ordinary empirical likelihood. The (log) blockwise empirical likelihood ratio statistic l^(1)(β) is defined analogously, with the individual scores replaced by block sums, where t(β) ∈ R^p is determined by the corresponding score equation.

To obtain the large-sample distribution of l^(1)(β_0), we need further assumptions.

Assumption 0.16 …
Assumption 0.17 As n → +∞, there exists A > 0 such that …

The second result in this chapter is as follows.

Theorem 0.3 Under the conditions of Theorem 0.2, and supposing that Assumptions 0.16 and 0.17 hold, l^(1)(β_0) converges in distribution to χ²(p).
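Given the scores Z_i(β) from the earlier sketch, the following Python sketch computes the ordinary (log) empirical likelihood ratio and its blockwise variant. The bracketing of λ uses the constraint 1 + λZ_i > 0, and the block length M is a tuning parameter; with m-dependent errors one would take M > m. Names follow the previous sketch, not the dissertation.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(z):
    """LR(beta) = -sum_i log(1 + lam * z_i); requires both signs in z so
    that zero lies inside the convex hull of the scores."""
    g = lambda lam: np.sum(z / (1.0 + lam * z))
    lo = -1.0 / z.max() + 1e-8       # keep 1 + lam * z_i > 0 for every i
    hi = -1.0 / z.min() - 1e-8
    lam = brentq(g, lo, hi)          # g is strictly decreasing on (lo, hi)
    return -np.sum(np.log1p(lam * z))

def blockwise(z, M):
    """Average the scores over Q = n // M non-overlapping blocks; blocking
    absorbs the serial dependence so the chi-square calibration is restored."""
    Q = len(z) // M
    return z[:Q * M].reshape(Q, M).mean(axis=1)

# -2 * el_log_ratio(z) is referred to a chi-square law; with m-dependent
# errors, use -2 * el_log_ratio(blockwise(z, M)) with block length M > m.
```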
In Chapter 4, using a polynomial decomposition of the linear process and a truncation argument for the MA(∞) process, we study empirical likelihood inference for the partial linear model under MA(∞) errors. We construct the empirical likelihood ratio test statistic for the regression parameter, and our results show that the statistic is asymptotically chi-square distributed when the parameter takes its true value, so that confidence regions can be constructed accordingly.

A partial linear regression model can be written in the form

    y_i = x_i^τ β + g(t_i) + ε_i,  i = 1, …, n.

Here we assume that the error process is a moving-average process of infinite order, denoted MA(∞), which takes the following form for {ε_i}:

    ε_i = Σ_{j=0}^∞ ψ_j e_{i−j},

where the {e_i} are i.i.d. random variables with zero mean and finite variance. We assume that the random errors {ε_i, 1 ≤ i ≤ n} form an MA(∞) process defined by the above equation, and let …

Now consider the ordinary empirical likelihood ratio statistic for β. The (log) empirical likelihood ratio statistic is defined as

    l(β) = 2 Σ_{i=1}^n log(1 + λ(β)^τ Z_i(β)),

where λ(β) ∈ R^p is determined by the corresponding score equation. We need the following assumptions:

Assumption 0.18 There exist functions h_j(·) defined on [0, 1] such that …, where the (u_{i1}, …, u_{ip})^τ = u_i are real vectors satisfying … for some positive definite matrix B, where (j_1, …, j_n) is any permutation of (1, …, n) and ‖·‖ denotes the Euclidean norm; moreover, … is a constant not depending on n.

Assumption 0.19 The functions g(·) and h_j(·) satisfy the Lipschitz condition of order 1; i.e., there exists a constant c_1 such that …

Assumption 0.20 The weight functions W_ni(·) satisfy …

Assumption 0.21 The error process {ε_i}, as modeled by the equation above, satisfies the following conditions: (i) …; (ii) the spectral density function ψ(ω) of {ε_i} is bounded away from zero and infinity, i.e., …, where c_3 and c_4 are constants.

With the assumptions above, we are ready to establish the main result on the asymptotic property of the empirical (log) likelihood ratio.

Theorem 0.4 Let β_0 be the true value of β, and suppose that Assumptions 0.18-0.21 hold. Then l(β_0) converges in distribution to χ²(p).

In Chapter 5, using the martingale central limit theorem and a martingale difference inequality, we study the partial linear model whose errors form an ARCH(1) sequence. Under certain conditions, exploiting the facts that the ARCH(1) sequence is strictly stationary and has a finite fourth moment, we study the properties of the empirical likelihood ratio statistic; our results show that the statistic is asymptotically chi-square distributed when the parameter takes its true value.

Consider the following typical partial linear model:

    y_i = x_i β + g(t_i) + e_i,  i = 1, …, n.

The error process {e_i} follows the ARCH(1) model defined by

    e_i = η_i (a_0 + a_1 e_{i−1}²)^{1/2},

where a_0 > 0, a_1 ≥ 0, {η_i} ~ N(0, 1), and η_i is independent of {e_t, t < i}.

An empirical log-likelihood ratio statistic can be defined as …, where λ = λ(β) is determined by …

We impose the following regularity conditions:

Assumption 0.22 …, where (j_1, j_2, …, j_n) is any permutation of (1, 2, …, n).
Assumption 0.23 g(·) satisfies the first-order Lipschitz condition on I.
Assumption 0.24 …
Assumption 0.25 …
Assumption 0.26 η_i has a density function f(x) that is positive and continuous everywhere over …

With the assumptions above, we are ready to establish the main result on the asymptotic property of the empirical (log) likelihood ratio.

Theorem 0.5 Under Assumptions 0.22-0.26, we have …
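The moment conditions of Chapter 5 are easy to explore numerically. Below is a minimal simulator for ARCH(1) errors following the recursion above; the burn-in length and the classical fourth-moment condition noted in the comment (3 a_1² < 1 under Gaussian innovations) are the only ingredients beyond the abstract itself.

```python
import numpy as np

def simulate_arch1(n, a0, a1, seed=None, burn=500):
    """Simulate e_i = eta_i * sqrt(a0 + a1 * e_{i-1}^2), eta_i ~ N(0, 1).
    With Gaussian innovations the stationary solution has a finite fourth
    moment when 3 * a1**2 < 1, the condition exploited in Chapter 5; the
    burn-in discards the influence of the arbitrary starting value."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n + burn)
    for i in range(1, n + burn):
        e[i] = rng.standard_normal() * np.sqrt(a0 + a1 * e[i - 1] ** 2)
    return e[burn:]

# usage sketch: add simulate_arch1(200, a0=0.5, a1=0.4) to x_i*beta + g(t_i)
# and check the chi-square approximation of the EL statistic by Monte Carlo.
```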
In Chapter 6, we focus on the autoregressive model: we give a semiparametric estimator of the autoregression function and discuss its consistency and asymptotic normality.

Consider the following autoregressive model:

    Y_i = f(Y_{i−1}) + ε_i,  i = 1, 2, …,

where {ε_i} is an i.i.d. sequence with mean 0 and variance σ², and Y_0 is independent of {ε_i}. To estimate f(x), we first prepare a crude guess of its parametric form, denoted by g(x, θ), where g(x, θ) is a known function of x and θ, and we denote the true value of θ by θ_0. We define the estimator θ̂_n as the common conditional least squares (CLS) estimator based on the data Y_0, Y_1, …, Y_n. If g(Y_{i−1}, θ) is twice continuously differentiable with respect to θ a.s. in some neighborhood S of θ_0, then under a variety of conditions the strong consistency and asymptotic normality of the CLS estimator are given by Klimko and Nelson (1978).

Next, we adjust this initial approximation by the semiparametric form f(x) = g(x, θ_0) ξ(x), where ξ(x) is the adjustment factor. We consider …, and we obtain the estimator ξ̂(x) of ξ(x) by minimizing this expression with respect to ξ(x), which gives …; this in turn gives an estimator of f(x). However, that formula is not usable as it stands, because it contains the unknown function f(x). To eliminate the unknown function, we approximate the numerator by …, which yields a nonparametric estimator ξ̂(x) of ξ(x). Finally, the autoregression estimator is obtained as f̂(x) = g(x, θ̂_n) ξ̂(x).

We need the following assumptions:

Assumption 0.27 The sequence (Y_t)_{t∈N} is a stationary ergodic sequence of integrable random variables.
Assumption 0.28 … exist and are continuous for …
Assumption 0.29 …, where g and its partial derivatives are evaluated at θ_0 and Y_{t−1}.
Assumption 0.30 For i, j, k = 1, …, p, there exist functions … such that …
Assumption 0.31 Define the matrices … We assume throughout that V and W are positive definite.
Assumption 0.32 The sequence (Y_t)_{t∈N} is α-mixing.
Assumption 0.33 Y_0 has distribution π(·); the density μ(·) of π(·) exists and is bounded, continuous, and strictly positive in a neighborhood of the point x.
Assumption 0.34 f(x) and g(x, θ) are bounded, continuous with respect to x, and bounded away from 0 in a neighborhood of the point x. We write g_0(x) = g(x, θ_0).
Assumption 0.35 g(x, θ) has a continuous l-th derivative with respect to θ, and the derivative at the point θ_0 is uniformly bounded with respect to x.
Assumption 0.36 The kernel K: R¹ → R⁺ is a compactly supported, symmetric, bounded function such that K > 0 on a set of positive Lebesgue measure.
Assumption 0.37 h_n = β n^{−1/5}, where β > 0.

We have the following results.

Theorem 0.6 Assume 0.27-0.37. Then …
Theorem 0.7 Assume 0.27-0.37. Then …
Theorem 0.8 Assume 0.27-0.37. Denote … Then …
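To illustrate the two-step construction, here is a minimal Python sketch under stated assumptions: a linear pilot g(x, θ) = θx, an Epanechnikov kernel, and a local least-squares form of the adjustment factor with Y_t standing in for the unknown f(Y_{t−1}). These are illustrative choices, not the dissertation's exact specification.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cls_theta(y):
    """Conditional least squares for the pilot g(x, theta) = theta * x
    (an assumed parametric guess): minimize sum_t (Y_t - theta*Y_{t-1})^2."""
    return minimize_scalar(lambda th: np.sum((y[1:] - th * y[:-1]) ** 2)).x

def f_hat(x, y, h):
    """Two-step estimator f_hat(x) = g(x, theta_hat) * xi_hat(x); assumes the
    pilot is bounded away from zero near x (cf. Assumption 0.34)."""
    theta = cls_theta(y)
    g_prev = theta * y[:-1]              # pilot evaluated at Y_{t-1}
    u = (y[:-1] - x) / h                 # h ~ beta * n**(-1/5), Assumption 0.37
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)  # Epanechnikov
    # local least-squares adjustment; Y_t replaces the unknown f(Y_{t-1})
    xi = np.sum(K * y[1:] * g_prev) / np.sum(K * g_prev ** 2)
    return theta * x * xi
```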
Keywords/Search Tags: Partial linear models, fixed design points, empirical likelihood, dependent errors, asymptotic properties, autoregressive models, semiparametric estimation