The empirical likelihood method is a nonparametric technique introduced by Owen (1988). Later, Chen, S.X. and Hall, P. used the smoothed empirical likelihood method to construct confidence regions for quantiles, and Whang (2004) used the smoothed empirical likelihood method to study confidence regions for the regression coefficients of quantile regression models under i.i.d. samples. For dependent samples, however, the problem is more complex. In this paper we discuss confidence regions for the regression coefficient of a linear quantile regression model under strictly stationary $\varphi$-mixing samples; the result is similar to that under i.i.d. samples.

The linear quantile regression model is given by
$$ Y_i = X_i'\beta_0 + U_i, \qquad (1.1) $$
where $Y_i \in \mathbb{R}$ is an observed dependent variable, $X_i$ is an observed $k$-vector of regressors, $\beta_0$ is a $k$-vector of constant parameters, and $U_i$ is an unobserved error satisfying $P(U_i \le 0 \mid X_i) = q$ a.s. for all $i \ge 1$, where $0 < q < 1$. We assume that $(X_1, Y_1), \ldots, (X_n, Y_n)$ are strictly stationary $\varphi$-mixing samples drawn from the population $(X, Y) \in \mathbb{R}^k \times \mathbb{R}$.

To motivate our estimator, consider the following estimating equations:
$$ E\, g(Y_i, X_i, \beta_0) = E\big[\big(I(Y_i \le X_i'\beta_0) - q\big) X_i\big] = 0, \qquad (1.2) $$
where $I(\cdot)$ denotes the indicator function. Note that the estimating equation is not smooth, so we replace the function $g$ with a smooth function. For this purpose, let $K(\cdot)$ denote a kernel function that is bounded, continuous, and compactly supported on $[-1, 1]$. For some $r \ge 2$, $K(\cdot)$ satisfies
$$ \int_{-\infty}^{+\infty} u^j K(u)\, du = \begin{cases} 1, & j = 0, \\ 0, & 1 \le j \le r-1. \end{cases} $$
Define $G(x) = \int_{-\infty}^{x} K(u)\, du$ and $G_h(x) = G(x/h)$, where $\lim_{n\to\infty} h = 0$. Let
$$ Z_i = Z_i(\beta_0) = \big(G_h(X_i'\beta_0 - Y_i) - q\big) X_i. \qquad (1.3) $$
Then the smoothed empirical log-likelihood ratio is defined by
$$ l_1(\beta_0) = -2 \max\Big\{ \sum_{i=1}^{n} \log(n p_i) : p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i Z_i(\beta_0) = 0 \Big\}. $$
This gives the (profile) smoothed empirical log-likelihood ratio statistic
$$ l_1(\beta_0) = 2 \sum_{i=1}^{n} \log\big(1 + t(\beta_0)' Z_i(\beta_0)\big), \qquad (1.4) $$
where $t(\beta_0) \in \mathbb{R}^k$ satisfies
$$ \frac{1}{n} \sum_{i=1}^{n} \frac{Z_i(\beta_0)}{1 + t(\beta_0)' Z_i(\beta_0)} = 0. \qquad (1.5) $$
First, we make the following assumptions.
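As an illustration, the statistic (1.4)-(1.5) can be computed numerically. The sketch below is a minimal construction of our own, not prescribed by the text: the quartic (biweight) kernel, the bandwidth $h = n^{-1/5}$, and the damped-Newton solver are illustrative choices. It forms $Z_i(\beta_0)$ as in (1.3) and finds $t(\beta_0)$ by maximizing the concave dual objective $\sum_i \log(1 + t'Z_i)$, whose first-order condition is exactly (1.5).

```python
import numpy as np

def G(x):
    """G(x) = integral of K(u) du from -infinity to x, for the quartic
    (biweight) kernel K(u) = (15/16)(1 - u^2)^2 on [-1, 1].
    This particular K is an illustrative choice; the text only requires
    K bounded, continuous, and compactly supported on [-1, 1]."""
    x = np.clip(x, -1.0, 1.0)
    return 0.5 + (15.0 * x - 10.0 * x**3 + 3.0 * x**5) / 16.0

def smoothed_Z(beta, X, y, q, h):
    """Z_i(beta) = (G_h(X_i'beta - y_i) - q) X_i with G_h(x) = G(x/h); eq. (1.3)."""
    return (G((X @ beta - y) / h) - q)[:, None] * X

def smoothed_el_stat(Z, iters=50):
    """l_1(beta0) = 2 * sum_i log(1 + t'Z_i), eq. (1.4), with t solving (1.5).
    t is found by damped Newton steps on the concave dual sum_i log(1 + t'Z_i)."""
    n, k = Z.shape
    t = np.zeros(k)
    for _ in range(iters):
        d = 1.0 + Z @ t                      # must stay > 0 for all i
        grad = (Z / d[:, None]).sum(axis=0)  # zero exactly when (1.5) holds
        hess = -(Z.T * (1.0 / d**2)) @ Z     # negative definite
        step = np.linalg.solve(hess, -grad)  # Newton direction
        s = 1.0                              # damping keeps 1 + t'Z_i > 0
        while np.any(1.0 + Z @ (t + s * step) <= 1e-10):
            s *= 0.5
        t = t + s * step
    return 2.0 * np.sum(np.log1p(Z @ t))

# Toy check at the true parameter: median regression (q = 0.5),
# model (1.1) with beta0 = (1, 2) and N(0,1) errors, so P(U <= 0 | X) = 0.5.
rng = np.random.default_rng(0)
n, q = 400, 0.5
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta0 = np.array([1.0, 2.0])
y = X @ beta0 + rng.standard_normal(n)
h = n ** (-1.0 / 5.0)                        # illustrative bandwidth with h -> 0
l1 = smoothed_el_stat(smoothed_Z(beta0, X, y, q, h))
```

Since the dual objective equals zero at $t = 0$, the statistic is always nonnegative, and at the true $\beta_0$ it stays moderate in size.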
(1) $E\|X_1\|^4 < \infty$.

(2) $\sum_{n=1}^{\infty} \varphi^{1/2}(n) < \infty$.

(3) $A_0 = q(1-q)\, E(X_1 X_1') > 0$.

(4) $A_1$ exists, which satisfies $A_1 > 0$ and
$$ q(1-q)E(X_1X_1') + 2\sum_{i=1}^{n-1} E\Big\{ X_1 X_{1+i}'\, E\big[ \big(G_h(X_1'\beta_0 - Y_1) - q\big)\big(G_h(X_{1+i}'\beta_0 - Y_{1+i}) - q\big) \,\big|\, X_1, X_{1+i} \big] \Big\} = A_1 + o(1). $$

(5) Denote by $F(\cdot \mid x)$ the conditional distribution of $U_i$ given $X_i = x$ and by $f(\cdot \mid x)$ the conditional density; $F(0 \mid x) = q$ for almost every $x$; $f(u \mid x)$ exists, is bounded away from zero, and is $r$ times continuously differentiable with respect to $u$.

Theorem 1. If Assumptions (1)-(5) hold, then
$$ l_1(\beta_0) \xrightarrow{\ d\ } \omega' A_0^{-1} \omega \qquad (n \to \infty), $$
where $\omega \sim N_k(0, A_1)$.

As we do not know $A_1$ and $A_0$, the above result cannot be used in practice. We use the blockwise empirical likelihood to overcome this shortcoming of the ordinary empirical likelihood. Let $t = [n^{\alpha}]$, $0 < \alpha \le 1/3$, and $g = [n/t]$; for convenience, let $g = n/t$. Denote
$$ \xi_j = \frac{1}{t} \sum_{i=(j-1)t+1}^{jt} Z_i, \qquad j = 1, \ldots, g. $$
We consider the following blockwise empirical likelihood ratio:
$$ R(\beta_0) = \sup\Big\{ \prod_{j=1}^{g} g p_j : \sum_{j=1}^{g} p_j \xi_j = 0,\ p_j \ge 0,\ \sum_{j=1}^{g} p_j = 1 \Big\}. $$
It is easy to obtain the (log) blockwise empirical likelihood ratio statistic
$$ l(\beta_0) = 2 \sum_{j=1}^{g} \log\big(1 + \lambda(\beta_0)' \xi_j\big), \qquad (1.6) $$
where $\lambda(\beta_0)$ is determined by
$$ \frac{1}{g} \sum_{j=1}^{g} \frac{\xi_j}{1 + \lambda(\beta_0)' \xi_j} = 0. \qquad (1.7) $$
To obtain Theorem 2, we add the following assumption.

(6) $A_1$ exists, which satisfies $A_1 > 0$ and
$$ q(1-q)E(X_1X_1') + 2\sum_{i=1}^{t-1} E\Big\{ X_1 X_{1+i}'\, E\big[ \big(G_h(X_1'\beta_0 - Y_1) - q\big)\big(G_h(X_{1+i}'\beta_0 - Y_{1+i}) - q\big) \,\big|\, X_1, X_{1+i} \big] \Big\} = A_1 + o(1). $$

Theorem 2. Suppose that Assumptions (1)-(6) hold; then
$$ l(\beta_0) \xrightarrow{\ d\ } \chi^2(k) \qquad (n \to \infty). $$
The result of Theorem 2 can be used to construct a confidence region for $\beta_0$:
$$ I_c = \{ \beta_0 : l(\beta_0) \le C_{\alpha} \}, $$
where $C_{\alpha}$ satisfies $P(\chi^2(k) \le C_{\alpha}) = 1 - \alpha$ ($\alpha = 0.05$ or $0.01$).
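To make the construction of $I_c$ concrete, the blockwise statistic (1.6)-(1.7) can be sketched as follows. This is a minimal illustration under assumed inputs: the dependent, mean-zero sequence below is a hypothetical AR(1) stand-in for $Z_i(\beta_0)$ of (1.3) (not a $\varphi$-mixing construction from the text), and the multiplier $\lambda(\beta_0)$ is found by the same damped-Newton idea used for (1.5).

```python
import numpy as np

def blockwise_el_stat(V, iters=50):
    """l(beta0) = 2 * sum_j log(1 + lam'xi_j), eq. (1.6), where lam solves
    (1/g) sum_j xi_j / (1 + lam'xi_j) = 0, eq. (1.7), via damped Newton."""
    g, k = V.shape
    lam = np.zeros(k)
    for _ in range(iters):
        d = 1.0 + V @ lam
        grad = (V / d[:, None]).sum(axis=0)  # zero exactly when (1.7) holds
        hess = -(V.T * (1.0 / d**2)) @ V     # negative definite
        step = np.linalg.solve(hess, -grad)
        s = 1.0                              # damping keeps 1 + lam'xi_j > 0
        while np.any(1.0 + V @ (lam + s * step) <= 1e-10):
            s *= 0.5
        lam = lam + s * step
    return 2.0 * np.sum(np.log1p(V @ lam))

# Hypothetical stand-in for Z_i(beta0): a stationary AR(1) sequence in R^2,
# mean zero under the null, replacing eq. (1.3) for illustration only.
rng = np.random.default_rng(1)
n, k, rho = 1000, 2, 0.3
e = rng.standard_normal((n, k))
Z = np.empty((n, k))
Z[0] = e[0]
for i in range(1, n):
    Z[i] = rho * Z[i - 1] + np.sqrt(1.0 - rho**2) * e[i]

t = int(n ** (1.0 / 3.0))                      # block length t = [n^alpha], alpha = 1/3
g = n // t                                     # number of blocks g = [n/t]
xi = Z[: g * t].reshape(g, t, k).mean(axis=1)  # block means xi_j

l_blk = blockwise_el_stat(xi)                  # l(beta0), eq. (1.6)
# For k = 2, P(chi2(2) <= c) = 1 - exp(-c/2), so the alpha = 0.05 critical
# value is C_alpha = -2 ln(0.05) ~ 5.991.
C_alpha = -2.0 * np.log(0.05)
in_region = bool(l_blk <= C_alpha)             # beta0 in I_c iff l(beta0) <= C_alpha
```

Blocking at length $t = [n^{\alpha}]$ lets the block means $\xi_j$ absorb the serial dependence, which is why the limiting law in Theorem 2 is the pivotal $\chi^2(k)$ rather than the non-pivotal quadratic form of Theorem 1.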