
Linear Empirical Bayes Estimation For Strongly Stationary φ-Mixing Dependent Sequence And Its Application

Posted on: 2005-07-23    Degree: Master    Type: Thesis
Country: China    Candidate: X Liang    Full Text: PDF
GTID: 2120360125465140    Subject: Probability theory and mathematical statistics
Abstract/Summary:
Let (θ, X) be a two-dimensional random vector such that θ possesses a prior distribution G and, given θ, X has a known conditional p.d.f. f(x|θ) with respect to a σ-finite measure μ. We use the squared loss function L(t, θ) = (t − θ)^2, where t(x) is an action. The Bayes risk of t(x) is

R_G(t) = E(t(X) − θ)^2 = ∫∫ (t(x) − θ)^2 f(x|θ) dμ(x) dG(θ).   (1)

The estimator in the class of linear functions A + BX that minimizes (1) is called the linear Bayes estimator of θ and is given by (cf. [6])

t~(X) = Eθ + Cov(θ, X) D^{-1}(X)(X − EX),   (2)

where D(X) denotes the variance of X. If the prior distribution G is unknown, t~(x) cannot be computed from the sample and hence cannot be used as an estimator of θ. According to the empirical Bayes idea, if some "past data" are available, we may obtain enough information about G to construct a class of EB estimators of θ that approximate t~(x) and can be used to estimate θ in its place. H. Robbins ([3], 1983) proposed the method of linear empirical Bayes (LEB) estimation. Since then, LEB has been discussed by many statisticians and has been applied to the multiple linear regression model, but most of those discussions were restricted to independent samples. Wei ([9, 10], 2003) discussed linear empirical Bayes estimation under dependent samples. In this paper we study linear empirical Bayes estimation, and its application, for strongly stationary φ-mixing sequences. Our main assumptions and results are the following.

1. Main assumptions
(a) … are r.v.'s with a common prior distribution;
(b) we use … to denote the present sample, and conditionally on … they have a known p.d.f. with respect to a σ-finite measure …;
(c) for all …, …;
(d) we use … to denote the past samples, and conditionally on … they have the same distributions as the present sample;
(e) … have a common distribution and form a strongly stationary φ-mixing sequence of r.v.'s; conditionally on … the same holds, and both mixing coefficients are …;
(f) … and …;
(g) …, and the mixing coefficient satisfies …;
(h) for the linear regression model …, where y is an observable r.v., x is a covariate, … is a one-dimensional regression coefficient with prior distribution G, … is a one-dimensional unobservable r.v., conditionally on … it has a known p.d.f. …, and … is unknown.

2. Main results
Put … . We divide … into … groups, put …, and suppose that … . A linear empirical Bayes estimator of … is defined as …, where … .

Theorem 1. Suppose that the present sample … and the past samples … are independent, …, these past samples satisfy the conditions described above, and … . Then … .

Theorem 2. Suppose that the present sample … and the past samples … are not independent, …, these past samples satisfy the conditions described above, and … . Then … .

Let … . We divide … into … groups. A linear empirical Bayes estimator of the regression coefficient … is defined as …, where … .

Theorem 3. Suppose that the present sample … and the past samples … are independent, …, these past samples satisfy the conditions described above, and … . Then … .

Theorem 4. Suppose that the present sample … and the past samples … are not independent, …, these past samples satisfy the conditions described above, and … . Then … .
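Remark. The linear Bayes estimator (2) depends on G only through the first two moments of (θ, X), which is what makes the LEB construction possible: EX, D(X) and Cov(θ, X) can be replaced by estimates computed from the past samples. The sketch below is a minimal Python illustration of this plug-in idea under the additional simplifying assumption, made here only for the example and not stated in the thesis, that E(X|θ) = θ with known conditional variance σ^2, so that EX = Eθ, Cov(θ, X) = Var θ and D(X) = Var θ + σ^2. The function name and its arguments are hypothetical.

import numpy as np

def linear_eb_estimate(x_present, past_x, sigma2):
    # Plug-in linear empirical Bayes estimate of theta (illustrative sketch).
    # Assumes E(X|theta) = theta and Var(X|theta) = sigma2 known, so that
    # EX = E(theta), Cov(theta, X) = Var(theta), D(X) = Var(theta) + sigma2.
    # The marginal moments of X are replaced by sample moments of the past
    # data; for dependent (mixing) past samples the same plug-in formula is
    # used, only the convergence-rate analysis changes.
    past_x = np.asarray(past_x, dtype=float)
    ex_hat = past_x.mean()               # estimate of EX (= E(theta))
    dx_hat = past_x.var(ddof=1)          # estimate of D(X) = Var(X)
    # shrinkage factor Cov(theta, X) / D(X) = 1 - sigma2 / D(X), kept in [0, 1]
    b_hat = min(1.0, max(0.0, 1.0 - sigma2 / dx_hat))
    return ex_hat + b_hat * (x_present - ex_hat)

# Usage: past samples driven by a serially dependent (autoregressive) theta
# sequence, used here only as a stand-in for a stationary mixing sequence.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=500)
for i in range(1, 500):
    theta[i] = 0.5 * theta[i - 1] + np.sqrt(0.75) * theta[i]
past = theta + rng.normal(0.0, 1.0, size=500)   # X_i = theta_i + noise, sigma2 = 1
print(linear_eb_estimate(2.0, past, sigma2=1.0))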
Keywords/Search Tags:Strongly Stationary, mixing Sequence, Linear Empirical Bayes Estimation, Asymptotic Optimal Convergence Rate, Linear Regression Modle