
Method Of Estimation Of Scale Parameter Under A Class Of Loss

Posted on: 2008-09-26    Degree: Master    Type: Thesis
Country: China    Candidate: B Xu    Full Text: PDF
GTID: 2120360215952865    Subject: Probability theory and mathematical statistics
Abstract/Summary:
Parameter estimation is a primary topic in estimation theory and an important branch of mathematical statistics. Parameter estimation problems are generally treated under a given loss function; the squared error loss and the absolute error loss are the ones most commonly used. In recent years, entropy-based losses have appeared repeatedly and have been applied extensively to scale parameter estimation for particular distributions. For the scale parameter family c(x,n)θ^{-v}e^{-T(x)/θ}, we study several problems of scale parameter estimation under the symmetric entropy loss L(θ,δ) = v(θ/δ + δ/θ − 2) and the q-symmetric entropy loss L(θ,δ) = θ^q/δ^q + δ^q/θ^q − 2.

In the first chapter, we study the scale parameter estimation problem under symmetric entropy loss. The general and exact forms of the MRE estimator and of the Bayes estimator are obtained. The admissibility and inadmissibility of a class of linear estimators of the form cT(X)+d are discussed. Finally, a minimax estimator of θ is investigated. The primary results are as follows:

Theorem 1.2.1. Let X = (X1, X2, …, Xn) have p.d.f. (1/θ^n)f(x/θ) and let Z = (Z1, Z2, …, Zn), where Zi = Xi/Xn, i = 1, …, n−1, and Zn = Xn/|Xn|. Suppose that there exists an equivariant estimator δ0(X) of θ with finite risk. Then an MRE estimator of θ under symmetric entropy loss exists and is given in explicit form.

Theorem 1.2.2. The Bayes estimator of θ under symmetric entropy loss, say δB(X), is unique and is given in explicit form, provided the Bayes risk of δB(X) is finite.

Following Theorem 1.2.2, the exact form of the Bayes estimator of θ and the exact form of the MRE estimator of θ are obtained.

Theorem 1.3.1. The estimator cT(X)+d is admissible, provided 0 ≤ c < c* and d > 0.

Theorem 1.3.2. The estimator cT(X)+d is admissible, provided c = c* and d > 0.

Theorem 1.3.3. The estimator c*T(X)+d is admissible, provided v > 1+ε for some ε > 0.

Theorem 1.3.4. The estimator cT(X)+d is inadmissible when 0 ≤ c ≠ c* and d = 0.

Theorem 1.3.5. The estimator cT(X)+d is inadmissible when c > c* and d > 0.

Theorem 1.4.1. An explicit estimator is shown to be minimax for θ under symmetric entropy loss.

In the second chapter, we investigate the same problems in detail under q-symmetric entropy loss. The primary results are as follows:

Theorem 2.1.1. Let X = (X1, X2, …, Xn) have p.d.f. (1/θ^n)f(x/θ) and let Z = (Z1, Z2, …, Zn), where Zi = Xi/Xn, i = 1, …, n−1, and Zn = Xn/|Xn|. Suppose that there exists an equivariant estimator δ0(X) of θ with finite risk. Then an MRE estimator of θ under q-symmetric entropy loss exists and is given in explicit form.

Theorem 2.1.2. The Bayes estimator of θ under q-symmetric entropy loss, say δB(X), is unique and is given in explicit form, provided the Bayes risk of δB(X) is finite.

Following Theorem 2.1.2, the exact form of the Bayes estimator of θ and the exact form of the MRE estimator of θ are obtained.

Theorem 2.2.1. The estimator cT(X)+d is admissible, provided 0 ≤ c < c* and d > 0.

Theorem 2.2.2. The estimator cT(X)+d is admissible, provided c = c* and d > 0.

Theorem 2.2.3. The estimator c*T(X)+d is admissible, provided v > 1+ε for some ε > 0.

Theorem 2.2.4. The estimator cT(X)+d is inadmissible when 0 ≤ c ≠ c* and d = 0.

Theorem 2.2.5. The estimator cT(X)+d is inadmissible when c > c* and d > 0.

Theorem 2.3.1. An explicit estimator is shown to be minimax for θ under q-symmetric entropy loss.
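To illustrate how Bayes estimators of the kind summarized in Chapters 1 and 2 arise, the following LaTeX sketch records the standard posterior-minimization argument. It assumes the losses in their unweighted forms L(θ,δ) = θ/δ + δ/θ − 2 and L(θ,δ) = θ^q/δ^q + δ^q/θ^q − 2; the resulting expressions are the generic forms for this class of losses, not the exact estimators derived in the thesis.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Bayes action under symmetric entropy loss: minimize the posterior expected loss
%   rho(delta) = E[ theta/delta + delta/theta - 2 | x ]   over delta > 0.
\[
  \frac{\partial}{\partial\delta}\,
  \operatorname{E}\!\left[\frac{\theta}{\delta}+\frac{\delta}{\theta}-2 \;\middle|\; x\right]
  = -\frac{\operatorname{E}[\theta \mid x]}{\delta^{2}}
    + \operatorname{E}\!\left[\theta^{-1} \mid x\right] = 0
  \;\Longrightarrow\;
  \delta_{B}(x)=\left(\frac{\operatorname{E}[\theta \mid x]}
                           {\operatorname{E}[\theta^{-1}\mid x]}\right)^{1/2}.
\]
% The same argument under q-symmetric entropy loss
% L(theta,delta) = theta^q/delta^q + delta^q/theta^q - 2 gives
\[
  \delta_{B}(x)=\left(\frac{\operatorname{E}[\theta^{q}\mid x]}
                           {\operatorname{E}[\theta^{-q}\mid x]}\right)^{1/(2q)},
\]
% valid whenever the posterior moments E[theta^q | x] and E[theta^{-q} | x] are finite.
\end{document}
```

For conjugate priors on the family c(x,n)θ^{-v}e^{-T(x)/θ}, the posterior moments of θ^q and θ^{-q} are available in closed form, which is one reason explicit Bayes estimators of the kind summarized above are obtainable.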
In the third chapter, using integral-transformation arguments, we discuss the invariance of the Bayes estimator and of admissible estimators of the scale parameter under symmetric entropy loss and q-symmetric entropy loss. The primary results are as follows:

Consider a transformation λ = h(θ) on the parameter space Θ = (0, ∞), where h(θ) satisfies the following conditions: (1) if θ1, θ2 ∈ Θ and θ1 ≠ θ2, then h(θ1) ≠ h(θ2); (2) for every λ ∈ {h(θ) : θ ∈ Θ}, h^{-1}(λ) is differentiable with respect to λ. The parameter θ is transformed into the parameter λ by h(·), and we are interested in the relationship between estimators of θ and estimators of λ. For the distribution family parameterized by λ, symmetric entropy loss and q-symmetric entropy loss with respect to λ can be defined in the same way as the corresponding losses with respect to θ.

Theorem 3.2.1. For a function h(·) as above, if δ(x) is the Bayes estimator of θ with respect to the prior distribution π(θ) under symmetric entropy loss L(θ,δ), then h(δ(x)) is the Bayes estimator of λ with respect to the prior distribution πh(λ) under symmetric entropy loss L(λ,δ′), where πh(λ) is the distribution induced by λ = h(θ).

Theorem 3.2.2. For a function h(·) as above, if δ(x) is an admissible estimator of θ under symmetric entropy loss L(θ,δ), then h(δ(x)) is an admissible estimator of λ under symmetric entropy loss L(λ,δ′).

Theorem 3.3.1. For a function h(·) as above, if δ(x) is the Bayes estimator of θ with respect to the prior distribution π(θ) under q-symmetric entropy loss L(θ,δ), then h(δ(x)) is the Bayes estimator of λ with respect to the prior distribution πh(λ) under q-symmetric entropy loss L(λ,δ′), where πh(λ) is the distribution induced by λ = h(θ).

Theorem 3.3.2. For a function h(·) as above, if δ(x) is an admissible estimator of θ under q-symmetric entropy loss L(θ,δ), then h(δ(x)) is an admissible estimator of λ under q-symmetric entropy loss L(λ,δ′).
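The admissibility-transfer results in Theorems 3.2.2 and 3.3.2 can be understood through a risk identity. The sketch below assumes, purely for illustration, that the loss for λ = h(θ) is obtained from the loss for θ by the pullback L(λ,δ′) := L(h^{-1}(λ), h^{-1}(δ′)); the thesis defines its λ-losses in its own terms, so this is an assumed definition used only to show the mechanism.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Assumed (for illustration) pullback definition of the loss for lambda = h(theta):
%   L(lambda, delta') := L( h^{-1}(lambda), h^{-1}(delta') ).
% With this definition the risk of h(delta(X)) as an estimator of lambda equals
% the risk of delta(X) as an estimator of theta:
\[
  R\bigl(\lambda,\, h\circ\delta\bigr)
  = \operatorname{E}_{\theta}\!\left[L\bigl(h(\theta),\, h(\delta(X))\bigr)\right]
  = \operatorname{E}_{\theta}\!\left[L\bigl(\theta,\, \delta(X)\bigr)\right]
  = R(\theta,\delta),
  \qquad \lambda = h(\theta).
\]
% Hence if some estimator \tilde{\delta} of lambda dominated h o delta, then
% h^{-1} o \tilde{\delta} would dominate delta as an estimator of theta,
% contradicting the admissibility of delta; so h o delta is admissible for lambda.
\end{document}
```

Condition (1) above, the injectivity of h, is what allows h^{-1} to be applied in this argument.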
Keywords/Search Tags: Estimation