
A Jackknife Bayesian Estimation Method

Posted on: 2009-06-03    Degree: Master    Type: Thesis
Country: China    Candidate: L Zhao    Full Text: PDF
GTID: 2120360242480801    Subject: Applied Mathematics
Abstract/Summary:
The jackknife method was introduced by the statistician M. H. Quenouille in 1949 as a way to reduce the bias of estimators. It attracted little attention until 1958, when the American statistician J. W. Tukey refined it; at Tukey's suggestion it came into use as a tool for robust estimation. When a statistic admits no exact confidence interval, or such an interval is difficult to apply, the jackknife can be used to construct an approximate confidence interval. In recent years the method has been widely applied, for example to estimate the volume of forests and the dispersion index of animal populations.

Traditional estimation methods rest on large-sample theory, the usual tools being moment estimation and maximum likelihood estimation. Because statistical distributions are often complex and non-normal, applying these methods to small samples can yield biased point estimates and incorrect interval estimates. The jackknife is a nonparametric method: it is not restricted by the type of the parameter being estimated, it can reduce estimation bias, and it provides approximate confidence intervals for many parameters. The jackknife therefore overcomes the limitations of the traditional methods, reducing the required sample size and the workload while keeping the precision unchanged.

Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$, that is, independent and identically distributed random variables, and let $\theta$ be a parameter of the distribution function $F(x;\theta)$ of $X$. Suppose some conventional method (such as Bayes estimation) yields an estimate $\hat\theta = \hat\theta(X_1, X_2, \ldots, X_n)$. From $\hat\theta$ a jackknife estimator is constructed as follows (a code sketch of this construction is given below):

1. Divide $X_1, X_2, \ldots, X_n$ into $g$ groups, each of length $h$, so that $n = gh$.
2. Delete the $i$-th group and recompute the estimate from the remaining $n - h$ observations; denote it $\hat\theta_{-i}$.
3. Form the pseudo-values $\tilde\theta_i = g\hat\theta - (g-1)\hat\theta_{-i} = \hat\theta + (g-1)(\hat\theta - \hat\theta_{-i})$.
4. The jackknife estimate of $\theta$ is the arithmetic mean of the pseudo-values,
$$J(\hat\theta) = \frac{1}{g}\sum_{i=1}^{g}\tilde\theta_i = g\hat\theta - (g-1)\cdot\frac{1}{g}\sum_{i=1}^{g}\hat\theta_{-i},$$
and $J(\hat\theta)$ is called the J-estimator of $\theta$.

There are two schools in statistics: the frequentist school (also called the classical school) and the Bayesian school. Classical inference is founded on the information in the sample, from which conclusions about the population distribution or population characteristics are drawn; it thus uses two kinds of information, population information and sample information. The Bayesian school holds that, besides these two, prior information should also enter statistical inference. Combining prior information with sample information for inference gives Bayesian analysis without decision-making. If information about consequences is used as well, that is, if a loss function is introduced into statistical decision theory to measure how good or bad each outcome is, the result is Bayesian decision theory.

The basic viewpoint of the Bayesian school is this: any unknown parameter $\theta$ may be regarded as a random variable and described by a probability distribution, called the prior distribution. Once a sample has been observed, the population distribution, the sample, and the prior distribution are combined through Bayes' formula to produce a new distribution of $\theta$, called the posterior distribution. Every statistical decision about $\theta$ should be based on this posterior distribution.
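The grouped jackknife construction above is easy to state in code. The following Python fragment is a minimal sketch added here for illustration, not part of the thesis; the function name `jackknife`, the choice of the sample mean as the statistic, and the group length $h = 3$ in the demonstration are arbitrary choices for the example.

```python
import numpy as np

def jackknife(x, estimator, h=1):
    """Grouped jackknife J-estimator of a statistic.

    x         : 1-D array of observations, length n = g * h
    estimator : function mapping a sample to a scalar estimate
    h         : group length (h = 1 gives the usual delete-one jackknife)
    """
    x = np.asarray(x)
    n = len(x)
    assert n % h == 0, "n must be a multiple of the group length h"
    g = n // h

    theta_hat = estimator(x)                 # estimate from the full sample
    # leave-one-group-out estimates theta_{-i}, i = 1, ..., g
    theta_minus = np.array([
        estimator(np.delete(x, np.arange(i * h, (i + 1) * h)))
        for i in range(g)
    ])
    # pseudo-values: tilde_theta_i = g*theta_hat - (g-1)*theta_{-i}
    pseudo = g * theta_hat - (g - 1) * theta_minus
    return pseudo.mean()                     # J(theta_hat)

# demonstration: jackknifing the sample mean leaves it unchanged
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=12)
print(jackknife(x, np.mean, h=3), x.mean())  # identical values
```

For the sample mean the pseudo-values reduce to the individual group means, so the J-estimator returns the mean itself; the bias-reduction effect appears for statistics that are nonlinear in the data, such as the Bayes estimates considered below.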
Theorem 1. Given a prior distribution $\pi(\theta)$ and the squared-error loss $L(\theta,\delta) = (\delta-\theta)^2$, the Bayes estimate $\delta(x)$ of $\theta$ is the mean of the posterior distribution $\pi(\theta \mid x)$, namely $\delta(x) = E(\theta \mid x)$.

This thesis applies jackknife theory to Bayesian estimation. Under squared-error loss, the Bayes estimates of $\theta$ satisfy the following conclusions (throughout, $\bar X$ denotes the sample mean):

Theorem 2. For any prior distribution $Be(a,b)$ and squared loss $L(\theta,\delta)=(\delta-\theta)^2$, the Jackknife-Bayes estimate of the success probability $\theta$ of a 0-1 distribution is
$$J(\hat\theta_B) = \frac{a(a+b)}{(n+a+b)(n+a+b-1)} + \frac{n^2+2na+2nb-n-a-b}{(n+a+b)(n+a+b-1)}\,\bar X.$$

Theorem 3. For any prior distribution $I\Gamma(\alpha,\beta)$ and squared loss $L(\theta,\delta)=(\delta-\theta)^2$, the Jackknife-Bayes estimate of the parameter $\theta$ of the exponential distribution $F(x,\theta) = 1 - e^{-x/\theta}$ $(x>0)$ is
$$J(\hat\theta_B) = \frac{(\alpha-1)\beta}{(n+\alpha-1)(n+\alpha-2)} + \frac{n^2+2n\alpha-3n-\alpha+1}{(n+\alpha-1)(n+\alpha-2)}\,\bar X.$$

Theorem 4. For any prior distribution $\Gamma(\alpha,\beta)$ and squared loss $L(\theta,\delta)=(\delta-\theta)^2$, the Jackknife-Bayes estimate of the parameter $\theta$ of the Poisson distribution is
$$J(\hat\theta_B) = \frac{\alpha\beta}{(\beta+n)(\beta+n-1)} + \frac{n^2+2n\beta-n-\beta}{(\beta+n)(\beta+n-1)}\,\bar X.$$

Theorem 5. For any prior distribution $N(\mu_0,\sigma^2)$ and squared loss $L(\theta,\delta)=(\delta-\theta)^2$, the Jackknife-Bayes estimate of the parameter $\theta$ of the Gaussian distribution $N(\theta,\sigma_0^2)$ is
$$J(\hat\theta_B) = \frac{\sigma_0^4\mu_0}{(n\sigma^2+\sigma_0^2)(n\sigma^2+\sigma_0^2-\sigma^2)} + \frac{(n^2\sigma^2-n\sigma^2+2n\sigma_0^2-\sigma_0^2)\sigma^2}{(n\sigma^2+\sigma_0^2)(n\sigma^2+\sigma_0^2-\sigma^2)}\,\bar X.$$

Theorem 6. Under the $Be(a,b)$ prior and squared loss of Theorem 2, the Jackknife-Bayes estimate of the success probability $\theta$ of the 0-1 distribution is asymptotically unbiased.

Theorem 7. Under the same assumptions as Theorem 6, that estimate is asymptotically normally distributed.

Theorem 8. Under the $I\Gamma(\alpha,\beta)$ prior and squared loss of Theorem 3, the Jackknife-Bayes estimate of the parameter $\theta$ of the exponential distribution $F(x,\theta) = 1 - e^{-x/\theta}$ $(x>0)$ is asymptotically unbiased.

Theorem 9. Under the same assumptions as Theorem 8, that estimate is asymptotically normally distributed.

Theorem 10. Under the $\Gamma(\alpha,\beta)$ prior and squared loss of Theorem 4, the Jackknife-Bayes estimate of the Poisson parameter $\theta$ is asymptotically unbiased.

Theorem 11. Under the same assumptions as Theorem 10, that estimate is asymptotically normally distributed.

Theorem 12. Under the $N(\mu_0,\sigma^2)$ prior and squared loss of Theorem 5, the Jackknife-Bayes estimate of the parameter $\theta$ of the Gaussian distribution $N(\theta,\sigma_0^2)$ is asymptotically unbiased.

Theorem 13. Under the same assumptions as Theorem 12, that estimate is asymptotically normally distributed.
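As a quick check on Theorem 2 (an illustration added here, not part of the thesis), the following Python sketch computes the Jackknife-Bayes estimate of a Bernoulli success probability in two ways: directly, as the delete-one ($g = n$, $h = 1$) jackknife of the posterior mean, and through the closed form of the theorem. The helper `bayes_bernoulli`, the simulated data, and the prior parameters $a = 2$, $b = 3$ are arbitrary choices for the example.

```python
import numpy as np

# posterior mean of theta under a Be(a, b) prior and squared-error loss
def bayes_bernoulli(x, a, b):
    return (a + x.sum()) / (a + b + len(x))

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.4, size=20).astype(float)
n, a, b = len(x), 2.0, 3.0

# delete-one jackknife of the Bayes estimate
theta_hat = bayes_bernoulli(x, a, b)
theta_minus = np.array([bayes_bernoulli(np.delete(x, i), a, b)
                        for i in range(n)])
j_direct = (n * theta_hat - (n - 1) * theta_minus).mean()

# closed form from Theorem 2
denom = (n + a + b) * (n + a + b - 1)
j_formula = (a * (a + b)
             + (n**2 + 2*n*a + 2*n*b - n - a - b) * x.mean()) / denom

print(j_direct, j_formula)  # agree up to floating-point rounding
```

The same pattern verifies Theorems 3-5 by swapping in the posterior means for the exponential, Poisson, and Gaussian models.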
Keywords/Search Tags: Estimation