
Limit Theory for Sub-linear Expectations and Its Applications

Posted on: 2015-01-07    Degree: Doctor    Type: Dissertation
Country: China    Candidate: J Chen    Full Text: PDF
GTID: 1260330431955175    Subject: Probability theory and mathematical statistics
Abstract/Summary:
Ever since the definition of capacity was introduced by Choquet [13], it has been a heated scientific subject worldwide. In many application fields, such as finance, economics and robust statistics, traditional additive probability measures fail to provide adequate information to describe or interpret uncertain phenomena accurately. Therefore, in some areas, the apparently natural assumption that random variables are governed by a precise probability has been abandoned in favor of non-additive/imprecise probabilities (such as lower-upper probabilities) or nonlinear expectations (for example Choquet expectations, sublinear expectations, lower-upper expectations). Indeed, as early as 1954, Keynes [24] had discovered this problem and constructed a theory of imprecise probability. Meanwhile, capacities, i.e. non-additive probability measures, are a powerful tool for modeling uncertainty when the assumption of additivity is suspect (e.g., Augustin [1], Maccheroni and Marinacci [27], Doob [17], Schmeidler [33]). Later, motivated by mathematical finance and robust statistics, the frequentist properties of random variables under non-additive (lower and upper) probabilities attracted more and more attention. As is well known, the law of large numbers (LLN) plays an important role in probability theory and mathematical statistics, and various generalizations of (strong) LLNs for non-additive probabilities have been established in the literature. They fall into two groups: one is the nonadditive probability group, which uses nonadditive (imprecise) probabilities to describe the frequentist properties of random variables; the other is the nonlinear expectation group, which uses nonadditive (lower and upper) expectations for the same purpose.
Although the two groups are equivalent in the framework of linear probability theory, they are totally different in the nonlinear case, in the sense that a nonlinear expectation usually cannot be determined uniquely by the corresponding nonlinear probability (see, for example, [7], [12]). In the nonadditive probability group, the earlier papers are by Dow and Werlang [18] and Walley and Fine [36], while more recent results are due to Cooman and Miranda [15], Epstein and Schneider [19], Marinacci [28], Maccheroni and Marinacci [27], Chen and Wu [10], Chen, Wu and Li [11] and Teran [34]. Under various assumptions on nonadditive probabilities, they proved that the frequency obtained from a large number of trials is no longer close to a single expected value but, under the lower probability, approaches an interval of possible expected values as more trials are performed. In the nonlinear expectation group, Peng was the first to introduce the notions of g-expectation and G-expectation. Inspired by g-expectation, Peng [29,30,31] introduced a new notion of independent and identically distributed (IID) random variables under sub-linear expectations, often called Peng independence. Under some assumptions on nonadditive expectations, he proved LLNs by using partial differential equations (PDEs). Comparing the results obtained by the two groups with the classical LLN, one finds that the weakening of the axiomatic properties of probability and expectation has been balanced by extra technical assumptions on the state space and/or the nonadditive probabilities and/or the random variables. A natural question arises: can we extend the LLN to the sub-linear expectation case by the traditional pure probabilistic (Lindeberg-Feller) method, without using characteristic functions or PDEs? The answer is affirmative. In this paper, we first extend the classical LLN to the Choquet expectation case with event independence under capacity.
Then we extend the LLN to the sub-linear expectation case with convolutionary random variables. In both cases, we establish an equivalence theorem connecting the "limit distribution theorem" with the "weak LLN under capacity". The proofs proposed here depend only on elementary probabilistic techniques, such as Taylor expansions and the basic properties of sub-linear expectations, without using any artificial tools like characteristic functions or PDEs. In this sense, our LLN is a natural extension of the classical LLN. Further, compared with the existing literature, our non-additive version of the LLN weakens the assumptions of the theorems: for example, we weaken the moment condition on the random variables and replace the independence condition by the weaker notion of φ-convolution of random variables. Meanwhile, the Ellsberg model satisfies the assumptions of our theorems. This paper is divided into 4 chapters. In Chapter 1, under event independence for capacity, we establish a law of large numbers for Choquet expectations induced by 2-alternating capacities. In Chapter 2, we first introduce the notion of convolutionary random variables under sub-linear expectations, then prove LLNs under both sub-linear expectations and capacities. In Chapter 3, we give three application examples of our LLNs, especially the Ellsberg model (urn model with ambiguity). The corresponding estimate of the convergence error of our LLNs is given in Chapter 4. Notations: Suppose that Ω is a state space and F is a σ-field on Ω. A function X: Ω → R is called a random variable on the measurable space (Ω, F) if X is F-measurable. Let H be a subset of all random variables on (Ω, F). Let C_b(R) be the set of all bounded continuous real-valued functions on R, and C_b^k(R) the set of bounded, k-times continuously differentiable functions with bounded derivatives of all orders less than or equal to k.
C_b^+(R) denotes the set of all non-negative monotonic functions in C_b(R). For given finite constants μ̲ and μ̄, set D_n := {y = (y_1, y_2, …, y_n): y_i ∈ [μ̲, μ̄], i = 1, 2, …, n}. (I) In Chapter 1, we study the law of large numbers under Choquet expectations. As is well known, the definitions and properties of capacities/Choquet expectations are quite similar to those of probabilities/linear expectations; thus, Choquet theory can be viewed as a bridge connecting traditional probability theory with the newly arising capacity theory. Between 1999 and 2005, Maccheroni and Marinacci [27, 28] introduced event independence under capacity, in accordance with the probability case. Due to this similarity, we extend the classical LLN starting from the Choquet expectation case, and then consider the more general and complex case: the sub-linear expectation case. Reviewing the literature on LLNs under Choquet expectations, different authors propose different assumptions. The main patterns of proof fall into 2 modes. One is the indirect method: for instance, Chareka [3] turned the non-additive Choquet integral into an additive Lebesgue-Stieltjes integral and then derived the LLN under Choquet expectations from the existing properties of the Lebesgue-Stieltjes integral. The other is the direct method: for instance, Li and Chen [26] obtained an LLN by proving a Chebyshev inequality and a Borel-Cantelli lemma under capacity, still following the proof pattern of the traditional LLN. This raises a question: with event independence under capacity, can we extend the LLN to Choquet expectations in a pure probabilistic (Lindeberg-Feller) way? The answer is affirmative. Recall that the keys to proving the classical LLN are the additivity of probabilities/expectations as well as moment conditions on the random variables. However, Choquet expectations happen to be non-additive.
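To make the Choquet expectation concrete, here is a minimal numerical sketch (an illustration, not part of the thesis): for a discrete random variable we compute C_V[X] layer by layer under a distortion capacity V(A) = g(P(A)). The concave distortion g (the square root here is an assumed choice) makes V 2-alternating and the induced Choquet expectation sub-additive, which is exactly the setting adopted in Chapter 1.

```python
import math

def choquet_expectation(values, probs, g=math.sqrt):
    """Choquet expectation C_V[X] of a discrete random variable X under the
    distortion capacity V(A) = g(P(A)); for concave g, V is 2-alternating."""
    pairs = sorted(zip(values, probs))          # outcomes x_(1) <= ... <= x_(n)
    xs = [x for x, _ in pairs]
    ps = [p for _, p in pairs]
    # Layer-cake form: C_V[X] = x_(1) + sum_i (x_(i) - x_(i-1)) * V(X >= x_(i))
    expectation = xs[0]
    tail = 1.0
    for i in range(1, len(xs)):
        tail -= ps[i - 1]                       # P(X >= x_(i))
        expectation += (xs[i] - xs[i - 1]) * g(tail)
    return expectation

# The identity distortion recovers the ordinary linear expectation E[X],
# while the concave sqrt distortion inflates it: C_V is an *upper* expectation.
linear = choquet_expectation([1, 2, 3], [0.2, 0.3, 0.5], g=lambda t: t)
upper = choquet_expectation([1, 2, 3], [0.2, 0.3, 0.5])
```

The layer-cake computation is the standard definition of the Choquet integral specialized to finitely many outcomes; the non-additivity shows up in `upper` strictly exceeding `linear`.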
To overcome this non-additivity, we adopt a 2-alternating capacity, under which the induced upper Choquet expectation becomes sub-additive; we then show that the empirical average of a sequence {X_i}_{i=1}^∞ of independent and identically distributed (IID) random variables converges in distribution. Moreover, our moment condition on the random variables is weaker than in the existing literature.
Lemma 1.3.1. Let V be a 2-alternating capacity defined on F, and let C_V, C_v be the induced upper and lower Choquet expectations, respectively. Let {X_i}_{i=1}^∞ be a sequence of independent random variables. Then for any monotonic φ ∈ C_b(R) and any constants y_i ∈ R:
Lemma 1.3.2. Let V be a 2-alternating capacity and C_V, C_v the induced upper and lower Choquet expectations, respectively. Let {X_i}_{i=1}^∞ be a sequence of identically distributed random variables with C_V[X_i] = μ̄ and C_v[X_i] = μ̲, satisfying C_V[|X_i|] < ∞ for i ≥ 1. Then for each function φ ∈ C_b^2(R), there exists a positive constant b_n(ε) with b_n(ε) → 0 as n → ∞, such that
(I) Σ_{i=1}^n sup_{x∈R} {C_V[φ(x + X_i/n)] − φ(x)} ≤ sup_{x∈R} G(φ′(x), μ̄, μ̲) + b_n(ε);
(II) Σ_{i=1}^n inf_{x∈R} {C_v[φ(x + X_i/n)] − φ(x)} ≥ inf_{x∈R} G(φ′(x), μ̄, μ̲) − b_n(ε),
where G(x, y, z) := x⁺y − x⁻z.
Lemma 1.3.3. Let G(x, y, z) be the function defined in Lemma 1.3.2, that is, G(x, y, z) := x⁺y − x⁻z. Then for any monotonic φ ∈ C_b(R),
(I) inf_{y∈D_n} sup_{x∈R} G(φ′(x), μ̄ − (1/n)Σ_{i=1}^n y_i, μ̲ − (1/n)Σ_{i=1}^n y_i) = 0;
(II) inf_{y∈D_n} inf_{x∈R} G(φ′(x), μ̄ − (1/n)Σ_{i=1}^n y_i, μ̲ − (1/n)Σ_{i=1}^n y_i) = 0.
Our new LLNs under Choquet expectations are stated as follows.
Theorem 1.4.1 (Limit Distribution Theorem). Let V be a 2-alternating capacity defined on F, and C_V, C_v the induced upper and lower Choquet expectations, respectively. Let {X_i}_{i=1}^∞ be a sequence of IID random variables on (Ω, F) with C_V[X_i] = μ̄ and C_v[X_i] = μ̲. Assume that C_V[|X_i|] < ∞ for i ≥ 1. Set S_n := Σ_{i=1}^n X_i.
Then for each monotonic function φ ∈ C_b(R):
(I) lim_{n→∞} C_V[φ(S_n/n)] = sup_{μ̲≤x≤μ̄} φ(x);
(II) lim_{n→∞} C_v[φ(S_n/n)] = inf_{μ̲≤x≤μ̄} φ(x).
The limit distribution theorem indicates that the limit distribution of the empirical average of the random variables is the maximal distribution.
Theorem 1.4.2 (Weak LLN under capacity). Let V be a 2-alternating capacity defined on F, and C_V, C_v the induced upper and lower Choquet expectations, respectively. Let v(A) := C_v[I_A], ∀A ∈ F. Let {X_i}_{i=1}^∞ be a sequence of IID random variables with C_V[X_i] = μ̄ and C_v[X_i] = μ̲. Assume that C_V[|X_i|] < ∞ for i ≥ 1. Set S_n := Σ_{i=1}^n X_i. Then for any function φ ∈ C_b^+(R) and any ε > 0:
lim_{n→∞} v(μ̲ − ε ≤ S_n/n ≤ μ̄ + ε) = 1.
Theorem 1.4.3 (Equivalence Theorem). Let V be a 2-alternating capacity defined on F, and C_V, C_v the induced upper and lower Choquet expectations, respectively. Given a function φ ∈ C_b^+(R), let {X_i}_{i=1}^∞ be a sequence of IID random variables with C_V[X_i] = μ̄ and C_v[X_i] = μ̲. Assume that C_V[|X_i|] < ∞ for i ≥ 1. Let S_n := Σ_{i=1}^n X_i. Then the following are equivalent:
(A) For any ε > 0, with v(A) := C_v[I_A], ∀A ∈ F: lim_{n→∞} v(μ̲ − ε ≤ S_n/n ≤ μ̄ + ε) = 1.
(B) For any φ ∈ C_b(R): lim_{n→∞} C_V[φ(S_n/n)] = sup_{μ̲≤x≤μ̄} φ(x).
The equivalence theorem states that if the convergence result holds for any monotonic function φ ∈ C_b(R), then it still holds for any φ ∈ C_b(R).
Theorem 1.4.4 (LLN under Choquet expectations). Let V be a 2-alternating capacity defined on F, and C_V, C_v the induced upper and lower Choquet expectations, respectively. Let {X_i}_{i=1}^∞ be a sequence of IID random variables with C_V[X_i] = μ̄ and C_v[X_i] = μ̲. Assume that C_V[|X_i|] < ∞ for i ≥ 1. Set S_n := Σ_{i=1}^n X_i. Then for each function φ ∈ C_b(R):
lim_{n→∞} C_V[φ(S_n/n)] = sup_{μ̲≤x≤μ̄} φ(x) and lim_{n→∞} C_v[φ(S_n/n)] = inf_{μ̲≤x≤μ̄} φ(x).
Remark 1.4.5. Further, the identical distribution condition in the above theorems can be weakened to a "finite common first moment condition", that is, for 1 ≤ i ≤ n: C_V[X_i] = C_V[X_1], C_v[X_i] = C_v[X_1]; C_V[|X_i|] = C_V[|X_1|], C_v[|X_i|] = C_v[|X_1|] < ∞.
(II) In Chapter 2, we prove the LLNs under sub-linear expectations by a pure probabilistic method, without using characteristic functions or PDEs.
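In display form, the maximal-distribution limit of Theorem 1.4.1 (rewritten here in standard LaTeX notation, with \underline{\mu} and \overline{\mu} the lower and upper Choquet means) reads:

```latex
\lim_{n\to\infty} \mathbb{C}_V\!\left[\varphi\!\Big(\tfrac{S_n}{n}\Big)\right]
   = \sup_{\underline{\mu}\le x\le \overline{\mu}} \varphi(x),
\qquad
\lim_{n\to\infty} \mathbb{C}_v\!\left[\varphi\!\Big(\tfrac{S_n}{n}\Big)\right]
   = \inf_{\underline{\mu}\le x\le \overline{\mu}} \varphi(x).
```

That is, the empirical average no longer concentrates at a single mean; its limit distribution is carried by the whole interval [\underline{\mu}, \overline{\mu}].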
It is a natural extension of the traditional LLNs. We obtain four principal results in this chapter. (1) We explore the limit distribution of the general Ellsberg-type model mentioned above and show that its limit distribution is a maximal distribution. (2) We extend the Ellsberg-type model to a more general case and obtain a sufficient condition on the random variables and sub-linear expectations under which the empirical average of the random variables has the same limit distribution as the Ellsberg-type model. (3) With a new notion of φ-convolution for sub-linear expectations and random variables, we show that the maximal distribution and the weak LLN are equivalent. (4) We compare our results with those appearing in the articles mentioned above.
Lemma 2.3.1. Let E be a sub-linear expectation and ε its conjugate expectation. Given a function φ ∈ C_b(R), let {X_i}_{i=1}^∞ be a sequence of φ-convolutionary random variables under E. Then for any constants y_i ∈ R, i = 1, 2, …, n:
If the sub-linear E is the upper expectation operator generated by a set P of probability measures, that is, E[·] := sup_{Q∈P} E_Q[·], then Lemma 2.3.1 can be restated as follows.
Lemma 2.3.2. Let P be a set of probability measures and {X_i}_{i=1}^∞ a sequence of random variables that are independent under each Q ∈ P, where E_Q is the linear expectation with respect to the probability Q.
Then for any constants y_i ∈ R, i = 1, 2, …, n, and any function φ ∈ C_b(R):
The following lemma is a key to the proof of our LLN. Condition (0.2) can be viewed as the Lindeberg condition under the sub-linear expectation E.
Lemma 2.3.3. Let E be a sub-linear expectation and ε its conjugate expectation. Suppose that {X_i}_{i=1}^∞ is a sequence of random variables with finite common first moment and E[X_i] = μ̄, ε[X_i] = μ̲, i ≥ 1. Moreover, if condition (0.2) holds for any ε > 0, then for any monotonic function φ ∈ C_b^2(R), we have
(I) lim_{n→∞} inf_{y∈D_n} Σ_{i=1}^n sup_{x∈R} {E[φ(x + (X_i − y_i)/n)] − φ(x)} = 0;
(II) lim_{n→∞} inf_{y∈D_n} Σ_{i=1}^n inf_{x∈R} {E[φ(x + (X_i − y_i)/n)] − φ(x)} = 0.
(III) Moreover, if E[·] and ε[·] are the upper and lower expectations on the set P of probability measures, then for any monotonic function φ ∈ C_b^2(R):
We are now ready to prove the LLN for sub-linear expectations and capacities. Similar to the structure of Chapter 1, we first present the Ellsberg LLN (Theorem 2.4.1), then the sublinear LLN (Theorem 2.4.2).
Theorem 2.4.1 (Ellsberg LLN). Given a set P of probability measures, let E, ε be the upper and lower expectations of E_Q over P, respectively. Assume that for any Q ∈ P, {X_i}_{i=1}^∞ is a sequence of independent random variables under Q, and {X_i}_{i=1}^n has the finite common first moment μ̄ := E[X_i] and μ̲ := ε[X_i], such that condition (0.2) holds. Set S_n := Σ_{i=1}^n X_i. Then:
(I) For any monotonic φ ∈ C_b(R):
(II) For φ ∈ C_b^+(R), set V(A) := sup_{Q∈P} Q(A) and v(A) := inf_{Q∈P} Q(A), ∀A ∈ F; then for any ε > 0:
Next, we extend the Ellsberg LLN with event independence to the LLN under a sub-linear expectation E with φ-convolutionary random variables. It is worth noting that the limit distribution of the empirical average under E is still a maximal distribution.
Theorem 2.4.2 (LLN for sub-linear expectations). Let E be a sub-linear expectation and ε its conjugate expectation.
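The operator E[·] := sup_{Q∈P} E_Q[·] and its conjugate ε[·] := inf_{Q∈P} E_Q[·] are easy to illustrate on a finite state space with a finite ambiguity set P. The following sketch (illustrative only, with hypothetical numbers) also checks the defining sub-additivity E[X + Y] ≤ E[X] + E[Y]:

```python
def expectation(q, x):
    """Linear expectation E_Q[X] on a finite state space."""
    return sum(p * v for p, v in zip(q, x))

def upper(P, x):
    """Sub-linear (upper) expectation E[X] = sup over the ambiguity set P."""
    return max(expectation(q, x) for q in P)

def lower(P, x):
    """Conjugate (lower) expectation eps[X] = -E[-X] = inf over P."""
    return min(expectation(q, x) for q in P)

# Two-state example: the "red" proportion is only known to lie in [1/4, 1/2].
P = [(p, 1.0 - p) for p in (0.25, 0.3, 0.4, 0.5)]
X = (1.0, 0.0)                               # indicator of drawing red
mu_bar, mu_low = upper(P, X), lower(P, X)    # 0.5 and 0.25

# Sub-additivity: E[X + Y] <= E[X] + E[Y] (strict here, so E is truly nonlinear).
Y = (0.0, 1.0)
XY = tuple(a + b for a, b in zip(X, Y))
assert upper(P, XY) <= upper(P, X) + upper(P, Y)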
Assume that {X_i}_{i=1}^∞ is a sequence of random variables with finite common first moment E[X_i] = μ̄ and ε[X_i] = μ̲.
(I) Given a monotonic φ ∈ C_b(R), if {X_i}_{i=1}^∞ is a sequence of φ-convolutionary random variables under E, then:
(II) If for any φ ∈ C_b^+(R), {X_i}_{i=1}^∞ is a sequence of φ-convolutionary random variables under E, then for any ε > 0 and for v(A) := ε[I_A], ∀A ∈ F, we have:
Since the function φ in both LLNs is limited to monotonic φ ∈ C_b(R), we need the following equivalence theorem to pass from monotonic φ ∈ C_b(R) to all φ ∈ C_b(R).
Theorem 2.4.3 (Equivalence Theorem under E). Let E be a sub-linear expectation and ε its conjugate expectation. Given a function φ ∈ C_b^+(R), assume that {X_i}_{i=1}^∞ is a sequence of φ-convolutionary random variables with finite common first moment E[X_i] = μ̄ and ε[X_i] = μ̲. If condition (0.2) holds, then the following are equivalent:
(A2) For any ε > 0, with v(A) := ε[I_A], ∀A ∈ F:
(B2) For any φ ∈ C_b(R):
We then restate the Ellsberg LLN and the LLN under E, respectively.
Theorem 2.4.4. Under the same conditions as Theorem 2.4.1, the conclusion holds for any function φ ∈ C_b(R).
Theorem 2.4.5. Under the same conditions as Theorem 2.4.2, given any function φ ∈ C_b(R), if {X_i}_{i=1}^∞ is a sequence of φ-convolutionary random variables under E, then the conclusion holds.
Remark 2.4.6. Suppose {X_i}_{i=1}^∞ has moments of order higher than 1, that is, E[|X_i|^β] < ∞ for some β > 1. Note that condition (0.2) for both Theorem 2.4.5 and Lemma 2.3.3 holds if sup_{i≥1} E[|X_i|^β] < ∞. Thus, by modifying the proof of Lemma 3.9 in Peng [32], the condition φ ∈ C_b(R) in Theorem 2.4.5 can be extended to the case where φ is continuous with the growth condition φ(x) ≤ (1 + |x|^{α−1}).
(III) In Chapter 3, we study application examples of the LLNs.
Example 3.1 (Urn model with ambiguity). Suppose there are countably infinitely many urns, ordered and indexed by the set N := {1, 2, …}. An agent is told that the i-th urn contains 100i (100 times i) balls, each either red or black, and that the number of red balls in the i-th urn is between 25i and 50i. The agent is told nothing about these urns beyond this information.
Only ONE ball will be drawn sequentially from each urn. As usual, let X_i be the number of red balls in the i-th draw. Then, by Theorem 2.4.4, the average number of red balls in n experiments obeys the following distribution:
Example 3.2 (Pricing of a European option). Let {B_t}_{t≥0} be a Brownian motion on a probability space (Ω, F, P), and let {S_t}_{t≥0} be the price of a stock evolving via geometric Brownian motion: dS_t = μS_t dt + σS_t dB_t. Pricing a European option in incomplete markets is then sometimes reduced to computing option prices ranging between an upper expectation e^{μ+σk} and a lower expectation e^{μ−σk} of its future payoff φ(S_T) := (S_T − L)^+. Then, by Theorem 2.4.5, the option price obeys the following rule:
(IV) In Chapter 4, we estimate the convergence error of the LLNs.
Theorem 4.1. Let E be a sub-linear expectation and ε its conjugate expectation. Assume that {X_i}_{i=1}^∞ is a sequence of random variables with sup_{1≤i≤n} E[|X_i|²] < ∞. Then the corresponding estimate holds, where μ* := |μ̄| ∨ |μ̲|.
When the moment condition is weakened to sup_{1≤i≤n} E[|X_i|^{1+α}] < ∞, 0 < α < 1, see Theorem 4.2 for the corresponding estimates.
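As a numerical companion to Example 3.1 (a sketch under assumed numbers, not from the thesis): normalize each draw to the indicator of "red", so its mean is only known to lie in [1/4, 1/2]. Even when the underlying red-ball proportion p_i is chosen adversarially within that band from draw to draw, the empirical average stays inside [μ̲, μ̄] = [1/4, 1/2] in the limit, as the LLN predicts.

```python
import random

def empirical_average(n, choose_p, seed=0):
    """Average of n Bernoulli draws whose success probability p_i = choose_p(i)
    may vary arbitrarily within the ambiguity band [1/4, 1/2]."""
    rng = random.Random(seed)
    return sum(rng.random() < choose_p(i) for i in range(n)) / n

n = 200_000
avg_low = empirical_average(n, lambda i: 0.25)   # concentrates near mu_low = 1/4
avg_high = empirical_average(n, lambda i: 0.50)  # concentrates near mu_bar = 1/2
# An oscillating selection keeps the average strictly inside the interval:
avg_mix = empirical_average(n, lambda i: 0.25 if i % 2 else 0.50)
```

No single limit point exists in general: which point of [1/4, 1/2] the average approaches depends on how the ambiguity resolves, which is precisely the maximal-distribution phenomenon.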
Keywords/Search Tags: Choquet expectations, Sub-linear expectations, Law of large numbers, Maximal distribution, Convolution, Event independence