In 1933, Andrey Kolmogorov published his book Foundations of the Theory of Probability (in German, Grundbegriffe der Wahrscheinlichkeitsrechnung), which established the modern axiomatic foundations of probability theory. Given a probability measure P on a measurable space (Ω, F), the expectation E_P[X] of an F-measurable random variable X is defined as the integral ∫_Ω X dP. Clearly, E_P[·] is a linear functional, owing to the additivity of the probability measure P. However, a great number of uncertain phenomena cannot be well modeled by such a linear probability or expectation.

A very interesting problem is to develop nonlinear expectations and the related conditional expectations. The notions of capacity and of the Choquet expectation (or Choquet integral) were introduced by Choquet [17] and have been widely used in potential theory (e.g., Choquet [17], Doob [34]) and decision theory (e.g., Schmeidler [108], Gilboa and Schmeidler [48]). But, to the best of our knowledge, the notion of conditional Choquet expectation is not yet well understood, and it has hardly been used to treat dynamic problems in economics. Another important nonlinear expectation, the g-expectation, was introduced via BSDEs in Peng [85]. It is an ideal framework for the valuation of randomness and risk under uncertainty about the probability model (e.g., Chen and Epstein [12], Frittelli and Rosazza Gianin [38], Peng [88]). However, one important limitation of the g-expectation is that the uncertain probability measures involved must be absolutely continuous with respect to a reference probability measure, e.g., a Wiener measure. For the well-known problem of volatility model uncertainty in finance, there are uncountably many unknown probability measures which are mutually singular. Avellaneda et al. [3] and Lyons [73] studied this volatility uncertainty problem in the case of state-dependent options. The path-dependent case is more challenging and calls for a new framework, more general than the classical notion of probability.

Fully nonlinear expectations for such path-dependent situations were constructed by Peng in [88,90], where two very different approaches were introduced to solve the associated problem of dynamic consistency. The first is a generalized dynamic programming principle for the path-dependent situation. The second uses the notion of nonlinear monotone semigroups of Nisio's type (see Nisio [78,79]), called nonlinear Markov chains, to develop a nonlinear version of the Kolmogorov consistency theorem and thereby construct nonlinear expectation spaces, which play the same important role as in classical probability theory.

The most typical example of the above-mentioned fully nonlinear expectations is the G-expectation, first introduced by Peng [92] in 2006. In fact, the G-expectation is also a typical example of a sublinear expectation, which retains all the nice properties of a linear expectation except linearity. The notions of distribution and independence play an important role in the whole theory. One pioneering step of Peng was to define distribution and independence directly through the sublinear expectation E[·] rather than through capacity, which might seem the more natural way to generalize them. Based on these new notions, Peng introduced the most important distribution, the G-normal distribution, which can also be characterized by the so-called G-heat equation.
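Before turning to the precise definitions used later, here is a minimal numerical sketch of an upper expectation taken over a finite set of probability models. The two models and the test functions are hypothetical and only illustrate that monotonicity, constant preservation and positive homogeneity survive while additivity weakens to sub-additivity; they are not part of the thesis.

```python
import numpy as np

# Upper (sublinear) expectation over a finite, hypothetical set of probability models:
# E[f] := max over models of the linear expectation of f, estimated here by Monte Carlo.

rng = np.random.default_rng(0)
n = 200_000

samples = {
    "P1": rng.normal(0.0, 1.0, size=(n, 2)),   # model P1: (X, Y) i.i.d. N(0, 1)
    "P2": rng.normal(0.0, 2.0, size=(n, 2)),   # model P2: (X, Y) i.i.d. N(0, 4)
}

def upper_E(f):
    """Sublinear expectation: supremum of the linear expectations over the model set."""
    return max(f(s[:, 0], s[:, 1]).mean() for s in samples.values())

phi = lambda x, y: np.abs(x)      # test function of X
psi = lambda x, y: -np.abs(y)     # test function of Y

# Sub-additivity: E[phi + psi] <= E[phi] + E[psi], here with a strict gap.
print("E[phi + psi]    =", upper_E(lambda x, y: phi(x, y) + psi(x, y)))
print("E[phi] + E[psi] =", upper_E(phi) + upper_E(psi))
# Positive homogeneity and constant preservation still hold:
print("E[2*phi] =", upper_E(lambda x, y: 2 * phi(x, y)), " vs 2*E[phi] =", 2 * upper_E(phi))
print("E[3]     =", upper_E(lambda x, y: np.full_like(x, 3.0)))
```

Running the script shows a strict gap between E[φ + ψ] and E[φ] + E[ψ]; this is exactly the loss of linearity discussed above.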
The notions of G-expectation and G-Brownian motion can be regarded as nonlinear generalizations of the Wiener measure and of classical Brownian motion. These notions, the corresponding limit theorems (law of large numbers and central limit theorem), and the stochastic calculus of Itô's type with respect to G-Brownian motion were introduced and systematically developed in Peng [92-102]. Recently, many authors have given a number of generalizations of Peng's initial work. As for the law of large numbers, Chen [11], Chen and Wu [14] and Chen et al. [15] studied the strong law of large numbers, which generalizes the "weak" law of large numbers in Peng [93,96]. After Peng first established the central limit theorem on sublinear expectation spaces under independent and identically distributed (i.i.d. for short) assumptions in [93], many authors generalized this result by dropping the identical distribution assumption while keeping independence; see Li and Shi [66], Hu and Zhang [51], Hu [50], Hu and Zhou [58], etc. Itô's calculus under the framework of G-expectation theory, especially Itô's formula, was generalized by Gao [39] and Zhang et al. [123], among others. Further developments of sublinear expectation theory and G-expectation theory can be found in Bai and Buckdahn [6], Bai and Lin [7], Chen and Hu [13], Denis et al. [30], Dolinsky et al. [33], Epstein and Ji [35], Gao [42], Gao and Jiang [41,42], Gao and Xu [43,44], Hu [52,53], Hu et al. [54,55], Hu and Peng [56,57], Lin [71], Lin [72], Nutz [80], Nutz and van Handel [81], Nutz and Zhang [82], Peng et al. [103], Soner et al. [112], Song [113-116], Xu and Zhang [122], etc.

The starting point of this thesis is a little different from Peng's initial work. We take the sublinear expectation E_𝒫[·] to be the upper expectation of a set 𝒫 of probability measures P defined on some measurable space (Ω, B(Ω)), which allows us to study the properties of E_𝒫[·] conveniently through the known properties of the linear expectations E_P[·]. We also study the notion of independence from a new viewpoint, defining it via classical conditional expectations. These formulations allow us to generalize the corresponding limit theorems and Itô's calculus in Peng [92-100] and in other authors' work to our setting. Chapter 1 to Chapter 3 of this thesis will focus on these topics. Chapter 4 is the highlight of this thesis. We study several properties of sublinear expectations and of the G-expectation, including the strict comparison theorem, additivity, the Wasserstein distance, duality, domination and optimal transportation; although these properties are well known, and some of them even obvious, in classical probability theory, our results are non-trivial extensions of the classical ones and will be used in Chapters 5 and 6.

As an application of G-expectation theory, we introduce the notion of continuous martingale of maximal variation (CMMV for short) and the problem of maximal variation of martingales in Chapter 5. Roughly speaking, the problem of maximal variation of martingales is the following: given a real-valued function M defined on △(R^d) and a probability measure μ ∈ △(R^d), we aim to maximize a functional called the M-variation over the set of all R^d-valued martingales of length n whose terminal distribution is Blackwell dominated by μ. This problem generalizes the problem of maximal L1-variation introduced in Mertens and Zamir [77]. The most general form has been studied in De Meyer [26] in the one-dimensional case and then generalized by Gensbittel [46] to the multi-dimensional case.
We give a new and simple proof of this result based on the material of Chapter 1 to Chapter 4. Both papers [26] and [46] studied only the centered case, i.e., the functional M is defined on the set of probability measures with zero mean; we generalize their results to the non-centered case, which turns out to be very useful when we study games with transaction costs in the next chapter. In Chapter 6, we study a general class of repeated games with incomplete information on one side à la Aumann and Maschler [4], first introduced by De Meyer [25] as a financial exchange game and then generalized by Gensbittel [45,47] to a multi-dimensional context. These games are closely related to the notion of CMMV and to the maximal variation of martingales of Chapter 5. We also systematically study two particular game models and obtain explicit Nash equilibria for them. These models show that the CMMV is a very robust dynamic in the stock market. We point out that the contents of Chapter 5 and Chapter 6 are only a first attempt to study game theory from the viewpoint of G-expectation; many interesting problems remain for future research.

This thesis consists of six chapters. In the following, we list its main results.

(Ⅰ) In Chapter 1, we study random walks under uncertainty and the corresponding limit theorems.

We generalize the classical Bernoulli random walk and the simple random walk to the uncertainty case. "Uncertainty" means that the underlying probability measure is not unique; instead, one is given a whole set of probability measures. Let (Ω, B(Ω)) be a measurable space and 𝒫 a set of probability measures defined on (Ω, B(Ω)). Given a random variable X, following Peng [93], the distribution of X under 𝒫 is the functional from C_{b,Lip}(R) to R given by F_X(φ) := sup_{P∈𝒫} E_P[φ(X)], where C_{b,Lip}(R) is the space of all bounded Lipschitz functions on R. For simplicity of notation, we write E_𝒫[·] instead of sup_{P∈𝒫} E_P[·]. The notion of independence under 𝒫 defined in Peng [93] reads as follows.

Definition 1.4  Let {X_i}_{i=1}^∞ be a sequence of random variables on (Ω, B(Ω)). {X_i}_{i=1}^∞ is said to be independent under 𝒫 if, for each n∈N, X_n is independent of (X_1,…,X_{n-1}) under 𝒫, i.e., ∀φ ∈ C_{b,Lip}(R^n),
E_𝒫[φ(X_1,…,X_{n-1},X_n)] = E_𝒫[ E_𝒫[φ(x_1,…,x_{n-1},X_n)]|_{(x_1,…,x_{n-1})=(X_1,…,X_{n-1})} ].

We give a new definition of independence via classical conditional expectations.

Definition 1.5  Let {X_i}_{i=1}^∞ be a sequence of random variables on (Ω, B(Ω)). Given a set 𝒫 of probability measures on (Ω, B(Ω)), {X_i}_{i=1}^∞ is said to be weakly independent under 𝒫 if, for each n∈N, X_n is weakly independent of (X_1,…,X_{n-1}) under 𝒫, which means that
(1) ∀P∈𝒫, ∀φ∈C_{b,Lip}(R), E_P[φ(X_n)|X_1,…,X_{n-1}] ≤ E_𝒫[φ(X_n)], P-a.s.;
(2) ∀φ∈C_{b,Lip}(R), there exists P∈𝒫, depending on φ, such that E_P[φ(X_n)|X_1,…,X_{n-1}] = E_𝒫[φ(X_n)], P-a.s.

The following theorem gives the relation between weak independence and Peng's independence.

Theorem 1.7  Let {X_i}_{i=1}^∞ be a sequence of weakly independent random variables under 𝒫. Define the set 𝒫* by
𝒫* = {P : ∀φ∈C_{b,Lip}(R), ∀n∈N, E_P[φ(X_n)|X_1,…,X_{n-1}] ≤ E_𝒫[φ(X_n)], P-a.s.}.
Then {X_i}_{i=1}^∞ is independent under 𝒫* in the sense of Definition 1.4, and E_{𝒫*}[φ(X_i)] = E_𝒫[φ(X_i)], ∀i∈N, ∀φ∈C_{b,Lip}(R).

We prove the law of large numbers for the Bernoulli random walk under an i.i.d. assumption, and then generalize it without the i.i.d. assumption. The following theorem is the general version.

Theorem 1.9  Let {X_k}_{k=1}^∞ be a sequence of random variables on the measurable space (Ω, B(Ω)) and let 𝒫 be the set of all probability measures P on (Ω, B(Ω)) such that, ∀n∈N,
μ̲ ≤ E_P[X_n|X_1,…,X_{n-1}] ≤ μ̄  and  E_P[|X_n|^q | X_1,…,X_{n-1}] ≤ K^q  P-a.s.,
where μ̲, μ̄, K, q are constants and q > 1. Then we have:
(i) for each μ ∈ [μ̲, μ̄], there exists P_μ ∈ 𝒫 such that
(ii) for each P ∈ 𝒫,
(iii) for each φ ∈ C_{b,Lip}(R),
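To illustrate the kind of statement Theorem 1.9 is about, the following Monte-Carlo sketch simulates sample averages under several admissible priors whose conditional means are chosen adaptively within [μ̲, μ̄]. It only visualizes the phenomenon (the averages accumulate inside [μ̲, μ̄], with the endpoints attained by the two extreme priors); it is not the theorem itself, and the Gaussian noise and all numerical values are hypothetical choices.

```python
import numpy as np

# Sample averages S_n / n under mean uncertainty: at each step the conditional mean may be
# any (adapted) value in [mu_low, mu_bar].  Illustrative choices only.

rng = np.random.default_rng(3)
mu_low, mu_bar = -0.5, 1.0
n = 20_000

def sample_average(select_mean):
    """S_n / n along one path of one prior; select_mean(step, running_sum) in [mu_low, mu_bar]."""
    s = 0.0
    for k in range(n):
        m = select_mean(k, s)                 # adapted choice of the conditional mean
        s += m + rng.normal(0.0, 1.0)         # bounded-variance noise around that mean
    return s / n

always_low  = lambda k, s: mu_low
always_high = lambda k, s: mu_bar
adaptive    = lambda k, s: mu_low if s > 0 else mu_bar   # a history-dependent selection

print("extreme prior (always mu_low):", sample_average(always_low))
print("extreme prior (always mu_bar):", sample_average(always_high))
print("adaptive prior:               ", sample_average(adaptive))
```

All three averages fall inside [μ̲, μ̄] up to fluctuations of order 1/√n, with the two extreme priors sitting near the endpoints.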
We also consider the central limit theorem for the simple random walk. The notion of G-normal distribution plays an important role in the central limit theorem. In this chapter, the G-normal distribution is defined through the solution of the G-heat equation.

Definition 1.10  We say that ξ is G-normal distributed, denoted by ξ ~ N(0, [σ̲², σ̄²]), where 0 ≤ σ̲ ≤ σ̄, if the distribution of ξ is given by F_ξ(φ) = u_φ(0,1), ∀φ ∈ C_{b,Lip}(R), where u_φ(t,x) is the unique viscosity solution of the G-heat equation ∂_t u − G(∂²_{xx}u) = 0, u|_{t=1} = φ, with G(α) = ½(σ̄²α⁺ − σ̲²α⁻). (A numerical sketch based on this characterization is given after Theorem 2.17 below.)

We list only two central limit theorems; in fact, they are equivalent in some sense. The second one, in the multi-dimensional case, will be used in Chapter 5. We denote the distribution of the G-normal random variable ξ by E_G[φ(ξ)] := F_ξ(φ).

Theorem 1.14  Let {X_i}_{i=1}^∞ be a sequence of random variables on the measurable space (Ω, B(Ω)) and let 𝒫 be the set of all probability measures on (Ω, B(Ω)) such that, ∀P ∈ 𝒫, ∀i ∈ N,
(1) E_P[X_i|X_1,…,X_{i-1}] = 0;
(2) σ̲² ≤ E_P[X_i²|X_1,…,X_{i-1}] ≤ σ̄²;
(3) E_P[|X_i|^q|X_1,…,X_{i-1}] ≤ K^q.
We denote S_n = Σ_{i=1}^n X_i. If q > 2 and 0 < σ̲ ≤ σ̄ ≤ K < ∞, then, ∀φ ∈ C_{b,Lip}(R),
lim_{n→∞} E_𝒫[φ(S_n/√n)] = E_G[φ(ξ)],
where ξ ~ N(0, [σ̲², σ̄²]).

Let M_n^q(Σ, K) be the set of n-stage R^d-valued martingales on some probability space (Ω, B(Ω), P) satisfying the following conditions:
(i) E_P[S_n] = 0;
(ii) E_P[(S_{k+1}−S_k)(S_{k+1}−S_k)^T | S_1,…,S_k] ∈ Σ, 0 ≤ k ≤ n−1, where Σ is a bounded, convex and closed subset of S_+(d);
(iii) E_P[||S_{k+1}−S_k||^q] ≤ K, 0 ≤ k ≤ n−1.
Let V_n[φ] := sup_{S∈M_n^q(Σ,K)} E_P[φ(S_n/√n)].

Theorem 1.17  Assume q > 2. Let ξ be G-normal distributed, ξ ~ N(0, Σ), under the G-expectation E_G. Then, for every φ ∈ C(R^d) satisfying the growth condition |φ(x)| ≤ C(1+|x|^p) with 1 ≤ p ≤ q,
lim_{n→∞} V_n[φ] = E_G[φ(ξ)].

In the last section of Chapter 1, we give the approximation of G-Brownian motion by the simple random walk. Let {S_n}_{n=1}^∞ be a discrete-time process; the continuous-time representation of S is given by S_t = S_n + (t−n)(S_{n+1}−S_n), n ≤ t < n+1.

Theorem 1.19  Let {S_n}_{n=1}^∞ be a simple random walk with variance uncertainty under 𝒫. We define W_t^{(n)} := S_{nt}/√n, t ≥ 0. Then W^{(n)} converges weakly to G-Brownian motion, namely, for each k ∈ N, each 0 ≤ t_1 < t_2 < … < t_k and each φ ∈ C_{b,Lip}(R^k),
lim_{n→∞} E_𝒫[φ(W_{t_1}^{(n)},…,W_{t_k}^{(n)})] = E_G[φ(B_{t_1},…,B_{t_k})],
where (B_t)_{t≥0} is a G-Brownian motion with E_G[B_1²] = E_𝒫[S_1²] and E_G[−B_1²] = E_𝒫[−S_1²].

(Ⅱ) In Chapter 2, we study limit theorems on sublinear expectation spaces.

Let 𝒫 be a set of probability measures on the measurable space (Ω, B(Ω)). The sublinear expectation E_𝒫[·], the upper probability V(·) and the lower probability v(·) are respectively defined by
E_𝒫[X] := sup_{P∈𝒫} E_P[X],  V(A) := sup_{P∈𝒫} P(A),  v(A) := inf_{P∈𝒫} P(A),  A ∈ B(Ω).
We introduce the notions of product independence and sum independence, which are weaker than Peng's independence in Peng [93].

Definition 2.14  Suppose that X_1, X_2,…, X_n is a sequence of real measurable random variables on (Ω, B(Ω)).
(i) X_n is said to be product independent of (X_1,…,X_{n-1}) if for all nonnegative bounded Lipschitz functions φ_k, k = 1,…,n,
(ii) X_n is said to be sum independent of (X_1,…,X_{n-1}) if for each φ ∈ C_{b,Lip}(R),

The following law of large numbers extends those in Peng [93,96,98], Chen [11], Chen and Wu [14], and Chen et al. [15].

Theorem 2.17  Let {X_k}_{k=1}^∞ be a sequence of random variables satisfying sup_{k≥1} E_𝒫[|X_k|^q] < ∞ for some q > 1, and E_𝒫[X_k] ≡ μ̄, −E_𝒫[−X_k] ≡ μ̲, k = 1,2,…. Set S_n = Σ_{k=1}^n X_k.
(i) If {X_k}_{k=1}^∞ is product independent, then
(ii) If {X_k}_{k=1}^∞ is both product and sum independent, then
(iii) If {X_k}_{k=1}^∞ is sum independent and V(·) is upper continuous, then
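The G-normal distribution of Definition 1.10, which also appears as the limit in Theorems 1.14, 1.17 and 2.32, can be evaluated numerically through the G-heat equation. The sketch below uses the commonly used initial-value form ∂_t u = G(∂²_{xx} u), u(0,·) = φ, for which E_G[φ(ξ)] = u(1,0); the explicit finite-difference scheme, the truncated domain and the test function are illustrative choices and are not taken from the thesis.

```python
import numpy as np

# Explicit finite-difference sketch for the G-heat equation in initial-value form:
#   du/dt = G(d2u/dx2),  u(0, x) = phi(x),  G(a) = 0.5 * (sig_bar**2 * a^+ - sig_low**2 * a^-),
# so that E_G[phi(xi)] is approximated by u(1, 0) for xi ~ N(0, [sig_low^2, sig_bar^2]).

sig_low, sig_bar = 0.5, 1.0          # assumed volatility bounds
phi = lambda x: np.abs(x)            # Lipschitz test function (unbounded, but the domain is truncated)

L, nx = 8.0, 801                     # truncated spatial domain [-L, L]
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / sig_bar**2        # explicit-scheme stability: sig_bar^2 * dt / dx^2 <= 1/2
nt = int(np.ceil(1.0 / dt))
dt = 1.0 / nt                        # land exactly on t = 1

G = lambda a: 0.5 * (sig_bar**2 * np.maximum(a, 0.0) - sig_low**2 * np.maximum(-a, 0.0))

u = phi(x)
for _ in range(nt):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * G(uxx)              # boundary values stay frozen (crude truncation)

print("E_G[|xi|] ~", u[nx // 2])
```

For the convex test function φ(x) = |x| the output should be close to σ̄·√(2/π) ≈ 0.798 for these assumed bounds, since a convex payoff is evaluated at the upper volatility.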
About the central limit theorem on sublinear expectation spaces: Peng [93] first proved it under i.i.d. assumptions, and it was then generalized by Li and Shi [66], Hu and Zhang [51], Hu [50] and Hu and Zhou [58] without the assumption of identical distribution. However, all of these results require independence of the random variables. We consider a weaker condition, called m-dependence, and prove the corresponding central limit theorem. This result has been accepted by Acta Mathematicae Applicatae Sinica, English Series.

Definition 2.31  The sequence {X_i}_{i=1}^∞ is called m-dependent if there exists an integer m such that, for every n and every j ≥ m+1, (X_{n+m+1},…,X_{n+j}) is independent of (X_1,…,X_n). In particular, if m = 0, then {X_i}_{i=1}^∞ is an independent sequence.

Theorem 2.32  Let {X_i}_{i=1}^∞ be a sequence of m-dependent random variables such that
and E_𝒫[|X_i|^{2+α}] ≤ M for i = 1,2,…, where α > 0 and M is a constant. Let S_n = Σ_{i=1}^n X_i; then we have
where ξ ~ N(0, [σ̲², σ̄²]).

(Ⅲ) In Chapter 3, we study Itô's calculus without quasi-continuity, and we obtain a general form of Itô's formula as well as the solvability of stochastic differential equations with locally Lipschitz coefficients.

All the existing research on Itô's calculus with respect to G-Brownian motion is based on the spaces of stochastic processes M_G^p(0,T), p ≥ 1 (see Peng [92,95,97,98,100], Gao [39], Zhang et al. [123]), which are built from random variables with quasi-continuity. But the notion of stopping time, so important in classical stochastic analysis, is not compatible with such quasi-continuity, and it is difficult to treat stopping times on the space M_G^p(0,T). Moreover, the existing Itô formula requires a C²-function satisfying certain growth conditions. In order to overcome these difficulties, in this chapter we introduce a larger space of stochastic processes, M_*^p(0,T), built from random variables that need not be quasi-continuous. We can then define Itô's integral on this larger space M_*^p(0,T) and consider Itô integrals on stopping-time intervals, which yields an Itô integral for a "locally integrable" space M_ω^p(0,T). This new formulation permits us to obtain Itô's formula for a general C^{1,2}-function, which essentially generalizes the previous results of Peng [92,95,97,98,100] as well as those of Gao [39] and Zhang et al. [123]. This result was published in Stochastic Processes and their Applications 121 (7), 1492-1508, jointly with Prof. Peng Shige.

Theorem 3.41  Let Φ ∈ C^{1,2}([0,T]×R^n) and
X_t^ν = X_0^ν + ∫_0^t α_s^ν ds + ∫_0^t η_s^{νij} d⟨B^i,B^j⟩_s + ∫_0^t β_s^{νj} dB_s^j,  ν = 1,…,n,
where α^ν, η^{νij} ∈ M_ω^1(0,T) and β^{νj} ∈ M_ω^2(0,T). Then for each t ∈ [0,T], Itô's formula for Φ(t, X_t) holds quasi-surely.

In the last section of this chapter, we consider the following stochastic differential equation driven by a d-dimensional G-Brownian motion:
X_t = X_0 + ∫_0^t b(s,X_s) ds + ∫_0^t h_{ij}(s,X_s) d⟨B^i,B^j⟩_s + ∫_0^t σ_j(s,X_s) dB_s^j,  t ∈ [0,T],   (1)
where b(·,·), h_{ij}(·,·), σ_j(·,·): [0,T]×R → R are continuous functions, X_0 is a constant, and repeated indices i, j = 1,…,d are summed. We introduce the following conditions:
(H1) Boundedness condition: for any s ∈ [0,T],
(H2) Lipschitz condition: for any x, y ∈ R and s ∈ [0,T], max{|b(s,x)−b(s,y)|, |h_{ij}(s,x)−h_{ij}(s,y)|, |σ_j(s,x)−σ_j(s,y)|} ≤ K|x−y|.
(H3) Locally Lipschitz condition: for all x, y ∈ R with |x|, |y| ≤ R and any s ∈ [0,T], max{|b(s,x)−b(s,y)|, |h_{ij}(s,x)−h_{ij}(s,y)|, |σ_j(s,x)−σ_j(s,y)|} ≤ K_R|x−y|.
(H4) Growth condition: for any x ∈ R and any s ∈ [0,T], xb(s,x) ≤ K(1+x²), xh_{ij}(s,x) ≤ K(1+x²), |σ_j(s,x)|² ≤ K(1+x²).
The first theorem concerns the existence and uniqueness of the solution in the space M_*^2(0,T); the second studies the solvability of the SDE with locally Lipschitz coefficients.

Theorem 3.44  Let conditions (H1) and (H2) hold. Then there exists a unique continuous process X ∈ M_*^2(0,T) satisfying (1).

Theorem 3.45  Let conditions (H3) and (H4) hold. Then there is a unique continuous adapted solution X of SDE (1).
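To see what the SDE (1) looks like under a single admissible prior, recall that each prior can be described by an adapted volatility process (σ_t) with values in [σ̲, σ̄]; under such a prior, d⟨B⟩_t = σ_t² dt and dB_t = σ_t dW_t for a classical Brownian motion W. The following one-dimensional Euler-scheme sketch simulates (1) along one randomly chosen scenario; the coefficients and all numerical choices are hypothetical, and the sketch does not capture the quasi-sure character of the solution across all priors.

```python
import numpy as np

# One-dimensional Euler scheme for
#   dX_t = b(t, X_t) dt + h(t, X_t) d<B>_t + s(t, X_t) dB_t,   X_0 = x0,
# simulated under ONE admissible prior: an adapted volatility sig_t in [sig_low, sig_bar],
# for which d<B>_t = sig_t^2 dt and dB_t = sig_t dW_t.  Coefficients are illustrative.

rng = np.random.default_rng(1)
sig_low, sig_bar = 0.5, 1.0
T, n_steps, x0 = 1.0, 1000, 1.0
dt = T / n_steps

b = lambda t, x: -0.5 * x                 # dt coefficient
h = lambda t, x: 0.1 * np.sin(x)          # d<B> coefficient
s = lambda t, x: 1.0 + 0.2 * np.cos(x)    # dB coefficient

def simulate_one_path():
    """Euler scheme along one randomly chosen admissible volatility scenario."""
    x = x0
    for k in range(n_steps):
        t = k * dt
        sig = rng.uniform(sig_low, sig_bar)      # adapted volatility choice in [sig_low, sig_bar]
        dW = rng.normal(0.0, np.sqrt(dt))
        dB = sig * dW                            # increment of the scenario martingale B
        dqv = sig**2 * dt                        # increment of the quadratic variation <B>
        x = x + b(t, x) * dt + h(t, x) * dqv + s(t, x) * dB
    return x

terminal = np.array([simulate_one_path() for _ in range(1000)])
print("X_T under this single prior: mean =", terminal.mean(), " std =", terminal.std())
```

Different admissible volatility scenarios generate different priors; the G-framework treats all of them simultaneously, quasi-surely.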
(Ⅳ) In Chapter 4, we study properties of sublinear expectations and of the G-expectation, including the strict comparison theorem, additivity, the Wasserstein distance, duality, domination and optimal transportation.

In order to study these properties, we divide the chapter into five sections. In Section 4.1, we study the strict comparison theorem. We list only two important theorems from this section.

Theorem 4.4  Let X, Y ∈ L_c^1(Ω) with X ≤ Y q.s. If
then E_𝒫[X] < E_𝒫[Y].

Theorem 4.9  Let σ̲ > 0 and let X, Y ∈ L_ip(Ω) be of the form X = φ(B_{t_1}, B_{t_2}−B_{t_1},…,B_{t_n}−B_{t_{n-1}}) and Y = ψ(B_{t_1}, B_{t_2}−B_{t_1},…,B_{t_n}−B_{t_{n-1}}), where φ(x) ≤ ψ(x), ∀x ∈ R^n. Then E_G[X] < E_G[Y] if and only if there exists x_0 ∈ R^n such that φ(x_0) < ψ(x_0).

In Section 4.2, we study the additivity of the G-expectation. Let ξ be G-normal distributed, ξ ~ N(0, [σ̲², σ̄²]), with 0 < σ̲ < σ̄. The following two theorems are the main results of this section.

Theorem 4.13  Assume φ, ψ ∈ C_{b,Lip}(R). Then E_G[φ(ξ) + ψ(ξ)] = E_G[φ(ξ)] + E_G[ψ(ξ)] if and only if ∂²_{xx}u_φ(t,x) · ∂²_{xx}u_ψ(t,x) ≥ 0, ∀(t,x) ∈ (0,1)×R, where u_φ and u_ψ are the solutions of the G-heat equation associated with φ and ψ respectively.

Theorem 4.16  If there exist x_0 and θ > 0 such that φ, ψ ∈ C²((x_0−θ, x_0+θ)) and φ″(x_0)ψ″(x_0) < 0, then E_G[φ(ξ) + ψ(ξ)] < E_G[φ(ξ)] + E_G[ψ(ξ)].

In Section 4.3, we compare the G-expectation with the Choquet expectation and provide some interesting examples. We show that the G-expectation is always dominated by the corresponding Choquet expectation. The main theorem of this section is the following.

Theorem 4.26  The G-expectation E_G[·] can be represented by the Choquet expectation E_C[·] (i.e., E_G[X] = E_C[X] for all X ∈ L_G^1(Ω)) if and only if E_G is linear (i.e., σ̲ = σ̄).

In Section 4.4, we generalize the classical Wasserstein distance and related properties to the sublinear expectation setting. Let 𝒫_1 and 𝒫_2 be two nonempty, weakly compact and convex sets of probability measures. We define the Hausdorff-Wasserstein distance W_p(𝒫_1, 𝒫_2) between 𝒫_1 and 𝒫_2 in terms of the classical Wasserstein distance W_p(P_1, P_2) between two probability measures P_1 and P_2. We first give the Kantorovich-Rubinstein duality formula in the sublinear case.

Theorem 4.30  W_1(𝒫_1, 𝒫_2) = sup_{||φ||_{Lip} ≤ 1} |E_{𝒫_1}[φ] − E_{𝒫_2}[φ]|, where ||φ||_{Lip} ≤ 1 means that the Lipschitz constant of φ is at most 1.

It is well known that weak convergence of probability measures is equivalent to convergence in the Wasserstein distance; we give a non-trivial extension of this result to the setting of sublinear expectations.

Definition 4.31  A sequence {E_{𝒫_n}}_{n=1}^∞ of sublinear expectations is said to converge weakly to E_𝒫, or equivalently {𝒫_n}_{n=1}^∞ converges weakly to 𝒫, if for each φ ∈ C_{b,Lip}(Ω), lim_{n→∞} E_{𝒫_n}[φ] = E_𝒫[φ].

Theorem 4.35  Let {𝒫_n}_{n=1}^∞ be a sequence of convex and weakly compact sets of probability measures satisfying sup_n E_{𝒫_n}[|X|^{1+α}] < ∞ for some α > 0, and let 𝒫 be a convex and weakly compact set of probability measures. Then the following statements are equivalent:
(i) 𝒫_n converges weakly to 𝒫;
(ii) W_1(𝒫_n, 𝒫) → 0.

In Section 4.5, we carry the notions of duality, domination and optimal transportation over from the classical case to the sublinear expectation setting. We first introduce the notion of Fenchel duality and related properties, which will be used in Chapter 6 when we study the dual game. We also recall the classical notion of Blackwell domination and obtain a domination theorem for sublinear expectations.

Theorem 4.47  Let 𝒫_1 and 𝒫_2 be two convex and weakly compact sets of probability measures on (Ω, B(Ω)). Then the following statements are equivalent:
(i) E_{𝒫_1}[·] is dominated by E_{𝒫_2}[·];
(ii) 𝒫_1 ⊆ 𝒫_2.
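The classical distance W_p(P_1, P_2) entering the Hausdorff-Wasserstein distance of Section 4.4 and Theorem 4.30 is easy to compute for one-dimensional discrete measures. The sketch below evaluates W_1 for two hypothetical measures with SciPy and cross-checks it against the one-dimensional Kantorovich-Rubinstein representation W_1(P_1, P_2) = ∫|F_1 − F_2| dx; how these pairwise distances are combined over the sets 𝒫_1 and 𝒫_2 follows the definition given in Section 4.4 and is not reproduced here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two hypothetical discrete probability measures on R (not from the thesis).
x1, p1 = np.array([0.0, 1.0, 3.0]), np.array([0.2, 0.5, 0.3])
x2, p2 = np.array([0.5, 2.0, 4.0]), np.array([0.4, 0.4, 0.2])

# Classical W_1 via SciPy.
w1_scipy = wasserstein_distance(x1, x2, u_weights=p1, v_weights=p2)

# One-dimensional Kantorovich-Rubinstein representation: W_1 = integral of |F1 - F2| dx.
grid = np.linspace(-1.0, 5.0, 6001)
F1 = np.array([p1[x1 <= t].sum() for t in grid])
F2 = np.array([p2[x2 <= t].sum() for t in grid])
w1_cdf = np.abs(F1 - F2).sum() * (grid[1] - grid[0])   # Riemann-sum approximation

print("W_1 via scipy:        ", w1_scipy)
print("W_1 via CDF integral ~", w1_cdf)
```

Both numbers agree up to the discretization of the CDF integral, which is the classical fact behind the sublinear duality of Theorem 4.30.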
At the end of Section 4.5, we study Kantorovich optimal transportation in the sublinear expectation setting. Let Ω_1 and Ω_2 be two complete separable metric spaces and let Π(μ, ν) be the set of probability measures on (Ω_1×Ω_2, B(Ω_1×Ω_2)) whose marginals on Ω_1 and Ω_2 are μ and ν respectively. Let 𝒫_1 and 𝒫_2 be two weakly compact and convex sets of probability measures defined on (Ω_1, B(Ω_1)) and (Ω_2, B(Ω_2)) respectively. For a continuous function c on Ω_1×Ω_2, Φ_c denotes the set of all pairs (φ, ψ) of bounded continuous functions satisfying φ(ω_1) + ψ(ω_2) ≥ c(ω_1, ω_2), ∀ω_1 ∈ Ω_1, ∀ω_2 ∈ Ω_2.

Theorem 4.49  Let c: Ω_1×Ω_2 → R be a continuous function; then we have
We also consider the following maximal covariance problem: given a probability measure μ and a set 𝒫 of probability measures, the maximal covariance function C(μ, 𝒫) is defined by
We give the main theorem of this part, which will be useful in Chapter 5.

Theorem 4.52  Let {𝒫_n} and 𝒫 be weakly compact and convex sets of probability measures and let μ be an arbitrary probability measure. If W_2(𝒫_n, 𝒫) → 0, then we have

(Ⅴ) In Chapter 5, we introduce the notion of CMMV and the problem of maximal variation of martingales.

We first recall the notion of CMMV studied in De Meyer [26] and then generalize it to the framework of G-expectation. The main purpose of this chapter is to solve the following problem of maximal variation of martingales. Let M_n(μ) be the set of pairs (F, X), where F := (F_q)_{q=1,…,n} is a filtration on a probability space (Ω, B(Ω), P) and X = (X_q)_{q=1,…,n} is an F-martingale whose terminal value X_n is Blackwell dominated by μ. For a function M: △_2(R^d) → R, we define the M-variation V_n^M(F, X), and the maximal M-variation V_n^M(μ) is then defined as the supremum of V_n^M(F, X) over M_n(μ).

For the one-dimensional case we have the following theorem.

Theorem 5.11  Assume that M satisfies:
(i) positive homogeneity: ∀X ∈ L_0^2(R), ∀α > 0: M[αX] = αM[X];
(ii) Lipschitz continuity: there exist p ∈ [1,2) and K ∈ R such that |M[X] − M[Y]| ≤ K||X − Y||_{L^p} for all X, Y ∈ L_0^2(R);
(iii) translation invariance for constants: M[X + β] = M[X] + M[β], ∀β ∈ R.
Then for all μ ∈ △_2(R) we have
(3) If ρ > 0 and if, for all n, (F^n, X^n) ∈ M_n(μ) satisfies V_n^M(F^n, X^n) = V_n^M(μ), then the continuous-time representation Π^n of X^n, defined by Π_t^n := X_{[nt]}^n, converges in finite-dimensional distribution to the CMMV Π_μ.

As for the multi-dimensional case, we first introduce an auxiliary function r, defined in terms of cov(μ), the covariance matrix of μ ∈ △_0^2(R^d); then we define a set Γ as in Definition 5.12. We assume that the function M: △_2(R^d) → R satisfies the following hypotheses:
(H1) M ≥ 0 and M has no degenerate directions: ∀x ∈ R^d, there exists μ ∈ △_0^2(R^d) such that
(H2) M is K-Lipschitz for the Wasserstein distance of order p, for some p ∈ [1,2).
(H3) M is positively homogeneous: ∀X ∈ L^2(R^d) and ∀λ > 0, M[λX] = λM[X].
(H4) M is concave on △_0^2(R^d).
(H5) r is quasiconvex, i.e., ∀α ∈ R, {Y ∈ L^2(R^d) | r(cov(Y)) ≤ α} is convex in L^2(R^d).
(H6) M[X + β] = M[X] + M[β], ∀β ∈ R^d.
Then we have the following theorem.

Theorem 5.14  Under hypotheses (H1)-(H6), we have
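To make the quantity maximized in this chapter concrete, the sketch below estimates by Monte Carlo the L1-variation Σ_q E|X_{q+1} − X_q| (the special case M = E|·| behind the Mertens-Zamir problem cited above) of one particular martingale of length n with terminal values in {0,1}. It computes the variation of a single admissible martingale, not the maximal one, and exhibits the typical √n growth of such variations; the martingale and all numerical choices are hypothetical.

```python
import numpy as np
from scipy.stats import binom

# L1-variation sum_q E|X_{q+1} - X_q| of one particular {0,1}-terminal martingale:
#   X_q = P(S_n > 0 | eps_1, ..., eps_q),  S the simple +-1 random walk of length n.
# This is the M = E|.| special case; it is NOT the maximal variation over all martingales.

rng = np.random.default_rng(2)

def l1_variation(n, n_paths=4000):
    eps = rng.choice([-1, 1], size=(n_paths, n))
    S = np.hstack([np.zeros((n_paths, 1), dtype=int), np.cumsum(eps, axis=1)])  # S_0, ..., S_n
    X = np.empty((n_paths, n + 1))
    for q in range(n):                       # X_q = P(the remaining walk keeps S_n above 0 | S_q)
        m = n - q
        X[:, q] = binom.sf(np.floor((m - S[:, q]) / 2.0), m, 0.5)
    X[:, n] = (S[:, n] > 0).astype(float)    # terminal value of the martingale
    return np.abs(np.diff(X, axis=1)).sum(axis=1).mean()

for n in (25, 100, 400):
    v = l1_variation(n)
    print(f"n = {n:4d}   L1-variation = {v:7.3f}   variation / sqrt(n) = {v / np.sqrt(n):5.3f}")
```

The ratio variation/√n stabilizes as n grows, which is the scaling that makes the continuous-time limits of this chapter possible.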
(Ⅵ) In Chapter 6, we study a general class of repeated games with incomplete information on one side à la Aumann and Maschler [4]. This game was first introduced by De Meyer [25] as a financial exchange game and then generalized by Gensbittel [45,47] to a multi-dimensional context. The difference with the Aumann-Maschler model is that the state space and the action sets of both players are allowed to be infinite. This chapter contains four sections.

In Section 6.1, we study the linear game proposed in [45] and generalize the Cav(u) theorem of [45] from △(P) to △_1(R^d), where P is a compact and convex subset of R^d. Let V̄_n(μ) and V̲_n(μ) denote the maximal and minimal payoffs of player 1 and player 2 respectively in the repeated game Γ_n(μ) (more details can be found in Section 6.1), and let ū(μ) and u̲(μ) denote the corresponding payoffs in the one-shot non-revealing game. Then we have the following Cav(u) theorem.

Theorem 6.5  For all μ ∈ △_1(R^d), we have

Furthermore, if we assume that, ∀μ ∈ △_2(R^d), the game Γ_1(μ) always has a value V_1(μ), i.e., V̄_1(μ) = V̲_1(μ) = V_1(μ), then we obtain a more precise Cav(u) theorem.

Theorem 6.9  Suppose that V_1 satisfies the following hypotheses:
(i) there exists μ_0 ∈ △_2(R^d) such that V_1(μ_0) > 0;
(ii) V_1([L + β]) = V_1([L]) + V_1([β]), ∀β ∈ R^d;
(iii) for any α ∈ R, {X ∈ L^2(R^d) : sup_{ν∈△_0^2(R^d): cov(ν)=cov(X)} V_1(ν) ≤ α} is convex in L^2(R^d).
Then we have, for all μ ∈ △_2(R^d),
where ξ ~ N(0, Γ) and Γ is given in Section 5.3 of Chapter 5 with V_1 in place of M.

In Section 6.2, we study a typical example of the linear game, the financial exchange game introduced by De Meyer [25], and we generalize the natural exchange mechanism of [25]. In the game Γ_n(μ) (see Section 6.2), the hypotheses on the natural exchange mechanism are the following:
(H1) Existence of the value: ∀μ ∈ △_∞(R), the game Γ_1(μ) has a value.
(H2) Bounded exchanges: ∀i, j: |A_{ij}| ≤ K, where K is a constant.
(H3) Invariance with respect to scale: ∀α > 0, ∀L ∈ L^2(R): V_1([αL]) = αV_1([L]).
(H4) Translation invariance with respect to the risk-less part of the risky asset: ∀β ∈ R: V_1([L + β]) = V_1([L]) + V_1([β]).
(H5) Positive value of information: ∀L ∈ L^2(R): V_1([L]) > 0.
The following theorem gives the asymptotic characterization of the value function and of the price process, based on the results of Chapter 5.

Theorem 6.15  If (H1)-(H5) are satisfied, then for all μ ∈ △_2(R):
(i) lim_{n→∞} (1/n) V_n(μ) = V_1([E(μ)]);
(iii) if ρ > 0 and if, for all n, (F^n, X^n) ∈ M_n(μ) satisfies V_n^{V_1}(F^n, X^n) = V_n(μ), then the continuous-time representation Π^n of X^n, defined by Π_t^n := X_{[nt]}^n, converges in finite-dimensional distribution to the CMMV Π_μ.

In Section 6.3, we systematically study a game model with transaction costs, which does not satisfy the natural exchange mechanism of [25] but does satisfy our generalized natural exchange mechanism of Section 6.2. The explicit Nash equilibrium of this game is obtained by the dual method, and we show that the price process posted by the uninformed player converges in finite-dimensional distribution to a CMMV. This result confirms that the CMMV is a very robust dynamic in the stock market.

In Section 6.4, we study a game model in which both players may fail to be risk-neutral, which generalizes the results of De Meyer [26], where the uninformed player is risk-averse and the informed player is risk-neutral. The result is rather surprising: the Nash equilibrium of the game does not depend on the risk attitude of the informed player, i.e., an equilibrium of the game in which the informed player is risk-neutral is also an equilibrium of the game in which the informed player is risk-averse or risk-seeking.