
Nonlinear Expectations And Their Applications In Finance

Posted on: 2010-11-18    Degree: Doctor    Type: Dissertation
Country: China    Candidate: W Wang    Full Text: PDF
GTID: 1100360278974278    Subject: Probability theory and mathematical statistics
Abstract/Summary:
Along with the fast development of financial markets, risk management has been receiving more and more attention. In 2008 a financial crisis hit the world: beginning with real estate, banking and credit in the United States, it had a global reach, affecting a wide range of financial and economic activities. A question receiving increasing attention is how risk in the financial market should be measured. One suggestion gaining popularity is to use coherent risk measures (see e.g. Artzner et al. [2, 3]) and convex risk measures (see e.g. Föllmer and Schied [37, 38, 39], and Frittelli and Rosazza Gianin [40, 41]).

In 1997, Peng [69] introduced the notion of g-expectation via a backward stochastic differential equation (BSDE) whose generator is a given function g. g-expectations can be used to construct coherent and convex risk measures (see [84]). Jiang [51] further gave necessary and sufficient conditions under which the static risk measure ρg induced by a g-expectation is a coherent (convex) risk measure. In 2006, Peng [74] introduced a new nonlinear expectation, the G-expectation, which is generated by a nonlinear parabolic partial differential equation with infinitesimal generator G. Compared with the framework of g-expectation, the theory of G-expectation is intrinsic in the sense that it is not based on a given probability space. The risk measure defined via a G-expectation is a coherent risk measure.

Due to the important applications of g-expectations and G-expectations in finance, a considerable amount of work has been devoted to g-expectation and G-expectation theory. The theory of g-expectation has been developed for more than ten years, and there are many studies of g-expectations, both in fundamental theory and in applications (see, e.g., [10], [13], [14], [15], [16], [17], [36], [44], [48], [49], [51], [70] and [87]). G-expectation is a more recent and novel theory.
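Before the formal definitions, a toy numerical illustration of the g-expectations studied in Chapter 1 (everything here, including the binomial discretization and the hypothetical helper g_expectation_binomial, is an illustrative assumption, not material from the thesis): the g-expectation of φ(BT) can be approximated by backward induction on a binomial tree mimicking the BSDE dynamics.

```python
import math

def g_expectation_binomial(phi, g, T=1.0, n=200):
    """Approximate the g-expectation of xi = phi(B_T) on a binomial
    approximation of Brownian motion (hypothetical discrete scheme).
    Backward step: y = (y_up + y_down)/2 + g(t, y, z)*dt with
    z = (y_up - y_down)/(2*sqrt(dt)), mimicking the BSDE dynamics."""
    dt = T / n
    s = math.sqrt(dt)
    # terminal layer: after n steps, k up-moves give B_T = (2k - n)*s
    y = [phi((2 * k - n) * s) for k in range(n + 1)]
    for step in range(n - 1, -1, -1):
        t = step * dt
        nxt = []
        for k in range(step + 1):
            up, down = y[k + 1], y[k]
            avg = 0.5 * (up + down)
            z = (up - down) / (2.0 * s)
            nxt.append(avg + g(t, avg, z) * dt)
        y = nxt
    return y[0]

# g == 0 recovers the classical expectation: E[B_T^2] = T
print(g_expectation_binomial(lambda x: x * x, lambda t, y, z: 0.0))      # ~1.0
# g(t,y,z) = mu*|z| gives a sublinear expectation; for xi = B_T the
# value shifts to mu*T (here mu = 0.3, T = 1)
mu = 0.3
print(g_expectation_binomial(lambda x: x, lambda t, y, z: mu * abs(z)))  # ~0.3
```

With g ≡ 0 the scheme reduces to the classical expectation, while g(t, y, z) = μ|z| reproduces the sublinear expectation εμ[·] appearing in Theorem 1.3.5 below.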
Since Peng's initial paper on G-expectations [74], this interesting theory has developed at an accelerating pace. Peng obtained the law of large numbers and the central limit theorem for G-expectations (see [76] and [77]). Some other properties of G-expectations were introduced in [28], [75] and [78].

This doctoral thesis studies some fundamental problems about nonlinear expectations and their applications to risk measures and nonlinear price systems. The thesis consists of four chapters. In the following, we list its main results.

(Ⅰ) In Chapter 1, we study some basic inequalities for g-martingales, including two kinds of maximal inequalities for g-martingales, Kolmogorov's inequality for g-martingales and Doob's g-martingale inequality.

Pardoux and Peng [60] proved that the backward stochastic differential equation (BSDE)

    yt = ξ + ∫tT g(s, ys, zs) ds − ∫tT zs dBs, 0 ≤ t ≤ T, (0.0.5)

admits a unique adapted solution (yt, zt)t∈[0,T] under (H1): the Lipschitz condition, and (H2): the square-integrability condition on g. The function g is the generator of the BSDE (0.0.5). If g also satisfies (H3): g(t, y, 0) ≡ 0, then yt, denoted by εg[ξ|Ft], is called the conditional g-expectation of ξ, and y0, denoted by εg[ξ], is called the g-expectation of ξ. A nonlinear expectation induces a non-additive probability by setting Pg(A) = εg[IA], where IA is the indicator function of the set A.

First, we deal with two kinds of maximal inequalities for g-martingales, as follows.

Theorem 1.3.2 Let a function g satisfy (H1), (H3) and (H4): (?). Let X = (Xt)0≤t≤T be a right-continuous g-supermartingale. Then for every λ > 0, we have (?).

We write ε−μ[·] instead of εg[·] when g(t, y, z) = −μ|z|, and εμ[·] instead of εg[·] when g(t, y, z) = μ|z|, where μ is a given nonnegative constant.

Theorem 1.3.5 Let a function g satisfy (H1) and (H3). Let X = (Xt)0≤t≤T be a right-continuous g-supermartingale with (?).
Then for each λ > 0, we have (?).

When the generator g satisfies the conditions of both Theorem 1.3.2 and Theorem 1.3.5, the inequality in Theorem 1.3.2 is sharper than that in Theorem 1.3.5. However, when g does not satisfy (H4), we can still apply the inequalities in Theorem 1.3.5 to obtain maximal inequalities for g-martingales.

Next, we establish Kolmogorov's inequality for g-martingales and Doob's g-martingale inequality.

Theorem 1.4.3 (Kolmogorov's inequality for g-martingales) Let a function g satisfy (H1) and (H3). Suppose moreover that g is independent of y and super-homogeneous in z. Let X = (Xt)0≤t≤T be a right-continuous g-martingale with (?). Then for each λ > 0, we have (?).

Theorem 1.5.2 (Doob's g-martingale inequality) Let a function g satisfy (H1), (H3) and (H4) and be independent of y. Suppose moreover that g(t, λz) ≥ λg(t, z) for each (t, z, λ) ∈ [0, T] × Rn × R+. Let X = (Xt)0≤t≤T be a right-continuous non-negative g-submartingale. Then for each λ > 0, we have (?).

(Ⅱ) In Chapter 2, we study Jensen's inequality under G-expectation, obtain several necessary and sufficient conditions for it to hold, and give applications in G-martingale theory.

We briefly recall the G-framework needed in what follows. Consider the space Ω = C0(R+) of real-valued continuous paths (ωt)t∈R+ with ω0 = 0. Let H be a vector lattice of real functions defined on Ω containing 1. A functional E[·] : H → R is a sublinear expectation, and the triple (Ω, H, E) is called a sublinear expectation space. We now introduce the canonical space. Set Bt(ω) = ωt for all t ≥ 0 and ω ∈ Ω. For each fixed T ≥ 0, we consider the space of random variables Lip(FT) := {φ(Bt1, ..., Btm) : m ≥ 1, t1, ..., tm ∈ [0, T], φ ∈ Cl.Lip(Rm)}, where Cl.Lip(Rn) denotes the space of all functions φ satisfying |φ(x) − φ(y)| ≤ C(1 + |x|m + |y|m)|x − y| for all x, y ∈ Rn, for some C > 0 and m ∈ N depending on φ. It is clear that Lip(Ft) ⊆ Lip(FT) for t ≤ T. We further define Lip(F) := ∪n≥1 Lip(Fn). Let σ0 ∈ (0, 1] and define a function G as G(α) = (1/2)(α+ − σ0²α−), α ∈ R.
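As an aside (an illustrative check, not part of the thesis; σ0 = 0.5 is an arbitrary choice), this one-dimensional G can be rewritten as k1|α| + k2α with k1 = (1 − σ0²)/4 and k2 = (1 + σ0²)/4, so that k2 ≥ k1 ≥ 0; this is exactly the d = 1 representation that reappears in Theorem 3.2.5 below. A minimal sketch:

```python
def G(a, sigma0=0.5):
    """Peng's one-dimensional generator G(a) = (a^+ - sigma0^2 * a^-)/2."""
    return 0.5 * (max(a, 0.0) - sigma0 ** 2 * max(-a, 0.0))

sigma0 = 0.5
k1 = (1 - sigma0 ** 2) / 4   # coefficient of |a|
k2 = (1 + sigma0 ** 2) / 4   # coefficient of a
for a in (-3.0, -0.4, 0.0, 0.8, 2.5):
    assert abs(G(a, sigma0) - (k1 * abs(a) + k2 * a)) < 1e-12
assert k2 >= k1 >= 0
print(k1, k2)  # 0.1875 0.3125
```

The positive gap k2 − k1 = σ0²/2 quantifies how far G is from the worst-case generator |α|/2 obtained at σ0 → 0.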
A random variable ξ in a sublinear expectation space (Ω, H, E) is called G-normally distributed if for each φ ∈ Cl.Lip(R), the function u defined by u(t, x) := E[φ(x + √t ξ)] is the unique solution of the parabolic partial differential equation ∂u/∂t − G(∂²u/∂x²) = 0, u(0, x) = φ(x).

A sublinear expectation E[·], called the G-expectation, can be constructed on Lip(F) such that ξ is G-normally distributed under it and, for each 0 ≤ t1 < ... < tm < ∞, we have (?), where (?). Under the G-expectation, the canonical process B = (Bt)t≥0 is called G-Brownian motion. The topological completion of Lip(FT) (resp. Lip(F)) under the Banach norm E[|·|] is denoted by LG1(FT) (resp. LG1(F)). E[·] extends uniquely to a sublinear expectation on LG1(F).

Jensen's inequality plays an important role in classical probability and martingale theory. For a convex function h defined on R and integrable random variables X and h(X), Jensen's inequality states that h(E[X]) ≤ E[h(X)], and the inequality is reversed if h is concave.

We found a counterexample showing that, for a simple concave function, Jensen's inequality fails for G-expectation. This suggests a natural question: when does Jensen's inequality hold for G-expectation?

We first consider Jensen's inequality for G-expectation when h is convex.

Theorem 2.3.2 Let h be a continuous function on R. Then the following two conditions are equivalent:
(i) the function h is convex;
(ii) the following Jensen's inequality for G-expectation holds: h(E[X]) ≤ E[h(X)] for all X ∈ LG1(F) with h(X) ∈ LG1(F).

Is the inequality reversed when h is concave, as in the classical case? That is, does Jensen's inequality

    E[h(X)] ≤ h(E[X]) (0.0.6)

hold for all concave functions? A counterexample shows that the answer is no.

Next, we give necessary and sufficient conditions under which inequality (0.0.6) holds. We can regard the G-expectation E[·], the concave function h and the random variable X as three parameters in Jensen's inequality (0.0.6).
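Numerically, the failure of the concave Jensen inequality can be seen as follows (an illustrative sketch, not the thesis's counterexample; it assumes σ0 = 0.5 and uses the standard fact that, for convex or concave φ, E[φ(B1)] coincides with the largest classical N(0, σ²) expectation over σ ∈ [σ0, 1]):

```python
import math

def gauss_expect(phi, sigma, n=4001, span=8.0):
    """Classical E[phi(Z)] for Z ~ N(0, sigma^2), trapezoidal quadrature."""
    a = -span * sigma
    h = 2 * span * sigma / (n - 1)
    total = 0.0
    for i in range(n):
        x = a + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * phi(x) * math.exp(-x * x / (2 * sigma * sigma))
    return total * h / (sigma * math.sqrt(2 * math.pi))

def G_expect(phi, sig_lo=0.5, sig_hi=1.0, grid=51):
    """E[phi(B_1)] as a sup over constant volatilities in [sig_lo, sig_hi];
    this identity is exact only for convex or concave phi (assumption)."""
    sigmas = [sig_lo + k * (sig_hi - sig_lo) / (grid - 1) for k in range(grid)]
    return max(gauss_expect(phi, s) for s in sigmas)

# X = -B_1^2 has mean-uncertainty: E[X] = -sigma0^2 but E[-X] = 1
EX = G_expect(lambda x: -x * x)   # ~ -0.25
EmX = G_expect(lambda x: x * x)   # ~  1.00
assert abs(EX + 0.25) < 1e-3 and abs(EmX - 1.0) < 1e-3
# h(x) = -x is linear, hence concave, yet E[h(X)] > h(E[X])
assert EmX > -EX                  # Jensen fails: ~1.0 > ~0.25
```

Here X = −B1² and the concave h(x) = −x give E[h(X)] ≈ 1 > 0.25 ≈ h(E[X]), consistent with Theorem 2.4.9 below: X has mean-uncertainty, so the concave Jensen inequality cannot hold.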
Thus we give necessary and sufficient conditions on E[·], h and X, respectively, so that Jensen's inequality (0.0.6) holds. We state them in the following three theorems.

Theorem 2.4.1 The following two statements are equivalent:
(i) E[·] is a linear expectation;
(ii) Jensen's inequality for G-expectation holds: for each X ∈ LG1(F) and every concave function h, if h(X) ∈ LG1(F), then E[h(X)] ≤ h(E[X]).

Theorem 2.4.5 Let h be a differentiable function on R. Then the following two conditions are equivalent:
(i) the function h is a non-decreasing concave function;
(ii) the following Jensen's inequality for G-expectation holds: E[h(X)] ≤ h(E[X]) for all X ∈ LG1(F) with h(X) ∈ LG1(F).

Theorem 2.4.9 Let h be a concave function on R. Then the following two conditions are equivalent:
(i) the random variable X ∈ LG1(F) has no mean-uncertainty, i.e., X satisfies E[−X] = −E[X];
(ii) the following Jensen's inequality for G-expectation holds: E[h(X)] ≤ h(E[X]), whenever h(X) ∈ LG1(F).

The properties of conditional G-expectations are similar to those of G-expectations, and the above results extend naturally to conditional G-expectations.

As an application of Jensen's inequality for conditional G-expectation in G-martingale theory, we have the following theorem.

Theorem 2.5.2 (1) If X and Y are G-martingales, then X + Y is a G-supermartingale.
(2) If X = (Xt)t≥0 is a G-martingale and h is a convex function such that h(Xt) ∈ LG1(Ft) for each t ≥ 0, then (h(Xt))t≥0 is a G-submartingale.
(3) If X is a G-martingale and h is a nondecreasing concave function such that h(Xt) ∈ LG1(Ft) for each t ≥ 0, then (h(Xt))t≥0 is a G-supermartingale.

We give two interesting examples of G-martingales which differ completely from classical martingale theory.

Example 2.5.5 The stochastic process (?) is a G-martingale, where B is G-Brownian motion.
The function h(x) = −e^x, x ∈ R, is concave; nevertheless, (h(Mt))t≥0 is a G-submartingale.

Example 2.5.6 It is easy to check that the stochastic process (?) is a G-martingale, where c is a constant. The function h(x) = −x², x ∈ R, is concave, yet (h(Mt))t≥0 is a G-martingale or a G-submartingale depending on the value of the parameter c.

(Ⅲ) In Chapter 3, we study Jensen's inequality for nonlinear semigroups from two points of view. In the previous chapter we considered Jensen's inequality for G-expectation, which is constructed via a nonlinear semigroup generated by a nonlinear parabolic partial differential equation whose generator is a given function G satisfying several properties. In this chapter, we study Jensen's inequality for a general semigroup generated by a nonlinear parabolic partial differential equation with a general generator F. The function F need only satisfy the following two assumptions, which ensure that the equation admits a unique viscosity solution:

(A1) F(p, Y) ≤ F(p, X) whenever Y ≤ X;
(A2) there is a function ω : [0, ∞) → [0, ∞) with ω(0+) = 0 such that F(α(x − y), X) − F(α(x − y), Y) ≤ ω(α|x − y|² + |x − y|) whenever x, y ∈ Rd, α > 0 and X, Y ∈ Sd satisfy (?).

Let F ∈ C(Rd × Sd) satisfy (A1)-(A2). For each φ(·) ∈ Cl.Lip(Rd), we solve the nonlinear parabolic partial differential equation (PDE)

    ∂u/∂t − F(Du, D²u) = 0, u(0, x) = φ(x), (t, x) ∈ (0, ∞) × Rd, (0.0.7)

where Du and D²u denote the gradient and Hessian of u in x. PDE (0.0.7) admits a unique viscosity solution u (see Crandall et al. [21]). We then define

    TtF[φ](x) := u(t, x). (0.0.8)

It is easy to check that (TtF)t≥0 is a nonlinear semigroup on Cl.Lip(Rd). This general semigroup can be used to construct a considerable number of filtration-consistent nonlinear expectations, so it is natural to consider Jensen's inequality for semigroups defined via (0.0.8). Firstly, we give a necessary and sufficient condition on the generator F under which Jensen's inequality holds for each convex function h.
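The semigroup (0.0.8) can be approximated numerically in the simplest case; the following is a rough sketch under illustrative assumptions (d = 1, F(p, A) = G(A) = (A+ − σ0²A−)/2 with σ0 = 0.5 as in Chapter 2, an explicit finite-difference scheme on a truncated domain with frozen boundary values, and initial data for which the exact solution is known):

```python
def Tt(phi, t, L=6.0, N=241, dt=0.002, sigma0=0.5):
    """Approximate u(t, .) = T_t^F[phi] for u_t = G(u_xx) by an explicit
    finite-difference scheme (Dirichlet values frozen at the boundary)."""
    G = lambda a: 0.5 * (max(a, 0.0) - sigma0 ** 2 * max(-a, 0.0))
    dx = 2 * L / (N - 1)
    assert dt <= dx * dx  # stability: effective diffusivity is at most 1/2
    xs = [-L + i * dx for i in range(N)]
    u = [phi(x) for x in xs]
    for _ in range(round(t / dt)):
        u = ([u[0]]
             + [u[i] + dt * G((u[i + 1] - 2 * u[i] + u[i - 1]) / (dx * dx))
                for i in range(1, N - 1)]
             + [u[-1]])
    return xs, u

# phi(x) = x^2 is convex: u(t, x) = x^2 + t, so u(0.25, 0) ~ 0.25
xs, u = Tt(lambda x: x * x, 0.25)
print(u[len(u) // 2])
# phi(x) = -x^2 is concave: u(t, x) = -x^2 - sigma0^2 * t, u(0.25, 0) ~ -0.0625
xs, u = Tt(lambda x: -x * x, 0.25)
print(u[len(u) // 2])
```

Convex data evolve at the upper volatility (u(t, 0) ≈ t) and concave data at the lower one (u(t, 0) ≈ −σ0²t), the dichotomy behind the convexity results that follow.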
A similar condition is given when h is concave.

Theorem 3.2.1 Let F satisfy (A1) and (A2). Then the following two conditions are equivalent:
(i) F is a super-homogeneous generator, i.e., (?);
(ii) for each φ ∈ Cl.Lip(Rd) and every convex function h : R → R satisfying (?), we have (?).

Theorem 3.2.2 Let F satisfy (A1) and (A2). Then the following two conditions are equivalent:
(i) F is a sub-homogeneous generator, i.e., (?);
(ii) for each φ ∈ Cl.Lip(Rd) and every concave function h : R → R satisfying (?), we have (?).

Theorem 3.2.3 Under assumptions (A1) and (A2), let F(p, A), (p, A) ∈ Rd × Sd, be convex in p and A, and suppose F(0, 0) = 0. Then the following two conditions are equivalent:
(i) for each φ ∈ Cl.Lip(Rd) and every convex function h : R → R satisfying (?), we have (?);
(ii) F(p, A) = (?), where Q ⊆ Rd × Sd+.

Theorem 3.2.5 Under assumptions (A1) and (A2), let F(p, A), (p, A) ∈ Rd × Sd, be independent of p and convex in A, i.e., F(p, A) ≡ F(A), and suppose F(0) = 0. Then:
(i) for each φ ∈ Cl.Lip(Rd) and every convex function h : R → R satisfying (?), we have (?), where (?);
(ii) when d = 1, we have F(a) = k1|a| + k2a, where k1, k2 ≥ 0 and k2 ≥ k1.

Secondly, we study the problem from a different perspective: for each fixed function F, we give an explicit characterization of the functions h satisfying the generalized Jensen's inequality. We obtain the following results.

Definition 3.3.1 A C²-function h : R → R is called F-convex if the following condition holds for each (y, z, A) ∈ R × Rd × Sd: (?) (0.0.9). A C²-function h : R → R is called F-concave if the inequality (0.0.9) is reversed.

Theorem 3.3.2 The following two conditions are equivalent:
(i) the function h is F-convex;
(ii) the following Jensen's inequality holds: (?) for each φ ∈ Cl.Lip(Rd) and every C²-function h satisfying (?).

Theorem 3.3.5 The following two conditions are equivalent:
(i) the function h is F-concave;
(ii) the following Jensen's inequality holds: (?) for each φ ∈ Cl.Lip(Rd) and every C²-function h satisfying (?).

Thirdly, we consider the G-convex functions defined by Peng in [78].
We investigate the relation between G-convex (resp. G-concave) functions and classical convex (resp. concave) functions.

Theorem 3.4.3 Let h : R → R be a C²-function. The following two conditions are equivalent:
(i) the function h is G-convex;
(ii) the function h is convex.

Theorem 3.4.6 Let h : R → R be a C²-function. The following two conditions are equivalent:
(i) the function h is G-concave;
(ii) h is a nondecreasing concave function.

Finally, we give an application of G-convex functions to G-martingales.

Theorem 3.4.12 Suppose (Xt)t≥0 is a G-martingale and h is a function such that h(Xt) ∈ LG1(F) for each t ≥ 0. Then the following two statements are equivalent:
(i) the function h is G-convex;
(ii) (h(Xt))t≥0 is a G-submartingale.

(Ⅳ) In Chapter 4, we construct a filtration-consistent nonlinear expectation, the F-expectation EF[·], via nonlinear Markov chains generated by the fully nonlinear parabolic partial differential equations discussed in Chapter 3, with infinitesimal generator F satisfying (A1), (A2) and (A3): F(0, 0) = 0. We first study the properties of the F-expectation and then apply them to risk measures.

First, we obtain the monotonicity, constant-preserving and translation-invariance properties of F-expectations. Some properties of the F-expectation are determined by properties of its generator: we prove that the F-expectation is sub-additive, positively homogeneous or convex if and only if its generator F is sub-additive, positively homogeneous or convex, respectively.

Then, we apply these properties of the F-expectation to risk measures.

Definition 4.4.1 Let F satisfy the usual assumptions (A1)-(A3). Define ρF and ρtF
as follows: ρF(X) := EF[−X] and ρtF(X) := EF[−X | Ft]. Then ρF is called the static risk measure induced by the F-expectation, and ρtF is called the dynamic risk measure induced by the conditional F-expectation.

Theorem 4.4.2 Given the set of risks (?), let (A1)-(A3) hold for F and let ρF be the static risk measure induced by the F-expectation. Then the following statements are equivalent:
(i) ρF is a coherent risk measure;
(ii) the F-expectation EF[·] is positively homogeneous and sub-additive;
(iii) F is positively homogeneous and sub-additive.

Theorem 4.4.3 Under the same conditions as Theorem 4.4.2, the following statements are equivalent:
(i) ρF is a convex risk measure;
(ii) EF[·] is convex;
(iii) F is a convex function.

Theorem 4.5.5 Given the set of risks (?), let (A1)-(A3) hold for F and let (ρtF)t≥0 be the dynamic risk measure induced by the conditional F-expectation. Then, for each t ∈ [0, T], the following statements are equivalent:
(i) ρtF is a dynamic coherent risk measure;
(ii) the conditional F-expectation EF[·|Ft] is positively homogeneous and sub-additive;
(iii) F is positively homogeneous and sub-additive.

Theorem 4.5.6 Under the same conditions as Theorem 4.5.5, the following statements are equivalent:
(i) ρtF is a dynamic convex risk measure;
(ii) EF[·|Ft] is convex;
(iii) F is a convex function.

Theorem 4.5.9 Let (ρtF)t∈[0,T] be the dynamic risk measure induced by the conditional F-expectation. Then it satisfies the following properties:
(i) recursiveness: for any X ∈ Lip0(Ft), (?);
(ii) time consistency: for any X, Y ∈ Lip0(Ft), ...
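The coherence equivalences above can be illustrated with the simplest sublinear expectation, a maximum over finitely many probability scenarios on a four-point sample space (an illustrative stand-in for the F-expectation, which the thesis constructs via nonlinear Markov chains; the scenarios and positions below are arbitrary choices):

```python
# A sublinear expectation as a sup over finitely many scenarios
# (the standard dual picture behind coherent risk measures).
scenarios = [
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.20, 0.30, 0.40],
    [0.40, 0.30, 0.20, 0.10],
]

def E(X):
    """Sublinear expectation E[X] = max_P E_P[X]."""
    return max(sum(p * x for p, x in zip(P, X)) for P in scenarios)

def rho(X):
    """Static risk measure induced by E: rho(X) = E[-X]."""
    return E([-x for x in X])

X = [1.0, -2.0, 0.5, 3.0]
Y = [0.0, 1.0, -1.0, 2.0]
# coherence: sub-additivity, positive homogeneity, translation invariance
assert rho([a + b for a, b in zip(X, Y)]) <= rho(X) + rho(Y) + 1e-12
assert abs(rho([2 * a for a in X]) - 2 * rho(X)) < 1e-12
assert abs(rho([a + 0.7 for a in X]) - (rho(X) - 0.7)) < 1e-12
```

Sub-additivity and positive homogeneity of E are inherited by ρ, matching the pattern of Theorem 4.4.2; replacing the plain maximum by a penalized supremum would give the convex case of Theorem 4.4.3.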
Keywords/Search Tags: g-expectation, g-martingale, maximal inequality, G-expectation, Jensen's inequality, G-martingale, nonlinear semigroup, F-expectation, risk measure