
Theory Of Functional FBSDE And Optimization Under G-expectation

Posted on: 2015-02-08  Degree: Doctor  Type: Dissertation
Country: China  Candidate: S Z Yang  Full Text: PDF
GTID: 1260330431455226  Subject: Probability theory and mathematical statistics
Abstract/Summary:
Backward stochastic differential equations (BSDEs) and forward-backward stochastic differential equations (FBSDEs) have been widely recognized as useful tools in many fields, especially mathematical finance and stochastic control theory (see [22],[27],[71],[111],[112] and the references therein). A state-dependent fully coupled FBSDE is formulated as equation (10). There have been three main methods to solve FBSDE (10): the method of contraction mapping (see [2] and [95]), the four step scheme (see [68]) and the method of continuation (see [52],[88] and [113]). In [70], Ma et al. studied the well-posedness of FBSDEs in a general non-Markovian framework. They found a unified scheme that combines all the existing methodologies in the literature and overcame some fundamental difficulties that had been long-standing problems for non-Markovian FBSDEs. It is well known that quasilinear parabolic partial differential equations are related to Markovian FBSDEs (see [86],[92] and [95]), a relation which generalizes the classical Feynman-Kac formula. Recently, a new framework of functional Itô calculus was introduced by Dupire [27] and later developed by Cont and Fournié [16],[17],[18]. Inspired by Dupire's work, Peng and Wang [96] obtained a so-called functional Feynman-Kac formula for classical solutions of path-dependent partial differential equations (P-PDEs) in terms of non-Markovian BSDEs. Furthermore, under a special condition, Peng [84] proved that the viscosity solution of the second-order fully nonlinear P-PDE is unique. Ekren, Touzi, and Zhang ([36],[34],[35]) gave another definition of the viscosity solution of the fully nonlinear P-PDE and obtained a uniqueness result for viscosity solutions.

In Section 1 of Chapter 1, we study a functional fully coupled FBSDE, equation (11), whose coefficients depend on the path X_s := {X(t)}_{0≤t≤s}. As mentioned above, Hu and Peng [52] initiated the continuation method, in which the key issue is a certain monotonicity condition. Unfortunately, the Lipschitz and monotonicity conditions in [52] and [97] do not work for equation (11). The main difficulty is that the coefficients of (11) depend on the path of the solution {X(t)}_{0≤t≤T}. In this section, we revise the continuation method and propose a new type of Lipschitz and monotonicity conditions. These new conditions involve an integral term with respect to the path of X; we therefore call them the integral Lipschitz and monotonicity conditions (see Assumptions 1.1 and 1.2 for more details). In particular, we present two examples to illustrate that our assumptions are not restrictive. Under the integral Lipschitz and monotonicity conditions, the continuation method goes through and leads to the existence and uniqueness of the solution to equation (11). We also explore the relationship between the solution of the functional fully coupled FBSDE (11) and the classical solution of a related P-PDE. We prove that if the solution u of this P-PDE has suitable smoothness and regularity properties, then we can solve the related equation (11) and, consequently, the P-PDE has a unique solution.

Linear backward stochastic differential equations were introduced by Bismut [3]. The existence and uniqueness theorem for nonlinear BSDEs was established by Pardoux and Peng [91]. Peng [86] and Pardoux and Peng [92] then gave a relationship between Markovian forward-backward systems and systems of quasilinear parabolic PDEs, which generalized the classical Feynman-Kac formula.
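For orientation, the Markovian forward-backward system and the generalized Feynman-Kac relation just mentioned can be sketched generically as follows (the coefficients b, σ, f, Φ are illustrative and this is not the specific equation (10) of the thesis):

\[
\begin{aligned}
X_s^{t,x} &= x + \int_t^s b(r, X_r^{t,x})\,dr + \int_t^s \sigma(r, X_r^{t,x})\,dW_r, \\
Y_s^{t,x} &= \Phi(X_T^{t,x}) + \int_s^T f(r, X_r^{t,x}, Y_r^{t,x}, Z_r^{t,x})\,dr - \int_s^T Z_r^{t,x}\,dW_r,
\end{aligned}
\]

and, under suitable regularity, u(t,x) := Y_t^{t,x} solves the quasilinear parabolic PDE

\[
\partial_t u + \tfrac{1}{2}\,\mathrm{tr}\!\left(\sigma\sigma^{\top} D^2 u\right) + b \cdot D u + f(t, x, u, \sigma^{\top} D u) = 0, \qquad u(T, \cdot) = \Phi.
\]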
Peng [89] pointed out that for non-Markovian forward-backward systems it was an open problem to find the corresponding "PDE". In Section 2 of Chapter 1, we study the relationship between solutions of non-Markovian fully coupled forward-backward systems and classical solutions of path-dependent PDEs. More precisely, the non-Markovian forward-backward system is described by a fully coupled forward-backward SDE, equation (13). We first give the definition of a classical solution of the path-dependent PDE within the framework of functional Itô calculus. Then, under mild hypotheses, we establish some estimates and regularity results for the solution of the above system with respect to paths. Finally, we show that the solution of (13) is related to the classical solution of an associated path-dependent PDE.

In many real-world applications, systems can only be modeled by stochastic systems whose evolution depends on the past history of the states. Therefore, in Section 3 of Chapter 1 we study a stochastic optimal control problem in which the system is described by a stochastic functional differential equation with an associated cost functional. For an initial datum γ_t ∈ A, our optimal control problem is to find an admissible control u(·) ∈ U[t,T] (see Definition 1.21) so as to minimize the cost functional J; the value function V: A → R is defined accordingly. We derive the corresponding path-dependent HJB equation and prove that the value function is a viscosity solution of this path-dependent HJB equation. In addition, a stochastic verification theorem for the smooth case is also proved. It is well known that dynamic programming with the related HJB equations is a powerful approach to solving optimal control problems (see [88],[43],[110],[114] and [87]). In contrast to the HJB equations derived for stochastic delay systems (see [23],[25],[64] and [65]) and the dynamic programming principle for functional stochastic systems (see [72]), we establish the dynamic programming principle and derive the HJB equation in the new framework of functional Itô calculus.

In 1983, Crandall and Lions [21] developed the notion of viscosity solution. The finite-dimensional optimal stochastic control problem has been studied thoroughly; see [20] for more. But in many real-world applications, systems can only be modeled by stochastic systems whose evolution depends on the past history of the states, and the related optimal stochastic control problem then becomes infinite-dimensional. Mohammed [64],[65] studied functional stochastic differential equations. Chang et al. [23] studied the optimal stochastic control problem driven by stochastic functional differential equations with bounded memory, but they made a mistake in their use of the Ekeland variational principle; see [57],[60] for more. Furthermore, under a special condition, Peng [84] proved that the viscosity solution of the second-order fully nonlinear path-dependent PDE is unique. Ekren, Touzi, and Zhang ([36],[39],[40]) work directly with an abstract fully nonlinear path-dependent PDE and use a complicated definition of super- and sub-jets in their notion of viscosity solution; in particular, their definitions involve the unnatural and advanced notion of nonlinear expectation.
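As background for the functional Itô calculus used in this framework, Dupire's functional Itô formula for a continuous semimartingale X and a sufficiently smooth non-anticipative functional F can be sketched as follows (the notation is illustrative: Δ_t denotes the horizontal time derivative and Δ_x, Δ_{xx} the vertical derivatives; this is the generic formula, not a statement taken from the thesis):

\[
dF(t, X_t) = \Delta_t F(t, X_t)\,dt + \Delta_x F(t, X_t)\,dX(t) + \tfrac{1}{2}\,\Delta_{xx} F(t, X_t)\,d\langle X\rangle(t),
\]

where X_t = {X(s)}_{0≤s≤t} denotes the path of X up to time t.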
Under Dupire's functional Itô calculus, Tang and Zhang [109] studied the optimal stochastic control problem for a path-dependent stochastic system under a recursive path-dependent cost functional, but there is a mistake in their proof of uniqueness.

In Section 4 of Chapter 1, we try to give a weak derivative on the space of continuous paths. Inspired by Dupire's derivatives, we introduce a small perturbation space in the definition of the Fréchet derivative. Following this view, we choose the Sobolev space W^{1,2} as the perturbation space and present a new weak Fréchet derivative on the space of continuous paths.

In Section 5 of Chapter 1, we present a new weak Fréchet derivative on the space of continuous paths, denoted by C. This new derivative does not need to deal with the càdlàg space that Dupire's derivatives require. Under this new framework, we obtain the related functional Itô formula for semimartingales. We then study the optimal stochastic control problem driven by stochastic functional differential equations with bounded memory. Because the new weak Fréchet derivative on C only considers perturbations by elements of the Sobolev space W^{1,2}, and because of the restrictions of the Ekeland variational principle, we restrict the definition of the viscosity solution to W^{1,2}. With this new definition of viscosity solution, we verify, via the dynamic programming principle for the optimal stochastic control problem, that the value function is the unique solution of the associated HJB equation.

In mathematical finance, we focus on the computation of the probability of default. Under the assumption of a linear probability (expectation) space, we use a log-normal distribution to describe the return of a stock, and we can easily compute the probability of default from the normal distribution. In the general case there is not only one probability; we need to introduce volatility uncertainty (involving a whole family of probabilities) into the market. A nonlinear expectation (probability), the G-expectation, was established by Peng in recent years; it can be represented by a set of probabilities (see [30]). In the theory of G-expectation, the G-normal distribution and G-Brownian motion were introduced and the corresponding stochastic calculus of Itô's type was established (see [78],[80],[81]). In the Markovian case, the G-expectation is associated with fully nonlinear PDEs and is applied in economic and financial models with volatility uncertainty (see [90]).

In Section 1 of Chapter 2, the numerical properties of the G-heat equation are considered. Equation (14), with φ(x) = 1_{x<0} and r ∈ R, is used to compute the nonlinear probability ([78]). We show that u(t,x) := Ê[φ(x + √t X)], (t,x) ∈ [0,∞) × R^d, is the viscosity solution of equation (14), where Ê is the nonlinear expectation and X is G-normally distributed. Following the work of [90],[93],[100], we prove that the fully implicit discretization converges to the viscosity solution of the G-heat equation. Under the same maximum volatility, we compare the nonlinear probability u(1,0) and the linear probability obtained from the corresponding nonlinear and linear heat equations.

It is well known that the nonlinear backward stochastic differential equation (BSDE) was first introduced by Pardoux and Peng [91]. Independently, Duffie and Epstein [28] presented a stochastic differential recursive utility which corresponds to the solution of a particular BSDE.
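As a generic illustration of the BSDE formulation behind recursive utility (the aggregator f, terminal value ξ and Brownian motion W below are illustrative, not the specific objects of the thesis), the utility process Y solves

\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T,
\]

so that Y_t represents the recursive utility at time t and f plays the role of the aggregator in the sense of Duffie and Epstein.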
Then the BSDE point of view gives a simple formulation of recursive utilities (see [38]). Since then, the classical stochastic optimal control problem has been generalized to the so-called "stochastic recursive optimal control problem", in which the cost functional is described by the solution of a BSDE. Peng [87] obtained the Hamilton-Jacobi-Bellman equation for this kind of problem and proved that the value function is its viscosity solution. In [88], Peng generalized his results and introduced the notion of stochastic backward semigroups, which allowed him to prove the dynamic programming principle in a very straightforward way. This backward semigroup approach has proved to be a useful tool for stochastic optimal control problems. For instance, Wu and Yu [110] adopted this approach to study a kind of stochastic recursive optimal control problem with the cost functional described by the solution of a reflected BSDE. It was also introduced into the theory of stochastic differential games by Buckdahn and Li [6]. We emphasize that Buckdahn et al. [7] obtained an existence result for the stochastic recursive optimal control problem.

Motivated by measuring risk and other financial problems under uncertainty, Peng [78] introduced the notion of sublinear expectation space, which is a generalization of probability space. As a typical case, Peng studied a fully nonlinear expectation, called the G-expectation E[·] (see [82] and the references therein), and the corresponding time-conditional expectation E_t[·] on a space of random variables completed under the norm E[|·|^p]^{1/p}. Under this G-expectation framework (G-framework for short), a new type of Brownian motion called G-Brownian motion was constructed, and the stochastic calculus with respect to G-Brownian motion has been established. The existence and uniqueness of the solution of an SDE driven by G-Brownian motion can be proved in a way parallel to the classical SDE theory, but the solvability of BSDEs driven by G-Brownian motion is a challenging problem. For a recent account of the development of G-expectation theory and its applications, we refer the reader to [76,77,83,106,73,32,33,94,102,103]. Let us mention that there are other recent advances, and applications, in stochastic calculus that do not require a probability space framework. Denis and Martini [31] developed quasi-sure stochastic analysis, but without a conditional expectation. This topic was further examined by Denis et al. [30] and Soner et al. [107]. It is worth pointing out that Soner et al. [108] obtained a deep existence and uniqueness theorem for a new type of fully nonlinear BSDE, called 2BSDE. Various stochastic control (game) problems are investigated in [72,75,98,67], and applications in finance are studied in [69,74].

In Section 2 of Chapter 2, we study stochastic differential recursive utility under G-expectation. Recently, Hu et al. studied a BSDE driven by G-Brownian motion in [50] and [49]. They proved that there exists a unique triple of processes (Y, Z, K) within the G-framework which solves this BSDE under standard Lipschitz conditions on f(s, y, z) and g(s, y, z) in (y, z). The decreasing G-martingale K is aggregated and the solution is time consistent.
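A minimal one-dimensional sketch of the shape such a BSDE driven by G-Brownian motion takes, consistent with the triple (Y, Z, K) and the generators f and g described above (the terminal value ξ and the precise conditions are assumptions of this sketch, not reproduced from the thesis):

\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + \int_t^T g(s, Y_s, Z_s)\,d\langle B\rangle_s - \int_t^T Z_s\,dB_s - (K_T - K_t),
\]

where B is a G-Brownian motion, ⟨B⟩ its quadratic variation process, and K a decreasing G-martingale with K_0 = 0.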
Some important properties of the BSDE driven by G-Brownian motion, such as the comparison theorem and the Girsanov transformation, were given in [49]. We study a stochastic recursive optimal control problem in which the objective functional is described by the solution of a BSDE driven by G-Brownian motion. In more detail, the state equation is governed by a controlled SDE driven by G-Brownian motion, and the objective functional is given by the solution Y_t^{t,x,u} at time t of a BSDE driven by G-Brownian motion. We define the value function of our stochastic recursive optimal control problem accordingly, where the control set lies in the G-framework. It is well known that dynamic programming and the related HJB equations are a powerful approach to solving optimal control problems (see [43],[114] and [87]). The objective is to establish the dynamic programming principle and investigate the value function in the G-framework. The main result is that V is a deterministic, continuous viscosity solution of the associated HJB equation. Recently, a similar problem was studied by Zhang [115]. The forward-backward equations in [115] are simpler: the forward equation is time-homogeneous and the backward equation does not include the terms Z and K.

Over the past twenty years, backward stochastic differential equations have been widely used in mathematical finance, stochastic control and other fields. By analogy with the equations in continuous time, Cohen and Elliott [13] considered backward stochastic difference equations (BSDEs) on spaces related to discrete-time, finite-state processes. Treating them as entities in their own right, not as approximations to the continuous ones as in [8,5,66], they established fundamental results including the comparison theorem. For deeper discussion, the reader may refer to [11,12,13,14,15,19]. So in Chapter 3, we develop a new generalized Girsanov transformation for these discrete-time, finite-state processes. In stochastic calculus, the Doléans-Dade exponential of a semimartingale X is defined to be the solution Y of a stochastic differential equation with initial condition Y_0 = 1, and exponentiating gives the explicit solution. In the discrete-time case, Föllmer showed a corresponding version of the Doléans-Dade stochastic exponential in [42]: if P̃ is a probability measure equivalent to P, then the density martingale can be represented in terms of a P-martingale A with A_0 = 0 and A_{t+1} - A_t > -1 P-a.s. In this section, we generalize Föllmer's result to study the linear BSDE of [13]. Motivated by obtaining the explicit solution of that equation, we develop a generalized Girsanov transformation. We consider a one-step equation on the probability space (Ω, F_T, {F_t}_{0≤t≤T}, P), where a is an adapted process, and define a new measure Q. We prove that Q and P are equivalent probability measures on (Ω, F_t) and that Y is a martingale on (Ω, F_T, {F_t}_{0≤t≤T}, Q). By this Girsanov transformation, we derive the price dynamics of certain securities in the complete financial market.
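For reference, the continuous-time Doléans-Dade exponential and Föllmer's discrete-time representation mentioned above can be sketched in their generic textbook forms (the thesis' specific one-step equation and the measure Q are not reproduced; the notation below is an assumption):

\[
dY_t = Y_{t^-}\,dX_t, \quad Y_0 = 1, \qquad
Y_t = \mathcal{E}(X)_t = \exp\!\Big(X_t - X_0 - \tfrac{1}{2}\langle X\rangle^c_t\Big)\prod_{0 < s \le t} (1 + \Delta X_s)\,e^{-\Delta X_s},
\]

and, in discrete time, a positive density martingale Z_t = E_P[\,d\tilde P/dP \mid \mathcal{F}_t\,] can be written as

\[
Z_t = \prod_{s=1}^{t} \big(1 + (A_s - A_{s-1})\big),
\]

with A a P-martingale, A_0 = 0 and A_s - A_{s-1} > -1, P-a.s.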
Keywords/Search Tags: Functional FBSDE, G-expectation, G-normal distribution