
Partial Differential Equations And Stochastic Optimal Control Problems Of Forward-Backward Systems

Posted on: 2010-08-10    Degree: Doctor    Type: Dissertation
Country: China    Candidate: F Zhang    Full Text: PDF
GTID: 1100360302983781    Subject: Operational Research and Cybernetics
Abstract/Summary:
Since the fundamental work of Pardoux and Peng [58], backward stochastic differential equations (BSDEs) and forward-backward stochastic differential equations (FBSDEs) have received considerable research attention due to their nice structure and wide applicability in a number of different areas, such as stochastic control, partial differential equations, and mathematical finance, to mention only a few. This thesis is dedicated to the study of FBSDEs in both finite and infinite dimensions, partial differential equations (PDEs), and stochastic optimal control and stochastic differential game problems arising from forward-backward stochastic systems.

For the partially coupled FBSDE with continuous monotone coefficients studied by Antonelli and Hamadene [2], uniqueness of the solution is not attainable. We use the idea of Jia and Yu [44] to obtain the equivalence between uniqueness of the solution and its continuous dependence on the initial value of the forward component and the terminal value of the backward one. This result extends that of [44].

One kind of fully coupled FBSDE in infinite dimensions is considered. As a continuation of Guatteri [36], we study the regularity of the solution in the Malliavin spaces. We prove that the Malliavin derivative of the solution solves a linear forward-backward system. By investigating the relation between the Malliavin derivative and the Gateaux derivative of the solution, we obtain two versions of the solution process Z, which is expected to help in the study of the corresponding partial differential equation system. However, we are only able to treat two special cases: the case when Y_T is independent of X_T, i.e. Y_T = ξ, and the case when σ is independent of y and z, i.e. σ = σ(t, x).

FBSDEs can be used to give a probabilistic interpretation of certain quasi-linear parabolic PDEs. If the coefficients are not smooth enough, the PDE has to be considered in a weak sense. We consider the solution of a generalized quasi-linear parabolic PDE in the sense of Sobolev weak solutions. We prove that under Lipschitz and monotonicity conditions, the PDE admits a unique Sobolev weak solution u which corresponds to the solution of a partially coupled FBSDE. This result extends that of [56]. It is worth pointing out that the coefficient b in the PDE contains the solution variable u and, accordingly, the forward component of the corresponding FBSDE is coupled with the backward one. Backward doubly stochastic differential equations (BDSDEs) can provide a probabilistic interpretation of certain quasi-linear parabolic stochastic partial differential equations (SPDEs). If the coefficients of the SPDE are not smooth enough, we also have to consider its weak solution. We study a quasi-linear parabolic SPDE in which the coefficient f is assumed to be locally monotone in the variable y and locally Lipschitz in the variable z. We conclude that the SPDE admits a unique Sobolev weak solution which corresponds to the solution of a BDSDE. This result extends that of [10]. These two results develop the theory of weak solutions for quasi-linear parabolic PDEs and SPDEs.
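For orientation, the probabilistic interpretation referred to above can be sketched, in the classical smooth setting, by the nonlinear Feynman-Kac correspondence below. The notation b, σ, f, Φ is generic and the display is only a schematic reference point; it does not reproduce the thesis's equations (4.1)-(4.2) or their weaker, Sobolev-type assumptions.

% Generic (decoupled) FBSDE on [t, T]; all coefficients are placeholders.
\[
\begin{aligned}
X_s^{t,x} &= x + \int_t^s b\bigl(r, X_r^{t,x}\bigr)\,dr + \int_t^s \sigma\bigl(r, X_r^{t,x}\bigr)\,dW_r,\\
Y_s^{t,x} &= \Phi\bigl(X_T^{t,x}\bigr) + \int_s^T f\bigl(r, X_r^{t,x}, Y_r^{t,x}, Z_r^{t,x}\bigr)\,dr - \int_s^T Z_r^{t,x}\,dW_r .
\end{aligned}
\]
% Under suitable regularity, u(t,x) := Y_t^{t,x} solves the quasi-linear parabolic PDE
\[
\partial_t u + \tfrac12\operatorname{tr}\bigl(\sigma\sigma^{\top}D^2 u\bigr) + b\cdot\nabla u + f\bigl(t, x, u, \nabla u\,\sigma\bigr) = 0, \qquad u(T,\cdot) = \Phi,
\]
% with Z identified as Z_s^{t,x} = \nabla u(s, X_s^{t,x})\,\sigma(s, X_s^{t,x}).

In the situation treated in Chapter 4 the coefficient b also depends on u, which is precisely what couples the forward equation to the backward one, and the PDE is only required to hold in the Sobolev weak sense.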
Continuous stochastic control theory has found many applications in engineering and mathematical finance. However, it is somewhat unrealistic, since the controller has to intervene at every time instant. Sometimes the system state has to be changed instantaneously, and in this case the theory of impulse control becomes necessary and efficient. We study three kinds of stochastic optimal control problems of forward-backward systems in which the control variable consists of two components: a continuous control and an impulse control. We seek necessary conditions, of Pontryagin maximum principle type, satisfied by the optimal controls. Additional conditions are also given under which the necessary optimality conditions turn out to be sufficient. To the best of our knowledge, this is the first attempt to study stochastic optimal control problems of forward-backward systems involving impulses. Deterministic differential game problems involving impulses have been studied by many authors, such as Yong [76] and Shaiju and Dharmatti [67]. However, stochastic differential game problems involving impulses seem to be missing. We consider a zero-sum stochastic game problem on an infinite horizon, in which the maximizer uses continuous controls and the minimizer takes impulse controls. By the dynamic programming principle, it is proved that the lower value function V^- and the upper value function V^+ are viscosity solutions of the corresponding quasi-variational inequality (QVI). By proving that the QVI admits at most one viscosity solution, we conclude that V^- = V^+, and thus the stochastic differential game admits a value. We also obtain a verification theorem which provides an optimal strategy for this game. Our results enrich the theories of stochastic optimal control, stochastic games and impulse control.

The thesis consists of five chapters. In the following, we list the main results.

Chapter 1: We introduce the problems studied in Chapters 2 to 5.

Chapter 2: We study a partially coupled FBSDE with continuous monotone coefficients. We obtain the equivalence between uniqueness of the solution and its continuous dependence on the data. The comparison theorems for SDEs and FBSDEs, and an approximation result of a continuous function by a sequence of Lipschitz functions, play an important role.

Theorem 2.1.2. Under Assumption 2.1.1, the following two statements for FBSDE (2.1) are equivalent.
(i) Uniqueness: FBSDE (2.1) admits a unique solution.
(ii) Continuous dependence on (x, ξ): For (?) and (?) with E|ξ_q - ξ|² → 0 as q → ∞, we have (?) as p, q → ∞, where (?) and (?) are any solutions of FBSDE (2.1) corresponding to (x, ξ) and (x_p, ξ_q), respectively.

Chapter 3: We consider a kind of fully coupled FBSDE in infinite dimensions. By studying the regularity of the solution in the Malliavin spaces, we prove that the Malliavin derivative of the solution solves a linear forward-backward system.

Theorem 3.2.5. Let Assumptions 3.1.1 and 3.1.2 hold. Let (?), which belongs to D^{1,2}(K). Then there exists a positive constant T_0 ≤ T* such that for all T ≤ T_0, the unique mild solution (X, Y, Z) of FBSDE (3.1) on [0, T] enjoys the following properties:
(i) X ∈ L^{1,2}(H), Y ∈ L^{1,2}(K), Z ∈ L^{1,2}(L_2(Ξ, K)).
(ii) There exists a version of (DX, DY, DZ) such that for a.a. s ∈ [0, T), {D_s X_t, t ∈ (s, T]} is a predictable continuous process in L_2(Ξ, H) satisfying (?), and the process (?) belongs to (?). Moreover, for a.a. s and t with s < t, a.s., (?), where we set (?).

By investigating the relation between the Malliavin derivative and the Gateaux derivative of the solution, we obtain two versions of the solution process Z.

Theorem 3.3.2. There are two versions of the solution process Z: (?)
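For orientation, in the classical finite-dimensional Markovian setting the two representations of Z referred to in Theorem 3.3.2 typically take the following form. This is a generic sketch with placeholder notation; the thesis's infinite-dimensional statement and its precise hypotheses are not reproduced here.

% D denotes the Malliavin derivative; \nabla X, \nabla Y are the derivatives of the
% solution with respect to the initial datum x (generic notation).
\[
Z_t = D_t Y_t \quad \text{for a.e. } t \in [0,T],\ \text{a.s.},
\qquad\text{and}\qquad
Z_t = \nabla Y_t\,(\nabla X_t)^{-1}\,\sigma(t, X_t).
\]

The first version comes from differentiating the equation in the Malliavin sense, the second from the variational (flow) equation; in the Markovian case the latter reads Z_t = ∇_x u(t, X_t) σ(t, X_t) with u(t, x) = Y_t^{t,x}, which is the link to the associated partial differential equation system.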
Chapter 4: We provide probabilistic interpretations for certain quasi-linear parabolic PDEs and SPDEs. We first consider the solution of a generalized quasi-linear parabolic PDE in the sense of Sobolev weak solutions.

Theorem 4.1.6. Let Assumption 4.1.1 hold. Then
(i) PDE (4.2) admits a local Sobolev weak solution u such that for a.e. s ∈ [t, T], x ∈ R^d, (?), where (?) is the unique local solution of FBSDE (4.1).
(ii) The Sobolev weak solution of PDE (4.2) is unique in the class of Lipschitz functions.

We then study a quasi-linear parabolic SPDE in which the coefficient f is assumed to be locally monotone in y and locally Lipschitz in z.

Theorem 4.2.19. Let Assumption 4.2.18 and (4.11) hold. Then SPDE (4.14) admits a unique Sobolev weak solution u. Moreover, for a.e. s ∈ [t, T], x ∈ R^d, a.s., (?), where (Y^{t,x}, Z^{t,x}) is the unique solution of BDSDE (4.16).

Chapter 5: We consider stochastic optimal control and stochastic differential game problems involving impulses. We first study three kinds of stochastic optimal control problems of forward-backward systems in which the control variable consists of two components: a continuous control and an impulse control. We seek necessary and sufficient conditions satisfied by the optimal controls. A comparison of these three problems is also given.

In the first stochastic optimal control problem, the domain of the continuous controls and the domain of the impulse controls are both convex. We have the following stochastic maximum principle.

Theorem 5.2.4. Let (u, ξ) be an optimal control of the stochastic optimal control problem (5.3)-(5.4), (?) be the corresponding trajectory, and (?) the solution of the adjoint equation. Then we have (?), where (?) is the Hamiltonian defined by (?).

The following is the sufficient optimality result.

Theorem 5.2.6. Let Assumption 5.2.1 hold. Assume that the functions φ, γ, η → l(t, η) and (x, y, z, v) → H(t, x, y, z, v, p, q, k) are convex. Moreover, for Λ ∈ R^{m×n} and (?), (?) has the following particular form: (?). Let (?) be the solution of the adjoint equation associated with (?). Then (u, ξ) is an optimal control of the stochastic optimal control problem (5.3)-(5.4) if it satisfies (5.7) and (5.8).

In the second stochastic optimal control problem, it is assumed that the domain of the impulse controls is convex, while the domain of the continuous controls need not be convex. In this case the control variable does not enter the diffusion coefficient σ.

Theorem 5.3.7. Let (u, ξ) be an optimal control of the stochastic optimal control problem (5.13)-(5.14), (?) the corresponding trajectory and (?) the solution of the adjoint equation. Then we have (?), where (?) is the Hamiltonian defined by (?).

Theorem 5.3.9. Let Assumptions 5.3.3 and 5.3.8 hold. Assume that the functions φ, γ, η → l(t, η) and (x, y, z, v) → H(t, x, y, z, v, p, q, k) are convex. Moreover, for Λ ∈ R^{m×n} and (?), (?) has the following particular form: (?). Let (?) be the solution of the adjoint equation associated with (?). Then (u, ξ) is an optimal control of the stochastic optimal control problem (5.13)-(5.14) if it satisfies (5.19) and (5.20).

In the third stochastic optimal control problem, we assume that the domain of the impulse controls is convex, the domain of the continuous controls is not necessarily so, and the continuous control variable is allowed to enter the diffusion coefficient σ. In this case, the maximum principle is hard to derive by means of spike variation. Under strong assumptions we solve this problem with the method of relaxed controls, sketched schematically below. It is worth pointing out that the domain of the relaxed controls has a nice convexity structure and the relaxed control problem is a generalization of the continuous control problem.
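Schematically, relaxation replaces the possibly nonconvex control domain U by the set P(U) of probability measures on U, and a relaxed control is a measure-valued process integrated against the coefficients. The display below is a generic sketch of this idea with placeholder notation; it is not the thesis's exact formulation in Section 5.4, and when the control enters the diffusion coefficient the relaxed dynamics are usually formulated through an orthogonal martingale measure with intensity q_t(dv) dt rather than by plain integration of σ.

% A relaxed control is a P(U)-valued predictable process q; the drift is relaxed by
% integration, and a strict control u is embedded as the Dirac measure \delta_{u_t}.
\[
q : [0,T]\times\Omega \to \mathcal P(U), \qquad
b(t, x, q_t) := \int_U b(t, x, v)\, q_t(dv), \qquad
q_t = \delta_{u_t} \ \text{for a strict control } u .
\]

Since P(U) is convex, convex perturbations of an optimal relaxed control are available even when U itself is not, which is what makes a maximum principle accessible for the third problem.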
We first consider the corresponding stochastic optimal relaxed-impulse control problem; the results for the original optimal control problem then follow easily.

Theorem 5.4.3. Let Assumptions 5.4.1 and 5.4.2 hold. Then for the optimal control (u, ξ) of the stochastic optimal control problem (5.24)-(5.25), we have (?), where (?) is the Hamiltonian defined by (?).

Theorem 5.4.4. Let Assumptions 5.4.1 and 5.4.2 hold. Assume that the functions φ, γ, η → l(t, η) and (x, y, z, v) → H(t, x, y, z, v, k, P, Q) are convex. Moreover, for Λ ∈ R^{m×n} and (?), (?) has the following particular form: (?). Let (?) be the solution of the adjoint equation associated with (?). Then (u, ξ) is an optimal control of the stochastic optimal control problem (5.24)-(5.25) if it satisfies (5.26) and (5.27).

In the last section, we consider a stochastic game problem involving impulses. By the dynamic programming principle, we prove that this game admits a value.

Theorem 5.6.4. Let Assumptions 5.6.1 and 5.6.2 hold. Then V^- = V^+ is the unique viscosity solution of QVI (5.36) in BUC(R^n). Thus, the game admits a value.

We also obtain a verification theorem which provides an optimal strategy for this game.

Theorem 5.6.12. Let Assumptions 5.6.1 and 5.6.2 hold. Let v ∈ BUC(R^n) be a classical solution of the QVI. If the QVI-control (u*(·), ξ*(·)) associated with v is admissible, then v is the value function of our stochastic differential game. Thus, (u*(·), ξ*(·)) constitutes an optimal strategy of the game.
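To fix ideas, a quasi-variational inequality of the general type referred to above can be written, in generic single-controller notation for an infinite-horizon problem with discount rate λ > 0, as follows. This is only a schematic form; QVI (5.36) of the thesis is a two-player version, with the maximizer acting through continuous controls and the minimizer through impulses, and is not reproduced here.

% \mathcal L^{u} is the generator of the controlled diffusion, f a running cost,
% c an impulse cost and K the set of admissible impulses (all generic placeholders).
\[
\min\Bigl\{\, \inf_{u\in U}\bigl[\mathcal L^{u} V(x) + f(x,u)\bigr] - \lambda V(x),\;\; \mathcal M V(x) - V(x) \Bigr\} = 0, \qquad x\in\mathbb R^n,
\]
\[
\mathcal M V(x) := \inf_{\xi\in K}\bigl[\, V(x+\xi) + c(\xi) \,\bigr].
\]

The two branches correspond to continuation (the HJB part holds with equality) and intervention (V = \mathcal M V); uniqueness of the viscosity solution of the QVI is what identifies V^- with V^+ in Theorem 5.6.4, and the verification theorem turns a sufficiently regular solution into an optimal strategy.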
Keywords/Search Tags: Forward-backward stochastic differential equation, Infinite dimensions, Malliavin calculus, Partial differential equation, Stochastic partial differential equation, Sobolev weak solution, Impulse control, Stochastic optimal control, Maximum principle