
Backward Stochastic Differential Equations With Singular Perturbed Markov Chain And Applications

Posted on: 2015-02-16
Degree: Doctor
Type: Dissertation
Country: China
Candidate: R Tao
Full Text: PDF
GTID: 1260330431455220
Subject: Financial mathematics and financial engineering
Abstract/Summary:
In this thesis we discuss the asymptotic properties and applications of backward stochastic differential equations (BSDEs) with a singularly perturbed Markov chain and the corresponding PDEs. The thesis consists of three parts. In the first part, we study the weak convergence, under the Meyer-Zheng topology, of BSDEs with a singularly perturbed Markov chain. In the second part, we consider the optimal switching problem for a regime-switching system and obtain the optimal switching strategy by virtue of obliquely reflected BSDEs with a Markov chain. Moreover, when the Markov chain has a two-time-scale structure, we study the asymptotic behavior of the corresponding variational inequalities. In the third part, we give an application of BSDEs with a Markov chain to the stochastic maximum principle for a forward-backward stochastic system.

Here, the singularly perturbed Markov chain refers to a multi-time-scale Markov chain. In many physical models, different elements of a large system evolve at different rates: some vary rapidly while others change slowly. Naturally, one wants to describe this largeness and smallness in a quantitative way. To reduce the complexity involved, we use a singular perturbation approach based on a two-time-scale model. The main idea is to formulate the problem using a Markov chain with a two-time-scale structure; the variables associated with the fast scale are then "averaged out" and replaced by the corresponding stationary distributions. In view of this, we take α_t = α_t^ε governed by the generator Q^ε(t) = Q̃(t)/ε + Q̂(t), where ε > 0 is a time-scale parameter and both Q̃(t) and Q̂(t) are generators of Markov chains; Q̃(t) describes the fast part and Q̂(t) the slow part. We discuss the asymptotic properties of the equations and the optimal switching or control problems, and we can reduce the computational complexity by dealing with the limit problem instead.

Next, we introduce the content and structure of the thesis.
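The averaging idea above can be made concrete numerically. The sketch below builds a small two-time-scale generator Q^ε = Q̃/ε + Q̂, computes the quasi-stationary distributions of the fast blocks, and aggregates the slow part into the generator of the limit chain; all rate values are illustrative and not taken from the thesis.

```python
import numpy as np

def stationary_distribution(Q):
    """Solve v Q = 0 with sum(v) = 1 for a weakly irreducible generator Q."""
    m = Q.shape[0]
    A = np.vstack([Q.T, np.ones(m)])        # append the normalization constraint
    b = np.zeros(m + 1); b[-1] = 1.0
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

# Fast part: two irreducible blocks Q_tilde^1, Q_tilde^2 (rows sum to 0).
Q1 = np.array([[-3.0, 3.0], [2.0, -2.0]])
Q2 = np.array([[-1.0, 1.0], [4.0, -4.0]])
Q_tilde = np.block([[Q1, np.zeros((2, 2))], [np.zeros((2, 2)), Q2]])

# Slow part: an arbitrary generator on the full 4-point state space.
Q_hat = np.array([[-1.0, 0.2, 0.5, 0.3],
                  [ 0.1, -0.6, 0.2, 0.3],
                  [ 0.3, 0.3, -0.8, 0.2],
                  [ 0.2, 0.2, 0.2, -0.6]])

# Quasi-stationary distributions of the fast blocks.
v1 = stationary_distribution(Q1)
v2 = stationary_distribution(Q2)

# Aggregated generator: Q_bar = diag(v^1, v^2) Q_hat diag(1_{m1}, 1_{m2}).
V = np.zeros((2, 4)); V[0, :2] = v1; V[1, 2:] = v2
One = np.zeros((4, 2)); One[:2, 0] = 1.0; One[2:, 1] = 1.0
Q_bar = V @ Q_hat @ One                     # generator of the limit chain
print(Q_bar)
```

Since Q̂ has zero row sums and the aggregation matrices stack distributions and 1-vectors, Q̄ automatically has zero row sums and nonnegative off-diagonal entries, i.e. it is again a generator.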
In the first chapter, we give the research background and some preliminaries.

In the second chapter, we study the weak convergence, under the Meyer-Zheng topology, of BSDEs with a singularly perturbed Markov chain. First, we obtain the tightness of the BSDEs parameterized by ε via some classical BSDE estimates; then we characterize the limit process through a martingale problem. By virtue of the probabilistic representation given by the BSDEs, we obtain the convergence of the corresponding PDE system as ε → 0. Moreover, we give a numerical example to demonstrate the convergence. The content of this chapter is included in the following paper: R. Tao, Z. Wu and Q. Zhang, BSDEs with regime switching: Weak convergence and applications, Journal of Mathematical Analysis and Applications, 407(1), 97-111, 2013.

In the third chapter, we consider the optimal switching problem for a regime-switching system, described by stochastic differential equations modulated by a Markov chain. The controller chooses a switching control process to maximize a payoff function. There are two kinds of "switching" here: the switching of the Markov chain is determined by the market, while the switching control is chosen by the controller. We obtain the value function and the optimal strategy by means of obliquely reflected BSDEs with a Markov chain. When the Markov chain has a two-time-scale structure, we obtain the convergence of the corresponding variational inequalities by a BSDE method, which implies the convergence of the value function. We also give a numerical example. The results of this chapter are from the following paper: R. Tao, Z. Wu and Q. Zhang, Optimal switching under a regime switching model with two-time-scale Markov chains, submitted.

In the fourth chapter, we discuss the maximum principle for the forward-backward stochastic system.
Assume the system follows a coupled forward-backward stochastic differential equation modulated by a Markov chain, and assume the control domain is convex. By the convex variation method, we give both necessary and sufficient conditions for the optimal control. Moreover, we give an application to a consumption-investment problem. The results of this chapter are included in the following paper: R. Tao and Z. Wu, Maximum principle for optimal control problems of forward-backward regime-switching system and applications, Systems & Control Letters, 61(9), 911-917, 2012.

In the following, we present the main results of this thesis.

1. Weak convergence of BSDEs with a singularly perturbed Markov chain.

We aim to obtain the weak convergence of the BSDE (0.0.17), in which the generator of the Markov chain α^ε(t) is Q^ε(t) = Q̃(t)/ε + Q̂(t). The state space of α^ε(t) is M = M_1 ∪ ... ∪ M_l, where M_k = {s_{k1}, ..., s_{km_k}} for k = 1, ..., l and |M| = m_1 + ... + m_l. Moreover, Q̃(t) has the block-diagonal structure Q̃(t) = diag(Q̃^1(t), ..., Q̃^l(t)), where for k ∈ {1, ..., l}, Q̃^k(t) is the generator of a Markov chain with state space M_k. The state X_t^ε is the solution of a stochastic differential equation with the Markov chain α_t^ε. A Markov chain α_t, or its generator Q(t), is called weakly irreducible if the system of equations v(t)Q(t) = 0, v_1(t) + ... + v_{m_0}(t) = 1 has a unique nonnegative solution v(t) = (v_1(t), ..., v_{m_0}(t)), which is called the quasi-stationary distribution.

The main result of this part is the following theorem.

Theorem 0.1. Let Y_t^ε be the solutions of the BSDEs (0.0.17). Under assumptions (A2.3)-(A2.7), the processes Y_t^ε converge weakly to a process Y_t, which satisfies a limit BSDE driven by a Brownian motion B_t and the compensated martingale measures V_t^i(j) related to a Markov chain ᾱ_t. The generator of ᾱ_t is Q̄(t) = diag(v^1(t), ..., v^l(t)) Q̂(t) diag(1_{m_1}, ..., 1_{m_l}), where 1_{m_k} = (1, ..., 1)′ ∈ R^{m_k} is an m_k-dimensional column vector and v^k(t) = (v_1^k(t), ..., v_{m_k}^k(t)) ∈ R^{1×m_k} is the quasi-stationary distribution of Q̃^k(t).
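The displayed equations of this part did not survive extraction. In standard notation, a regime-switching BSDE of the type in Theorem 0.1 and its weak limit can be sketched as follows; this is a sketch under the thesis's assumptions (A2.3)-(A2.7), with the martingale terms written in the form natural for a regime-switching filtration, not necessarily the exact displays of the original.

```latex
% Sketch of BSDE (0.0.17): B is a Brownian motion, \widetilde N^{\varepsilon}(j)
% the compensated point processes of the chain \alpha^{\varepsilon}.
\begin{equation*}
  Y_t^{\varepsilon}
  = \Phi\bigl(X_T^{\varepsilon}\bigr)
  + \int_t^T f\bigl(s,\alpha_s^{\varepsilon},X_s^{\varepsilon},Y_s^{\varepsilon}\bigr)\,\mathrm{d}s
  - \int_t^T Z_s^{\varepsilon}\,\mathrm{d}B_s
  - \sum_{j\in\mathcal{M}} \int_t^T W_s^{\varepsilon}(j)\,\mathrm{d}\widetilde N_s^{\varepsilon}(j).
\end{equation*}
% The limit BSDE is driven by the aggregated chain \bar\alpha with generator
% \bar Q(t) and the averaged driver \bar f:
\begin{equation*}
  Y_t
  = \Phi(X_T)
  + \int_t^T \bar f\bigl(s,\bar\alpha_s,X_s,Y_s\bigr)\,\mathrm{d}s
  - \int_t^T Z_s\,\mathrm{d}B_s
  - \sum_{j} \int_t^T V_s(j)\,\mathrm{d}\widetilde N_s(j).
\end{equation*}
```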
In Theorem 0.1, the averaged driver f̄ is defined by f̄(t, i, x, y) = Σ_{j=1}^{m_i} v_j^i(t) f(t, s_{ij}, x, y).

To prove this theorem, we first obtain the tightness of Y_t^ε by the Meyer-Zheng criterion. To characterize the limit process, we consider the operator associated with Q̄(t) = (λ̄_{ij}(t)) and the averaged coefficients b̄ and ā. By virtue of the uniqueness of the solution to the martingale problem for this operator, we can characterize the limit process.

Thereafter, we give a probabilistic interpretation of the corresponding PDE system, both in the sense of viscosity solutions and of classical solutions. Then we can get the convergence of the PDE system. The results of this part are given by the following theorems.

Theorem 0.2. Let u^ε be the viscosity solution of the reaction-diffusion system (0.0.19). Then for t ∈ [0, T] and x ∈ R^n, u^ε(t, x) converges to u(t, x) as ε → 0, where u is the unique viscosity solution of the limit equation (0.0.20).

Remark 0.1. Note that the convergence of u^ε(t, x) to u(t, x) means that for any (t, x) ∈ [0, T] × R^n and i ∈ M_k, u^ε(t, i, x) → u(t, k, x).

Theorem 0.3. Under assumptions (A2.3)-(A2.4) and (A2.6)-(A2.11), the PDE system (0.0.19) has a unique C_b^{1,2} solution u^ε. For all t ∈ [0, T] and x ∈ R^n, u^ε(t, x) converges to u(t, x) as ε → 0, where u is the unique C_b^{1,2} solution of the limit PDE (0.0.20).

2. Optimal switching problem for a regime-switching system with a singularly perturbed Markov chain.

In this part, we study the optimal switching problem for a regime-switching system in which α(s) is a continuous-time finite-state Markov chain. The controller can choose a switching control in the switching set N = {1, ..., N}. A switching control is a double sequence (τ_n, ξ_n)_{n≥1}, where (τ_n) is an increasing sequence of stopping times taking values in [t, T], representing the decisions on "when to switch", and the ξ_n are random variables valued in N, representing the new value of the regime after time τ_n.
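The payoff maximized in the sequel can be sketched in the usual form for switching problems: running reward plus terminal reward minus accumulated switching costs. This is the standard shape, written here as a sketch; the exact integrand and state dynamics are those of the thesis, and g denotes the matrix of switching costs introduced below.

```latex
% Payoff of a switching control I = (tau_n, xi_n)_{n >= 1} (standard form; a sketch):
\begin{equation*}
  J\bigl(i,t,p,x,I\bigr)
  = \mathbb{E}\Bigl[ \int_t^T f\bigl(s,\alpha_s, I_s, X_s\bigr)\,\mathrm{d}s
  + \Phi\bigl(X_T\bigr)
  - \sum_{n\ge 1} g_{\xi_{n-1}\xi_n}\,\mathbf{1}_{\{\tau_n < T\}} \Bigr],
  \qquad
  V_{i,p}(t,x) = \sup_{I} J\bigl(i,t,p,x,I\bigr).
\end{equation*}
```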
Given an initial regime i at time t, we define the switching control process I^i accordingly, where 1 denotes the indicator function. Our objective is to find an admissible switching control process I^{i,*} maximizing the payoff function J(i, t, p, x, I^i), where g_{ij} is the transition cost from i to j. Then V_{i,p}(t, x) := J(i, t, p, x, I^{i,*}) is called the value function of the optimal switching problem.

To solve this problem, we use a BSDE method. Consider the obliquely reflected BSDE with Markov chain (0.0.21). By virtue of a penalization method, we obtain the existence of its solution.

Theorem 0.4. Assume (A3.1)-(A3.3). Then BSDE (0.0.21) has a unique solution (Y^{t,p,x}, Z^{t,p,x}, W^{t,p,x}, K^{t,p,x}) ∈ S^2 × M^2 × H^2 × N^2.

Next, we use a verification theorem to show the uniqueness of the solution. For any switching control process I, we define an associated increasing process A^I and consider the BSDE with switching control, whose unique solution is denoted by (Y^I_i, Z^I_i, W^I_i). The uniqueness of the solution of BSDE (0.0.21) and the optimal strategy are obtained by the following theorem.

Theorem 0.5. Assume (A3.1)-(A3.3). Let (Y^{t,p,x}, Z^{t,p,x}, W^{t,p,x}, K^{t,p,x}) be a solution of BSDE (0.0.21) in S^2 × M^2 × H^2 × N^2. Then:
(1) For any I ∈ A_t^i, Y dominates the corresponding controlled solution Y^I.
(2) Set τ_0^* = t and ξ_0^* = i, and define the sequence {τ_j^*, ξ_j^*} recursively, where ξ_j^* is the random variable attaining the corresponding maximum. Then the strategy I^* = {(τ_j^*, ξ_j^*)}
is an optimal strategy for the optimal switching problem, and the resulting identity implies the uniqueness of the solution of BSDE (0.0.21).

Next, we give a probabilistic representation for the corresponding variational inequalities.

Theorem 0.6. Assume (A3.1)-(A3.3). The value function V(t, x) is the unique viscosity solution of the corresponding variational inequalities with V_{i,p}(T, x) = Φ(x).

In the next part, we assume that the Markov chain α^ε has a two-time-scale structure, generated by Q^ε = [λ^ε_{pq}] with Q^ε = Q̃/ε + Q̂, where Q̃ = [λ̃_{pq}] and Q̂ = [λ̂_{pq}]. Assume further that the state space of α^ε is M = M_1 ∪ ... ∪ M_L, where M_k = {s_{k1}, ..., s_{km_k}} for k = 1, ..., L and |M| = m_1 + ... + m_L.

First, we give an estimate of the increasing process K_i^{t,p,x} in BSDE (0.0.21).

Lemma 0.1. The process K_i^{t,p,x} is absolutely continuous with respect to the Lebesgue measure.

Consider the variational inequalities with parameter ε. We next define a limit optimal switching problem with averaged coefficients. Let v^k = (v_1^k, ..., v_{m_k}^k) be the stationary distribution of Q̃^k, and let ᾱ denote a new Markov chain generated by Q̄ = diag(v^1, ..., v^L) Q̂ diag(1_{m_1}, ..., 1_{m_L}), where 1_n = (1, ..., 1)′ ∈ R^n. Write Q̄ = [λ̄_{pq}] (p, q ∈ {1, ..., L}). Consider the limit optimal switching problem with averaged coefficients b̄, σ̄σ̄′, f̄ and the Markov chain ᾱ; the corresponding HJB equation (0.0.24) has terminal condition V_{i,k}(T, x) = Φ(x).

The main result of this part is the following theorem.

Theorem 0.7. For k = 1, ..., L and l = 1, ..., m_k, we have V^ε_{i,s_{kl}}(t, x) → V_{i,k}(t, x) as ε → 0. Moreover, V_{i,k}(t, x) is the unique viscosity solution of the HJB equation (0.0.24) for the limit optimal switching problem.

3. Stochastic maximum principle for a forward-backward regime-switching system.

In this part, we discuss the optimal control problem for a regime-switching system in which the state space of the Markov chain α_t is M = {1, ..., k}.
Here W_t = (W_t(1), ..., W_t(k)) and n_t = (n_t(1), ..., n_t(k)), where n_t(j) = 1_{α_{t-} ≠ j} λ(α_{t-}, j). Denote by U the class of admissible controls taking values in a convex domain U and satisfying the usual integrability condition. Define the cost functional through deterministic measurable functions l, h, r; the objective of the optimal control problem is to maximize the cost functional over the admissible control set U.

First, we consider the necessary condition for optimality. Let u(·) be an optimal control for problem (0.0.25), with corresponding trajectory (x(·), y(·), z(·), W(·)). Let v(·) be another adapted control process (which need not take values in U) such that u(·) + v(·) ∈ U. Since the control domain U is convex, for any 0 ≤ ρ < 1 we have u^ρ(·) := u(·) + ρv(·) ∈ U. We introduce a variational equation, which is a linear FBSDE with Markov chain, and then prove the following variational inequality.

Lemma 0.2. Under assumptions (A4.1)-(A4.3), the corresponding variational inequality holds.

Define the Hamiltonian H: [0, T] × M × R × R × R^{1×d} × R^k × U × R × R^{1×d} × R → R by
H(t, i, x, y, z, w, u, p, k, q) = ⟨p, b(t, i, x, u)⟩ + ⟨k, σ(t, i, x, u)⟩ - ⟨q, g(t, i, x, y, z, wn, u)⟩ + l(t, i, x, y, z, wn, u), (0.0.29)
where wn = (w(1)n(1), ..., w(k)n(k)) and n(j) = 1_{i≠j} λ_{ij}. Next, we introduce the adjoint equation. Using Ito's formula, we obtain the main result of this part.

Theorem 0.8 (Maximum Principle). Let u(·) be an optimal control, let (X(·), Y(·), Z(·), W(·)) be the corresponding trajectory, and let (P(·), K(·), Q(·)) be the unique solution of the adjoint equation. Then, for any v ∈ U, we have
H_u · (v - u_t) ≤ 0, a.e., a.s. (0.0.31)

Under additional concavity conditions, we obtain a sufficient condition for optimality.

Theorem 0.9. Suppose (A4.1)-(A4.3) hold. In addition, assume that h, r, H are concave with respect to (X, Y, Z, u, W) and that the terminal condition Y_T = Φ(X_T) is of the special form Y_T = K(α_T)X_T, where K is a deterministic measurable function.
Let (P, Q, K, M) be the solution of the adjoint equation with respect to the control u(·). Then u(·) is an optimal control if it satisfies (0.0.31). In the last part, we apply the maximum principle to a consumption-investment problem.
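As a toy illustration of condition (0.0.31), independent of the thesis's model: for a concave Hamiltonian on a convex control domain, the directional derivative at the optimum is nonpositive in every feasible direction. The quadratic H below is made up for demonstration; its unconstrained maximizer lies outside the domain, so the optimal control sits on the boundary.

```python
import numpy as np

# Toy check of H_u . (v - u*) <= 0 on the convex control domain U = [0, 2].
# H is a made-up concave function maximized (unconstrained) at u = 2.5.

def H(u):
    return -(u - 2.5) ** 2          # concave in u

def H_u(u):
    return -2.0 * (u - 2.5)         # derivative of H

lo, hi = 0.0, 2.0                   # convex control domain U
u_star = min(max(2.5, lo), hi)      # projection of the maximizer onto U

# The variational inequality (0.0.31) holds at u* for every v in U.
checks = [H_u(u_star) * (v - u_star) <= 1e-12 for v in np.linspace(lo, hi, 21)]
print(u_star, all(checks))
```

At the boundary point u* = 2 the derivative H_u(u*) is strictly positive, but every feasible direction v - u* is nonpositive, so the product is nonpositive throughout U.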
Keywords/Search Tags: Backward stochastic differential equations, Two-time-scale Markov chains, Singular perturbation, Weak convergence, Meyer-Zheng topology, Optimal switching, Oblique reflection, Variational inequalities, Viscosity solution, Stochastic maximum principle