
The Control And Game Of Partially Observed Forward-Backward Stochastic System With Jump

Posted on: 2024-07-08    Degree: Doctor    Type: Dissertation
Country: China    Candidate: T Chen    Full Text: PDF
GTID: 1520307202961489    Subject: Financial mathematics and financial engineering
Abstract/Summary:
In control and game problems, decision-makers can only act on the information available to them. In many cases they cannot observe the true state completely, so they must make decisions based on the observed information and estimate the true state, which leads to the optimal filtering problem. In control problems, the observation process is usually assumed to be a continuous stochastic process. For control systems with discontinuous observation processes, estimation and filtering are considerably more difficult, and relevant results are scarce; nevertheless, such problems are of great practical significance and urgently need to be studied. In game problems, moreover, the single-objective problem becomes a multi-objective one: each decision-maker must take the strategies of the other participants into account, and the task changes from finding the optimal control to finding a Nash equilibrium. In recent years, with the rapid development of technology and the economy, the connections between individuals have become closer and large-population models have found wider application, so research on mean-field games has made great progress and gained influence. Building on previous research, this thesis conducts a comprehensive study of several types of control and game problems for jump-diffusion systems and partially observed systems, and applies the theoretical results to some practical problems in financial markets. The structure of this thesis is as follows.

In Chapter 1, we introduce the research background and elaborate on the main contributions of each chapter.

In Chapter 2, we consider the optimal control problem of a stochastic control system driven by a Brownian motion and a Markov chain in the progressive structure. In our model, the relevant results differ essentially from those in the predictable structure. In detail, we obtain the necessary and the sufficient stochastic maximum principles, which give necessary and sufficient conditions for the optimal control, respectively. Our stochastic maximum principle consists of two parts, a continuous part and a jump part, which characterize the optimal control at continuous times and at jump times, respectively. This shows that the progressive structure can better describe the optimal control of a stochastic control system with a Markov chain. Moreover, we give a linear-quadratic problem to illustrate how to find the explicit form of the optimal control in the progressive structure, and we show that the cost functional can indeed attain a smaller value in the progressive structure.

In Chapter 3, we study an optimal control problem for a partially observed mean-field stochastic control system with random jumps. In our model, the control domain is non-convex. Applying a special spike variation, we obtain a stochastic maximum principle, which is a necessary condition for the optimal control. Since there are essential differences between stochastic control systems with and without jumps, the estimate of the jump term (that is, the third estimate in (2.10) in [84]) is controversial in the predictable structure; for more details, see Song, Tang and Wu [79]. Thus we apply the special progressive spike variation, first introduced in [79], to overcome this deficiency. The controlled state process is governed by a stochastic differential equation with jumps of mean-field type, and the observation process is also a mean-field stochastic differential equation. The control is allowed to enter all coefficients, including the diffusion terms, the jump terms, and the drift term of the observation. In the case of partial information, we introduce three first-order adjoint equations, two of which are defined by mean-field backward stochastic differential equations with jumps; the second-order adjoint equation is still a backward stochastic differential equation with jumps. Thus, sharper estimates for the first-order variational equations are needed. Compared with the existing literature, the main difficulty is to give appropriate estimates for the first-order variational equations of the state and the observation, as the coefficients are progressive rather than predictable; in fact, the auxiliary process becomes more complex due to the progressive coefficients. Combining Girsanov's theorem, we derive a maximum principle under partial information. It is worth mentioning that our partially observed maximum principle degenerates into existing results under some reasonable assumptions.

In Chapter 4, we investigate a linear-quadratic optimal control problem for a partially observed forward-backward stochastic system with random jumps. In our model, the observation is no longer a Brownian motion but a general controlled stochastic process driven by a Brownian motion and a Poisson random measure, whose noise is correlated with that of the state equation and whose drift term is linear in the state and control processes. This assumption makes the problem more natural and consistent with practice, because the observation process is not necessarily continuous: for example, the stock price process in a financial market is usually discontinuous under the impact of emergencies such as macro policies, and the signal reception process in wireless communication may be interrupted for some reason. In the existing literature, little research is available on stochastic control and filtering problems with discontinuous observations. Under the above assumptions on the observation process, the Girsanov transformation is invalid. Therefore, we extend the backward separation approach from continuous to discontinuous stochastic control systems, and then apply it to solve this problem. Necessary and sufficient conditions for the optimal control are obtained, and we also give the optimal filtering of the stochastic Hamiltonian system. Furthermore, a feedback representation of the optimal control strategy is given. Moreover, we give two special examples to illustrate that our theoretical results apply to many cases. As an application, we study an asset-liability problem and obtain the feedback representation of the optimal control strategy; this financial application illustrates the practical significance of our results.

In Chapter 5, we study a class of linear-convex problems for a partially observed large-population system with input constraints, where two types of mean-field terms, the asynchronous style (state-averages) and the synchronous style (state expectations), are both considered. Here, the observation is a general controlled stochastic process rather than a Brownian motion, whose drift term is linear in the state and control processes. Thus there exists a cyclic dependence between control and observation, which can be overcome by the backward separation approach, decomposing the state and the observation. Then, for the general case, by using the mean-field method to freeze the asynchronous state-averages, we obtain the related decentralized strategies by virtue of the Hamiltonian approach, through a Hamiltonian system and a related consistency condition, which are given by two types of mean-field forward-backward stochastic differential equations with partial information. By virtue of the method of continuation and a discounting method, the well-posedness of such equations is proved under two different conditions. Next, when the cost becomes quadratic and the control constraint becomes a linear subspace, we give the feedback representation of the optimal decentralized strategies by the Riccati approach, and the corresponding consistency condition is also given in Riccati type. Finally, as an application, a general optimal consumption problem is considered to show the significance of our results.

We also study a large-population game problem in which the control variable enters the diffusion term. In our model, the weighting matrices for the control process, and even for other variables, are allowed to be indefinite in the cost functional. Using the mean-field method to freeze the state-averages, we obtain the related decentralized strategies by virtue of the Hamiltonian approach through a stochastic Hamiltonian system. Applying the method of relaxed compensators, we show the well-posedness of the stochastic Hamiltonian system in the indefinite case, which is a fully coupled regime-switching forward-backward stochastic differential equation that does not satisfy the monotonicity condition. By a decoupling technique, we present the Hamiltonian-type consistency condition, which is a forward-backward stochastic differential equation driven only by a Markov chain. Inspired by the method of equivalent cost functionals, we show that the decentralized strategies form an ε-Nash equilibrium. Finally, as an application, an optimal investment problem in a regime-switching market is considered to show the significance of our results.

In Chapter 6, we consider portfolio selection under a non-Markovian regime-switching model with a random horizon. Unlike previous works, the dynamics of the assets are described by non-Markovian regime-switching models, in the sense that all market parameters are predictable with respect to the filtration generated jointly by the Markov chain and the Brownian motion. The Markov chain is assumed to be independent of the Brownian motion, so the market is incomplete. The time horizon is a general random time rather than a stopping time, which implies that the exit time depends not only on price information but also on other uncertain factors in the market. We use a submartingale to characterize the conditional distribution of the random time and reconstruct the portfolio selection problem according to the Doob-Meyer decomposition theorem and some assumptions. We study a continuous-time mean-variance portfolio selection problem under a non-Markovian regime-switching model with a general random time horizon, and we formulate this problem as a constrained stochastic linear-quadratic optimal control problem. When we apply the linear-quadratic approach to the mean-variance problem, the key difficulty is to prove the global solvability of the so-called stochastic Riccati equation and of the auxiliary regime-switching backward stochastic differential equation arising from the problem. When the time horizon and the market parameters are both random, which is the case considered in this chapter, the corresponding stochastic Riccati equation is more complicated: it is a fully nonlinear singular backward stochastic differential equation for which the usual assumptions (such as the Lipschitz and linear growth conditions) are not satisfied. With the help of BMO martingale techniques and a comparison theorem, we obtain the existence and uniqueness of the solution of the stochastic Riccati equation. Then we obtain the efficient portfolios in feedback form as well as the efficient frontier.
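To fix ideas, a generic partially observed linear-quadratic setup with jumps, of the kind studied in the thesis, can be sketched as follows; the notation below is purely illustrative and not taken from the thesis, and all coefficients are hypothetical placeholders:

\[
\begin{aligned}
dx_t &= \big(A_t x_t + B_t u_t\big)\,dt + C_t\,dW_t + \int_E D_t(e)\,\tilde N(dt,de),\\
dY_t &= \big(F_t x_t + G_t u_t\big)\,dt + dW^1_t + \int_E H_t(e)\,\tilde N^1(dt,de),
\end{aligned}
\]

where \(W, W^1\) are Brownian motions, \(\tilde N, \tilde N^1\) are compensated Poisson random measures, and the admissible control \(u\) is required to be adapted to the observation filtration \(\mathcal F^Y_t=\sigma\{Y_s: 0\le s\le t\}\). A quadratic cost of the form

\[
J(u)=\tfrac12\,\mathbb E\!\left[\int_0^T \big(\langle Q_t x_t, x_t\rangle + \langle R_t u_t, u_t\rangle\big)\,dt + \langle M x_T, x_T\rangle\right]
\]

is then minimized, and the optimal filter \(\hat x_t = \mathbb E\big[x_t \,\big|\, \mathcal F^Y_t\big]\) replaces the unobservable state in the feedback representation of the optimal control. Because the observation \(Y\) carries jumps and depends on the control, a Girsanov change of measure is unavailable in this setting, which is what motivates the backward separation approach described above.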
Keywords/Search Tags: Stochastic maximum principle, Progressive structure, Markov chain, Forward-backward stochastic differential equation, Impulse control, Partial observation, Linear-quadratic problem, Mean-field game, Large population system, Linear-convex problem