
Second-order Optimality Necessary Conditions In A Class Of Discrete Optimal Control Problems

Posted on: 2011-10-06
Degree: Master
Type: Thesis
Country: China
Candidate: J L Wang
Full Text: PDF
GTID: 2120360305454855
Subject: Operational Research and Cybernetics
Abstract/Summary:
With the expansion of system theory and the extensive application of computer technology, discrete system theory has developed into a branch of control theory fully parallel to continuous system theory. It has become an important component of control theory, and in some areas the study of discrete systems goes deeper than that of continuous systems. The concepts and methods of discrete system theory form part of the basis of modern control theory, and their importance is widely recognized. Discrete theory plays an important role in automatic control engineering, communications, radar technology, biomedical engineering, image testing technology, power systems, nuclear physics, and so on. In an economic model, a new policy may have to refer to the two or more most recent previous results; in an ecological model, in order to maintain ecological balance, a capture may be carried out on the basis of the results of two or more previous captures. Problems of this kind arise frequently and are important. Motivated by this, this paper considers discrete optimal control problems in which the new state depends on the two previous states, and we obtain second-order necessary optimality conditions in three cases.

First, we consider the discrete optimal control problem whose functional depends on one control variable ui, where the problem data are twice continuously differentiable, xi ∈ Rn is the state variable, ui ∈ Rr is the control parameter, and N is a given number of steps. We call the vector ξ = (x0, x1, …, xN) a trajectory and w = (u1, …, uN-1) a control. Here x0, x1 are the two starting points and xN is the end point of the corresponding trajectory, so (x0, x1, w) determines the trajectory ξ = (x0, x1, …, xN). The discrete optimal control problem is to minimize the resulting cost functional.

We first introduce Pontryagin's function and the Lagrangian.

Definition 1.1.
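The problem class described above, a discrete system whose next state depends on the two previous states, can be sketched numerically. The linear dynamics, coefficients, and quadratic cost below are invented purely for illustration; the thesis treats general twice continuously differentiable data.

```python
# Hypothetical illustration of a discrete system with a two-step memory,
# x_{i+1} = f(x_i, x_{i-1}, u_i).  Coefficients and cost are example choices.

def step(x_curr, x_prev, u):
    # Example scalar linear dynamics depending on the TWO previous states.
    return 0.5 * x_curr + 0.3 * x_prev + u

def trajectory(x0, x1, controls):
    # Build the trajectory xi = (x0, x1, ..., xN) from the two starting
    # points and the control sequence w = (u1, ..., u_{N-1}).
    xs = [x0, x1]
    for u in controls:
        xs.append(step(xs[-1], xs[-2], u))
    return xs

def cost(xs, controls):
    # A simple quadratic cost standing in for the functional to be minimized.
    return sum(x * x for x in xs) + sum(u * u for u in controls)

xs = trajectory(1.0, 0.0, [0.0, 0.0, 0.0])
print(len(xs))                     # N + 1 = 5 states
print(round(cost(xs, [0.0] * 3), 4))
```

The point of the sketch is only that a triple (x0, x1, w) of two starting points plus a control sequence determines the whole trajectory, exactly as in the problem statement above.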
We say that (ξ, w) satisfies the Lagrange multiplier rule if there exists λ such that λ ≠ 0, λ0 ≥ 0, λ1 ≥ 0, μ ≥ 0, and the conditions below hold. Let ∧ = ∧(ξ, w) be the set of all Lagrange multipliers λ corresponding to (ξ, w).

Put Ii = {j : gi,j(ui) = 0}, i = 1, …, N-1, and by (?) denote the set of all vectors (h, v), h = (h0, h1, …, hN) ∈ Rn(N+1), v = (v1, …, vN-1) ∈ Rr(N-1), such that the conditions below hold. We denote by M the maximal linear subspace of (?); that is, M consists of all vectors (h, v) such that (1.12)-(1.16) hold, with the inequalities in (1.14) and (1.15) replaced by equalities.

Let K = (K1, K2)T. Define {ai} and {bi} by ai + bi = Ci, ai-1bi = -Ci. Define Bi, i = 0, …, N, as follows, and let B be the following block matrix, where 0 denotes the zero matrix of the corresponding size. For a given Lagrange multiplier λ, we define the quadratic form Ωλ, where A[x]2 = (Ax, x) denotes the quadratic form of the bilinear mapping A. Let ∧a = ∧a(ξ, w) be the set of all Lagrange multipliers λ ∈ ∧(ξ, w) such that indMΩλ ≤ m(N-1) + k1 + k2 - 2n - r(N-1) + dim(ker B), where indMΩλ stands for the index of Ωλ restricted to the subspace M, that is, the maximal dimension of a subspace of M on which Ωλ is negative definite, and dim(ker B) is the dimension of the set of all vectors (h0, h1, v) with B(h0, h1, v)T = 0. We have the following theorem.

Theorem 1.1. Suppose (ξ, w) is a local minimum of problem (1.1)-(1.4). Then ∧a(ξ, w) is nonempty and, furthermore, the corresponding second-order necessary condition holds.

We then considered the discrete optimal control problem whose functional depends on the two control variables ui-1 and ui, and also the problem whose functional does not depend on the control variable. In each case we obtained results similar to Theorem 1.1.
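The index of a quadratic form, as used in the condition on indMΩλ above, is the maximal dimension of a subspace on which the form is negative definite; for a symmetric matrix this equals its number of negative eigenvalues. The following sketch illustrates this on an invented 2x2 symmetric matrix, using closed-form eigenvalues so the example stays self-contained; it is not the form Ωλ of the thesis.

```python
import math

# Hypothetical sketch: the index of the quadratic form
# Omega(h) = a*h1^2 + 2*b*h1*h2 + c*h2^2 with symmetric matrix [[a, b], [b, c]]
# is the number of negative eigenvalues of that matrix.

def index_2x2(a, b, c):
    # Eigenvalues from the characteristic polynomial t^2 - tr*t + det = 0.
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    eigs = [(tr - disc) / 2.0, (tr + disc) / 2.0]
    # Count negative eigenvalues = maximal dimension of a negative-definite
    # subspace for this form.
    return sum(1 for e in eigs if e < 0)

print(index_2x2(1.0, 0.0, -2.0))   # one negative direction, index 1
print(index_2x2(-1.0, 0.0, -3.0))  # negative definite, index 2
```

Restricting the form to a subspace M, as in indMΩλ, amounts to counting negative eigenvalues of the matrix of the form expressed in a basis of M.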
Keywords/Search Tags:Discrete optimal control, Mathematical programming, Optimality conditions