
Approximate Lagrange Multipliers And KKT Conditions For Constrained Vector Optimization Problems

Posted on: 2014-01-19
Degree: Doctor
Type: Dissertation
Country: China
Candidate: R X Li
Full Text: PDF
GTID: 1220330401954022
Subject: Applied Mathematics
Abstract/Summary:
Using techniques and methods of variational analysis, we study weak approximate Pareto solutions and approximate KKT points of constrained vector optimization problems in infinite dimensional spaces. Metric regularity and the Robinson-Ursescu theorem for multifunctions play very important roles in our study of the stability of approximate KKT points. We also study metric regularity and a Robinson-Ursescu type theorem for a class of approximately convex multifunctions. Our main results are the following:

1. In real Hilbert spaces, in terms of the proximal normal cone and the proximal coderivative, which express second-order variational behavior, we establish a Lagrange multiplier rule for weak approximate Pareto solutions of constrained vector optimization problems. In this setting, our Lagrange multiplier rule improves the main result on vector optimization in Zheng and Ng (SIAM J. Optim. 21, 886-911, 2011). From this result, we derive necessary conditions for weak approximate Pareto solutions of constrained vector optimization problems. In particular, we introduce the notion of a fuzzy proximal Lagrange point and prove that each Pareto (or weak Pareto) solution is a fuzzy proximal Lagrange point.

2. We consider approximate KKT points for smooth and for cone-convex constrained vector optimization problems, and establish stability results for approximate KKT points of both classes of problems. In particular, under much weaker conditions, we extend and improve some results of Durea, Dutta and Tammer (Optimization, 60 (2011), pp. 823-838) to infinite dimensional spaces.

3. We introduce the notions of approximate KKT+ and KKT++ points for set-valued constrained vector optimization problems. When these new notions are restricted to smooth and cone-convex constrained vector optimization problems, they are equivalent, and they reduce to the classical KKT point in scalar optimization.
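For orientation, a common scalar-problem formulation of an ε-approximate KKT point (the thresholds and the vector-valued generalization used in this thesis may differ; this sketch is not taken from the thesis itself) reads:

```latex
% For the smooth problem  \min f(x)  subject to  g_i(x) \le 0,\ i = 1, \dots, m,
% a point \bar{x} is an \varepsilon-approximate KKT point if there exist
% multipliers \lambda_i \ge 0 such that
\Bigl\| \nabla f(\bar{x}) + \sum_{i=1}^{m} \lambda_i \nabla g_i(\bar{x}) \Bigr\| \le \varepsilon,
\qquad
\lambda_i \, \bigl| g_i(\bar{x}) \bigr| \le \varepsilon, \quad i = 1, \dots, m.
```

Taking ε = 0 recovers the classical KKT conditions (stationarity and complementary slackness), which is the sense in which the approximate notions above "reduce to the classical KKT point in scalar optimization".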
In particular, we establish stability results for these approximate points for set-valued, smooth, and cone-convex constrained vector optimization problems.

4. Metric regularity is closely linked with metric subregularity and error bounds; these are fundamental notions that play very important roles in optimization theory. We study metric regularity for a class of approximately convex (γ-paraconvex) multifunctions. Via a Robinson-Ursescu type theorem, we provide necessary conditions for the (1, γ)-metric regularity of γ-paraconvex multifunctions in infinite dimensional spaces, and we also study (1, γ)-error bounds for such multifunctions.
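For reference, the standard definition of metric regularity, together with one common form of γ-paraconvexity in the sense of Rolewicz (the precise constants and the (1, γ) variants used in the thesis may differ), can be sketched as:

```latex
% Metric regularity: a multifunction F : X \rightrightarrows Y is metrically
% regular at (\bar{x}, \bar{y}) \in \operatorname{gph} F if there exist
% \kappa > 0 and neighborhoods U of \bar{x} and V of \bar{y} such that
d\bigl(x, F^{-1}(y)\bigr) \le \kappa \, d\bigl(y, F(x)\bigr)
\quad \text{for all } x \in U, \; y \in V.

% \gamma-paraconvexity (one common form): there exists C \ge 0 such that,
% with B_Y the closed unit ball of Y,
t\,F(x_1) + (1-t)\,F(x_2)
\subseteq F\bigl(t x_1 + (1-t) x_2\bigr)
+ C\, t(1-t)\, \|x_1 - x_2\|^{\gamma}\, B_Y
\quad \text{for all } x_1, x_2 \in X,\ t \in [0,1].
```

For γ = 1 and C = 0 the inclusion is ordinary convexity of the graph, so γ-paraconvexity quantifies how far a multifunction may deviate from convexity.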
Keywords/Search Tags: subdifferential, normal cone, coderivative, constrained vector optimization problem, optimality condition, Pareto point, Lagrange multiplier, approximate KKT point, metric regularity, set-valued mapping, error bound, γ-paraconvex multifunction