
Research And Application Of Machine Learning Algorithm Based On Stochastic Variational Inequalities

Posted on: 2024-05-21
Degree: Master
Type: Thesis
Country: China
Candidate: C Z Yang
Full Text: PDF
GTID: 2530306917990629
Subject: Statistics
Abstract/Summary:
Machine learning algorithms are widely used to solve problems in management science, transportation, game theory, and optimal control. However, practical problems often involve random factors, and ignoring them can cause large errors in the computed solution, damaging economic or social interests. Moreover, in the era of big data, practical problems involve massive data sets, which makes efficient data processing a difficult task. It is therefore of great practical and theoretical significance to study efficient machine learning algorithms for the stochastic setting.

The stochastic variational inequality (SVI) model has attracted increasing attention because it is simple and well suited to modeling stochastic machine learning problems. Since the SVI is stated in terms of a mathematical expectation, it is usually hard to solve directly, and existing deterministic algorithms generally cannot be applied as-is; designing efficient numerical algorithms for SVIs has thus become a widely discussed problem. In addition, the structure of the feasible set is an important factor in algorithm design: admitting a wider range of feasible sets makes an algorithm more efficient and more broadly applicable. Algorithms built on the Euclidean distance often cannot exploit the structure of the feasible set and therefore solve the problem inefficiently. One remedy is to choose a better-suited distance function, such as the Bregman distance, which lets the algorithm handle a wider range of feasible sets and solve the problem efficiently. On the other hand, research on applying stochastic Bregman algorithms to practical SVIs is still at an early stage. How to model practical problems, transform them into SVI
problem models, and then solve them with stochastic Bregman algorithms, has become an active topic of discussion.

In this thesis we first study the SVI in expectation form, propose two classes comprising three Bregman numerical algorithms, and prove their almost-sure convergence, convergence rate, and complexity. Finally, the proposed algorithms are applied to four kinds of practical problems, providing theoretical and practical value for solving machine learning problems of similar type. The main contents and innovations are as follows.

First, a Bregman reflected gradient algorithm for solving stochastic variational inequalities is proposed. Compared with the classical variance-reduced extragradient algorithm, the reflected gradient algorithm uses only one projection per iteration, which effectively saves CPU time. The new algorithm uses the Bregman distance; compared with the Euclidean distance, it admits a wider class of feasible sets, making the algorithm more adaptable and more efficient. Convergence is analyzed under the Minty inequality condition, and the convergence rate and complexity are then measured.

Second, a Bregman algorithm for stochastic variational inequalities based on the single-call technique is proposed, which samples from the same sample space only once per iteration. Compared with the classical variance-reduced extragradient algorithm, the new algorithm saves the time spent on dynamic sampling, making it more efficient. On this basis, a Bregman proximal point algorithm with line search is proposed: the line search rule adaptively selects the step size in each iteration, which makes the computed solutions more accurate. The Bregman distance is again used, allowing the algorithm to handle a wider set of
feasible sets, which makes the algorithm more efficient and more widely applicable. Convergence is proved under the Minty inequality condition, and the convergence rate and complexity of the algorithms are measured via the natural residual function.

Third, the proposed Bregman algorithms are applied to machine learning problems in the stochastic setting. Four problem models are solved: sparse binary classification, stochastic complementarity, stochastic fractional constrained optimization, and Nash-Cournot equilibrium. Each machine learning problem is transformed into an SVI model and then solved with the new algorithms proposed in this thesis, which provides new ideas and methods for designing efficient machine learning algorithms and handling practical stochastic problems.
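To illustrate why a Bregman distance can suit a constrained feasible set better than the Euclidean distance, here is a minimal sketch (not the thesis's algorithms): stochastic mirror descent for a toy SVI on the probability simplex, using the entropy Bregman distance, whose prox step has a closed multiplicative form and needs no Euclidean projection. The function names (`entropy_prox_step`, `stochastic_mirror_descent`) and the toy monotone operator F(x) = E[(A + noise)x] are illustrative assumptions.

```python
import numpy as np

def entropy_prox_step(x, g, step):
    """One Bregman (mirror) step with the entropy distance on the simplex:
    argmin_y <g, y> + (1/step) * KL(y, x) over the simplex has the
    closed form y_i proportional to x_i * exp(-step * g_i)."""
    y = x * np.exp(-step * (g - g.min()))  # shift g for numerical stability
    return y / y.sum()

def stochastic_mirror_descent(F_sample, x0, steps=2000, gamma=0.5):
    """Averaged stochastic mirror descent for VI(F, simplex):
    iterate the entropy prox step on noisy samples of F and return
    the ergodic (running) average of the iterates."""
    x, x_avg = x0.copy(), np.zeros_like(x0)
    for k in range(1, steps + 1):
        g = F_sample(x)                         # noisy sample of F(x)
        x = entropy_prox_step(x, g, gamma / np.sqrt(k))
        x_avg += (x - x_avg) / k                # running average
    return x_avg

# Toy SVI: F(x) = E[(A + noise) x] with a symmetric positive definite A,
# so the VI solution minimizes x' A x over the 3-simplex.
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
F_sample = lambda x: (A + 0.1 * rng.standard_normal(A.shape)) @ x
sol = stochastic_mirror_descent(F_sample, np.ones(3) / 3)
print(sol)  # stays on the simplex; the middle coordinate is driven toward 0
```

The point of the sketch: because the entropy prox step is a closed-form reweighting, the iterates stay feasible automatically, whereas a Euclidean method would need an explicit projection onto the simplex in every iteration.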
Keywords/Search Tags: Stochastic variational inequality problem, Bregman distance, Stochastic approximation, Variance reduction, Machine learning algorithm