Large Scale Optimization Algorithms For Machine Learning

Posted on: 2020-06-27
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Luo
Full Text: PDF
GTID: 1368330623463952
Subject: Computer Science and Technology
Abstract/Summary:
Machine learning methods provide computers with powerful data-analysis capabilities and are widely used in applications such as data mining, computer vision, and natural language processing. Training a machine learning model can usually be viewed as solving a large-scale optimization problem whose cost is dominated by matrix operations, so high-quality matrix approximations can significantly reduce the time and space complexity of these algorithms. This dissertation focuses on matrix approximation within optimization algorithms and on designing better matrix approximation algorithms from an optimization viewpoint. The proposed methods apply to Gaussian process regression, convex online optimization, factorization machines, and related problems. This work makes the following contributions.

· We propose a regularized matrix approximation based on unitarily invariant norms. The approximation has a tighter error bound and is better conditioned for positive definite input, which makes it more suitable for use inside optimization algorithms.

· We propose the spectral shifting Nyström method, which approximates a kernel matrix by sampling. For matrices whose bottom singular values are large, it achieves markedly lower approximation error than conventional Nyström methods, and it performs well on Gaussian process regression (see the first sketch after this list).

· We propose robust frequent directions, which improves frequent directions and incremental SVD with an additional regularization term. Applying it to the online Newton step yields a hyperparameter-free online learning algorithm (see the second sketch after this list).

· We propose the support matrix machine for classification problems whose samples are matrices. The model uses spectral elastic net regularization, which enjoys a grouping effect over the rows (or columns) and can exploit the structural information of matrix-form samples.

· We propose generalized frequent directions to approximate symmetric indefinite matrices, and a sketched follow-the-regularized-leader algorithm for online factorization machines. The algorithm approximates the model's gradients by exploiting the structure of the specific loss functions; it enjoys a regret bound similar to that of classical follow-the-regularized-leader at much lower computational cost.

· We also extend frequent directions to approximate the product of two general matrices, which is more accurate and more stable than methods based on sub-sampling or random projection. Potential applications include canonical correlation analysis, multi-task learning, and multi-label classification.
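To make the Nyström idea concrete, below is a minimal NumPy sketch of a Nyström approximation combined with a simple spectral shift added back on the diagonal. The shift estimator delta used here (the leftover trace averaged over the unsampled dimensions) and the function name shifted_nystrom are illustrative assumptions, not the exact construction of the dissertation.

    import numpy as np

    def shifted_nystrom(K, m, seed=0):
        # Rank-m Nystrom approximation of an n x n PSD kernel matrix K,
        # with a crude spectral shift delta restored on the diagonal.
        # Assumes m < n; the shift estimator is illustrative only.
        n = K.shape[0]
        rng = np.random.default_rng(seed)
        idx = rng.choice(n, size=m, replace=False)   # uniform column sampling
        # Illustrative shift: trace mass outside the sampled block,
        # averaged over the remaining dimensions (clipped at zero).
        delta = max((np.trace(K) - np.trace(K[np.ix_(idx, idx)])) / (n - m), 0.0)
        Ks = K - delta * np.eye(n)                   # shift the spectrum down
        C = Ks[:, idx]                               # sampled columns
        W = Ks[np.ix_(idx, idx)]                     # intersection block
        return C @ np.linalg.pinv(W) @ C.T + delta * np.eye(n)  # shift back up

Shifting the spectrum down before sampling and restoring it afterwards is what can help when the bottom singular values of the kernel matrix carry non-negligible mass, the regime the abstract singles out.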
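Similarly, here is a minimal sketch of frequent directions extended with a scalar regularizer in the spirit of the robust variant, under the assumption that half of the squared singular value removed at each shrinking step is accumulated into a scalar alpha, so that A.T @ A is approximated by B.T @ B + alpha * I. The names and the exact alpha update are illustrative, not necessarily the dissertation's formulation.

    import numpy as np

    def robust_frequent_directions(A, ell):
        # Stream the rows of A (n x d) through a sketch B (ell x d) plus a
        # scalar alpha, so that A.T @ A ~ B.T @ B + alpha * np.eye(d).
        # Plain frequent directions is recovered by discarding alpha.
        # Assumes ell <= d, the usual sketching regime.
        d = A.shape[1]
        B = np.zeros((ell, d))
        alpha = 0.0
        for row in A:
            zero_rows = np.flatnonzero(~B.any(axis=1))  # empty sketch slots
            if zero_rows.size == 0:
                # Sketch is full: shrink all directions by the smallest one.
                _, s, Vt = np.linalg.svd(B, full_matrices=False)
                shrink = s[-1] ** 2
                B = np.sqrt(np.maximum(s ** 2 - shrink, 0.0))[:, None] * Vt
                alpha += shrink / 2.0          # keep half the removed mass
                zero_rows = np.flatnonzero(~B.any(axis=1))
            B[zero_rows[0]] = row
        return B, alpha

Inside an online Newton step, a sketch of this form plus alpha * I can stand in for the full second-order matrix, which suggests how the hyperparameter-free online learning algorithm described above can arise.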
Keywords/Search Tags:Machine Learning, Optimization Algorithms, Matrix Approximation