
Kernel optimization and distributed learning algorithms for support vector machines

Posted on: 2007-07-15
Degree: Ph.D.
Type: Thesis
University: University of California, Los Angeles
Candidate: Lu, Yumao
GTID: 2458390005482022
Subject: Statistics
Abstract/Summary:
The support vector machine (SVM) has been one of the most successful tools for pattern recognition and function estimation over the past ten years. The underlying training problem can be formulated as a large quadratic programming (QP) problem that can be solved efficiently by existing decomposition algorithms. This assumes, however, that the kernel function and all of its parameters are given and that the training vectors can be accessed locally. Neither assumption holds for classification problems with heterogeneous features or with distributed training data. This thesis addresses these problems by developing methods for kernel optimization and distributed learning.

Recent advances in kernel machine algorithms based on convex optimization have made it easier to incorporate information from training samples with little user interaction. As a first contribution, we generalize kernel estimation techniques from binary classification to multi-class classification problems. The kernel optimization problem for the multi-class SVM is formulated as a semidefinite program (SDP), and a decomposition method is proposed to reduce the computational complexity of solving the SDP. As a further generalization, we consider kernel optimization for support vector regression (SVR). The proposed kernel optimization methods are applied to the analysis of retinal ganglion cell neuron signals.

The distributed SVM training algorithm proposed in this thesis is based on a simple idea: exchanging support vectors over a strongly connected network. The properties of the algorithm under various network configurations are analyzed. We also propose a randomized parallel SVM that uses randomized sampling and has a provably fast average convergence rate.

Finally, we discuss maximum likelihood estimation of Gaussian mixture models with known variances. The problem is formulated as a bi-concave maximization problem, and we work out the details of the generalized Benders decomposition for computing its global optimum. Simple numerical results are presented to demonstrate the advantage over the widely used expectation-maximization (EM) algorithm.
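For reference, the QP mentioned in the first paragraph is the standard soft-margin SVM dual, with one multiplier per training vector. The sketch below solves that dual directly with cvxpy on hypothetical toy data; it illustrates the problem being solved, not the thesis's decomposition algorithm, and the RBF width gamma and penalty C are arbitrary choices for the example.

# Minimal sketch: the soft-margin SVM dual as a QP, solved with cvxpy.
# Toy data and parameters (gamma, C) are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

gamma, C = 0.5, 1.0
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)                       # RBF Gram matrix

n = len(y)
G = np.outer(y, y) * K + 1e-8 * np.eye(n)     # small ridge keeps G numerically PSD
alpha = cp.Variable(n)
objective = cp.Maximize(cp.sum(alpha) - 0.5 * cp.quad_form(alpha, G))
constraints = [alpha >= 0, alpha <= C, y @ alpha == 0]
cp.Problem(objective, constraints).solve()
support = np.flatnonzero(alpha.value > 1e-6)  # support vectors: nonzero multipliers

Decomposition methods such as SMO solve this same QP by optimizing a few multipliers at a time, which is what makes large training sets tractable.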
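The second paragraph casts kernel optimization as an SDP. The thesis's multi-class formulation is not reproduced here; as a hedged stand-in in the same spirit (cf. Lanckriet et al.), the sketch below learns a linear combination of base kernels by maximizing kernel-target alignment under an explicit positive-semidefiniteness constraint, which is the kind of SDP involved. The base kernel widths are assumptions for the example.

# Sketch: kernel learning as a small SDP (kernel-target alignment flavor).
# Illustrative stand-in, not the thesis's multi-class SDP formulation.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

def rbf(X, g):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * sq)

Ks = [rbf(X, g) for g in (0.1, 1.0, 10.0)]        # base kernels (widths assumed)
n = len(y)
mu = cp.Variable(len(Ks))                          # weights, allowed to be negative
Kmix = sum(mu[i] * Ks[i] for i in range(len(Ks)))
constraints = [Kmix >> 0, cp.trace(Kmix) == n]     # PSD constraint + trace normalization
align = cp.Maximize(cp.trace(Kmix @ np.outer(y, y)))
cp.Problem(align, constraints).solve(solver=cp.SCS)
print(mu.value)

Because the weights may be negative, the PSD constraint on the combined kernel cannot be dropped, and the problem is a genuine SDP; the proposed decomposition method in the thesis targets the cost of solving such programs at scale.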
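The third paragraph's distributed algorithm rests on exchanging support vectors among nodes. The exact exchange protocol and the convergence analysis are in the thesis; the sketch below only simulates the general idea on an assumed ring of four nodes using scikit-learn's SVC, with each node retraining on its local data plus the support vectors forwarded by its predecessor until the sets stabilize.

# Sketch: support-vector exchange on an assumed 4-node ring (not the
# thesis's exact protocol). Each node forwards its support vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

parts = np.array_split(np.arange(len(X)), 4)       # local datasets per node
sv_sets = [np.array([], dtype=int) for _ in range(4)]

for _ in range(5):                                  # a few passes around the ring
    new_sets = []
    for i, idx in enumerate(parts):
        recv = sv_sets[i - 1]                       # support vectors from predecessor
        pool = np.union1d(idx, recv)                # local data + received vectors
        clf = SVC(kernel="rbf", C=1.0).fit(X[pool], y[pool])
        new_sets.append(pool[clf.support_])         # global indices of local SVs
    if all(np.array_equal(a, b) for a, b in zip(new_sets, sv_sets)):
        break                                       # sets stabilized
    sv_sets = new_sets

A strongly connected topology ensures every node's support vectors eventually influence every other node, which is the property the thesis's analysis builds on.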
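The final paragraph contrasts a global method (generalized Benders decomposition) with EM. The Benders machinery is beyond a short sketch, but the baseline being compared against looks like the following EM iteration for a hypothetical 1-D mixture with a shared, known variance; only the weights and means are updated.

# Sketch: EM for a 1-D Gaussian mixture with shared, known variance.
# This is the baseline EM the abstract compares against, in toy form.
import numpy as np

def em_gmm_known_var(x, k, sigma2, iters=100):
    rng = np.random.default_rng(0)
    mu = rng.choice(x, k)                           # initialize means from the data
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities; the shared normalizing constant cancels
        logp = -0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2
        p = w * np.exp(logp)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for weights and means only
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
    return w, mu

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])
print(em_gmm_known_var(x, k=2, sigma2=1.0))

EM only guarantees a local optimum of the likelihood; the appeal of the bi-concave formulation with generalized Benders decomposition is that it targets the global optimum.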
Keywords/Search Tags: Support vector, Kernel optimization, Algorithm, SVM, Problem, Distributed