With the development of learning theory in recent years, machine learning algorithms are increasingly being applied in practice. Many machine learning algorithms involve matrix inversion or decomposition, whose time complexity is typically O(n³); when the data set is large, such algorithms become very inefficient, which severely limits the practical application of machine learning. To address this problem, this thesis explores methods for accelerating machine learning algorithms and applies them to specific algorithms.

The Extreme Learning Machine (ELM) originates from the single-hidden-layer feed-forward neural network (SLFN). Unlike a traditional SLFN, an ELM does not learn its hidden-layer parameters from the training data but assigns them randomly (a minimal code sketch is given below), which gives it higher learning efficiency than traditional training algorithms for neural-network architectures. However, because ELM training still involves matrix inversion, it remains limited by the scale of the data. To address this, the thesis proposes speed-up algorithms based on matrix approximation. With appropriate parameters, the proposed algorithms can accelerate ELM substantially without losing much accuracy. Moreover, the same approach also performs well when applied to the Least Squares Support Vector Machine (LS-SVM). Experimental results on benchmark datasets demonstrate the effectiveness and feasibility of the proposed algorithms.

The main contributions are as follows:

1. The thesis analyzes the computational bottlenecks of machine learning algorithms in depth and surveys the existing matrix approximation methods.

2. The thesis proposes a Nyström-based speed-up algorithm for ELM. The algorithm selects landmark points from the dataset, constructs an approximate kernel matrix with the Nyström method, and then trains the ELM by inverting the approximate kernel matrix (see the Nyström sketch below). This reduces the time complexity from O(n³) to O(nm² + m³), where m is the number of landmark points.

3. The thesis also proposes an ELM speed-up algorithm based on randomized approximation. The algorithm constructs an approximate low-rank subspace with a randomized method and trains the ELM by inverting the approximate kernel matrix within that subspace (see the randomized sketch below). This reduces the time complexity from O(n³) to O(kn² + k³), where k is the numerical rank of the subspace.

4. The thesis proposes an LS-SVM speed-up algorithm based on matrix approximation, which effectively reduces the training time of LS-SVM (see the LS-SVM sketch below).

The proposed techniques are not limited to speeding up ELM and LS-SVM; they can be applied to general machine learning algorithms, which gives them considerable practical value.
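For concreteness, the following is a minimal sketch of the basic ELM training step described above, assuming a tanh activation and ridge-regularized output weights; the function names, hidden-layer size, and regularization parameter are illustrative, not the thesis's exact configuration.

```python
import numpy as np

def elm_train(X, T, n_hidden=100, reg=1e-3, seed=0):
    """Basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    # Hidden-layer weights and biases are drawn at random, never trained.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                       # n x n_hidden hidden output
    # Regularized least squares for the output weights:
    # beta = (H^T H + reg*I)^{-1} H^T T -- an n_hidden x n_hidden solve.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Only the small n_hidden × n_hidden system is solved here; it is the kernelized variant of ELM, where an n × n kernel matrix must be inverted, that motivates the approximation methods sketched next.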
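The Nyström-based speed-up of contribution 2 can be sketched as follows, assuming an RBF kernel, uniformly sampled landmark points, and the Woodbury identity to avoid forming the n × n inverse; the names and parameter values are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = (A * A).sum(1)[:, None] + (B * B).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def woodbury_solve(C, W, lam, B):
    """Solve (lam*I + C W^{-1} C^T) x = B using only an m x m factorization.

    Woodbury: (lam*I + C W^{-1} C^T)^{-1} = (I - C (lam*W + C^T C)^{-1} C^T)/lam,
    so the cost is O(n*m^2 + m^3) instead of the O(n^3) direct inverse.
    """
    S = lam * W + C.T @ C
    return (B - C @ np.linalg.solve(S, C.T @ B)) / lam

def nystrom_kelm_train(X, T, m=200, lam=1e-3, seed=0):
    """Kernel ELM trained on the Nystrom approximation K ~= C W^{-1} C^T."""
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(X.shape[0], size=m, replace=False)]  # landmark points
    C = rbf_kernel(X, Z)                                  # n x m cross-kernel
    W = rbf_kernel(Z, Z) + 1e-8 * np.eye(m)               # m x m block, jittered
    alpha = woodbury_solve(C, W, lam, T)                  # ~ (K + lam*I)^{-1} T
    return Z, alpha
```

Here lam plays the role of the usual kernel-ELM regularizer, and predictions use the cross-kernel against the training set, e.g. rbf_kernel(X_test, X) @ alpha. The Woodbury rewrite is where the O(n³) to O(nm² + m³) reduction comes from: the only matrix ever factorized is m × m.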
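Contribution 3 can be illustrated in the same spirit with a Halko-style randomized range finder; the thesis's exact randomized method may differ, so this is a sketch under that assumption.

```python
import numpy as np

def randomized_kelm_train(K, T, k=100, lam=1e-3, seed=0):
    """Kernel ELM on a rank-k randomized approximation of the kernel matrix K.

    A Gaussian sketch yields an orthonormal basis Q for the dominant range
    of K; everything else happens in that k-dimensional subspace, so the
    cost is O(k*n^2) for the two sketching products plus O(k^3) for the
    small eigendecomposition.
    """
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k))            # random test matrix
    Q, _ = np.linalg.qr(K @ Omega)                 # basis for approx range of K
    B = Q.T @ (K @ Q)                              # k x k projected kernel
    evals, V = np.linalg.eigh((B + B.T) / 2.0)     # symmetrize for stability
    evals = np.maximum(evals, 0.0)                 # clip tiny negative leakage
    U = Q @ V                                      # K ~= U @ diag(evals) @ U.T
    # (K~ + lam*I)^{-1} T = (T - U diag(e/(e+lam)) U^T T) / lam
    T2 = T.reshape(len(T), -1)
    alpha = (T2 - U @ ((evals / (evals + lam))[:, None] * (U.T @ T2))) / lam
    return alpha
```

Because the approximate kernel is diagonalized in the subspace, the regularized inverse reduces to an elementwise shrinkage of the k approximate eigenvalues, which is what keeps the small-matrix work at O(k³).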
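The abstract does not detail the LS-SVM variant of contribution 4, so the following is one plausible instantiation, reusing the rbf_kernel and woodbury_solve helpers from the Nyström sketch above: the LS-SVM KKT system is solved with the approximated kernel, and the bias term is eliminated via the Schur complement.

```python
def lssvm_nystrom_train(X, y, m=200, gamma=10.0, seed=0):
    """LS-SVM trained with the Nystrom-approximated kernel matrix.

    The LS-SVM KKT system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    is solved by eliminating b: with A = K + I/gamma,
        b = (1^T A^{-1} y) / (1^T A^{-1} 1),   alpha = A^{-1} (y - b * 1),
    and every application of A^{-1} goes through woodbury_solve, so the
    dominant cost drops from O(n^3) to O(n*m^2 + m^3).
    """
    import numpy as np
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Z = X[rng.choice(n, size=m, replace=False)]     # landmark points
    C = rbf_kernel(X, Z)
    W = rbf_kernel(Z, Z) + 1e-8 * np.eye(m)
    ones = np.ones((n, 1))
    Ainv_y = woodbury_solve(C, W, 1.0 / gamma, y.reshape(-1, 1))
    Ainv_1 = woodbury_solve(C, W, 1.0 / gamma, ones)
    b = (ones.T @ Ainv_y).item() / (ones.T @ Ainv_1).item()
    alpha = Ainv_y - b * Ainv_1
    return Z, alpha, b
```

Predictions then take the usual LS-SVM form, e.g. rbf_kernel(X_test, X) @ alpha + b; as in the ELM case, the speed-up comes entirely from never factorizing the full n × n matrix.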