
An FPGA-Based Hardware Accelerator for K-Nearest Neighbor Classification for Machine Learning

Posted on: 2018-03-22
Degree: M.S
Type: Thesis
University: University of Colorado Colorado Springs
Candidate: Mohsin, Mokhles Aamel
Full Text: PDF
GTID: 2448390002952026
Subject: Electrical engineering
Abstract/Summary:
Machine learning has become a cornerstone of information technology. Machine learning is a branch of artificial intelligence (AI) that enables a program (or a computer) to learn without being explicitly programmed and without human intervention. Machine learning algorithms can be categorized into supervised learning (classification) and unsupervised learning (clustering).

Among many classification algorithms, the K-nearest neighbor (K-NN) classifier is one of the most commonly used machine learning algorithms. It is typically applied to pattern recognition, data mining, image recognition, text categorization, and predictive analysis.

Many machine learning algorithms, including K-NN classification, are compute- and data-intensive, requiring significant processing power. For instance, K-NN involves many complex and iterative computations, including computing the distance measure between the training dataset and the testing dataset, element by element, and simultaneously sorting these measurements.

In this research work, our main objective is to investigate and provide an efficient hardware architecture to accelerate the K-NN algorithm while addressing the associated requirements and constraints. Our hardware architecture is designed and developed on an FPGA-based (Field Programmable Gate Array based) development platform. We perform experiments to evaluate our hardware architecture in terms of speed-performance, space, and accuracy. Our hardware architecture is also evaluated against its software counterpart running on the same development platform. Experiments are performed using three different benchmark datasets with varying data sizes.

We introduce unique techniques, including pre-fetching and burst transfers, to reduce the external memory access latency that is common in embedded platforms. We also address various issues with the hardware support for the K-NN algorithm proposed in the existing literature.

This investigation demonstrates that machine learning algorithms can indeed benefit from hardware support. Our proposed hardware architecture is generic and parameterized, and our hardware design is scalable to support varying data sizes. Our experimental results and analysis illustrate that our proposed hardware is significantly faster; for instance, our hardware architecture executed up to 127 times faster than its software counterpart.
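For context, the computation being accelerated can be sketched in software as a distance pass over the training set followed by a selection of the K smallest distances and a majority vote. The sketch below is a minimal illustrative K-NN classifier, not the thesis's hardware or software implementation; the squared-Euclidean distance metric, the toy data, and all sizes (DIM, N_TRAIN, K, N_CLASS) are assumptions introduced here purely for illustration.

/*
 * Minimal software K-NN sketch (illustrative only, not the thesis design).
 * Assumes dense float feature vectors, integer class labels,
 * squared Euclidean distance, and selection of the K smallest
 * distances followed by a majority vote.
 */
#include <stdio.h>
#include <float.h>

#define DIM      4   /* features per sample (assumed)   */
#define N_TRAIN  6   /* training samples (assumed)      */
#define K        3   /* neighbors considered (assumed)  */
#define N_CLASS  2   /* distinct class labels (assumed) */

/* Squared Euclidean distance between two feature vectors. */
static float sq_dist(const float *a, const float *b)
{
    float d = 0.0f;
    for (int i = 0; i < DIM; i++) {
        float diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

/* Classify one test vector against the training set. */
static int knn_classify(const float train[][DIM], const int labels[],
                        const float *test)
{
    float dist[N_TRAIN];
    int   used[N_TRAIN]  = {0};
    int   votes[N_CLASS] = {0};

    /* Distance of the test vector to every training vector. */
    for (int i = 0; i < N_TRAIN; i++)
        dist[i] = sq_dist(train[i], test);

    /* Pick the K smallest distances and tally their labels. */
    for (int k = 0; k < K; k++) {
        int best = -1;
        float best_d = FLT_MAX;
        for (int i = 0; i < N_TRAIN; i++) {
            if (!used[i] && dist[i] < best_d) {
                best_d = dist[i];
                best = i;
            }
        }
        used[best] = 1;
        votes[labels[best]]++;
    }

    /* Majority vote among the K nearest neighbors. */
    int winner = 0;
    for (int c = 1; c < N_CLASS; c++)
        if (votes[c] > votes[winner])
            winner = c;
    return winner;
}

int main(void)
{
    /* Toy data purely for illustration. */
    float train[N_TRAIN][DIM] = {
        {0, 0, 0, 0}, {0, 1, 0, 1}, {1, 0, 1, 0},
        {5, 5, 5, 5}, {5, 6, 5, 6}, {6, 5, 6, 5}
    };
    int labels[N_TRAIN] = {0, 0, 0, 1, 1, 1};
    float test[DIM] = {5, 5, 6, 5};

    printf("predicted class: %d\n", knn_classify(train, labels, test));
    return 0;
}

The nested distance and selection loops above are exactly the element-by-element distance computation and sorting that the abstract identifies as the bottleneck, which is why they are natural candidates for parallel FPGA datapaths, pre-fetching, and burst memory transfers.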
Keywords/Search Tags: Hardware, Machine, Classification, K-NN