
Efficient architectures for MLP-BP artificial neural networks implemented on FPGAs

Posted on: 2008-03-25
Degree: M.Sc
Type: Thesis
University: University of Guelph (Canada)
Candidate: Savich, Antony Walter
Full Text: PDF
GTID: 2448390005969121
Subject: Engineering
Abstract/Summary:
Artificial neural networks, and the Multi-Layer Perceptron trained with the Back-Propagation algorithm (MLP-BP) in particular, have historically suffered from slow training, yet many applications require training in real time. This thesis studies aspects of implementing MLP-BP networks on Field-Programmable Gate Arrays (FPGAs) to accelerate network training. This is accomplished through analysis of numeric representation and its effect on network convergence, hardware performance, and resource consumption. The effects of pipelining the Back-Propagation algorithm are analyzed, and a novel hardware architecture is presented. The new architecture offers extended flexibility in the choice of numeric representation, the degree of system-level parallelism, and network virtualization. Careful architectural design yields high resource efficiency, allowing large network topologies to be placed within a single FPGA. Performance measurements for the pipelined architecture demonstrate at least three orders of magnitude improvement over software implementations.
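The interaction the abstract describes between numeric representation and convergence can be illustrated in software. Below is a minimal sketch (not the thesis's actual hardware design): an MLP-BP network trained on XOR, with weights rounded after every update to a simulated signed fixed-point format, as an FPGA datapath with a configurable fraction width might do. The network shape, learning rate, and bit widths are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def quantize(x, int_bits=4, frac_bits=12):
    """Simulate a signed fixed-point format with int_bits integer bits
    and frac_bits fractional bits: round to the nearest representable
    value, then saturate at the format's range limits."""
    scale = 2.0 ** frac_bits
    limit = 2.0 ** int_bits
    return np.clip(np.round(x * scale) / scale, -limit, limit - 1.0 / scale)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(frac_bits=12, epochs=2000, lr=0.5, seed=0):
    """Train a 2-4-1 MLP on XOR with back-propagation, quantizing the
    weights after each update; returns the final mean squared error."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = quantize(rng.uniform(-1, 1, (2, 4)), frac_bits=frac_bits)
    W2 = quantize(rng.uniform(-1, 1, (4, 1)), frac_bits=frac_bits)
    for _ in range(epochs):
        H = sigmoid(X @ W1)                  # forward pass, hidden layer
        Y = sigmoid(H @ W2)                  # forward pass, output layer
        d2 = (T - Y) * Y * (1 - Y)           # output-layer error term
        d1 = (d2 @ W2.T) * H * (1 - H)       # back-propagated hidden error
        W2 = quantize(W2 + lr * H.T @ d2, frac_bits=frac_bits)
        W1 = quantize(W1 + lr * X.T @ d1, frac_bits=frac_bits)
    return float(np.mean((T - Y) ** 2))
```

Sweeping `frac_bits` in a harness like this is one way to explore the precision/convergence trade-off the abstract refers to: too few fractional bits and weight updates round to zero, stalling training, while wider formats cost more FPGA resources per multiplier.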
Keywords/Search Tags:MLP-BP, Network, Architecture