
Built-In Self Training of Hardware-Based Neural Network

Posted on: 2018-01-31    Degree: M.S    Type: Thesis
University: University of Cincinnati    Candidate: Anderson, Thomas    Full Text: PDF
GTID: 2448390002499026    Subject: Computer Engineering
Abstract/Summary:
Artificial neural networks and deep learning are topics of increasing interest in computing. This has spurred investigation into dedicated hardware, such as accelerators, to speed up the training and inference processes. This work proposes a new hardware architecture, Built-In Self Training (BISTr), that both trains a network and performs inference. The architecture combines principles from the Built-In Self Testing (BIST) VLSI paradigm with the backpropagation learning algorithm. The primary focus of the work is to present the BISTr architecture and verify its efficacy.

Development of the architecture began with an analysis of the backpropagation algorithm and the derivation of new equations. Once the derivations were complete, the hardware was designed and all of the functional components were tested in VHDL from the bottom level to the top. An automatic synthesis tool was created to generate the code used and tested in the experimental phase. The application tested in the experiments was function approximation. The new architecture trained successfully for a couple of the test cases; the remaining test cases failed, but this was due to the data representation used in the VHDL code rather than the hardware design itself. The area overhead of the added hardware and the speed performance were analyzed briefly. The results showed that (1) the area overhead was significant (around 3 times the area without the additional hardware) and (2) the theoretical speed performance of the design is very good.

The BISTr architecture was shown to work and has good theoretical speed performance. However, the architecture presented in this work cannot be implemented for large neural networks because of the large area overhead.
Further work would be required to expand upon and improve the idea presented in this thesis: (1) development of an alternative design that is more practical to implement, (2) more rigorous testing of area and speed, (3) implementation of other training methods and functionality, and (4) additions to the synthesis tool to increase its capability.
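The thesis does not include its VHDL or derivations in this abstract, but the learning rule the BISTr hardware implements is standard backpropagation applied to function approximation. The following is a minimal software sketch of that process, not the thesis's design: the network size, learning rate, and target function (a sine half-wave) are illustrative assumptions, and a pure-Python one-hidden-layer network stands in for the hardware datapath.

```python
import math
import random

random.seed(0)

H = 8     # hidden units (assumption, not from the thesis)
LR = 0.1  # learning rate (assumption)

# One hidden layer with tanh activation and a linear output unit.
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Forward pass: hidden activations and scalar output."""
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    y = sum(w2[i] * h[i] for i in range(H)) + b2
    return h, y

def train_step(x, target):
    """One backpropagation update for loss L = 0.5 * (y - target)^2."""
    global b2
    h, y = forward(x)
    err = y - target  # dL/dy
    for i in range(H):
        # Backpropagate through the linear output and the tanh unit;
        # compute the hidden gradient before overwriting w2[i].
        dh = err * w2[i] * (1.0 - h[i] * h[i])
        w2[i] -= LR * err * h[i]
        w1[i] -= LR * dh * x
        b1[i] -= LR * dh
    b2 -= LR * err
    return 0.5 * err * err

# Function approximation: fit sin(pi * x) on [0, 1] (illustrative target).
data = [(x / 20.0, math.sin(math.pi * x / 20.0)) for x in range(21)]
for epoch in range(2000):
    loss = sum(train_step(x, t) for x, t in data)

print(f"final mean loss: {loss / len(data):.6f}")
```

In the BISTr architecture this per-weight multiply-accumulate work is what the added self-training hardware performs in place, which is also the source of the roughly 3x area overhead reported above.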
Keywords/Search Tags: Work, Training, Built-in self, Neural, Hardware, Speed