
Research And Application Of Neural Networks With Limited Precision Weights

Posted on: 2012-01-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J Bao
Full Text: PDF
GTID: 1118330332975734
Subject: Control theory and control engineering
Abstract/Summary:
In order to resolve the contradiction between the computing cost of traditional neural networks with continuous weights and the tight real-time and memory constraints of embedded systems, this dissertation studies a simplification and optimization method for neural networks with limited-precision weights, covering the weight range, the network structure, and the training method, so that such networks can solve practical problems in embedded systems. The outline of the dissertation is as follows.

First, by analyzing the problem-solving capacity of neural networks with limited-precision weights, the weight range and the number of weights required for a given class of problems can be calculated, ensuring that a solution exists. Using weights restricted to a limited range of fixed-point or integer values opens the road to efficient neural network implementation: a limited weight range translates into reduced storage requirements, and the computation can be implemented far more efficiently than with floating-point arithmetic. However, if the weights are drastically restricted in both range and precision, the existence of a solution can no longer be taken for granted, so the chosen range must be calculated and verified to ensure the network performs well on each application problem.

Second, improvements to the network structure, training method, and activation function are investigated so that the network meets real-time and convergence requirements. An improved genetic algorithm is used to train neural networks with fixed-point or integer weights, and a quantized nonlinear activation function is used during both training and inference to speed up the network.

Following this approach, a neural network with limited-precision integer weights is optimized using the improved genetic algorithm and applied to image enhancement, since traditional image enhancement algorithms cannot process different kinds of images adaptively and automatically.
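The storage and efficiency gain from limited-range integer weights can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the weight values, the 8-bit width, and the single shared scale factor are all assumptions.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Map floating-point weights onto signed integers of the given bit
    width, returning the integer weights and the shared scale factor."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8-bit weights
    scale = max(float(np.max(np.abs(w))) / qmax, 1e-12)  # guard all-zero case
    w_int = np.round(w / scale).astype(np.int32)
    return w_int, scale

# Hypothetical weights and input, for illustration only.
w = np.array([0.73, -0.41, 0.05, -0.99])
x = np.array([1.0, 2.0, 3.0, 4.0])
w_int, scale = quantize_weights(w)

# The dot product runs in integer arithmetic; a single float multiply
# at the end recovers an approximation of the real-valued result.
y = (w_int @ x) * scale
```

The point of the sketch is that the inner loop touches only small integers, which is what makes the approach attractive on embedded hardware without a floating-point unit.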
A new neural-network algorithm for real-time self-adaptive image sharpening is presented. Applied to image enhancement, it achieves a better enhancement effect, and tests on several software and hardware platforms show that the algorithm runs fast, with significantly improved real-time performance.

At present, several calibration methods are in common use for touch screens, such as two-point, three-point, and five-point calibration, all of which are based on proportional (pro-rata) mapping. However, the irregular nonlinear relationship between touch-screen and LCD coordinates leads to large errors when proportional mapping is used. To provide embedded systems with a more accurate touch-screen-to-LCD calibration method than the traditional ones, a new calibration method using a neural network with fixed-point weights is presented to capture this nonlinearity. The network, whose fixed-point weight precision is adjustable, is trained by an improved genetic algorithm, and the use of discrete, linear activation functions for all hidden and output neurons greatly reduces implementation complexity. Experimental results show that this network achieves higher precision than the traditional calibration methods, and its computing efficiency is greatly improved when deployed on embedded hardware.

Finally, based on an analysis of existing digit recognition methods, a new digit recognition method using a neural network with limited-precision weights is proposed. The Quantize Back-propagation Step-by-Step (QBPSS) algorithm is used to train the network.
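For reference, the pro-rata calibration that the neural-network method above replaces is a simple proportional mapping from raw touch readings to pixel coordinates. A one-axis sketch with hypothetical ADC values follows; the nonlinear residual that this linear map cannot capture is exactly what the fixed-point-weight network is trained to learn.

```python
def pro_rata_calibrate(raw, raw_lo, raw_hi, px_lo, px_hi):
    """Classic proportional (pro-rata) touch-screen calibration along
    one axis: linearly map a raw ADC reading to a pixel coordinate."""
    return px_lo + (raw - raw_lo) * (px_hi - px_lo) / (raw_hi - raw_lo)

# Hypothetical controller: raw readings 200..3900 span pixels 0..320.
px = pro_rata_calibrate(2050, 200, 3900, 0, 320)   # -> 160.0
```

Two-, three-, and five-point variants differ only in how many reference touches are used to estimate the endpoints; all remain linear, which is why they cannot model an irregular screen response.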
In addition, a new quantization strategy is presented in which the nonlinear activation function is quantized into a look-up table according to the weight precision, greatly improving computational efficiency in embedded systems. The optimized network is evaluated against conventional neural networks with floating-point weights on digit recognition in ARM embedded systems, and the results show that the optimized network runs 11 times faster than the conventional ones.
Keywords/Search Tags: neural network, limited precision, weight, activation function, embedded systems