| A neural network is an information-processing system built from a large number of interconnected neurons. It offers outstanding nonlinear mapping ability, good fault tolerance, and self-learning capability, and it provides a possible solution to the nonlinearity, uncertainty, and other complex issues that external disturbances cause in practical control systems. At present, neural networks are widely used in pattern recognition, batch information processing, artificial intelligence, and other fields. Hardware implementations of neural networks face extensive market demand in areas such as image processing, time-series prediction, and pattern recognition, and thus carry important practical significance as well as high scientific and commercial value. In current research on neural network implementation, the conventional computer still occupies an important position: most work remains at the level of software implementation and simulation, and the serial computing approach of general-purpose computers severely constrains the parallel processing performance of neural networks. It is therefore necessary to find a more effective implementation route. With the development and application of large-scale integrated circuits, the conditions for hardware implementation of neural networks have matured, and the FPGA (Field Programmable Gate Array), with its hardware programmability, dynamic reconfigurability, and massive parallelism, effectively breaks the restrictions of serial implementation and provides a new way of thinking about realizing neural networks. Based on the GVM neural network model, this paper explores a hardware implementation method on an FPGA. First, the core modules and the overall architecture of the FPGA implementation are designed following the top-down principle, down to a level that is easy to implement and simulate. This covers the implementation of a single neuron, fixed-point data representation, data bit-width processing, and the multiplication and
accumulation (MAC) module design. For the activation function, after analyzing its properties and the advantages and disadvantages of the candidate methods, a scheme combining a lookup table with piecewise functions is proposed, designed, and realized. Second, this paper takes function approximation as an example to verify the hardware implementation of the neural network as a whole. With the help of prior knowledge, an optimal weight matrix is trained in MATLAB, ensuring the stability and quality of the network structure; the hardware implementation scheme is then verified on the FPGA. Finally, the performance of the hardware implementation is analyzed and assessed. A Xilinx Zynq-series development board serves as the hardware platform for analyzing resource consumption and achievable accuracy. Based on this analysis, the hardware scheme is optimized, further improving the operating frequency and calculation precision. Experimental results show that the network system is reliably stable while achieving high accuracy. In the initial test phase, the operating frequency is around 100 MHz, at which function fitting on the FPGA largely meets the real-time requirements of the neural network. The error of function fitting on the FPGA is small, on the order of 10^-3 to 10^-2, so the implementation satisfies the accuracy requirements essential to the network's performance while consuming few hardware resources. After optimization, the operating frequency and calculation precision are both improved at the expense of slightly more hardware resource consumption. The optimized design runs at about 120 MHz, which lets the network system meet real-time requirements even better; the accuracy is also further improved, reaching the
order of 10^-4 to 10^-3. In this situation, function fitting on the FPGA achieves an even better effect. In addition, the entire hardware design applies to other types of neural networks, giving it good versatility. |
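The fixed-point multiply-accumulate datapath mentioned in the abstract can be sketched in software as follows. This is a minimal illustration only; the Q4.12 format (16-bit signed, 12 fractional bits) is an assumed bit width for the sketch, not necessarily the one chosen in the thesis:

```python
# Sketch of a fixed-point multiply-accumulate (MAC) step, as in a neuron
# datapath. Q4.12 (12 fractional bits) is an illustrative assumption.

FRAC_BITS = 12
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a real number to a Q4.12 integer."""
    return int(round(x * SCALE))

def fixed_mac(acc, w, x):
    """Accumulate w*x in fixed point; the product of two Q4.12 values has
    24 fractional bits, so shift right by FRAC_BITS to renormalize."""
    return acc + ((w * x) >> FRAC_BITS)

# Dot product of a weight vector and an input vector, as a single neuron
# would compute before applying the activation function.
weights = [0.5, -1.25, 0.75]
inputs = [1.0, 0.5, -2.0]

acc = 0
for w, x in zip(weights, inputs):
    acc = fixed_mac(acc, to_fixed(w), to_fixed(x))

result = acc / SCALE
print(result)  # -1.625, matching the floating-point dot product exactly here
```

In hardware the shift is free (a wire selection), which is why fixed-point MAC units are far cheaper on an FPGA than floating-point ones.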
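The "lookup table plus piecewise function" activation scheme can likewise be sketched in software, here for the sigmoid. The table size, input range, and choice of linear segments are assumptions for illustration; the thesis's actual table parameters and segment functions may differ:

```python
# Sketch of a lookup-table + piecewise-linear activation approximation
# for the sigmoid. Table size/range are illustrative assumptions.
import math

LUT_MIN, LUT_MAX, LUT_SIZE = -8.0, 8.0, 64
STEP = (LUT_MAX - LUT_MIN) / LUT_SIZE

# Precomputed sigmoid samples at segment endpoints (in hardware this
# table would typically live in block RAM).
lut = [1.0 / (1.0 + math.exp(-(LUT_MIN + i * STEP))) for i in range(LUT_SIZE + 1)]

def sigmoid_lut(x):
    """Approximate sigmoid: saturate outside the table range, otherwise
    interpolate linearly between the two nearest table entries."""
    if x <= LUT_MIN:
        return 0.0
    if x >= LUT_MAX:
        return 1.0
    idx = int((x - LUT_MIN) / STEP)
    frac = (x - LUT_MIN) / STEP - idx
    return lut[idx] + frac * (lut[idx + 1] - lut[idx])

# Maximum error versus the exact sigmoid over a dense sweep of [-8, 8]
err = max(abs(sigmoid_lut(v) - 1.0 / (1.0 + math.exp(-v)))
          for v in [i * 0.01 - 8.0 for i in range(1601)])
print(err)  # below 1e-3 for this table size
```

Doubling the table size roughly quarters the interpolation error (it scales with the square of the segment width), which is one way such a design can trade block RAM for precision.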