An artificial neural network (ANN) is an information processing system that simulates the physiological structure and function of the human brain by establishing mathematical models of the brain's neural activities. ANNs process signals in a massively parallel way and offer advantages such as self-organization, adaptive learning and fault tolerance. Neural network implementations can be divided into hardware and software realizations. A software realization cannot perform fast, large-scale parallel computation and has difficulty meeting real-time requirements, whereas a hardware realization overcomes these shortcomings and fully exploits the neural network's strength in large-scale parallel computation. Owing to its simple circuit structure, low power consumption, high processing speed and small chip area, the analog circuit is well suited to implementing complex neural networks. However, compared with digital circuits, which feature mature design methods, good scalability and high precision, analog circuit design must trade off power, speed, gain, bandwidth, signal swing and supply voltage, and this trade-off has long limited progress in analog implementations.

In this paper, the building-block circuits of a feed-forward neural network are designed in a TSMC 0.5 μm CMOS process. Based on these block circuits, a feed-forward artificial neural network is completed, and applied research on the network is carried out. The main research work is summarized as follows:

(1) A new CMOS operational transconductance amplifier (OTA), called LOTA, is proposed, whose transconductance gain can be tuned widely and linearly. The design employs two squaring circuits to bias two basic differential CMOS OTAs. By setting a reasonable reference current, a wide input voltage swing and a wide gain-tuning range are obtained simultaneously, and the transconductance gain can be adjusted continuously and linearly from a negative value to a positive one through the external bias current.

(2) By employing two translinear loops based on class-AB current mirrors, a high-precision current-mode four-quadrant multiplier is presented. The proposed multiplier offers high precision, a wide input range, good linearity and low power consumption, so it is well suited to serve as the synapse circuit of the neural network.

(3) To address the problems of activation function circuits in hardware-realized neural networks, such as complicated structure, non-adjustable parameters and mismatch with the synapse circuit, a bipolar sigmoid activation function and derivative generator is presented, based on a differential transconductor circuit and a translinear loop circuit. The proposed generator is simple and programmable: the amplitude, threshold and gain factor of the generated function can be adjusted through external bias currents or voltages.
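As a rough illustration only, and not the circuit-level design of this work, the following Python sketch models the synapse and activation blocks behaviorally: the four-quadrant multiplier as a weight-times-input product, and the bipolar sigmoid and its derivative as a tanh-shaped function whose amplitude A, gain factor k and threshold theta stand in for the externally tunable bias currents and voltages. The specific formula and parameter names are assumptions, not taken from the thesis.

```python
import numpy as np

def synapse(w, x):
    """Behavioral model of the four-quadrant multiplier synapse:
    the output is proportional to weight * input, for any sign of either."""
    return w * x

def bipolar_sigmoid(v, A=1.0, k=1.0, theta=0.0):
    """Assumed tanh-shaped bipolar sigmoid with adjustable
    amplitude A, gain factor k and threshold theta."""
    return A * np.tanh(k * (v - theta))

def bipolar_sigmoid_deriv(v, A=1.0, k=1.0, theta=0.0):
    """Derivative of the assumed bipolar sigmoid, as produced by the
    derivative generator (useful for gradient-based training)."""
    return A * k * (1.0 - np.tanh(k * (v - theta)) ** 2)

# Example: one neuron summing two weighted inputs.
x = np.array([0.3, -0.7])
w = np.array([0.5, 1.2])
net = np.sum(synapse(w, x))
print(bipolar_sigmoid(net, k=2.0), bipolar_sigmoid_deriv(net, k=2.0))
```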
(4) To overcome the shortcoming of the single-layer perceptron model, which cannot implement the XOR operation or classify linearly non-separable data, a trapezoidal activation function circuit is proposed, composed of two threshold circuits and a subtraction circuit.

(5) Based on the block circuits, a feed-forward artificial neural network is implemented and verified on the XOR problem and on the classification of one-dimensional and multi-dimensional data (a behavioral sketch of the XOR case follows this summary).

(6) The general rules of layout design, protective measures against external disturbances, and the basic layout design flow and tools are introduced. Based on the Virtuoso Layout Editor of the Cadence software, the layout of each block circuit of the feed-forward artificial neural network is completed.
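To make the role of the trapezoidal activation concrete, here is a minimal behavioral sketch; the tanh-shaped threshold functions, threshold values and unit weights are illustrative assumptions, not the transistor-level design. The trapezoid is formed as the difference of two threshold functions, and a single neuron using it separates the XOR patterns, which a single neuron with a monotonic activation cannot do.

```python
import numpy as np

def threshold(v, t, k=20.0):
    """Smooth threshold (assumed tanh-shaped): ~0 below t, ~1 above t."""
    return 0.5 * (1.0 + np.tanh(k * (v - t)))

def trapezoid(v, t_low=0.5, t_high=1.5):
    """Trapezoidal activation built as the difference of two
    threshold circuits: high only when t_low < v < t_high."""
    return threshold(v, t_low) - threshold(v, t_high)

# Single neuron with unit weights: the net input x1 + x2 is 0, 1 or 2
# for the four XOR patterns; the trapezoid is high only around 1,
# so the neuron output equals XOR(x1, x2).
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = trapezoid(x1 + x2)
    print(x1, x2, round(float(y)))
```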