Channel coding and decoding is one of the most critical technologies in modern communications, as it balances the efficiency and reliability of a communication link. The Low-Density Parity-Check (LDPC) code is regarded as one of the best error-correcting codes because of its excellent performance and is widely used across communication systems. In recent years, high-performance LDPC coding and decoding has become a research focus; the core problems are the design of high-performance decoding algorithms and their high-speed realization. It is difficult for traditional research methods to achieve both goals simultaneously, so this thesis investigates the use of deep neural networks to improve LDPC decoding. The main work is divided into two parts: improvement of the decoding algorithm, and high-speed implementation of the improved algorithm.

(1) For decoding algorithm improvement, we propose deep-neural-network-based algorithms that reduce complexity and improve error correction performance. First, since the traditional simplified BP decoding algorithms degrade error correction performance, we exploit the curve-fitting ability of deep neural networks and propose a linear-fitting decoding algorithm, which lowers computational complexity without sacrificing error correction performance. Second, since traditional methods struggle to improve error correction performance further, we construct a deep neural network for LDPC decoding and, exploiting its ability to optimize parameters through model training, propose an auxiliary decoding algorithm that attaches correction parameters to the iterative messages, accelerating decoding convergence and improving error correction performance. In addition, because the decoding network becomes large and the algorithm is hard to improve for long codes, we propose an auxiliary decoding algorithm with parameter sharing, together with a model training method that obtains the best error correction performance.
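The correction-parameter idea can be pictured with a short sketch. The Python/NumPy fragment below is a minimal illustration, not the decoder proposed in the thesis: it performs one min-sum check-node update in which every check-to-variable message is scaled by a single shared correction factor alpha. The parity-check matrix, channel LLRs, and the value of alpha are placeholders; in the thesis the correction parameters are attached to the iterative messages and learned by model training of the unfolded decoding network, with parameter sharing used to keep the network small, and the exact form of the correction may differ from the multiplicative scaling shown here.

```python
import numpy as np

# Toy parity-check matrix and channel LLRs (illustrative values only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
llr = np.array([-1.2, 0.8, -0.3, 1.5, -0.7, 0.9])
alpha = 0.8            # shared correction parameter; learned in the thesis, fixed here

m, n = H.shape
v2c = H * llr          # variable-to-check messages, initialised with the channel LLRs

# One check-node update: each outgoing message is the sign product and minimum
# magnitude of the *other* incoming messages, scaled by the correction factor.
c2v = np.zeros_like(v2c, dtype=float)
for i in range(m):
    idx = np.nonzero(H[i])[0]
    for j in idx:
        others = v2c[i, idx[idx != j]]
        c2v[i, j] = alpha * np.prod(np.sign(others)) * np.min(np.abs(others))

posterior = llr + c2v.sum(axis=0)        # a-posteriori LLRs after one iteration
hard_decision = (posterior < 0).astype(int)
print(hard_decision)
```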
(2) For high-speed implementation of the improved decoding algorithms, we propose two implementation schemes based on heterogeneous computing, motivated by the low speed of traditional CPU implementations. We first analyze the implementation complexity of the traditional and deep-neural-network-based decoders and compare the computational complexity of the different decoding algorithms, which demonstrates the feasibility of high-speed implementation and validates the ideas behind the two improved algorithms. We then present the heterogeneous-computing implementation scheme, analyze its feasibility, discuss the data interaction between the different devices, and examine in detail which platform suits each type of decoding algorithm, according to the characteristics of the algorithm and the capabilities of the device. For the improved BP decoding algorithm, we propose a high-speed implementation architecture based on CPU-GPU heterogeneous computing; its decoding speed reaches 48.6 Mbit/s, three orders of magnitude faster than the traditional CPU implementation. For the improved min-sum decoding algorithm, we propose a high-speed implementation architecture based on CPU-FPGA heterogeneous computing and describe the design of the decoder module; its decoding speed reaches 174.007 Mbit/s.
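The division of labour behind these heterogeneous schemes can be illustrated with a short sketch. The Python fragment below, assuming CuPy as the GPU array library, is only an illustration of the general pattern (the CPU assembles and batches the channel LLRs and handles data transfer, while the accelerator runs the data-parallel message updates over the whole batch); the matrix, LLRs, batch size, and correction factor are placeholders, and the CPU-GPU and CPU-FPGA architectures implemented in the thesis are considerably more elaborate.

```python
import numpy as np
import cupy as cp

# Toy parity-check matrix (illustrative), many codewords decoded in parallel.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=np.float32)
batch, alpha = 1024, 0.8

llr_host = np.random.randn(batch, H.shape[1]).astype(np.float32)  # CPU: batch of channel LLRs

H_d = cp.asarray(H)                            # CPU -> GPU transfer
llr_d = cp.asarray(llr_host)

# GPU: one scaled min-sum check-node update, vectorised over the whole batch.
v2c = llr_d[:, None, :] * H_d                  # (batch, checks, bits) variable-to-check messages
mag = cp.where(H_d > 0, cp.abs(v2c), np.inf)   # magnitudes, non-edges masked out
srt = cp.sort(mag, axis=2)
min1, min2 = srt[:, :, :1], srt[:, :, 1:2]     # smallest and second-smallest magnitude per check
sign_all = cp.prod(cp.where(H_d > 0, cp.sign(v2c), 1.0), axis=2, keepdims=True)
ext_min = cp.where(mag == min1, min2, min1)    # minimum over the *other* edges of each check
ext_sign = sign_all * cp.sign(v2c)             # sign product over the other edges
c2v = cp.where(H_d > 0, alpha * ext_sign * ext_min, 0.0)

posterior = llr_d + c2v.sum(axis=1)            # a-posteriori LLRs after one iteration
bits = (posterior < 0).astype(np.int8)
bits_host = cp.asnumpy(bits)                   # GPU -> CPU transfer of hard decisions
print(bits_host.shape)
```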