With the advent of fully automatic driving and intelligent medical care, demand for low-latency, highly reliable communication networks is growing. Low-density parity-check (LDPC) codes are among the key channel codes of fifth-generation mobile communication, the cable data service interface standard (DOCSIS), and other fields, and improving their decoding efficiency and accuracy has become a research hotspot. At present, traditional LDPC decoding, represented by the belief propagation algorithm based on the log-likelihood ratio (LLR), generally suffers from long decoding time and high complexity, and its performance is vulnerable to noise in practical communication. In recent years, besides theoretical improvements to the traditional decoding algorithms, the major breakthroughs of deep learning in many fields have led to its introduction into decoding as a further way to improve performance. This study proposes a simplified LDPC decoding algorithm and, at the same time, uses deep learning to account for the influence of channel noise on decoding accuracy, improving the decoding rate on the one hand and effectively reducing the bit error rate on the other.

First, to address the long decoding time and high complexity of current LLR-based decoding, a simplified LLR decoding algorithm is proposed. By decomposing the quadrature amplitude modulation (QAM) constellation into independent pulse amplitude modulation (PAM) constellations and partitioning the PAM decision domain, the QAM decision domain can be confined to a small range, which simplifies the decoding computation. Simulations under different modulation modes and code rates show that this method reduces decoding complexity and cuts decoding time by 93.6% compared with the LLR algorithm; however, its bit error rate is high when the information bits are sparse.

Second, to address the high bit error rate of the simplified LLR algorithm on sparse information bits, an error-compensated LLR decoding algorithm is proposed. The decision domain is divided into sawtooth regions, and the average error of each region is calculated and stored; after simplified LLR decoding, the stored error term is added back into the simplified LLR result as compensation. Simulations show that this method reduces decoding time by 90.8% compared with the LLR algorithm while achieving a comparable bit error rate, reducing the dependence on information-bit density for both high-rate and low-rate codewords.

Finally, to address the loss of decoding accuracy caused by noise interference in the channel, a channel decoding model based on an iterative residual neural network is proposed. Combined with the error-compensated LLR algorithm, the codeword output by that algorithm serves as the input to the residual network; this is defined as one round of processing, and iterative rounds are compared against the results of the error-compensated LLR decoding to obtain a more accurate estimate of the channel condition and effectively suppress the noise. Simulation analysis shows that this decoder achieves a 0.25 dB signal-to-noise-ratio gain over the error-compensated LLR algorithm.
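The constellation-decomposition step behind the simplified LLR algorithm can be sketched as follows. This is a minimal illustration rather than the thesis's exact decision-domain rules: it computes per-bit max-log LLRs for Gray-mapped 16-QAM by treating the real and imaginary parts of the received symbol as two independent 4-PAM problems. The amplitude levels {-3, -1, 1, 3}, the Gray labeling, and the noise variance are assumptions for the sketch.

```python
import numpy as np

PAM = np.array([-3.0, -1.0, 1.0, 3.0])             # assumed 4-PAM amplitudes
BITS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # assumed Gray labels (b1, b0)

def maxlog_llr_pam(y, sigma2):
    """Max-log LLRs, log P(b=0)/P(b=1), for one received PAM amplitude y."""
    d2 = (y - PAM) ** 2  # squared distance from y to each candidate amplitude
    # min distance over symbols labeled 1 minus min over symbols labeled 0
    return [(d2[BITS[:, k] == 1].min() - d2[BITS[:, k] == 0].min()) / (2 * sigma2)
            for k in range(2)]

def maxlog_llr_16qam(z, sigma2):
    """Gray-mapped square 16-QAM factors into two independent 4-PAM
    sub-problems, one per quadrature branch, so each bit's decision
    domain is confined to a single real axis."""
    return maxlog_llr_pam(z.real, sigma2) + maxlog_llr_pam(z.imag, sigma2)
```

For example, `maxlog_llr_16qam(3 + 1j, 1.0)` needs only two 4-point searches instead of one 16-point search, which is the complexity saving that the decomposition exploits.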
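The error-compensation idea — partition the observation axis into regions, store each region's average LLR error offline, then add the stored term back after simplified decoding — can be sketched for one bit of a 4-PAM constellation. The region count (16), the level normalization, the noise variance, and the training-sample distribution are all assumptions for illustration; the thesis's sawtooth partition may differ.

```python
import numpy as np

PAM = np.array([-3.0, -1.0, 1.0, 3.0])             # assumed 4-PAM amplitudes
BITS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # assumed Gray labels (b1, b0)

def exact_llr(y, s2, k):
    """Exact bit-k LLR via log-sum-exp over the two label subsets."""
    e = -(y - PAM) ** 2 / (2 * s2)
    return (np.logaddexp.reduce(e[BITS[:, k] == 0])
            - np.logaddexp.reduce(e[BITS[:, k] == 1]))

def maxlog_llr(y, s2, k):
    """Simplified (max-log) bit-k LLR: nearest point per label subset."""
    d2 = (y - PAM) ** 2
    return (d2[BITS[:, k] == 1].min() - d2[BITS[:, k] == 0].min()) / (2 * s2)

# Offline stage: average error of the simplified LLR inside each region.
edges = np.linspace(-4, 4, 17)                     # 16 regions (assumed)
ys = np.random.default_rng(0).uniform(-4, 4, 20000)
err = np.array([exact_llr(y, 1.0, 0) - maxlog_llr(y, 1.0, 0) for y in ys])
bins = np.digitize(ys, edges) - 1
comp = np.array([err[bins == b].mean() for b in range(16)])

# Online stage: simplified LLR plus the stored compensation term.
def compensated_llr(y, s2=1.0, k=0):
    b = min(max(int(np.digitize(y, edges)) - 1, 0), 15)
    return maxlog_llr(y, s2, k) + comp[b]
```

The lookup replaces the exponentials of the exact LLR with one table read, which is how the compensated algorithm keeps most of the simplified algorithm's speed while pulling its output back toward the exact value.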