
Research On Decoding Algorithm Based On Deep Neural Network

Posted on: 2022-09-06
Degree: Master
Type: Thesis
Country: China
Candidate: H J Sun
Full Text: PDF
GTID: 2518306350469764
Subject: Electronic Science and Technology

Abstract/Summary:
The idea of applying machine learning to the theory of error-correcting codes appeared long ago. Since machine learning and error-correction coding use similar statistical methods to solve similar problems, machine learning algorithms can be applied to error-correcting codes. In recent years, deep learning methods have demonstrated remarkable performance on a wide range of tasks: they exceed human-level accuracy on some object-detection tasks and achieve state-of-the-art results in image recognition and speech processing. Applying deep learning to LDPC codes to improve the error-correction performance of the belief propagation decoder has therefore shown great promise. Nachmani et al. showed that assigning trainable weights to the edges of the Tanner graph that represents a given linear code yields a "soft" Tanner graph, and that these weights can be trained with deep learning techniques. The decoding performance of such neural decoders exceeds that of traditional belief propagation decoding because the learned weights mitigate the adverse effect of cycles in the Tanner graph. Neural network decoders also require fewer iterations than traditional decoding algorithms, and in some cases the error rate of the neural decoder approaches that of the maximum-likelihood (ML) decoder. However, the method proposed by Nachmani et al. requires many multiplications and hyperbolic-function evaluations, incurs a large memory overhead, and places a large number of parameters in every hidden layer of the network, so these decoders cannot be implemented efficiently in real-time hardware.

To reduce the complexity of neural network-based decoding and to improve the performance of the neural network decoder, this thesis proposes a low-complexity offset min-sum (OMS) neural network decoding algorithm and a neural network decoding algorithm based on adaptive parameters. The work of this thesis is as follows:

1. A low-complexity offset min-sum (OMS) neural network decoding algorithm is proposed. By analyzing the network structure and computation rules of the neural network decoder, this thesis derives a simplified form of the neural sum-product decoding algorithm. The algorithm replaces the multiplications and hyperbolic-function operations, which are unfavorable for hardware implementation, with additions that are easy to realize in hardware, greatly reducing the complexity of the decoding algorithm (a sketch of the check-node update is given after this abstract). Parameters that have no noticeable effect on decoding performance are removed from the network to shorten training. Experimental results show that the low-complexity OMS neural network decoding algorithm saves about 60% of the network parameters and reduces training time by 55%.

2. A neural network decoding algorithm based on adaptive parameters is proposed. By analyzing the structure of the decoding network, we found that the decoding results of the intermediate hidden layers carry valid information for the decoding network and should not be discarded. Adaptive weights are used to construct a new loss function over these intermediate outputs, improving the decoding performance of the network (a sketch is given after this abstract). Experiments show that the decoding performance of the adaptive-parameter neural network decoding algorithm is improved by 0.3 dB relative to the previous algorithm.
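The two contributions above can be illustrated with short sketches. First, a minimal NumPy sketch of an offset min-sum check-node update, the kind of addition-and-comparison-only rule that replaces the products and hyperbolic functions of the sum-product update. The names (oms_check_update, v2c, beta) are illustrative assumptions rather than the thesis's actual implementation; in the neural OMS decoder the offset would be a parameter learned by gradient descent.

```python
import numpy as np

def oms_check_update(v2c, beta):
    """Offset min-sum update for one check node (illustrative sketch).

    v2c  : 1-D array of incoming variable-to-check LLR messages
    beta : offset; in a neural OMS decoder this would be a learned parameter
    Returns one outgoing check-to-variable message per connected edge.
    """
    c2v = np.zeros(len(v2c))
    for i in range(len(v2c)):
        others = np.delete(v2c, i)                     # exclude the target edge
        sign = np.prod(np.sign(others))                # sign term of min-sum
        mag = max(np.min(np.abs(others)) - beta, 0.0)  # offset-corrected magnitude
        c2v[i] = sign * mag                            # no tanh/arctanh evaluations
    return c2v
```

Only comparisons, additions, and sign operations appear, which is the source of the hardware-friendliness claimed above. Second, a minimal TensorFlow sketch of a loss that combines the soft outputs of intermediate decoding iterations with adaptive weights; again the names (adaptive_multi_loss, iter_outputs, loss_weights) and the softmax weighting are assumptions for illustration and may differ from the scheme actually used in the thesis.

```python
import tensorflow as tf

def adaptive_multi_loss(iter_outputs, targets, loss_weights):
    """Adaptively weighted sum of per-iteration cross-entropy losses (sketch).

    iter_outputs : list of soft bit estimates, one per decoding iteration
    targets      : transmitted codeword bits (0/1)
    loss_weights : one trainable logit per iteration, normalized below
    """
    bce = tf.keras.losses.BinaryCrossentropy()
    per_iter = tf.stack([bce(targets, out) for out in iter_outputs])
    weights = tf.nn.softmax(tf.stack(loss_weights))  # adaptive weights summing to 1
    return tf.reduce_sum(weights * per_iter)
```

Minimizing such a loss with gradient descent sends a training signal to every unrolled iteration rather than only to the final output, which matches the observation that the intermediate hidden-layer results should not be discarded.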
Keywords/Search Tags: Deep learning, channel decoding, TensorFlow, loss function, gradient descent