
GLDPC Decoder On FPGA And Performance Evaluation Over Binary Erasure Channel

Posted on: 2019-04-05
Degree: Master
Type: Thesis
Country: China
Candidate: HABANABASHAKA JEAN DAMOUR
Full Text: PDF
GTID: 2428330545452154
Subject: Electronics and Communication Engineering
Abstract/Summary:
People's lives have changed tremendously with the rapid growth of mobile and wireless communication, and channel coding lies at the heart of digital communication and data storage. Traditional block codes and convolutional codes are commonly used in digital communications, but to approach Shannon's channel capacity the length of a linear block code must be increased, which drives decoder complexity so high that the decoder may become physically unrealizable. Powerful LDPC codes approach the Shannon limit with feasible decoding complexity. As wireless communication has grown phenomenally in both scope and application, the need to transmit and receive information reliably over a noisy channel has become an essential factor determining performance. Several major developments in error-correcting codes have been made over the years, and various coding techniques have been introduced, all aiming at reliable communication. Error correction is the ability to reconstruct the original transmitted information: an error-correcting code is an algorithm for expressing a sequence of bits such that any introduced errors can be detected and corrected from the remaining bits.

In the recent past, low-density parity-check (LDPC) codes have gained attention and are considered among the most important error-correcting codes for the coming years in telecommunication and magnetic storage, because they are efficient channel codes that allow transmission errors to be corrected. An LDPC code is constructed from a sparse bipartite graph. LDPC codes are capacity-approaching: practical constructions exist whose noise threshold can be set very close to the theoretical maximum, the Shannon limit for a memoryless channel. The noise threshold defines an upper bound on the channel noise up to which the probability of lost information can be made as small as desired. Using iterative belief-propagation techniques, LDPC codes can be decoded in time linear in their block length. Because their parity-check matrix contains only a few 1s compared with the number of 0s, they offer performance very close to capacity on many channels together with linear-time decoding algorithms, and they are well suited to implementations that make heavy use of parallelism.

Generalized LDPC (GLDPC) codes generalize LDPC codes and show good performance thanks to their large minimum distance and low decoding complexity. In this work a GLDPC code is used at the receiver to detect and correct the bits erased from the received codeword. Each component-node constraint is represented by the parity-check matrix H of a Hamming code, which, by virtue of its minimum distance, can detect at least two erasures and fill at least one. Detection and correction capability are best understood through the weight and distance properties of codewords: the minimum distance determines how many errors in a codeword can be corrected, and the (Hamming) weight of a binary codeword is simply the number of ones it contains. The common assumption for GLDPC decoding is that each Hamming component code has a processor able to detect at least two erasures and correct at least one erased bit; how many erasures can actually be detected and corrected depends on the minimum distance of the Hamming code in use. In short, the large minimum distance of GLDPC codes gives them good potential for detecting and filling many bits corrupted across the BEC.

In a GLDPC Tanner graph the component nodes receive all bits from their connected variable nodes (VNs) concurrently for decoding, so without a controlling technique at the receiver serious collisions would occur among them. To overcome this problem, successive interference cancellation (SIC) is applied. SIC matters because many signals must reach the receiver simultaneously while the decoder decodes the information bits in parallel; SIC recovers all transmitted bits so that the receiver can receive and decode them all. GLDPC codes are flexible and of low complexity, since their parity-check matrices contain few 1s compared with 0s, and their check nodes and variable nodes are more general than single parity checks and repetitions, respectively. Their major drawback is a high encoding complexity of O(n²).

The binary erasure channel, introduced by Elias in 1955, is a communication channel model used in coding theory and information theory. The transmitter sends a stream of 1s and 0s into the channel, and the receiver either receives each bit correctly or receives the symbol e, indicating that the bit was erased during transmission. The receiver knows that a bit was sent but does not know whether an erased bit was a 1 or a 0, so it must recover exactly the bits erased in the channel during decoding; the GLDPC decoder plays the crucial role of recovering these bits erased by channel impairment. The motivation of this work centers on channel coding: the information bits are encoded before being sent into the channel so that the sender's messages arrive intact, and the receiver must learn which messages were sent by the sender. GLDPC codes are therefore used to decode the information bits; they show good ability to decode bits that have passed through a memoryless channel such as the BEC or BSC.

This work investigates the performance of generalized low-density parity-check (GLDPC) codes over the binary erasure channel (BEC). Three GLDPC decoder structures on FPGA are proposed, namely serial, parallel, and hybrid, and their hardware consumption and decoding speed are analysed; the UEBR performance of GLDPC codes over the BEC is then evaluated to demonstrate the reliable transmission of the GLDPC decoder. Only one of the structures, the parallel one, was used for the UEBR and UEWR performance analysis in MATLAB, since all structures produce identical outputs and differ only in the time each needs to complete decoding. A small LDPC code was used to develop the three decoder structures, while a large code was considered to demonstrate reliable transmission over the BEC. The proposed GLDPC codes outperform current low-density parity-check (LDPC) coding schemes in bit error rate and decoding complexity. Since all digital messages suffer from noise during transmission, GLDPC codes, as a generalization of standard LDPC codes, provide high-performance correction of the erroneous bits. GLDPC coding here is limited to simple linear block component codes such as binary BCH, Hamming, and Reed-Muller codes; in this work Hamming codes are taken as the component codes. Thanks to their considerable minimum distance, the resulting GLDPC codes show excellent performance close to the Shannon limit.

The GLDPC code is constructed from a standard LDPC code in MATLAB: each 1 in a check node's row of the parity-check matrix is replaced by a component code whose length n equals the degree of the check node, and each 0 is replaced by an all-zero matrix of the same size as the component code. Three structures were then designed around a small code, and a large code was simulated over the BEC in MATLAB. The results show that the parallel structure suits cases where high throughput is required, even though it can occupy a much larger area and therefore cost more; the hybrid structure trades off hardware usage against throughput; and the serial structure suits cases where high throughput is not of interest, since it reduces area consumption and hence cost. Error-correction capability is frequently evaluated with a bit-error-rate (BER) versus erasure-probability plot, where the codes show efficient performance. The code investigated in the MATLAB simulation is a (10000, 5000) GLDPC code, which showed superb performance close to the Shannon limit: at a UEBR of 10⁻⁶ it operates approximately 0.12 away from the channel capacity (Shannon limit) of 0.5 for the BEC, and at a UEWR of 10⁻⁴ it is likewise approximately 0.12 away from the limit. This confirms the implication of the Shannon limit that information can be transmitted reliably over the BEC with a sufficiently long code of rate R whenever the erasure probability is less than 1−R.

For the hardware study, a (6,3)-regular (96, 48) LDPC code, a frequency of 2 GHz, and 10 iterations were considered; this small code demonstrates the approach just as a large GLDPC code would. The parallel decoder is 48 times faster than the hybrid structure and approximately 291 times faster than the serial decoder. Because the number of processors each structure requires differs greatly, so does the latency: in this case the parallel structure needs one processor per node, 144 processors in total, the serial structure needs only two processors, and the hybrid structure needs 32 processors. Consequently, the serial decoder requires 16 times fewer global wires than the hybrid decoder and 48 times fewer than the parallel decoder. In general, when computations run on a general-purpose processor, a trade-off must be made between computation speed and resource consumption such as memory usage, silicon area, and energy. This was borne out for the parallel decoder structure: increasing computation speed by exposing more processor parallelism inflates resource usage, demanding a much larger area and far more wiring, and hence higher cost.

There are three primary notions of speed depending on the context of the problem: throughput, latency, and timing. In the context of processing data in an FPGA, throughput refers to the amount of data processed per unit of time, commonly measured in bits per second. Latency refers to the time between data input and processed data output, typically measured in time or clock cycles. Timing refers to the logic delays between sequential elements; when a design does not "meet timing," the delay of its critical path, that is, the largest delay between flip-flops (composed of combinatorial delay, clk-to-out delay, routing delay, setup time, clock skew, and so on), is greater than the target clock period, and the standard metrics are clock period and frequency. A high-throughput design is concerned with the steady-state data rate but less with the time any specific piece of data takes to propagate through the design (its latency), whereas a low-latency design passes data from input to output as quickly as possible by minimizing the intermediate processing delays; oftentimes a low-latency design requires parallelism.

Accordingly, for the case considered in this work, the latency of the serial structure is 288 times that of the parallel structure and 48 times that of the hybrid structure. Latency affects throughput because it is inversely proportional to decoding speed: as latency grows, throughput falls, and as latency shrinks, throughput grows. Among the parameters used to compute throughput, latency is described in detail because it is the only one that differs between the decoder structures; all other parameters in the expression are identical for every structure. In summary, the parallel decoder structure is the right choice when high throughput is required; otherwise the serial decoder structure is preferred; and when a trade-off between hardware consumption and throughput is of interest, the hybrid decoder structure should be considered.
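The component-node operation described above, a Hamming code filling erased positions via its parity checks, can be sketched as follows. This is an illustrative sketch only: the (7,4) Hamming parity-check matrix, the function names, and the GF(2) solver are assumptions of the example, not taken from the thesis's MATLAB or FPGA implementation.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (minimum distance 3). The
# thesis uses Hamming codes as component codes; this particular matrix and
# code length are assumptions of the sketch.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def solve_gf2(A, b):
    """Solve A @ x = b over GF(2) by Gauss-Jordan elimination."""
    A, b = A.copy(), b.copy()
    m, n = A.shape
    x = np.zeros(n, dtype=np.uint8)
    pivots, row = [], 0
    for col in range(n):
        hits = [r for r in range(row, m) if A[r, col]]
        if not hits:
            continue
        A[[row, hits[0]]] = A[[hits[0], row]]   # bring a pivot into place
        b[[row, hits[0]]] = b[[hits[0], row]]
        for r in range(m):
            if r != row and A[r, col]:          # clear the column (XOR = GF(2) add)
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x

def fill_erasures(received, H):
    """Fill erased positions (marked None) so that H @ codeword = 0 (mod 2).

    With minimum distance 3, up to 2 erasures can be filled, provided the
    corresponding columns of H are linearly independent.
    """
    erased = [i for i, bit in enumerate(received) if bit is None]
    known = np.array([0 if bit is None else bit for bit in received],
                     dtype=np.uint8)
    syndrome = (H @ known) % 2             # contribution of the known bits
    x = solve_gf2(H[:, erased], syndrome)  # erased bits must cancel it
    out = [int(v) for v in known]
    for i, bit in zip(erased, x):
        out[i] = int(bit)
    return out
```

For instance, erasing positions 0 and 2 of the codeword 1110000 leaves two unknowns that the three parity checks determine uniquely, and `fill_erasures` restores the original word.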
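The BEC model itself is simple enough to state in a few lines. The sketch below, with hypothetical function names, shows the channel behaviour described above (each bit is delivered intact or replaced by the erasure symbol e, never flipped) and the capacity C = 1 − ε that underlies the "erasure probability less than 1 − R" condition.

```python
import random

def bec_transmit(bits, eps, rng):
    """Binary erasure channel: each bit is erased independently with
    probability eps and delivered as the symbol 'e'; otherwise it arrives
    intact. The receiver never mistakes a 0 for a 1 or vice versa."""
    return ['e' if rng.random() < eps else bit for bit in bits]

def bec_capacity(eps):
    """BEC capacity in bits per channel use: C = 1 - eps. Reliable
    transmission at rate R is possible iff R < 1 - eps."""
    return 1.0 - eps

rng = random.Random(0)  # fixed seed for reproducibility
sent = [1, 0, 1, 1, 0, 0, 1, 0]
received = bec_transmit(sent, eps=0.5, rng=rng)
# Every received symbol is either the transmitted bit or the erasure mark 'e';
# at eps = 0.5 a rate-1/2 code such as the (10000, 5000) GLDPC code sits
# exactly at the capacity limit C = 0.5.
```

The decoder's task over this channel is exactly the one the abstract describes: identify which positions carry e and reconstruct the bits that were sent there.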
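The inverse relation between latency and throughput can be made concrete with a small calculation. The 1 : 48 : 288 latency ratios below follow the relative figures reported above; the base latency of 10 cycles per codeword is a purely hypothetical number chosen for illustration, not a measurement from the thesis.

```python
# Illustrative sketch of throughput vs. latency for the three decoder
# structures. Only the ratios are taken from the reported results; the
# absolute cycle counts are assumptions.
N_BITS = 96  # length of the (6,3)-regular (96, 48) code considered in the work

latency_cycles = {
    "parallel": 10,        # one processor per node: lowest latency
    "hybrid":   10 * 48,
    "serial":   10 * 288,  # only two processors shared by all nodes
}

def throughput_bits_per_cycle(n_bits, latency):
    """Throughput is inversely proportional to latency: halving the latency
    per codeword doubles the bits delivered per clock cycle."""
    return n_bits / latency

for name, lat in latency_cycles.items():
    print(f"{name:8s}: latency {lat:5d} cycles, "
          f"throughput {throughput_bits_per_cycle(N_BITS, lat):.4f} bits/cycle")
```

Multiplying bits per cycle by the clock frequency gives bits per second, which is why the structure with the most processors (and hence the most area and wiring) also delivers the highest steady-state data rate.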
Keywords/Search Tags: GLDPC code, FPGA, BEC, SPA decoding, BER