
Research On Key Techniques Of Post-processing In Quantum Key Distribution

Posted on: 2017-03-13
Degree: Doctor
Type: Dissertation
Country: China
Candidate: D Le
Full Text: PDF
GTID: 1318330536981062
Subject: Computer Science and Technology
Abstract/Summary:
Cryptography is important for protecting the security of information, but existing cryptographic techniques are usually only computationally secure. The one-time pad (OTP) is the only cryptographic algorithm proved to be unconditionally secure, yet it is difficult to distribute unconditionally secure keys for OTP. Based on quantum mechanics, quantum key distribution (QKD) solves the key distribution problem and makes it possible to use OTP in practical applications. QKD consists of a quantum part and a classical post-processing part. In the quantum part, quantum states are used to distribute original keys that are only partially secure and partially correlated. In the classical post-processing part, the original keys go through sifting, error reconciliation and privacy amplification, and are finally converted into unconditionally secure keys.

QKD aims to provide unconditionally secure keys for the communicating parties, so its top priority is to increase the net secure key rate, i.e., the number of unconditionally secure key bits delivered to the communicating parties per second. As a key part of QKD, post-processing should naturally serve this priority. In different QKD systems, the original keys fed into the classical post-processing part differ because of factors such as communication distance and QKD protocol. Hence, an urgent problem for the classical post-processing part is how to optimize its modules so as to maximize the net secure key rate for given original keys. The main research works and contributions of this dissertation are as follows.

(1) Aiming to maximize the net secure key rate, we analyse the impact of each post-processing module on the net secure key rate and present a performance optimization model of QKD post-processing. Based on this model, we derive evaluation indicators for each post-processing module. Using the proposed indicators, we evaluate several representative existing error reconciliation algorithms, which have been studied widely since the first QKD system appeared. The following three research works are carried out according to the proposed performance optimization model.

(2) Aiming at the problems that the sifting module in high-repetition-rate QKD systems must process a large amount of data and consumes many authentication keys, we propose an MRZLFL-code-based sifting algorithm with a high compression ratio. The method takes into account the characteristics of the communicated data and the processing-speed and storage pressures on the sifting module, and achieves compression performance close to the Shannon limit. The MRZLFL code first employs a modified zero-run-length code to convert the binary source into an n-ary source, and then encodes the messages of the n-ary source with a fixed-length code. To compensate for the compression loss of the fixed-length code, we find the parameters that yield the best compression performance. Theoretical analysis and experimental results show that the proposed algorithm performs close to the Shannon limit. To verify the impact of compression performance on the consumption of authentication keys, we apply the algorithm to a practical QKD system; it reduces the amount of authentication keys by more than 26% and 15% at communication distances of 1 km and 25 km, respectively.
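The exact MRZLFL construction is not reproduced here; the Python sketch below only illustrates the general idea described above (a zero-run-length stage followed by a fixed-length stage) for a sparse binary sifting message. The parameter max_run, the function names and the example data are illustrative assumptions, not the dissertation's algorithm.

def zero_run_length_symbols(bits, max_run):
    """Convert a binary sequence into run-length symbols in {0, ..., max_run}.
    A symbol k < max_run means "k zeros followed by a 1"; the symbol max_run
    means "max_run zeros, run continues"."""
    symbols, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
            if run == max_run:      # cap the run so the symbol alphabet stays bounded
                symbols.append(max_run)
                run = 0
        else:
            symbols.append(run)     # a run of zeros terminated by a 1
            run = 0
    symbols.append(run)             # flush trailing zeros (a decoder also needs the total length)
    return symbols

def fixed_length_encode(symbols, max_run):
    """Encode each run-length symbol with a fixed number of bits."""
    width = max_run.bit_length()    # bits needed to represent symbols 0..max_run
    return ''.join(format(s, '0{}b'.format(width)) for s in symbols)

# Example: a sparse sifting message (mostly zeros) compresses well.
bits = [0] * 50 + [1] + [0] * 30 + [1] + [0] * 20
encoded = fixed_length_encode(zero_run_length_symbols(bits, max_run=63), 63)
print(len(bits), '->', len(encoded), 'bits')   # 102 -> 18 bits

In the dissertation the parameters of the fixed-length stage are optimized to approach the Shannon limit; the fixed max_run above is only a placeholder for that optimization.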
(3) Aiming at the requirement of high-speed error reconciliation in QKD systems with a high sifted-key rate, we study polar-code-based error reconciliation. We first analyse the security and performance of the possible configurations of polar codes in QKD; the analysis shows that the latency of the two best configurations depends only on the polar decoder. We then present three optimizations that speed up successive cancellation (SC) decoders: the first two are suited to hardware implementations and the third to software implementations. To provide theoretical support for these optimizations, we analyse the dependencies among the likelihood ratios in the SC decoder. The three optimizations are as follows. (i) Aiming at the problem that existing scheduling algorithms for SC decoders are still primitive, we propose a highly efficient scheduling algorithm; compared with existing methods, its critical-path delay and space complexity are both constant, which saves storage resources and increases the clock frequency. (ii) Aiming at the high latency of the SC decoder, we propose a pre-computation look-ahead SC decoder and fully analyse its cost; theoretical analysis shows that the latency of the SC decoder can be greatly reduced at low cost. (iii) Aiming at the problem that existing software implementations of the SC decoder are all recursive, we propose a non-recursive SC decoder; experimental results show that it is 2.2 to 3.3 times as fast as the recursive SC decoder.

(4) Aiming at the requirement of high reconciliation efficiency in QKD systems with a low sifted-key rate, we propose an error reconciliation algorithm with high reconciliation efficiency. Among existing error reconciliation algorithms, the Cascade family usually achieves the best reconciliation efficiency. We optimize the Cascade family in two respects. First, we prove that the number of errors corrected in each group by the track-back error-correction technique is always even, from which it can be inferred that the two parties' parities of the last group of the i-th pass (i ≥ 2) are always identical and therefore need not be disclosed. Second, we find that, during track-back error correction, the error in a group of size 2 can be corrected directly without running the BINARY procedure. These two optimizations reduce the number of disclosed bits and improve the reconciliation efficiency.
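As background for the two Cascade optimizations above, the following Python sketch shows the baseline BINARY bisection that Cascade-type protocols use to locate a single error in a block whose parity differs between the two parties. It does not implement the dissertation's track-back refinements; the function names and the alice_parity interface are illustrative assumptions.

def parity(bits, lo, hi):
    """Parity of bits[lo:hi]."""
    return sum(bits[lo:hi]) % 2

def binary_locate(bob, alice_parity, lo, hi, disclosed):
    """Locate the single error in bob[lo:hi], given that the block parity
    differs from Alice's. alice_parity(lo, hi) models the parity Alice sends
    over the authenticated channel; disclosed[0] counts the leaked bits."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        disclosed[0] += 1           # one parity bit is disclosed per bisection step
        if parity(bob, lo, mid) != alice_parity(lo, mid):
            hi = mid                # the mismatch is in the left half
        else:
            lo = mid                # otherwise the error is in the right half
    return lo

# Toy example: Alice holds all zeros, Bob has a single flipped bit.
alice = [0] * 8
bob = [0] * 8
bob[5] = 1
leak = [0]
pos = binary_locate(bob, lambda a, b: parity(alice, a, b), 0, len(bob), leak)
print('error at position', pos, 'after disclosing', leak[0], 'parity bits')

Each bisection step discloses one parity bit over the public channel; this leakage is precisely what the two optimizations above reduce, by exploiting the always-identical parities of the last group of each pass and by skipping BINARY for size-2 groups during track-back.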
Keywords/Search Tags: quantum key distribution, post-processing, sifting, error reconciliation, polar code, Cascade protocol