
Research Of Lossless Compression Algorithm Based On Context Tree Model

Posted on: 2021-04-01
Degree: Master
Type: Thesis
Country: China
Candidate: J Wang
Full Text: PDF
GTID: 2518306197455514
Subject: Signal and Information Processing
Abstract/Summary:
Lossless image compression is widely used in fields that demand high image detail. Context modeling, a very effective method for estimating source probability models, is often used to model image sequences. However, a high-order context model contains many probability distributions, while the length of the source sequence available for training the statistical model is very limited, so not all of the distributions in the context model can be fully trained; this causes the "context dilution" problem. Compared with a context model for binary sources, a context model for M-ary sources generates even more probability distributions, and therefore suffers from a more serious "context dilution" problem.

First, to alleviate the "context dilution" problem in the lossless compression of M-ary sources, this paper uses a binary context tree model to model the M-ary source sequence: the M-ary context tree model is transformed into a binary context tree model so that the statistical information of the source can be analyzed in more detail. Based on the transformed binary context tree model, this paper then studies the selection criterion for context tree nodes and proposes to use the increment of the description length as the criterion for merging two nodes. Because the increment of the description length is closely related to the similarity of the probability distributions, using it as the node merging criterion is better suited to encoding M-ary sources than using the description length itself, and significant progress is made in mitigating "context dilution".

Then, to address the problem that zero-frequency symbols cannot be encoded during arithmetic coding, this paper introduces an escape symbol. Compared with the traditional method of initializing the conditional probability distributions in the context model to uniform distributions, escape processing not only avoids using a uniform distribution during coding, but also yields, through statistics, a probability distribution close to the true statistical distribution of the source.

Finally, this paper studies the impact of the count values in the probability model on compression performance. The research shows that regularly updating the probability models is very beneficial to compression performance.

Experimental results show that a 4th-order context model achieves better coding performance than context models of other orders; that using the increment of the description length as the node selection criterion outperforms using the description length itself; that careful division of the context symbols' information effectively avoids discarding too much important context information during node selection; and that introducing an escape symbol to handle zero-frequency symbols in arithmetic coding effectively improves compression performance. In addition, this paper analyzes experimentally the impact of the statistical count values in the context model on compression performance.
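The transformation of an M-ary source into a form a binary context tree can model can be pictured as a bitwise decomposition of each symbol. The following is a minimal sketch of that idea; the function names (`to_bits`, `bit_width`) and the MSB-first ordering are illustrative choices, not taken from the thesis.

```python
# Sketch: decomposing an M-ary symbol stream into bits so that a
# binary context tree can model it. Names and bit ordering here are
# illustrative assumptions, not the thesis's exact scheme.

def bit_width(m: int) -> int:
    """Number of bits needed per symbol for an M-ary alphabet."""
    return max(1, (m - 1).bit_length())

def to_bits(symbol: int, width: int) -> list[int]:
    """MSB-first binary expansion of one M-ary symbol."""
    return [(symbol >> (width - 1 - i)) & 1 for i in range(width)]

sequence = [3, 0, 2, 1, 3]           # M = 4 source symbols
w = bit_width(4)                     # 2 bits per symbol
bits = [b for s in sequence for b in to_bits(s, w)]
print(bits)                          # [1, 1, 0, 0, 1, 0, 0, 1, 1, 1]
```

Each bit can then be coded conditioned on its binary context, which is what lets the binary tree expose finer-grained statistics than the original M-ary tree.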
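The merging criterion based on the increment of the description length can be sketched as follows. This is an illustrative instance only: it measures code length with the Krichevsky-Trofimov (KT) estimator for binary counts, which is a standard choice for binary context trees but is an assumption here, not necessarily the estimator used in the thesis.

```python
# Sketch: increment of description length as a node-merging criterion.
# Code length is the KT estimator for binary counts (an assumption);
# a negative increment means merging the two nodes shortens the
# description, a positive one means they should stay separate.

from math import lgamma, log, pi

def kt_code_length(n0: int, n1: int) -> float:
    """KT code length in bits for a binary node with counts (n0, n1)."""
    ln_p = lgamma(n0 + 0.5) + lgamma(n1 + 0.5) - log(pi) - lgamma(n0 + n1 + 1)
    return -ln_p / log(2)

def merge_increment(a: tuple, b: tuple) -> float:
    """Delta L = L(merged node) - (L(node a) + L(node b))."""
    merged = (a[0] + b[0], a[1] + b[1])
    return kt_code_length(*merged) - (kt_code_length(*a) + kt_code_length(*b))

# Similar distributions: increment is negative, so merge.
print(merge_increment((10, 2), (9, 3)))
# Very different distributions: increment is positive, so keep separate.
print(merge_increment((10, 2), (2, 10)))
```

This makes the link to distribution similarity concrete: two nodes with near-identical empirical distributions cost less to describe jointly than separately, while dissimilar nodes do not.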
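Escape handling for the zero-frequency problem can be sketched in the PPM style. The escape probability rule below (one extra count shared by all unseen symbols, as in PPM method A) is an illustrative assumption; the thesis's exact escape estimate may differ.

```python
# Sketch: PPM-style escape symbol for zero-frequency symbols in
# arithmetic coding. The escape probability rule (one extra count,
# PPM method A) is an illustrative assumption.

from collections import Counter

def probabilities_with_escape(counts: Counter, alphabet_size: int):
    """Per-symbol probabilities plus an escape probability.

    Seen symbols share n/(n+1) in proportion to their counts; the
    escape symbol gets 1/(n+1), which on escape is split uniformly
    over the unseen symbols.
    """
    n = sum(counts.values())
    p = {s: c / (n + 1) for s, c in counts.items()}
    p_escape = 1 / (n + 1)
    unseen = alphabet_size - len(counts)
    return p, p_escape, unseen

counts = Counter({"a": 6, "b": 3})   # "c" and "d" not yet seen (M = 4)
p, p_esc, unseen = probabilities_with_escape(counts, 4)
print(p, p_esc, unseen)
# Coding an unseen symbol costs -log2(p_esc) - log2(1/unseen) bits,
# instead of being impossible (probability zero).
```

Unlike initializing every conditional distribution to uniform, the seen symbols' probabilities here track the actual counts, which is the advantage the abstract attributes to escape processing.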
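One common way to "regularly update" the count values in a probability model is periodic rescaling, sketched below. The halving rule and the threshold are illustrative assumptions, not the thesis's specific update scheme.

```python
# Sketch: periodic rescaling of model counts, one common form of
# regular probability-model updating. Threshold and halving rule
# are illustrative assumptions.

def rescale(counts: dict, limit: int = 1024) -> dict:
    """Halve all counts once their total exceeds `limit`.

    Every count stays at least 1 so no symbol's probability drops
    to zero, and halving weights recent statistics more heavily.
    """
    if sum(counts.values()) <= limit:
        return counts
    return {s: max(1, c // 2) for s, c in counts.items()}

print(rescale({"a": 900, "b": 200}))   # {'a': 450, 'b': 100}
print(rescale({"a": 10, "b": 5}))      # unchanged: {'a': 10, 'b': 5}
```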
Keywords/Search Tags: Context modeling, Lossless compression, Escape symbol, Entropy coding, Increment of the description length