
Variational Bayesian Regularization: New Theory and Methods

Posted on: 2020-12-28
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y H Liu
Full Text: PDF
GTID: 1368330590954121
Subject: Computer software and theory
Abstract/Summary:
Various practical inverse problems in science and engineering require reconstructing lost information from observations, e.g., recovering source signals in blind source separation or network weights in neural networks. However, the limited observations available in inverse problems generally make this reconstruction challenging, leading to ill-posedness in blind source separation and overfitting in neural networks. Regularization is an effective and necessary technique for addressing these challenges by incorporating prior knowledge. Among the various regularization methods, Bayesian regularization can naturally incorporate prior knowledge about the lost information through a prior model, thereby effectively regularizing the solution space of the inverse problem. Built on Bayesian theory, this work focuses on three inverse problems: blind source separation, image deblurring, and preventing overfitting in deep learning. Specifically:

1. Frame-based variational Bayesian learning for independent and dependent source separation. To relieve the computational burden of traditional variational Bayesian learning methods for source separation, this work presents a frame-based variational Bayesian learning framework, which splits the source and observed signals into frames and performs variational inference on each frame independently. Since the framework operates on individual frames with a smaller sample size, it significantly reduces the computational burden of previous variational Bayesian learning. In addition, existing variational Bayesian methods fail to handle dependent source separation, so this work proposes a simple and effective prior model for dependent source signals: it zigzag-concatenates the dependent sources into one long series and then employs a Gaussian process to model the temporal structure of that series (a sketch of this prior follows item 2 below). Extensive experiments on toy and real datasets demonstrate the advantages of the proposed method in computational cost and dependent source separation.

2. Image deblurring using super-Gaussian fields. Although many sparsity-based priors in gradient spaces have been successfully applied to blind image deblurring, they are inherently limited to the locally spatial coherence of natural image statistics and fail to model more complicated structures, e.g., non-local similarity. This motivates us to leverage Markov random fields (MRFs) to overcome this limitation. However, traditional MRF models, e.g., Fields of Experts, generally learn their parameters from external images at the finest scale and therefore cannot be directly embedded into the practical coarse-to-fine framework of image deblurring. To address this issue, we present a novel MRF, termed super-Gaussian fields (SGFs), in which super-Gaussian distributions serve as the potentials. Under this definition, the partition function of the SGF, the curse of traditional MRFs, can be theoretically ignored, so the proposed SGF can act as an image-specific and scale-specific model. Combining this model with Bayesian MMSE estimation, we propose a new method for image deblurring. In theory, the proposed method largely avoids the troublesome local minima that widely appear in traditional MAP-estimation-based methods, and it extends traditional variational Bayesian methods from gradient spaces to sparsity-promoting filter spaces. Extensive experiments on both blind and non-blind deblurring demonstrate the theoretical advantages of the proposed method.
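To illustrate the dependent-source prior of contribution 1, the following minimal sketch (our own illustration; the function names and the RBF kernel choice are assumptions, as the abstract does not specify them) zigzag-concatenates S sources of length T into one long series and evaluates a Gaussian-process log-prior on it:

```python
# Minimal sketch of a zigzag-concatenation GP prior for dependent sources.
# Names (zigzag_concatenate, gp_log_prior) and the RBF kernel are assumptions.
import numpy as np

def zigzag_concatenate(sources):
    """Interleave samples: s1[0], s2[0], ..., sS[0], s1[1], s2[1], ..."""
    return np.stack(sources, axis=1).reshape(-1)   # shape: (S*T,)

def rbf_kernel(n, lengthscale=2.0, variance=1.0):
    idx = np.arange(n, dtype=float)[:, None]
    return variance * np.exp(-0.5 * (idx - idx.T) ** 2 / lengthscale**2)

def gp_log_prior(sources, jitter=1e-6):
    z = zigzag_concatenate(sources)                 # one long series
    K = rbf_kernel(len(z)) + jitter * np.eye(len(z))
    _, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, z)
    return -0.5 * (z @ alpha + logdet + len(z) * np.log(2 * np.pi))

# Two dependent sources: in the zigzag series, neighbouring samples come from
# different sources, so the GP's temporal smoothness couples the sources.
T = 50
s1 = np.sin(np.linspace(0, 6, T))
s2 = s1 + 0.1 * np.random.randn(T)
print(gp_log_prior([s1, s2]))
```

To make contribution 2 concrete, one plausible reading of the SGF prior (our assumption; the thesis may define the potentials differently) follows the Fields-of-Experts form the abstract contrasts with, placing super-Gaussian potentials $\phi_k$ on the responses of filters $\mathbf{f}_k$:

$$p(\mathbf{x}) \;\propto\; \prod_{k=1}^{K}\prod_{i} \phi_k\!\big((\mathbf{f}_k \ast \mathbf{x})_i\big), \qquad \phi_k~\text{super-Gaussian},$$

where, per the abstract, the partition function can be theoretically ignored, which is what allows the model to be fit per image and per scale inside the coarse-to-fine deblurring loop.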
3. Variational Bayesian dropout. Variational dropout is a generalization of Gaussian dropout that infers the posterior of the network weights under a log-uniform prior, so that the weights and the dropout rate are learned simultaneously. The log-uniform prior not only explains the regularization capacity of Gaussian dropout in network training but also underpins the inference of this posterior. However, the log-uniform prior is an improper prior (i.e., its integral is infinite), which makes the posterior inference ill-posed and thus restricts the regularization performance of variational dropout (VD). To address this problem, we present a new generalization of Gaussian dropout, termed variational Bayesian dropout (VBD), which instead exploits a hierarchical prior on the network weights and infers a new joint posterior. Specifically, we implement the hierarchical prior as a zero-mean Gaussian distribution whose variance is sampled from a uniform hyperprior. We then incorporate this prior into inferring the joint posterior over the network weights and the variance of the hierarchical prior, with which both network training and dropout-rate estimation can be cast as a joint optimization problem. More importantly, the hierarchical prior is a proper prior, which makes the posterior inference well-posed. In addition, we show that the proposed VBD can be seamlessly applied to network compression. Experiments on both classification and network compression tasks demonstrate the superior performance of the proposed VBD in regularizing network training.
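As a layer-level illustration of contribution 3, the sketch below (ours, not the thesis's code; `VBDLinear`, the initialization constants, and the KL placeholder are assumptions) implements a linear layer with multiplicative Gaussian noise on the weights via the local reparameterization trick, the standard computational backbone shared by VD-style methods; the thesis's VBD would supply the KL term derived from its proper hierarchical prior:

```python
# Minimal sketch of a VD-style noisy linear layer; the class name and the
# kl() placeholder are our assumptions, not the thesis's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VBDLinear(nn.Module):
    """Linear layer with per-weight noise: w ~ N(theta, alpha * theta^2)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.theta = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        # log_alpha parameterizes the noise-to-signal ratio alpha = sigma^2/theta^2;
        # the effective dropout rate grows with alpha.
        self.log_alpha = nn.Parameter(torch.full((out_features, in_features), -3.0))

    def forward(self, x):
        # Local reparameterization: sample pre-activations rather than weights,
        # which lowers gradient variance at the same computational cost.
        mean = F.linear(x, self.theta)
        var = F.linear(x * x, torch.exp(self.log_alpha) * self.theta ** 2)
        return mean + var.clamp_min(1e-8).sqrt() * torch.randn_like(mean)

    def kl(self):
        # Placeholder: the thesis derives this term from its proper hierarchical
        # prior (zero-mean Gaussian with a uniform hyperprior on the variance),
        # which is what makes the posterior inference well-posed.
        return torch.zeros(())

layer = VBDLinear(784, 10)
logits = layer(torch.randn(32, 784))   # stochastic forward pass, shape (32, 10)
```

Because the learned noise level doubles as a per-weight dropout rate, weights whose noise grows unboundedly can be pruned, which is how a layer of this form lends itself to the network-compression use mentioned above.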
Keywords/Search Tags: Variational Bayesian Inference, Blind Source Separation, Image Deblurring, Preventing Overfitting for Deep Learning, Network Compression