
Research And Implementation Of A Fast Image Super-Resolution Method Based On An MoE Model

Posted on: 2016-10-03
Degree: Master
Type: Thesis
Country: China
Candidate: B Q Wang
Full Text: PDF
GTID: 2308330479491060
Subject: Computer technology
Abstract/Summary:
The image super-resolution (SR) problem aims to obtain a high-resolution output image from a single low-resolution input image, or a set of them, efficiently and effectively and with no additional hardware assistance. In this process, the output image should preserve fine details and be visually pleasing. A good image SR method can relieve the limitations of camera-like devices, which creates great demand for it in many fields such as medical imaging, mobile devices, surveillance, and astronomical imaging. In addition, SR can serve as a pre-processing step for other image processing tasks, giving it significant research and application value in face recognition, visual tracking, image compression, image registration, feature extraction, and many others.

For single-image super-resolution (SISR), single statistical or machine-learning models generally fail to generate visually pleasing output without sacrificing computational efficiency. The recently developed local learning methods provide a remedy by dividing the large input space into a number of regions and then learning a simple local model for each region. However, in these methods the partition is learned separately from the local models, so the learned partition is not tailored to the subsequent local model learning. Given a fixed partition, each local model can be optimized on its own, but the model as a whole still needs a globally consistent solution. With such separate learning, using only a few local models tends to produce a large error, while achieving satisfactory performance requires an excessively large number of local models.

To address this, we propose a mixture-of-experts (MoE) model for jointly learning the partition and the local regressors, which guarantees a low error while reducing the number of local models. In this thesis, our single-layer MoE model consists of two components: a gating (partitioning) network and several local regressors, which handle training-set partitioning and local model learning, respectively. Given the probabilistic interpretation of this model, an iterative expectation-maximization (EM) algorithm is adopted to train the MoE on a set of 5 million LR/HR patch pairs.

Finally, we carry out experiments for multiple demands and multiple scaling factors on 3 common testing datasets. Quantitative and qualitative evaluation results demonstrate that the proposed method preserves fine details while recovering sharp edges from a low-resolution input image, and outperforms the state of the art in terms of both image quality and testing speed.

In addition, based on the open-source libraries Qt and OpenCV, we have systematically implemented the proposed algorithm in C++. The system contains several functional modules: a user-interface control module, an image super-resolution module, and basic image-processing modules. It provides the SR function for various zooming scales and multiple demands at practical testing speed, and also offers a plug-in magnifier mode for zooming comparison.
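For reference, the single-layer MoE described above admits a standard formulation; the notation below (gating parameters $\lambda_k$, expert matrices $W_k$) is ours for illustration, not necessarily the thesis's:

$$\hat{y} \;=\; \sum_{k=1}^{K} g_k(x)\, W_k x, \qquad g_k(x) \;=\; \frac{\exp(\lambda_k^{\top} x)}{\sum_{j=1}^{K} \exp(\lambda_j^{\top} x)},$$

where $x$ is a vectorized LR patch, each expert $W_k x$ is a linear local regressor predicting the corresponding HR patch, and the softmax gate $g_k$ softly partitions the input space. Under a Gaussian noise assumption, EM alternates between computing responsibilities $\gamma_{ik} \propto g_k(x_i)\,\mathcal{N}(y_i \mid W_k x_i, \sigma^2 I)$ (E-step) and refitting each $W_k$ by responsibility-weighted least squares while updating the gate parameters (M-step).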
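To make the gating-plus-local-regressors pipeline concrete, below is a minimal C++/OpenCV test-time sketch using hard (argmax) gating on grayscale input. It is an illustration only: the patch size, expert count, and the random/identity parameters are hypothetical stand-ins for a model that would actually be trained with EM as described above.

```cpp
// Minimal sketch: gated patch regression for SISR (hard expert assignment).
// All parameters (K, PATCH, gate vectors, expert matrices) are placeholders;
// a real system would load values learned by EM from LR/HR patch pairs.
#include <opencv2/opencv.hpp>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) return 1;
    cv::Mat lr = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    if (lr.empty()) return 1;

    const int scale = 3, PATCH = 5, K = 8;
    const int D = PATCH * PATCH;

    // Hypothetical learned parameters: K gating vectors, K linear experts.
    cv::RNG rng(42);
    std::vector<cv::Mat> gate(K), expert(K);
    for (int k = 0; k < K; ++k) {
        gate[k] = cv::Mat(1, D, CV_32F);
        rng.fill(gate[k], cv::RNG::NORMAL, 0.0, 0.1);
        expert[k] = cv::Mat::eye(D, D, CV_32F);  // identity = pass-through expert
    }

    // Coarse bicubic upscale first; the MoE then refines its patches.
    cv::Mat up;
    cv::resize(lr, up, cv::Size(), scale, scale, cv::INTER_CUBIC);
    up.convertTo(up, CV_32F);
    cv::Mat out = up.clone();

    for (int y = 0; y + PATCH <= up.rows; y += PATCH)
        for (int x = 0; x + PATCH <= up.cols; x += PATCH) {
            // Vectorize the patch to a 1xD row.
            cv::Mat p = up(cv::Rect(x, y, PATCH, PATCH)).clone().reshape(1, 1);

            // Gating: pick the highest-scoring expert (argmax of the softmax).
            int best = 0;
            float bestScore = -1e30f;
            for (int k = 0; k < K; ++k) {
                float s = (float)gate[k].dot(p);
                if (s > bestScore) { bestScore = s; best = k; }
            }

            // Apply the chosen local linear regressor to the patch.
            cv::Mat hr = p * expert[best];  // 1xD row
            hr.reshape(1, PATCH).copyTo(out(cv::Rect(x, y, PATCH, PATCH)));
        }

    out.convertTo(out, CV_8U);
    cv::imwrite("sr_out.png", out);
    return 0;
}
```

In a full system, soft gating (blending all K experts by their gate weights) and overlapping patches with averaging would typically give smoother results than the hard, non-overlapping assignment sketched here.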
Keywords/Search Tags: image super-resolution, local learning, mixture of experts, joint learning, expectation maximization