Evolutionary Multi-objective Sparse Reconstruction And Ensemble Learning

Posted on: 2016-12-25
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Li
GTID: 1318330464462881
Subject: Circuits and Systems

Abstract/Summary:
Artificial intelligence (AI) has seen many changes and advances during the last six decades. Evolutionary computation and classification problems are two very important and highly specialised topics within the AI research field, and this thesis focuses on the study of these two topics. Initially, we investigate evolutionary multi-objective algorithms for constrained optimization problems and decomposition methods for large-scale problems. Next, we show how these evolutionary multi-objective approaches can be applied to the compressed sensing problem, which has been a very popular research topic in many areas in recent decades. Finally, we show how compressed sensing methods can be applied to provide very efficient new ways of training ensembles of multiple classifiers. To summarise, this thesis includes the following original contributions:

· A novel selection evolutionary strategy for constrained optimization problems (COPs) is proposed.
Since a large number of COPs arise in applied science and engineering problems, their study has both theoretical and practical significance. The constraints of different types partition the search space into feasible and infeasible regions, and because of the existence of infeasible regions it can be very difficult to handle COPs in a way that ensures efficient, optimal and constraint-satisfying convergence. We address this problem by employing multi-objective techniques to trade off searching in feasible and infeasible regions. A self-adaptive selection method is introduced to exploit both informative infeasible solutions and feasible solutions, from the perspective of combining feasibility with multi-objective problem (MOP) techniques. Since the global optimal solution of a COP is a feasible non-dominated solution, both non-dominated solutions with low constraint violation and feasible solutions with low objective values are beneficial to the evolutionary process; thus, the exploration and exploitation of both kinds of solutions are preferred during the selection procedure (a minimal sketch of this bi-objective selection follows the contribution list). Several theorems and properties are given to prove this assertion. Furthermore, the performance of our method is evaluated using 22 well-known benchmark functions. Experimental results show that the proposed method outperforms state-of-the-art algorithms in terms of the speed of finding feasible solutions and the stability of converging to global optimal solutions.

· A decomposition method for large-scale optimization (LSO) based on mixed second-order partial derivatives is proposed.
We encounter the "curse of dimensionality" when solving problems with a large number of dimensions. The cooperative co-evolution framework has strengthened the scalability of evolutionary algorithms: because it decomposes the original high-dimensional problem into several sub-problems and then optimizes each of them individually, the decomposition approach used within the framework is the dominating factor in the performance of these algorithms on large-scale function optimization problems. This chapter provides a theoretical analysis of the interaction between variables. Several theorems and one lemma are given to investigate the characteristics of the relationships between decision variables. A decomposition approach based on the mixed second-order partial derivatives of the analytic expression of the optimization problem is presented.
We investigate the advantages and disadvantages of the automatic decomposition approach known as differential grouping, and we also give an enhanced version of differential grouping to deal with problems that the original method cannot handle (an interaction-detection sketch in this spirit follows the contribution list). Comparisons between different grouping strategies and experimental results on 20 benchmarks are given to validate the proposed decomposition methods.

· An evolutionary multi-objective optimization approach for compressed sensing problems is proposed.
An evolutionary multi-objective approach is introduced to investigate the two competing cost-function terms (the measurement error and a sparsity-inducing term) in compressed sensing problems. We investigate the sparse reconstruction problem and present data showing that knee regions do exist on the Pareto front (PF) for this problem, and that optimal solutions can be found in these knee regions. A new soft-thresholding evolutionary multi-objective algorithm (StEMO) is then presented, which uses a soft-thresholding technique to incorporate two additional heuristics: one with a greater chance of increasing the speed of convergence towards the PF, and another with a higher probability of improving the spread of solutions along the PF, enabling an optimal solution to be found in the knee region (the soft-thresholding operator is sketched after the contribution list). Experiments are presented which show that StEMO significantly outperforms five other well-known techniques commonly used for sparse reconstruction. Practical applications to the fundamental problems of recovering signals and images from noisy data are also demonstrated.

· Efficient ensemble learning based on compressed sensing is proposed.
Efficient ensemble learning requires optimising two objectives: minimising the number of models needed in the ensemble, while maximising the accuracy of the final ensemble. We show how these two objectives can be posed as the two terms of the compressed sensing problem; compressed sensing techniques are then employed to find an ensemble that is both small and effective (a pruning sketch in this spirit follows the contribution list). An additional contribution of this chapter is a new performance evaluation method (a new pairwise diversity measurement) called the roulette-wheel kappa-error diagram. This method takes into account the different weightings of the classifiers, and also reduces the total number of pairs of classifiers needed in the kappa-error diagram by selecting pairs through roulette-wheel selection according to the weightings of the classifiers. This approach can greatly improve the clarity and informativeness of the kappa-error diagram, especially when the number of classifiers in the ensemble is large. We use 25 different public data sets to evaluate and compare the performance of compressed sensing ensembles using four different sparse reconstruction algorithms, combined with two different classifier learning algorithms and two different training-data manipulation techniques. We also compare our method against five other state-of-the-art pruning methods. These experiments show that our method produces comparable or better accuracy, while being significantly faster than the compared methods.

· Joint sparse ensemble learning is proposed.
A set of weak classifiers is combined into a robust ensemble by using a joint sparse representation method, which assigns a sparse coefficient vector to the decision of each classifier. Training data are partitioned into several sub-groups to generate sub-underdetermined systems.
The joint sparse method enables these sub-groups to share their information about individual classifiers, yielding an improved overall classification (a row-sparse recovery sketch closes the contribution list). Partitioning the training data set into sub-groups makes the proposed joint sparse ensemble method parallelizable, and thus suitable for large-scale problems. Additionally, the joint sparse ensemble method can be used for problems with multiple output classes, whereas previous sparse approaches are limited to binary decision problems. Two different strategies are described for generating the sub-underdetermined systems, and experiments show these to be effective when tested with two different data manipulation methods. Additional experiments evaluate the performance of the joint sparse ensemble learning method against five other state-of-the-art methods from the literature, each designed to train small and efficient ensembles. Results suggest that joint sparse ensemble learning outperforms the other algorithms on most data sets.
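The following minimal Python sketch illustrates the bi-objective view of constrained optimization behind the first contribution: each candidate is scored by the pair (objective value, total constraint violation), and selection mixes non-dominated points (which may be informative infeasible solutions) with feasible low-objective points. It is an illustration only, not the thesis's algorithm; the toy problem, function names, and selection details are assumptions.

    import numpy as np

    def violation(x, constraints):
        # Total constraint violation: sum of max(0, g(x)) over inequality constraints g(x) <= 0.
        return sum(max(0.0, g(x)) for g in constraints)

    def dominates(a, b):
        # Pareto dominance on (objective value, violation) pairs.
        return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

    def select(population, f, constraints, k):
        # Keep up to k individuals, mixing non-dominated points (possibly
        # infeasible but with low violation) with feasible low-objective points.
        scored = [(x, f(x), violation(x, constraints)) for x in population]
        nondom = [s for s in scored
                  if not any(dominates((t[1], t[2]), (s[1], s[2])) for t in scored)]
        feasible = sorted((s for s in scored if s[2] == 0.0), key=lambda s: s[1])
        chosen, seen = [], set()
        for x, _, _ in nondom + feasible:
            key = x.tobytes()
            if key not in seen and len(chosen) < k:
                seen.add(key)
                chosen.append(x)
        return chosen

    # Toy COP: minimize ||x||^2 subject to x0 + x1 >= 1, i.e. g(x) = 1 - x0 - x1 <= 0.
    f = lambda x: float(np.sum(x ** 2))
    constraints = [lambda x: 1.0 - x[0] - x[1]]
    population = [np.random.uniform(-2.0, 2.0, size=2) for _ in range(50)]
    survivors = select(population, f, constraints, k=10)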
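In the same illustrative spirit, the next sketch shows the finite-difference idea behind differential grouping: two variables interact exactly when the change in f along one of them depends on the value of the other, i.e. when the mixed second-order partial derivative is nonzero. The toy function, fixed threshold, and greedy grouping loop are assumptions; the thesis's enhanced method is more careful than this.

    import numpy as np

    def interacts(f, dim, i, j, lb=-1.0, ub=1.0, delta=0.5, eps=1e-3):
        # Compare the change in f along x_i at two settings of x_j. The quantity
        # d1 - d2 is a finite-difference estimate (up to a constant factor) of
        # d^2 f / (dx_i dx_j), so it is near zero iff x_i and x_j do not interact.
        step = delta * np.eye(dim)[i]
        x = np.full(dim, lb)
        d1 = f(x + step) - f(x)   # change along x_i with x_j at its lower bound
        x[j] = ub
        d2 = f(x + step) - f(x)   # the same change with x_j moved to its upper bound
        return abs(d1 - d2) > eps

    def group_variables(f, dim):
        # Greedy grouping: each variable joins the first group it interacts with,
        # otherwise it starts its own group (separable variables end up alone).
        groups = []
        for i in range(dim):
            for g in groups:
                if any(interacts(f, dim, i, j) for j in g):
                    g.append(i)
                    break
            else:
                groups.append([i])
        return groups

    # Toy function: x0 and x1 interact through the product term; x2 is separable.
    f = lambda x: x[0] * x[1] + x[0] ** 2 + x[2] ** 2
    print(group_variables(f, 3))   # [[0, 1], [2]]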
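The soft-thresholding operator referred to in the third contribution is standard: it is the proximal map of the l1 norm. The sketch below applies it inside plain ISTA iterations to trace the trade-off between the two competing cost terms; StEMO itself embeds the operator in an evolutionary multi-objective search, which this sketch does not reproduce, and the toy problem and parameters are assumptions.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal map of t * ||x||_1: shrink every entry toward zero by t and
        # zero out the small ones, moving the candidate toward sparser solutions.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def objectives(x, A, y):
        # The two competing cost terms of sparse reconstruction:
        # measurement error ||Ax - y||_2 and the sparsity-inducing term ||x||_1.
        return np.linalg.norm(A @ x - y), np.linalg.norm(x, 1)

    # Toy underdetermined system: recover a 3-sparse 100-dim signal from 30 measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 100)) / np.sqrt(30)
    x_true = np.zeros(100)
    x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
    y = A @ x_true

    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the error gradient
    for lam in (0.1, 0.01, 0.001):         # each lam yields one trade-off point
        x = np.zeros(100)
        for _ in range(300):               # ISTA: gradient step, then shrinkage
            x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
        print(lam, objectives(x, A, y))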
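For the compressed sensing view of ensemble pruning, the sketch below poses the two objectives as the two terms of a sparse reconstruction problem: the columns of H hold the classifiers' +1/-1 training predictions, y holds the true labels, and a sparse weight vector w with H w close to y selects a small weighted sub-ensemble. The solver shown is plain ISTA and the synthetic ensemble is an assumption; the thesis compares four different sparse reconstruction algorithms.

    import numpy as np

    def sparse_ensemble_weights(H, y, lam=1.0, iters=1000):
        # l1-regularized least squares on the classifier-output matrix: the error
        # term pushes the weighted vote toward the labels, the l1 term prunes models.
        L = np.linalg.norm(H, 2) ** 2
        w = np.zeros(H.shape[1])
        for _ in range(iters):
            g = w - H.T @ (H @ w - y) / L                            # gradient step
            w = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)    # shrinkage step
        return w

    # Toy ensemble: 60 weak classifiers, each correct on a random ~70% of 200 points.
    rng = np.random.default_rng(1)
    y = rng.choice([-1.0, 1.0], size=200)
    H = np.where(rng.random((200, 60)) < 0.7, y[:, None], -y[:, None])
    H /= np.linalg.norm(H, axis=0)          # normalize columns, as in CS dictionaries

    w = sparse_ensemble_weights(H, y)
    kept = np.flatnonzero(w)                # indices forming the pruned ensemble
    pred = np.sign(H[:, kept] @ w[kept])    # weighted vote of the kept classifiers
    print(len(kept), "classifiers kept, train accuracy", np.mean(pred == y))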
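Finally, the joint sparse idea can be sketched as a multiple-measurement-vector problem: each training sub-group g contributes its own sub-underdetermined system A_g w_g close to y_g, and a shared row-sparsity pattern on W = [w_1 ... w_G] forces the sub-groups to agree on which classifiers matter. The l2,1-regularized proximal gradient solver and toy data below are illustrative assumptions, not the thesis's exact method.

    import numpy as np

    def row_soft_threshold(W, t):
        # Proximal map of t * sum_i ||W[i, :]||_2: whole rows are shrunk or zeroed
        # together, which is what couples the sub-groups' choices of classifiers.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        return W * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

    def joint_sparse_solve(systems, lam=0.05, iters=500):
        # Proximal gradient descent on sum_g ||A_g w_g - y_g||^2 / 2 + lam * ||W||_{2,1}.
        n = systems[0][0].shape[1]
        W = np.zeros((n, len(systems)))
        L = max(np.linalg.norm(A, 2) ** 2 for A, _ in systems)
        for _ in range(iters):
            grad = np.column_stack([A.T @ (A @ W[:, g] - y)
                                    for g, (A, y) in enumerate(systems)])
            W = row_soft_threshold(W - grad / L, lam / L)
        return W

    # Toy problem: 3 sub-groups whose solutions share the same 4 active rows of 80.
    rng = np.random.default_rng(2)
    support = np.sort(rng.choice(80, size=4, replace=False))
    systems = []
    for _ in range(3):
        A = rng.standard_normal((30, 80)) / np.sqrt(30)
        w = np.zeros(80)
        w[support] = rng.standard_normal(4)
        systems.append((A, A @ w))

    W = joint_sparse_solve(systems)
    recovered = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-2)
    print("true rows", support, "recovered rows", recovered)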
Keywords/Search Tags: Constrained Optimization, Large Scale Optimization, Classifier Ensemble, Sparse Reconstruction, l1 minimization