
An Optimization Study Based on the Method of Producing New Sample Distributions

Posted on: 2013-08-31  Degree: Master  Type: Thesis
Country: China  Candidate: C L Z Chen  Full Text: PDF
GTID: 2248330374457073  Subject: Computer application technology
Abstract/Summary:
This work proposes a new variant of AdaBoost called the ERstd-AdaBoost algorithm. The standard AdaBoost algorithm updates sample weights without discrimination, whereas ERstd-AdaBoost updates sample weights differentially, achieving better accuracy with no decline in diversity. Experiments on several benchmark real-world data sets from the UCI repository show that the new algorithm consistently improves on the performance of AdaBoost, with acceptable stability.

A study based on the IB (Inverse Boosting) algorithm is also carried out, and an improved version of IB, called IB+, is proposed. Both the IB and IB+ algorithms increase the weights of samples that have been classified correctly during training. The main difference between IB and IB+ is the method of updating the training-sample weights at each iteration. In the IB algorithm, the weights are updated according to an inverse error vector determined by the performance of the last trained single network; the IB+ algorithm instead uses an intermediate ensemble network to determine the inverse error vector, so that a more suitable sample distribution is obtained. Further experimental results show that the performance of an ensemble built by using an inverse error vector to create new sample distributions is determined by the performance of its base single networks, not by the degree of correlation among them.

Finally, this paper gives a further survey of the inverse boosting algorithm, which employs an inverse error vector to produce new sample distributions. The experimental results show that inverse boosting can outperform normal boosting in some cases. Further experiments on several benchmark real-world data sets from the UCI repository provide deeper insight into the inverse boosting algorithm, and we find that the performance of the ensemble is determined by the performance of its base classifiers.
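To make the inverse weight-update scheme concrete, the following is a minimal sketch of one iteration of the update described above: correctly classified samples have their weights increased, the opposite of standard AdaBoost, which increases the weights of misclassified samples. The function name, the exponential form of the update, and the `alpha` vote-weight parameter are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def inverse_boost_weight_update(weights, y_true, y_pred, alpha):
    """One inverse-boosting weight update (illustrative sketch).

    `weights` is the current sample distribution, `y_true`/`y_pred`
    are the labels and the last learner's predictions, and `alpha`
    is that learner's vote weight (an assumed parameterization).
    """
    # Inverse error vector: +1 where the last learner was correct,
    # -1 where it was wrong (standard AdaBoost flips these signs).
    inv_err = np.where(y_true == y_pred, 1.0, -1.0)
    new_w = weights * np.exp(alpha * inv_err)
    # Renormalize so the weights remain a valid distribution.
    return new_w / new_w.sum()
```

With this rule, the next weak learner concentrates on the samples the previous learner already handled well, which is why, as the experiments above indicate, ensemble quality tracks the strength of the base classifiers rather than their diversity.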
This distinctive feature of inverse boosting indicates that the traditional way of improving the performance of an ensemble, enhancing the diversity among the base classifiers, is ineffective here. Therefore, this paper proposes a targeted method to improve the performance of inverse boosting.
Keywords/Search Tags: Neural Network Ensemble, Optimization of New Sample Distribution, Error-Rightstd, Inverse Boosting