
Automatic Construction of Parallel Heuristic-Algorithm Portfolios

Posted on: 2021-05-20    Degree: Doctor    Type: Dissertation
Country: China    Candidate: S C Liu    Full Text: PDF
GTID: 1368330602994187    Subject: Computer application technology
Abstract/Summary:
Over the past ten years, owing to the wide application of parallel-computing architectures, the available computing power has increased dramatically. As a consequence, exploiting parallelism is becoming increasingly important in the design of efficient solvers for computationally hard problems. However, manually designing parallel solvers remains laborious. As an alternative, the automatic construction of parallel heuristic-algorithm portfolios (ACPP) has recently attracted considerable research interest. ACPP aims to automatically build effective parallel algorithm portfolios (PAPs), each containing multiple component algorithms, from a given problem-instance set and a given configuration space. In other words, the PAP-building process is fully automated, which greatly eases the burden on solver designers. Moreover, when a PAP is used to solve a problem instance, all of its component algorithms run independently in parallel. This simple parallel-solving strategy makes PAPs easy to deploy on workstations, servers and cloud environments. Because of its low cost and easy deployment, ACPP has good prospects for further development and application. However, research on ACPP is still in its early stages and remains far from practical.

This thesis studies ACPP from both theoretical and methodological perspectives. First, we analyze the automatic algorithm configuration modules that are closely related to ACPP. More specifically, we systematically analyze the performance-estimation part of automatic algorithm configuration and obtain the following results: 1) we establish the universally best performance estimator under finite computational budgets; 2) we study the performance-estimation error and derive upper bounds on it for finite and infinite algorithm configuration spaces, respectively. Comparative experiments on multiple problem domains show that the proposed performance estimator has a smaller estimation error than existing estimators. In
addition, the experimental results also show that our analysis of the upper bound of the estimation error is accurate. Finally, based on these results, we identify several insights for enhancing existing automatic algorithm configuration tools.

Second, we propose a new ACPP method based on explicit instance grouping, dubbed PCIT, which can automatically construct a high-performance PAP at a reasonable computational cost. PCIT adopts a strategy that gradually improves the quality of the grouping. In the initial stage, the instance set is split uniformly at random. During the subsequent construction process, PCIT collects a large amount of data generated by the runs of the automatic algorithm configuration procedure and uses it to build a model for adjusting the instance grouping. After several rounds of adjustment, PCIT outputs a PAP based on the final instance grouping. Experimental results on multiple problem domains show that the PAPs built by PCIT significantly outperform those built by existing ACPP methods, and can even rival state-of-the-art hand-designed parallel solvers.

Existing ACPP methods rely on an indispensable assumption: that the given training instances sufficiently represent the target use cases, so that the constructed high-performance PAPs generalize well. In many cases, however, this assumption does not hold. We therefore propose a generative adversarial solver trainer, dubbed GAST. GAST puts PAP construction and instance generation in an adversarial setting, where the latter seeks to generate new training instances that are hard for the current PAP, and the former tries to configure a new component algorithm to solve these instances. Our analysis shows that GAST is an explicit approximation of the steepest-descent procedure for maximizing the generalization performance of the PAP. We conduct experiments on multiple problem domains. The results show that when the training data are scarce or biased, the PAPs obtained by existing ACPP
methods suffer significant performance degradation on the test instances, whereas the PAPs constructed by GAST perform consistently well.

Although GAST overcomes the difficulty caused by scarce or biased training data, it lacks an effective mechanism for controlling the size of the PAP; in practice, the PAP it outputs may be too large. We therefore propose a new ACPP method dubbed co-evolution of parallel solvers (CEPS), which can improve any given PAP while maintaining its size. When updating the PAP, CEPS explores the PAP space by repeatedly perturbing the current PAP at random to generate new PAPs, and then retains the best-performing one. We also propose a method, dubbed GIP, which greedily selects algorithms from a set of algorithm configurations to form a PAP. We apply GIP+CEPS to a large-scale optimization problem in the logistics industry. The results show that CEPS can largely improve the generalization performance of a given PAP, and that the PAP output by GIP+CEPS is significantly better than the PAPs constructed by existing methods, achieving state-of-the-art performance on this problem. We further test the PAP output by GIP+CEPS on existing benchmarks; without re-training, it generalizes well to these benchmarks and finds new best solutions on 10 instances.
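The basic PAP solving strategy described above, running every component algorithm on the same instance independently and keeping the best result, can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the `run_portfolio` helper and the three toy heuristics are assumptions introduced here for demonstration.

```python
# Illustrative sketch of a parallel algorithm portfolio (PAP): all component
# algorithms run independently in parallel on the same instance, and the
# portfolio reports the best (lowest-cost) result any of them finds.
from concurrent.futures import ThreadPoolExecutor

def run_portfolio(components, instance):
    """Run every component solver on `instance` in parallel.

    `components` is a list of callables, each mapping an instance to a
    (cost, solution) pair; the portfolio returns the pair with minimal cost.
    """
    with ThreadPoolExecutor(max_workers=len(components)) as pool:
        results = list(pool.map(lambda solve: solve(instance), components))
    return min(results, key=lambda r: r[0])

# Hypothetical component heuristics for a toy minimization instance
# (a list of numbers); real components would be configured solvers.
def heuristic_a(xs): return (sum(xs), "a")
def heuristic_b(xs): return (max(xs), "b")
def heuristic_c(xs): return (min(xs), "c")

best_cost, best_solver = run_portfolio(
    [heuristic_a, heuristic_b, heuristic_c], [3, 1, 4]
)
```

Because the components do not communicate, the portfolio's performance on an instance equals that of its best component, which is what makes this strategy easy to deploy across workstations, servers and cloud environments.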
Keywords/Search Tags: Parallel Solvers, Parallel Algorithm Portfolios, Automatic Solver Construction, Automatic Algorithm Configuration, Heuristic Algorithms