Design and Research of Some Algorithms for Power System Steady-State Security Analysis under CPU+GPU Hyper Computing Architecture

Posted on: 2023-12-04 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: Y J Feng
GTID: 1522307298952409 | Subject: Electrical engineering

Abstract:

With the development of the new generation of dispatching and control cloud, power grid modeling and data acquisition now extend to the distribution network, the computational scope of the network keeps expanding, and the set of contingencies considered is more comprehensive. The timeliness of calculation methods must therefore be further improved without sacrificing accuracy. Traditional models and simplified algorithmic treatments struggle to balance accuracy and speed, and both the parallelism of partition-and-block parallel algorithms and the parallel computing capability of the CPU are limited. The Graphics Processing Unit (GPU), with its massive computing power, has become a force that cannot be ignored in general-purpose parallel computing, and it holds great potential for the steady-state security analysis of power systems. This dissertation studies high-performance GPU implementations of several algorithms in power system steady-state security analysis.

First, considering the extreme sparsity of power system matrices, the irregular variation of algorithm parallelism, and the rapid decline of parallelism in the later stages of factorization, an LU factorization method based on a left-looking domino recursion strategy is proposed. The experimental results show that, although the matrix characteristics of power system steady-state analysis and the GPU adaptation are fully considered, the method does not achieve the ideal acceleration compared with a multi-threaded commercial CPU solver library. The reason can be attributed to
the irregularity of the problem and the insufficient system scale, which together prevent the GPU's computing resources from being fully utilized.

For the matrix inversion involved in steady-state security analysis, a multi-parallel GPU strategy is proposed: a dual parallel mode of 2D expansion and 3D expansion is constructed to enlarge the scale and parallelism of the problem and to improve the efficiency of irregular data access. Experiments show that in the 3D expansion mode, scenario vectorization transforms the original sparse problem into a dense vector problem, so the algorithm makes full use of the GPU's memory bandwidth and computing resources and outperforms the 2D expansion model; compared with the multi-threaded KLU version, the maximum speedup exceeds 30 times.

Then, exploiting the scenario similarity of scanning problems under different contingencies in steady-state security analysis, and following the multi-parallel design idea of GPU-accelerated matrix inversion, a normalized modeling and packaging process for a series of structurally identical problems is constructed, implementing a problem-solving framework under dual-parallel GPU acceleration. Within this framework, a generalized batch matrix solution technique is proposed to solve different sparse linear systems for multiple scenarios on the GPU. To meet the requirements of normalized modeling of network fault scenarios under the multi-parallel scheme, redundant modeling of the admittance matrix under various three-phase symmetric and asymmetric faults, and normalized modeling of the power flow equations under different faults and scenarios, are designed and implemented. Both N-1 and N-x scenarios achieve unified modeling under three-phase symmetric faults as well as single-phase and three-phase short-circuit faults. The results show that under the
strategy of normalized modeling and scenario-vectorized design, the GPU-accelerated average solving time of a single sparse linear system is 15-20 times faster than 8-thread KLU across multiple test cases, and the additional non-zero elements introduced by redundant modeling have no negative impact on the final GPU acceleration.

Next, for the specific application of circuit breaker interrupting capacity scanning and auxiliary decision-making for short-circuit current limit violations, a complete GPU acceleration method is proposed, and the short-circuit current is calculated accurately with a modified-admittance iteration method that accounts for the integration of renewable generation. The concepts of 2D expansion and 3D expansion are also applied to batch branch current calculation, and it is found that the 2D and 3D modeling effects are essentially the same when the parallelism does not change during the computation. The experimental results show that, in terms of overall time, GPU-accelerated batch short-circuit current calculation under 3D modeling achieves about a 4x speedup on each test case compared with the 8-thread CPU algorithm.

For the auxiliary decision-making of short-circuit current suppression, an iterative method based on the sensitivity matrix is adopted for line-opening analysis and N-1 security checking. The impedance matrix is obtained by directly calling the GPU-accelerated sparse matrix inversion module when computing the comprehensive sensitivity; the Newton-Raphson method is used for the N-1 security check to verify branch power flow and node voltage limit violations after anticipated faults, by constructing batch redundant Jacobian matrices under 3D modeling and iteratively calling the GPU-accelerated batch sparse equation solver. A separation strategy for matrix structure and numerical calculation is proposed for the Jacobian computation, which further improves the
effect of GPU calculation. The experimental results show that, compared with the 8-thread CPU solution, the auxiliary decision-making for short-circuit current limit violations with static security checking achieves more than a 30x speedup on the PEGASE9241 case.

Finally, the algorithm parallelism and GPU adaptation of security-constrained optimal power flow (SCOPF), used to adjust the system state when static security analysis fails the N-1 criterion, are studied. The SCOPF problem contains power flow constraints under multiple fault states, which resembles the 2D expansion of multi-fault power flow. Since the 2D expansion mode proved less efficient than the 3D expansion mode in the GPU-accelerated solution of batch sparse linear systems above, the idea of decoupling the original problem into sub-scenario problems and recombining them in 3D is proposed. In the SCOPF solution based on the primal-dual interior point method, the KKT equations are partitioned into blocks and a block decoupling algorithm is designed, which largely transforms an irregular 2D scenario problem into a 3D batch problem. Within the decoupling algorithm, a high-performance GPU block solver is realized using a dual-batch triangular equation solving algorithm and a batch sparse matrix multiply-add algorithm. Experiments show that on the PEGASE2869 case, compared with the 8-thread Pardiso solver, the GPU block matrix solver achieves more than a 6x speedup when the number of anticipated faults is 64.

Keywords: Graphics processing unit, High-performance computing, Sparse linear equations, Batch-processing modeling, Interrupting capacity scan, Security-constrained optimal power flow
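To illustrate the left-looking factorization pattern named in the abstract, here is a minimal dense Python sketch. It shows only the column-by-column "left look" (each column is updated by all previously factored columns before its pivot scaling); the dissertation's method is a sparse GPU implementation with a domino recursion schedule, which this sketch does not attempt to reproduce, and the function name `left_looking_lu` is illustrative.

```python
def left_looking_lu(A):
    """In-place left-looking LU factorization (Doolittle, no pivoting).

    Column j is first updated with contributions from every column to its
    left, then its sub-diagonal part is scaled by the pivot. L (unit lower,
    strictly below the diagonal) and U (upper) are returned packed in A.
    Dense illustrative sketch only, not the sparse GPU algorithm.
    """
    n = len(A)
    for j in range(n):
        # The "left look": apply updates from already-factored columns 0..j-1.
        for k in range(j):
            for i in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
        # Scale sub-diagonal entries by the pivot to form column j of L.
        pivot = A[j][j]
        for i in range(j + 1, n):
            A[i][j] /= pivot
    return A
```

In the sparse GPU setting the inner updates touch only non-zero positions, which is where the irregular parallelism discussed in the abstract arises.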
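The "3D expansion" batching idea (many structurally identical scenario systems solved in lock-step) can be sketched as a batched forward substitution. The scenario loop below stands in for the GPU grid, where each scenario would map to its own thread block; `batch_solve_lower` is an illustrative name, not an API from the dissertation.

```python
def batch_solve_lower(Ls, bs):
    """Batched forward substitution: solve L @ x = b for many scenarios.

    All scenarios share the same structure and control flow; only the
    numerical values differ (scenario x row x column, i.e. "3D"). On a
    GPU the outer loop becomes the parallel batch dimension.
    """
    out = []
    for L, b in zip(Ls, bs):          # scenario dimension (GPU: one block each)
        n = len(b)
        x = [0.0] * n
        for i in range(n):            # identical row-by-row work per scenario
            s = b[i] - sum(L[i][k] * x[k] for k in range(i))
            x[i] = s / L[i][i]
        out.append(x)
    return out
```

Because every scenario executes the same instruction sequence, the batch avoids the divergence that makes a single irregular sparse solve a poor fit for the GPU.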
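The separation of matrix structure from numerical calculation described for the batched Jacobian can be sketched as a two-stage fill: the sparsity pattern is built once, then each scenario scatters its values into the fixed layout. The helper names `build_pattern` and `fill_values` are hypothetical, chosen only for this illustration.

```python
def build_pattern(entries):
    """Symbolic stage: record the (row, col) sparsity pattern once.

    Returns the sorted pattern and an index map so every scenario can
    write its values straight into a flat array without rebuilding the
    structure. Done a single time; illustrative sketch only.
    """
    pattern = sorted(set(entries))
    index = {rc: k for k, rc in enumerate(pattern)}
    return pattern, index

def fill_values(index, scenario_entries):
    """Numeric stage: scatter one scenario's values into the shared layout."""
    vals = [0.0] * len(index)
    for (r, c), v in scenario_entries:
        vals[index[(r, c)]] += v   # accumulate duplicates at fixed positions
    return vals
```

The numeric stage is a regular, branch-free job that is the same for every scenario, which is why it suits batched GPU execution while the one-off symbolic stage can stay on the host.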
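The SCOPF block decoupling can be pictured on an arrowhead system: independent per-contingency blocks A_i coupled to shared variables y through borders B_i and a corner C. Block elimination forms a small Schur-complement system for y, after which every block back-solve is independent and batchable, which is the "recombine in 3D" idea. Scalar blocks keep the arithmetic readable here; this is a sketch of the generic technique under stated assumptions, not the dissertation's solver, and `solve_arrowhead` is an invented name.

```python
def solve_arrowhead(blocks, couplings, corner, rhs_blocks, rhs_corner):
    """Solve an arrowhead system by block elimination (scalar blocks).

        [A_1        B_1] [x_1]   [r_1]
        [    ...    ...] [...] = [...]
        [B_1 ...    C  ] [ y ]   [ s ]
    """
    # Eliminate each contingency block into the corner (Schur complement).
    S = corner
    t = rhs_corner
    for A, B, r in zip(blocks, couplings, rhs_blocks):
        S -= B * B / A          # C - sum_i B_i * A_i^{-1} * B_i
        t -= B * r / A          # s - sum_i B_i * A_i^{-1} * r_i
    y = t / S                   # small coupled system solved last
    # Back-substitute each block independently (batchable on the GPU).
    xs = [(r - B * y) / A for A, B, r in zip(blocks, couplings, rhs_blocks)]
    return xs, y
```

In the KKT context each `A_i` would be a per-contingency block factored in a batch, so only the small Schur system remains sequential.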