In recent years, due to the increasing demand for computing power, the three-tier "cloud-edge-end" collaborative computing deployment model has gradually become popular. The computing power network goes further, coordinating and scheduling the ubiquitous distributed network, computing, and storage resources in the cloud-edge-end architecture, and developing efficient, reasonable allocation and scheduling strategies for computing resources based on user needs, in order to improve service quality and computing resource utilization. Computing power resource awareness is one of the key technologies for implementing the computing power network. This thesis focuses on the implementation and improvement of computing power resource awareness, and mainly conducts the following work:

(1) Design and implement a resource monitoring system for the computing power network. Computing power resource awareness can be achieved through resource monitoring systems, but currently available monitoring systems are unsuitable for computing power network scenarios due to their architectural design or performance limitations. Based on the characteristics of the computing power network, this thesis proposes design principles for a monitoring system tailored to it, and architects the resource monitoring system according to these principles. The system is divided into a decoupled client and server. The client runs on the computing nodes and comprises a data collection module and a monitoring management module; the server runs on the management platform and comprises a data processing module and an operations and management module. Based on this architecture, the thesis designs and improves the technical solutions and implementation mechanisms for each function of the system, builds an experimental platform, and conducts functional testing to verify that the system operates correctly.

(2) This thesis proposes a Transformer-based algorithm for predicting computing
power resource data. The Transformer is not only capable of modeling complex data variations but also adept at handling long-sequence problems, making it well suited to predicting resources in computing power networks, which change rapidly and exhibit no clear patterns. Therefore, a Transformer-based prediction algorithm is adopted in this thesis to predict resource usage. To address the issues the Transformer encounters during prediction, a logarithmic sparsity strategy and a convolutional self-attention mechanism are employed, aiming to improve the algorithm's space efficiency and prediction accuracy. To validate the performance of the proposed algorithm, simulation experiments are conducted on two metrics with representative variation characteristics, namely CPU and disk I/O. The improved Transformer-based algorithm is compared with the ARIMA algorithm and an LSTM-based prediction algorithm. The experimental results show that the proposed algorithm performs well in terms of both convergence speed and prediction accuracy.
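The decoupled client/server monitoring architecture in (1) can be sketched minimally as follows. This is an illustrative toy, not the thesis's implementation: the class names, the in-process "push" call, and the hard-coded readings are all assumptions; a real client would sample the OS (e.g. CPU load, disk I/O counters) and push over the network to the management platform.

```python
import time
from dataclasses import dataclass
from collections import defaultdict

# --- client side (runs on a computing node) ---

@dataclass
class Sample:
    node: str       # computing node identifier
    metric: str     # e.g. "cpu", "disk_io"
    value: float
    ts: float       # collection timestamp

class CollectorClient:
    """Data collection module sketch: gathers local metrics and
    pushes them to the server-side data processing module."""
    def __init__(self, node: str, server: "MonitoringServer"):
        self.node = node
        self.server = server

    def collect_and_push(self, readings: dict) -> None:
        # readings: {"cpu": 0.42, ...}; a real client would sample the OS
        for metric, value in readings.items():
            self.server.ingest(Sample(self.node, metric, value, time.time()))

# --- server side (runs on the management platform) ---

class MonitoringServer:
    """Data processing module sketch: aggregates samples per (node, metric)."""
    def __init__(self):
        self.series = defaultdict(list)

    def ingest(self, sample: Sample) -> None:
        self.series[(sample.node, sample.metric)].append(sample.value)

    def average(self, node: str, metric: str) -> float:
        vals = self.series[(node, metric)]
        return sum(vals) / len(vals)

server = MonitoringServer()
client = CollectorClient("node-1", server)
client.collect_and_push({"cpu": 0.40, "disk_io": 120.0})
client.collect_and_push({"cpu": 0.60, "disk_io": 80.0})
print(server.average("node-1", "cpu"))  # 0.5
```

Decoupling the two sides as in the sketch is what lets the client stay lightweight on the computing nodes while the server-side processing and operations logic evolves independently on the management platform.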
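The logarithmic sparsity strategy in (2) can be illustrated by the index set each query position attends to: instead of attending to all earlier steps (a full causal mask, O(L^2) attention cells over a length-L sequence), each step attends only to itself and to exponentially spaced past steps, giving O(L log L) cells. The sketch below is a toy illustration of that index pattern under this assumption, not the thesis's actual attention implementation, and it does not cover the convolutional self-attention part.

```python
def log_sparse_indices(t: int) -> list:
    """Positions a query at step t may attend to under logarithmic
    sparsity: itself plus exponentially spaced past steps
    (t-1, t-2, t-4, t-8, ...) -- O(log t) cells instead of O(t)."""
    idx = {t}
    step = 1
    while t - step >= 0:
        idx.add(t - step)
        step *= 2
    return sorted(idx)

# Full causal self-attention over L steps touches O(L^2) cells;
# the log-sparse mask touches only O(L log L).
L = 64
full_cells = L * L
sparse_cells = sum(len(log_sparse_indices(t)) for t in range(L))
print(log_sparse_indices(8))        # [0, 4, 6, 7, 8]
print(full_cells, sparse_cells)
```

The attention mask derived from these index sets is what reduces the memory footprint of self-attention on long resource time series, which is the space-efficiency improvement the abstract refers to.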