With the advancement of Internet technology and the growing number of smart terminal devices, a large amount of data is generated at the edge of the network. Although the computing power of smart terminals has improved, size constraints keep widely available devices limited, so they cannot fully satisfy user needs while guaranteeing the user experience. Applications with strict real-time requirements, such as Augmented Reality / Virtual Reality (AR/VR), therefore need to be migrated from the terminal to the cloud for execution. However, since cloud computing centers are usually far from users, migrating tasks to them incurs large network latency and jitter. Edge computing is an emerging computing paradigm that places small data centers on the user side, sinking some functions of the core network to the network edge; by processing applications at the edge, it effectively reduces network transmission latency and meets users' real-time requirements.

This paper focuses on the edge server placement problem in edge computing, aiming to optimize the traditional network structure, reduce task transmission delay, and improve the user experience. Reducing task transmission latency is the primary purpose of placing edge servers, but the energy consumption and workload balance of the edge server system cannot be ignored either. This paper therefore investigates the delay, energy consumption, and load balancing aspects of edge server placement, and studies the problem in two scenarios: a simulated one and a real one.

In the simulated scenario, all base station data are synthetically generated, and the edge server placement problem is modeled and analyzed with the objective of optimizing delay and load balance. Since the placement process resembles clustering, an improved K-Means algorithm based on Grey Wolf Optimization (GWO) is used as the placement algorithm, which mitigates K-Means's sensitivity to the initial cluster centers.

Because the simulated scenario differs from the real one, the placement problem in a real scenario is further investigated, with Shanghai selected as the real deployment area. A model jointly optimizing the latency and energy consumption of edge server placement is developed. For the delay part, a task processing fraction is introduced and the delay is divided into three components: forwarding delay, queuing delay, and remote transmission delay, which closely models the real task migration process. The problem is solved in two phases: the first phase determines the edge server locations, and the second phase uses a Modified Grey Wolf Optimization (MGWO) algorithm to determine the assignment of base stations to edge servers, effectively solving the underlying binary discrete optimization problem. Simulation results based on real data sets show that the proposed method outperforms the comparison algorithms in optimizing network delay and energy consumption, and exhibits good convergence.
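The three-part delay model for the real scenario can be written schematically as follows; the notation here (task processing fraction $\alpha_i$ and the per-component delay terms) is an illustrative assumption, not necessarily the thesis's exact formulation:

$$
T_i \;=\; T_i^{\mathrm{fwd}} \;+\; \alpha_i\, T_i^{\mathrm{que}} \;+\; (1-\alpha_i)\, T_i^{\mathrm{rem}},
\qquad 0 \le \alpha_i \le 1,
$$

where $T_i^{\mathrm{fwd}}$ is the delay of forwarding base station $i$'s tasks to its edge server, $\alpha_i$ is the fraction of those tasks processed at the edge (incurring queuing delay $T_i^{\mathrm{que}}$), and the remaining fraction $1-\alpha_i$ is sent on to the remote cloud with transmission delay $T_i^{\mathrm{rem}}$.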
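The GWO-seeded K-Means idea described for the simulated scenario can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the wolf-pack size, iteration counts, and the within-cluster sum of squares (WCSS) fitness are assumed details, and base stations are represented simply as 2-D coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

def wcss(centers, points):
    # Sum of squared distances from each base station to its nearest center.
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def gwo_init_centers(points, k, wolves=20, iters=50):
    """Search for a good set of k initial centroids with Grey Wolf Optimization.

    Each wolf encodes one candidate set of k centroids; the pack moves toward
    the three best wolves (alpha, beta, delta) as in the standard GWO update.
    """
    dim = points.shape[1]
    lo, hi = points.min(0), points.max(0)
    pack = rng.uniform(lo, hi, size=(wolves, k, dim))
    for t in range(iters):
        fitness = np.array([wcss(w, points) for w in pack])
        alpha, beta, delta = pack[fitness.argsort()[:3]]  # three best wolves
        a = 2 * (1 - t / iters)                           # coefficient decays 2 -> 0
        for i in range(wolves):
            new = np.zeros_like(pack[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random((k, dim)), rng.random((k, dim))
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - pack[i])
            pack[i] = np.clip(new / 3, lo, hi)            # average of the three pulls
    fitness = np.array([wcss(w, points) for w in pack])
    return pack[fitness.argmin()]

def kmeans(points, centers, iters=100):
    """Standard Lloyd iterations, refined from the GWO-chosen initial centers."""
    for _ in range(iters):
        labels = np.linalg.norm(points[:, None] - centers[None], axis=2).argmin(1)
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(0)  # skip empty clusters
    return centers, labels
```

Because Lloyd iterations never increase the WCSS objective, seeding K-Means with the best GWO candidate can only improve on it, which is the sense in which the hybrid mitigates sensitivity to the initial centers.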
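The second-phase binary assignment can likewise be sketched. A common way to apply GWO to a binary discrete problem, assumed here, is to keep wolves in a continuous space, pass positions through a sigmoid transfer function, and decode one server per base station; the cost tables, weights, and decoding rule below are illustrative, not the thesis's MGWO.

```python
import numpy as np

def fitness(assign, delay, energy, w=0.5):
    # Weighted sum of per-base-station delay and energy costs for an
    # assignment vector assign[b] = index of the server serving station b.
    rows = np.arange(len(assign))
    return w * delay[rows, assign].sum() + (1 - w) * energy[rows, assign].sum()

def binary_mgwo(delay, energy, wolves=15, iters=60):
    """Binary GWO sketch: continuous positions, sigmoid decode, elitist best."""
    rng = np.random.default_rng(1)
    n, m = delay.shape
    pos = rng.uniform(-1, 1, (wolves, n, m))       # continuous wolf positions

    def decode(p):
        s = 1 / (1 + np.exp(-p))                   # sigmoid transfer to (0, 1)
        return s.argmax(axis=-1)                   # one server per base station

    best = None
    for t in range(iters):
        fits = np.array([fitness(decode(p), delay, energy) for p in pos])
        order = fits.argsort()
        if best is None or fits[order[0]] < best[0]:
            best = (fits[order[0]], decode(pos[order[0]]).copy())
        alpha, beta, delta = pos[order[:3]]
        a = 2 * (1 - t / iters)                    # coefficient decays 2 -> 0
        for i in range(wolves):
            new = np.zeros_like(pos[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random((n, m)), rng.random((n, m))
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - pos[i])
            pos[i] = new / 3                       # average pull toward leaders
    return best  # (best cost found, assignment vector)
```

Decoding with an argmax over servers automatically satisfies the constraint that each base station is assigned to exactly one edge server, which is what makes the underlying problem binary and discrete.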