
Research and Implementation of a Service Caching Method Considering Mobility in Edge Computing

Posted on: 2022-08-04
Degree: Master
Type: Thesis
Country: China
Candidate: A Q Yang
Full Text: PDF
GTID: 2518306524489744
Subject: Master of Engineering
Abstract/Summary:
In recent years, advances in microelectronics and low-power technologies have fueled the growth of IoT systems and an explosion of compute-intensive applications, such as self-driving and augmented reality, which require network infrastructures that provide lower latency and greater computational power. Several studies have shown that edge computing is an essential and highly promising solution. Edge computing introduces a cloud-edge-end architecture that deploys servers closer to end devices, thereby significantly reducing latency, network bandwidth pressure, and the storage and computation load on cloud data centers. Within this architecture, many works have studied the static edge mechanism, in which edge servers are deployed at fixed locations and connected to base stations or access points. However, because each edge server has limited coverage, deployment costs become very high when a large area must be covered. The mobile edge mechanism was therefore proposed: edge servers are made mobile with the help of vehicles, drones, and similar platforms, so that fewer servers are needed to cover the same area.

According to our survey, most existing work on the mobile edge mechanism assumes that the computation time of a task offloaded from an end device to a mobile edge server is short compared with its transmission time. However, with the emergence of computation-intensive applications and the development of 5G communication networks, task transmission time has been greatly reduced while computation time has grown, so system design can no longer rest on that assumption. Meanwhile, the problem of service caching on edge servers urgently needs to be addressed: because edge server resources are limited, highly efficient service caching decisions are needed to reduce task completion latency and thus improve system performance.

In this thesis, we propose a new service caching scheme for mobile edge servers, focusing on application types with high computational volume and relaxed latency requirements, such as periodic data collection, video surveillance, and people-flow counting. Unlike the traditional scheme, in which an edge server must finish a device's task before leaving that device, this thesis overlaps task upload time, server movement time, and task processing time, and allows the server to return the corresponding computation results when it visits the device a second time. The thesis addresses the service caching problem of multiple mobile edge servers in a fixed area, with particular attention to front-end device assignment and to the path planning and service scheduling of the mobile edge servers. To solve this problem, the thesis proposes a hierarchical iterative on-demand caching algorithm: it decouples the front-end device allocation problem from the path planning and service scheduling problems of the mobile edge servers and jointly optimizes them in a hierarchical manner, allows a server to return computation results on its second visit to a device, updates the caching scheme over multiple iterations, and introduces a rejection probability to keep the search from falling into a local optimum. Finally, the thesis evaluates the algorithm through extensive simulation experiments. The results show that, compared with existing work, the algorithm reduces the total task delay of the servers by 50% and significantly reduces the system deployment cost.
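To make the hierarchical idea concrete, the following is a minimal Python sketch of one possible realization under a toy latency model: the upper layer perturbs the device-to-server assignment, the lower layer re-plans each server's route with a nearest-neighbour heuristic, and a temperature-controlled rejection probability lets the search occasionally accept worse assignments so it can escape local optima. All function names, the latency model, and the parameter values are illustrative assumptions and do not reproduce the thesis's actual algorithm.

```python
import math
import random

# Illustrative sketch only: the latency model, parameters, and function names
# are assumptions, not the thesis's actual formulation.

def route_latency(devices, positions, speed=1.0, proc_time=5.0):
    """Lower layer: nearest-neighbour tour over a server's assigned devices.

    Returns travel time plus the processing time that cannot be hidden
    behind movement (a coarse stand-in for upload/move/compute overlap).
    """
    if not devices:
        return 0.0
    unvisited = set(devices)
    current = unvisited.pop()
    travel = 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda d: math.dist(positions[current], positions[d]))
        travel += math.dist(positions[current], positions[nxt]) / speed
        unvisited.remove(nxt)
        current = nxt
    hidden = min(travel, proc_time * len(devices))   # processing overlapped with travel
    return travel + proc_time * len(devices) - hidden

def total_latency(assignment, positions, n_servers):
    """Sum of per-server route latencies for a device-to-server assignment."""
    per_server = [[] for _ in range(n_servers)]
    for dev, srv in assignment.items():
        per_server[srv].append(dev)
    return sum(route_latency(devs, positions) for devs in per_server)

def hierarchical_iterative_caching(positions, n_servers, iters=2000, temp=10.0, cooling=0.995):
    """Upper layer: perturb the assignment; lower layer: re-plan routes.

    Worse candidates are kept with probability exp(-delta / temp), i.e. the
    rejection probability that prevents the search from getting stuck.
    """
    devices = list(positions)
    assignment = {d: random.randrange(n_servers) for d in devices}
    best = dict(assignment)
    cur_cost = best_cost = total_latency(assignment, positions, n_servers)
    for _ in range(iters):
        cand = dict(assignment)
        cand[random.choice(devices)] = random.randrange(n_servers)   # move one device
        cost = total_latency(cand, positions, n_servers)
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            assignment, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = dict(cand), cost
        temp *= cooling
    return best, best_cost

if __name__ == "__main__":
    random.seed(0)
    pos = {f"dev{i}": (random.uniform(0, 100), random.uniform(0, 100)) for i in range(20)}
    plan, cost = hierarchical_iterative_caching(pos, n_servers=3)
    print(f"estimated total latency: {cost:.1f}")
```

In this sketch the rejection probability plays the same role as the acceptance rule in simulated annealing, and the decoupling described in the abstract is reflected in the split between hierarchical_iterative_caching (device allocation) and route_latency (path planning and scheduling per server).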
Keywords/Search Tags: Mobile edge computing, service caching, path planning, service scheduling