
Research On QoS-Aware Resource Allocation Strategy In Fog Computing Network

Posted on: 2022-03-03  Degree: Master  Type: Thesis
Country: China  Candidate: Y F Cui  Full Text: PDF
GTID: 2518306575967609  Subject: Information and Communication Engineering
Abstract/Summary:
The explosive growth of Internet of Things devices (IDs) has led to a surge in demand for Quality of Service (QoS), posing huge challenges to the network capacity and backhaul links of 5th-generation (5G) information-centric networks. To reduce the burden on traditional cloud computing data centers, fog computing, as an intermediary between the Internet of Things and cloud computing, can support widely distributed, delay-sensitive, and QoS-aware IoT applications. This thesis focuses on QoS-aware resource allocation strategies in fog computing networks, covering both task offloading decisions and content caching decisions.

First, this thesis studies a QoS-aware task offloading and resource allocation scheme in fog-enabled IoT networks. To minimize the overhead of the fog computing network, including task processing delay and energy consumption, while ensuring the multiple QoS requirements of different types of IDs, we propose a QoS-aware resource allocation algorithm that jointly considers the association between fog nodes (FNs) and IDs together with the allocation of transmission and computing resources, optimizing the offloading decisions while minimizing network overhead. First, an Analytic Hierarchy Process (AHP)-based valuation framework is established to determine the preference weights of the QoS parameters and the priorities of different types of ID tasks. Second, a resource block (RB) allocation algorithm is proposed to allocate RBs to IDs based on ID priority, satisfaction degree, and RB quality. Third, a QoS-aware bilateral matching game is introduced to optimize the association between FNs and IDs. Finally, the offloading decisions are made on the basis of the previous steps to minimize network overhead. Simulation results demonstrate that the proposed scheme effectively balances the network load, improves RB utilization, and reduces network overhead.

Furthermore, to minimize the content fetch delay of IDs, this thesis studies a proactive caching scheme based on federated deep learning in fog computing networks. In this scenario, IDs can obtain content through Device-to-Device (D2D) links or from FNs. First, we establish a Deep Neural Network (DNN) model on the user side and train it independently on local data; based on the K-Nearest Neighbor (KNN) algorithm, we then retrieve similar IDs to recommend content. Second, a gradient compression algorithm based on K-means is introduced: IDs compress the updated model gradients in each communication round before uploading them to the FNs. In addition, each FN aggregates model parameters according to a user-activity-based online popularity prediction algorithm and selects the most popular files for caching. Finally, under the federated learning (FL) model, IDs train their models locally and the FN aggregates model parameters iteratively until the target global model accuracy is reached. Simulations on the real MovieLens-1M dataset show that the proposed algorithm predicts content popularity with higher accuracy, reduces the content fetch delay, and improves the cache hit rate.
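The AHP-based valuation step described in the first contribution can be illustrated with a minimal sketch. The pairwise comparison matrix below is hypothetical (three example QoS criteria: delay, energy, reliability, scored on Saaty's 1-9 scale); the thesis's actual criteria and judgments are not specified in the abstract. The priority weights are the normalized principal eigenvector, and the consistency ratio (CR) checks that the judgments are coherent.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three QoS criteria
# (delay, energy, reliability); entries follow Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))        # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # normalized priority weights

# Consistency ratio: CI / RI, with random index RI = 0.58 for n = 3.
# CR < 0.1 is the usual acceptability threshold.
ci = (eigvals[k].real - 3) / (3 - 1)
cr = ci / 0.58
```

With these example judgments, delay receives the largest weight, which would then rank ID tasks by their dominant QoS requirement.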
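The bilateral matching game between FNs and IDs can be sketched with a capacity-constrained deferred-acceptance procedure, a standard way to realize stable bilateral matching. The preference lists and quotas below are illustrative placeholders; in the thesis they would be derived from the QoS valuations and RB qualities.

```python
# IDs propose to FNs in preference order; each FN tentatively keeps
# its best proposers up to a capacity quota and rejects the rest.
def match(id_prefs, fn_prefs, quota):
    rank = {f: {i: r for r, i in enumerate(p)} for f, p in fn_prefs.items()}
    held = {f: [] for f in fn_prefs}       # tentatively accepted IDs per FN
    nxt = {i: 0 for i in id_prefs}         # next FN each ID will propose to
    free = list(id_prefs)
    while free:
        i = free.pop()
        if nxt[i] >= len(id_prefs[i]):
            continue                        # ID exhausted its list: unmatched
        f = id_prefs[i][nxt[i]]
        nxt[i] += 1
        held[f].append(i)
        held[f].sort(key=lambda x: rank[f][x])
        if len(held[f]) > quota[f]:
            free.append(held[f].pop())      # reject the worst-ranked proposer
    return held

# Illustrative instance: three IDs, two FNs, FN capacities 2 and 1.
id_prefs = {'id1': ['fn1', 'fn2'], 'id2': ['fn1', 'fn2'], 'id3': ['fn1', 'fn2']}
fn_prefs = {'fn1': ['id1', 'id2', 'id3'], 'fn2': ['id3', 'id1', 'id2']}
assignment = match(id_prefs, fn_prefs, {'fn1': 2, 'fn2': 1})
```

Here fn1 keeps its two highest-ranked proposers and id3 falls through to fn2, illustrating how quotas produce the load balancing the abstract reports.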
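The K-means gradient compression in the caching scheme can be sketched as 1-D scalar quantization: cluster the gradient values into k centroids and transmit only the per-weight centroid index plus the small codebook. This is a generic sketch of the idea, not the thesis's exact algorithm; the cluster count and iteration budget are arbitrary choices here.

```python
import numpy as np

def kmeans_compress(grad, k=4, iters=20):
    """Quantize a gradient tensor to k centroid values via 1-D k-means."""
    g = grad.ravel()
    centroids = np.linspace(g.min(), g.max(), k)   # simple spread-out init
    for _ in range(iters):
        # Assign each gradient entry to its nearest centroid.
        labels = np.argmin(np.abs(g[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):                # skip empty clusters
                centroids[j] = g[labels == j].mean()
    # uint8 indices + tiny codebook are what the ID would upload to the FN.
    return labels.astype(np.uint8), centroids

def decompress(labels, centroids, shape):
    """FN-side reconstruction of the (lossy) gradient from indices."""
    return centroids[labels].reshape(shape)
```

Uploading an index per weight instead of a float reduces the per-round communication cost substantially (e.g. 2 bits vs. 32 bits per entry for k = 4), at the price of a bounded quantization error in the aggregated update.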
Keywords/Search Tags:fog computing, quality of service, resource allocation, task offloading, content caching