As a representative next-generation network architecture, Named Data Networking (NDN) integrates networking with storage, enabling the reuse and efficient forwarding of static content through name-based routing and distributed in-network caching. As network virtualization technology matures, the next stage in the development of NDN is to deploy an in-network computing architecture that provides computing services on network devices. NDN's named addressing mechanism helps discover and route to computing services within the network, and its caching function enables the reuse of cached computing results. Although in-network computing on the NDN architecture offers proximity computing, dynamic addressing/routing, and cache reuse of computing results, several problems still urgently need optimization: selecting deployment locations for in-network computing services, routing among multiple available services, and provisioning computing power on software NDN routers. In view of these problems, this thesis explores deployment optimization, routing optimization, and computing-power provisioning optimization techniques for named in-network computing. The main work of this thesis is as follows:

To address the deployment optimization problem of named in-network computing, this thesis proposes a service-yield-based in-network named-computing service deployment strategy (SB-INSDS). The strategy first defines two metrics, the average idle rate and the average service yield, which respectively measure the computational task pressure on different NDN routers and the utilization of their available computing resources. It then designs an along-route compute-and-network resource awareness mechanism that collects these metrics from the NDN routers on the transmission path of an Interest packet and aggregates them at the source server. The source server determines which NDN routers on the path meet the deployment conditions based on their compute-and-network resource pressure, and finally makes a probabilistic selection among the candidate nodes, weighted by the average idle rate, to decide the best deployment location for the service. Simulation results show that, compared with the IoT-NCN strategy, which preferentially deploys computing services on NDN routers near the user side, this strategy improves the overall efficiency with which the NDN routers in the network handle computational traffic and also reduces the number of container replacements in the network.

For the routing optimization problem of in-network computing, this thesis proposes a probabilistic routing strategy based on service invocation time (SIT-PRS), built on a state synchronization mechanism for neighboring in-network routers. The synchronization mechanism uses a streamlined notification Interest packet to announce real-time status, so in-network NDN routers obtain information only about neighboring routers whose status has changed, including their updated deployed services and estimated service invocation times. On this basis, combined with the perceived distribution of in-network services, each NDN router evaluates the average invocation delay of the forwarding interfaces for each FIB entry and then probabilistically forwards received Interest packets according to these interface invocation delays. Compared with the native NDN routing mechanism, the mechanism designed in this thesis incurs lower synchronization communication overhead and significantly reduces the average service invocation delay on the user side.

To address the computing-power supply problem of named in-network computing, this thesis proposes a DPU (Data Processing Unit) offloading mechanism for in-network computing-power optimization (ICPO-DUM). The mechanism treats the available control-plane ARM (Advanced RISC Machine) core resources on the DPU as the total resource budget for deploying companion containers, transforms the companion-container offloading and deployment problem into a 0-1 knapsack problem, and takes maximizing the DPU's local service processing revenue as its objective to derive the optimal offloading and deployment list of companion containers. Compared with an offloading mechanism that prioritizes service processing revenue alone, ICPO-DUM significantly improves the service processing efficiency of the companion containers on the DPU and helps NDN routers release more CPU (Central Processing Unit) resources to deploy additional in-network computing service containers.
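The probabilistic selection step of SB-INSDS can be sketched as follows. This is a minimal illustration, assuming that the source server weights each candidate router in proportion to its reported average idle rate; the function and variable names are hypothetical, not taken from the thesis:

```python
import random

def select_deployment_node(candidates):
    """Probabilistically pick one candidate NDN router for service deployment.

    candidates: dict mapping router id -> average idle rate in [0, 1].
    Routers with more idle compute capacity are proportionally more
    likely to be chosen (roulette-wheel selection).
    """
    total = sum(candidates.values())
    if total == 0:
        # No idle capacity reported anywhere: fall back to a uniform pick.
        return random.choice(list(candidates))
    r = random.uniform(0, total)
    acc = 0.0
    for node, idle in candidates.items():
        acc += idle
        if r <= acc:
            return node
    return node  # guard against floating-point rounding at the boundary
```

A router reporting an idle rate of 0.8 is thus four times as likely to receive the service as one reporting 0.2, which matches the strategy's goal of steering deployments toward lightly loaded routers.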
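The delay-based probabilistic forwarding in SIT-PRS can be illustrated with inverse-delay weighting, one plausible realization of the idea that interfaces with lower estimated invocation delay should be chosen more often. The weighting scheme and all names here are illustrative assumptions, not the thesis's exact formula:

```python
import random

def forwarding_probabilities(fib_entry):
    """fib_entry: dict mapping interface id -> estimated average
    service invocation delay (e.g. in ms) for one FIB prefix.

    Returns a probability per interface, inversely proportional
    to its delay, so faster interfaces are favored but slower
    ones still occasionally carry traffic.
    """
    weights = {iface: 1.0 / delay for iface, delay in fib_entry.items()}
    total = sum(weights.values())
    return {iface: w / total for iface, w in weights.items()}

def pick_interface(fib_entry):
    """Probabilistically choose the outgoing interface for an Interest."""
    probs = forwarding_probabilities(fib_entry)
    return random.choices(list(probs), weights=list(probs.values()))[0]
```

For example, an interface with a 10 ms estimated delay would be selected 80% of the time against one with 40 ms, rather than always, which spreads load while still preferring the faster path.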
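The 0-1 knapsack formulation underlying ICPO-DUM can be sketched with a standard dynamic-programming solution: the DPU's ARM-core budget is the knapsack capacity, each companion container has a core cost and a processing-revenue value, and the goal is the revenue-maximizing subset. The container tuples and names below are illustrative:

```python
def offload_plan(containers, core_budget):
    """Classic 0-1 knapsack DP over the DPU's ARM-core budget.

    containers: list of (name, arm_cores_needed, processing_revenue)
    core_budget: total ARM cores available for companion containers
    Returns (max_revenue, list of container names to offload).
    """
    n = len(containers)
    # dp[i][c] = best revenue using the first i containers within c cores
    dp = [[0] * (core_budget + 1) for _ in range(n + 1)]
    for i, (_, cost, revenue) in enumerate(containers, 1):
        for c in range(core_budget + 1):
            dp[i][c] = dp[i - 1][c]  # skip container i
            if cost <= c:            # or take it, if it fits
                dp[i][c] = max(dp[i][c], dp[i - 1][c - cost] + revenue)
    # Backtrack to recover which containers were selected.
    chosen, c = [], core_budget
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(containers[i - 1][0])
            c -= containers[i - 1][1]
    return dp[n][core_budget], list(reversed(chosen))
```

With a budget of 5 cores and containers costing 2, 3, and 4 cores yielding revenues 3, 4, and 5, the optimal plan offloads the first two for a total revenue of 7, illustrating how the mechanism derives its offloading and deployment list.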