In machine learning tasks, datasets are growing ever larger in order to fit potential models effectively, and data collection is often separated in time and space. Traditional centralized algorithms cannot cope effectively with these changes, while many problems in machine learning can be formulated as decentralized optimization problems. Distributed optimization algorithms decompose a large optimization problem into several subproblems and build a communication network by connecting multiple nodes. Each node processes a subproblem locally and communicates with its neighboring nodes in the network, so that the global optimization problem is eventually solved. In practical applications, the objective functions of many optimization problems vary with time, and the information transmitted between nodes must be quantized. It is therefore of great significance to study distributed online optimization algorithms under quantized communication. This paper uses graph theory, matrix theory, and optimization theory to study distributed online optimization algorithms, covering both ideal communication and dynamic online quantization. The two main contributions of this paper are summarized as follows.

1. For distributed online optimization problems, we first design a distributed online optimization algorithm with one-step gradient descent and multi-step consensus, which uses a fixed gradient descent step size so that the algorithm can track the optimal solutions of the dynamically changing objective functions (a schematic form of this update is sketched after this summary). For the case where the objective functions are strongly convex and smooth, we analyze the algorithm's dynamic regret and prove that the coefficient of the linear growth term in the dynamic regret can be made arbitrarily small by adjusting the gradient descent step size. Finally, the impact of the parameters on the algorithm's performance is analyzed through numerical simulations.

2. Building on the algorithm above, we further consider the scenario of quantized communication. A finite dynamic difference coding strategy is proposed, with online-generated quantization intervals and variable-center quantizers (a schematic form of such a quantizer is also sketched after this summary). Each node is equipped with a variable-center encoder, which encodes online based on the current state of the algorithm and the node's internal state; meanwhile, each communication channel is equipped with a corresponding decoder, so that the encoding and decoding errors can be controlled dynamically. We first design a distributed online gradient descent algorithm under dynamic online quantization, then analyze its dynamic regret and derive the total number of bits consumed by the network. Finally, the impact of the number of bits on the algorithm's performance is analyzed through numerical simulations.
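
To make the first contribution concrete, the following is a minimal sketch of a one-step gradient descent, multi-step consensus update of the kind described above; the symbols $W$ (a doubly stochastic weight matrix of the network), $\alpha$ (the fixed step size), and $\kappa$ (the number of consensus steps per round) are notational assumptions for illustration and need not match the thesis exactly:
\[
z_{i,t} = x_{i,t} - \alpha \nabla f_{i,t}(x_{i,t}), \qquad
x_{i,t+1} = \sum_{j=1}^{n} \big[ W^{\kappa} \big]_{ij}\, z_{j,t},
\]
with the performance measured by one common form of the dynamic regret against the time-varying minimizers,
\[
\mathrm{Reg}_T^{d} = \sum_{t=1}^{T} \sum_{i=1}^{n} \big( f_t(x_{i,t}) - f_t(x_t^{\ast}) \big), \qquad
f_t = \sum_{i=1}^{n} f_{i,t}, \quad x_t^{\ast} \in \arg\min_x f_t(x).
\]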
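
Similarly, for the second contribution, the following is a generic sketch of a difference-coding quantizer with a variable center and an online-generated quantization interval; the scaling sequence $s_t$, the center $c_{i,t}$, and the $b$-bit uniform quantizer $Q_b$ are illustrative assumptions rather than the exact scheme of the thesis:
\[
q_{i,t} = Q_b\!\left( \frac{x_{i,t} - c_{i,t}}{s_t} \right), \qquad
\hat{x}_{i,t} = c_{i,t} + s_t\, q_{i,t}, \qquad
c_{i,t+1} = \hat{x}_{i,t}.
\]
Only the quantized difference $q_{i,t}$, which costs a finite number of bits per entry, is transmitted; the decoder on the receiving end reconstructs $\hat{x}_{i,t}$ from its own copies of $c_{i,t}$ and $s_t$, and shrinking $s_t$ online keeps the encoding and decoding errors under control as the nodes' states approach one another.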