
Distributed Cooperative Learning Over Peer-to-Peer Networks

Posted on: 2018-11-08    Degree: Doctor    Type: Dissertation
Country: China    Candidate: W Ai    Full Text: PDF
GTID: 1368330542492876    Subject: Applied Mathematics
Abstract/Summary:
Learning from large-scale datasets is a typical task in knowledge discovery and a fundamental data mining problem in the social, financial, medical, and industrial domains. Such datasets are being collected and accumulated at a dramatic pace, giving rise to a new line of research in large-scale machine learning. Most existing work in this area addresses how to obtain fast and efficient solutions through parallel processing, where the data are partitioned across processing units and all local results are sent back to a fusion center for a final decision. Parallel techniques, however, face significant challenges in some settings. First, the datasets may be distributed physically, geographically, or logically, with no centralized authority at all. Second, security and privacy concerns create great barriers to adopting such systems: the datasets often contain sensitive or personal information about companies or individuals, such as business financial records, unique device identities, and personal health records, which customers may be unwilling to share with a parallel system. It is therefore necessary to design learning algorithms that are fully distributed rather than parallel.

This dissertation is concerned with fully distributed cooperative learning over peer-to-peer networks: inferring functions when the learning task is spread across multiple separate units, each of which communicates only with its neighboring agents, without a fusion center. The main contributions are as follows.

First, we develop a new distributed learning algorithm for feedforward neural networks with random weights. We reformulate the centralized learning problem in a separable form with consensus constraints among the nodes and solve it with a zero-gradient-sum optimization scheme. There is no fusion center that collects and processes all the data, and the nodes require no global knowledge of the network. We prove that the algorithm converges to the centralized data-pooling solution. The algorithm is simple and frugal in both computation and communication, which suits applications such as wireless sensor networks, artificial intelligence, and computational biology, where datasets are often extremely large and spread over distributed sources.

Second, we extend this algorithm to the case in which communication is event-triggered. Unlike a time-triggered scheme, event-triggered communication is driven by a per-node trigger condition: a node exchanges information with its neighbors only when doing so is genuinely required, which is particularly useful when communication resources are limited. We prove exponential convergence of the proposed algorithm when the network topology is strongly connected and weight-balanced and the design parameter is properly chosen. (Both schemes are sketched below.)
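The abstract does not reproduce the update equations, so the following is only a minimal NumPy sketch of the flavor of the first contribution, under the simplifying assumption that each local cost is a quadratic (ridge-regression) fit of the network's output weights, so that every local Hessian is constant. The ring topology and the parameters `lam` and `alpha` are illustrative choices, not the dissertation's.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_hidden, n_feat = 4, 10, 3
W_in = rng.normal(size=(n_feat, n_hidden))  # random input weights, shared and fixed
b_in = rng.normal(size=n_hidden)            # random biases, shared and fixed
lam = 1.0                                   # ridge regularization (illustrative)
alpha = 0.1                                 # consensus step size (illustrative)

def hidden(X):
    """Random-feature map of a feedforward network with random weights."""
    return np.tanh(X @ W_in + b_in)

# Each node holds a private local dataset; only the output weights are
# trained, which makes every local cost quadratic.
data = []
for _ in range(n_nodes):
    X = rng.normal(size=(40, n_feat))
    y = np.sin(X.sum(axis=1))
    data.append((hidden(X), y))

# Zero-gradient-sum initialization: each node starts at the minimizer of
# its LOCAL cost, so the local gradients sum to zero, and the update rule
# below keeps that sum at zero.
H, w = [], []
for Phi, y in data:
    Hi = Phi.T @ Phi + lam * np.eye(n_hidden)  # constant local Hessian
    H.append(Hi)
    w.append(np.linalg.solve(Hi, Phi.T @ y))

# Ring topology: node i communicates only with its two neighbors.
neighbors = [((i - 1) % n_nodes, (i + 1) % n_nodes) for i in range(n_nodes)]

for _ in range(5000):
    # Hessian-weighted consensus step toward the neighbors' estimates.
    w = [
        w[i] + alpha * np.linalg.solve(H[i], sum(w[j] - w[i] for j in neighbors[i]))
        for i in range(n_nodes)
    ]

# The nodes agree on the centralized (data-pooling) ridge solution.
w_central = np.linalg.solve(sum(H), sum(Phi.T @ y for Phi, y in data))
print(np.linalg.norm(w[0] - w_central))
```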
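The trigger condition itself is not given in the abstract. A common event-triggered consensus rule, sketched below on scalar states, has each node broadcast only when its state has drifted from the last broadcast value by more than a threshold. With the fixed threshold `eps` assumed here, the states converge to a neighborhood of consensus rather than exactly, whereas the dissertation proves exponential convergence for a properly chosen design parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 6
x = rng.normal(size=n)   # local states (stand-ins for local model parameters)
x_hat = x.copy()         # the value each node last broadcast
eps = 1e-3               # trigger threshold (assumed fixed here)
alpha = 0.2              # consensus step size
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]

broadcasts = 0
for _ in range(400):
    # Event-triggered rule: a node broadcasts only when its state has
    # drifted far enough from the value its neighbors last received.
    for i in range(n):
        if abs(x[i] - x_hat[i]) > eps:
            x_hat[i] = x[i]
            broadcasts += 1
    # The consensus update uses only BROADCAST values, so between
    # triggering events no communication takes place at all.
    x = x + alpha * np.array(
        [sum(x_hat[j] - x_hat[i] for j in neighbors[i]) for i in range(n)]
    )

print(f"spread = {x.max() - x.min():.2e}, broadcasts = {broadcasts} of {400 * n}")
```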
Third, we consider adaptive neural-network output-feedback control for a group of uncertain multi-agent systems from the viewpoint of distributed cooperative learning. In this scheme, all systems share an identical unknown nonlinear dynamic model but carry out different periodic control tasks, i.e., each system has its own periodic reference trajectory. We propose a new consensus-based distributed cooperative learning law for the unknown weights of the radial basis function neural networks appearing in the output-feedback control laws. The main advantage of this scheme is that all estimated weights converge to a small neighborhood of their optimal values over the union of all systems' estimated state orbits, so the learned weights generalize better than those obtained by traditional neural-network learning laws; the approach also guarantees convergence of the tracking errors and stability of the closed-loop system. Under the assumption that the network topology is undirected and connected, a rigorous proof is given by verifying a cooperative persistent-excitation condition on the radial basis function regression vectors. (A pared-down sketch of such a learning law follows this summary.)

Finally, we present a population-based solution to the distributed optimization problem in which the overall objective is the average of local cost functions attached to the nodes of a network. Populations are introduced at the nodes so that they can cooperatively find the global optimum of the overall objective. The main challenge is that no population knows the whole objective function, so it cannot directly evaluate the quality of its individuals at each iteration. To overcome this difficulty, we present a general scheme consisting of consensus search, consensus evaluation, population evolution, and local stopping steps. Compared with purely analytical methods, this scheme solves in-network optimization problems without any convexity assumption on the objective functions, which contributes to the solution of nonconvex learning problems.
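The learning law of the third contribution is not spelled out in the abstract. Below is a minimal sketch, in a function-approximation setting rather than a full closed-loop control simulation, of one plausible consensus-based form: each agent fits an RBF network along its own periodic trajectory while a consensus term couples its weight estimate to its neighbors', so every agent ends up accurate over the union of all trajectories. The dynamics, gains `gamma` and `sigma`, and the line-graph topology are all assumptions.

```python
import numpy as np

def f(x):
    """Unknown nonlinearity shared by all agents (ground truth)."""
    return np.sin(2.0 * x) + 0.5 * x

# Gaussian RBF regressor with centers covering the union of trajectories.
centers = np.linspace(-3.0, 3.0, 25)
width = 0.4

def S(x):
    return np.exp(-((x - centers) ** 2) / (2.0 * width**2))

n_agents = 3
# Each agent follows its OWN periodic reference trajectory, so each one
# visits only part of the input space.
trajectories = [lambda t, c=c: c + np.sin(t) for c in (-2.0, 0.0, 2.0)]

W = [np.zeros_like(centers) for _ in range(n_agents)]
neighbors = [(1,), (0, 2), (1,)]   # line graph: 0 - 1 - 2
gamma, sigma, dt = 2.0, 1.0, 2e-3  # gains and step size (illustrative)

for step in range(50_000):
    t = step * dt
    W_new = []
    for i in range(n_agents):
        x = trajectories[i](t)
        e = W[i] @ S(x) - f(x)                        # local prediction error
        coupling = sum(W[i] - W[j] for j in neighbors[i])
        # Consensus-based cooperative learning law (illustrative form):
        # a gradient step on local data plus a consensus term that pulls
        # neighboring agents' weight estimates together.
        W_new.append(W[i] - dt * gamma * (S(x) * e + sigma * coupling))
    W = W_new

# Agent 0 never visits [1, 3], yet its weights also fit that region,
# because the consensus term propagates what agents 1 and 2 learned.
xs = np.linspace(-3.0, 3.0, 13)
print(max(abs(W[0] @ S(x) - f(x)) for x in xs))
```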
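For the final contribution, here is a toy version of the population-based scheme: ring-based average consensus supplies the consensus-evaluation step, a simple (mu + lambda) evolution strategy stands in for the unspecified metaheuristic, and the local-stopping step is omitted. The local costs are deliberately nonconvex, and every function, parameter, and the shared-seed trick that keeps the populations synchronized are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_nodes, pop_size, dim = 5, 12, 2

# Local costs (deliberately nonconvex); the global objective is their average.
shifts = rng.normal(scale=2.0, size=(n_nodes, dim))

def f_local(i, x):
    return np.sum((x - shifts[i]) ** 2) + np.sin(3.0 * x).sum()

def consensus(values, rounds=40):
    """Ring-based average consensus: each node mixes with its two neighbors."""
    v = np.asarray(values, dtype=float)
    for _ in range(rounds):
        v = 0.5 * v + 0.25 * np.roll(v, 1, axis=0) + 0.25 * np.roll(v, -1, axis=0)
    return v

# Consensus search: the nodes agree on a common initial population (here a
# shared random seed keeps the populations synchronized across nodes).
pop = rng.uniform(-4.0, 4.0, size=(pop_size, dim))

for _ in range(60):
    # Consensus evaluation: each node scores every candidate with its LOCAL
    # cost only; average consensus then approximates the GLOBAL fitness,
    # which no single node could compute on its own.
    local_scores = np.array([[f_local(i, x) for x in pop] for i in range(n_nodes)])
    global_scores = consensus(local_scores)[0]  # node 0's estimate

    # Population evolution: keep the best half and mutate it to refill.
    elite = pop[np.argsort(global_scores)[: pop_size // 2]]
    pop = np.vstack([elite, elite + rng.normal(scale=0.3, size=elite.shape)])

def global_cost(x):
    """Verification only: the true averaged objective."""
    return np.mean([f_local(i, x) for i in range(n_nodes)])

best = min(pop, key=global_cost)
print("best candidate:", best, "global cost:", round(global_cost(best), 3))
```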
Keywords/Search Tags: Distributed cooperative learning, distributed optimization, peer-to-peer network, event-triggered communication, metaheuristics