
Study On Cloud-Assisted Convolutional Neural Networks Renewal Framework For Embedded Devices

Posted on: 2019-05-30
Degree: Master
Type: Thesis
Country: China
Candidate: S M Li
Full Text: PDF
GTID: 2428330566977985
Subject: Computer Science and Technology
Abstract/Summary:
Recently, deep convolutional neural networks (DCNNs) have achieved state-of-the-art accuracy in image classification and recognition. A great deal of current research is devoted to pruning and quantizing DCNNs so that these time-consuming, computation-intensive networks can run directly on embedded mobile devices. Nevertheless, a problem remains: a trained model deployed on an embedded device is used only for prediction and cannot effectively handle unknown data in highly variable environments, which leads to low accuracy and a poor user experience. It is therefore crucial to continually retrain a better model on future unknown data. However, given the enormous computing cost, memory usage, and volume of training data involved, training DCNNs on embedded devices with limited hardware resources is impractical. A promising solution is to use the power of the cloud to assist embedded devices in continually training a deep neural network.

In this paper, we propose a cloud-assisted framework built on the concept of incremental learning to help embedded devices update the DCNN models deployed on them. We optimize the data-transmission problems in this framework from four aspects. First, to reduce data transmission when collecting new data, we propose a strategy called Distiller that selectively uploads only the data worth learning from. Second, to initialize the weights of the convolutional neural network for retraining in the cloud, we reuse the learned weights of the old model, in a manner similar to fine-tuning, which also allows fewer weights to be extracted in the update process. Third, to avoid the catastrophic forgetting problem of incremental learning in neural networks, we mix the new dataset with the old dataset to generate an incremental dataset. Fourth, to reduce data transmission when updating the old models deployed on devices, we develop an extraction strategy called Juicer that selects a small number of weights from the new model generated in the cloud to update the corresponding old weights on the device.

Experimental results show that the Distiller strategy reduces upload data transmission by 39.4% on a given dataset, since the filtered-out data contributes nothing to the performance improvement of the CNN model; this indicates that the strategy is feasible and effective. Furthermore, the proposed Juicer strategy reduces data transmission during the update process by more than 60%, as verified on multiple DCNNs and datasets, confirming that the Juicer strategy is also useful.
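The abstract does not state the rule Distiller uses to decide which data is "worth learning". A common criterion for this kind of selective upload is prediction uncertainty, so the following is only a minimal PyTorch sketch of that reading: the function name distiller_filter and the softmax-confidence threshold are assumptions, not the thesis's actual algorithm.

import torch
import torch.nn.functional as F

def distiller_filter(model, samples, confidence_threshold=0.9):
    # Hypothetical Distiller-style filter: keep only samples the
    # deployed model is uncertain about, so data the model already
    # handles confidently is never uploaded to the cloud.
    model.eval()
    to_upload = []
    with torch.no_grad():
        for x in samples:                      # x: one input tensor
            probs = F.softmax(model(x.unsqueeze(0)), dim=1)
            top_prob = probs.max().item()      # confidence of top class
            if top_prob < confidence_threshold:
                to_upload.append(x)            # uncertain -> worth learning
    return to_upload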
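On the cloud side, the abstract describes two choices: initializing the retrained network from the old model's weights (similar to fine-tuning) and training on a mixture of old and new data to limit catastrophic forgetting. Below is a minimal sketch of both in PyTorch; the function signature, hyperparameters, and dataset handles are assumed for illustration.

import torch
from torch.utils.data import ConcatDataset, DataLoader

def cloud_retrain(model, old_state_dict, old_dataset, new_dataset,
                  epochs=5, lr=1e-3):
    # Warm-start from the old model's weights (fine-tuning style).
    model.load_state_dict(old_state_dict)
    # Mix old and new data into an incremental dataset so retraining
    # does not catastrophically forget previously learned classes.
    mixed = ConcatDataset([old_dataset, new_dataset])
    loader = DataLoader(mixed, batch_size=64, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model.state_dict()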
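Likewise, the abstract does not specify how Juicer picks the "small number of weights" to send back. One plausible reading is that only tensors whose values changed substantially during cloud retraining are shipped as a patch; the sketch below illustrates that reading with a hypothetical relative-change threshold, and the patch format and threshold value are assumptions.

import torch

def juicer_extract(old_state, new_state, rel_threshold=0.05):
    # Hypothetical Juicer-style extraction: compare old and new
    # state_dicts and keep only tensors whose relative change exceeds
    # a threshold, so the device downloads a small patch rather than
    # the whole model. Casting to float handles integer buffers
    # (e.g., BatchNorm's num_batches_tracked).
    patch = {}
    for name, new_w in new_state.items():
        old_w = old_state[name].float()
        denom = old_w.norm().clamp_min(1e-12)  # avoid division by zero
        rel_change = (new_w.float() - old_w).norm() / denom
        if rel_change > rel_threshold:
            patch[name] = new_w                # only changed tensors travel
    return patch

def apply_patch(model, patch):
    # On-device side: overwrite only the shipped tensors;
    # strict=False leaves all other weights untouched.
    model.load_state_dict(patch, strict=False)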
Keywords/Search Tags: Cloud-assisted, convolutional neural networks, weights extraction, model update, bandwidth optimization