
Research And Implementation Of Wideband High Resolution Frequency Synthesizer

Posted on: 2020-06-08
Degree: Master
Type: Thesis
Country: China
Candidate: X Y Zhang
Full Text: PDF
GTID: 2428330596476527
Subject: Engineering
Abstract/Summary:
Deep neural networks have outperformed many traditional machine learning algorithms and even surpassed human-level performance in practical applications such as image classification and object detection. Obtaining better predictive performance usually means deepening the network, which in turn consumes enormous computing resources. This is hard to accept for resource-constrained platforms such as embedded devices, and the computational bottleneck greatly hinders the industrialization of artificial intelligence. DenseNet is a recently proposed neural network architecture that has achieved state-of-the-art results in many visual tasks. However, the dense connections between layers in its internal structure introduce considerable redundancy, so such a densely connected network incurs a high computational cost during deployment and inference.

To address this problem, a neural architecture search (NAS) method is used to automatically learn and design an optimal sub-network architecture under limited computing resources. Specifically, the DenseNet connection topology is defined as the search space. Starting from a pre-trained DenseNet model, an A2C (advantage actor-critic) reinforcement learning framework is designed as the search strategy: a long short-term memory network (LSTM) represents the policy network, and a multi-layer perceptron (MLP) represents the value network. The output of the policy network is a high-dimensional Bernoulli distribution; sampling from this distribution establishes a new internal connection pattern within the original DenseNet structure, thereby yielding a new sub-network topology. For each learned sub-network, a performance evaluation strategy is developed that jointly considers the model's floating-point operation count and its prediction accuracy on the classification task as the reward fed back to the reinforcement learning agent. Updating the policy network parameters with the policy gradient is equivalent to the agent estimating the importance of each connection between any two layers and, based on that importance, deciding whether to prune or retain the connection.

This layer-wise pruning (LWP) method can be applied to different tasks to search for efficient DenseNet architectures while preserving the original DenseNet advantages, such as feature reuse and short paths. In addition, a novel reward shaping technique is introduced that enables the pruned DenseNet to achieve a better balance between accuracy and floating-point operations (FLOPs) while accelerating the policy search.

Extensive experiments on the CIFAR and ImageNet datasets are reported in this thesis. Compared with other existing methods, the LWP-searched DenseNet is more compact and efficient. The thesis also studies the hyperparameters of the framework and identifies their specific influence on the model's training results. The sub-network learned from the pre-trained DenseNet-40-12 model on CIFAR-10 is visualized, so the new connection pattern of each layer can be seen clearly, and the weights of the connections in each layer of the sub-network are analyzed. The weight values are found to be comparable, which further suggests that the LWP-based reinforcement learning agent can discriminate the importance of connections well, retaining useful connections and discarding redundant ones.
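One iteration of the search loop described above can be sketched as follows. This is a hypothetical illustration, not the thesis code: the LSTM policy network is stood in for by a fixed probability vector, and the multiplicative accuracy/FLOPs trade-off is an assumed form of the reward shaping, since the abstract does not give the exact formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def candidate_connections(num_layers):
    """All dense connections (i, j) with i < j inside one DenseNet block;
    this set of pairwise connections is the search space."""
    return [(i, j) for j in range(1, num_layers) for i in range(j)]

def sample_topology(keep_probs, rng):
    """Sample a binary keep/prune mask from per-connection Bernoulli
    probabilities, yielding a new sub-network topology."""
    return (rng.random(len(keep_probs)) < keep_probs).astype(int)

def shaped_reward(accuracy, flops, baseline_flops, alpha=0.5):
    """Assumed reward shaping: higher accuracy and fewer FLOPs (relative
    to the unpruned baseline) both increase the reward."""
    return accuracy * (baseline_flops / flops) ** alpha

conns = candidate_connections(4)            # 6 candidate connections
keep_probs = np.full(len(conns), 0.8)       # placeholder for policy output
mask = sample_topology(keep_probs, rng)
kept = [c for c, m in zip(conns, mask) if m]

# The reward favors a sub-network that halves FLOPs at a small accuracy cost:
r_full = shaped_reward(0.94, 1.0e9, 1.0e9)
r_pruned = shaped_reward(0.93, 0.5e9, 1.0e9)
print(kept, r_full, r_pruned)
```

In the full framework, a reward like `r_pruned` would be fed back to the A2C agent, and the policy-gradient update would raise the keep probabilities of connections that appear in high-reward masks.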
Keywords/Search Tags:DenseNet, Reinforcement learning, Neural architecture search, LSTM, Neural compression