
Regularization Methods To Solve The Sparsification Problem Of Neural Networks

Posted on: 2016-10-02
Degree: Master
Type: Thesis
Country: China
Candidate: J D Yu
GTID: 2308330461978186
Subject: Computational Mathematics

Abstract/Summary:
Some of the input information fed to a neural network is useless; it is called redundant information. When redundant information is present in the input, it is important to identify it, and this identification task is called the sparsification problem of neural networks. Once the redundant information has been found, it can be removed.

In this thesis, regularization methods are proposed to solve the sparsification problem of neural networks. The methods considered include L0 regularization, L1 regularization, L2 regularization, and L1/2 regularization. Because the L0-regularized problem is very difficult to solve and L2 regularization does not produce sparsity, we use L1 regularization and L1/2 regularization. A gradient-descent method is used to solve the resulting optimization problem.

Numerical experiments are conducted on an approximation problem and a classification problem. The numerical results show the efficiency of the algorithms.
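The sketch below is only an illustration of the general idea described above, not the thesis' exact algorithm: a one-hidden-layer network with a sigmoid activation is trained by plain gradient descent on a squared-error loss plus an L1 or L1/2 penalty on the input-to-hidden weights, so that weights attached to redundant inputs are driven toward zero. The network architecture, the subgradient handling of the penalty at zero, and all names and hyperparameters (`penalty_grad`, `train`, `lam`, `lr`, `epochs`) are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def penalty_grad(W, kind, eps=1e-4):
    """Subgradient of the sparsity penalty (eps avoids division by zero)."""
    if kind == "l1":      # d/dw |w|        -> sign(w)
        return np.sign(W)
    if kind == "l1/2":    # d/dw |w|^(1/2)  -> sign(w) / (2 sqrt(|w|))
        return np.sign(W) / (2.0 * np.sqrt(np.abs(W)) + eps)
    raise ValueError(kind)

def train(X, y, hidden=8, lam=1e-3, lr=0.1, epochs=5000, kind="l1/2"):
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(hidden, 1))   # hidden -> output weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)                        # forward pass
        out = H @ W2
        err = out - y                              # squared-error residual
        # back-propagated gradients of the data-fit term
        gW2 = H.T @ err / n
        gH = (err @ W2.T) * H * (1.0 - H)
        gW1 = X.T @ gH / n
        # gradient-descent step; the sparsity penalty acts on W1 only
        W1 -= lr * (gW1 + lam * penalty_grad(W1, kind))
        W2 -= lr * gW2
    return W1, W2

# Toy usage: the target depends only on the first two inputs, so inputs
# 3 and 4 are redundant; their rows of W1 should shrink toward zero.
X = rng.normal(size=(200, 4))
y = np.sin(X[:, [0]]) + 0.5 * X[:, [1]]
W1, _ = train(X, y)
print(np.round(np.abs(W1).sum(axis=1), 3))  # per-input total weight magnitude
```

Inspecting the per-input weight magnitudes after training is one simple way to decide which inputs are redundant and can be discarded.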
Keywords/Search Tags:Neural network, L1 regularization, L1/2 regularization, Sparsification