Fixed-point iteration is a popular method for the multilinear PageRank problem. Recently, Meini et al. established the convergence of this algorithm. In this work, we first show through an example that their theory is incomplete, and then revisit the convergence of this algorithm and extend the relevant theory.

Higher-order Markov chains play an important role in analyzing a variety of stochastic processes over time in multi-dimensional spaces. They have many important practical applications, one of which is the higher-order PageRank problem. This problem has been studied extensively, and the recently proposed truncated power method provides a new way to solve it. However, for large-scale problems, this algorithm still suffers from low computational efficiency and accuracy. In this work, we reconsider how to solve this problem efficiently. First, we propose a truncated power method with partial updating to reduce the overhead, in which only some important columns of the approximation need to be updated in each iteration. However, the truncated power method solves a modified higher-order PageRank problem, which is not mathematically equivalent to the original one. Thus, the second contribution of this part is a truncated power method with partial updating for the original higher-order PageRank problem. We discuss the convergence of all the proposed methods. Numerical results on both real-world and synthetic data sets show that the new algorithms outperform some state-of-the-art ones for large and sparse higher-order PageRank problems.

Deep learning has attracted increasing attention in recent years. Stochastic gradient descent (SGD) is a very popular optimization method for training deep neural networks (DNNs). There is a large body of work on speeding up the convergence of SGD. Recently, Wang et al. proposed a scheduled restart stochastic gradient descent (SRSGD) method. In this part, we propose a modified SRSGD method (MSRSGD) to enhance the performance of SRSGD. The main idea is to further reduce the value of the momentum coefficient by introducing a parameter into SRSGD. The convergence of the proposed method is established, and how to choose the parameter in practice is also discussed. Numerical experiments show that MSRSGD achieves lower training loss and better generalization than SRSGD under comparable computational cost.
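For reference, a minimal sketch of the commonly studied third-order formulation of the multilinear PageRank problem and the associated fixed-point iteration is given below; the notation (R, v, alpha) is assumed here and may differ from that used in the main text.

```latex
% Multilinear PageRank, third-order case: find a stochastic vector x with
%   x = \alpha R (x \otimes x) + (1 - \alpha) v,
% where R is an n x n^2 column-stochastic matrix (flattened transition tensor),
% v is a stochastic vector, and 0 < \alpha < 1. The fixed-point iteration reads
\[
  x_{k+1} \;=\; \alpha\, R\,(x_k \otimes x_k) \;+\; (1-\alpha)\, v,
  \qquad k = 0, 1, 2, \dots
\]
```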
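The partial-updating idea of the second part, refreshing only the most important columns of the approximation in each sweep, can be illustrated generically as follows. This is only a conceptual sketch under assumed interfaces: the column update step_col and the residual-based scoring rule are hypothetical placeholders, not the proposed truncated power method.

```python
# Generic illustration of partial updating: in each sweep, only the columns of
# the current approximation X whose most recent change was largest are refreshed,
# instead of recomputing every column. All names here are hypothetical.
import numpy as np

def partially_updated_sweep(step_col, X, n_update, n_iters=50):
    """Iterate a column-wise update step_col(X, j), touching only n_update columns per sweep."""
    X = np.asarray(X, dtype=float).copy()
    n_cols = X.shape[1]
    change = np.full(n_cols, np.inf)          # per-column change; start with every column "important"
    for _ in range(n_iters):
        important = np.argsort(change)[-n_update:]   # columns that changed most recently
        for j in important:
            new_col = step_col(X, j)                  # refresh only the selected column
            change[j] = np.linalg.norm(new_col - X[:, j])
            X[:, j] = new_col
    return X
```

A practical method would also re-score columns that have not been touched for a while; the sketch keeps only the core idea of skipping most columns in each iteration.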
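To make the momentum modification of the third part concrete, the following is a minimal sketch of one way a scheduled-restart SGD step with a reduced momentum coefficient could look. The restart schedule and the Nesterov-style coefficient follow the usual SRSGD recipe; the scaling factor gamma stands in for the abstract's extra parameter and is an assumption, not the authors' exact formulation.

```python
# Hedged sketch of scheduled-restart SGD with a shrunken momentum coefficient.
# `gamma` (assumed) multiplies the usual Nesterov-style coefficient; gamma = 1
# would recover the plain scheduled-restart scheme.
import numpy as np

def msrsgd(grad, x0, lr=0.1, restart_every=40, gamma=0.7, n_iters=200):
    """Minimise a function given its gradient `grad`, starting from `x0`."""
    x = np.asarray(x0, dtype=float).copy()
    x_prev = x.copy()
    for k in range(n_iters):
        i = k % restart_every               # iterations since the last restart
        mu = gamma * i / (i + 3.0)          # reduced Nesterov-style momentum coefficient
        y = x + mu * (x - x_prev)           # momentum (look-ahead) step
        x_prev = x
        x = y - lr * grad(y)                # gradient step at the look-ahead point
    return x

# Example: minimise the quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
x_min = msrsgd(lambda x: x, x0=np.ones(5))
```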