As a foundation of artificial intelligence, machine learning has achieved great improvements in recognition accuracy and speed and has therefore received extensive attention. As an important branch of machine learning, multi-task learning (MTL) learns multiple related tasks simultaneously, exploiting the commonalities and differences among tasks to enhance the prediction accuracy and generalization performance of the models. However, multi-task learning suffers from the difficulty of balancing multiple tasks, and multi-task deep network models often have millions of parameters. As the dimension and the number of tasks increase, traditional optimization algorithms exhibit low efficiency, high complexity, and poor results. The multi-gradient descent algorithm (MGDA) has a simple framework, fast convergence speed, and high convergence accuracy, giving it unique advantages for parameter optimization of deep network models. This thesis studies multi-task learning methods based on multi-gradient descent. The main contributions are summarized as follows:

(1) Since the multi-gradient descent algorithm ignores task uncertainty when balancing the relationships among different tasks, a multi-task learning method based on multi-gradient descent and homoscedastic uncertainty (HU-MGDA) is proposed. A Gaussian likelihood estimation based on homoscedastic uncertainty is introduced; by capturing the correlation confidence among tasks, it reflects the inherent uncertainty among tasks. A multi-gradient descent algorithm based on the Frank-Wolfe algorithm and gradient normalization is proposed to solve multi-task learning problems with high-dimensional parameters. Experimental results show that the proposed method achieves a good trade-off among tasks and improves the performance of all tasks.

(2) Since current multi-task learning methods struggle to obtain a set of uniformly distributed Pareto-optimal solutions, a multi-task learning method based on hybrid multi-gradient descent and preference vectors (PV-HMGDA) is proposed. The method formulates multi-task learning as a multi-objective optimization problem and decomposes it into a set of constrained subproblems with different trade-off preferences. A hybrid multi-gradient descent algorithm is then proposed to solve these subproblems efficiently in parallel. On this basis, an efficient multi-task learning strategy based on large-scale optimization is introduced. Experiments show that the proposed method generates well-distributed representative solutions and outperforms state-of-the-art algorithms.

(3) In the traditional froth flotation process, two independent models are learned for judging abnormal working conditions and for predicting mineral grade, leading to low recognition accuracy and slow recognition speed. This thesis therefore proposes a working condition recognition method based on multi-task learning that simultaneously determines abnormal conditions and predicts mineral grade. First, the key features of froth images are extracted, and a feature optimization algorithm based on the binary state transition algorithm is introduced to reduce feature redundancy. Then, a multi-task learning-based recognition model for the froth flotation process is established, and the proposed HU-MGDA and PV-HMGDA are adopted to solve it. Experiments on antimony flotation conditions verify the effectiveness of the method. Finally, a multi-task learning-based working condition recognition system for the froth flotation process is developed; it provides information that helps maintain stable production conditions and reduce resource consumption.
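The homoscedastic-uncertainty weighting underlying contribution (1) can be sketched as follows. The abstract does not give the thesis's exact formulation, so this is a minimal sketch of the commonly used Gaussian-likelihood weighting, in which each task loss is scaled by a learnable per-task precision and a log-variance term regularizes the weights; the function name and parameterization are illustrative assumptions.

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with learnable homoscedastic
    uncertainty terms (one log-variance per task).

    Each loss L_i is scaled by 1 / (2 * sigma_i^2); the log-variance
    term keeps the variances from growing unboundedly.
    """
    total = 0.0
    for loss, log_var in zip(task_losses, log_vars):
        precision = math.exp(-log_var)           # 1 / sigma_i^2
        total += 0.5 * precision * loss + 0.5 * log_var
    return total

# A task assigned higher uncertainty contributes a down-weighted loss term:
equal = uncertainty_weighted_loss([1.0, 1.0], [0.0, 0.0])
skewed = uncertainty_weighted_loss([1.0, 1.0], [0.0, 2.0])
```

In training, the log-variances would be optimized jointly with the network parameters, so the model itself learns how much each task's loss should count.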
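The core MGDA step referenced throughout the contributions searches for a convex combination of task gradients with minimum norm; its negation is a common descent direction for all tasks. For two tasks this min-norm subproblem, which the Frank-Wolfe solver addresses in the general case, has a closed form. This is a sketch of that standard two-task case, not the thesis's full algorithm; the function name is illustrative.

```python
import numpy as np

def mgda_two_task_direction(g1, g2):
    """Min-norm point in the convex hull of two task gradients.

    Solves min_a || a*g1 + (1-a)*g2 ||^2 over a in [0, 1]; the result
    has non-negative inner product with both gradients, so stepping
    against it does not increase either task's loss (to first order).
    """
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:                      # identical gradients
        return g1.copy()
    a = np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0)
    return a * g1 + (1.0 - a) * g2

g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
d = mgda_two_task_direction(g1, g2)       # orthogonal gradients -> midpoint
```

With more than two tasks, the same subproblem is solved iteratively (e.g., by Frank-Wolfe), and gradient normalization, as mentioned in contribution (1), is typically applied first so that tasks with larger loss scales do not dominate the combination.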
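The preference-vector decomposition in contribution (2) partitions objective space so that each subproblem targets a different trade-off region of the Pareto front. A minimal sketch of the assignment step, assuming preferences are unit vectors in loss space and a loss vector is assigned to the preference it is most aligned with (the constraint handling in the thesis is not shown here):

```python
import numpy as np

def nearest_preference(losses, preference_vectors):
    """Assign a loss vector to the most aligned preference vector.

    Each row of preference_vectors encodes one trade-off direction;
    alignment is measured by cosine similarity with the loss vector.
    """
    unit = losses / np.linalg.norm(losses)
    similarities = preference_vectors @ unit
    return int(np.argmax(similarities))

# Three preferences spanning the two-task trade-off:
prefs = np.array([[1.0, 0.0], [0.7071, 0.7071], [0.0, 1.0]])
region = nearest_preference(np.array([3.0, 1.0]), prefs)
```

Running one constrained multi-gradient descent solver per preference vector, in parallel, then yields a set of solutions spread across the front rather than a single Pareto-optimal point.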