
Research On Generalization Performance Of Multi-Task Learning Based On Multi-Gaussian Kernels

Posted on: 2021-07-06  Degree: Master  Type: Thesis
Country: China  Candidate: Y Sun  Full Text: PDF
GTID: 2518306539456644  Subject: Computational Mathematics
Abstract/Summary:
Multi-task learning is a promising field of machine learning that aims to exploit the inner relationships among multiple related tasks to improve generalization performance. There are two main approaches: feature-based multi-task learning and parameter-based multi-task learning. Multi-task learning with multi-Gaussian kernels (MGK-MTL) is a parameter-based method with good experimental results. MGK-MTL assumes that the optimal prediction functions of the target task and of the related task lie in Reproducing Kernel Hilbert Spaces (RKHSs) with the same but unknown Gaussian kernel width. The samples of the related task are first used to select the Gaussian kernel width as a shared parameter, and the target task then selects its optimal prediction function in the RKHS with the chosen width. Existing analyses of this method assume that the samples are independent and identically distributed (i.i.d.). However, the i.i.d. assumption is very strict and can rarely be verified in practical problems. Therefore, taking exponentially strongly mixing sequences, ?-mixing sequences, and uniformly ergodic Markov chains as examples, this paper studies the excess generalization error of MGK-MTL with non-i.i.d. samples. We also compare the convergence rates of multi-Gaussian-kernel single-task learning and of MGK-MTL with non-i.i.d. samples, and compare the convergence rates of MGK-MTL across these dependence settings.
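
To make the shared-width mechanism concrete, the following Python sketch (not part of the thesis; the AR(1) sampler, the use of kernel ridge regression, the cross-validation width selection, and all numeric values are illustrative assumptions) first estimates a Gaussian kernel width from a data-rich related task and then learns the target task's predictor in the RKHS fixed by that width. The AR(1) inputs stand in for a mixing, hence non-i.i.d., sample of the kind the analysis considers.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

def ar1_inputs(n, rho=0.5):
    # AR(1) chain: geometrically mixing, so a simple stand-in for a
    # dependent (non-i.i.d.) input sample.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x.reshape(-1, 1)

# Related task: many samples, used only to pick the shared Gaussian width.
X_rel = ar1_inputs(500)
y_rel = np.sin(X_rel).ravel() + 0.1 * rng.standard_normal(500)

# Choose the RBF width gamma by cross-validation on the related task.
# (Plain k-fold CV on dependent data is itself an illustrative shortcut.)
search = GridSearchCV(
    KernelRidge(kernel="rbf", alpha=1e-2),
    {"gamma": np.logspace(-2, 2, 20)},
    cv=5,
)
search.fit(X_rel, y_rel)
shared_gamma = search.best_params_["gamma"]

# Target task: few samples of a related (shifted) function; learn its
# predictor in the RKHS fixed by the width estimated above.
X_tgt = ar1_inputs(50)
y_tgt = np.sin(X_tgt + 0.3).ravel() + 0.1 * rng.standard_normal(50)
f_target = KernelRidge(kernel="rbf", gamma=shared_gamma, alpha=1e-2)
f_target.fit(X_tgt, y_tgt)
print("shared gamma:", shared_gamma)
print("target prediction at x=0:", f_target.predict([[0.0]]))

Kernel ridge regression is used here only as a convenient RKHS learner; the point of the sketch is the division of labor, with the related task fixing the shared kernel width and the target task learning inside the resulting RKHS.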
Keywords/Search Tags: Learning performance, Multi-task learning, Multi-Gaussian kernel