Traditional supervised learning requires data with accurate and complete labeling information. Although supervised learning has achieved good results in many applications, accurately labeling every instance is generally difficult and expensive. Unsupervised learning trains on unlabeled data and requires no strong supervision information; however, with labeling guidance entirely absent, the learning process is complicated and its effects are hard to quantify. Weakly supervised learning builds a predictive model from less precisely labeled data, which is widely available on the Web, easier to obtain, and more realistic; the weakly supervised framework has therefore attracted extensive attention. Partial label learning is an important branch of the weakly supervised learning framework and has become an active research direction in recent years. Each instance in a partially labeled data set is associated with a set of candidate labels, only one of which is the ground-truth label, and the task of partial label learning is to learn a multi-class classifier from such a data set. The difficulties of partial label learning lie mainly in three aspects. First, the true label is concealed among the candidate labels and cannot be directly obtained and used by the learning algorithm. Second, the feature space of the data is usually noisy, which adversely affects the model. Third, the relationship between the feature space and the label space of the samples is difficult to exploit fully. Based on this analysis, this paper conducts in-depth research on partial label learning and proposes the following two innovations.

To reveal the inherent manifold structure of the data more accurately, this paper first proposes a prior-knowledge-constrained adaptive graph framework for partial label learning. The algorithm exploits an adaptive graph fused with prior label information to construct more robust instance relationships and thereby achieves better label disambiguation. First, the Jaccard similarity coefficient over the label space is introduced to filter unreliable examples from the k-nearest neighbors, exploiting the fact that two instances must belong to different classes if they share no candidate label. Second, the adaptive graph model incorporating this prior label knowledge reveals the internal structure of the instances more robustly by optimizing the similarity matrix and the labeling confidence matrix simultaneously. Third, since only one candidate label of each instance is correct, a discrimination term enforces the mutually exclusive relationship among the candidate labels and widens the gap between highly probable and unlikely labels. Finally, comparative experiments on various data sets show that the proposed algorithm has advantages over existing methods.

To eliminate the adverse effects of feature noise and redundant label information in the data, this paper then proposes a partial label learning algorithm based on noisy side information, which improves disambiguation by accounting simultaneously for feature noise and for the contributions of the other candidate labels. First, a low-rank matrix recovery model is introduced to remove noise from the corrupted observations: the original feature matrix is decomposed, through its own linear reconstruction, into a low-rank ideal feature matrix and a sparse noise matrix, which reduces the influence of feature noise on the algorithm. Second, the algorithm introduces a labeling confidence matrix and leverages the latent label distribution to weight the different contributions of the other candidate labels. Since examples close to each other in feature space tend to share the same label, the manifold assumption is used to enforce consistency between the ideal feature matrix and the labeling confidence matrix, eliminating redundant information in the label space and identifying the ground-truth label. Finally, comparative experiments on various data sets show that the proposed algorithm achieves a good disambiguation effect.