As an important real-world application of computer vision, person re-identification (Re-ID) has received growing attention with the increasing demand for intelligent surveillance. Supervised person Re-ID can now reach recognition accuracy beyond human performance, but two serious issues still hamper its practical deployment. First, models trained only on RGB images are not robust enough to produce reliable results in low-light conditions, which greatly limits the application range of person Re-ID. Second, large-scale model training requires massive amounts of data and labels, which greatly increases labour costs; moreover, models obtained purely by supervised learning cannot adapt to unseen pedestrian data, so studying person Re-ID without label information is particularly important. Corresponding to these two issues, this thesis studies cross-modality person Re-ID and unsupervised person Re-ID, respectively.

Unsupervised person Re-ID has attracted increasing attention in recent years because it requires no manual labelling. Clustering followed by fine-tuning is currently the mainstream approach for both unsupervised cross-domain and purely unsupervised person Re-ID. However, such methods discard the valuable information carried by outliers, introducing irreparable errors into training, and simply applying a cross-entropy loss with pseudo labels ignores the importance of inter-class and intra-class distances. This thesis proposes improvements for these two shortcomings, in unsupervised cross-domain person Re-ID and in purely unsupervised person Re-ID respectively. For cross-domain person Re-ID, this thesis replaces the contrastive loss with an ArcFace loss to enlarge inter-class distances and composes features linearly to increase the density of each class; applying these two improvements to a recent mainstream baseline demonstrates their effectiveness and high accuracy. For purely unsupervised person Re-ID, this thesis proposes a baseline method based on a dynamic memory and contrastive learning: the features of all sample instances are stored in the memory, a contrastive loss is computed between the current batch features and the memory features, and the memory is updated dynamically, ensuring that every sample contributes to training. Extensive experiments show that this baseline achieves leading results on all mainstream datasets. Building on the characteristics of this baseline, the thesis then proposes a contrastive loss based on hard sample mining, applied simultaneously to the current batch features and to all memory features, together with a mean teacher model based on a probability distillation algorithm that constrains the training process and prevents the information loss caused by rapid changes of the model parameters. With these two improvements, the proposed method achieves state-of-the-art results on both unsupervised and unsupervised cross-domain person Re-ID.
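To make the memory-based baseline and the hard-sample-mining loss more concrete, the following sketch shows one plausible realisation in PyTorch: a memory bank holds one L2-normalised feature per training instance, an InfoNCE-style loss compares each batch feature against its hardest positive and hardest negatives in the memory, and the memory is refreshed with a momentum rule. All names and hyper-parameters (momentum, temperature, number of mined negatives) as well as the exact mining rule are illustrative assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch (not the thesis's code): an instance-level memory bank with
# a hard-mined contrastive loss and momentum updates, assuming L2-normalised
# features and pseudo-labels produced by an external clustering step.
import torch
import torch.nn.functional as F


class InstanceMemory:
    def __init__(self, num_instances, feat_dim, momentum=0.2, temperature=0.05):
        self.feats = F.normalize(torch.randn(num_instances, feat_dim), dim=1)
        self.momentum = momentum        # assumed memory update rate
        self.temperature = temperature  # assumed softmax temperature

    def contrastive_loss(self, batch_feats, batch_labels, mem_labels):
        """Hard-mined contrastive loss between the batch and the whole memory."""
        batch_feats = F.normalize(batch_feats, dim=1)
        sim = batch_feats @ self.feats.t() / self.temperature  # (B, N) similarities

        losses = []
        for i, label in enumerate(batch_labels):
            pos_mask = mem_labels == label
            neg_mask = ~pos_mask
            # Hard positive: the least similar memory instance with the same pseudo-label.
            hard_pos = sim[i][pos_mask].min()
            # Hard negatives: the most similar memory instances with other pseudo-labels.
            k = min(50, int(neg_mask.sum()))
            hard_negs = sim[i][neg_mask].topk(k).values
            # InfoNCE-style objective: the hard positive should dominate the negatives.
            logits = torch.cat([hard_pos.view(1), hard_negs]).view(1, -1)
            losses.append(F.cross_entropy(logits, torch.zeros(1, dtype=torch.long)))
        return torch.stack(losses).mean()

    @torch.no_grad()
    def update(self, batch_feats, batch_idx):
        """Momentum update of the stored instance features, then re-normalise."""
        batch_feats = F.normalize(batch_feats, dim=1)
        self.feats[batch_idx] = (self.momentum * self.feats[batch_idx]
                                 + (1.0 - self.momentum) * batch_feats)
        self.feats[batch_idx] = F.normalize(self.feats[batch_idx], dim=1)
```

In practice the pseudo labels held in mem_labels would be regenerated by a clustering step (for example at the start of every epoch), and update would be called on each batch after the loss has been computed.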
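The mean-teacher constraint mentioned above can be summarised in a few lines: the teacher is an exponential moving average (EMA) of the student's parameters, and a distillation term keeps the student's predicted probability distribution close to the teacher's, which damps rapid parameter changes. The sketch below is a generic mean-teacher and KL-distillation recipe with assumed hyper-parameters, not the thesis's exact probability distillation algorithm.

```python
# Illustrative mean-teacher sketch (assumed hyper-parameters): the teacher tracks
# an EMA of the student, and a KL term distils the teacher's softened probability
# distribution into the student to damp rapid parameter changes.
import copy
import torch
import torch.nn.functional as F


def make_teacher(student: torch.nn.Module) -> torch.nn.Module:
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)   # the teacher is never trained directly
    return teacher


@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """teacher <- decay * teacher + (1 - decay) * student, parameter by parameter."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)


def distillation_loss(student_logits, teacher_logits, tau=4.0):
    """KL divergence between the softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    p_teacher = F.softmax(teacher_logits / tau, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau * tau
```

A typical training step would add this distillation term to the contrastive objective and call ema_update after each optimiser step.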
Since infrared images can effectively compensate for the limitations of RGB images in low-light conditions, cross-modality person Re-ID is of great practical significance for building all-day intelligent pedestrian surveillance systems. With images of the two modalities available at the same time, the key to cross-modality person Re-ID is to remove the feature differences between modalities while retaining only the shared identity features. Based on this observation, a new cross-modality person Re-ID method using a hybrid two-stream network is proposed. It analyses the effect of the dual-stream design on cross-modality person Re-ID by varying which layers share modality parameters and which do not. A cross-entropy loss constrains the identity features, while the consistency of the intra-class distributions and the inter-class correlation coefficient constrain the features from the two modalities. Extensive experiments show that the proposed method achieves leading recognition results on the two main cross-modality person Re-ID benchmark datasets.
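As an illustration of what a hybrid two-stream network can look like, the sketch below gives each modality its own shallow, modality-specific stage and shares the parameters of the deeper stages, with a single identity classifier trained by cross-entropy. The backbone, split point and layer names are assumptions made for this example; the additional constraints used in the thesis, namely intra-class distribution consistency and the inter-class correlation coefficient, are not shown.

```python
# Illustrative two-stream sketch (assumed architecture): modality-specific shallow
# stages for RGB and infrared images, a shared deeper trunk, and an identity
# classifier trained with cross-entropy. Built on torchvision's ResNet-50.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class HybridTwoStreamNet(nn.Module):
    def __init__(self, num_identities, feat_dim=2048):
        super().__init__()
        rgb, ir = resnet50(weights=None), resnet50(weights=None)
        # Unshared, modality-specific shallow stage (stem + layer1).
        self.rgb_stem = nn.Sequential(rgb.conv1, rgb.bn1, rgb.relu, rgb.maxpool, rgb.layer1)
        self.ir_stem = nn.Sequential(ir.conv1, ir.bn1, ir.relu, ir.maxpool, ir.layer1)
        # Shared deeper stages: both modalities pass through the same parameters here.
        self.shared = nn.Sequential(rgb.layer2, rgb.layer3, rgb.layer4)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim, num_identities)

    def forward(self, images, modality):
        """modality: 'rgb' or 'ir'; returns (identity logits, pooled features)."""
        stem = self.rgb_stem if modality == "rgb" else self.ir_stem
        x = self.shared(stem(images))
        feats = self.pool(x).flatten(1)
        return self.classifier(feats), feats


# Example identity objective on a mixed batch (cross-entropy only):
#   logits_rgb, f_rgb = model(rgb_batch, "rgb")
#   logits_ir,  f_ir  = model(ir_batch, "ir")
#   loss = F.cross_entropy(logits_rgb, labels_rgb) + F.cross_entropy(logits_ir, labels_ir)
```

Moving the split point (how many shallow layers remain modality-specific) is exactly the kind of shared-versus-unshared ablation the abstract refers to.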