
Progressive Training Enabled Fine-grained Recognition

Posted on: 2024-07-19
Degree: Master
Type: Thesis
Country: China
Candidate: F Wu
Full Text: PDF
GTID: 2558307136497294
Subject: Electronic information
Abstract/Summary:
Fine-grained recognition aims to classify subcategories within a broader category and is an important research branch of computer vision. Related research has broad application prospects in fields such as retail, transportation, medical imaging, and animal and plant protection. Progressive sample strategies have been shown to accelerate model convergence and significantly improve recognition accuracy. However, mainstream fine-grained recognition research focuses mainly on network architecture design, and there has been no in-depth study of whether progressive learning strategies can offer an advantage in fine-grained recognition tasks.

Studying progressive learning strategies for fine-grained recognition raises two challenges: 1) Owing to unavoidable factors such as lighting, occlusion, and scale changes, the variances of the different subcategories have a distinctive character: the intra-class variance is large while the inter-class variance is small. This property makes it difficult to apply traditional progressive learning strategies directly to fine-grained recognition. 2) The design of a progressive learning strategy comprises a difficulty measure and a training scheduler; how to coordinate the two so that training remains flexible and smooth is also a challenge. In response to these two challenges, the specific contributions of this thesis are as follows.

(1) To address the challenge of large intra-class and small inter-class differences among fine-grained sample subsets, a sample-subset combination ordering method based on submodular optimization is proposed. The novelty of the ordering method lies in breaking with the traditional curriculum-learning practice of assessing difficulty sample by sample: supported by the theory of submodular optimization for subset selection, combinations of multiple class subsets are used as the basic unit for determining difficulty. By exploiting submodularity, the subset-combination optimization avoids getting trapped in poor local optima. Building on the resulting ordering of sample subsets, a progressive learning strategy is further designed that dynamically adjusts the ratio of difficult to ordinary subsets according to recognition performance, ensuring that the fine-grained recognition network achieves stable performance gains.
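The abstract does not give the concrete submodular objective or scheduler used in the thesis, so the following is only a minimal Python sketch of the general idea: class-subset combinations are ordered by a greedy procedure on a submodular (diminishing-returns) objective, here a facility-location function over hypothetical class-prototype features, and a simple rule adjusts the hard-to-ordinary subset ratio from validation accuracy. All function names and parameters are illustrative assumptions, not the thesis's actual method.

```python
# Illustrative sketch only: the abstract does not specify the thesis's exact
# submodular objective or scheduler. A facility-location function over
# hypothetical class-prototype features stands in for the difficulty measure.
import numpy as np

def facility_location_gain(selected_feats, candidate_feats, all_feats):
    """Marginal gain of adding one class subset under a facility-location objective."""
    best_new = (all_feats @ candidate_feats.T).max(axis=1)    # coverage if the candidate is added
    if selected_feats is None:
        best_old = np.zeros(len(all_feats))
    else:
        best_old = (all_feats @ selected_feats.T).max(axis=1) # coverage already achieved
    return np.maximum(best_new - best_old, 0.0).sum()

def order_class_subsets(class_feats):
    """Greedily order class subsets; earlier subsets are the most representative ('easier')."""
    remaining = dict(class_feats)                             # class_id -> (n_i, d) feature array
    all_feats = np.vstack(list(class_feats.values()))
    selected, order = [], []
    while remaining:
        sel = np.vstack(selected) if selected else None
        gains = {c: facility_location_gain(sel, f, all_feats) for c, f in remaining.items()}
        best = max(gains, key=gains.get)                      # standard greedy step
        order.append(best)
        selected.append(remaining.pop(best))
    return order

def hard_subset_ratio(prev_ratio, val_acc, prev_acc, step=0.1, margin=0.01):
    """Scheduler sketch: admit more hard subsets only while accuracy keeps improving."""
    return min(1.0, prev_ratio + step) if val_acc - prev_acc > margin else prev_ratio
```

The greedy step relies on the diminishing-returns property of the objective, which appears to be what the abstract means by using submodularity to keep the subset-combination optimization out of poor local optima.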
(2) A progressive learning strategy based on network feedback is proposed to address the weak coupling between the difficulty quantification of subset groups and the training scheduler. To quantify class-subset combinations effectively and achieve smooth progressive learning, the proposed strategy introduces submodular optimization into the self-paced learning framework and builds a multi-task joint optimization model. Based on the difficulty quantification of class subsets and feedback on the training state, the joint model further screens the combined class-subset data in batches via self-paced learning, so that the fine-grained model can exploit submodular optimization and the classification loss to select the batch data best suited to the current iteration state. The joint optimization defines a progressive sample-screening framework that iterates between generating ordered sample groups and optimizing the deep model. As training progresses, more and more samples are judged to be easy, and the recognition ability of the model is gradually strengthened.
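The joint objective itself is not reproduced in the abstract, so the sketch below only illustrates the iteration the text describes, with hypothetical loss_fn and train_step callables standing in for the fine-grained network: per-sample losses fed back from the network drive a hard-weight self-paced mask, and the admission threshold grows so that more samples count as easy over time.

```python
# Sketch of self-paced screening with network feedback; the hard-weight
# self-paced rule (keep a sample only if its current loss is below a growing
# threshold) is the textbook formulation, not necessarily the thesis's exact
# joint objective.
import numpy as np

def self_paced_mask(losses, lam):
    """Binary self-paced weights: keep the samples the current model finds easy."""
    return losses < lam

def progressive_screening(ordered_subsets, loss_fn, train_step, lam=0.5, growth=1.2, epochs=10):
    """Alternate 'generate ordered sample groups' and 'optimize the deep model'."""
    for _ in range(epochs):
        for subset in ordered_subsets:        # class-subset order from the submodular stage
            losses = loss_fn(subset)          # network feedback: per-sample classification losses
            mask = self_paced_mask(losses, lam)
            if mask.any():
                train_step(subset[mask])      # update the model on the currently easy samples
        lam *= growth                         # raise the threshold so harder samples are admitted
```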
Keywords/Search Tags: Fine-grained Recognition, Progressive Strategy, Submodular Optimization, Model Feedback