
Prototype- And Uncertainty-Based Interpretable Image Recognition Model

Posted on: 2024-03-02 | Degree: Master | Type: Thesis
Country: China | Candidate: J Q Wang | Full Text: PDF
GTID: 2568307106499764 | Subject: Software engineering
Abstract/Summary:
In image recognition, neural network models are highly accurate and effective in many scenarios. However, in high-risk decision-making fields such as medical diagnosis and autonomous driving, high accuracy alone does not earn human trust, because every incorrect decision by the model can have immeasurable consequences. If a model can explain its decisions and give early warning of risky ones, it can earn that trust, so studying interpretable neural network models that can both explain decisions and warn of errors is very important. Self-explanatory models produce explanations of the model's decisions that humans can understand. The prototype-based self-explanatory model ProtoPNet simulates the evidence-based reasoning humans use in image recognition and generates human-understandable explanations grounded in the training set, thereby making its recognition decisions interpretable. However, ProtoPNet lacks risk-warning capability, so it cannot flag risky decisions or prevent mistakes before they happen. The common way to enable risk warning is to estimate the uncertainty of the model's decisions and use that uncertainty to judge whether a decision is risky. Therefore, to construct an interpretable neural network model for high-risk scenarios, this thesis first introduces uncertainty into ProtoPNet so that the self-explanatory model can measure the reliability of its decisions, and then designs risk thresholds on that uncertainty to give early warning of risky decisions. In addition, this thesis uses the model's explanations and uncertainty to analyze its capabilities, and finds that current self-explanatory models have shortcomings such as low prototype quality and insufficient predictive ability. Finally, this thesis addresses these defects through model architecture design and loss function adjustment. In summary, the research content and innovations of this thesis are as follows:

(1) Introduction and estimation of uncertainty in self-explanatory models. To give ProtoPNet early warning of risky decisions, uncertainty must be introduced into ProtoPNet to measure the reliability of its decisions. First, this thesis approximates the Bayesian network corresponding to ProtoPNet through a model ensemble, whose averaged prediction probability improves the reliability of the model's decisions; the information entropy of the ensemble's prediction probability is then used to represent decision reliability. This thesis calls this entropy the prediction uncertainty, representing all of the uncertainty produced by a prediction. Second, to distinguish whether the prediction uncertainty is caused by the model parameters or by the image, this thesis mathematically decomposes the prediction uncertainty into model uncertainty, caused by the model parameters, and data uncertainty, caused by the image information. Finally, this thesis also establishes an explanatory uncertainty for the self-explanatory model to measure the reliability of each prototype-based explanation; explanatory uncertainty can be used to analyze the model's decision process, explore the reasons for errors, and suggest possible improvements.
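The abstract does not spell out the formulas behind this decomposition. The following is only a sketch of the standard entropy-based decomposition commonly used with model ensembles; the ensemble size M, member predictions p_m, and threshold tau are notation introduced for this sketch, not symbols taken from the thesis.

```latex
% Ensemble of M models; \bar{p}(y \mid x) is the averaged prediction probability.
\bar{p}(y \mid x) = \frac{1}{M} \sum_{m=1}^{M} p_m(y \mid x)

% Prediction (total) uncertainty: entropy of the averaged prediction.
\mathcal{U}_{\mathrm{pred}}(x) = \mathbb{H}\big[\bar{p}(y \mid x)\big]
  = -\sum_{y} \bar{p}(y \mid x)\,\log \bar{p}(y \mid x)

% Data uncertainty: average entropy of the individual members' predictions.
\mathcal{U}_{\mathrm{data}}(x) = \frac{1}{M} \sum_{m=1}^{M} \mathbb{H}\big[p_m(y \mid x)\big]

% Model uncertainty: the remainder, i.e. the disagreement among ensemble members.
\mathcal{U}_{\mathrm{model}}(x) = \mathcal{U}_{\mathrm{pred}}(x) - \mathcal{U}_{\mathrm{data}}(x)

% Risk warning: intervene when the prediction uncertainty exceeds a threshold \tau.
\mathcal{U}_{\mathrm{pred}}(x) > \tau \;\Rightarrow\; \text{flag the decision as risky}
```

Under this form of the decomposition, high model uncertainty corresponds to disagreement among ensemble members (unreliable parameters), while high data uncertainty corresponds to ambiguous image content, matching the distinction between the two sources described in the abstract.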
In addition, after estimating these uncertainties, this thesis sets a risk threshold on the value of the prediction uncertainty to distinguish whether a decision is risky, and intervenes in risky decisions that exceed the threshold, which reduces the chance of harm. To sum up, the use of uncertainty lets the model issue risk warnings, analyze its existing problems, and suggest directions for further improvement, making the interpretable model scheme proposed in this thesis suitable for high-risk decision-making areas.

(2) Optimization of ProtoPNet. Through uncertainty analysis and analysis of the model's explanations, this thesis finds several defects in ProtoPNet: some of the prototypes it generates are of low quality, which degrades the explanatory ability of the whole model, and its predictive ability is not strong enough. These defects lower the model's practical value. To improve prototype quality, interpretive ability, and predictive performance, this thesis proposes a new self-explanatory model based on ProtoPNet, the High Quality Prototype Network (HQProtoPNet). Compared with existing work, this thesis first adds a random erasing operation to the traditional data augmentation methods (a minimal sketch of this step is given after the abstract), which improves the quality of the generated prototypes and the model's predictive performance. It then introduces a multi-scale transformation operation and improves the similarity calculation, so that prototypes carry multi-scale information and match more robustly. Together, these changes raise prototype quality and prediction performance: the proportion of low-quality prototypes falls from more than 30% in ProtoPNet to less than 3% in HQProtoPNet, and accuracy increases by about 5% over ProtoPNet, reaching or even exceeding many non-interpretable models. Moreover, because prototype quality is higher, stacking multiple models improves prediction accuracy without reducing interpretability, giving the model genuine stackability.

The interpretable model based on prototypes and uncertainty proposed in this thesis meets the need, in high-risk scenarios, for models that can both explain their decisions and give early warning of risky ones. It has significant practical value and can be extended to more fields.
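The abstract says only that HQProtoPNet adds random erasing to traditional data augmentation; it gives no pipeline details or parameters. The sketch below is a minimal, assumed PyTorch/torchvision version in which the input size, flip and rotation choices, normalization statistics, and erasing parameters are all illustrative.

```python
# Minimal sketch of a training-time augmentation pipeline with random erasing
# added at the end, as described for HQProtoPNet; all parameter values are assumptions.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),              # assumed input resolution
    transforms.RandomHorizontalFlip(p=0.5),     # "traditional" augmentation
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
    # Added step: erase a random rectangular patch of the tensor image so the
    # network cannot rely on a single region when forming prototypes.
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.2), value='random'),
])
```

Erasing patches during training pushes the network to gather evidence from several object parts, which is consistent with the abstract's claim that the operation improves prototype quality and prediction.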
Keywords/Search Tags: High-risk Decision-making, Interpretable model, Risk warning, Prototype-based, Uncertainty