
Research On The Intelligent Modeling Method With Better Transparency And Interpretability

Posted on: 2018-10-14
Degree: Master
Type: Thesis
Country: China
Candidate: J Y Chen
Full Text: PDF
GTID: 2334330518975152
Subject: Digital media technology
Abstract/Summary:
In complex tasks such as face recognition and speech recognition, intelligent models represented by neural networks already achieve high recognition accuracy. In certain fields, however, such as intelligent medical diagnosis, there is a stronger demand for the transparency and interpretability of intelligent modeling methods, because models with better interpretability help people discover the inherent laws of things. In general, traditional statistical learning methods are simple and therefore easy to understand and explain, whereas intelligent models behave like black boxes: their transparency is poor, so it is difficult to explain the inference process inside them.

Inference based on fuzzy rules is more semantic, which gives fuzzy systems better interpretability. Early fuzzy systems had low complexity, with only a small number of fuzzy rules forming the rule base; domain experts could take part in formulating the rules, so the constructed fuzzy systems were quite transparent. However, with the trend of integrating fuzzy systems into neural networks, the growing complexity of the fuzzy rules and the system structure leads to a loss of interpretability. To obtain a transparent and interpretable intelligent model, the following work was carried out.

1) The interpretability of artificial intelligence models such as neural networks and fuzzy systems is compared and analyzed, for example how the number of neurons and the number of fuzzy rules influence interpretability, and how classification models built with different classification strategies, such as "one-against-one" and "one-against-rest", each with its own advantages and disadvantages, differ in interpretability.

2) Based on the minimax probability decision technique, combined with neural networks, fuzzy systems and the kernel trick, a generalized hidden-mapping minimax probability machine is obtained, and the physical interpretation of its index parameter is pointed out (an illustrative sketch is given after this abstract). Simple experiments examine how differently the intelligent models perform in terms of interpretability on classification problems.

3) For epileptic EEG signal recognition, a single-hidden-layer radial basis function neural network is connected with a classification tree on the basis of the minimax probability decision technique. By paying close attention to the different separabilities between classes, a radial basis minimax probability classification tree with better interpretability is obtained; its inference process is clear, so it is easy to understand and explain.

4) Based on the interval type-2 TSK fuzzy system, fuzzy subspace clustering and grid partition are used to generate sparse and structured rule centers, which yields semantically more concise and clearer rule antecedents; the rule consequents are simplified to zero order, giving the interval type-2 fuzzy subspace zero-order TSK system (a simplified inference sketch follows below). Experiments on a large number of medical datasets verify the effectiveness and advantages of the proposed methods.
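The minimax probability decision technique used in items 2) and 3) rests on a distribution-free bound on classification accuracy. The sketch below is a minimal, illustrative version for the linear (non-kernel, non-hidden-mapping) two-class case; the function name, parameter names, NumPy implementation, and the assumption that the thesis's "index" is this worst-case accuracy bound are all illustrative choices, not the thesis's own code.

```python
import numpy as np

def mpm_worst_case_bound(a, mu1, cov1, mu2, cov2):
    """Worst-case accuracy bound of a linear minimax probability classifier.

    Decision rule: predict class 1 when a @ x >= b, class 2 otherwise.
    For a separating direction a,
        kappa = a^T (mu1 - mu2) / (sqrt(a^T cov1 a) + sqrt(a^T cov2 a))
        alpha = kappa^2 / (1 + kappa^2)
    alpha lower-bounds the probability of correct classification for every
    distribution that shares these class means and covariances.
    """
    margin = a @ (mu1 - mu2)
    spread1 = np.sqrt(a @ cov1 @ a)
    spread2 = np.sqrt(a @ cov2 @ a)
    kappa = margin / (spread1 + spread2)
    alpha = kappa ** 2 / (1.0 + kappa ** 2)
    # Threshold chosen so both classes attain the same worst-case bound.
    b = a @ mu1 - kappa * spread1
    return alpha, b

# Example with synthetic class statistics:
mu1, mu2, cov = np.array([1.0, 1.0]), np.array([-1.0, -1.0]), np.eye(2)
alpha, b = mpm_worst_case_bound(np.array([1.0, 1.0]), mu1, cov, mu2, cov)
print(alpha)  # ~0.667: at least 2/3 of samples are classified correctly
              # in the worst case over all distributions with these moments
```

Because the bound depends only on class means and covariances rather than on a fitted density, it gives a direct, distribution-free reading of how separable two classes are, which is what lets the classification tree in item 3) weight class pairs by their separability.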
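As a rough illustration of item 4), the sketch below implements zero-order TSK inference with interval type-2 Gaussian antecedents whose width is uncertain. The Gaussian membership form, the averaged type reduction (used instead of the full Karnik-Mendel procedure), and all function and parameter names are assumptions made for this sketch, not the thesis's exact formulation.

```python
import numpy as np

def it2_zero_order_tsk(x, centers, sigma_lo, sigma_hi, consequents):
    """Simplified interval type-2 zero-order TSK inference for one sample x.

    centers     : (K, d) antecedent centers, e.g. produced by fuzzy subspace
                  clustering or grid partition
    sigma_lo/hi : (K, d) widths of the lower/upper Gaussian memberships
    consequents : (K,) constant (zero-order) rule consequents
    """
    diff2 = (x - centers) ** 2                                   # (K, d)
    # Lower/upper firing strength of each rule (product t-norm); the wider
    # Gaussian gives the upper membership, the narrower one the lower.
    f_hi = np.exp(-diff2 / (2.0 * sigma_hi ** 2)).prod(axis=1)   # (K,)
    f_lo = np.exp(-diff2 / (2.0 * sigma_lo ** 2)).prod(axis=1)
    # Simplified type reduction: average of the outputs obtained from the
    # lower and upper firing strengths.
    y_lo = f_lo @ consequents / f_lo.sum()
    y_hi = f_hi @ consequents / f_hi.sum()
    return 0.5 * (y_lo + y_hi)
```

Because every consequent is a single constant, each rule can be read directly as "IF x lies near this rule center THEN the output is roughly c_k", which is the source of the interpretability the abstract emphasizes for the zero-order system.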
Keywords/Search Tags: intelligent model, fuzzy system, neural network, minimax probability, interpretability