
An Interpretable Classifier With Linear Discriminant Analysis Based On AFS Theory

Posted on: 2021-01-15
Degree: Master
Type: Thesis
Country: China
Candidate: Y J Deng
Full Text: PDF
GTID: 2428330620476901
Subject: Control Science and Engineering
Abstract/Summary:
Machine learning is on the rise, attracting a large number of researchers who invest time and energy in its development. Society urgently needs powerful, intelligent algorithms to improve productivity, and machine learning offers a direction. Common algorithms such as support vector machines, naive Bayes, and neural networks provide solutions to many practical problems, and machine learning shows increasingly powerful capabilities in fields including industry, image recognition, and business strategy. Among machine learning tasks, classification is undoubtedly one of the most important: classification problems are everywhere in the real world, from recognizing license plate numbers to identifying animals in pictures, and classification is one of the most basic tasks in nature. The study of classification theory and classification models has therefore long been a focus of research. Now that artificial intelligence has become a research hotspot, algorithms and models have developed explosively, including both traditional classification models and newly emerging intelligent models such as neural networks. Most researchers, however, focus on the accuracy and speed of learning models while neglecting their interpretability and understandability. Yet a certain degree of interpretability is very important for a model. A black-box model always feels incomprehensible, and even slightly unreliable, because we cannot guarantee that it will not make mistakes under certain circumstances, nor can we predict when, where, or why it will err. A black-box model is like a magic machine: it helps us solve problems, but we can never know how it solves them.

To make the model interpretable and understandable, this thesis combines axiomatic fuzzy set (AFS) theory with linear discriminant analysis (LDA). LDA projects the samples in a data set onto a lower-dimensional subspace in which samples of different classes are better separated. Through the semantic descriptions provided by AFS theory, a reasonable class description is determined for each category and used as the basis for classification: a new sample is assigned to the class whose description it matches with the highest degree of membership. This style of classification makes the model interpretable to a certain extent and easy to understand; we can, to some degree, predict the model's behavior, and when an error occurs we may be able to locate the logic behind it. The proposed model is compared with other machine learning classification algorithms on open-source data sets. The experimental results show that it achieves higher accuracy on most data sets, and that the variance of its predictions is smaller, indicating stronger generalization ability and stability. Finally, this thesis uses the data analysis language Python and the web framework Django to build a web-based analysis platform whose visual interface facilitates the use and debugging of the proposed model.
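The two-stage pipeline described above (project with LDA, then classify by highest class-description membership) can be sketched as follows. This is a minimal illustration, not the thesis's actual method: the Gaussian membership functions below are a stand-in for the AFS semantic class descriptions, which are not specified in this abstract, and the data set and hyperparameters are assumptions for demonstration only.

```python
# Sketch of an LDA-plus-membership classifier (assumed stand-in for the
# thesis's AFS-based class descriptions).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: LDA projects samples onto a subspace that separates the classes.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)
Z_tr, Z_te = lda.transform(X_tr), lda.transform(X_te)

# Stage 2: one membership function per class (a Gaussian around the class
# mean in the projected space; the thesis instead derives AFS descriptions).
stats = {c: (Z_tr[y_tr == c].mean(0), Z_tr[y_tr == c].std(0) + 1e-9)
         for c in np.unique(y_tr)}

def membership(z, c):
    mu, sigma = stats[c]
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2).prod(-1)

# Stage 3: assign each test sample to the class of highest membership.
pred = np.array([max(stats, key=lambda c: membership(z, c)) for z in Z_te])
print("accuracy:", (pred == y_te).mean())
```

A real AFS class description would replace the Gaussians with interpretable fuzzy concepts ("petal length is long and petal width is wide"), which is what gives the thesis's classifier its semantic readability; the max-membership decision rule itself is the same.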
Keywords/Search Tags: Feature Extraction, Semantic Interpretation, Axiomatic Fuzzy Set Theory, Machine Learning