
Disentangled Representation Learning And Constructing Conceptual Space

Posted on: 2020-10-22
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z J Li
Full Text: PDF
GTID: 1368330572496599
Subject: Computer Science and Technology
Abstract/Summary:
One of the key problems of artificial intelligence is the formal representation of knowledge and data. A well-defined representation allows downstream learning and model applications to be simpler and more efficient. An important criterion for evaluating a representation is interpretability. An interpretable representation not only helps humans understand the results of a method but also improves the robustness and generalizability of models; it can even help the computer capture the data distribution more precisely. Disentangled representation is one of the advanced research areas in interpretable representation. It aims to extract interpretable attributes from data, so as to provide robust features for downstream machine learning methods and to enable computers to perform human-like reasoning. This thesis focuses on learning methods for disentangled representation and their application to the construction of conceptual space.

In the first part, two unsupervised methods for learning disentangled representations are presented. (1) A learning method based on analogical reasoning. This method builds on the insight that each interpretable attribute corresponds to an analogical relation. It represents the analogical relation between sample pairs as a parallelogram in the latent space and designs a classifier to learn the variation between analogical sample pairs. The generator and the classifier are then trained jointly to capture the interpretable attributes. It is shown that this learning method indirectly maximizes the mutual information between latent codes and sample pairs. (2) A disentanglement learning method based on the assumption that attributes are pairwise independent. Statistical analysis of real-world data reveals that interpretable attributes tend to be pairwise independent rather than mutually independent. The proposed method is therefore rooted in the pairwise independence assumption and designs a term to measure pairwise independence, including an exact computation method and a sampling method. It also uses a new variational lower bound on the log-likelihood, which constrains the marginal distributions of the attributes but not their joint distribution; this lower bound can thus be combined with the pairwise independence term to complete the proposed algorithm.

In the second part, the application of disentangled representation learning to conceptual space construction and concept learning is presented. Based on the definitional consistency between disentangled representation and conceptual space, the conceptual space is approximated by the latent space of a disentangled representation. In addition, two ways of representing concepts in the conceptual space are introduced: a representation based on neighborhood models and a representation based on density peaks. Specifically, the formal definition of the neighborhood-based representation and its representation of a single concept are reviewed, and representation and reasoning methods for multiple concepts are explored. For the density-peak-based representation, the definition and the learning method are described. Finally, joint experiments combining disentanglement learning and concept learning are demonstrated.

In summary, this thesis explores new unsupervised learning methods for disentangled representation and reveals the important observation that interpretable attributes tend to be pairwise independent. The proposed methods are expected to provide better data representations for other machine learning algorithms and applications. In particular, the proposed methods extract features from data that are helpful for dimensionality reduction and visualization. These features can also be used in supervised learning tasks to avoid overfitting and to improve the generalizability of models. For unsupervised learning, the features can introduce additional priors and help uncover the structure of the data.
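The pairwise-independence term described above is specific to the thesis, but the underlying idea of scoring how far latent dimensions are from pairwise independence can be illustrated with a simple correlation-based proxy. The sketch below is a hypothetical stand-in, not the thesis's actual estimator: it uses mean absolute off-diagonal correlation, which only detects linear dependence.

```python
import numpy as np

def pairwise_dependence_proxy(z):
    """Correlation-based proxy for pairwise dependence between latent dimensions.

    z: array of shape (n_samples, n_dims) holding latent codes.
    Returns the mean absolute off-diagonal correlation:
    0 means the dimensions are pairwise uncorrelated; 1 means perfectly dependent.
    (This is only an illustrative linear proxy, not the thesis's measure.)
    """
    corr = np.corrcoef(z, rowvar=False)        # (d, d) correlation matrix
    d = corr.shape[0]
    off_diag = corr[~np.eye(d, dtype=bool)]    # drop the diagonal self-correlations
    return float(np.abs(off_diag).mean())

# Independent dimensions score near zero; duplicated dimensions score near one.
rng = np.random.default_rng(0)
independent = rng.normal(size=(10_000, 4))                     # 4 independent dims
duplicated = np.repeat(rng.normal(size=(10_000, 1)), 4, axis=1)  # 4 copies of one dim
print(pairwise_dependence_proxy(independent))  # close to 0
print(pairwise_dependence_proxy(duplicated))   # close to 1
```

A term like this can be added as a penalty to a training objective; the thesis's version additionally provides a sampling method so that the term stays tractable during mini-batch training.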
Keywords/Search Tags: representation learning, unsupervised learning, disentangled representation, interpretability, conceptual space