
Research On Representation Learning Of Entity Knowledge

Posted on: 2018-08-27
Degree: Master
Type: Thesis
Country: China
Candidate: J P Zhu
Full Text: PDF
GTID: 2348330518983390
Subject: Computer application technology
Abstract/Summary:
With the explosive growth of big data, data volumes have become enormous, and it is difficult to obtain accurate information from the Internet. Recently, with the development of deep learning, representation learning for knowledge bases has attracted great attention, and it has been applied to the construction of large-scale knowledge graphs. Although there are many ways to represent words, representing them efficiently is an important and fundamental task. At present, representation learning, also called distributed representation, is widely studied: a word is mapped, using its context information, into a low-dimensional space, yielding a vector representation. Unlike traditional vector representations, the word vector is low-dimensional, real-valued, and dense. Low dimensionality effectively reduces computational complexity; real values express the semantics of words clearly; and density ensures that every dimension participates in the computation. Representation learning is therefore a very efficient form of representation.

Knowledge is generally expressed in a knowledge base as triples (head entity, relation, tail entity). Using the word-vector representation, a knowledge triple is expressed as (head entity vector h, relation vector r, tail entity vector t). Since word vectors exhibit translation invariance, after a series of linear transformations the equation h + r = t holds for an entity-knowledge triple: adding a relation r to the head entity h yields the tail entity t. Based on this property, some researchers have proposed translation models, which regard the relation r as a translation from the head entity h to the tail entity t. Because the translation model is too simple, its representation of complex relations is unsatisfactory, and relations and entities are mixed in the same space. Through a series of matrix mappings, this thesis presents a new model (Translating for Mapping Matrix, TMM), which maps head and tail entities into the same relational semantic space and obtains new representations of the head entity, the tail entity, and the relation. For complex relations, especially in the presence of ambiguity and noise, TMM still cannot capture this information effectively. Therefore, this thesis further proposes a Gaussian disambiguation model (Translating for Mapping Matrix and Gaussian, TMMG), which regards relations and entities as Gaussian distributions and effectively resolves ambiguous relations. In link-prediction experiments, the results show that the new models achieve better performance.

Reasoning over entity knowledge includes rule-based reasoning and graph-based reasoning. Rule-based reasoning relies on background knowledge and experience and introduces large numbers of hand-crafted rules; its advantage is high reasoning accuracy, and its disadvantages are poor generality and dependence on manual rule construction. Graph-based reasoning can discover inference relations automatically through algorithms; its advantages are automatic discovery and good generality, while its disadvantage is lower accuracy. In a knowledge graph, nodes represent entity knowledge and edges represent relations. More important nodes can be given larger weights, while sparse relation paths between two nodes can be treated as secondary. Based on this idea, this thesis proposes a weighted graph reasoning model (Path Inference with Weight, PIW). Finally, the experimental results show that the PIW model greatly improves retrieval accuracy.
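The translation principle h + r = t described above can be sketched as follows. This is a minimal illustration in the style of TransE-like translation models; the random embeddings, dimensionality, and L2 scoring function are assumptions for demonstration, not the thesis's actual implementation.

```python
import numpy as np

# Illustrative sketch of the translation principle h + r ≈ t.
# Embeddings are random here; in practice they are learned from triples.
rng = np.random.default_rng(0)
dim = 50

h = rng.normal(size=dim)   # head entity embedding
r = rng.normal(size=dim)   # relation embedding
t = h + r                  # a tail entity that satisfies the translation exactly

def score(h, r, t):
    """Energy of a triple: lower means more plausible (L2 distance)."""
    return np.linalg.norm(h + r - t)

print(score(h, r, t))                      # 0.0 for a true triple
print(score(h, r, rng.normal(size=dim)))   # much larger for a random tail
```

A translation model ranks candidate tail entities by this score; TMM-style approaches additionally apply relation-specific matrix mappings to h and t before scoring, so that entities and relations are no longer forced into one shared space.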
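The weighted-path idea behind PIW can be sketched as a shortest-path search in which stepping through a more important (higher-weight) node is cheaper, so paths through important nodes are preferred. The toy graph, node weights, entity names, and cost function below are hypothetical illustrations of the idea, not the thesis's model.

```python
import heapq

# Toy knowledge graph: node -> [(neighbor, relation), ...] (illustrative data).
graph = {
    "Paris":  [("France", "capital_of")],
    "France": [("Europe", "located_in"), ("Paris", "has_capital")],
    "Europe": [],
}
# Hypothetical importance weights; a higher weight makes a node cheaper to visit.
node_weight = {"Paris": 2.0, "France": 3.0, "Europe": 1.0}

def best_path(graph, weight, src, dst):
    """Dijkstra search where entering a node costs 1 / weight[node]."""
    pq = [(0.0, src, [])]          # (accumulated cost, node, relation path)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, rel in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + 1.0 / weight[nbr], nbr, path + [rel]))
    return float("inf"), []

cost, rels = best_path(graph, node_weight, "Paris", "Europe")
print(rels)  # ['capital_of', 'located_in']
```

Ranking relation paths by such a weighted cost captures the intuition stated above: paths through well-connected, important nodes score well, while sparse paths between two nodes are treated as secondary evidence.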
Keywords/Search Tags:Word Embedding, Representation Learning, Knowledge Graph