
A Research On Knowledge Representation Learning Based On Recurrent Neural Network

Posted on: 2021-09-02
Degree: Master
Type: Thesis
Country: China
Candidate: T Huang
Full Text: PDF
GTID: 2568306194475874
Subject: Computer software and theory
Abstract/Summary:
Recently, artificial intelligence (AI) has developed rapidly and attracted much attention in related fields. Knowledge graphs are an important part of AI, but they are far from complete, since the world is crowded with massive amounts of information. Knowledge graph completion is therefore an important research topic. A powerful method for this task is knowledge representation learning, which encodes the entities and relations of a knowledge graph into a low-dimensional vector space and can efficiently predict the missing parts of triplets through the semantic connections between vectors. Some simple methods represent knowledge through basic vector operations; although their structure is simple, their semantic expression ability is limited. Models such as ConvE introduce neural networks into knowledge representation and have strong semantic representation ability, but they have difficulty modeling triplet order, which is crucial for knowledge representation. To represent knowledge more effectively, we introduce recurrent neural networks (RNNs) into this task, because an RNN can model a triplet as a sequence and capture the sequential information within it. We propose two kinds of models: a knowledge representation model encoded by an RNN, called RecE, and knowledge representation models pre-trained by an RNN, called Re_DistMult and Re_ComplEx. The details are as follows:

The first model applies an RNN to model triplets and capture the sequential information and semantic associations within them. Specifically, to obtain the score of a triplet, the model feeds the head entity and the relation into the RNN as a sequence; the output is then combined with the tail entity through an inner product. Considering that different recurrent network structures perform differently on various knowledge graphs, and in order to make the model more robust, we select three recurrent neural networks, SRN, LSTM, and GRU, to model the knowledge graphs respectively; the corresponding models are named RecE(SRN), RecE(LSTM), and RecE(GRU).

The second model uses an RNN to pre-train the triplets and capture the interactive information between them, then feeds the pre-trained triplets into benchmark models for further training, with the aim of improving the benchmark models' performance. To fully verify the pre-training ability of the RNN, we select two benchmark models, DistMult and ComplEx, for pre-training: the former represents entities and relations as real-valued vectors, while the latter represents knowledge in a complex space. Similarly, we select SRN, LSTM, and GRU for pre-training; the corresponding models are named Re_DistMult(SRN), Re_DistMult(LSTM), Re_DistMult(GRU), Re_ComplEx(SRN), Re_ComplEx(LSTM), and Re_ComplEx(GRU).

Finally, experimental results on the link prediction task over different datasets show that our models perform excellently, surpassing the benchmark models and other comparison models. This indicates that an RNN can effectively capture the structural information of knowledge graphs to better encode triplets, and that when pre-training triplets it can also capture internal interaction information to improve the performance of the benchmark models.
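The scoring procedure described above — feed the head entity and relation into an RNN as a two-step sequence, then take the inner product of the RNN output with the tail embedding — can be illustrated with a minimal NumPy sketch. This is a hypothetical toy implementation, not the thesis's actual code: the embedding dimension, weight initialization, and the simple-recurrent-network (SRN) cell shown here are all assumptions; a DistMult-style score is included for contrast with the benchmark model.

```python
# Toy sketch of RecE(SRN)-style triplet scoring (all names/dims are assumptions).
import numpy as np

rng = np.random.default_rng(0)
d = 8  # assumed embedding dimension

# Toy embeddings for one triplet (head h, relation r, tail t)
h = rng.normal(size=d)
r = rng.normal(size=d)
t = rng.normal(size=d)

# Simple recurrent (SRN) cell parameters: s' = tanh(W s + U x + b)
W = rng.normal(size=(d, d)) * 0.1
U = rng.normal(size=(d, d)) * 0.1
b = np.zeros(d)

def srn_step(s, x):
    """One recurrent step on state s with input x."""
    return np.tanh(W @ s + U @ x + b)

# Feed head entity, then relation, as a length-2 sequence
s = np.zeros(d)
s = srn_step(s, h)
s = srn_step(s, r)

# RecE-style score: inner product of the RNN output with the tail embedding
rec_e_score = float(s @ t)

# DistMult benchmark score for comparison: <h, r, t> = sum_i h_i * r_i * t_i
distmult_score = float(np.sum(h * r * t))
```

Swapping `srn_step` for an LSTM or GRU cell would give the RecE(LSTM) and RecE(GRU) variants; only the recurrent cell changes, while the sequence-then-inner-product scoring scheme stays the same.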
Keywords/Search Tags:artificial intelligence, knowledge graph, knowledge representation, recurrent neural network