
Multi-Source Co-Attention Networks For Composite Question Generation

Posted on: 2021-04-16
Degree: Master
Type: Thesis
Country: China
Candidate: Z H Song
Full Text: PDF
GTID: 2428330623469117
Subject: Computer Science and Technology
Abstract/Summary:
Question generation is a natural language processing task that plays an important role in dialogue systems, education, and the expansion of question answering datasets. The growing availability of large knowledge bases has made question generation an increasingly popular research topic and has raised the quality of generated questions. To generate composite questions from knowledge bases, this thesis proposes a multi-source co-attention neural network.

Previous work generated questions from a single subject-predicate-object triple. Composite question generation, in contrast, requires modeling the relations among multiple entities so that the output question mentions all of them. Template-based methods are inflexible and demand considerable human effort to design templates of sufficient quality, while simple neural models cannot cope with the complex linguistic structure and diverse language patterns involved. This thesis therefore introduces a multi-source co-attention encoder-decoder network. The encoder encodes multiple triple sequences with a shared bidirectional LSTM and a co-attention mechanism, and the decoder fuses the encoder's outputs with a hierarchical attention mechanism to generate composite questions. In addition, an answer-aware decoding module mitigates the mismatch between generated question types and the answer.

Extensive experiments were conducted on the newly released FreebaseQA dataset, with simple end-to-end baselines and model variants used for comparison. Compared with K2Q-RNN, the proposed model improves BLEU-4 by 81.9%, ROUGE-L by 21.1%, Dist-1 by 24.4%, Dist-2 by 102.7%, Ent-4 by 18.8%, Answerability by 88.2%, Fidelity by 77.9%, Naturalness by 34.4%, and Richness by 63.9%. Compared with MCN, the average recall of generated question types rises by 3.4%, with recall gains of up to 13.3% for where-type and 9.8% for how-type questions.
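To make the described architecture concrete, the following is a minimal PyTorch sketch, not the thesis implementation: several triple sequences are encoded by a shared bidirectional LSTM, a simple dot-product co-attention lets each source attend over the other sources, and a two-level (word-level, then source-level) attention fuses the resulting memories on the decoder side. All module names, dimensions, and the exact attention scoring functions are illustrative assumptions.

```python
# Sketch of a multi-source co-attention encoder with hierarchical fusion.
# Hyper-parameters and scoring functions are assumptions, not the thesis code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiSourceCoAttentionEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # One BiLSTM shared by all triple sequences ("identical" encoder).
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        enc_dim = 2 * hidden_dim
        self.fuse = nn.Linear(2 * enc_dim, enc_dim)

    def forward(self, sources):
        # sources: list of LongTensors, each (batch, seq_len_i), one per triple.
        encoded = [self.bilstm(self.embedding(src))[0] for src in sources]

        outputs = []
        for i, h_i in enumerate(encoded):
            # Concatenate the other sources along the time axis.
            others = torch.cat([h for j, h in enumerate(encoded) if j != i], dim=1)
            # Co-attention: each position of h_i attends over the other sources.
            scores = torch.bmm(h_i, others.transpose(1, 2))   # (batch, len_i, len_others)
            attn = F.softmax(scores, dim=-1)
            context = torch.bmm(attn, others)                 # (batch, len_i, enc_dim)
            outputs.append(torch.tanh(self.fuse(torch.cat([h_i, context], dim=-1))))
        return outputs  # one co-attended memory per source, fed to the decoder


def hierarchical_attention(dec_state, memories):
    # dec_state: (batch, enc_dim); memories: list of (batch, len_i, enc_dim).
    # Word-level attention inside each source, then source-level attention
    # over the per-source summaries.
    summaries, gates = [], []
    for mem in memories:
        scores = torch.bmm(mem, dec_state.unsqueeze(-1)).squeeze(-1)      # (batch, len_i)
        alpha = F.softmax(scores, dim=-1)
        summaries.append(torch.bmm(alpha.unsqueeze(1), mem).squeeze(1))   # (batch, enc_dim)
        gates.append((dec_state * summaries[-1]).sum(-1, keepdim=True))
    beta = F.softmax(torch.cat(gates, dim=-1), dim=-1)                    # (batch, n_sources)
    stacked = torch.stack(summaries, dim=1)                               # (batch, n_sources, enc_dim)
    return torch.bmm(beta.unsqueeze(1), stacked).squeeze(1)               # (batch, enc_dim)


if __name__ == "__main__":
    enc = MultiSourceCoAttentionEncoder(vocab_size=1000)
    triples = [torch.randint(0, 1000, (2, 3)) for _ in range(2)]  # 2 triples, batch of 2
    mems = enc(triples)
    ctx = hierarchical_attention(torch.randn(2, 512), mems)
    print([m.shape for m in mems], ctx.shape)
```

Sharing one BiLSTM across sources keeps the parameter count independent of the number of input triples, which is consistent with the abstract's description of encoding every triple sequence with an identical network.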
Keywords/Search Tags: Question Generation, Knowledge Bases, Co-attention Mechanism, Neural Network