Text summarization, one of the principal applications of natural language processing, has attracted increasing attention with the rise of deep language models. The goal of text summarization is to distill the core content of a document and present it in concise language. Most current research focuses on supervised text summarization, while work on unsupervised text summarization remains comparatively limited. Because annotating and curating summary data requires substantial human effort, research on unsupervised summarization has both academic value and practical significance. Among existing approaches to unsupervised summarization, the mainstream methods ignore the influence of context and operate only at the sentence level, without considering the summary as a whole. Motivated by these issues, this work studies unsupervised algorithms and builds an unsupervised summarization model by improving an existing one. The main contributions are as follows. First, by adjusting the input structure of a pretrained language model, sentence vectors that incorporate contextual information are generated, and experiments demonstrate the importance of context for the summarization task. Second, by combining three elements of a summary, the model is both diversified and simplified so that summaries are generated more accurately. Third, further improvement is achieved by computing the overall semantic similarity between the summary and the source text, which confirms the importance of the summary's integrity as a whole. Finally, these components are integrated into an unsupervised extractive summarization model.
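To make the whole-summary similarity idea concrete, the following is a minimal sketch, not the paper's actual method: it scores a candidate extractive summary by encoding the concatenated summary and the full document with a generic pretrained sentence encoder and comparing them as single vectors. The encoder name, the cosine scoring, and the brute-force selection are illustrative assumptions.

```python
# Illustrative sketch only: score a candidate summary by its overall semantic
# similarity to the source text, rather than sentence by sentence.
from itertools import combinations

import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed generic pretrained sentence encoder (not the model used in the paper).
encoder = SentenceTransformer("all-MiniLM-L6-v2")


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def summary_document_similarity(summary_sentences: list[str], document: str) -> float:
    # Encode the summary as a whole and the full document, then compare them
    # as single vectors, so the summary is evaluated for overall coverage.
    summary_vec = encoder.encode(" ".join(summary_sentences))
    doc_vec = encoder.encode(document)
    return cosine(summary_vec, doc_vec)


def select_summary(sentences: list[str], document: str, k: int = 2) -> list[str]:
    # Toy selection: exhaustively pick the k-sentence subset whose whole-summary
    # similarity to the document is highest (feasible only for short inputs).
    best = max(
        combinations(sentences, k),
        key=lambda cand: summary_document_similarity(list(cand), document),
    )
    return list(best)
```

In this toy setup, a subset of sentences is preferred when, taken together, it best preserves the document's overall meaning, which is one simple way to operationalize treating the summary as a whole rather than ranking sentences independently.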