Machine reading comprehension (MRC), as an emerging form of intelligent question answering, allows machines to analyze questions entered by users, understand their intent, and quickly and accurately locate fine-grained answer spans in articles through MRC models. It offers users an efficient way to retrieve answers, is widely used in scenarios such as Baidu search, automatic question-answering bots, and intelligent customer-service bots, and has strong development prospects and application value. With the growing variety and complexity of Chinese and English MRC datasets and the substantial increase in GPU computing power, MRC systems have gradually shifted from traditional shallow semantic analysis to deep semantic understanding.

Most existing MRC models suffer from the following problems: traditional word vectors cannot efficiently capture rich lexical relationships; encoding text with bidirectional recurrent neural networks or convolutional neural networks cannot effectively exploit long-distance dependencies in the text, and current deep learning models ignore linguistic cues such as deep semantic information; and the semantic information of the question and the passage is not fully fused through interaction, so the model cannot accurately locate the answer in the passage. To address these problems, this thesis works on the DuReader 2.0 dataset and improves text representation and feature fusion on top of the BERT model. It constructs a Chinese MRC model that captures high-level semantic features with multiple stacked high-level networks and integrates self-attention mechanisms, then uses a hybrid attention mechanism to improve performance and realize multi-layer attention interaction between passage and question. The resulting model reduces training overhead and improves the accuracy of answer prediction.

The main research work of this thesis is as follows.

First, two new MRC models are proposed. To remedy the inability of traditional word vectors to capture rich lexical relationships, the pre-trained BERT model is used as the text encoder for initial encoding, and multiple high-level networks modeled on the Transformer encoder are stacked on top of it to capture high-level semantic features and realize multi-layer attention interaction between passage and question. However, the resulting C-S Reader model still has notable shortcomings in building global semantic relations and in long-distance semantic reasoning. On top of this model, the thesis therefore incorporates a mixed attention mechanism, adding two attention synthesizers, Random Synthesizer and Dense Synthesizer, to further strengthen the model's ability to build global relationships and to focus on local information.
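Below is a minimal PyTorch sketch of what the two synthesizer heads described above typically look like under the standard Synthesizer formulation: the Dense Synthesizer predicts attention scores from each token's own hidden state, while the Random Synthesizer learns an input-independent score matrix. The layer sizes, masking scheme, the `DenseSynthesizer`/`RandomSynthesizer` names, and how these scores are later combined with BERT's standard self-attention are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseSynthesizer(nn.Module):
    """Dense Synthesizer head: attention scores are synthesized from each
    token's own hidden state rather than from query-key dot products."""

    def __init__(self, hidden_size: int, max_len: int):
        super().__init__()
        # F(X): map each hidden vector to one score per attended position
        self.score_proj = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, max_len),
        )
        self.value_proj = nn.Linear(hidden_size, hidden_size)  # G(X): values

    def forward(self, x, mask=None):
        # x: (batch, seq_len, hidden_size), seq_len <= max_len
        seq_len = x.size(1)
        scores = self.score_proj(x)[:, :, :seq_len]   # (batch, seq_len, seq_len)
        if mask is not None:                          # mask: (batch, seq_len), 1 = real token
            scores = scores + (1.0 - mask[:, None, :].float()) * -1e9
        attn = F.softmax(scores, dim=-1)
        return attn @ self.value_proj(x)


class RandomSynthesizer(nn.Module):
    """Random Synthesizer head: a learnable attention-score matrix shared
    across all inputs, independent of token content."""

    def __init__(self, hidden_size: int, max_len: int):
        super().__init__()
        self.scores = nn.Parameter(torch.randn(max_len, max_len) * 0.02)
        self.value_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x, mask=None):
        seq_len = x.size(1)
        scores = self.scores[:seq_len, :seq_len].unsqueeze(0)  # (1, seq_len, seq_len)
        if mask is not None:
            scores = scores + (1.0 - mask[:, None, :].float()) * -1e9
        attn = F.softmax(scores, dim=-1)
        return attn @ self.value_proj(x)
```

In the mixed attention setting, the outputs of such heads would be combined with the ordinary BERT self-attention output; the exact mixing weights depend on the model configuration and are not reproduced here.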
Second, the models are trained and tested. Both models are trained starting from the code and parameters released by the joint laboratory of Harbin Institute of Technology and iFLYTEK, and multiple sets of experimental results are obtained after repeated experiments and parameter tuning. To verify whether the models' ability has improved, several comparison experiments are designed on the DuReader 2.0 dataset to validate their effectiveness, together with an ablation study that measures each component's contribution to the overall improvement. The experimental results show that on the test set the accuracy of the proposed C-S Reader model is 14.5% higher than that of the traditional BiDAF model and 8.6% higher than that of the BERT baseline, which effectively improves the model's semantic extraction ability. After the mixed attention mechanism is incorporated on top of this model, accuracy improves by a further 3.5%, and the model's generalization ability is further enhanced.

Third, the MRC model is deployed and applied. This thesis designs and implements an online web-based Chinese MRC system. The system adopts a front-end/back-end separated architecture built on the Vue and Flask frameworks and uses the BERT model with mixed attention as the main prediction model behind the question-answering service. It provides a polished visual interface whose main functions include answer prediction, data analysis, model upload, user permission management, and dataset listing. Performance tests show that the system responds quickly under concurrent load and runs stably.
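As a rough illustration of the front-end/back-end separated deployment described above, the sketch below shows how a Flask back end could expose the prediction model as a JSON API for the Vue front end to call. The endpoint path, payload fields, and the `predict_answer` helper are hypothetical placeholders, not the system's actual interface.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


def predict_answer(question: str, passage: str) -> str:
    # Placeholder: the real service would tokenize with the BERT tokenizer,
    # run the mixed-attention model, and decode the best-scoring answer span.
    return passage[:50]


@app.route("/api/predict", methods=["POST"])  # endpoint name is an assumption
def predict():
    payload = request.get_json(force=True)
    question = payload.get("question", "")
    passage = payload.get("passage", "")
    if not question or not passage:
        return jsonify({"error": "question and passage are required"}), 400
    answer = predict_answer(question, passage)
    return jsonify({"question": question, "answer": answer})


if __name__ == "__main__":
    # The Vue front end is served separately and calls this JSON API,
    # reflecting the front-end/back-end separation.
    app.run(host="0.0.0.0", port=5000)
```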