
Research and Application of Resource Allocation Technology Based on the BiGRU Model

Posted on: 2022-01-06
Degree: Master
Type: Thesis
Country: China
Candidate: T T Jiang
Full Text: PDF
GTID: 2492306491453444
Subject: Computer software and theory
Abstract/Summary:
At present, the electric power industry lacks a complete system for the intelligent allocation of enterprise resources. Allocating resources often requires process approval by multiple departments, and the long approval time means that resources cannot be put to use immediately. How to allocate resources efficiently is therefore the focus of this thesis. The goal is to build an intelligent enterprise resource allocation model that supports managers' decisions when they process applications, allocates resources efficiently, shortens allocation time, and improves work efficiency. The rapid development of text classification technology provides the technical basis for the intelligent allocation, or intelligent approval, studied here. This thesis models the problem with text classification techniques, introduces the relevant techniques and modeling ideas, and finally verifies the model experimentally by comparing it with existing models.

In natural language processing, text classification is a core task. Its goal is to extract effective features from text and to find the correspondence between feature representations and category labels so that the text can be classified. Viewed as a data flow, it divides into five stages: text preprocessing, vector representation of natural language, feature extraction, classification, and model training. This thesis focuses on the vector representation and feature extraction stages and proposes a hybrid text classification model that combines a BERT model with a bidirectional GRU (BiGRU) and uses a self-attention mechanism to adjust feature weights.

Among existing deep learning models, the Word2Vec word embedding model cannot resolve lexical ambiguity: each word receives a single vector, which directly limits the semantic representation of a text sequence. A BERT model pre-trained on large-scale data can learn multiple representations of a word according to the sentence it appears in, dynamically representing word vectors in different contexts and thus handling polysemy. A GRU can learn long-term dependencies from local features, and a bidirectional GRU can learn the hidden features of a sentence in both directions. The self-attention mechanism performs well across many areas of natural language processing: it mines the autocorrelation within the data and highlights key information by adjusting the weights of keywords. The bidirectional GRU and the self-attention mechanism are therefore connected in series to reduce dependence on external information and to counter the degradation of classification performance caused by missing semantic information. A minimal sketch of the self-attention computation follows.
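The sketch below illustrates scaled dot-product self-attention, the weighting mechanism the abstract refers to. It is an illustrative reading, not the thesis author's code; the function name, projection matrices, and shapes are assumptions.

```python
# Illustrative sketch of scaled dot-product self-attention.
# w_q, w_k, w_v are assumed learned projection matrices of shape (dim, dim).
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, seq_len, dim) -> re-weighted representation of same shape."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Autocorrelation of the sequence with itself: every position scores
    # every other position, so keywords receive larger weights.
    scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
    weights = torch.softmax(scores, dim=-1)   # each row sums to 1
    return weights @ v                        # weighted sum of values
```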
In view of the shortcomings of existing models in word embedding representation and in modeling global text information, the proposed model combines a bidirectional GRU with the self-attention mechanism to extract deep semantic features. Concretely: the pre-trained BERT language model serves as the word embedding layer; a bidirectional GRU layer is added on top to encode and integrate semantic information; the output of each GRU direction parameterizes the self-attention layer for that direction, which reduces the influence of irrelevant information; the outputs of the two directions are average-pooled and concatenated, then fused with the sentence features obtained from the word embedding layer to produce the final feature representation; finally, a classifier predicts the category from this representation. Experimental results show that the model reaches an accuracy of 92.34%.
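A minimal PyTorch sketch of this architecture follows, assuming a hidden size of 256, single-head attention per direction, and BERT's pooled output as the sentence-level feature; none of these settings are specified in the abstract, and the class name and checkpoint are illustrative.

```python
# Sketch of the described BERT + BiGRU + self-attention hybrid classifier.
# Layer sizes, the attention formulation, and the fusion step are assumptions
# inferred from the abstract, not the thesis author's exact configuration.
import torch
import torch.nn as nn
from transformers import BertModel

class BertBiGRUAttention(nn.Module):
    def __init__(self, num_classes, hidden_size=256,
                 bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        emb_dim = self.bert.config.hidden_size  # 768 for BERT-base
        self.bigru = nn.GRU(emb_dim, hidden_size, batch_first=True,
                            bidirectional=True)
        # One self-attention head per GRU direction.
        self.attn_fwd = nn.MultiheadAttention(hidden_size, num_heads=1,
                                              batch_first=True)
        self.attn_bwd = nn.MultiheadAttention(hidden_size, num_heads=1,
                                              batch_first=True)
        # Fused vector = pooled forward + pooled backward + sentence feature.
        self.classifier = nn.Linear(2 * hidden_size + emb_dim, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        seq = out.last_hidden_state            # (B, T, 768) word embeddings
        cls = out.pooler_output                # (B, 768) sentence feature
        gru_out, _ = self.bigru(seq)           # (B, T, 2 * hidden)
        fwd, bwd = gru_out.chunk(2, dim=-1)    # split the two directions
        # Self-attention over each direction, then average-pool over time.
        fwd_attn, _ = self.attn_fwd(fwd, fwd, fwd)
        bwd_attn, _ = self.attn_bwd(bwd, bwd, bwd)
        pooled = torch.cat([fwd_attn.mean(dim=1),
                            bwd_attn.mean(dim=1)], dim=-1)
        # Fuse with the sentence feature from the embedding layer.
        fused = torch.cat([pooled, cls], dim=-1)
        return self.classifier(fused)
```

Fusing the pooled BiGRU features with the sentence-level embedding gives the classifier both a sequence-level and a sentence-level view of the text, which is one plausible reading of the fusion step described above.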
Keywords/Search Tags: BiGRU, BERT, Text classification, Self-attention mechanism, Resource allocation