
Research On Listwise Context Reranking Algorithm Based On Multi-Attention

Posted on: 2021-08-23
Degree: Master
Type: Thesis
Country: China
Candidate: C F Guo
Full Text: PDF
GTID: 2518306107453074
Subject: Computer technology
Abstract/Summary:
Learning to rank search-engine results with machine learning has been a very active research area in recent years. In information retrieval, the usual practice is to use a scoring function that converts the relationship between a query and each document into a relevance estimate, then sort the documents by that estimate and return the result to the user. However, as the number of candidate documents and of factors affecting relevance grows, ranking becomes increasingly difficult. For search engines, users mostly care about the most relevant part of the results, i.e. the top-N documents for a query with a large candidate set. The top-N documents can therefore be reranked after a simple initial sort, which reduces algorithmic complexity and improves the user experience.

Existing learning-to-rank (LTR) algorithms still have significant drawbacks. On the one hand, some of them, such as Ranking SVM, fail to consider the relative relationships between documents, so a large amount of information is lost; on the other hand, models such as the Deep Listwise Context Model remain limited in computational efficiency and in their ability to extract interaction information.

To address these problems, we propose the Multi-Attention Listwise Context Model (MA-LCM). First, Ranking SVM pre-sorts all documents; the top-N documents are then truncated and mapped to a stronger representation vector, which is fed to an encoder composed of multiple encoding units based on multi-head attention and feed-forward networks; finally, a decoder composed of multiple decoding units maps the encoded vectors to a sequence of scores, and the final ranking of the documents is obtained by sorting these scores. In addition, we combine a listwise loss strategy with forward attention to bring the model's learning objective closer to the evaluation criteria. The Multi Decoder Framework proposed in MA-LCM can accommodate combinations of different decoding strategies. Because of its parallel encoding, MA-LCM is very stable with respect to variation in sequence length. Finally, experiments show that MA-LCM has better ranking ability than the existing optimized algorithms.
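The encode-then-score pipeline described above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual model: the shapes, random weights, single encoding unit, and the linear "decoder" are all assumptions made for brevity, standing in for the trained multi-head-attention encoder and multi-decoder framework of MA-LCM.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads, rng):
    """One (untrained) multi-head self-attention unit over the document list."""
    n, d = X.shape
    dh = d // n_heads
    heads = []
    for _ in range(n_heads):
        # Random projections stand in for learned Q/K/V weight matrices.
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dh))  # (n, n): each doc attends to the whole list
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1)   # (n, d)

def rerank(X, n_heads=2, seed=0):
    """Encode top-N document feature vectors, decode to scores, sort descending."""
    rng = np.random.default_rng(seed)
    H = multi_head_attention(X, n_heads, rng)  # encoder (one unit, parallel over docs)
    w = rng.standard_normal(H.shape[1])
    scores = H @ w                             # toy decoder: vector -> scalar score
    order = np.argsort(-scores)                # final ranking by descending score
    return order, scores

# top-N = 5 documents from the initial Ranking-SVM stage, 8 features each
X = np.random.default_rng(1).standard_normal((5, 8))
order, scores = rerank(X)
```

Note that, as the abstract claims for the parallel encoder, the attention step processes all N documents at once, so nothing in the computation depends sequentially on list position.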
Keywords/Search Tags:Information retrieval, Learning to Rank, Listwise Context Reranking, Multi-Attention, Multi Decoder Framework