
The Design and Implementation of a Crowdsourcing Evaluation System Based on Text Retrieval Results

Posted on: 2020-02-01
Degree: Master
Type: Thesis
Country: China
Candidate: X Wang
GTID: 2518305732474034
Subject: Master of Engineering
Abstract/Summary:
With the continuing deepening and rapid development of information technology, the amount of data stored across industries has grown dramatically, driving the vigorous development of big data technologies. Text retrieval, an indispensable part of text big data processing and analysis, has likewise received increasing attention and is widely used in many fields. In a given application scenario, however, many different text retrieval algorithms may be available, so evaluating their retrieval results in order to decide which algorithm to use is particularly important. In related research, the results produced by different algorithms are generally evaluated either automatically or manually. Automated evaluation schemes are not only difficult to design but also depend on manually processed data, while purely manual evaluation imposes a heavy workload that consumes a great deal of researchers' time and can delay research. This thesis therefore designs and implements an assistant system for evaluating text retrieval results, intended to free researchers from this tedious comparison and evaluation work. The system adopts a crowdsourcing approach: it assigns the task of comparing the retrieval results of different algorithms to a crowd of participants who meet certain requirements, and combines the scores of all participants to produce a comparison of the algorithms.

Manual evaluation of retrieval algorithms mainly consists of comparing the relative quality of the result lists produced by different algorithms. The size of each result list is generally fixed, but different evaluation schemes entail different workloads and yield results of different accuracy: the higher the desired accuracy, the more work is required. This thesis designs and implements three optional evaluation schemes that trade off accuracy against task quantity, so users can choose the scheme that fits their needs.

The three schemes differ in how evaluation tasks are generated and how evaluation scores are processed, but they share the same overall workflow: (1) the user publishes an evaluation project and uploads the data source; (2) assessment tasks are generated according to the evaluation scheme the user selected; (3) the tasks are assigned to evaluators; (4) the evaluators' assessment data are processed; (5) the best algorithm is recommended. The task generation module in step (2) is the core contribution of this thesis, while step (3) embodies the core of the crowdsourcing concept. The crowdsourcing scenario considered here is one in which both the number of participants and the number of tasks are fixed, so for step (3) this thesis designs a simple task allocation scheme that gives each participant diversified task content while minimizing the number of tasks each participant must complete.
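The abstract does not spell out how tasks are built or how scores are combined; purely as an illustration, the following Python sketch shows one plausible shape for steps (2) and (4): generating one pairwise comparison task per pair of algorithms for each query, and aggregating evaluators' preferences by majority vote. All names here (make_pairwise_tasks, aggregate_votes, the dictionary layout) are hypothetical, not taken from the system itself.

    from collections import Counter
    from itertools import combinations

    def make_pairwise_tasks(results_by_algo, query):
        # results_by_algo maps an algorithm name to its ranked result
        # list for `query`; one task is emitted per pair of algorithms.
        tasks = []
        for algo_a, algo_b in combinations(sorted(results_by_algo), 2):
            tasks.append({
                "query": query,
                "algo_a": algo_a, "list_a": results_by_algo[algo_a],
                "algo_b": algo_b, "list_b": results_by_algo[algo_b],
            })
        return tasks

    def aggregate_votes(votes):
        # votes holds one preferred algorithm name per completed task;
        # the most frequently preferred algorithm wins overall.
        return Counter(votes).most_common(1)[0][0]

    tasks = make_pairwise_tasks(
        {"BM25": ["d3", "d7"], "TFIDF": ["d7", "d1"]}, "example query")
    print(aggregate_votes(["BM25", "BM25", "TFIDF"]))  # BM25

A majority vote is only one way to process scores; the thesis states that each of the three schemes processes evaluation scores differently.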
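To make the accuracy-versus-workload trade-off concrete, here is a minimal sketch assuming two hypothetical scheme granularities that the thesis does not name: comparing whole result lists costs C(k,2) tasks per query for k algorithms, while comparing individual items multiplies that by the list size.

    from math import comb

    def task_counts(num_algos, list_size, num_queries):
        # Hypothetical workload estimates: a coarse scheme compares
        # whole result lists once per algorithm pair; a finer, more
        # accurate scheme compares item by item, costing list_size
        # times as many tasks.
        pairs = comb(num_algos, 2)
        return {"list_level": pairs * num_queries,
                "item_level": pairs * list_size * num_queries}

    print(task_counts(3, 10, 50))
    # {'list_level': 150, 'item_level': 1500}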
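The abstract states the allocation goal for step (3) but not the algorithm. A minimal round-robin sketch that satisfies the stated goal (diversified task content per participant, minimal per-person load) might look as follows; every identifier here is an assumption rather than the thesis's actual design. Each task is shown to `redundancy` distinct evaluators so their scores can be cross-checked.

    def allocate_tasks(task_ids, evaluators, redundancy=3):
        # Walk the evaluator list cyclically: each task goes to the
        # next `redundancy` distinct evaluators, so no one sees the
        # same task twice and per-person workloads differ by at most
        # one task.
        assignments = {e: [] for e in evaluators}
        n, i = len(evaluators), 0
        for task in task_ids:
            for _ in range(min(redundancy, n)):
                assignments[evaluators[i % n]].append(task)
                i += 1
        return assignments

    print(allocate_tasks(["t1", "t2", "t3"],
                         ["ann", "bob", "cam", "dee"], redundancy=2))
    # {'ann': ['t1', 't3'], 'bob': ['t1', 't3'],
    #  'cam': ['t2'], 'dee': ['t2']}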
Keywords/Search Tags: Crowdsourcing, Text Retrieval, Evaluation, Recommendation Algorithm