
Research On The Quality Evaluation Method Of Crowdsourced Test Report Based On Text Analysis

Posted on: 2022-06-19
Degree: Master
Type: Thesis
Country: China
Candidate: H Ke
Full Text: PDF
GTID: 2518306350979029
Subject: FINANCE
Abstract/Summary:
With the digital transformation of businesses across industries, testing has become an important, indeed indispensable, part of the software life cycle. To resolve the tension between high costs, scarce testing personnel, and high demand, crowdsourced testing has become a better solution than traditional testing in many settings. In practice, however, crowdsourced test reports arrive in large numbers and with heavy redundancy, the importance of the reported defects is unknown, and software managers spend a great deal of time reviewing them. There is no effective, objective method for evaluating the quality of crowdsourced test reports, and no scientifically sound way to set the rewards of crowdsourced testers or to judge their working ability and attitude. Academic work on evaluating crowdsourced test reports focuses mainly on measuring the degree of standardization, lacks a complete indicator system, and mostly relies on subjective expert-survey methods for comprehensive evaluation. This thesis therefore aims to identify the factors that influence crowdsourced test report quality through comprehensive, objective analysis and to build an objective model for evaluating report quality.

In view of the above, this thesis studies three questions: (1) how to identify the defect types described in reports through clustering and so improve the efficiency of report reading; (2) which factors should be considered when evaluating the quality of a crowdsourced test report; (3) how to evaluate report quality comprehensively so that valuable reports can be screened out more effectively. The main results are as follows.

1. To improve the efficiency with which software managers process crowdsourced test reports, this thesis poses the problem of classifying reports through text clustering. It analyzes text clustering algorithms and applies the LDA topic model together with the K-means and DBSCAN algorithms over three word-vector representations: BERT, Word2Vec, and TF-IDF. Combining LDA with an improved K-means algorithm significantly reduces the number of iterations and also improves the clustering result as measured by the silhouette coefficient. By comparison, K-means achieves the best clustering effect, and the resulting number of clusters can serve as one indicator for quality evaluation.
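Of the configurations named above, the simplest to illustrate is TF-IDF vectors clustered with K-means and scored by the silhouette coefficient. The following is a minimal sketch assuming scikit-learn; the sample reports and the cluster count k are hypothetical, and the thesis additionally evaluates Word2Vec and BERT vectors, DBSCAN, and an LDA-improved variant of K-means.

```python
# Minimal sketch: TF-IDF + K-means clustering of crowdsourced test reports,
# scored with the silhouette coefficient (the metric cited in the abstract).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical report texts; real input would be the crowdsourced reports.
reports = [
    "App crashes when uploading an avatar image",
    "Avatar upload fails with a timeout error",
    "Login page freezes after entering a wrong password",
    "Password reset email is never delivered",
]

vectors = TfidfVectorizer().fit_transform(reports)

k = 2  # number of defect types; in practice chosen via the silhouette curve
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)

print("cluster labels:", model.labels_)
print("silhouette coefficient:", silhouette_score(vectors, model.labels_))
```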
2. To help software managers predict, with limited review resources, whether a test report should be selected for review, this thesis raises the problem of crowdsourced test report quality evaluation and asks which influencing factors such an assessment should consider. Based on a literature review and an analysis of crowdsourced test reports, it derives eight indicators, including defect impact, report standardization, defect reproducibility, and the crowdsourced tester's level, and organizes them into an indicator framework. This more complete and comprehensive summary of the factors affecting report quality can serve as a reference for subsequent research.

3. To evaluate the quality of crowdsourced test reports scientifically and objectively, this thesis constructs a multi-indicator comprehensive evaluation method. Four objective weighting methods and a BP neural network comprehensive evaluation method are applied to score report quality. The coefficient of variation method proves to be the better comprehensive evaluation method, achieving an overall prediction success rate of 71.83% and a high-score report recognition accuracy of 82.12%; its results can serve as data support for deciding whether a test report should be selected for review (a minimal sketch of this weighting scheme follows this summary).

In summary, this thesis clusters the text of crowdsourced test reports, derives a relatively comprehensive indicator system, studies a comprehensive quality evaluation model, and identifies the objective weighting method with the highest accuracy. It is hoped that this work enriches the indicator system for crowdsourced test report quality evaluation and supplements research on quality evaluation, and that it provides some help in improving the efficiency of the crowdsourced testing process, objectively evaluating report quality, and evaluating crowdsourced testers and distributing their rewards.
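The coefficient of variation method named in result 3 is a standard objective weighting scheme: each indicator's weight is its coefficient of variation (standard deviation over mean) normalized across all indicators, and a report's score is the weighted sum of its indicator values. The sketch below uses a hypothetical, already normalized indicator matrix; the thesis's exact preprocessing and composite scoring may differ.

```python
# Minimal sketch of coefficient-of-variation weighting for a
# multi-indicator quality score (hypothetical data, not the thesis's).
import numpy as np

# Rows: test reports; columns: quality indicators normalized to [0, 1].
X = np.array([
    [0.9, 0.7, 0.8, 0.6],
    [0.4, 0.8, 0.5, 0.9],
    [0.6, 0.5, 0.7, 0.4],
])

# Coefficient of variation per indicator: v_j = sigma_j / mu_j.
cv = X.std(axis=0) / X.mean(axis=0)

# Normalize to weights: w_j = v_j / sum(v).
weights = cv / cv.sum()

# Composite score per report: s_i = sum_j w_j * x_ij.
scores = X @ weights
print("weights:", weights.round(3))
print("scores:", scores.round(3))
```

Reports whose composite score exceeds a chosen threshold would be flagged for review, matching the screening use the abstract describes.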
Keywords/Search Tags: crowdsourced testing, text clustering, indicator system, quality evaluation