
Research On Evaluation And Integrated Optimization Of Code Static Analysis Tools

Posted on: 2018-05-30    Degree: Master    Type: Thesis
Country: China    Candidate: L Zhang    Full Text: PDF
GTID: 2428330569498581    Subject: Computer technology
Abstract/Summary:
Static analysis is an important means of software quality assurance, and a large number of excellent static analysis tools have been developed. However, different tools have different characteristics and strengths, and high false-positive and false-negative rates remain a major problem in existing static analysis techniques and tools. How to select the right tools efficiently and how to improve their accuracy have therefore become major challenges in using static analysis tools. Addressing these two issues, this thesis carries out research in four aspects.

1. Based on the 51,399 test cases of the Juliet benchmark, three well-known open-source static analysis tools and a mainstream commercial tool A are systematically evaluated along the dimensions of CWE type and code structure complexity, using Precision, Recall, F-Score, Discrimination Rate, Overlap Rate, and Coverage Rate (a metric sketch follows this abstract). First, the overall performance of the four tools on the test set is evaluated. The tools are then evaluated in detail on 91 CWE types and 48 code structure complexity types, yielding the defect types each tool is good at detecting. An overlap analysis is carried out to measure the similarity between tools and to study the defects they report in common. Finally, the improvement of a simple integration of the tools over any single tool is evaluated.

2. On top of the simple integration of the four static analysis tools, four machine learning classification algorithms, Naive Bayes, Logistic Regression, Decision Tree, and Support Vector Machine (SVM), are used to classify reported defects into true positives and false positives and to remove the false positives (a classifier sketch follows this abstract). The precision of the integrated defect report is greatly improved at the cost of a small reduction in overall recall.

3. Benchmark extension, a key problem in static analysis tool evaluation, is explored. First, the defect reports of two open-source tools and commercial tool A on seven open-source and closed-source code bases, including Nginx and the Kylin operating system, are manually reviewed; the cause and core code of more than 200 defects are recorded, and the three tools are evaluated on them. Second, a dedicated bug crawler for Bugzilla and JIRA is implemented; it provides an interface that reports whether a defect carries a patch file, which enables faster benchmark extension (a crawler sketch follows this abstract).

4. A code static analysis tool evaluation framework is designed and implemented to automate and visualize the evaluation work. The framework supports batch detection on benchmarks, parsing and standardization of defect reports, and tool evaluation (a pipeline sketch follows this abstract), and offers good usability and extensibility.
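As an illustration of the first study's basic metrics, the following minimal Python sketch computes Precision, Recall, and F-Score for one tool against a benchmark's ground truth. The representation of findings as (file, line, CWE) tuples is an assumption made for illustration, not the thesis's actual data format; Discrimination Rate, Overlap Rate, and Coverage Rate are omitted.

# Minimal sketch: Precision, Recall and F-Score of one tool on a labelled
# benchmark. Treating findings as (file, line, cwe) tuples is an
# illustrative assumption, not the thesis's actual report format.
def precision_recall_fscore(reported, ground_truth):
    """reported / ground_truth: sets of (file, line, cwe) defect locations."""
    true_positives = len(reported & ground_truth)
    false_positives = len(reported - ground_truth)
    false_negatives = len(ground_truth - reported)

    precision = true_positives / (true_positives + false_positives) if reported else 0.0
    recall = true_positives / (true_positives + false_negatives) if ground_truth else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall > 0 else 0.0)
    return precision, recall, f_score

if __name__ == "__main__":
    reported = {("a.c", 10, "CWE-121"), ("b.c", 20, "CWE-190")}
    truth = {("a.c", 10, "CWE-121"), ("c.c", 5, "CWE-476")}
    print(precision_recall_fscore(reported, truth))  # (0.5, 0.5, 0.5)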
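The second study's false-positive filter can be sketched with off-the-shelf classifiers. The use of scikit-learn, the random placeholder features, and the 70/30 split below are illustrative assumptions; the abstract does not specify the feature set or the library actually used.

# Minimal sketch of the false-positive filter: train the four classifier
# families named in the abstract to separate true from false positives in
# the merged defect report. Features here are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 12))            # one feature vector per reported defect
y = rng.integers(0, 2, size=500)     # 1 = true positive, 0 = false positive
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "NaiveBayes": GaussianNB(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    kept = clf.predict(X_test) == 1  # keep only defects predicted to be real
    print(f"{name}: accuracy={clf.score(X_test, y_test):.2f}, kept {kept.sum()} of {len(y_test)}")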
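The crawler interface in the third study, reporting whether a given bug carries a patch, can be sketched against the public REST APIs of Bugzilla and JIRA. The endpoint paths, field names, and the ".patch"/".diff" filename heuristic below follow those public APIs as commonly documented and are assumptions about the thesis's crawler, not a description of it.

# Minimal sketch of the patch-checking interface: query a Bugzilla or JIRA
# server and report whether a bug has a patch attachment.
import requests

def bugzilla_has_patch(base_url, bug_id):
    # Bugzilla REST API: attachments of one bug, each with an "is_patch" flag.
    r = requests.get(f"{base_url}/rest/bug/{bug_id}/attachment", timeout=10)
    r.raise_for_status()
    attachments = r.json().get("bugs", {}).get(str(bug_id), [])
    return any(a.get("is_patch") for a in attachments)

def jira_has_patch(base_url, issue_key):
    # JIRA REST API: fetch only the attachment field and check filenames
    # (the ".patch"/".diff" heuristic is an illustrative assumption).
    r = requests.get(f"{base_url}/rest/api/2/issue/{issue_key}",
                     params={"fields": "attachment"}, timeout=10)
    r.raise_for_status()
    attachments = r.json().get("fields", {}).get("attachment", [])
    return any(a["filename"].endswith((".patch", ".diff")) for a in attachments)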
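Finally, the fourth study's framework can be read as a three-stage pipeline: batch detection, report parsing and standardization, and evaluation. The class and method names below are hypothetical, the sketch reuses precision_recall_fscore from the metric sketch above, and the framework's visualization part is not shown.

# Hypothetical pipeline skeleton for the evaluation framework; only the
# three stages named in the abstract are modelled.
from dataclasses import dataclass

@dataclass(frozen=True)
class Defect:
    """Standardized defect record shared by all tool-report parsers."""
    tool: str
    file: str
    line: int
    cwe: str

class EvaluationPipeline:
    def __init__(self, tools, parsers):
        self.tools = tools      # tool name -> callable running it on a benchmark dir
        self.parsers = parsers  # tool name -> callable turning raw output into [Defect]

    def run(self, benchmark_dir, ground_truth):
        results = {}
        for name, run_tool in self.tools.items():
            raw_report = run_tool(benchmark_dir)            # batch detection
            defects = set(self.parsers[name](raw_report))   # parsing + standardization
            results[name] = precision_recall_fscore(        # evaluation (see metric sketch)
                {(d.file, d.line, d.cwe) for d in defects}, ground_truth)
        return results

Keeping one parser per tool behind a common Defect record is what would make such a framework easy to extend to new tools, which matches the extensibility claim in the abstract.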
Keywords/Search Tags: static analysis tool evaluation, tool integration optimization, benchmark extension, evaluation framework