
Bug Interpretation-based Input Generation For Software Testing

Posted on: 2022-10-14    Degree: Doctor    Type: Dissertation
Country: China    Candidate: D J Chen    Full Text: PDF
GTID: 1488306725471634    Subject: Computer Science and Technology
Abstract/Summary:
Software has become the infrastructure of our daily lives. However, software bugs are also common, and they degrade user experience, cause huge economic losses, or even endanger personal safety. It is therefore important to improve software quality. Software testing, one of the most effective software quality assurance techniques, is currently the most widely used technique in industrial practice, and automatic input generation for software testing has long been one of the most important topics in software engineering research. Although many techniques have been proposed over decades of research, and complex software systems (e.g., kernel systems and concurrent server systems) have been tested with them, a large number of bugs are still manifested and reported by users of these systems today. How to generate inputs that test complex systems more effectively thus remains a challenging problem.

We identify a limitation of existing work on generating test inputs. Although black-box testing and fuzzing are the first choices for testing complex systems because of their automation, they largely treat software systems, whose implementations and executions are extremely complicated, as black boxes, and they use coarse-grained information, usually the "input-output" or "input-coverage" relation, to guide input generation. This information cannot reflect the exact execution of an input, so inputs generated from such high-level information alone cannot drive software systems into deep states and manifest bugs. Model-based testing, which was introduced long ago, is in theory able to generate inputs that cover diverse states, but deploying it on complex systems is always challenging because constructing a model is tricky and costly.

This dissertation develops a framework for automatically generating test inputs for complex software systems based on the "bug interpretable" hypothesis, namely that most bugs in complex software systems have understandable interpretations. The framework contains a novel methodology for modeling the "input-bug interpretation" relation and can automatically transform between inputs and bug interpretations. Inputs generated by the framework are therefore effective in driving systems into potentially buggy states and in raising testing efficiency; the framework fills the gap between model-based testing and complex software systems. In summary, the framework is composed as follows.

1. In contrast to conventional model-based testing, which needs to abstract models from the full, complicated specification, we use execution to locate the interesting parts of the specification. Our modeling approach targets inputs, is guided by bug interpretations, and abstracts inputs into "input interpretations" that represent diverse triggering conditions of potential bugs.

2. We proposed the "bug interpretable" hypothesis as the foundation of our modeling methodology. Considering that most bugs in complex systems can be understood by developers and involve only a small part of the system specification, we can model the relation between (buggy) inputs and (the system behaviors contained in) bug interpretations. In this way, we introduced a novel modeling approach that treats inputs as the model subject and bug interpretations as the model target, and maps inputs to potential bug interpretations; that is, it abstracts the potential bug-triggering conditions of inputs. Since the modeling objective is the root causes of bugs, this model favors the generation of effective test inputs.
3. We introduced a general representation of bug interpretations and designed an easy-to-use modeling language, which reduces the workload and difficulty of modeling various complex systems. We found that a graph is a simple yet general representation of input interpretations that can express the various bug interpretations of diverse complex systems. We also found that the inputs of complex systems are usually sequential, and we designed the modeling language based on this observation. The language is a meta-meta language and can be used to write annotated context-free grammars that describe both the grammar and the modeling rules of inputs (as sketched below).

We also proposed an algorithm that automatically generates test inputs and conceptually has the effect of "sampling interpretations, then generating inputs". We quantified bug interpretations as values and introduced a sampling algorithm that produces diverse interpretations. We further proposed an input synthesis algorithm that returns an input for any given interpretation: we transformed the input generation problem into a program synthesis problem and solved it with a search strategy.

We instantiated the framework on real-world complex systems and constructed complete models for systems in different domains. We also showed how to optimize models with domain-specific knowledge so that inputs are generated more effectively. Experimental results show that our approach is promising: it detected multiple previously unknown bugs in complex file systems and concurrent server systems.
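The abstract describes the annotated modeling language only at a high level. The following is a minimal, hypothetical sketch of the general idea of a grammar whose productions are annotated with interpretation-graph edges; the grammar symbols, file-system operation names, and edge annotations are illustrative assumptions, not the dissertation's actual notation.

```python
# Hypothetical sketch: a context-free grammar describes the input space
# (sequences of file-system operations), and each production carries an
# annotation recording the edge it contributes to an "interpretation graph",
# i.e., a potential bug-triggering condition. Not the dissertation's notation.

import random

# Each nonterminal maps to a list of alternatives; an alternative is
# (symbols, annotation), where the annotation is an interpretation-graph
# edge or None.
GRAMMAR = {
    "workload": [
        (["op", "workload"], None),
        (["op"], None),
    ],
    "op": [
        (["create('f')"], ("init", "file-exists")),
        (["write('f')"], ("file-exists", "dirty-data")),
        (["fsync('f')"], ("dirty-data", "persisted")),
        (["crash()"], ("dirty-data", "crash-before-persist")),
    ],
}

def expand(symbol, graph, max_depth=6):
    """Randomly expand `symbol`, collecting annotation edges into `graph`."""
    if symbol not in GRAMMAR:                 # terminal: a concrete operation
        return [symbol]
    # At the depth limit, take the last (terminating) alternative.
    symbols, edge = GRAMMAR[symbol][-1] if max_depth <= 0 else random.choice(GRAMMAR[symbol])
    if edge is not None:
        graph.add(edge)
    out = []
    for s in symbols:
        out.extend(expand(s, graph, max_depth - 1))
    return out

if __name__ == "__main__":
    interp_graph = set()
    ops = expand("workload", interp_graph)
    print("input:", "; ".join(ops))
    print("interpretation edges:", sorted(interp_graph))
```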
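Likewise, the "sample an interpretation, then synthesize an input" loop is only described abstractly. Below is a self-contained, hypothetical sketch of such a loop under simplifying assumptions: interpretations are quantified as sets of ordered operation pairs, diversity is approximated by preferring the least-exercised pair, and synthesis is a random-restart search. None of these concrete choices are claimed to be the dissertation's algorithms.

```python
# Hypothetical sketch of the "sampling interpretations, generating inputs"
# loop: quantify interpretations, sample a diverse target, then search for an
# input that realizes it. All concrete choices here are illustrative only.

from collections import Counter
import itertools
import random

OPS = ["create", "write", "fsync", "rename", "crash"]

def interp(ops):
    """Quantify an input (operation sequence) as its interpretation: here,
    simply the set of ordered adjacent operation pairs it exercises."""
    return frozenset(zip(ops, ops[1:]))

seen = Counter()   # how often each interpretation element has been exercised

def sample_target():
    """Sample a target interpretation element, biased toward pairs that
    previously generated inputs exercised least (a diversity heuristic)."""
    pairs = list(itertools.product(OPS, repeat=2))
    return min(pairs, key=lambda p: (seen[p], random.random()))

def synthesize(target, attempts=500, length=6):
    """Random-restart search for an input whose interpretation contains
    `target`; a stand-in for the search-based synthesis step."""
    for _ in range(attempts):
        ops = [random.choice(OPS) for _ in range(length)]
        found = interp(ops)
        if target in found:
            seen.update(found)
            return ops
    return None

if __name__ == "__main__":
    for _ in range(5):
        t = sample_target()
        print("target pair:", t, "-> input:", synthesize(t))
```

In the framework described by the abstract, the synthesis step would presumably be guided by the grammar-based input model rather than by blind random sampling; the sketch only illustrates the overall sample-then-synthesize structure.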
Keywords/Search Tags:Software Testing, Model-based Testing, File System, Multithreaded Programs