
A Model Checking Based Approach To Detect Safety-Critical Adversarial Examples On Autonomous Driving Systems

Posted on: 2024-08-06    Degree: Master    Type: Thesis
Country: China    Candidate: Z Huang    Full Text: PDF
GTID: 2542307052495974    Subject: Software engineering
Abstract/Summary:
Much research has revealed a new kind of threat to deep learning models from adversarial attacks: inputs perturbed with slight disturbances can make a deep learning model return wrong results with high confidence. Since the disturbances are hardly recognizable by humans or by other measurements based on semantic features, such adversarial attacks are difficult to detect in advance with conventional methods. The safety of autonomous driving systems (ADS) with machine learning (ML) components is threatened by adversarial examples. Mainstream defense techniques against such threats concern the adversarial examples that make the ML model fail. However, such an adversarial example does not necessarily cause safety problems for the entire ADS. Therefore, a method for detecting the adversarial examples that actually lead the ADS to unsafe states would help improve defense techniques.

This thesis proposes an approach based on model checking to detect such safety-critical adversarial examples in typical autonomous driving scenarios. The autonomous driving scenario and the semantic effect of adversarial attacks on object detection are specified with a network of timed automata. The safety properties of the ADS are specified and verified with the UPPAAL model checker to determine whether the adversarial examples lead to safety problems. The model checking result can reveal, for a given scenario, the critical time interval during which an adversarial attack will lead to an unsafe state. The approach is demonstrated on a popular adversarial attack algorithm in a typical autonomous driving scenario, and its effectiveness is shown through a series of simulations on the CARLA platform.

The main contributions include:
1. A formal definition of safety-critical adversarial examples (SCAEs) is proposed in terms of model checking notations. It relates adversarial examples to the safety-critical properties of the entire system.
2. A novel model-checking-based approach is proposed to detect SCAEs in a given autonomous driving scenario. For a particular AE-generating algorithm, the approach can return the target and timing of the adversarial attack that ensure the violation of the safety-critical property, which locates the corresponding SCAEs.
3. Experiments are conducted with the model checker UPPAAL and the simulation platform CARLA. The simulation results show that the SCAEs found by the model checker indeed cause collision accidents.
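The critical-interval idea above can be illustrated with a minimal sketch. It assumes a simple discretized car-following scenario in which an adversarial example suppresses the ego vehicle's object detection during a chosen time window; exhaustively checking candidate windows for reachability of a collision state stands in for the thesis's UPPAAL-based verification. All names, dynamics, and numbers here are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: search for attack time windows that drive a simple
# car-following scenario into a collision (unsafe) state. This brute-force
# check over a discretized model plays the role of the UPPAAL reachability
# query in the thesis; every parameter below is an illustrative assumption.

def simulate(attack_start, attack_end, steps=100, dt=0.1):
    """Return True if the scenario reaches a collision under the attack window."""
    ego_pos, ego_v = 0.0, 20.0        # ego vehicle state (m, m/s)
    lead_pos, lead_v = 50.0, 10.0     # slower lead vehicle ahead
    brake = 8.0                       # ego braking deceleration (m/s^2)
    for k in range(steps):
        t = k * dt
        gap = lead_pos - ego_pos
        if gap <= 0:
            return True               # collision: unsafe state reached
        attacked = attack_start <= t < attack_end
        # The adversarial example suppresses detection, so the ego
        # does not brake while the attack window is active.
        detected = (gap < 30.0) and not attacked
        if detected:
            ego_v = max(0.0, ego_v - brake * dt)
        ego_pos += ego_v * dt
        lead_pos += lead_v * dt
    return False                      # horizon ends without a collision

def critical_intervals(horizon=10.0, width=4.0, step=0.5):
    """Enumerate candidate attack windows; keep those that cause a crash."""
    unsafe = []
    t0 = 0.0
    while t0 + width <= horizon:
        if simulate(t0, t0 + width):
            unsafe.append((t0, t0 + width))
        t0 += step
    return unsafe
```

In this toy model an unattacked ego always brakes in time, while a long enough attack window near the closing phase produces a crash, mirroring the thesis's notion of a critical time interval for a given scenario.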
Keywords/Search Tags:Autonomous driving, Model checking, Adversarial examples