
Adversarial Robustness Of Distance-based Machine Learning Models

Posted on: 2021-03-27    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L Wang    Full Text: PDF
GTID: 1488306500466754    Subject: Computer Science and Technology
Abstract/Summary:
Real-world applications of machine learning models are complicated, and their environments can be unstable. In mission-critical systems, machine learning models suffer safety risks if we simply assume that the training and testing environments are identical. The problem of adversarial perturbations is a typical example: machine learning models, especially deep neural networks, are vulnerable to small adversarial perturbations, i.e., a small, carefully crafted perturbation added to the input may significantly change the prediction result. In other words, machine learning models lack adversarial robustness. This thesis focuses on the adversarial robustness of machine learning models based on distances. Despite being widely used, distance-based models have received little study from the robustness perspective. We start with robustness evaluation, then study robustness enhancement, and finally extend the results to general machine learning models, including deep neural networks.

1. Robustness evaluation for 1-NN: Existing robustness evaluation methods for the nearest neighbor classifier (1-NN) depend on differentiable substitutes for 1-NN and are far from optimal. We formalize robustness evaluation of 1-NN as a list of convex quadratic programming problems. Within the primal-dual framework, we propose an efficient algorithm that exactly computes the minimal adversarial perturbation, yielding both the optimal attack method and the optimal robustness verification method.

2. Robustness verification for K-NN: Unlike 1-NN, the time complexity of the optimal robustness verification method for K-NN grows exponentially with K. To tackle this issue, we propose two robustness verification methods: constraint relaxation for K-NN, and randomized smoothing for smoothed K-NN. The two methods complement each other, applying to the small-K and large-K regimes respectively, and achieve favorable performance.

3. Robustness enhancement for metric learning: Traditional metric learning methods do not take adversarial robustness into consideration. We extend our robustness verification method for K-NN to the metric learning problem. Building on this verification method, we propose a novel metric learning method, ARML (Adversarially Robust Metric Learning), which can also be viewed as a robustness enhancement method. ARML not only improves classification accuracy but also enhances adversarial robustness, in terms of both empirical and certified robustness.

4. Robustness evaluation for black-box models: Black-box attacks serve as a general way to evaluate the robustness of machine learning models. Through a theoretical analysis of the adversarial robustness of distance-based models, we find that the minimal adversarial perturbation tends to lie in a small subspace. This inspires us to improve the efficiency of black-box attacks by constraining the search space. The technique not only reduces attack cost but also improves attack success rates. We thereby extend our results on the adversarial robustness of distance-based models to the general case of black-box models.
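The 1-NN formulation in point 1 can be made concrete. The sketch below is not the thesis's primal-dual algorithm (which the abstract does not detail); it simply solves each convex QP directly with SciPy's SLSQP solver, exploiting the fact that the squared-distance constraints are linear in the perturbed point: ||z - x_j||^2 <= ||z - x_i||^2 reduces to 2(x_i - x_j)·z <= ||x_i||^2 - ||x_j||^2.

```python
import numpy as np
from scipy.optimize import minimize

def min_perturbation_1nn(x, X, y):
    """Minimal L2 adversarial perturbation against a Euclidean 1-NN classifier.

    For each candidate target x_j labeled differently from the current
    prediction, solve the convex QP
        min_z ||z - x||^2   s.t.   ||z - x_j||^2 <= ||z - x_i||^2
    over all x_i whose label differs from y_j (linear constraints in z),
    then keep the smallest solution over all j. Boundary ties are accepted,
    as in the non-strict formulation.
    """
    pred = y[np.argmin(np.linalg.norm(X - x, axis=1))]
    best = None
    for j in np.where(y != pred)[0]:
        others = np.where(y != y[j])[0]
        # ||z - x_j||^2 <= ||z - x_i||^2  <=>  2 (x_i - x_j) . z <= ||x_i||^2 - ||x_j||^2
        A = 2.0 * (X[others] - X[j])
        b = (X[others] ** 2).sum(axis=1) - (X[j] ** 2).sum(axis=1)
        res = minimize(
            lambda z: ((z - x) ** 2).sum(),
            x0=X[j].astype(float),  # x_j itself is always feasible
            constraints={"type": "ineq", "fun": lambda z, A=A, b=b: b - A @ z},
            method="SLSQP",
        )
        if res.success and (best is None or
                            ((res.x - x) ** 2).sum() < ((best - x) ** 2).sum()):
            best = res.x
    return best - x  # perturbation delta; x + delta reaches the decision boundary
```

For instance, with one training point per class at (0, 0) and (2, 0), the minimal perturbation of the first point has L2 norm 1: it moves the input exactly to the bisecting boundary.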
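The smoothing route in point 2 can likewise be sketched. The abstract does not give the thesis's construction, so the following is a minimal Monte-Carlo illustration in the style of Gaussian randomized smoothing (Cohen et al.), applied to a plain Euclidean K-NN base classifier; a rigorous certificate would replace the empirical vote fraction with a confidence lower bound.

```python
import numpy as np
from scipy.stats import norm

def certify_smoothed_knn(x, X, y, k=3, sigma=0.5, n_samples=2000, seed=0):
    """Monte-Carlo sketch of randomized smoothing for a K-NN classifier.

    Classify Gaussian-perturbed copies of x with plain K-NN, take the
    majority class c of the smoothed votes, and report the L2 radius
    sigma * Phi^{-1}(p), where p estimates P[f(x + noise) = c].
    """
    rng = np.random.default_rng(seed)
    n_classes = int(y.max()) + 1
    votes = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        z = x + sigma * rng.standard_normal(x.shape)
        nearest = np.argsort(np.linalg.norm(X - z, axis=1))[:k]
        votes[np.bincount(y[nearest], minlength=n_classes).argmax()] += 1
    c = int(votes.argmax())
    p = min(votes[c] / n_samples, 1.0 - 1e-3)  # clip to keep Phi^{-1} finite
    radius = sigma * norm.ppf(p) if p > 0.5 else 0.0
    return c, radius
```

On two well-separated clusters, the smoothed classifier returns the cluster's class with a strictly positive certified radius.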
Keywords/Search Tags: machine learning, adversarial robustness, nearest neighbor, metric learning, black-box attack