Adversarial Perturbation Attacks on Nested Dichotomies Classification Systems
Ismail R. Alkhouri, Alvaro Velasquez, George K. Atia
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP), published 2021-10-25
DOI: 10.1109/mlsp52302.2021.9596336
Citation count: 1
Abstract
The study of the robustness of deep classifiers has exposed their vulnerability to perturbation attacks. Prior work has largely focused on adversarial attacks targeting one-stage classifiers. By contrast, here we investigate the susceptibility of Nested Dichotomies Classifiers (NDCs), which decompose a multiclass problem into a collection of binary ones, to such attacks. First, we show that the overall regret of an NDC is the sum of the regrets of the binary classifiers along the path from the root to the leaf nodes of these dichotomies. Then, we formulate an optimization program for generating perturbations that fool NDCs and propose an algorithmic solution based on a convex relaxation. A solution is obtained by developing an ADMM-based solver for the resulting convex programs. Our experiments show that NDCs are more robust than their single-stage counterparts, in that the optimal perturbations inducing misclassification are more perceptible.
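To make the nested-dichotomies structure concrete, below is a minimal Python sketch of an NDC built from scikit-learn binary classifiers. This is an illustrative assumption, not the paper's implementation: the random class splits and the names DichotomyNode, build_tree, and predict_one are hypothetical choices used only to show how a multiclass problem decomposes into a tree of binary ones.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class DichotomyNode:
    """Internal node of a nested-dichotomies tree: a binary classifier
    that routes an input toward one of two disjoint class subsets."""
    def __init__(self, classes):
        self.classes = classes   # classes reachable from this node
        self.clf = None          # binary classifier (fit on internal nodes)
        self.left = None         # subtree for the "0" side
        self.right = None        # subtree for the "1" side

def build_tree(X, y, classes, rng):
    node = DichotomyNode(classes)
    if len(classes) == 1:        # leaf: a single class remains
        return node
    # Randomly split the class set into two non-empty halves
    # (one common way to instantiate a nested dichotomy).
    perm = rng.permutation(classes)
    left_set = set(perm[: len(perm) // 2])
    right_set = set(perm[len(perm) // 2 :])
    # Train this node's binary classifier on the samples it can reach.
    mask = np.isin(y, list(classes))
    Xs, ys = X[mask], y[mask]
    targets = np.array([0 if c in left_set else 1 for c in ys])
    node.clf = LogisticRegression().fit(Xs, targets)
    node.left = build_tree(X, y, list(left_set), rng)
    node.right = build_tree(X, y, list(right_set), rng)
    return node

def predict_one(node, x):
    # Follow the root-to-leaf path: each binary decision commits the
    # input to one side of a dichotomy; the reached leaf is the label.
    while len(node.classes) > 1:
        side = node.clf.predict(x.reshape(1, -1))[0]
        node = node.right if side == 1 else node.left
    return node.classes[0]

# Usage sketch (X_train, y_train assumed available):
# rng = np.random.default_rng(0)
# tree = build_tree(X_train, y_train, sorted(set(y_train)), rng)
# label = predict_one(tree, X_train[0])
```

Every prediction traverses a single root-to-leaf path, so the NDC's decision is a conjunction of binary decisions; this is the structure behind the paper's additive regret decomposition, and it is why a perturbation attack on an NDC must account for several binary classifiers at once rather than a single decision boundary.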