Adversities in Abstract Interpretation: Accommodating Robustness by Abstract Interpretation
ACM Transactions on Programming Languages and Systems
Impact Factor 1.5 · CAS Tier 2, Computer Science · JCR Q3, Computer Science (Software Engineering)
{"title":"Adversities in Abstract Interpretation: Accommodating Robustness by Abstract Interpretation: ACM Transactions on Programming Languages and Systems: Vol 0, No ja","authors":"Roberto Giacobazzi, Isabella Mastroeni, Elia Perantoni","doi":"10.1145/3649309","DOIUrl":null,"url":null,"abstract":"<p>Robustness is a key and desirable property of any classifying system, in particular, to avoid the ever-rising threat of adversarial attacks. Informally, a classification system is robust when the result is not affected by the perturbation of the input. This notion has been extensively studied, but little attention has been dedicated to <i>how</i> the perturbation affects the classification. The interference between perturbation and classification can manifest in many different ways, and its understanding is the main contribution of the present paper. Starting from a rigorous definition of a standard notion of robustness, we build a formal method for accommodating the required degree of robustness — depending on the amount of error the analyst may accept on the classification result. Our idea is to precisely model this error as an <i>abstraction</i>. This leads us to define weakened forms of robustness also in the context of programming languages, particularly in language-based security — e.g., information-flow policies — and in program verification. The latter is possible by moving from a quantitative (standard) model of perturbation to a novel <i>qualitative</i> model, given by means of the notion of abstraction. As in language-based security, we show that it is possible to confine adversities, which means to characterize the degree of perturbation (and/or the degree of class generalization) for which the classifier may be deemed <i>adequately</i> robust. We conclude with an experimental evaluation of our ideas, showing how weakened forms of robustness apply to state-of-the-art image classifiers.</p>","PeriodicalId":50939,"journal":{"name":"ACM Transactions on Programming Languages and Systems","volume":"22 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2024-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Programming Languages and Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3649309","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
Robustness is a key and desirable property of any classifying system, in particular for withstanding the ever-rising threat of adversarial attacks. Informally, a classification system is robust when its result is not affected by perturbations of the input. This notion has been extensively studied, but little attention has been devoted to how the perturbation affects the classification. The interference between perturbation and classification can manifest itself in many different ways, and understanding it is the main contribution of the present paper. Starting from a rigorous definition of the standard notion of robustness, we build a formal method for accommodating the required degree of robustness, depending on how much error the analyst is willing to accept in the classification result. Our idea is to model this error precisely as an abstraction. This leads us to define weakened forms of robustness also in the context of programming languages, particularly in language-based security (e.g., information-flow policies) and in program verification. The latter is possible by moving from a quantitative (standard) model of perturbation to a novel qualitative model, given by means of the notion of abstraction. As in language-based security, we show that it is possible to confine adversities, that is, to characterize the degree of perturbation (and/or the degree of class generalization) for which the classifier may be deemed adequately robust. We conclude with an experimental evaluation of our ideas, showing how weakened forms of robustness apply to state-of-the-art image classifiers.
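To make these notions concrete, here is one way to formalize them, in our own notation rather than necessarily the paper's: C is the classifier, P_ε the ε-perturbation of an input, and α the abstraction on output classes. Standard robustness of C at an input x requires

    \[ \forall x' \in P_\varepsilon(x).\ C(x') = C(x), \]

while the abstraction-weakened form requires equality only up to α:

    \[ \forall x' \in P_\varepsilon(x).\ \alpha(C(x')) = \alpha(C(x)). \]

Taking α to be the identity recovers standard robustness, while a coarser α (for instance, one that merges closely related classes) tolerates exactly the misclassifications the analyst is willing to accept. As a minimal empirical sketch under the same assumptions (classify, perturbations, and abstract are hypothetical names, not an API from the paper), the weakened check can be sampled in Python as:

    def is_weakly_robust(classify, perturbations, x, abstract=lambda c: c):
        # Compare the abstracted class of x against each sampled perturbation;
        # with the identity abstraction this is the standard robustness check.
        base = abstract(classify(x))
        return all(abstract(classify(xp)) == base for xp in perturbations(x))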
About the journal:
ACM Transactions on Programming Languages and Systems (TOPLAS) is the premier journal for reporting recent research advances in the areas of programming languages and of systems that assist the task of programming. Papers can be either theoretical or experimental in style, but in either case they must contain innovative and novel content that advances the state of the art of programming languages and systems. We also invite strictly experimental papers that compare existing approaches, as well as tutorial and survey papers. The scope of TOPLAS includes, but is not limited to, the following subjects:
language design for sequential and parallel programming
programming language implementation
programming language semantics
compilers and interpreters
runtime systems for program execution
storage allocation and garbage collection
languages and methods for writing program specifications
languages and methods for secure and reliable programs
testing and verification of programs