{"title":"动态防御和对抗性例子的可转移性","authors":"Sam Thomas, Farnoosh Koleini, Nasseh Tabrizi","doi":"10.1109/TPS-ISA56441.2022.00041","DOIUrl":null,"url":null,"abstract":"Artificial learners are generally open to adversarial attacks. The field of adversarial machine learning focuses on this study when a machine learning system is in an adversarial environment. In fact, machine learning systems can be trained to produce adversarial inputs against such a learner, which is frequently done. Although can take measures to protect a machine learning system, the protection is not complete and is not guaranteed to last. This is still an open issue due to the transferability of adversarial examples. The main goal of this study is to examine the effectiveness of black-box attacks on a dynamic model. This study investigates the currently intractable problem of transferable adversarial examples, as well as a little- explored approach that could provide a solution, implementing the Fast Model-based Online Manifold Regularization (FMOMR) algorithm which is a recent published algorithm that seemed to fit the needs of our experiment.","PeriodicalId":427887,"journal":{"name":"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dynamic defenses and the transferability of adversarial examples\",\"authors\":\"Sam Thomas, Farnoosh Koleini, Nasseh Tabrizi\",\"doi\":\"10.1109/TPS-ISA56441.2022.00041\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial learners are generally open to adversarial attacks. The field of adversarial machine learning focuses on this study when a machine learning system is in an adversarial environment. In fact, machine learning systems can be trained to produce adversarial inputs against such a learner, which is frequently done. Although can take measures to protect a machine learning system, the protection is not complete and is not guaranteed to last. This is still an open issue due to the transferability of adversarial examples. The main goal of this study is to examine the effectiveness of black-box attacks on a dynamic model. 
This study investigates the currently intractable problem of transferable adversarial examples, as well as a little- explored approach that could provide a solution, implementing the Fast Model-based Online Manifold Regularization (FMOMR) algorithm which is a recent published algorithm that seemed to fit the needs of our experiment.\",\"PeriodicalId\":427887,\"journal\":{\"name\":\"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-05-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TPS-ISA56441.2022.00041\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TPS-ISA56441.2022.00041","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Artificial learners are generally vulnerable to adversarial attacks. The field of adversarial machine learning studies machine learning systems that operate in adversarial environments. In fact, a machine learning system can be trained to produce adversarial inputs against such a learner, and this is frequently done. Although measures can be taken to protect a machine learning system, the protection is neither complete nor guaranteed to last. Because of the transferability of adversarial examples, this remains an open problem. The main goal of this study is to examine the effectiveness of black-box attacks on a dynamic model. It investigates the currently intractable problem of transferable adversarial examples, as well as a little-explored approach that could provide a solution, by implementing the Fast Model-based Online Manifold Regularization (FMOMR) algorithm, a recently published algorithm that fits the needs of our experiment.
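Transferability is what makes the black-box setting dangerous: an adversarial example crafted against a local surrogate model often also fools a separate target model whose gradients the attacker never sees. The sketch below is not the paper's FMOMR implementation; it is a minimal illustration of a transfer test using the standard fast gradient sign method (FGSM) in PyTorch, where the surrogate and target models and the perturbation budget eps=0.03 are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float) -> torch.Tensor:
    """Craft adversarial examples on a white-box surrogate with one FGSM step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the surrogate's loss,
    # then clamp back to the valid input range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def transfer_rate(surrogate: nn.Module, target: nn.Module,
                  x: torch.Tensor, y: torch.Tensor,
                  eps: float = 0.03) -> float:
    """Fraction of surrogate-crafted examples that also fool the black-box target."""
    x_adv = fgsm_attack(surrogate, x, y, eps)
    with torch.no_grad():
        preds = target(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```

A high transfer rate against a static target is the baseline the study is concerned with; in the dynamic setting the paper examines, the target additionally updates itself online between queries, which is the property the FMOMR-based defense is meant to provide.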