{"title":"对抗性机器学习","authors":"L. Reznik","doi":"10.1002/9781119771579.ch6","DOIUrl":null,"url":null,"abstract":"The chapter introduces novel adversarial machine learning attacks and the taxonomy of its cases, where machine learning is used against AI‐based classifiers to make them fail. It investigates a possible data corruption and quality decrease influence on the classifier performance. The module proposes data restoration procedures and other measures to protect against adversarial attacks. Generative adversarial networks are introduced, and their use is discussed. Multiple algorithm examples and use cases are included.","PeriodicalId":318786,"journal":{"name":"Intelligent Security Systems","volume":"70 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarial Machine Learning\",\"authors\":\"L. Reznik\",\"doi\":\"10.1002/9781119771579.ch6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The chapter introduces novel adversarial machine learning attacks and the taxonomy of its cases, where machine learning is used against AI‐based classifiers to make them fail. It investigates a possible data corruption and quality decrease influence on the classifier performance. The module proposes data restoration procedures and other measures to protect against adversarial attacks. Generative adversarial networks are introduced, and their use is discussed. Multiple algorithm examples and use cases are included.\",\"PeriodicalId\":318786,\"journal\":{\"name\":\"Intelligent Security Systems\",\"volume\":\"70 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intelligent Security Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/9781119771579.ch6\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Security Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/9781119771579.ch6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The chapter introduces novel adversarial machine learning attacks and a taxonomy of their cases, in which machine learning is turned against AI-based classifiers to make them fail. It investigates how data corruption and quality degradation can affect classifier performance. The chapter proposes data restoration procedures and other measures for protecting against adversarial attacks. Generative adversarial networks are introduced and their uses discussed. Multiple algorithm examples and use cases are included.
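To make the attack idea concrete, the sketch below perturbs an input in the spirit of the fast gradient sign method (FGSM), one of the canonical adversarial attacks on classifiers. The logistic-regression model, its weights, and the epsilon value are illustrative assumptions, not the chapter's own example; the point is only that a small step along the sign of the input gradient can flip a correct prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a binary logistic-regression model (toy setup, assumed here).

    With p = sigmoid(w.x + b) and cross-entropy loss L,
    dL/dx = (p - y) * w, so the adversarial example is
    x_adv = x + eps * sign(dL/dx).
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and a point it classifies correctly (score > 0.5 means class 1).
w, b = np.array([2.0, -1.5]), 0.1
x, y = np.array([1.0, 0.5]), 1.0
print("clean score:      ", sigmoid(np.dot(w, x) + b))

# A small signed step along the loss gradient pushes the point across the
# decision boundary and flips the prediction.
x_adv = fgsm_perturb(x, y, w, b, eps=0.8)
print("adversarial score:", sigmoid(np.dot(w, x_adv) + b))
```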
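The corruption study can be illustrated in the same hedged spirit: a classifier trained on clean data is evaluated on test inputs with increasing Gaussian feature noise, and its accuracy is measured at each noise level. The synthetic dataset and additive-noise model are assumptions chosen for brevity, not the chapter's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a classifier on clean synthetic data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Corrupt the test features with growing Gaussian noise and watch
# accuracy degrade as data quality decreases.
rng = np.random.default_rng(0)
for sigma in (0.0, 0.5, 1.0, 2.0):
    X_noisy = X_te + rng.normal(scale=sigma, size=X_te.shape)
    print(f"noise sigma={sigma}: accuracy={clf.score(X_noisy, y_te):.3f}")
```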
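Finally, a compact generative adversarial network can be sketched as two models trained against each other: a generator that maps noise to samples and a discriminator that scores samples as real or fake. The architecture, target distribution, and hyperparameters below are assumptions for a minimal runnable demonstration, not the chapter's networks.

```python
import torch
import torch.nn as nn

# Generator: noise vector -> 1-D sample; Discriminator: sample -> P(real).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # target distribution N(3, 0.5)
    fake = G(torch.randn(64, 8))
    # Discriminator step: push real toward 1, generated samples toward 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated samples' mean should drift toward the target mean of 3.0.
print(G(torch.randn(1000, 8)).mean().item())
```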