Authors: Gerasimov V.M., Maslova M.А., Khalilayeva Е.I.
Journal: Research Result Information Technologies
DOI: 10.18413/2518-1092-2022-8-2-0-7
Published: 2023-06-30 (Journal Article)
PROTECTION AGAINST ADVERSARIAL ATTACKS ON AUDIO AND IMAGES IN ARTIFICIAL INTELLIGENCE MODELS USING THE SGEC METHOD
In the modern world, artificial intelligence (AI) systems increasingly face the risk of adversarial attacks on audio and images. This article explores this issue and presents the SGEC method as a means of minimizing these risks. Various types of attacks on audio and images are discussed, including label manipulation, white-box and black-box attacks, leakage through trained models, and hardware-level attacks. The main focus is on the SGEC method, which encrypts data and ensures its integrity within AI models. The article also examines other approaches to protecting audio and images, such as dual verification and ensemble methods, access restriction and data anonymization, and the use of provably robust AI models.
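The abstract does not describe how the white-box attacks it mentions are constructed, so as a generic illustration (not the article's SGEC method, whose details are not given here), the sketch below shows the classic one-step fast gradient sign method (FGSM) against a toy logistic-regression "model". All names, weights, and the epsilon value are hypothetical choices for the example; real attacks target deep audio/image models, but the mechanism — nudging the input along the sign of the loss gradient — is the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One-step FGSM: move x in the direction that increases the loss."""
    p = sigmoid(w @ x)        # model's probability for class 1
    grad_x = (p - y) * w      # gradient of cross-entropy loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Toy "model": fixed weights and a clean input classified as class 1.
w = np.array([2.0, -1.0, 0.5])   # hypothetical model weights
x = np.array([1.0, 0.2, 0.4])    # clean input
y = 1.0                          # true label

x_adv = fgsm_perturb(x, y, w, eps=0.9)
print(sigmoid(w @ x) > 0.5)      # clean input: classified as class 1
print(sigmoid(w @ x_adv) > 0.5)  # perturbed input: classification flips
```

With this (deliberately large) epsilon the perturbation flips the prediction; against image classifiers the same attack succeeds with perturbations small enough to be invisible, which is why defenses such as the encryption, integrity checks, and provably robust models surveyed in the article are needed.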