{"title":"基于快速梯度符号的人工智能对抗性攻击分析","authors":"Sigit Wibawa","doi":"10.58291/ijec.v2i2.120","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) has become a key driving force in sectors from transportation to healthcare, and is opening up tremendous opportunities for technological advancement. However, behind this promising potential, AI also presents serious security challenges. This article aims to investigate attacks on AI and security challenges that must be faced in the era of artificial intelligence, this research aims to simulate and test the security of AI systems due to adversarial attacks. We can use the Python programming language for this, using several libraries and tools. One that is very popular for testing the security of AI models is CleverHans, and by understanding those threats we can protect the positive developments of AI in the future. this research provides a thorough understanding of attacks in AI technology especially in neural networks and machine learning, and the security challenge we face is that adding a little interference to the input data causes the AI model to produce wrong predictions in adversarial attacks there is the FGSM model which with an epsilon value of 0.1 causes the model suffered a drastic reduction in accuracy of around 66%, which means that the attack managed to mislead the model and lead to incorrect predictions. in the future understanding this threat is the key to protecting the positive development of AI. With a thorough understanding of AI attacks and the security challenges we address, we can build a solid foundation to effectively address these threats.","PeriodicalId":388974,"journal":{"name":"International Journal of Engineering Continuity","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method\",\"authors\":\"Sigit Wibawa\",\"doi\":\"10.58291/ijec.v2i2.120\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) has become a key driving force in sectors from transportation to healthcare, and is opening up tremendous opportunities for technological advancement. However, behind this promising potential, AI also presents serious security challenges. This article aims to investigate attacks on AI and security challenges that must be faced in the era of artificial intelligence, this research aims to simulate and test the security of AI systems due to adversarial attacks. We can use the Python programming language for this, using several libraries and tools. One that is very popular for testing the security of AI models is CleverHans, and by understanding those threats we can protect the positive developments of AI in the future. this research provides a thorough understanding of attacks in AI technology especially in neural networks and machine learning, and the security challenge we face is that adding a little interference to the input data causes the AI model to produce wrong predictions in adversarial attacks there is the FGSM model which with an epsilon value of 0.1 causes the model suffered a drastic reduction in accuracy of around 66%, which means that the attack managed to mislead the model and lead to incorrect predictions. in the future understanding this threat is the key to protecting the positive development of AI. 
With a thorough understanding of AI attacks and the security challenges we address, we can build a solid foundation to effectively address these threats.\",\"PeriodicalId\":388974,\"journal\":{\"name\":\"International Journal of Engineering Continuity\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Engineering Continuity\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.58291/ijec.v2i2.120\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Engineering Continuity","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.58291/ijec.v2i2.120","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method
Artificial intelligence (AI) has become a key driving force in sectors from transportation to healthcare, opening up tremendous opportunities for technological advancement. Behind this promising potential, however, AI also presents serious security challenges. This article investigates attacks on AI and the security challenges that must be faced in the era of artificial intelligence; the research simulates adversarial attacks in order to test the security of AI systems. The experiments use the Python programming language together with several libraries and tools, most notably CleverHans, a widely used library for testing the security of AI models. The research provides a thorough understanding of attacks on AI technology, especially neural networks and machine learning models. The central security challenge is that adding a small perturbation to the input data can cause an AI model to produce wrong predictions. Using the Fast Gradient Sign Method (FGSM) with an epsilon value of 0.1, the model's accuracy dropped drastically, by around 66%, meaning the attack successfully misled the model into incorrect predictions. Understanding this threat is key to protecting the positive development of AI; with a thorough grasp of AI attacks and the security challenges they pose, we can build a solid foundation to address these threats effectively.
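To make the reported experiment concrete: FGSM perturbs each input by one step in the direction of the loss gradient's sign, x_adv = x + eps * sign(grad_x J(theta, x, y)). Below is a minimal sketch of such an evaluation, assuming CleverHans 4.x with its TensorFlow 2 API and a trained Keras classifier; the names `model`, `x_test`, and `y_test`, the input range [0, 1], and the helper `evaluate_fgsm` are illustrative placeholders, not artifacts from the paper.

```python
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

def evaluate_fgsm(model, x_test, y_test, eps=0.1):
    """Compare clean vs. adversarial accuracy under an L-infinity FGSM attack.

    Hypothetical helper: `model` is a trained Keras classifier returning class
    scores, `x_test` are inputs scaled to [0, 1], `y_test` are integer labels.
    """
    # Inputs must be float tensors so the attack can differentiate w.r.t. them.
    x = tf.convert_to_tensor(x_test, dtype=tf.float32)

    # Craft adversarial examples under an L-infinity budget of eps:
    # x_adv = x + eps * sign(grad_x J(theta, x, y))
    x_adv = fast_gradient_method(model, x, eps, np.inf,
                                 clip_min=0.0, clip_max=1.0)

    clean_acc = np.mean(np.argmax(model(x), axis=1) == y_test)
    adv_acc = np.mean(np.argmax(model(x_adv), axis=1) == y_test)
    return clean_acc, adv_acc
```

With eps=0.1, a drop from clean to adversarial accuracy on the order of the abstract's reported ~66% would indicate the attack succeeded; the exact figure depends on the model and dataset, which the abstract does not specify.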