Educational and methodological materials of the master class “Adversarial attacks on image recognition neural networks” for students and schoolchildren

D. V. Pantiukhin
{"title":"为学生和学童准备的大师班“图像识别神经网络的对抗性攻击”的教育和方法论材料","authors":"D. V. Pantiukhin","doi":"10.32517/0234-0453-2023-38-1-55-63","DOIUrl":null,"url":null,"abstract":"The problem of neural network vulnerability has been the subject of scientific research and experiments for several years. Adversarial attacks are one of the ways to “trick” a neural network, to force it to make incorrect classification decisions. The very possibility of adversarial attack lies in the peculiarities of machine learning of neural networks. The article shows how the properties of neural networks become a source of problems and limitations in their use. The materials of the corresponding researches of the author were used as a basis for the master class “Adversarial attacks on image recognition neural networks”.The article presents the educational materials of the master class: the theoretical background of the class, practical materials (in particular, the attack on a single neuron is described, the fast gradient sign method for attacking a neural network is considered), examples of experiments and calculations (the author uses the convolutional network VGG, Torch and CleverHans libraries), as well as a set of typical errors of students and the teacher’s explanations of how to eliminate these errors. In addition, the result of the experiment is given in the article, and its full code and examples of approbation of the master class materials are available at the above links.The master class is intended for both high school and university students who have learned the basics of neural networks and the Python language, and can also be of practical interest to computer science teachers, to developers of courses on machine learning and artificial intelligence as well as to university teachers.","PeriodicalId":277237,"journal":{"name":"Informatics and education","volume":"126 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Educational and methodological materials of the master class “Adversarial attacks on image recognition neural networks” for students and schoolchildren\",\"authors\":\"D. V. Pantiukhin\",\"doi\":\"10.32517/0234-0453-2023-38-1-55-63\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The problem of neural network vulnerability has been the subject of scientific research and experiments for several years. Adversarial attacks are one of the ways to “trick” a neural network, to force it to make incorrect classification decisions. The very possibility of adversarial attack lies in the peculiarities of machine learning of neural networks. The article shows how the properties of neural networks become a source of problems and limitations in their use. The materials of the corresponding researches of the author were used as a basis for the master class “Adversarial attacks on image recognition neural networks”.The article presents the educational materials of the master class: the theoretical background of the class, practical materials (in particular, the attack on a single neuron is described, the fast gradient sign method for attacking a neural network is considered), examples of experiments and calculations (the author uses the convolutional network VGG, Torch and CleverHans libraries), as well as a set of typical errors of students and the teacher’s explanations of how to eliminate these errors. 
In addition, the result of the experiment is given in the article, and its full code and examples of approbation of the master class materials are available at the above links.The master class is intended for both high school and university students who have learned the basics of neural networks and the Python language, and can also be of practical interest to computer science teachers, to developers of courses on machine learning and artificial intelligence as well as to university teachers.\",\"PeriodicalId\":277237,\"journal\":{\"name\":\"Informatics and education\",\"volume\":\"126 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Informatics and education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.32517/0234-0453-2023-38-1-55-63\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Informatics and education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32517/0234-0453-2023-38-1-55-63","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The vulnerability of neural networks has been a subject of scientific research and experimentation for several years. Adversarial attacks are one way to “trick” a neural network and force it to make incorrect classification decisions. The very possibility of an adversarial attack stems from the peculiarities of how neural networks are trained by machine learning. The article shows how the properties of neural networks become a source of problems and limitations in their use. The author's research on this topic served as the basis for the master class “Adversarial attacks on image recognition neural networks”. The article presents the educational materials of the master class: the theoretical background of the class; practical materials (in particular, an attack on a single neuron is described, and the fast gradient sign method for attacking a neural network is considered); examples of experiments and calculations (the author uses the VGG convolutional network and the Torch and CleverHans libraries); and a set of typical student errors together with the teacher's explanations of how to eliminate them. In addition, the article gives the results of the experiment; its full code and examples of the approbation of the master class materials are available at the links above. The master class is intended for high school and university students who have learned the basics of neural networks and the Python language, and may also be of practical interest to computer science teachers, developers of courses on machine learning and artificial intelligence, and university teachers.
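For readers who want a concrete picture of the attack the abstract names, the sketch below implements the fast gradient sign method in plain PyTorch against a pretrained VGG16 from torchvision (version 0.13 or later assumed for the weights API). It is a minimal illustration, not the master class code: the author's materials use the CleverHans library, and the eps value, the random demo input, and the use of the model's own prediction as the label are illustrative assumptions of this sketch.

import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen pretrained classifier; a real image would be resized to
# 224x224 and normalized with the weights' own transforms first.
model = vgg16(weights=VGG16_Weights.DEFAULT).eval()

def fgsm(model, x, y, eps):
    # Fast gradient sign method: x_adv = x + eps * sign(d loss / d x).
    # The sign discards the gradient's magnitude, so every pixel moves
    # by at most eps -- a perturbation that stays visually negligible.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Demo on a random tensor standing in for a preprocessed RGB image.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(1)               # model's own prediction as the label
x_adv = fgsm(model, x, y, eps=0.03)  # eps is an illustrative choice
print("before:", model(x).argmax(1).item(),
      "after:", model(x_adv).argmax(1).item())

The one-step nature of the method is what makes it suitable for a master class: a single gradient computation and a single addition are enough to change the predicted class in many cases, which makes the fragility of the trained network immediately visible to students.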