{"title":"基于部分摄动的时间序列数据对抗实例","authors":"Jun Teraoka, Keiichi Tamura","doi":"10.1109/iiaiaai55812.2022.00011","DOIUrl":null,"url":null,"abstract":"Recently, adversarial examples have become a significant threat, which intentionally misleads deep learning models by small perturbations beyond human recognition. Adversarial examples have been studied mainly in the field of image recognition, but recently they have been applied to other fields, including time series data. Perturbations are usually added to all regions of the data, but in the case of time series data, adding to the entire series would result in unnatural data. In this study, we show that it is possible to generate less unnatural adversarial examples for the time series data classification problem by partially using perturbations generated by existing attack methods. We also experiment with evaluating the performance and show that for some datasets, even if the range of the perturbations is 1/10, the attack is still possible with almost no degradation in attack performance.","PeriodicalId":156230,"journal":{"name":"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarial Examples of Time Series Data based on Partial Perturbations\",\"authors\":\"Jun Teraoka, Keiichi Tamura\",\"doi\":\"10.1109/iiaiaai55812.2022.00011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, adversarial examples have become a significant threat, which intentionally misleads deep learning models by small perturbations beyond human recognition. Adversarial examples have been studied mainly in the field of image recognition, but recently they have been applied to other fields, including time series data. Perturbations are usually added to all regions of the data, but in the case of time series data, adding to the entire series would result in unnatural data. In this study, we show that it is possible to generate less unnatural adversarial examples for the time series data classification problem by partially using perturbations generated by existing attack methods. 
We also experiment with evaluating the performance and show that for some datasets, even if the range of the perturbations is 1/10, the attack is still possible with almost no degradation in attack performance.\",\"PeriodicalId\":156230,\"journal\":{\"name\":\"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/iiaiaai55812.2022.00011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iiaiaai55812.2022.00011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adversarial Examples of Time Series Data based on Partial Perturbations
Recently, adversarial examples, which deliberately mislead deep learning models through small perturbations imperceptible to humans, have become a significant threat. Adversarial examples have been studied mainly in the field of image recognition, but they have recently been applied to other fields, including time series data. Perturbations are usually added to all regions of the data, but for time series data, perturbing the entire series produces unnatural-looking data. In this study, we show that less unnatural adversarial examples can be generated for the time series classification problem by applying perturbations produced by existing attack methods to only part of the series. We also evaluate the attack experimentally and show that, for some datasets, the attack still succeeds with almost no degradation in attack performance even when the perturbed range is reduced to one tenth of the series.
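The paper itself does not include code; the following is a minimal sketch of the partial-perturbation idea, assuming a PyTorch classifier and FGSM as the underlying "existing attack method" (the function names, mask layout, and parameters below are illustrative assumptions, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon):
    """Standard FGSM: epsilon times the sign of the loss gradient
    with respect to the input (one existing attack; the paper's
    choice of attack may differ)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return epsilon * x.grad.sign()

def partial_adversarial_example(model, x, y, epsilon, start, length):
    """Apply the attack's perturbation only to a sub-range of the
    time series, leaving the rest of the series untouched.

    x is assumed to have shape (batch, channels, time); start and
    length select the window along the time axis.
    """
    delta = fgsm_perturbation(model, x, y, epsilon)
    mask = torch.zeros_like(x)
    mask[..., start:start + length] = 1.0  # perturb only this window
    return x + mask * delta
```

Setting `length` to one tenth of the series length would correspond to the 1/10-range setting that the abstract reports as still attacking successfully; where in the series to place the window, and how that placement is chosen, is a design question the sketch leaves open.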