Meta-styled CNNs: boosting robustness through adaptive learning and style transfer

Arun Prasad Jaganathan
{"title":"元风格 CNN:通过自适应学习和风格转移提高鲁棒性","authors":"Arun Prasad Jaganathan","doi":"10.1007/s41870-024-02150-z","DOIUrl":null,"url":null,"abstract":"<p>Recent studies reveal that standard Convolutional Neural Networks (CNNs)—conventionally struggle—when the training data is corrupted, leading to significant performance drops with noisy inputs. Therefore, real-world data, influenced by various sources of noise like sensor inaccuracies, weather fluctuations, lighting variations, and obstructions, exacerbates this challenge substantially. To address this limitation—employing style transfer on the training data has been proposed by various studies. However, the precise impact of different style transfer parameter settings on the resulting model’s robustness remains unexplored. Therefore, in this study, we systematically investigated various magnitudes of style transfer applied to the training data, assessing their effectiveness in enhancing model robustness. Our findings indicate that the most substantial improvement in robustness occurs when applying style transfer with maximum magnitude to the training data. Furthermore, we examined the significance of the dataset’s composition from which the styles are derived. Our results demonstrate that utilizing a limited subset of just 64 diverse, randomly selected styles is adequate to observe desired performance generalization even under corrupted testing conditions. Therefore, instead of uniformly selecting styles from the dataset, we developed a probability distribution for selection. Notably, styles with higher selection probabilities exhibit qualitatively distinct characteristics compared to those with lower probabilities, suggesting a discernible impact on the model’s robustness. Utilizing style transfer with styles having maximum likelihood according to the learned distribution led to a 1.4% increase in mean performance under corruption compared to using an equivalent number of randomly chosen styles.</p>","PeriodicalId":14138,"journal":{"name":"International Journal of Information Technology","volume":"32 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Meta-styled CNNs: boosting robustness through adaptive learning and style transfer\",\"authors\":\"Arun Prasad Jaganathan\",\"doi\":\"10.1007/s41870-024-02150-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Recent studies reveal that standard Convolutional Neural Networks (CNNs)—conventionally struggle—when the training data is corrupted, leading to significant performance drops with noisy inputs. Therefore, real-world data, influenced by various sources of noise like sensor inaccuracies, weather fluctuations, lighting variations, and obstructions, exacerbates this challenge substantially. To address this limitation—employing style transfer on the training data has been proposed by various studies. However, the precise impact of different style transfer parameter settings on the resulting model’s robustness remains unexplored. Therefore, in this study, we systematically investigated various magnitudes of style transfer applied to the training data, assessing their effectiveness in enhancing model robustness. Our findings indicate that the most substantial improvement in robustness occurs when applying style transfer with maximum magnitude to the training data. 
Furthermore, we examined the significance of the dataset’s composition from which the styles are derived. Our results demonstrate that utilizing a limited subset of just 64 diverse, randomly selected styles is adequate to observe desired performance generalization even under corrupted testing conditions. Therefore, instead of uniformly selecting styles from the dataset, we developed a probability distribution for selection. Notably, styles with higher selection probabilities exhibit qualitatively distinct characteristics compared to those with lower probabilities, suggesting a discernible impact on the model’s robustness. Utilizing style transfer with styles having maximum likelihood according to the learned distribution led to a 1.4% increase in mean performance under corruption compared to using an equivalent number of randomly chosen styles.</p>\",\"PeriodicalId\":14138,\"journal\":{\"name\":\"International Journal of Information Technology\",\"volume\":\"32 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Information Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s41870-024-02150-z\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s41870-024-02150-z","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Recent studies reveal that standard Convolutional Neural Networks (CNNs) typically struggle when the training data is corrupted, leading to significant performance drops on noisy inputs. Real-world data, influenced by noise sources such as sensor inaccuracies, weather fluctuations, lighting variations, and obstructions, exacerbates this challenge substantially. To address this limitation, several studies have proposed applying style transfer to the training data. However, the precise impact of different style transfer parameter settings on the resulting model's robustness remains unexplored. In this study, we therefore systematically investigated various magnitudes of style transfer applied to the training data and assessed their effectiveness in enhancing model robustness. Our findings indicate that the most substantial improvement in robustness occurs when style transfer is applied to the training data at maximum magnitude. Furthermore, we examined the significance of the composition of the dataset from which the styles are derived. Our results demonstrate that a limited subset of just 64 diverse, randomly selected styles is adequate to achieve the desired performance generalization even under corrupted testing conditions. Therefore, instead of selecting styles uniformly from the dataset, we developed a probability distribution for selection. Notably, styles with higher selection probabilities exhibit qualitatively distinct characteristics compared to those with lower probabilities, suggesting a discernible impact on the model's robustness. Using style transfer with the styles of maximum likelihood under the learned distribution led to a 1.4% increase in mean performance under corruption compared to using an equivalent number of randomly chosen styles.
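
The abstract describes two ingredients: style-transfer augmentation applied to training images at a chosen magnitude, and a non-uniform (learned) selection distribution over a small bank of 64 styles. The sketch below is a minimal illustration of that idea, not the paper's implementation: it uses crude per-channel statistic matching as a stand-in for a real style transfer network (e.g. AdaIN), and the names `match_channel_stats`, `StyleBankAugmenter`, and `alpha` are illustrative assumptions rather than terms from the paper.

```python
# Minimal sketch (assumed, not from the paper): style-transfer-like augmentation with a
# magnitude knob and a categorical selection distribution over a 64-style bank.
import numpy as np


def match_channel_stats(content: np.ndarray, style: np.ndarray, alpha: float) -> np.ndarray:
    """Shift each channel of `content` toward the mean/std of `style`, then blend the
    stylized result with the original image by magnitude alpha in [0, 1]."""
    out = content.astype(np.float32).copy()
    for c in range(out.shape[-1]):  # assumes HWC image layout
        c_mu, c_sd = out[..., c].mean(), out[..., c].std() + 1e-6
        s_mu, s_sd = style[..., c].mean(), style[..., c].std() + 1e-6
        stylized = (out[..., c] - c_mu) / c_sd * s_sd + s_mu
        out[..., c] = (1.0 - alpha) * out[..., c] + alpha * stylized
    return out


class StyleBankAugmenter:
    """Samples a style from a fixed bank (64 styles in the paper) according to a
    categorical distribution and applies it to a training image."""

    def __init__(self, style_bank, probs=None, alpha=1.0, seed=0):
        self.styles = list(style_bank)
        n = len(self.styles)
        # Uniform selection by default; pass the learned distribution to favour
        # high-probability styles instead of choosing uniformly at random.
        self.probs = np.full(n, 1.0 / n) if probs is None else np.asarray(probs, dtype=float)
        self.alpha = alpha  # maximum magnitude (alpha = 1.0) gave the largest robustness gain
        self.rng = np.random.default_rng(seed)

    def __call__(self, image: np.ndarray) -> np.ndarray:
        idx = self.rng.choice(len(self.styles), p=self.probs)
        return match_channel_stats(image, self.styles[idx], self.alpha)


# Usage: a 64-style bank of random "style" images and one augmented training image.
bank = [np.random.rand(32, 32, 3) for _ in range(64)]
augmenter = StyleBankAugmenter(bank, alpha=1.0)
augmented = augmenter(np.random.rand(32, 32, 3))
```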
