Hao Lei, Yu Zhao, Yi Xin, Zhang Shaonan, Ke Liangjun
{"title":"视觉强化学习中用于泛化的分散数据增强","authors":"Hao Lei, Yu Zhao, Yi Xin, Zhang Shaonan, Ke Liangjun","doi":"10.1016/j.neucom.2025.131492","DOIUrl":null,"url":null,"abstract":"<div><div>Data augmentation (DA) has shown a significant potential to enhance generalization performance in visual reinforcement learning (VRL). However, existing research on DA-based methods is predominantly empirical, and the mechanism for why DA enhances generalization remains theoretically under-explored. To bridge this gap, we derive a generalization error upper bound for VRL from the perspective of data distribution distance. Based on this bound, we provide a theoretical explanation of the mechanism by which DA improves generalization: we find that DA that satisfies certain conditions can reduce the distance between the training and test distributions, thus making the training and test samples closer. In addition, we conditionally prove that training data with higher variance can provide a higher generalization performance. Motivated by our analysis, we propose Scattered Data Augmentation (ScDA) framework. ScDA constructs a data transformation system with the agent serving as the discriminator, aiming to provide more diverse training data for agent training. Experiments are conducted across various tasks and numerous test modes in DeepMind Control Generalization Benchmark2 (DMC-GB2) and robotic tasks. Results demonstrate that our ScDA framework can be integrated with different baseline algorithms and significantly enhance policy generalization, outperforming the current state-of-the-art methods in the DMC-GB2 tests, confirming the effectiveness of the theoretical analysis in this work. The code for this work can be found at: <span><span>https://github.com/scdadev/scdadev</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"656 ","pages":"Article 131492"},"PeriodicalIF":6.5000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Scattered data augmentation for generalization in visual reinforcement learning\",\"authors\":\"Hao Lei, Yu Zhao, Yi Xin, Zhang Shaonan, Ke Liangjun\",\"doi\":\"10.1016/j.neucom.2025.131492\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Data augmentation (DA) has shown a significant potential to enhance generalization performance in visual reinforcement learning (VRL). However, existing research on DA-based methods is predominantly empirical, and the mechanism for why DA enhances generalization remains theoretically under-explored. To bridge this gap, we derive a generalization error upper bound for VRL from the perspective of data distribution distance. Based on this bound, we provide a theoretical explanation of the mechanism by which DA improves generalization: we find that DA that satisfies certain conditions can reduce the distance between the training and test distributions, thus making the training and test samples closer. In addition, we conditionally prove that training data with higher variance can provide a higher generalization performance. Motivated by our analysis, we propose Scattered Data Augmentation (ScDA) framework. ScDA constructs a data transformation system with the agent serving as the discriminator, aiming to provide more diverse training data for agent training. 
Experiments are conducted across various tasks and numerous test modes in DeepMind Control Generalization Benchmark2 (DMC-GB2) and robotic tasks. Results demonstrate that our ScDA framework can be integrated with different baseline algorithms and significantly enhance policy generalization, outperforming the current state-of-the-art methods in the DMC-GB2 tests, confirming the effectiveness of the theoretical analysis in this work. The code for this work can be found at: <span><span>https://github.com/scdadev/scdadev</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"656 \",\"pages\":\"Article 131492\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231225021642\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225021642","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Scattered data augmentation for generalization in visual reinforcement learning
Data augmentation (DA) has shown significant potential to enhance generalization performance in visual reinforcement learning (VRL). However, existing research on DA-based methods is predominantly empirical, and the mechanism by which DA enhances generalization remains theoretically under-explored. To bridge this gap, we derive a generalization error upper bound for VRL from the perspective of data distribution distance. Based on this bound, we provide a theoretical explanation of how DA improves generalization: DA that satisfies certain conditions reduces the distance between the training and test distributions, bringing the training and test samples closer together. In addition, we prove, under certain conditions, that training data with higher variance yields better generalization performance. Motivated by this analysis, we propose the Scattered Data Augmentation (ScDA) framework. ScDA constructs a data transformation system in which the agent serves as the discriminator, providing more diverse training data for agent training. Experiments are conducted on a range of tasks and test modes from the DeepMind Control Generalization Benchmark 2 (DMC-GB2), as well as on robotic tasks. The results demonstrate that ScDA can be integrated with different baseline algorithms and significantly enhances policy generalization, outperforming current state-of-the-art methods on the DMC-GB2 tests and confirming the effectiveness of the theoretical analysis in this work. The code for this work can be found at: https://github.com/scdadev/scdadev.
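To make the data-augmentation setting concrete, the sketch below applies a generic DrQ/RAD-style random-shift augmentation to a batch of image observations and keeps, among several augmented candidates, the batch with the highest pixel variance, echoing the paper's intuition that higher-variance training data can generalize better. This is only an illustrative sketch under stated assumptions: the function names (`random_shift`, `pick_most_scattered`) and the variance-based selection rule are hypothetical, and they are not the ScDA transformation system, which uses the agent itself as the discriminator.

```python
import torch
import torch.nn.functional as F


def random_shift(obs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """DrQ/RAD-style random shift: pad each image, then take a random crop.

    obs: (B, C, H, W) float tensor of stacked image observations.
    """
    b, c, h, w = obs.shape
    padded = F.pad(obs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(obs)
    for i in range(b):
        top = int(torch.randint(0, 2 * pad + 1, (1,)))
        left = int(torch.randint(0, 2 * pad + 1, (1,)))
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out


def pick_most_scattered(obs: torch.Tensor, n_candidates: int = 4) -> torch.Tensor:
    """Generate several augmented copies of a batch and keep the one with the
    largest pixel variance, as a crude stand-in for 'more scattered' data.

    Purely illustrative of the variance intuition from the abstract; ScDA's
    actual selection is performed by the agent acting as the discriminator.
    """
    candidates = [random_shift(obs) for _ in range(n_candidates)]
    variances = torch.stack([c.var() for c in candidates])
    return candidates[int(variances.argmax())]


if __name__ == "__main__":
    batch = torch.rand(8, 9, 84, 84)  # e.g. 3 stacked RGB frames at 84x84
    augmented = pick_most_scattered(batch)
    print(augmented.shape)  # torch.Size([8, 9, 84, 84])
```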
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Its essential topics are neurocomputing theory, practice, and applications.