{"title":"面向学习驱动科学可视化的数据增强研究。","authors":"Jun Han, Hao Zheng, Jun Tao","doi":"10.1109/TVCG.2025.3587685","DOIUrl":null,"url":null,"abstract":"<p><p>The success of deep learning heavily relies on the large amount of training samples. However, in scientific visualization, due to the high computational cost, only few data are available during training, which limits the performance of deep learning. A common technique to address the data sparsity issue is data augmentation. In this paper, we present a comprehensive study on nine data augmentation techniques (i.e., noise injection, interpolation, scale, flip, rotation, variational auto-encoder, generative adversarial network, diffusion model, and implicit neural representation) for understanding their effectiveness on two scientific visualization tasks, i.e., spatial super-resolution and ambient occlusion prediction. We compare the data quality, rendering fidelity, optimization time, and memory consumption of these data augmentation techniques using several scientific datasets with various characteristics. We investigate the effects of data augmentation on the method, quantity, and diversity for these tasks with various deep learning models. Our study shows that increasing the quantity and single-domain diversity of augmented data can boost model performance, while the method and cross-domain diversity of the augmented data do not have the same impact. Based on our findings, we discuss the opportunities and future directions for scientific data augmentation.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Study of Data Augmentation for Learning-Driven Scientific Visualization.\",\"authors\":\"Jun Han, Hao Zheng, Jun Tao\",\"doi\":\"10.1109/TVCG.2025.3587685\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The success of deep learning heavily relies on the large amount of training samples. However, in scientific visualization, due to the high computational cost, only few data are available during training, which limits the performance of deep learning. A common technique to address the data sparsity issue is data augmentation. In this paper, we present a comprehensive study on nine data augmentation techniques (i.e., noise injection, interpolation, scale, flip, rotation, variational auto-encoder, generative adversarial network, diffusion model, and implicit neural representation) for understanding their effectiveness on two scientific visualization tasks, i.e., spatial super-resolution and ambient occlusion prediction. We compare the data quality, rendering fidelity, optimization time, and memory consumption of these data augmentation techniques using several scientific datasets with various characteristics. We investigate the effects of data augmentation on the method, quantity, and diversity for these tasks with various deep learning models. Our study shows that increasing the quantity and single-domain diversity of augmented data can boost model performance, while the method and cross-domain diversity of the augmented data do not have the same impact. 
Based on our findings, we discuss the opportunities and future directions for scientific data augmentation.</p>\",\"PeriodicalId\":94035,\"journal\":{\"name\":\"IEEE transactions on visualization and computer graphics\",\"volume\":\"PP \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on visualization and computer graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TVCG.2025.3587685\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2025.3587685","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Study of Data Augmentation for Learning-Driven Scientific Visualization.
The success of deep learning relies heavily on a large number of training samples. In scientific visualization, however, the high computational cost means that only a few samples are available during training, which limits the performance of deep learning. A common technique to address this data sparsity issue is data augmentation. In this paper, we present a comprehensive study of nine data augmentation techniques (noise injection, interpolation, scale, flip, rotation, variational auto-encoder, generative adversarial network, diffusion model, and implicit neural representation) to understand their effectiveness on two scientific visualization tasks: spatial super-resolution and ambient occlusion prediction. We compare the data quality, rendering fidelity, optimization time, and memory consumption of these data augmentation techniques using several scientific datasets with various characteristics. We investigate the effects of augmentation method, quantity, and diversity on these tasks with various deep learning models. Our study shows that increasing the quantity and single-domain diversity of augmented data can boost model performance, whereas the choice of augmentation method and the cross-domain diversity of the augmented data do not have the same impact. Based on our findings, we discuss the opportunities and future directions for scientific data augmentation.
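To make the classical techniques named in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how noise injection, flip, rotation, and scale might be applied to a 3D scalar volume; the volume shape, noise level, and scale factor are illustrative assumptions.

# Illustrative sketch of simple volumetric data augmentations.
import numpy as np
from scipy.ndimage import zoom

def augment_volume(volume, rng, noise_std=0.05, scale_factor=1.2):
    """Return simple augmented copies of a (D, H, W) scalar field."""
    augmented = []

    # Noise injection: perturb voxel values with Gaussian noise.
    augmented.append(volume + rng.normal(0.0, noise_std, volume.shape))

    # Flip: mirror the volume along each spatial axis.
    for axis in range(3):
        augmented.append(np.flip(volume, axis=axis))

    # Rotation: a 90-degree rotation within one axis-aligned plane.
    augmented.append(np.rot90(volume, k=1, axes=(1, 2)))

    # Scale: resample the volume, then crop back to the original extent.
    scaled = zoom(volume, scale_factor, order=1)
    d, h, w = volume.shape
    augmented.append(scaled[:d, :h, :w])

    return augmented

rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64)).astype(np.float32)  # stand-in scalar field
samples = augment_volume(vol, rng)

Model-based augmentations (variational auto-encoders, generative adversarial networks, diffusion models, and implicit neural representations) instead learn a generator from the available volumes and sample new data from it, trading higher optimization time and memory for potentially greater diversity.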