Authors: Xian Yu, Jianxun Zhang, Siran Tian, Hongyu Yi
Journal: Complex & Intelligent Systems (Q1, Computer Science, Artificial Intelligence; Impact Factor 5.0)
DOI: 10.1007/s40747-025-01872-2
Published: 2025-05-09
STAR-SNR: spatial–temporal adaptive regulation and SNR optimization for few-shot video generation
In recent years, text-to-image generation based on diffusion models has made significant progress, but extending it to video generation, especially under few-shot conditions, remains highly challenging. Existing methods typically rely on large amounts of text-video pair data or consume substantial training resources. To address this, this paper proposes a new few-shot video generation framework, STAR-SNR, which combines spatio-temporal feature regulation, feature scrolling enhancement, and a dynamic signal-to-noise ratio (SNR) weighting strategy. Trained with only 8–16 videos on a single A6000 GPU, it effectively improves the quality and efficiency of video generation while reducing computation. Specifically, the spatio-temporal feature regulation module efficiently extracts spatio-temporal features and reduces computational complexity. The feature scrolling enhancement module strengthens the capture of local features to avoid overfitting. In addition, the dynamic SNR weighting strategy adjusts the loss calculation according to the diffusion time step, improving the model's convergence speed by a factor of 2.44 compared with the baseline model. Experimental results show that the STAR-SNR framework generates videos with higher text alignment, consistency, and diversity under few-shot conditions.
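The abstract does not specify the exact form of the dynamic SNR weighting, so the following is only an illustrative sketch of the general idea behind timestep-dependent SNR loss weighting in diffusion training, using a min-SNR-style clamp (a common variant in the literature); the schedule parameters (`beta_start`, `beta_end`, `gamma`) are assumptions, not values from the paper.

```python
import numpy as np

def snr_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Per-timestep SNR for a linear-beta DDPM schedule:
    SNR(t) = alpha_bar_t / (1 - alpha_bar_t)."""
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)
    return alpha_bar / (1.0 - alpha_bar)

def min_snr_weight(snr, gamma=5.0):
    """Min-SNR-style loss weight for epsilon-prediction:
    w(t) = min(SNR(t), gamma) / SNR(t).
    Low-noise (early) timesteps with huge SNR are down-weighted;
    high-noise (late) timesteps keep weight 1."""
    return np.minimum(snr, gamma) / snr

snr = snr_schedule()
w = min_snr_weight(snr)
# The weighted training loss would then be w[t] * MSE(eps_pred, eps_true)
# for each sampled timestep t.
```

The intent is the same as the abstract describes: rebalancing the loss across timesteps so that no noise level dominates training, which is what speeds up convergence relative to a uniformly weighted objective.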
Journal introduction:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.