{"title":"E2D-GS: Event-enhanced deblurring Gaussian splatting","authors":"Lifeng Lin, Shuangjie Yuan, Lu Yang","doi":"10.1016/j.eswa.2025.129802","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, implicit neural representations and explicit 3D Gaussian Splatting (3DGS) have demonstrated substantial advancements in the domain of novel view synthesis. Nevertheless, the efficacy of these approaches is predominantly contingent upon the availability of well-defined, clear imagery and precise camera pose information. Consequently, they exhibit a pronounced susceptibility to motion blur, which impedes the rendering of sharp images. Event cameras, which measure intensity changes with microsecond temporal precision, possess an inherent robustness to motion-induced blur. This characteristic offers new avenues for 3D reconstruction in challenging scenarios characterized by high-speed motion or low-light conditions. This paper introduces E2D-GS, a novel algorithm for deblurring and reconstruction based on event cameras and 3D Gaussian Splatting. To enhance reconstruction accuracy, our proposed framework leverages event streams to physically model the formation process of motion blur. This is achieved by optimizing the discrepancy between synthesized data and the observed blurry images, while simultaneously recovering the camera’s motion trajectory. Additionally, to enhance robustness in real-world scenarios, this paper proposes a differential consistency module. This module effectively mitigates noise within the event data and regularizes the optimization of Gaussian parameters, thereby improving reconstruction quality under non-ideal conditions. Comprehensive experimental evaluations on both simulated and real-world benchmarks validate the proposed method’s capability to reconstruct latent sharp imagery via the learned 3DGS representations, and further demonstrate its capacity for stable reconstruction under adverse scenarios.
The results show that our approach surpasses the performance of previous works.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"298 ","pages":"Article 129802"},"PeriodicalIF":7.5000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425034177","RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
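The abstract describes physically modeling the formation of motion blur from the event stream: a blurry frame can be treated as the temporal average of latent sharp images, with each latent image propagated from a reference instant by integrating the logged events. The sketch below is a minimal, hypothetical illustration of that idea (in the spirit of event-based double-integral blur models), not the paper's actual implementation; the function names, the event tuple layout `(x, y, t, p)`, and the contrast threshold `C` are all assumptions for illustration.

```python
import numpy as np

def integrate_events(events, t0, t1, shape, C=0.2):
    """Accumulate signed event polarities per pixel over [t0, t1).
    events: iterable of (x, y, t, p) with polarity p in {-1, +1}.
    Returns the approximate log-intensity change C * sum(p)."""
    E = np.zeros(shape)
    for x, y, t, p in events:
        if t0 <= t < t1:
            E[int(y), int(x)] += p
    return C * E

def synthesize_blur(latent_at_t0, events, t0, t1, n_samples, shape, C=0.2):
    """Approximate a blurry frame as the temporal average of latent sharp
    images, each obtained by propagating the reference latent image with
    the event-derived log-intensity change."""
    acc = np.zeros(shape)
    for t in np.linspace(t0, t1, n_samples):
        dlogI = integrate_events(events, t0, t, shape, C)
        acc += latent_at_t0 * np.exp(dlogI)
    return acc / n_samples
```

In a framework like the one the abstract outlines, the discrepancy between such a synthesized blurry frame and the observed one would drive the optimization of the scene representation and the camera trajectory; this sketch only shows the forward blur-formation step.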
Journal overview:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.