An event-based motion scene feature extraction framework
Zhaoxin Liu, Jinjian Wu, Guangming Shi, Wen Yang, Jupo Ma
Pattern Recognition, Volume 161, Article 111320 (2025). DOI: 10.1016/j.patcog.2024.111320
Citations: 0
Abstract
Conventional frame-based cameras integrate light over the exposure time, so relative motion between the camera and objects produces motion blur, degrading both image aesthetics and the performance of image-based algorithms. Event cameras capture dynamic scene changes at high temporal resolution, providing spatially aligned motion information that complements images. However, external modules for event-based motion feature extraction, such as optical flow estimation, introduce additional computational cost and inference time, and a globally optimal solution is hard to reach without joint optimization. In this paper, we propose a cross-modal motion scene feature extraction framework for motion-sensitive tasks that addresses the challenges of motion feature extraction and dual-path feature fusion. The framework serves as a versatile feature encoder whose extractor structure can be adapted to diverse task requirements. We first analyze and identify the spatially concentrated, temporally continuous feature extraction tendency of spiking neural networks (SNNs). Building on this observation, we propose the hybrid spiking motion object feature extractor (HSME), within which a novel fusion block avoids feature-level blurring when fusing spike and float features. Furthermore, to ensure that the two modal networks acquire complementary scene features, we devise a spatial feature disentanglement that constrains the network during optimization. Event-based motion deblurring is a prototypical motion-sensitive task; evaluated on prevalent datasets, our approach attains state-of-the-art performance while maintaining an exceptionally low parameter count. We also conduct ablation experiments to evaluate the influence of each framework component on the results. Code and pre-trained models will be published after the paper is accepted.
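To make the abstract's machinery concrete, the PyTorch sketch below pairs a standard leaky integrate-and-fire (LIF) layer, the basic SNN unit, with a gated fusion of spike firing rates and float image features. This is a minimal illustration of the general spike-float fusion idea under assumed shapes and names (LIFLayer, GatedFusion are hypothetical), not the paper's HSME module or fusion block.

```python
# Hypothetical sketch: LIF spiking dynamics + gated spike-float fusion.
# Standard building blocks only; NOT the authors' HSME architecture.
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """Leaky integrate-and-fire neurons over a time dimension (T, N, C, H, W).
    The hard threshold is non-differentiable; real SNN training would use a
    surrogate gradient, which this forward-only sketch omits."""
    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.decay = 1.0 - 1.0 / tau   # membrane leak per time step
        self.v_th = v_th               # firing threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = torch.zeros_like(x[0])     # membrane potential
        spikes = []
        for t in range(x.shape[0]):
            v = self.decay * v + x[t]
            s = (v >= self.v_th).float()   # binary spike map
            v = v - s * self.v_th          # soft reset where neurons fired
            spikes.append(s)
        return torch.stack(spikes)         # (T, N, C, H, W)

class GatedFusion(nn.Module):
    """Fuse time-averaged spike features with float image features through a
    learned per-pixel gate, so binary spikes do not smear float activations."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, spike_feat: torch.Tensor, img_feat: torch.Tensor):
        rate = spike_feat.mean(dim=0)      # firing rate, (N, C, H, W)
        g = self.gate(torch.cat([rate, img_feat], dim=1))
        return g * rate + (1.0 - g) * img_feat  # per-pixel soft selection

# Usage: events voxelized to (T, N, C, H, W), image features (N, C, H, W).
T, N, C, H, W = 8, 1, 16, 32, 32
events = torch.randn(T, N, C, H, W)
img_feat = torch.randn(N, C, H, W)
fused = GatedFusion(C)(LIFLayer()(events), img_feat)
print(fused.shape)  # torch.Size([1, 16, 32, 32])
```

The soft gate is one common way to combine sparse, binary event features with dense float features without averaging them directly; the paper's fusion block presumably addresses the same mismatch, though its exact mechanism is not specified in the abstract.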
Journal introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.