Spatiotemporal semantic structural representation learning for image sequence prediction
Yechao Xu, Zhengxing Sun, Qian Li, Yunhan Sun, Yi Wang
Neurocomputing, Volume 637, Article 130159 (2025). DOI: 10.1016/j.neucom.2025.130159
Abstract
Image sequence prediction is a fundamental computer vision task in which neural networks predict what happens in subsequent frames given a sequence of images. Despite remarkable progress in recent years, it remains challenging for predictive models to learn robust representations that avoid blurry objects, for several reasons. (i) Incomplete semantic structures. Spatial structures of potential semantic objects, including their textures, positions, and shapes, are not comprehensively modeled; existing methods learn directly from raw pixels or from limited structural features within specific categories. (ii) Absent structural correlation. Correlations between images are commonly modeled with recurrent and convolutional architectures, while correlations among explicit structures remain largely unexplored. (iii) Non-selective structural fusion. Existing fusion of structures in this task relies on concatenating data or intermediate features, so all features are treated equally and representative ones are ignored. To address these problems, we propose a spatiotemporal semantic structural representation learning pipeline. (i) For comprehensive spatial structural modeling, the pipeline first segments potential semantic objects with SAM and extracts their spatial structures, including textures, positions, and shapes. (ii) For structural correlation modeling, temporal self-attention modules extract compressed temporal features for each structure. (iii) Finally, for fusion of structural features, spatiotemporal semantic structural representations are obtained by integrating the compressed features through cross-attention-based fusion modules. Extensive experiments on the KITTI, KTH, and Sthv2 datasets show the superior performance of our model and, in particular, better visual quality of semantic parts.
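As a minimal illustrative sketch only (not the authors' implementation): the two attention stages named in the abstract, per-structure temporal self-attention followed by cross-attention fusion, could be approximated as below. The module names, feature dimensions, the mean-pooling compression step, and the use of torch.nn.MultiheadAttention are all assumptions; the SAM-based extraction of texture, position, and shape features is assumed to have already produced per-frame feature sequences of shape [batch, time, dim].

```python
# Illustrative sketch of the described pipeline (NOT the authors' code).
# Assumes structural features (texture/position/shape) per frame are already
# extracted, e.g. from SAM masks, as [batch, time, dim] tensors.
import torch
import torch.nn as nn


class TemporalSelfAttention(nn.Module):
    """Compresses one structure's feature sequence along time with self-attention."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: [B, T, D]
        out, _ = self.attn(x, x, x)             # temporal correlations within one structure
        return self.norm(x + out).mean(dim=1)   # compressed temporal feature: [B, D]


class CrossAttentionFusion(nn.Module):
    """Fuses compressed structural features: a query stream attends to the others."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, query: torch.Tensor, others: torch.Tensor) -> torch.Tensor:
        # query: [B, 1, D]; others: [B, N, D] (remaining structures)
        fused, _ = self.attn(query, others, others)
        return (query + fused).squeeze(1)        # fused structural representation: [B, D]


# Toy usage with three structural streams (texture, position, shape).
B, T, D = 2, 8, 128
texture, position, shape = (torch.randn(B, T, D) for _ in range(3))
temporal = TemporalSelfAttention(D)
fusion = CrossAttentionFusion(D)
compressed = torch.stack([temporal(s) for s in (texture, position, shape)], dim=1)  # [B, 3, D]
representation = fusion(compressed[:, :1], compressed[:, 1:])                       # [B, D]
```

In this sketch the temporal self-attention weights are shared across the three structural streams for brevity; a per-structure module, as the abstract's phrase "in corresponding structures" may imply, would simply instantiate one TemporalSelfAttention per stream.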
Journal description:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.