{"title":"Dual-branch adjacent connection and channel mixing network for video crowd counting","authors":"Miaogen Ling , Jixuan Chen , Yongwen Liu , Wei Fang , Xin Geng","doi":"10.1016/j.patcog.2025.111709","DOIUrl":null,"url":null,"abstract":"<div><div>This paper focuses on the problem of video crowd counting, which usually uses the spatial and temporal correlations of the consecutive frames to achieve better performance than the single-image crowd counting methods. However, most of the current video crowd counting methods either use only two or three frames for optical flow or frame-difference feature extraction or construct a single-branch network to extract spatiotemporal correlated features. The interactions of features for multiple adjacent frames, which can effectively prevent disturbances caused by background noise, are mostly overlooked. Considering the above problems, we propose a dual-branch adjacent connection and channel mixing network for multi-frame video crowd counting. For the upper branch, an adjacent layer connection method is proposed to capture the multi-scaled spatiotemporal correlations among multiple consecutive frames instead of the traditional dense connections in decomposed 3D convolutional blocks. It achieves better performance and low computation cost. For the lower branch, adaptive temporal channel mixing blocks are proposed to exchange partial channel information among the adjacent frames for feature interaction. The partial channel transpose operation is first proposed to exchange information. It is parameter-free and flexible to achieve interactions among features of any number of consecutive frames. The proposed method outperforms the current image-based and video-based crowd counting models, achieving state-of-the-art performance on six publicly available datasets. 
The code is available at: <span><span>https://github.com/aaaabbbbcccccjxzxj/mfvcc</span></span>.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"167 ","pages":"Article 111709"},"PeriodicalIF":7.5000,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325003693","RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This paper focuses on the problem of video crowd counting, which typically exploits the spatial and temporal correlations of consecutive frames to achieve better performance than single-image crowd counting methods. However, most current video crowd counting methods either use only two or three frames for optical-flow or frame-difference feature extraction, or construct a single-branch network to extract spatiotemporally correlated features. The interactions of features across multiple adjacent frames, which can effectively suppress disturbances caused by background noise, are mostly overlooked. Considering these problems, we propose a dual-branch adjacent connection and channel mixing network for multi-frame video crowd counting. For the upper branch, an adjacent layer connection method is proposed to capture multi-scale spatiotemporal correlations among multiple consecutive frames, replacing the traditional dense connections in decomposed 3D convolutional blocks; it achieves better performance at a lower computational cost. For the lower branch, adaptive temporal channel mixing blocks are proposed to exchange partial channel information among adjacent frames for feature interaction. A partial channel transpose operation is introduced to perform this exchange; it is parameter-free and flexibly enables interaction among the features of any number of consecutive frames. The proposed method outperforms current image-based and video-based crowd counting models, achieving state-of-the-art performance on six publicly available datasets. The code is available at: https://github.com/aaaabbbbcccccjxzxj/mfvcc.
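The parameter-free channel exchange described in the abstract can be sketched in a few lines of array manipulation. The snippet below is a minimal illustration only: the function name, the `ratio` parameter, and the specific exchange pattern (swapping a fixed fraction of channels between each pair of adjacent frames, in the spirit of temporal shift modules) are assumptions for the sketch, not the paper's actual partial channel transpose implementation; consult the linked repository for the authors' version.

```python
import numpy as np

def partial_channel_transpose(feats, ratio=0.25):
    """Parameter-free partial channel exchange among adjacent frames.

    feats: array of shape (T, C, H, W), features of T consecutive frames.
    ratio: fraction of channels moved to/from the neighboring frame
           (an assumed hyperparameter for this sketch).
    """
    T, C, H, W = feats.shape
    k = int(C * ratio)
    mixed = feats.copy()
    # First k channels of frame t are taken from frame t+1 (backward in time),
    # the next k channels of frame t are taken from frame t-1 (forward in time);
    # the remaining channels stay untouched. No learned weights are involved.
    mixed[:-1, :k] = feats[1:, :k]
    mixed[1:, k:2 * k] = feats[:-1, k:2 * k]
    return mixed
```

Because the operation only reindexes existing channels, it adds no parameters and works for any number of frames T, which matches the flexibility claimed in the abstract.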
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.