CFGMamba: Cross frame group Mamba for video-based depression recognition
Jingyi Liu, Yuanyuan Shang, Mengyuan Yang, Zhuhong Shao, Hui Ding, Tie Liu
Biomedical Signal Processing and Control, vol. 110, Article 108113, published 2025-06-12. DOI: 10.1016/j.bspc.2025.108113
Abstract
Depression recognition is a significant research topic in affective computing, with important value for clinical diagnosis and screening of depression. Video-based depression recognition methods use Convolutional Neural Networks (CNNs) or Transformers to capture relevant visual features and achieve promising performance. However, the limited receptive field of CNNs, the high computational cost of Transformer long-sequence modeling, and the high dimensionality of video data remain key issues. Considering these factors, this work introduces the State Space Model (SSM) for depression recognition and proposes a Cross Frame Group Mamba (CFGMamba) framework. CFGMamba alleviates the limitations of CNNs through global receptive fields and models long-range sequences effectively with linear complexity. Technically, CFGMamba performs cross-frame grouping of video data, dividing video frames into several distinct groups at fixed time intervals and then applying bidirectional scanning to each group in the spatial–temporal dimension. This cross-frame grouping strategy captures richer emotional features while minimizing computational overhead. Meanwhile, CFGMamba adopts a multi-stage downsampling design, stacking multiple CFGMamba blocks at each stage to progressively capture multi-scale spatial–temporal emotional features from shallow to deep layers. Experimental results on the AVEC 2013 and AVEC 2014 datasets show that CFGMamba achieves competitive performance, with MAE/RMSE of 6.01/7.59 and 5.96/7.52, respectively; on the EmoReact dataset, the F1-score/AUC is 0.75/0.78.
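As a rough illustration of the cross-frame grouping idea described in the abstract, the following Python sketch splits a clip into interleaved frame groups taken at fixed time intervals and runs a placeholder bidirectional scan over each group. The group count, tensor shapes, and the cumulative-mean scan stand-in are assumptions for illustration only; they do not reproduce the paper's actual CFGMamba block.

# Minimal sketch of cross-frame grouping, assuming a (T, C, H, W) clip.
# The bidirectional scan is a toy stand-in for a real Mamba/SSM block.
import torch


def cross_frame_groups(video: torch.Tensor, num_groups: int) -> list[torch.Tensor]:
    """Split T frames into num_groups interleaved groups at fixed time
    intervals: group g holds frames g, g + num_groups, g + 2*num_groups, ...
    video: (T, C, H, W)."""
    return [video[g::num_groups] for g in range(num_groups)]


def bidirectional_scan(group: torch.Tensor) -> torch.Tensor:
    """Placeholder for the per-group bidirectional spatio-temporal scan.
    A real SSM block would run a selective scan over the token sequence;
    here forward and backward cumulative means stand in for the two
    scan directions."""
    tokens = group.flatten(1)  # (T_g, C*H*W): one token per frame
    t = torch.arange(1, tokens.size(0) + 1, dtype=tokens.dtype).unsqueeze(1)
    fwd = tokens.cumsum(0) / t                                    # forward pass
    bwd = torch.flip(torch.flip(tokens, [0]).cumsum(0) / t, [0])  # backward pass
    return (fwd + bwd) / 2  # fuse both scan directions


if __name__ == "__main__":
    video = torch.randn(16, 3, 64, 64)  # toy clip: 16 frames
    groups = cross_frame_groups(video, num_groups=4)
    outputs = [bidirectional_scan(g) for g in groups]
    print(len(groups), groups[0].shape, outputs[0].shape)
    # 4 torch.Size([4, 3, 64, 64]) torch.Size([4, 12288])

One plausible reading of the efficiency claim: strided grouping shortens each scanned sequence by a factor of the group count, so the per-group scan cost drops accordingly while each group still spans the whole clip in time.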
Journal Introduction
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.