DD-rPPGNet: De-Interfering and Descriptive Feature Learning for Unsupervised rPPG Estimation

Impact Factor: 8.0 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Theory & Methods)
Pei-Kai Huang; Tzu-Hsien Chen; Ya-Ting Chan; Kuan-Wen Chen; Chiou-Ting Hsu
{"title":"DD-rPPGNet: De-Interfering and Descriptive Feature Learning for Unsupervised rPPG Estimation","authors":"Pei-Kai Huang;Tzu-Hsien Chen;Ya-Ting Chan;Kuan-Wen Chen;Chiou-Ting Hsu","doi":"10.1109/TIFS.2025.3565965","DOIUrl":null,"url":null,"abstract":"Remote Photoplethysmography (rPPG) aims to measure physiological signals and Heart Rate (HR) from facial videos. Recent unsupervised rPPG estimation methods have shown promising potential in estimating rPPG signals from facial regions without relying on ground truth rPPG signals. However, these methods seem oblivious to interference existing in rPPG signals and still result in unsatisfactory performance. In this paper, we propose a novel De-interfered and Descriptive rPPG Estimation Network (DD-rPPGNet) to eliminate the interference within rPPG features for learning genuine rPPG signals. First, we investigate the characteristics of local spatial-temporal similarities of interference and design a novel unsupervised model to estimate the interference. Next, we propose an unsupervised de-interfered method to learn genuine rPPG signals with two stages. In the first stage, we estimate the initial rPPG signals by contrastive learning from both the training data and their augmented counterparts. In the second stage, we use the estimated interference features to derive de-interfered rPPG features and encourage the rPPG signals to be distinct from the interference. In addition, we propose an effective descriptive rPPG feature learning by developing a strong 3D Learnable Descriptive Convolution (3DLDC) to capture the subtle chrominance changes for enhancing rPPG estimation. Extensive experiments conducted on five rPPG benchmark datasets demonstrate that the proposed DD-rPPGNet outperforms previous unsupervised rPPG estimation methods and achieves competitive performances with state-of-the-art supervised rPPG methods. The code is available at: <uri>https://github.com/Pei-KaiHuang/TIFS2025-DD-rPPGNet</uri>","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4956-4970"},"PeriodicalIF":8.0000,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10981460/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Remote Photoplethysmography (rPPG) aims to measure physiological signals and Heart Rate (HR) from facial videos. Recent unsupervised rPPG estimation methods have shown promising potential in estimating rPPG signals from facial regions without relying on ground truth rPPG signals. However, these methods largely overlook the interference present in rPPG signals and therefore still yield unsatisfactory performance. In this paper, we propose a novel De-interfered and Descriptive rPPG Estimation Network (DD-rPPGNet) to eliminate the interference within rPPG features and learn genuine rPPG signals. First, we investigate the local spatio-temporal similarity characteristics of interference and design a novel unsupervised model to estimate it. Next, we propose a two-stage unsupervised de-interference method to learn genuine rPPG signals. In the first stage, we estimate initial rPPG signals by contrastive learning on both the training data and their augmented counterparts. In the second stage, we use the estimated interference features to derive de-interfered rPPG features and encourage the rPPG signals to be distinct from the interference. In addition, we propose effective descriptive rPPG feature learning by developing a 3D Learnable Descriptive Convolution (3DLDC) that captures subtle chrominance changes to enhance rPPG estimation. Extensive experiments on five rPPG benchmark datasets demonstrate that the proposed DD-rPPGNet outperforms previous unsupervised rPPG estimation methods and achieves performance competitive with state-of-the-art supervised rPPG methods. The code is available at: https://github.com/Pei-KaiHuang/TIFS2025-DD-rPPGNet
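
The abstract does not detail the 3DLDC operator, so the following is only a rough, hypothetical illustration of the general idea behind a learnable descriptive 3D convolution: a vanilla 3D convolution mixed with a difference-based response through a learnable weight. The class name LDC3d, the single mixing parameter theta, and the central-difference form are assumptions borrowed from the central-difference convolution family, not the authors' implementation; see the linked repository for the actual 3DLDC.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LDC3d(nn.Module):
    """Sketch of a descriptive 3D convolution over (B, C, T, H, W) clips."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size,
                              padding=padding, bias=False)
        # Learnable mixing weight between the vanilla response and the
        # difference-based response (a single scalar here for simplicity).
        self.theta = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        out = self.conv(x)  # vanilla 3D convolution
        # Central-difference term: convolving with the kernel weights summed
        # over the spatio-temporal window is equivalent to a 1x1x1 convolution.
        w_sum = self.conv.weight.sum(dim=(2, 3, 4), keepdim=True)
        out_diff = F.conv3d(x, w_sum)
        return out - self.theta * out_diff
```

For example, a clip tensor of shape (2, 3, 16, 64, 64) passed through LDC3d(3, 16) yields features of shape (2, 16, 16, 64, 64); when theta is zero, the layer reduces to a plain 3D convolution.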
Journal
IEEE Transactions on Information Forensics and Security
Category: Engineering, Electrical & Electronic
CiteScore: 14.40
Self-citation rate: 7.40%
Annual article count: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.