Title: Estimating blood pressure using video-based PPG and deep learning
Authors: Gianluca Zaza, Gabriella Casalino, Sergio Caputo, Giovanna Castellano
Journal: Image and Vision Computing, vol. 162, Article 105683 (JCR Q2, Computer Science, Artificial Intelligence)
Publication date: 2025-08-25
DOI: 10.1016/j.imavis.2025.105683
URL: https://www.sciencedirect.com/science/article/pii/S0262885625002719
Citations: 0
Abstract
This paper introduces a novel pipeline for estimating systolic and diastolic blood pressure using remote photoplethysmographic (rPPG) signals derived from video recordings of subjects’ faces. The pipeline consists of three main stages: rPPG signal extraction, denoising to transform the rPPG signal into a PPG-like waveform, and blood pressure estimation. This approach directly addresses the current lack of datasets that simultaneously include video, rPPG, and blood pressure data. To overcome this, the proposed pipeline leverages the extensive availability of PPG-based blood pressure estimation techniques, in combination with state-of-the-art algorithms for rPPG extraction, enabling the generation of reliable PPG-like signals from video input.
To validate the pipeline, we conducted comparative analyses with state-of-the-art methods at each stage and collected a dedicated dataset through controlled laboratory experimentation. The results demonstrate that the proposed solution effectively captures blood pressure information, achieving a mean error of 9.2 ± 11.3 mmHg for systolic and 8.6 ± 9.1 mmHg for diastolic blood pressure. Moreover, the denoised rPPG signals show a strong correlation with conventional PPG signals, supporting the reliability of the transformation process. This non-invasive and contactless method offers considerable potential for long-term blood pressure monitoring, particularly in Ambient Assisted Living (AAL) systems, where unobtrusive and continuous health monitoring is essential.
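The three stages of the abstract's pipeline can be illustrated with a minimal sketch. This is not the authors' implementation: the rPPG extraction here is a simple green-channel spatial average, the denoising stage is a stand-in FFT band-pass to the physiological heart-rate band, and the deep blood-pressure regressor is left as a placeholder callable. All function names and parameters are assumptions for illustration only.

```python
import numpy as np

def extract_rppg(frames):
    """Stage 1 (illustrative): spatially average the green channel of a
    face ROI in each video frame to obtain a raw rPPG trace."""
    return np.array([f[..., 1].mean() for f in frames])

def denoise_to_ppg_like(rppg, fps=30.0, low=0.7, high=3.0):
    """Stage 2 (illustrative stand-in): keep only the ~0.7-3.0 Hz band
    (roughly 42-180 bpm) via FFT masking to yield a PPG-like waveform."""
    n = len(rppg)
    spectrum = np.fft.rfft(rppg - rppg.mean())
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n)

def estimate_bp(ppg_like, model):
    """Stage 3: feed the cleaned waveform to a PPG-based blood-pressure
    regressor; the paper uses a deep model, `model` is a placeholder."""
    return model(ppg_like)

# Synthetic demo: 10 s of 8x8 RGB frames whose brightness pulses at 1.2 Hz.
fps, secs = 30, 10
t = np.arange(fps * secs) / fps
frames = [np.full((8, 8, 3), 128.0) + 5.0 * np.sin(2 * np.pi * 1.2 * ti)
          for ti in t]
rppg = extract_rppg(frames)
ppg_like = denoise_to_ppg_like(rppg, fps=fps)

# The dominant frequency of the cleaned signal recovers the pulse rate.
freqs = np.fft.rfftfreq(len(ppg_like), d=1.0 / fps)
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(ppg_like)))]
print(round(peak_hz, 1))  # → 1.2
```

In practice, stage 2 in the paper is a learned denoiser that maps the noisy rPPG trace toward a contact-PPG morphology, which is what allows existing PPG-based blood-pressure estimators to be reused in stage 3.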
Journal Description:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.