Contactless SpO2 Detection from Face Using Consumer Camera
Li Zhu, K. Vatanparvar, Migyeong Gwak, Jilong Kuang, A. Gao
2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN), published 2022-09-27
DOI: 10.1109/BSN56160.2022.9928509
Citations: 1
Abstract
We describe a novel computational framework for contactless oxygen saturation (SpO2) detection using videos of human faces recorded with smartphone cameras under ambient light. For contact pulse oximeters, the standard approach is a ratio-of-ratios (RoR) metric derived from selected regions of interest (ROIs) combined with linear regression modeling. However, when applied to contactless remote PPG (rPPG), the assumptions of this standard approach do not automatically hold: 1) the rPPG signal is usually derived from the face, where light reflection may not be uniform due to variation in skin tissue composition and/or lighting conditions (moles, hair, beard, partial shadowing, etc.); 2) for most consumer-level cameras under ambient light, the rPPG signal is converted from light reflection associated with wide-band spectra, which introduces complicated nonlinearity into the SpO2 mapping. We propose a computational framework that overcomes these challenges by 1) determining and dynamically tracking ROIs according to both spatial and color proximity, and calculating the RoR from selected individual ROIs that exhibit homogeneous skin reflections, and 2) using a nonlinear machine learning model to map SpO2 levels from RoRs derived from two different color combinations. We validated the framework with 30 healthy participants during various breathing tasks and achieved a Root Mean Square Error of 1.24% for the across-subject model and 1.06% for the within-subject models, surpassing the accuracy requirement of the FDA-recognized ISO 80601-2-61:2017 standard.
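For context, the ratio-of-ratios metric referenced above is conventionally defined, for a pair of color channels, as the ratio of the pulsatile (AC) to steady (DC) component in one channel divided by the same ratio in the other, with contact oximeters then applying a linear calibration. The formulation below is the standard textbook one, not notation taken from the paper itself:

\[
\mathrm{RoR} \;=\; \frac{AC_{\lambda_1}/DC_{\lambda_1}}{AC_{\lambda_2}/DC_{\lambda_2}},
\qquad
\mathrm{SpO_2} \;\approx\; a - b\,\mathrm{RoR},
\]

where \(\lambda_1\) and \(\lambda_2\) are the two wavelengths (or camera color channels) and \(a\), \(b\) are empirical calibration constants. The paper's approach replaces this linear map with a nonlinear learned model over RoRs from two color combinations and restricts the AC/DC estimation to homogeneous skin ROIs.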
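A minimal sketch of how such a pipeline could be assembled, assuming per-ROI RGB time series have already been extracted from the tracked face regions. The frame rate, cardiac band, red/blue and red/green color combinations, and the gradient-boosting regressor are illustrative assumptions, not details given in the abstract:

# Hypothetical sketch of the RoR-plus-nonlinear-regression idea described in the
# abstract. ROI tracking, the exact color combinations, and the learning model are
# NOT specified by the paper; the choices below are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import GradientBoostingRegressor

FS = 30.0  # assumed camera frame rate (Hz)

def bandpass(x, lo=0.7, hi=4.0, fs=FS, order=3):
    # Keep the cardiac band (~42-240 bpm) of one per-ROI channel trace.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def ratio_of_ratios(chan_a, chan_b):
    # RoR = (AC_a/DC_a) / (AC_b/DC_b) for two color-channel traces of one ROI.
    ac_a, dc_a = np.std(bandpass(chan_a)), np.mean(chan_a)
    ac_b, dc_b = np.std(bandpass(chan_b)), np.mean(chan_b)
    return (ac_a / dc_a) / (ac_b / dc_b)

def roi_features(roi_rgb):
    # Two RoR features per ROI from two color combinations (red/blue, red/green here).
    r, g, b = roi_rgb[:, 0], roi_rgb[:, 1], roi_rgb[:, 2]
    return [ratio_of_ratios(r, b), ratio_of_ratios(r, g)]

def video_features(rois):
    # Aggregate RoR features over the ROIs retained as homogeneous skin regions.
    return np.mean([roi_features(roi) for roi in rois], axis=0)

# Training on (video clip, reference SpO2) pairs; the data below is synthetic.
rng = np.random.default_rng(0)
train_rois = [[rng.normal(120, 2, size=(300, 3)) for _ in range(5)] for _ in range(40)]
train_spo2 = rng.uniform(90, 99, size=40)

X = np.array([video_features(r) for r in train_rois])
model = GradientBoostingRegressor().fit(X, train_spo2)  # nonlinear RoR -> SpO2 mapping
print(model.predict(X[:3]))

In practice, the reported across-subject versus within-subject results suggest the regressor can either be trained once on a pooled cohort or calibrated per user; the sketch above corresponds to the pooled (across-subject) setting.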