Deep Audio Spectral Processing for Respiration Rate Estimation from Smart Commodity Earbuds
M. Y. Ahmed, Tousif Ahmed, Md. Mahbubur Rahman, Zihan Wang, Jilong Kuang, A. Gao
2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN), published 2022-09-27
DOI: 10.1109/BSN56160.2022.9928461 (https://doi.org/10.1109/BSN56160.2022.9928461)
Citations: 1
Abstract
Respiration rate is an important health biomarker and a vital indicator of health and fitness. With smart earbuds gaining popularity as commodity devices, recent works have demonstrated the potential for monitoring breathing rate with such earable devices. In this work, we utilize deep image recognition techniques, for the first time, to infer respiration rate from earbud audio. We use spectrogram images of breathing-cycle audio signals captured with Samsung earbuds as spectral features to train a deep convolutional neural network. Using novel earbud audio data collected from 30 subjects, covering both controlled breathing over a wide range (from 5 up to 45 breaths per minute) and uncontrolled natural breathing from a 7-day home deployment, our experiments show that the model outperforms existing earbud-based methods for inferring respiration rate from regular-intensity and heavy breathing sounds, achieving an aggregated MAE of 0.77 for controlled breathing and 0.99 for at-home natural breathing.
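To make the described pipeline concrete, the sketch below shows one plausible way to turn a breathing-audio window into a spectrogram "image" and regress a respiration rate with a small CNN. This is an illustrative assumption, not the authors' released code: the library choices (librosa, PyTorch), the 16 kHz sample rate, the 30-second window, the mel parameters, and the network layout are all hypothetical stand-ins for the unspecified details of the paper's model.

```python
# Hypothetical sketch (not the paper's actual model): breathing audio ->
# log-mel spectrogram image -> small CNN regressor for breaths per minute.
import numpy as np
import librosa
import torch
import torch.nn as nn

SAMPLE_RATE = 16000      # assumed earbud microphone sample rate
WINDOW_SEC = 30          # assumed analysis window length

def audio_to_logmel(audio: np.ndarray, sr: int = SAMPLE_RATE) -> torch.Tensor:
    """Compute a log-mel spectrogram and return it as a 1 x mels x frames tensor."""
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=1024,
                                         hop_length=256, n_mels=64)
    logmel = librosa.power_to_db(mel, ref=np.max)
    return torch.from_numpy(logmel).float().unsqueeze(0)

class RespRateCNN(nn.Module):
    """Minimal CNN regressor: spectrogram image -> respiration rate (scalar)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.head(z)

if __name__ == "__main__":
    # Synthetic stand-in for a 30-second earbud recording.
    audio = np.random.randn(SAMPLE_RATE * WINDOW_SEC).astype(np.float32)
    spec = audio_to_logmel(audio).unsqueeze(0)   # add batch dimension
    model = RespRateCNN()
    rate_bpm = model(spec)                        # untrained, so output is arbitrary
    print("predicted respiration rate:", rate_bpm.item())
```

In practice such a model would be trained with an L1 or MSE loss against ground-truth breaths-per-minute labels; the aggregated MAE figures reported in the abstract correspond to that kind of regression error on held-out data.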