Automated Time Synchronization of Cough Events from Multimodal Sensors in Mobile Devices

Tousif Ahmed, M. Y. Ahmed, Md. Mahbubur Rahman, Ebrahim Nemati, Bashima Islam, K. Vatanparvar, Viswam Nathan, Daniel McCaffrey, Jilong Kuang, J. Gao

Proceedings of the 2020 International Conference on Multimodal Interaction, October 21, 2020
DOI: 10.1145/3382507.3418855 (https://doi.org/10.1145/3382507.3418855)
Citations: 15
Abstract
Tracking the type and frequency of cough events is critical for monitoring respiratory diseases. Coughs are among the most common symptoms of respiratory and infectious diseases such as COVID-19, and a cough monitoring system could be vital for remote patient monitoring during a pandemic. While existing solutions for cough monitoring use unimodal approaches (e.g., audio) to detect coughs, fusing multimodal sensors (e.g., audio and accelerometer) from multiple devices (e.g., phone and watch) is likely to reveal additional insights and can help track the exacerbation of respiratory conditions. However, such multimodal and multidevice fusion requires accurate time synchronization, which is challenging for coughs because they are brief events (0.3-0.7 seconds). In this paper, we first demonstrate the challenges of time-synchronizing cough events using cough data collected from two studies. We then evaluate the performance of a cross-correlation-based time synchronization algorithm for aligning cough events. Our algorithm synchronizes 98.9% of cough events across the two devices with an average synchronization error of 0.046 s.
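To illustrate the general idea behind cross-correlation-based time synchronization, the sketch below estimates the offset between two sensor streams by locating the peak of their cross-correlation. This is a minimal, hypothetical example and not the authors' implementation: the function and variable names (estimate_time_offset, phone_audio_env, watch_accel_mag), the common resampling rate, and the synthetic cough burst are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags


def estimate_time_offset(ref_signal, other_signal, fs):
    """Estimate the delay (in seconds) of `other_signal` relative to
    `ref_signal` via the peak of their cross-correlation.

    Both signals are assumed to be resampled to the same rate `fs`
    and to contain the same cough event.
    """
    # Remove DC offsets so the correlation is driven by the cough
    # transient rather than baseline differences between sensors.
    a = ref_signal - np.mean(ref_signal)
    b = other_signal - np.mean(other_signal)

    # Full cross-correlation and the lag (in samples) of its peak.
    xcorr = correlate(a, b, mode="full")
    lags = correlation_lags(len(a), len(b), mode="full")
    best_lag = lags[np.argmax(xcorr)]

    # With scipy's lag convention, a delayed `other_signal` yields a
    # negative peak lag, so negate: a positive return value means
    # `other_signal` lags behind `ref_signal`.
    return -best_lag / fs


# Hypothetical usage: align a phone audio envelope with a watch
# accelerometer magnitude, both resampled to a common rate.
fs = 100  # Hz, assumed common resampling rate
t = np.arange(0, 2.0, 1.0 / fs)
cough = np.exp(-((t - 1.0) ** 2) / 0.005)           # synthetic cough burst at t = 1.0 s
phone_audio_env = cough + 0.01 * np.random.randn(len(t))
watch_accel_mag = np.roll(cough, 5)                  # watch stream delayed by 50 ms

offset = estimate_time_offset(phone_audio_env, watch_accel_mag, fs)
print(f"Estimated offset: {offset:.3f} s")           # ~0.050 s
```

Because a cough lasts only a few hundred milliseconds, the correlation peak is sharp, which is what makes sub-0.1 s alignment of such short events feasible in principle.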