Annual International Conference of the IEEE Engineering in Medicine and Biology Society: Latest Articles

Personalized Data-Driven State Models of the Circadian Dynamics in a Biometric Signal.
Chukwuemeka O Ike, Yunshi Wen, John T Wen, Meeko M K Oishi, Lee K Brown, A Agung Julius
DOI: 10.1109/EMBC53108.2024.10782819 · EMBC 2024, pp. 1-5 · July 2024

Abstract: Circadian rhythms are endogenous 24-hour oscillations that are vital for maintaining our overall well-being. They are driven at a high level by a core circadian clock located in the brain, making their dynamics difficult to track. Various modeling approaches exist to predict these dynamics, but because the models are typically designed on population-level data, their performance is diminished at the individual level. This paper proposes a method for learning personalized latent-state models, i.e., dynamical models that explicitly use latent state variables, that relate circadian input(s) to observable biometric signals. Our models combine an autoencoder with a recurrent neural network and use the pair to model the salient dynamics present in the data. We validate our method using experimental data in which the circadian input is light and the biometric data are actigraphy signals. We demonstrate that our method yields models with a low-dimensional latent state that can accurately reconstruct and predict the observable biometric signals. Further, we show that the oscillation of the learned latent state agrees with the subjects' circadian clock oscillation as estimated from melatonin measurements. Clinical relevance: This work proposes a technique for personalized modeling of the circadian system, with potential applications in feedback control and individualized circadian studies.
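The pipeline the abstract describes (encode a window of the biometric signal into a low-dimensional latent state, evolve that state with a recurrent model driven by the light input, and decode back to the observable signal) can be sketched as below. This is a minimal stand-in with random weights, not the authors' trained model; every dimension and the light schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): a 24-sample actigraphy
# window, a 2-D latent state, and a scalar light input.
obs_dim, latent_dim, input_dim = 24, 2, 1

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(size=(latent_dim, obs_dim)) * 0.1   # encoder
W_dec = rng.normal(size=(obs_dim, latent_dim)) * 0.1   # decoder
A = rng.normal(size=(latent_dim, latent_dim)) * 0.1    # latent dynamics
B = rng.normal(size=(latent_dim, input_dim)) * 0.1     # light input map

def encode(y):
    return W_enc @ y

def decode(z):
    return W_dec @ z

def step(z, u):
    # One recurrent update: the latent state is driven by the light input.
    return np.tanh(A @ z + B @ u)

# Roll the model forward over a day of hourly light inputs.
y0 = rng.normal(size=obs_dim)          # initial actigraphy window
z = encode(y0)
for hour in range(24):
    light = np.array([1.0 if 8 <= hour < 20 else 0.0])  # crude light schedule
    z = step(z, light)

y_hat = decode(z)
print(z.shape, y_hat.shape)
```

In the trained version, the encoder/decoder pair and the recurrent map would be fit jointly so that the low-dimensional state both reconstructs the signal and oscillates with the circadian phase.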

Effect Investigation of Mask on Facial Expression Recognition Using Cerebral Evoked Potentials.
Baijun Song, Tomohiko Igasaki, Saori Nishikawa
DOI: 10.1109/EMBC53108.2024.10782072 · EMBC 2024, pp. 1-4 · July 2024

Abstract: There is little physiological evidence on the mechanisms of facial expression recognition, which involves visual processing, feature extraction, emotion processing, and cognitive integration. This study therefore uses evoked potentials to investigate the cerebral mechanisms underlying facial expression recognition and how face masks affect these processes. We recorded behavioral and electrophysiological responses to facial expression stimuli in 12 subjects: behavioral measures (accuracy and response time) and evoked potentials at 19 sites, for each type of facial expression with and without a mask. The latencies and amplitudes of the five components (P1, N1, P2, N2, and P3) were analyzed at each site. For the behavioral measures, repeated two-way analysis of variance (ANOVA) showed that accuracy was significantly affected by the type of facial expression (p < 0.001) and the presence of a mask (p < 0.01), with an interaction effect (p < 0.001). Response time was significantly affected by the type of facial expression (p < 0.001) and by the interaction (p < 0.01), but not by the presence of a mask (p > 0.05). For the evoked potentials, there were no significant differences in latency or amplitude at any site or component by type of facial expression (p > 0.05, one-way ANOVA). Nevertheless, there were significant differences in latency and amplitude at many sites and components with versus without masks (p < 0.05, paired t-test). In addition, repeated two-way ANOVA revealed an interaction effect (p < 0.05) for N1 latency at F3, T3, C3, and Pz. Taken together, these results suggest that the presence of a mask affects cognitive processing, and that the interaction of mask and facial expression type affects central resources, from both a behavioral and an electrophysiological point of view.
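The with/without-mask comparisons above rely on paired t-tests. As a reminder of what that computes, here is the paired t statistic in plain NumPy on hypothetical latency data; the 12-subject size matches the study, but the numbers themselves are invented.

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic: mean within-subject difference divided by its
    standard error (pure-NumPy version of a paired t-test)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(7)
# Hypothetical N1 latencies (ms) for 12 subjects; the ~6 ms mask offset is
# invented for illustration, not a value from the paper.
no_mask = rng.normal(120, 8, size=12)
with_mask = no_mask + rng.normal(6, 3, size=12)

t_stat = paired_t(with_mask, no_mask)
print(f"t = {t_stat:.2f}  (two-tailed critical value at p = 0.05, df = 11: 2.201)")
```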

Deep Learning-based Open-set Person Identification using Radar Extracted Cardiac Signals.
Zelin Xing, Mondher Bouazizi, Tomoaki Ohtsuki
DOI: 10.1109/EMBC53108.2024.10782527 · EMBC 2024, pp. 1-4 · July 2024

Abstract: Person identification based on radar-extracted vital signs has become increasingly popular due to its non-contact measurement capability. This paper introduces a novel deep learning-based person identification algorithm leveraging radar-extracted vital signs. While current studies mainly focus on close-set conditions with consistent training and testing categories, real-world scenarios often involve open-set conditions, in which the testing data contain categories unseen during training. The algorithm extracts heart pulse signals from Doppler radar echoes, trains two Convolutional Neural Network (CNN)-based models using transfer learning, and uses a distribution model for calibration. By combining the models' outputs through a strategic decision-making process, we achieve superior person identification results. Experimental results on a public radar vital signs dataset demonstrate an identification accuracy of 99.61% in close-set conditions and 94.35% in open-set conditions, surpassing existing approaches.
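The open-set step in this abstract (accept a predicted identity only when the model is confident enough, otherwise flag an unknown person) is commonly implemented as a confidence threshold on calibrated class probabilities. A minimal sketch of that generic idea, not the authors' distribution-model calibration:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # shift for numerical stability
    return e / e.sum()

def open_set_decision(logits, threshold=0.9):
    """Accept the top class only if its confidence clears the threshold;
    otherwise reject as an unknown (open-set) identity. The threshold
    value is an illustrative choice."""
    p = softmax(np.asarray(logits, dtype=float))
    k = int(p.argmax())
    return k if p[k] >= threshold else -1  # -1 marks "unknown person"

print(open_set_decision([8.0, 0.5, 0.2]))   # confident: accepted as class 0
print(open_set_decision([1.0, 0.9, 0.8]))   # ambiguous: rejected as unknown
```

In practice the threshold is tuned on held-out data to trade off false accepts of unknown persons against false rejects of enrolled ones.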

Computational analysis of light diffusion and thermal effects during Transcranial Photobiomodulation.
Alexander R Guillen, Dennis Q Truong, Paula Cristina Faria, Brian Pryor, Luis De Taboada, Abhishek Datta
DOI: 10.1109/EMBC53108.2024.10782579 · EMBC 2024, pp. 1-4 · July 2024

Abstract: Transcranial photobiomodulation (tPBM) is a non-invasive procedure in which light is applied to the scalp to modulate underlying brain activity. tPBM has recently attracted immense interest as a potential therapeutic option for a range of neurological and neuropsychiatric conditions. The common technological questions for this modality are the extent of light penetration and the associated scalp and brain temperature increases. Previous computational efforts to quantify these aspects have been restricted to simplified models. We consider here a 3D, high-resolution (1 mm), anatomically realistic model to simulate light propagation and thermal effects. We apply a dose of 100 mW/cm² using a single light source targeting the F3 location of the 10-20 EEG system. Our simulations reveal that while the induced irradiance distribution largely mimics the shape and extent of the source, there is a blurring effect at the brain. This diffusion is attributed to the scalp and skull and is compounded at the surface of the cerebrospinal fluid. Around 1% of the injected irradiance reaches the gray matter. As expected, and in line with previous efforts, the scalp accounts for the greatest loss (~65%). We observe a nominal 0.38 °C temperature rise in the scalp in regions directly underneath the source, and negligible temperature rise in the brain. Finally, irradiance falls to 0.01 mW/cm² at ~13.5 cm from the scalp surface.
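The layered losses reported above can be pictured with a simple Beer-Lambert attenuation model. The thicknesses and effective attenuation coefficients below are not from the paper; they are placeholders tuned so that this toy model reproduces the abstract's headline numbers (~65% lost in the scalp, ~1% reaching gray matter).

```python
import math

I0 = 100.0  # injected irradiance at the scalp, mW/cm^2 (from the abstract)

# (layer, thickness in cm, effective attenuation coefficient in 1/cm).
# Placeholder values chosen to match the abstract's reported fractions;
# real coefficients are wavelength- and tissue-dependent.
layers = [("scalp", 0.5, 2.1), ("skull", 0.7, 4.0), ("CSF", 0.3, 2.5)]

I = I0
for name, thickness, mu_eff in layers:
    I *= math.exp(-mu_eff * thickness)  # Beer-Lambert attenuation per layer
    print(f"after {name}: {I:6.2f} mW/cm^2")

print(f"fraction reaching gray matter: {I / I0:.1%}")
```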

Exploring Gender-Related Variations in Photoplethysmography.
Sara Lombardi, Piergiorgio Francia, Leonardo Bocchi
DOI: 10.1109/EMBC53108.2024.10782441 · EMBC 2024, pp. 1-4 · July 2024

Abstract: Photoplethysmography (PPG) signal analysis is emerging in healthcare applications due to its affordable cost and noninvasiveness. However, PPG is known to be influenced by several factors, potentially including the gender of the subject. This study aims to identify which parameters of the PPG signal vary in relation to gender. We used a machine learning approach to classify the gender of subjects from a mathematical model of the PPG signal. In a cross-validation procedure, our method correctly classified 90 out of 115 subjects (78%). Heart cycle and systolic phase durations, along with variables related to the reflected wave of the PPG signal, emerged as significant parameters. These findings enhance our understanding of gender-related PPG variability and offer potential insights for future clinical applications in cardiovascular monitoring.

Estimating Upper-extremity Function with Raw Kinematic Trajectory Data after Stroke using End-to-end Machine Learning Approach.
Wanyi Qing, Changjie Pan, Jianing Zhang, Chun-Yan Chau, Chun-Hin Mui, Xiaoling Hu
DOI: 10.1109/EMBC53108.2024.10781580 · EMBC 2024, pp. 1-4 · July 2024

Abstract: Although some studies have automatically evaluated post-stroke impairment levels using machine learning (ML) models, few have examined the predictive capability of raw motion data. In this study, we captured kinematic trajectories of the trunk and affected upper limb from 21 patients with chronic stroke performing three reaching tasks. Using ML models, we integrated the recorded trajectories to predict the Fugl-Meyer Assessment of the Upper Extremity (FMA-UE) scores of the patients. A transformer-based model achieved better metrics than a Residual Neural Network (ResNet) and support vector regression (SVR). The trajectories successfully predicted FMA-UE scores, with the forward task (R² = 0.905 ± 0.028) outperforming the vertical task (R² = 0.875 ± 0.019) and the horizontal task (R² = 0.868 ± 0.031). This pilot study demonstrates the capability of raw trajectory data to track individual motor function after stroke and extends its possible application to telerehabilitation.
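The R² values quoted above are the coefficient of determination. For reference, a minimal implementation on hypothetical scores (the FMA-UE is scored on a 0-66 scale; the numbers below are invented, not the study's data):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 minus the ratio of residual to
    total sum of squares."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical FMA-UE scores vs. model predictions.
actual = [22, 35, 48, 60, 30, 41]
predicted = [25, 33, 50, 58, 28, 44]
print(round(r2_score(actual, predicted), 3))  # -> 0.963
```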

Can heart rate variability demonstrate progression of mindfulness through two-week repeated practice?
Yifei Xu, Yanping Wei, Wanlin Chen, Xuanyi Wang, Jing Zheng, Shulin Chen, Hang Chen
DOI: 10.1109/EMBC53108.2024.10782128 · EMBC 2024, pp. 1-5 · July 2024

Abstract: Mindfulness can benefit mental and physical health, and progression in mindfulness can be observed through repeated practice. Beyond self-report questionnaires, heart rate variability (HRV) is a potential biomarker of the effects of mindfulness. However, few studies have examined how HRV changes over repeated practice. This study explores whether HRV can demonstrate progression in mindfulness through repeated practice. Twenty experienced practitioners and 26 novices were enrolled in two weeks of mindfulness practice and completed the Five Facet Mindfulness Questionnaire before and after the training. ECG signals were collected with Holter monitors, covering baseline and training periods, and nine HRV metrics were extracted. The results indicate that the experienced group showed significantly increased parasympathetic activity during mindfulness training, with effects that remained stable across repeated practice, while the novice group showed high cognitive load, with subtle but possibly progressive effects. These findings indicate that HRV can demonstrate progression in mindfulness through repeated practice, suggesting the possibility of assessing mindfulness based on HRV.
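The study's specific nine HRV metrics are not listed in the abstract, but time-domain measures such as SDNN and RMSSD are standard, and RMSSD in particular is commonly read as a parasympathetic index, the quantity reported as increasing in experienced practitioners. A minimal computation on a synthetic RR-interval series:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Two standard time-domain HRV metrics from RR intervals (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat variability,
    often interpreted as a parasympathetic index)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    return {"SDNN": sdnn, "RMSSD": rmssd}

rr = [812, 790, 835, 805, 820, 798, 841]  # synthetic RR series, ms
print(hrv_time_domain(rr))
```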

A signal processing tool for extracting features from arterial blood pressure and photoplethysmography waveforms.
R Pal, A Rudas, S Kim, J N Chiang, M Cannesson
DOI: 10.1109/EMBC53108.2024.10782973 · EMBC 2024, pp. 1-5 · July 2024

Abstract: Arterial blood pressure (ABP) and photoplethysmography (PPG) waveforms contain valuable clinical information and play a crucial role in cardiovascular health monitoring, medical research, and the management of medical conditions. Features extracted from PPG waveforms have clinical applications ranging from blood pressure monitoring to nociception monitoring, while features from ABP waveforms can be used to calculate cardiac output and predict hypertension or hypotension. In recent years, many machine learning models have been proposed that use both PPG and ABP waveform features for these healthcare applications. However, the lack of standardized tools for extracting features from these waveforms could affect their clinical effectiveness. In this paper, we propose an automatic signal processing tool for extracting features from ABP and PPG waveforms. Using the proposed tool, we also generated a PPG feature library from a large perioperative dataset comprising 17,327 patients. This feature library can be used to explore the potential of the extracted features for developing machine learning models for non-invasive blood pressure estimation.
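Typical per-beat PPG features of the kind such a tool extracts (systolic amplitude, rise time, beat duration, pulse area) can be sketched on a synthetic beat as follows. These are generic examples, not the specific feature set of the proposed tool, and the sampling rate is an assumption.

```python
import numpy as np

fs = 100  # Hz, assumed sampling rate

def ppg_beat_features(beat):
    """A few common per-beat PPG features, computed from a single beat
    whose first sample is taken as the pulse foot."""
    beat = np.asarray(beat, dtype=float)
    sys_idx = int(beat.argmax())                  # systolic peak sample
    return {
        "systolic_amplitude": beat[sys_idx] - beat[0],
        "rise_time_s": sys_idx / fs,              # foot-to-peak time
        "beat_duration_s": len(beat) / fs,
        "pulse_area": float(np.sum(beat - beat.min())) / fs,  # rectangle rule
    }

# Synthetic single beat: fast systolic upstroke, slower diastolic bump.
t = np.arange(0, 0.8, 1 / fs)
beat = np.exp(-((t - 0.15) ** 2) / 0.004) + 0.4 * np.exp(-((t - 0.4) ** 2) / 0.02)
print(ppg_beat_features(beat))
```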

A Multimodal Myanmar Emotion Dataset for Emotion Recognition.
Khin Pa Pa Aung, Hao-Long Yin, Tian-Fang Ma, Wei-Long Zheng, Bao-Liang Lu
DOI: 10.1109/EMBC53108.2024.10782660 · EMBC 2024, pp. 1-4 · July 2024

Abstract: Effective emotion recognition is vital for human interaction and has an impact on several fields, such as psychology, the social sciences, human-computer interaction, and emotional artificial intelligence. This study centers on the contribution of a novel Myanmar emotion dataset to enhance emotion recognition technology in diverse cultural contexts. The dataset is derived from a carefully designed emotion elicitation paradigm using 15 video clips per session for three emotions (positive, neutral, and negative), with five clips per emotion. We collected electroencephalogram (EEG) signals and eye-tracking data from 20 subjects, each of whom completed three sessions spaced over several days. Notably, all video clips used in the experiments were rated by Myanmar citizens using the Self-Assessment Manikin scale. We validated the dataset using three baseline unimodal classification methods, two traditional multimodal approaches, and a deep multimodal approach (DCCA-AM), under both subject-dependent and subject-independent settings. Unimodal classification achieved accuracies ranging from 62.57% to 77.05%, while multimodal fusion techniques achieved accuracies ranging from 75.43% to 87.91%. These results underscore the effectiveness of the models and highlight the value of the dataset for cross-cultural applications.

Evaluation of Cough Sound Segmentation Algorithms in the Presence of Background Noise.
Roneel V Sharan, Hao Xiong
DOI: 10.1109/EMBC53108.2024.10782675 · EMBC 2024, pp. 1-4 · July 2024

Abstract: Automated cough sound segmentation is important for the objective analysis of cough sounds. While various cough sound segmentation algorithms have been proposed over the years, it is not clear how they perform in the presence of background noise, which can vary in intensity across environments. In this study, we therefore evaluate the performance of cough sound segmentation algorithms in the presence of background noise. Specifically, we examine algorithms employing conventional feature engineering with machine learning, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and a combination of CNNs and RNNs. The algorithms are developed using relatively clean cough signals but evaluated under both clean and noisy conditions. The results indicate that while the performance of all algorithms declined in the presence of background noise, the combination of CNNs and RNNs yielded the best segmentation results under both clean and noisy conditions. These findings can contribute to the development of noise-robust cough sound segmentation algorithms for objective cough sound analysis in noisy conditions.
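As a baseline of the "conventional feature engineering" flavor the study evaluates, a naive short-time-energy segmenter illustrates both the idea and its noise sensitivity: the dB threshold below is an arbitrary choice, and a rising noise floor quickly erodes the margin between cough and background frames.

```python
import numpy as np

def segment_events(signal, fs, frame_len=0.02, threshold_db=-20.0):
    """Mark frames whose short-time energy is within threshold_db of the
    loudest frame. A toy segmenter: frame length and threshold are
    illustrative, not tuned values."""
    hop = int(frame_len * fs)
    n_frames = len(signal) // hop
    frames = np.reshape(signal[: n_frames * hop], (n_frames, hop))
    energy = np.sum(frames ** 2, axis=1) + 1e-12   # avoid log of zero
    energy_db = 10 * np.log10(energy / energy.max())
    return energy_db > threshold_db                # boolean frame mask

# One second of low-level noise with a 100 ms tone burst as a stand-in cough.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
noise = 0.01 * np.random.default_rng(1).standard_normal(len(t))
burst = np.where((t > 0.4) & (t < 0.5), np.sin(2 * np.pi * 300 * t), 0.0)
mask = segment_events(noise + burst, fs)
print(mask.sum(), "of", len(mask), "frames flagged as cough-like")
```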