{"title":"COVID-19 Detection Exploiting Self-Supervised Learning Representations of Respiratory Sounds","authors":"Adria Mallol-Ragolta, Shuo Liu, B. Schuller","doi":"10.1109/BHI56158.2022.9926967","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926967","url":null,"abstract":"In this work, we focus on the automatic detection of COVID-19 patients from the analysis of cough, breath, and speech samples. Our goal is to investigate the suitability of Self-Supervised Learning (SSL) representations extracted using Wav2Vec 2.0 for the task at hand. For this, in addition to the SSL representations, the models trained exploit the Low-Level Descriptors (LLD) of the eGeMAPS feature set, and Mel-spectrogram coefficients. The extracted representations are analysed using Convolutional Neural Networks (CNN) reinforced with contextual attention. Our experiments are performed using the data released as part of the Second Diagnosing COVID-19 using Acoustics (DiCOVA) Challenge, and we use the Area Under the Curve (AUC) as the evaluation metric. When using the CNNs without contextual attention, the multi-type model exploiting the SSL Wav2Vec 2.0 representations from the cough, breath, and speech sounds scores the highest AUC, 80.37 %. When reinforcing the embedded representations learnt with contextual attention, the AUC obtained using this same model slightly decreases to 80.01 %. 
The best performance on the test set is obtained with a multi-type model fusing the embedded representations extracted from the LLDs of the cough, breath, and speech samples and reinforced using contextual attention, scoring an AUC of 81.27 %.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124399190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
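The contextual attention mechanism mentioned in the abstract above weights the frame-level embedded representations before pooling them into a single vector. A minimal pure-Python sketch of such attention pooling (the context vector `w`, the dot-product scoring, and all shapes are illustrative assumptions, not the authors' implementation):

```python
import math

def contextual_attention_pool(frames, w):
    """Pool a sequence of frame embeddings into one vector: score each
    frame against a context vector w, softmax-normalise the scores, and
    return the attention-weighted sum of the frames."""
    scores = [sum(f_i * w_i for f_i, w_i in zip(f, w)) for f in frames]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]   # attention weights, sum to 1
    dim = len(frames[0])
    return [sum(a * f[d] for a, f in zip(alphas, frames)) for d in range(dim)]
```

In a trained model the context vector is learnt jointly with the network, so frames carrying more class-relevant information receive larger weights.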
{"title":"Machine Learning-based Detection of In-Utero Fetal Presentation from Non-Invasive Fetal ECG","authors":"Katharina M. Jaeger, Michael Nissen, R. Richer, Simone Rahm, Adriana Titzmann, P. Fasching, Bjoern M. Eskofier, Heike Leutheuser","doi":"10.1109/BHI56158.2022.9926804","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926804","url":null,"abstract":"Preterm births account for more than 10 % of all newborns. An adverse fetal presentation is a risk factor for intrapartum and neonatal mortality. To date, no technology enables a longitudinal, ubiquitous, and unobtrusive monitoring of fetal presentation. This study presents a first approach to fetal orientation detection based on non-invasive fetal electrocardiography (NI-fECG) using the non-invasive multi-modal foetal ECG-Doppler data set for antenatal cardiology research. The data set contains 60 recordings from 39 pregnant women (21–27 weeks), including NI-fECG and ultrasound position ground truth. We evaluated both handcrafted and generic features for five different classifiers (k-Nearest-Neighbor, Decision Tree Classifier, Support Vector Classification, AdaBoost Classifier, and Multilayer Perceptron) using cross-validation on subject splits on a cleaned subset. Best results for the distinction between vertex (head down) and breech (head up) were achieved using an AdaBoost classifier with a balanced accuracy of 86.5 ± 15.0 %. With this work, we take a first step towards longitudinal fetal presentation monitoring, which contributes to a better understanding of reduced fetal movements and extends the potential applications of NI-fECG in prenatal care. 
In future work, we will expand our classification system to detect more detailed fetal presentations using a newly created data set.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130321269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
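The headline result above is reported as balanced accuracy, i.e. the unweighted mean of per-class recalls, which is robust to the class imbalance between vertex and breech recordings. A small self-contained sketch of the metric (not the authors' code):

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy: the unweighted mean of per-class recalls,
    so a majority class cannot dominate the score."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]   # samples of class c
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

On a perfectly balanced test set this coincides with plain accuracy; under imbalance it penalises models that ignore the minority class.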
{"title":"Performance vs. Privacy: Evaluating the Performance of Predicting Second Primary Cancer in Lung Cancer Survivors with Privacy-preserving Approaches","authors":"Jui-Fu Hong, Y. Tseng","doi":"10.1109/BHI56158.2022.9926935","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926935","url":null,"abstract":"Deep learning has been widely used in the medical field to support medical decision making. Simultaneously, with the rise of data privacy protection, accessing clinical records across different institutions has become a possible challenge. Several approaches, such as federated and transfer learning, have been proposed to train models without accessing all the records from each institution, but the performance of these privacy-preserved models may not be as good as centralized approaches, which aggregate all records to build a centralized model. To explore the potential of privacy-preserving second primary cancer (SPC) prediction of lung cancer survivors using real-world data, we evaluated the performance of federated learning, transfer learning, learning with a single institution, and traditional centralized learning. We trained machine learning models using data from four hospitals and compared the model performances of learning from a single institution, centralized learning, federated learning, and transfer learning. The results show that federated learning outperformed other learning strategies in three of the four sites (AUROC from 0.733 to 0.777). However, only Site 6 showed that federated learning significantly outperformed all the other learning strategies (P < 0.05). 
In summary, federated learning can build a unified model across multiple institutions while maintaining data security.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131706749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
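The federated strategy compared above never shares raw records: each site trains locally and only model parameters are aggregated. A toy sketch of a FedAvg-style aggregation step (flat weight lists and sample-count weighting are simplifying assumptions; the paper does not specify its aggregation details):

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model parameters (flat lists of floats) into
    a global model, weighting each client by its local sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]
```

In practice this averaging step alternates with rounds of local training at every participating hospital.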
{"title":"One-side Virtual Histological Staining Model for Complex Human Samples","authors":"Lulin Shi, Ivy H. M. Wong, Claudia T. K. Lo, T. T. Wong","doi":"10.1109/BHI56158.2022.9926959","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926959","url":null,"abstract":"Virtual histological staining technique with a label-free auto-fluorescence image as an input is a challenging scientific pursuit to visualize complicated biological structures with distinct features. Recently, most of the related methods follow the two-side image translation architecture to get rid of paired data restriction, which is necessary for unprocessed and thick tissue virtual histological staining style transformation. However, the associated cycle consistency loss will inevitably lead to huge calculation consumption and cannot address the problem of incorrect translation among intracellular and extracellular components, which we termed as incorrect staining. In this paper, we propose a novel and computational-efficient one-side image translation framework to transfer unstained auto-fluorescence images into virtual hematoxylin- and eosin-stained counterparts for both thin and thick human samples. To address the incorrect nuclear staining issue, we design a region-classification loss to incorporate partial supervision information. 
Experimental data on both thin and thick human lung samples demonstrate that our method is computationally efficient while achieving transformation performance comparable to the two-side framework.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126494948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
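The region-classification loss above is described only at a high level; one plausible reading is a per-pixel cross-entropy that penalises translated pixels whose predicted region class disagrees with a coarse region mask. A hypothetical sketch under that assumption (the class layout and names are invented for illustration, not the paper's formulation):

```python
import math

def region_classification_loss(pred_probs, region_labels):
    """Hypothetical partial-supervision term: for each pixel, pred_probs
    holds predicted region-class probabilities (e.g. 0 = cytoplasm,
    1 = nucleus) and region_labels the coarse mask label; the loss is
    the mean cross-entropy over pixels."""
    eps = 1e-12  # guard against log(0)
    losses = [-math.log(max(p[y], eps)) for p, y in zip(pred_probs, region_labels)]
    return sum(losses) / len(losses)
```

Such a term supplies the "partial supervision" the abstract mentions without requiring pixel-aligned stained/unstained image pairs.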
{"title":"Interpretability with Relevance Aggregation in Neural Networks for Absenteeism Prediction","authors":"Julio Marcos Gomes Junior, Fabricio M. Lopes","doi":"10.1109/BHI56158.2022.9926870","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926870","url":null,"abstract":"The lack of attendance of employees is called absenteeism and occurs for various reasons, such as vigorous physical activity, advanced age and high psychological demands of the work. The absenteeism affects the direct and indirect costs of the companies, and may reach 15% of the payroll. Therefore, it is fundamental to know its main causes and contribute to control and mitigation strategies. Neural networks have been successfully applied in the classification of several problems, but they are black boxes, because they do not explain which aspects are considered in their decisions. This aspect is very important in health applications, in which it is necessary to explain and clearly interpret the results. In this context, this work presents an approach to classify absenteeism through neural networks and Layer-wise relevance propagation (LRP) aggregation in order to identify the most relevant features and to assign relevance scores individually per class and among all classes. The proposed approach was assessed by considering a dataset widely used as a benchmark and compared to the existing literature methods. The proposed approach presented the highest assertiveness rates among the compared methods, reaching an average accuracy of 0.83, identifying the most relevant features for the classification of absenteeism through a relevance score. 
Therefore, the results make the causes of each class of absenteeism interpretable, which contributes to human resources management, occupational medicine, and the development of mitigation strategies.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134278325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
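Aggregating per-sample LRP scores into per-class feature relevance, as the abstract describes, can be sketched as a simple mean of relevance vectors over the samples of each class (a schematic reading, not the authors' exact aggregation rule):

```python
def aggregate_relevance(sample_relevances, sample_classes, n_classes):
    """Aggregate per-sample feature relevance scores (as produced by LRP)
    into a mean relevance per feature, separately for each class."""
    n_features = len(sample_relevances[0])
    sums = [[0.0] * n_features for _ in range(n_classes)]
    counts = [0] * n_classes
    for rel, c in zip(sample_relevances, sample_classes):
        counts[c] += 1
        for i, r in enumerate(rel):
            sums[c][i] += r
    return [
        [s / counts[c] if counts[c] else 0.0 for s in sums[c]]
        for c in range(n_classes)
    ]
```

Ranking the resulting per-class vectors then yields the most relevant features for each absenteeism class.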
{"title":"Classification of Video Capsule Endoscopy Images Using Visual Transformers","authors":"Daniel Lopes Soares Lima, A. Pessoa, A. C. D. Paiva, António Cunha, Geraldo Braz Júnior, J. Almeida","doi":"10.1109/BHI56158.2022.9926791","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926791","url":null,"abstract":"Cancers related to the gastrointestinal tract have a high incidence rate in the population, with a high mortality rate. Videos obtained through endoscopic capsules are essential for evaluating anomalies that can progress to cancer. However, due to their duration, which can reach 10 hours, they demand great attention from the medical specialist in their analysis. Machine learning techniques have been successfully applied in developing computer-aided diagnostic systems since the 1990s, where Convolutional Neural Networks (CNNs) have become very successful for pattern recognition in images. CNNs use convolutions to extract features from the analyzed data, operating in a fixed-size window and thus having problems capturing pixel-level relationships considering the spatial and temporal domains. Otherwise, transformers use attention mechanisms, where data is structured in a vector space that can aggregate information from adjacent data to determine meaning in a given context. This work proposes a computational method for analyzing images extracted from videos obtained by endoscopic capsules, using a transformer-based model that helps diagnose of gastrointestinal tract abnormalities. Preliminary results are promising. 
The classification task of 11 classes evaluated on the publicly available Kvasir-Capsule dataset yielded average values of 99.70% accuracy, 99.64% precision, 99.86% sensitivity, and 99.54% F1-score.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128764777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
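The transformer model above operates on image tokens rather than sliding convolutional windows; the first step is to split each frame into fixed-size patches that become the token sequence. A minimal sketch of that tokenisation step (patch size and raster ordering are generic ViT conventions, not details from the paper):

```python
def image_to_patches(image, patch):
    """Split an H x W image (list of rows) into non-overlapping
    patch x patch blocks, each flattened to a vector, in raster order --
    the tokenisation step of a Vision Transformer."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - h % patch, patch):
        for c in range(0, w - w % patch, patch):
            patches.append([image[r + i][c + j]
                            for i in range(patch) for j in range(patch)])
    return patches
```

Each flattened patch is then linearly projected and fed, with positional embeddings, into the transformer encoder.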
{"title":"Continuous Human Activity Recognition and Step-Time Variability Analysis with FMCW Radar","authors":"S. Gurbuz, Mohammad Mahbubur Rahman, Emre Kurtoğlu, D. Martelli","doi":"10.1109/BHI56158.2022.9926892","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926892","url":null,"abstract":"Human activity recognition (HAR) and gait analysis are important functions that support aging-in-place and remote health monitoring. Although there have been many works investigating HAR with radar based on single-activity snapshots in time, few works address recognition in continuous streams of radio frequency (RF) data, where in daily life many different activities are conducted. This work proposes a novel variable window averaging method to segment RF data streams containing a mixture of large-scale gross motor activities as well as fine-grain hand gestures, a physics-aware generative adversarial network (PhGAN) to recognize daily activities, and a new technique to estimate step-time variability from RF data. Our results show that extraction of motion detected intervals and GAN-synthesized samples during training boosts HAR accuracy, while the estimation variance of time-step variability from radar compares well with that obtained from a Vicon motion capture system.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134123724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling of Plaque Progression in the Carotid Artery Using Coupled Agent Based with Finite Element Method","authors":"N. Filipovic, Smiljana Tomasevic, Andjela Blagojević, Branko Arsić, Miloš Anić, T. Djukić","doi":"10.1109/BHI56158.2022.9926817","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926817","url":null,"abstract":"In study, we presented a new computational model for atheromatic plaque growth progression in the carotid artery using specialized mathematical models and computational simulations which will enable the accurate prediction of the cardiovascular disease evolution. The simulated model with coupled Agent Based Method (ABM) and Finite Element Method (FEM) has been presented. The ABM was coupled with an initial WSS profile, which triggers a pathologic vascular remodeling by perturbing the baseline cellular activity and favoring lipid infiltration and accumulation within the arterial wall. The ABM model takes shear stress and LDL initial distribution from the lumen and starts iterative calculation inside the wall for lipid infiltration and accumulation using a random number generator for each time step. After ABM iterations, both wall lipid distribution and wall geometry are changed. This directly influences the wall artery geometry which is also modeled with finite element, with ABM elements inside these large finite elements. Then, fluid-structure solver is running and lumen domain is calculated again. The change of the shape of the cross-sections of the arterial wall is shown in three specific moments in time (baseline, after 3 months and after 6 months). 
A main advantage of this new approach is the use of a realistic 3D reconstructed artery, providing a more realistic, patient-specific simulation of plaque progression.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126520933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
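The ABM iteration described above draws a random number per wall element and time step to decide lipid accumulation, with low wall shear stress favoring infiltration. A deliberately toy sketch of one such update (the probability model, `rate`, and `wss_ref` are invented placeholders, not the paper's calibrated model):

```python
import random

def abm_lipid_step(lipid, wss, rng, rate=0.1, wss_ref=1.0):
    """Toy ABM update (hypothetical parameters): each wall element
    accumulates a lipid increment with a probability that grows as the
    local wall shear stress (wss) falls below a reference value."""
    out = []
    for l, s in zip(lipid, wss):
        p = min(1.0, max(0.0, 1.0 - s / wss_ref))  # low WSS -> high probability
        out.append(l + rate if rng.random() < p else l)
    return out
```

In the coupled scheme, many such stochastic steps run between FEM/fluid-structure updates, which then feed a new WSS field back to the ABM.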
{"title":"A Localisation Study of Deep Learning Models for Chest X-ray Image Classification","authors":"James Gascoigne-Burns, Stamos Katsigiannis","doi":"10.1109/BHI56158.2022.9926904","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926904","url":null,"abstract":"Deep learning models have demonstrated superhuman performance in a multitude of image classification tasks, including the classification of chest X-ray images. Despite this, medical professionals are reluctant to embrace these models in clinical settings due to a lack of interpretability, citing being able to visualise the image areas contributing most to a model's predictions as one of the best ways to establish trust. To aid the discussion of their suitability for real-world use, in this work, we attempt to address this issue by conducting a localisation study of two state-of-the-art deep learning models for chest X-ray image classification, ResNet-38-large-meta and CheXNet, on a set of 984 radiologist annotated X-ray images from the publicly available ChestX-ray14 dataset. We do this by applying and comparing several state-of-the-art visualisation methods, combined with a novel dynamic thresholding approach for generating bounding boxes, which we show to outperform the static thresholding method used by similar localisation studies in the literature. 
Results also seem to indicate that localisation quality is more sensitive to the choice of thresholding scheme than the visualisation method used, and that a high discriminative ability as measured by classification performance is not necessarily sufficient for models to produce useful and accurate localisations.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132822163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
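The dynamic thresholding idea above adapts the binarisation level to each saliency map instead of using one fixed value for all images. One simple realisation, binarising at a fraction of the per-image maximum and taking the tight bounding box of the surviving pixels (a plausible sketch; the paper's exact scheme may differ):

```python
def cam_bounding_box(cam, frac=0.5):
    """Binarise a class activation map at a per-image threshold
    (frac * map maximum, rather than a fixed global value) and return the
    tight bounding box (row_min, col_min, row_max, col_max) of the
    surviving pixels."""
    peak = max(max(row) for row in cam)
    thr = frac * peak
    coords = [(r, c) for r, row in enumerate(cam)
              for c, v in enumerate(row) if v >= thr]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))
```

The predicted box can then be scored against radiologist annotations with an overlap measure such as intersection over union.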
{"title":"Class Activation Maps for the disentanglement and occlusion of identity attributes in medical imagery","authors":"Laura Carolina Martínez Esmeral, A. Uhl","doi":"10.1109/BHI56158.2022.9926856","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926856","url":null,"abstract":"Deriving patients' identity from medical imagery threatens privacy, as these data are acquired to support diagnosis but not to reveal identity-related features. Still, for many medical imaging modalities, such identity breaches have been reported. To cope with this, some de-identification methods based on the generation of synthetic data have been explored in the literature. However, in this paper, we try to perform, instead, an occlusion of the personal identifiers directly on the data by means of Class Activation Maps, in such a way that diagnosis related features do not get altered.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127712059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}