"An Architecture for Metadata-driven Integration of Heterogeneous Sensor and Health Data for Translational Exposomic Research." R. Gouripeddi, Le-Thuy T. Tran, Randy Madsen, Tanvi Gangadhar, Peter Mo, Nicole Burnett, Ryan Butcher, K. Sward, J. Facelli. 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), May 2019. DOI: 10.1109/BHI.2019.8834657.
Abstract: Exposomic research requires generating comprehensive spatio-temporal records of exposures, along with metadata describing the limitations and uncertainties associated with the data. We describe the architecture of a metadata-driven Big Data platform for integrating sensor and health data to support diverse translational exposomic research.

"F-DIT-V: An Automated Video Classification Tool for Facial Weakness Detection." Zhuang Yan, Omar Uribe, Mark M. McDonald, Xuwang Yin, Dhyey Parikh, A. Southerland, G. Rohde. 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), May 2019. DOI: 10.1109/BHI.2019.8834563.
Abstract: Facial weakness is a common presenting sign of several neurological diseases, including stroke, traumatic brain injury (TBI), and Bell's palsy. Tools that improve the accuracy of facial weakness detection can prompt quicker evaluation for these diseases, possibly resulting in earlier diagnoses. In this study, we propose an automated video classification tool, the Facial Deficit Identification Tool for Videos (F-DIT-V), for facial weakness detection. The tool exploits Histogram of Oriented Gradients (HOG) features to detect facial weakness more accurately in a given video. Using experimental data, we demonstrate that F-DIT-V achieves a classification accuracy of 92.9%, precision of 93.6%, recall of 92.8%, and specificity of 94.2%. F-DIT-V achieves higher and more reliable performance than existing methods (e.g., LBP-TOP and RNN-based approaches) widely used in previous and current studies of facial weakness video classification. As the proposed camera-based analysis system requires no extra hardware, F-DIT-V could be implemented in a low-cost, portable, and easy-to-use format that generalizes to real-world settings.

"Neurophysiological Variations in Food Decision-Making within Virtual and Real Environments." Charissa S. L. Cheah, Stephen P. Kaputsos, V. Mandalapu, Truc Tran, Salih Barman, Sarah E. Jung, Kathy T. T. Vu, Travis D. Masterson, Ryan Zuber, L. Boot, Jiaqi Gong. 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), May 2019. DOI: 10.1109/BHI.2019.8834497.
Abstract: Simple lifestyle changes, such as improving one's diet and getting sufficient exercise, could significantly reduce the risk of developing obesity and related diseases such as diabetes, heart disease, stroke, and cancer. However, changing eating behavior is challenging because eating is part of a larger system of human behavior. This paper introduces a pilot study that examined the multifactorial neurophysiological correlates of food decision-making behavior, with potential implications for developing effective treatments for individuals with dysfunctional eating. The experimental protocol was conducted in both virtual reality (VR) and real-life (RL) buffet settings. Eleven participants (aged 18-25 years; mean = 20.45, SD = 2.30) were recruited and equipped with various body sensors (prefrontal-cortex functional near-infrared spectroscopy (fNIRS), electrocardiography (ECG), galvanic skin response (GSR), eye movement, and body motion) to capture neurological and physiological data as they made food selections. In this exploratory study, we aimed to identify patterns of neural and physiological activity during food selection and associations with the nutritional content of individuals' final food selections in the VR and RL buffets. Findings revealed that the left inferior frontal gyrus showed significant differential activation when subjects chose high-density compared to low-density food in both settings. These findings suggest that VR simulations may elicit neural responses similar to real-world environments, particularly in control regions of the brain.

"A Deep Learning Approach for Sleep-Wake Detection from HRV and Accelerometer Data." Zhenghua Chen, Min Wu, Ji-Yan Wu, Jie Ding, Zeng Zeng, K. Surmacz, Xiaoli Li. 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), May 2019. DOI: 10.1109/BHI.2019.8834502.
Abstract: Sleep-wake classification is important for measuring sleep quality. In this paper, we propose a novel deep learning framework for sleep-wake detection using acceleration and heart rate variability (HRV) data. First, because acceleration data are sampled at a high rate and exhibit temporal dependency, we propose a local-feature-based long short-term memory (LF-LSTM) approach to learn high-level features. Meanwhile, we manually extract representative features from the HRV data, whose format is distinct from that of the acceleration data. A unified framework then combines the features learned by the LF-LSTM from the acceleration data with the features extracted from the HRV data for sleep-wake detection. We use real data to evaluate the performance of the proposed framework and compare it with several benchmark approaches. The results show that the proposed approach outperforms all benchmark approaches for sleep-wake detection.

{"title":"Silent Aspiration Detection in High Resolution Cervical Auscultations","authors":"Caroline Yu, Yassin Khalifa, E. Sejdić","doi":"10.1109/BHI.2019.8834576","DOIUrl":"https://doi.org/10.1109/BHI.2019.8834576","url":null,"abstract":"Aspiration is the most serious complication of dysphagia, which may lead to pneumonia. Detection of aspiration is limited by the presence of its signs like coughing and choking, which may be absent in many cases. High resolution cervical auscultations (HRCA) represent a promising non-invasive method intended for the detection of swallowing disorders. In this study, we investigate the potential of HRCA in detection of penetration-aspiration in patients suspected of dysphagia. A variety of features were extracted from HRCA in both time and frequency domains and they were tested for association with the presence of penetration-aspiration. Multiple classifiers were implemented also for aspiration detection using the extracted signal features. The results showed the presence of strong association between some HRCA signal features and penetration-aspiration, furthermore, they direct towards future directions to enhance prediction capability of aspiration using HRCA signals.","PeriodicalId":281971,"journal":{"name":"2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI)","volume":"267 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133792354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Physiology-sensitive Virtual Reality based Strength Training Platform for Post-stroke Grip Task","authors":"Adyasha Dash, Anand Yadav, U. Lahiri","doi":"10.1109/BHI.2019.8834642","DOIUrl":"https://doi.org/10.1109/BHI.2019.8834642","url":null,"abstract":"Post-stroke patients often suffer from deficit in grip strength adversely affecting their independent execution of activities of daily living. Although conventional rehabilitation facilitated by expert therapists can address such deficits, in developing countries like India, with low doctor-patient ratio such therapist-mediated rehabilitation services often remain inaccessible to many. Thus, investigators have been exploring technology-assisted alternatives e.g., robot and computer-based platforms for rehabilitation. Computer-based systems using Virtual Reality (VR) coupled with peripheral devices are advantageous in terms of their controllability, real-time feedback facility, safety, etc. Additionally, these systems coupled with surface electromyogram (sEMG) based biofeedback can administer muscle strengthening exercises by providing audio-visual representation of muscle activation in VR. In our present work, we have developed a VR-based rehabilitation platform augmented with sEMG-based biofeedback for grip strength training. Results of our study with 6 healthy and 6 post-stroke participants show the potential of our platform to be accepted by both the groups. Additionally, multiple exposures (over 3 weeks) indicate the potential of our system to increase the grip strength of post-stroke patients with increased upper limb muscle activation and improvement in one's functional recovery measured through physiological indicator (Electrodermal activity).","PeriodicalId":281971,"journal":{"name":"2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI)","volume":"147 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131579412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Predicting Drug-Target Interactions Using Weisfeiler-Lehman Neural Network." H. E. Manoochehri, Susmitha Sri Kadiyala, M. Nourani. 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), May 2019. DOI: 10.1109/BHI.2019.8834572.
Abstract: Predicting missing drug-target relationships can help speed up the identification of unknown interactions between chemical drugs and target proteins in pharmaceutical research. In this paper, we employ a Weisfeiler-Lehman neural network to capture features based purely on the network topology and learn the pattern of drug-target interactions. We show that our approach learns sophisticated drug-target topological features and outperforms other similarity-based methods in terms of AUROC.

"Reconstructing 3D Lung Shape from a Single 2D Image during the Deaeration Deformation Process using Model-based Data Augmentation." Shuqiong Wu, M. Nakao, J. Tokuno, T. Chen-Yoshikawa, T. Matsuda. 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), May 2019. DOI: 10.1109/BHI.2019.8834454.
Abstract: Three-dimensional (3D) shape reconstruction is particularly important for computer-assisted medical systems, especially in lung surgeries, where large deaeration deformations occur. Recently, 3D reconstruction methods based on machine learning have achieved considerable success in computer vision. However, applying these approaches in the medical field is difficult because collecting a massive amount of clinical data for training is impractical. To solve this problem, this paper proposes a novel 3D shape reconstruction method that combines data augmentation techniques with convolutional neural networks. In the proposed method, a deformable statistical model of the 3D lungs is designed to augment the training data. As the experimental results demonstrate, even with a small database, the proposed method can reconstruct the 3D shape of lungs during a deaeration deformation process from a single captured 2D image. Moreover, the proposed data augmentation technique can also be used in other fields where training data are insufficient.

"An AI-based Framework for Supporting Large Scale Automated Analysis of Video Capsule Endoscopy." D. Giordano, Francesca Murabito, S. Palazzo, C. Pino, C. Spampinato. 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), May 2019. DOI: 10.1109/BHI.2019.8834577.
Abstract: Video capsule endoscopy (VCE) is a diagnostic imaging technology, based on a capsule with a built-in camera, that enables screening of the gastrointestinal tract while reducing the invasiveness of traditional endoscopy procedures. Although VCE was designed mainly for investigation of the small intestine, its low invasiveness and simplicity of use make it a powerful tool for supporting large-scale screening. However, each VCE video is typically about eight hours long, and endoscopists usually need about two hours, aided only by simple computational methods, to analyze it, which limits its application in large-scale studies. In this paper, we propose a novel computational framework that leverages recent advances in artificial intelligence based on the deep learning paradigm to support the whole screening procedure, from video transmission to automated lesion identification to reporting. More specifically, our approach handles multiple video uploads at the same time, processes them automatically to identify key video frames with potential lesions (for subsequent analysis by endoscopists), and provides physicians with means to compare the findings with previously detected lesions or with images and scientific information from relevant retrieved documents for a more accurate final diagnosis.

"Obstructive Sleep Apnea Classification in a Mixed-Disorder Elderly Male Population Using a Low-Cost Off-Body Movement Sensor." P. Suresha, Ayse S. Cakmak, Giulia Da Poian, Amit J. Shah, V. Vaccarino, D. Bliwise, G. Clifford. 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), May 2019. DOI: 10.1109/BHI.2019.8834614.
Abstract: Obstructive sleep apnea (OSA) is a common but serious sleep disorder that affects oxygenation due to upper-airway collapse and leads to multiple health sequelae. Common symptoms of OSA include repeated arousal during sleep and gasping for air, which we have previously shown can be identified on video recordings as coarse human movement. In this work, we aimed to use a low-cost, passive, off-body device to continuously monitor OSA severity in a privacy-preserving manner. A passive infrared (PIR) sensor was used to monitor coarse human movement during overnight sleep in a clinical sleep lab. Simultaneously, polysomnograms were recorded and OSA labels were assigned by an expert. A cohort of 32 elderly male participants, 14 with severe OSA and 18 controls, was studied. A Raspberry Pi was used to record the PIR sensor data (a series of time-stamps at which movement was registered). The movement time-stamp data were processed to obtain novel activity metrics based on time-series statistics and entropy-based statistics. A logistic regression model was trained and tested with leave-one-out cross-validation to classify participants as OSA vs. control. A total of 11 features plus periodic leg movement (PLM) information were used. An out-of-sample F1-score of $\mathcal{F}_1 = 0.89$ and an accuracy of $Acc = 91\%$ were obtained using five features when a combination of entropy, distribution statistics, and PLM information was used, and $\mathcal{F}_1 = 0.83$, $Acc = 84\%$ were obtained without PLM using 10 features. This suggests that data from the PIR sensor alone could be used for ongoing OSA evaluation in this population. The proposed method therefore provides a novel, simple, and scalable approach to monitoring OSA.
