{"title":"SixthSense: Smart Integrated Extreme Environment Health Monitor with Sensory Feedback for Enhanced Situation Awareness","authors":"G. Bijelic, Nerea Briz Iceta, Č. Stefanović, A. Morschhauser, Ana Belén Carballo Leyenda, L. Paletta, Andreas Falk, M. Kostic, Matija Štrbac, N. Jorgovanovic, Gerhard Jobst, R. Paradiso, G. Magenes, Pablo Fanjul-Bolado, Aleksandar Vujić, Philip Eschenbacher","doi":"10.1109/BSN56160.2022.9928493","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928493","url":null,"abstract":"Natural disasters occurring in inaccessible rural areas are on the rise, leading to the multiplication of first responders’ missions. However, engagement in fighting wildfires or participating in rescue missions includes risks for the well-being of the engaged first responders. Consequently, a system that monitors their actions and provides real-time and actionable information without obstructing their operational capacity is needed. The EU-funded SIXTHSENSE project aims to improve the efficiency and safety of first responders’ engagement in difficult environments by optimizing on-site team coordination and mission implementation. The project proposes an innovative wearable health monitoring system based on multimodal biosensor data that enables first responders to detect risk factors early on and allows real-time monitoring of all deployed responders. 
This paper introduces the overall concept of the project, its methodology, and the system architecture; moreover, details of the Alpha version of the SixthSense prototype are presented.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114683980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An End-to-end Posture Perception Method for Soft Bending Actuators Based on Kirigami-inspired Piezoresistive Sensors","authors":"Jing Shu, Junming Wang, Yujie Su, Honghai Liu, Zheng Li, Raymond K. Tong","doi":"10.1109/BSN56160.2022.9928494","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928494","url":null,"abstract":"Posture sensing of soft actuators is critical for performing closed-loop control of soft robots. This paper presents a novel end-to-end posture perception method for soft actuators by developing long short-term memory (LSTM) neural networks. A novel flexible bending sensor developed from off-the-shelf conductive silicon material was proposed and used for posture sensing. In the proposed method, the hysteresis of the soft robot and non-linear sensing signals from the flexible bending sensors have also been considered. With one-step calibration from the sensor output, the posture of the soft actuator could be captured by the LSTM network. The method was validated on a finger-sized one-DOF pneumatic fiber-reinforced bending actuator. Four kirigami-inspired flexible piezoresistive transducers were placed on the top surface of the actuator. Results show that the transducers could sense the posture of the actuator with acceptable accuracy.
We believe our work could benefit soft robot dynamic posture perception and closed-loop control.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133203705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning semantic segmentation for indoor terrain extraction: Toward better informing free-living wearable gait assessment","authors":"Jason Moore, S. Stuart, R. Walker, Peter McMeekin, F. Young, A. Godfrey","doi":"10.1109/BSN56160.2022.9928505","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928505","url":null,"abstract":"Contemporary approaches to gait assessment use wearables within free-living environments to capture habitual information, which is more informative than data captured in the lab. Wearables range from inertial to camera-based technologies, but pragmatic challenges such as the analysis of big data from heterogeneous environments exist. For example, wearable camera data often requires manual, time-consuming, subjective contextualization, such as labelling of terrain type. There is a need for automated approaches such as those offered by artificial intelligence (AI) based methods. This pilot study investigates multiple segmentation models and proposes use of the PSPNet deep learning network to automate a binary indoor floor segmentation mask for use with wearable camera-based data (i.e., video frames). To inform the development of the AI method, a unique approach of mining heterogeneous data from a video sharing platform (YouTube) was adopted to provide independent training data. The dataset contains 1973 image frames and accompanying segmentation masks. When trained on the dataset, the proposed model achieved an Intersection over Union score of 0.73 over 25 epochs in complex environments.
The proposed method will inform future work within the field of habitual free-living gait assessment to provide automated contextual information when used in conjunction with wearable inertial-derived gait characteristics. Clinical Relevance—Processes developed here will aid automated video-based free-living gait assessment.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128346594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
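The Intersection over Union score reported above (0.73) is straightforward to compute for binary segmentation masks; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def intersection_over_union(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:  # both masks empty: treat as a perfect match
        return 1.0
    return float(intersection) / float(union)
```

In practice the score is averaged over all frames of the validation set; IoU penalizes both missed floor pixels and false positives symmetrically, which is why it is preferred over plain pixel accuracy for imbalanced masks.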
{"title":"Contactless SpO2 Detection from Face Using Consumer Camera","authors":"Li Zhu, K. Vatanparvar, Migyeong Gwak, Jilong Kuang, A. Gao","doi":"10.1109/BSN56160.2022.9928509","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928509","url":null,"abstract":"We describe a novel computational framework for contactless oxygen saturation (SpO2) detection using videos of human faces recorded with smartphone cameras under ambient light. For contact pulse oximeters, a ratio of ratios (RoR) metric derived from selected regions of interest (ROIs) combined with linear regression modeling is the standard approach. However, when applied to contactless remote PPG (rPPG), the assumptions of this standard approach do not hold automatically: 1) the rPPG signal is usually derived from the face area, where the light reflection may not be uniform due to variation in skin tissue composition and/or lighting conditions (moles, hairs, beard, partial shadowing, etc.); 2) for most consumer-level cameras under ambient light, the rPPG signal is converted from light reflection associated with wide-band spectra, which creates complicated nonlinearity for SpO2 mappings. We propose a computational framework to overcome these challenges by 1) determining and dynamically tracking the ROIs according to both spatial and color proximity, and calculating the RoR based on selected individual ROIs which have homogeneous skin reflections, and 2) using a nonlinear machine learning model to map the SpO2 levels from RoRs derived from two different color combinations.
We validated the framework with 30 healthy participants during various breathing tasks and achieved 1.24% Root Mean Square Error for the across-subject model and 1.06% for within-subject models, surpassing the FDA-recognized ISO 80601-2-61:2017 standard.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132264314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
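The ratio-of-ratios metric at the core of this pipeline can be sketched as follows. The AC/DC convention used here (standard deviation over mean of each channel) is one common choice, not necessarily the paper's exact definition, and the red/blue channel pairing is only one of the "two different color combinations" the abstract mentions:

```python
import numpy as np

def ratio_of_ratios(red: np.ndarray, blue: np.ndarray) -> float:
    """Ratio of ratios (RoR) from two rPPG color-channel signals.

    AC is approximated by the channel's standard deviation and DC by
    its mean -- a common convention, not necessarily the authors'.
    """
    ac_red, dc_red = np.std(red), np.mean(red)
    ac_blue, dc_blue = np.std(blue), np.mean(blue)
    return (ac_red / dc_red) / (ac_blue / dc_blue)
```

For contact oximeters, SpO2 is then read off a linear calibration of RoR; the paper replaces that linear map with a nonlinear machine-learning model to absorb the wide-band camera nonlinearity.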
{"title":"Prototype smartwatch device for prolonged physiological monitoring in remote environments","authors":"B. Rosa, Benny P. L. Lo, E. Yeatman","doi":"10.1109/BSN56160.2022.9928459","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928459","url":null,"abstract":"Wearable technology in the form of wristwatches, armbands, or fitness monitors has lately spread quickly among technology enthusiasts who are eager for a quick hands-on experience with their own body parameters. Nonetheless, the accuracy, replicability, and reproducibility of the measurements collected by these monitors are still highly debatable outside laboratory settings, thus resulting in their nonacceptance as valid medical diagnostic tools. Furthermore, the inability to collect temporally detailed physiological variables like heart rate, pulse plethysmography, skin temperature, and galvanic skin response for extended periods of time has also been identified as a factor contributing to wearables’ nonacceptance within the biomedical research community, even more so if the monitoring is to be performed in remote places, usually involving prolonged and arduous physical tasks performed by the participant. In this paper, we propose an inexpensive prototype smartwatch for prolonged physiological monitoring in remote environments.
Equipped with sensing channels that monitor the aforementioned body variables, the device can also be instructed to operate in an asynchronous recording mode, thereby saving battery life and memory while recording ambient variables (humidity, temperature, luminescence, and atmospheric pressure) that provide descriptive context for the physiological processes taking place inside the human body at the same time.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115303141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A CNN Model with Discretized Mobile Features for Depression Detection","authors":"Yueru Yan, Mei Tu, Hongbo Wen","doi":"10.1109/BSN56160.2022.9928499","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928499","url":null,"abstract":"Depression is a serious, long-standing mental illness that significantly influences people’s quality of life. Meanwhile, as the smartphone has become an integral part of people’s lives, it creates the opportunity to analyze users’ feelings through their phone usage and sensor data. However, previous studies mainly adopt machine-learning methods for depression detection, ignoring the sequential patterns hidden in the data. In this study, we aim to monitor the symptoms of depression through sequential mobile data collected from phones and their sensors. First, we establish a deep-learning model called Dep-caser to fully utilize the sequential information in mobile data. Next, we introduce a discretization method based on Information Value to deal with data sparsity and outliers. In total, we recruited 257 people to participate in the study and extracted five-day longitudinal data from their smartphones and electronic bands. We conduct two experiments to examine the effectiveness of Dep-caser and the discretization method, respectively. The results demonstrate that Dep-caser outperforms most of the machine-learning methods, and the discretization further improves the performance of the deep-learning model to achieve an overall accuracy of 0.83.
Our study shows the promise of adopting deep-learning models with sequential phone usage and sensing data to detect depression.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128316129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
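Information Value (IV) scores how well a binned feature separates a binary target, which is why it is a natural basis for a discretization scheme. A generic sketch under assumed equal-width binning (the paper's exact binning strategy is not stated in the abstract):

```python
import numpy as np

def information_value(feature, target, n_bins=5):
    """Information Value of a continuous feature vs. a binary target.

    Uses equal-width bins and a small epsilon to keep log() finite
    for empty bins -- illustrative choices, not the paper's.
    """
    feature = np.asarray(feature, dtype=float)
    target = np.asarray(target, dtype=int)
    edges = np.linspace(feature.min(), feature.max(), n_bins + 1)
    bins = np.digitize(feature, edges[1:-1])  # bin index 0 .. n_bins-1
    eps = 1e-9
    iv = 0.0
    for b in range(n_bins):
        mask = bins == b
        pct_event = target[mask].sum() / max(target.sum(), 1) + eps
        pct_nonevent = (mask & (target == 0)).sum() / max((target == 0).sum(), 1) + eps
        woe = np.log(pct_event / pct_nonevent)  # weight of evidence per bin
        iv += (pct_event - pct_nonevent) * woe
    return iv
```

A feature that cleanly separates the classes yields a large IV, while a feature distributed identically in both classes yields an IV near zero; binning by IV also caps the influence of outliers, which matches the abstract's stated motivation.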
{"title":"On-Device Machine Learning for Diagnosis of Parkinson’s Disease from Hand Drawn Artifacts","authors":"Sai Vaibhav Polisetti Venkata, Shubhankar Sabat, C. Deshpande, Asiful Arefeen, Daniel Peterson, H. Ghasemzadeh","doi":"10.1109/BSN56160.2022.9928465","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928465","url":null,"abstract":"Effective diagnosis of neuro-degenerative diseases is critical to providing early treatments, which in turn can lead to substantial savings in medical costs. Machine learning models can help with the diagnosis of diseases such as Parkinson’s and aid in assessing disease symptoms. This work introduces a novel system that integrates pervasive computing, mobile sensing, and machine learning to classify hand-drawn images and provide diagnostic insights for the screening of Parkinson’s disease patients. We designed a computational framework that combines data augmentation techniques with an optimized convolutional neural network design for on-device, real-time image classification. We assess the performance of the proposed system using two datasets of images of Archimedean spirals drawn by hand and demonstrate that our approach achieves 76% and 83% accuracy on the two datasets, respectively. Thanks to a 4x memory reduction via integer quantization, our system runs fast on an Android smartphone.
Our study demonstrates that pervasive computing may offer an inexpensive and effective tool for early diagnosis of Parkinson’s disease.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126791612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
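The 4x memory reduction follows directly from storing weights as int8 instead of float32. A generic symmetric post-training quantization sketch (not the authors' actual pipeline, whose exact scheme is unspecified):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: float32 -> (int8, scale)."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
```

Each int8 weight occupies one byte versus four for float32, giving exactly the 4x storage reduction; the price is a bounded rounding error of at most half the scale per weight.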
{"title":"Wearable Vital Signal Monitoring Prototype Based on Capacitive Body Channel Communication","authors":"Qi Huang, Waseem Alkhayer, M. Fouda, Abdulkadir Celik, A. Eltawil","doi":"10.1109/BSN56160.2022.9928512","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928512","url":null,"abstract":"Wireless body area networks (WBANs) provide a means for seamless individual health monitoring without imposing restrictive limitations on normal daily routines. To date, Radio Frequency (RF) transceivers have been the technology of choice; however, drawbacks such as vulnerability to body shadowing effects, higher power consumption due to omnidirectional radiation, and security concerns have prompted the adoption of transceivers that use the human body channel for communication. In this paper, a vital signal monitoring transceiver prototype based on human body channel communication (HBC), built using commercially available chipsets, is presented. RF and HBC communications are briefly reviewed and compared, and different schemes of HBC are introduced. A circuit model that represents the human body channel is then discussed, and simulations are presented to illustrate the influence of the return path capacitance and receiver terminations on the path loss. The architecture of the transceiver prototype is then introduced; it is designed for a 21 MHz IEEE 802.15.6 standard-compliant carrier frequency. Finally, the performance of the transceiver, including the bit error rate (BER) and power efficiency, is characterized. Path loss is measured for two different scenarios, where variations of up to 5 dB were observed due to environmental effects.
Energy efficiency measured at a maximum data-rate of 1.3 Mbps was found to be 8.3 nJ/b.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122075323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
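The two reported figures together imply the transceiver's average power draw, since power is simply energy per bit times bits per second:

```python
energy_per_bit = 8.3e-9   # J/bit, reported energy efficiency
data_rate = 1.3e6         # bit/s, reported maximum data rate
power_w = energy_per_bit * data_rate  # average power in watts
print(f"average power = {power_w * 1e3:.2f} mW")  # prints 10.79 mW
```

At roughly 11 mW, the prototype sits well below the power budget of typical RF WBAN radios, consistent with the efficiency argument made for HBC above.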
{"title":"Deep Audio Spectral Processing for Respiration Rate Estimation from Smart Commodity Earbuds","authors":"M. Y. Ahmed, Tousif Ahmed, Md. Mahbubur Rahman, Zihan Wang, Jilong Kuang, A. Gao","doi":"10.1109/BSN56160.2022.9928461","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928461","url":null,"abstract":"Respiration rate is an important health biomarker and a vital indicator for health and fitness. With smart earbuds gaining popularity as a commodity device, recent works have demonstrated the potential for monitoring breathing rate using such earable devices. In this work, for the first time, we utilize deep image recognition techniques to infer respiration rate from earbud audio. We use image spectrograms from breathing cycle audio signals captured using Samsung earbuds as a spectral feature to train a deep convolutional neural network. Using novel earbud audio data collected from 30 subjects with both controlled breathing at a wide range (from 5 up to 45 breaths per minute) and uncontrolled natural breathing from a 7-day home deployment, experimental results demonstrate that our model outperforms existing methods using earbuds for inferring respiration rates from regular-intensity and heavy breathing sounds, with 0.77 aggregated MAE for controlled breathing and 0.99 aggregated MAE for at-home natural breathing.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128684050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
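Turning breathing audio into a spectrogram image is the step that lets an image-recognition CNN consume it. A minimal NumPy sketch (window length, hop, and the 16 kHz sample rate are assumptions for illustration, and the pure tone stands in for real earbud audio):

```python
import numpy as np

def magnitude_spectrogram(x, fs, nperseg=256, hop=128):
    """Minimal Hann-windowed STFT magnitude spectrogram.

    Returns (freqs, spec) with spec shaped (freq, time) -- the kind
    of 2-D image a convolutional network can consume.
    """
    window = np.hanning(nperseg)
    n_frames = 1 + (len(x) - nperseg) // hop
    frames = np.stack([x[i * hop : i * hop + nperseg] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T
    freqs = np.fft.rfftfreq(nperseg, d=1 / fs)
    return freqs, spec

fs = 16000                            # assumed audio sample rate
t = np.arange(fs) / fs                # one second of signal
audio = np.sin(2 * np.pi * 440 * t)   # stand-in for breathing audio
freqs, spec = magnitude_spectrogram(audio, fs)
```

In the pipeline described above, each breathing cycle's spectrogram would be rendered as such an image and fed to the CNN, which regresses or classifies the respiration rate.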
{"title":"A Novel Active Human Echolocation Device","authors":"Saeed Akbarzadeh, Xiao Gu, Zhipeng Wu, Benny P. L. Lo","doi":"10.1109/BSN56160.2022.9928448","DOIUrl":"https://doi.org/10.1109/BSN56160.2022.9928448","url":null,"abstract":"Some animals, like bats and dolphins, can echolocate themselves and navigate through complete darkness. They can generate ultrasonic signals and locate themselves based on the echo bounced back from surrounding objects and structures. As humans, we lack such abilities to echolocate ourselves, and we mainly rely on our vision to guide and navigate. However, recently, some visually impaired people have trained and learned the skills to echolocate themselves, demonstrating that we too can echolocate with our own hearing. Based on this principle, we propose a novel wearable device that can aid both sighted and visually impaired people in acquiring echolocation skills. As our hearing is tuned to filter out echoes, the proposed device is designed with an ultrasound transmitter with a carrier frequency of 40 kHz, modulated with a 2 kHz signal to generate a click sound that can be heard by the user for echolocation. Hence, the brain experiences far less confusion while attempting to comprehend the surrounding world and isolate the aspects necessary to acquire these abilities. To assess users’ ability to acquire echolocation skills, a healthy-subject study comprising six training sessions was conducted, and EEG (electroencephalogram) signals of the subjects were collected while they were blindfolded and using the proposed device to echolocate. From the results, we have shown that there was a significant correlation between echolocation training and intensified activations of the visual cortex area, demonstrating that the subjects were able to use the echoed signal to ’visualize’ the surrounding environment.
It also shows the subjects’ ability to learn and echolocate themselves quickly in a room fitted with random objects.","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129737056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
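The 40 kHz carrier modulated by a 2 kHz signal described in this abstract can be sketched as simple amplitude modulation; the modulation type, burst length, and sample rate here are assumptions for illustration, as the abstract does not specify them:

```python
import numpy as np

fs = 192_000                           # sample rate, well above 2 x 40 kHz
t = np.arange(int(0.01 * fs)) / fs     # a 10 ms click burst
carrier = np.sin(2 * np.pi * 40_000 * t)              # 40 kHz ultrasonic carrier
envelope = 0.5 * (1 + np.sin(2 * np.pi * 2_000 * t))  # 2 kHz modulating signal
click = envelope * carrier             # transmitted ultrasonic burst
```

Amplitude modulation places sidebands at 38 kHz and 42 kHz around the inaudible 40 kHz carrier; the 2 kHz envelope is what gives the emitted burst its audible click character for the user.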