{"title":"Mapping Distributed Ledger Technology Characteristics to Use Cases in Healthcare: A Structured Literature Review","authors":"Shanshan Hu, Manuel Schmidt-Kraepelin, Scott Thiebes, A. Sunyaev","doi":"10.1145/3653076","DOIUrl":"https://doi.org/10.1145/3653076","url":null,"abstract":"Following the success of the Bitcoin blockchain, distributed ledger technology (DLT) has received extensive attention in health informatics research. Yet, the healthcare industry is highly complex, with many different stakeholders, information systems, regulations, and challenges. Thus, DLT may be used in various settings and for different purposes. First surveys have started to synthesize our knowledge of the different use cases in which healthcare may benefit from DLT implementations. However, an in-depth understanding of whether and how these use cases differ concerning their requirements for DLT characteristics (i.e., technical or administrative design features) is still lacking. In this work, we conducted a structured review of 185 studies on DLT-based applications in healthcare. The results reveal six pertinent use cases, each with its own combination of different purposes that DLT is used for. Furthermore, our study shows that each of these use cases has a unique set of requirements with regard to the most important DLT characteristics. 
In doing so, we seek to guide practitioners in the development of highly effective DLT-based applications in various healthcare settings and pave the way for future research to investigate the understudied areas of DLT-based applications in healthcare.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":" November","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"iScan: Detection of Colorectal Cancer From CT Scan Images Using Deep Learning","authors":"Sagnik Ghosal, Debanjan Das, Jay Kumar Rai, Akanksha Singh Pandaw, Sakshi Verma","doi":"10.1145/3676282","DOIUrl":"https://doi.org/10.1145/3676282","url":null,"abstract":"Colorectal cancer, a highly lethal form of cancer, can be treated effectively if detected early. However, the current diagnosis process involves a time-consuming, manual review of CT scans to identify cancerous regions and behavior, leading to resource consumption, subjectivity, and dependency on manual assessment. To address these challenges, we propose a three-phase deep neural system for automated colorectal cancer detection using CT scan images. It includes a SegNet network to identify tumor locations, an InceptionResNet V2 network to classify tumors as benign or malignant, and an analysis of tumor area and perimeter to predict the cancer stage. The proposed model offers a fully automated solution by combining these functionalities under a single umbrella. On real-life CT scans from 37 patients, the proposed model achieved 95.8% ROI segmentation accuracy, a Dice coefficient of 0.6214, a 69.75% IoU score, and 95.83% tumor classification accuracy. The unique approach using Radial Length (RL) and Circularity (C) parameters predicted the T-stage with close to 85% accuracy. 
Based on these outcomes, the proposed system establishes itself as a reliable and suitable alternative to traditional cancer diagnosis techniques by leveraging the power of automation, deep learning, and innovative parameter analysis.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"8 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141822359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
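The iScan abstract's T-stage prediction relies on two shape descriptors of the segmented tumor, Radial Length (RL) and Circularity (C). The sketch below computes them under common textbook definitions (C = 4πA/P², which equals 1.0 for a perfect circle; RL as centroid-to-boundary distances); the paper's exact formulations are not given in the abstract and may differ.

```python
import math

def circularity(area: float, perimeter: float) -> float:
    # C = 4*pi*A / P**2: 1.0 for a perfect circle, smaller for irregular shapes.
    return 4.0 * math.pi * area / perimeter ** 2

def radial_lengths(contour: list[tuple[float, float]]) -> list[float]:
    # Distance from the contour centroid to each boundary point.
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    return [math.hypot(x - cx, y - cy) for x, y in contour]
```

For a circle of radius 5 (area π·25, perimeter 2π·5), `circularity` returns 1.0; less circular (and typically more malignant-looking) boundaries score lower.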
{"title":"Loss Relaxation Strategy for Noisy Facial Video-based Automatic Depression Recognition","authors":"Siyang Song, Yi-Xiang Luo, Tugba Tumer, Michel Valstar, Hatice Gunes","doi":"10.1145/3648696","DOIUrl":"https://doi.org/10.1145/3648696","url":null,"abstract":"Automatic depression analysis has been widely investigated on face videos that have been carefully collected and annotated in lab conditions. However, videos collected under real-world conditions may suffer from various types of noise due to challenging data acquisition conditions and a lack of annotators. Although deep learning (DL) models frequently show excellent depression analysis performance on datasets collected in controlled lab conditions, such noise may degrade their generalization abilities in real-world depression analysis tasks. In this paper, we uncover that noisy facial data and annotations consistently change the distribution of training losses for facial depression DL models, i.e., noisy data-label pairs cause larger loss values than clean data-label pairs. Since different loss functions may be applied depending on the employed model and task, we propose a generic loss relaxation strategy for face video-based depression analysis that jointly reduces the negative impact of various noisy data and annotation problems in both classification and regression loss functions, with the parameters of the proposed strategy automatically adapted during depression model training. Experimental results on 25 different artificially created noisy depression conditions (i.e., five noise types with five different noise levels) show that our loss relaxation strategy clearly enhances both classification and regression loss functions, enabling the generation of superior face video-based depression analysis models under almost all noisy conditions. 
Our approach is robust to its main variable settings, and can adaptively and automatically obtain its parameters during training.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"12 s2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140266193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
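The key observation above is that noisy data-label pairs yield larger training losses than clean ones. As a simplified illustration of down-weighting such samples, the sketch below clips per-sample losses at a fixed threshold `tau`; note this is an assumed stand-in, since the paper's actual strategy is generic across loss functions and adapts its parameters automatically during training.

```python
def relaxed_mean_loss(sample_losses: list[float], tau: float) -> float:
    # Clip each per-sample loss at tau before averaging, so that likely-noisy
    # samples (which tend to produce larger losses) contribute less to the
    # gradient. Fixed-threshold illustration only; the paper's strategy
    # adapts such parameters during model training.
    clipped = [min(loss, tau) for loss in sample_losses]
    return sum(clipped) / len(clipped)
```

With `tau = 1.0`, a batch of losses `[0.1, 0.2, 10.0]` averages to about 0.43 instead of 3.43, limiting the influence of the outlying (likely noisy) sample.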
{"title":"An Interpretable Trend Analysis Neural Networks for Longitudinal Data Analysis","authors":"Zhenjie Yao, Yixin Chen, Jinwei Wang, Junjuan Li, Shuohua Chen, Shouling Wu, Yanhui Tu, Ming-Hui Zhao, Luxia Zhang","doi":"10.1145/3648105","DOIUrl":"https://doi.org/10.1145/3648105","url":null,"abstract":"The cohort study is one of the most commonly used study methods in medical and public health research, and it results in longitudinal data. Conventional statistical models and machine learning methods are not capable of modeling the evolution trend of the variables in longitudinal data. In this paper, we propose a Trend Analysis Neural Network (TANN), which models the evolution trend of the variables by adaptive feature learning. TANN was tested on the dataset of the Kaiuan research. The task was to predict the occurrence of cardiovascular events within 2 and 5 years, based on 3 repeated medical examinations between 2008 and 2013. For 2-year prediction, the AUC of TANN is 0.7378, a significant improvement over conventional methods, whose AUCs (TRNS, RNN, DNN, GBDT, RF, and LR) are 0.7222, 0.7034, 0.7054, 0.7136, 0.7160, and 0.7024, respectively. For 5-year prediction, TANN also shows improvement. The experimental results show that the proposed TANN achieves better performance on cardiovascular event prediction than conventional models. Furthermore, by analyzing the weights of TANN, we can identify important trends in the indicators that are ignored by conventional machine learning models. This trend discovery mechanism makes the model well interpretable. 
TANN strikes an appropriate balance between high performance and interpretability.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"22 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139958360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
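TANN learns trend features adaptively; by way of contrast, a conventional hand-crafted trend feature over the 3 repeated examinations would be a per-indicator least-squares slope. A minimal sketch of that baseline (a hypothetical helper, not from the paper):

```python
def trend_slope(values: list[float], times: list[float]) -> float:
    # Least-squares slope of repeated measurements over time: a hand-crafted
    # trend feature of the kind conventional models would need explicitly,
    # and which TANN instead learns adaptively.
    n = len(values)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den
```

An indicator rising by one unit per examination yields a slope of 1.0; a flat indicator yields 0.0.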
{"title":"WalkingWizard - A truly wearable EEG headset for everyday use","authors":"Teck Lun Goh, L. Peh","doi":"10.1145/3648106","DOIUrl":"https://doi.org/10.1145/3648106","url":null,"abstract":"Electroencephalography (EEG) provides an opportunity to gain insights into electrocortical activity without the need for invasive technology. While increasingly used in various application areas, EEG headsets tend to be suited only to a laboratory environment due to the long preparation time to don the headset and the need for users to remain stationary. We present our design of a dry, dual-electrode flexible PCB assembly that realizes accurate sensing in the face of practical motion artifacts. Using it, we present WalkingWizard, our prototype dry-electrode EEG baseball cap that can be used under motion in everyday scenarios. We first evaluated its hardware performance by comparing its electrode-scalp impedance and ability to capture the alpha rhythm against both wet and commercially available dry EEG headsets. We then tested WalkingWizard in SSVEP experiments, achieving a high classification accuracy of 87% for walking speeds up to 5.0 km/hr, beating the state of the art. Expanding on WalkingWizard, we integrated all necessary electronic components into a flexible PCB assembly, realizing WalkingWizard Integrated, in a truly wearable form factor. 
Utilizing WalkingWizard Integrated, we demonstrated several applications as proofs of concept: classification of SSVEP in a VR environment while walking, real-time acquisition of users' emotional state while moving around the neighbourhood, and understanding the effect of guided meditation for relaxation.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"61 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139836174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Robust Liver Cancer Diagnosis: A Contrastive Multi-Modality Learner with Lightweight Fusion and Effective Data Augmentation","authors":"Pei-Xuan Li, Hsun-Ping Hsieh, Chiang Fan Yang, Ding-You Wu, Ching-Chung Ko","doi":"10.1145/3639414","DOIUrl":"https://doi.org/10.1145/3639414","url":null,"abstract":"This paper explores the application of self-supervised contrastive learning in the medical domain, focusing on classification of multi-modality Magnetic Resonance (MR) images. To address the challenges of limited and hard-to-annotate medical data, we introduce multi-modality data augmentation (MDA) and cross-modality group convolution (CGC). In the pre-training phase, we leverage Simple Siamese networks to maximize the similarity between two augmented MR images from a patient, without a handcrafted pretext task. Our approach also combines 3D and 2D group convolution with a channel shuffle operation to efficiently incorporate different modalities of image features. Evaluation on liver MR images from a well-known hospital in Taiwan demonstrates a significant improvement over previous methods. This work contributes to advancing multi-modality contrastive learning, particularly in the context of medical imaging, offering enhanced tools for analyzing complex image data.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":" 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139140053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
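The pre-training above uses Simple Siamese (SimSiam) networks to maximize similarity between two augmented MR views of the same patient. The core of the published SimSiam objective is a negative cosine similarity between one view's predictor output and the other view's (stop-gradient) projection; a minimal sketch of just that term, with plain-list vectors standing in for network outputs:

```python
import math

def neg_cosine(p: list[float], z: list[float]) -> float:
    # SimSiam-style negative cosine similarity between predictor output p and
    # the (stop-gradient) projection z of the other augmented view; minimised
    # when the two representations point in the same direction.
    dot = sum(a * b for a, b in zip(p, z))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_z = math.sqrt(sum(b * b for b in z))
    return -dot / (norm_p * norm_z)
```

Identical directions give -1 (best), orthogonal give 0, and opposite give +1; the full loss symmetrizes this term over the two views.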
{"title":"Subsampled Randomized Hadamard Transformation based Ensemble Extreme Learning Machine for Human Activity Recognition","authors":"Dipanwita Thakur, Arindam Pal","doi":"10.1145/3634813","DOIUrl":"https://doi.org/10.1145/3634813","url":null,"abstract":"The Extreme Learning Machine (ELM) is becoming a popular learning algorithm due to its diverse applications, including Human Activity Recognition (HAR). In ELM, the hidden node parameters are generated at random, and the output weights are computed analytically. However, even with a large number of hidden nodes, feature learning using ELM may not be efficient for natural signals due to its shallow architecture. Due to the noisy signals of smartphone sensors and the high dimensionality of the data, substantial feature engineering is required to obtain discriminant features and address the “curse of dimensionality”. In traditional ML approaches, dimensionality reduction and classification are two separate and independent tasks, which increases the system’s computational complexity. This research proposes a new ELM-based ensemble learning framework for human activity recognition that overcomes this problem. The proposed architecture consists of two key parts, self-taught dimensionality reduction followed by classification, bridged by the “Subsampled Randomized Hadamard Transformation” (SRHT). Two different HAR datasets are used to establish the feasibility of the proposed framework. 
The experimental results clearly demonstrate the superiority of our method over the current state-of-the-art methods.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139229816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
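The SRHT named above is a standard sketching transform: randomly flip signs (D), rotate with a Walsh-Hadamard transform (H), then keep a random subset of k coordinates (S), scaled so the squared norm is preserved in expectation. A minimal pure-Python sketch under that textbook definition (the paper's exact usage within the ensemble may differ):

```python
import random

def fwht(x: list[float]) -> list[float]:
    # Unnormalised fast Walsh-Hadamard transform; len(x) must be a power of two.
    a = list(x)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def srht(x: list[float], k: int, seed: int = 0) -> list[float]:
    # SRHT sketch: random sign flips (D), Hadamard rotation (H), then k random
    # coordinates (S) scaled by 1/sqrt(k), which preserves the squared norm
    # in expectation (exactly when k == len(x)).
    rng = random.Random(seed)
    signs = [rng.choice((-1.0, 1.0)) for _ in x]
    rotated = fwht([s * v for s, v in zip(signs, x)])
    idx = rng.sample(range(len(x)), k)
    return [rotated[i] / k ** 0.5 for i in idx]
```

Because the rotation spreads each input's energy evenly across coordinates, a small random subset already approximates the full vector's geometry, which is what lets dimensionality reduction and classification share one cheap bridge.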
{"title":"Application of Smart Insoles for Recognition of Activities of Daily Living: A Systematic Review","authors":"Luigi D’Arco, Graham McCalmont, Haiying Wang, Huiru Zheng","doi":"10.1145/3633785","DOIUrl":"https://doi.org/10.1145/3633785","url":null,"abstract":"Recent years have witnessed a growing literature on the use of smart insoles in health and well-being, yet their capability for daily living activity recognition has not been reviewed. This paper addresses this need and provides a systematic review of smart insole-based systems for the recognition of Activities of Daily Living (ADLs). The review followed the PRISMA guidelines, assessing the sensing elements used, the participants involved, the activities recognised, and the algorithms employed. The findings demonstrate the feasibility of using smart insoles for recognising ADLs, showing high performance in recognising ambulation and physical activities involving the lower body, with accuracy ranging from 70% to 99.8% and 13 studies exceeding 95%. The preferred solutions have been those including machine learning. A lack of publicly available datasets was identified, and the majority of the studies were conducted in controlled environments. Furthermore, no studies assessed the impact of different sampling frequencies during data collection, and a trade-off between comfort and performance was identified across the solutions. 
In conclusion, the real-life applications investigated show the benefits of smart insoles over other solutions and place more emphasis on the capabilities of smart insoles.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"49 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139239470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Deep Learning with Signal-image Encoding for Multi-Modal Mental Wellbeing Classification","authors":"Kieran Woodward, Eiman Kanjo, Athanasios Tsanas","doi":"10.1145/3631618","DOIUrl":"https://doi.org/10.1145/3631618","url":null,"abstract":"The quantification of emotional states is an important step towards understanding wellbeing. Time series data from multiple modalities, such as physiological and motion sensor data, have proven to be integral for measuring and quantifying emotions. However, monitoring emotional trajectories over long periods of time suffers from critical limitations relating to the size of the training data, a shortcoming that may hinder the development of reliable and accurate machine learning models. To address this problem, this paper proposes a framework for emotional state recognition that: 1) encodes time series data into coloured images; 2) leverages pre-trained object recognition models to apply a Transfer Learning (TL) approach using the images from step 1; 3) utilises a 1D Convolutional Neural Network (CNN) to perform emotion classification from physiological data; and 4) concatenates the pre-trained TL model with the 1D CNN. We demonstrate that model performance when inferring real-world wellbeing rated on a 5-point Likert scale can be enhanced using our framework, resulting in up to 98.5% accuracy, outperforming a conventional CNN by 4.5%. Subject-independent models using the same approach resulted in an average of 72.3% accuracy (SD 0.038). 
The proposed methodology helps improve performance and overcome problems with small training datasets.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"41 17","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135818871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
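Step 1 of the framework above encodes time series into coloured images. The abstract does not name the specific encoding; the Gramian Angular Summation Field (GASF) is one widely used signal-to-image encoding and is shown here purely as an assumed example of the idea.

```python
import math

def gasf(series: list[float]) -> list[list[float]]:
    # Gramian Angular Summation Field: rescale the series to [-1, 1], map each
    # value to an angle, then build G[i][j] = cos(phi_i + phi_j). The resulting
    # matrix can be rendered as an image and fed to a pre-trained vision model.
    # Assumes a non-constant series (max != min).
    lo, hi = min(series), max(series)
    scaled = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]
    phi = [math.acos(max(-1.0, min(1.0, s))) for s in scaled]
    return [[math.cos(a + b) for b in phi] for a in phi]
```

For a length-n series the output is a symmetric n×n matrix that preserves temporal ordering along its diagonal, which is what makes 2D transfer learning applicable to 1D sensor data.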