Smart Health, Volume 36, Article 100561. Pub Date: 2025-03-25. DOI: 10.1016/j.smhl.2025.100561
Chongxin Zhong, Jinyuan Jia, Huining Li
"An adaptive multimodal fusion framework for smartphone-based medication adherence monitoring of Parkinson’s disease"

Abstract: Ensuring medication adherence in Parkinson’s disease (PD) patients is crucial for relieving symptoms and for customizing regimens according to each patient’s clinical response. However, traditional self-management approaches are error-prone and have limited effectiveness in improving adherence. While smartphone-based solutions have been introduced to monitor various PD metrics, including medication adherence, these methods often rely on single-modality data or fail to fully leverage the advantages of multimodal integration. To address these issues, we present an adaptive multimodal fusion framework for smartphone-based monitoring of medication adherence in PD. Specifically, we segment raw sensor data and transform it into spectrograms. We then integrate the multimodal data while quantifying the quality of each modality, and perform gradient modulation based on each modality’s contribution. Finally, we monitor medication adherence by detecting patients’ medicine-intake status. We evaluate performance on a dataset of daily-life scenarios involving 455 patients. The results show that our approach achieves around 94% accuracy in medication adherence monitoring, indicating that the proposed framework is a promising tool for facilitating adherence monitoring in PD patients’ daily lives.
Smart Health, Volume 36, Article 100567. Pub Date: 2025-03-25. DOI: 10.1016/j.smhl.2025.100567
Ali Abbasi, Jiaqi Gong, Soroush Korivand
"Transforming Stroop task cognitive assessments with multimodal inverse reinforcement learning"

Abstract: Stroop tasks, recognized for their cognitively demanding nature, hold promise for diagnosing and monitoring neurodegenerative diseases. Understanding how humans allocate attention and resolve interference in the Stroop test remains a challenge, yet addressing this gap could reveal key opportunities for early-stage detection. Traditional approaches overlook the interplay between overt behavior and underlying neural processes, limiting insights into the complex color-word associations at play. To tackle this, we propose a framework that applies Inverse Reinforcement Learning (IRL) to fuse electroencephalography (EEG) signals with eye-tracking data, bridging the gap between neural and behavioral markers of cognition. We designed a Stroop experiment featuring congruent and incongruent conditions to evaluate attention allocation under varying levels of interference. By framing gaze as actions guided by an internally derived reward, IRL uncovers hidden motivations behind scanning patterns, while EEG data, processed with advanced feature extraction, reveals task-specific neural dynamics under high conflict. We validate our approach by measuring Probability Mismatch, Target Fixation Probability-Area Under the Curve, Sequence Score, and MultiMatch metrics. Results show that the IRL-EEG model outperforms an IRL-Image baseline, demonstrating improved alignment with human scanpaths and heightened sensitivity to attentional shifts in incongruent trials. These findings highlight the value of integrating neural data into computational models of cognition and illuminate possibilities for early detection of neurodegenerative disorders, where subclinical deficits may first emerge. Our IRL-based integration of EEG and eye-tracking further supports personalized cognitive assessments and adaptive user interfaces.
Smart Health, Volume 36, Article 100569. Pub Date: 2025-03-25. DOI: 10.1016/j.smhl.2025.100569
Yanhua Si, Yingyun Yang, Qilei Chen, Zinan Xiong, Yu Cao, Xinwen Fu, Benyuan Liu, Aiming Yang
"Improving gastric lesion detection with synthetic images from diffusion models"

Abstract: In the application of deep learning to gastric cancer detection, the quality of the dataset is as important as, if not more important than, the design of the network architecture. However, obtaining labeled data, especially in fields such as medical imaging for gastric cancer detection, can be expensive and challenging. This scarcity is exacerbated by stringent privacy regulations and the need for annotation by specialists. Conventional data augmentation methods fall short due to the complexities of medical imagery. In this paper, we explore the use of diffusion models to generate synthetic medical images for gastric cancer detection. We evaluate their capability to produce realistic images that can augment small datasets, potentially enhancing the accuracy and robustness of detection algorithms. By training diffusion models on existing gastric cancer data and producing new images, we aim to expand these datasets, improving the training of deep learning models for better precision and generalization in lesion detection. Our findings indicate that images generated by diffusion models significantly mitigate the issue of data scarcity, advancing the field of deep learning in medical imaging.
Smart Health, Volume 36, Article 100565. Pub Date: 2025-03-25. DOI: 10.1016/j.smhl.2025.100565
Pinxiang Wang, Hanqi Chen, Zhouyu Li, Wenyao Xu, Yu-Ping Chang, Huining Li
"Continuous prediction of user dropout in a mobile mental health intervention program: An exploratory machine learning approach"

Abstract: Mental health interventions can help relieve symptoms such as anxiety and depression. A typical mental health intervention program lasts several months, and participants may lose interest over time and drop out before completion. Accurately predicting user dropout is crucial for delivering timely measures that address disengagement and reduce its adverse effects on treatment. We develop a temporal deep learning approach to accurately predict dropout, leveraging advanced data augmentation and feature engineering techniques. By integrating interaction metrics from user behavior logs with semantic features from user self-reflections over a nine-week intervention program, our approach effectively characterizes users’ behavior patterns during the intervention. The results validate the efficacy of temporal models for continuous dropout prediction.
Smart Health, Volume 36, Article 100570. Pub Date: 2025-03-25. DOI: 10.1016/j.smhl.2025.100570
Ziyu Wang, Hao Li, Di Huang, Hye-Sung Kim, Chae-Won Shin, Amir M. Rahmani
"HealthQ: Unveiling questioning capabilities of LLM chains in healthcare conversations"

Abstract: Effective patient care in digital healthcare requires large language models (LLMs) that not only answer questions but also actively gather critical information through well-crafted inquiries. This paper introduces HealthQ, a novel framework for evaluating the questioning capabilities of LLM healthcare chains. By implementing advanced LLM chains, including Retrieval-Augmented Generation (RAG), Chain of Thought (CoT), and reflective chains, HealthQ assesses how effectively these chains elicit comprehensive and relevant patient information. To achieve this, we integrate an LLM judge to evaluate generated questions across metrics such as specificity, relevance, and usefulness, while aligning these evaluations with traditional Natural Language Processing (NLP) metrics like ROUGE and Named Entity Recognition (NER)-based set comparisons. We validate HealthQ using two custom datasets constructed from public medical datasets, ChatDoctor and MTS-Dialog, and demonstrate its robustness across multiple LLM judge models, including GPT-3.5, GPT-4, and Claude. Our contributions are threefold: we present the first systematic framework for assessing questioning capabilities in healthcare conversations, establish a model-agnostic evaluation methodology, and provide empirical evidence linking high-quality questions to improved patient information elicitation.
{"title":"Intuitive axial augmentation using polar-sine-based piecewise distortion for medical slice-wise segmentation","authors":"Yiqin Zhang , Qingkui Chen , Chen Huang , Zhengjie Zhang , Meiling Chen , Zhibing Fu","doi":"10.1016/j.smhl.2025.100556","DOIUrl":"10.1016/j.smhl.2025.100556","url":null,"abstract":"<div><div>Most data-driven models for medical image analysis rely on universal augmentations to improve accuracy. Experimental evidence has confirmed their effectiveness, but the unclear mechanism underlying them poses a barrier to the widespread acceptance and trust in such methods within the medical community. We revisit and acknowledge the unique characteristics of medical images apart from traditional digital images, and consequently, proposed a medical-specific augmentation algorithm that is more elastic and aligns well with radiology scan procedure. The method performs piecewise affine with sinusoidal distorted ray according to radius on polar coordinates, thus simulating uncertain postures of human lying flat on the scanning table. Our method could generate human visceral distribution without affecting the fundamental relative position on axial plane. Two non-adaptive algorithms, namely Meta-based Scan Table Removal and Similarity-Guided Parameter Search, are introduced to bolster robustness of our augmentation method. In contrast to other methodologies, our method is highlighted for its intuitive design and ease of understanding for medical professionals, thereby enhancing its applicability in clinical scenarios. Experiments show our method improves accuracy with two modality across multiple famous segmentation frameworks without requiring more data samples. Our preview code is available in: <span><span>https://github.com/MGAMZ/PSBPD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"36 ","pages":"Article 100556"},"PeriodicalIF":0.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143696894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart Health, Volume 36, Article 100574. Pub Date: 2025-03-24. DOI: 10.1016/j.smhl.2025.100574
Yanhui Ren, Di Wang, Lingling An, Shiwen Mao, Xuyu Wang
"Quantum contrastive learning for human activity recognition"

Abstract: Deep learning techniques have been widely used in human activity recognition (HAR) applications. The major challenge lies in obtaining high-quality, large-scale labeled sensor datasets. Unlike image or text datasets, HAR sensor datasets are non-intuitive and hard to interpret, making manual labeling extremely difficult. Self-supervised learning has emerged to address this problem, as it can learn from large-scale unlabeled datasets that are easier to collect. Nevertheless, self-supervised learning comes with increased computational cost and a demand for larger deep neural networks. Recently, quantum machine learning has attracted widespread attention due to its powerful computational capability and feature-extraction ability. In this paper, we aim to address this classical hardware bottleneck using quantum machine learning techniques. We propose QCLHAR, a quantum contrastive learning framework for HAR that combines quantum machine learning with contrastive learning to learn better latent representations. We evaluate the feasibility of the proposed framework on six publicly available HAR datasets. The experimental results demonstrate the framework’s effectiveness: it surpasses or matches the precision of classical contrastive learning with fewer parameters. This validates our approach and demonstrates the significant potential of quantum technology for addressing the scarcity of labeled sensory data.
{"title":"Generalized multisensor wearable signal fusion for emotion recognition from noisy and incomplete data","authors":"Vamsi Kumar Naidu Pallapothula , Sidharth Anand , Sreyasee Das Bhattacharjee, Junsong Yuan","doi":"10.1016/j.smhl.2025.100571","DOIUrl":"10.1016/j.smhl.2025.100571","url":null,"abstract":"<div><div>Continual real-time monitoring of users’ health via noninvasive wearable devices (e.g., smartwatch, smartphone) demonstrates significant potential to enhance human well-being in everyday life. However, due to respective sampling rates, noise sensitivity, and data types, the inherent heterogeneity of the signals received from multiple sensors make the task of biosignal-based emotion recognition both complex and time-consuming. While how to optimally fuse multimode information (where each sensor produces a unique mode-specific input signal) to ensure a reliable inference performance remains difficult, the particular challenges in this problem setting is primarily threefold: (1) The data availability is limited due to several unique person/device-specific properties and high cost of labeling; (2) The acquired signals from wearable devices are often noisy or may as well be lossy due to users’ personal lifestyle choices or environmental interferences; (3) Due to several intra-individual and inter-individual signal variabilities, enabling model generalizability is always difficult. To this end, we propose a general-purpose multisensor fusion network, <em>GM-FuseNet</em> that can seamlessly integrate and transform multi-sensor signal information for a variety of tasks. Unlike a majority of existing works, which rely on a fundamental assumption that full multi-mode query information is present during inference, <em>GM-FuseNet</em>’s first-level preface multimodal transformer module is explicitly designed to enhance both unimodal and multimodal performance in the presence of partial modality details. We also utilize an effective <em>multimodal temporal correlation loss</em> that aligns the unimode signals pairwise in the temporal domain and encourages the model to learn the temporal correlation across multiple sensor-specific signals. Extensive evaluation using two public datasets WESAD and CASE reports outperformance (<span><math><mrow><mn>1</mn><mtext>–</mtext><mn>4</mn><mtext>%</mtext></mrow></math></span>) of the proposed <em>GM-FuseNet</em> against state-of-the-art supervised or self-supervised models while delivering a consistently robust generalization all-across. Additionally, by reporting another <span><math><mrow><mn>2</mn><mtext>–</mtext><mn>4</mn><mtext>%</mtext></mrow></math></span> improved accuracy and F1-scores, <em>GM-FuseNet</em> also demonstrates a significant promise in handling a variety of test environments including the missing and noisy multisensor query signals.</div></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"36 ","pages":"Article 100571"},"PeriodicalIF":0.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart Health, Volume 36, Article 100566. Pub Date: 2025-03-23. DOI: 10.1016/j.smhl.2025.100566
Abm Adnan Azmee, Francis Nweke, Mason Pederson, Md Abdullah Al Hafiz Khan, Yong Pei
"Human AI Collaboration Framework for Detecting Mental Illness Causes from Social Media"

Abstract: Mental health is a critical aspect of our overall well-being. Mental illness refers to conditions that impact an individual’s psychological state, resulting in considerable distress and limitations in performing day-to-day tasks. With the progress of technology, social media has emerged as a platform for individuals to share their thoughts and emotions, and individuals’ psychological states can be assessed with the help of data from these platforms. However, it is challenging for conventional machine learning models to analyze the diverse linguistic contexts of social media data, and effective analysis requires the support of human experts. In this work, we propose a novel human-AI collaboration framework that leverages the strengths of human expertise and artificial intelligence (AI) to overcome these challenges. Our proposed framework utilizes multi-level data along with feedback from human experts to identify the causes behind mental illness. The efficacy and effectiveness of our proposed model are shown by extensive evaluation on Reddit data. Experimental results demonstrate that our proposed model outperforms the baseline model by 3–17%.
Smart Health, Volume 36, Article 100555. Pub Date: 2025-03-18. DOI: 10.1016/j.smhl.2025.100555
Mustafa Elhadi Ahmed, Hongnian Yu, Michael Vassallo, Pelagia Koufaki
"Advancing real-world applications: A scoping review on emerging wearable technologies for recognizing activities of daily living"

Abstract: Wearable technologies for recognizing Activities of Daily Living (ADL) have emerged as a crucial area of research, driven by the global rise in aging populations and the increase in chronic diseases. These technologies offer significant benefits for healthcare by enabling continuous monitoring and early detection of health issues. However, the field of ADL recognition with wearables remains under-explored in key areas such as user variability and data acquisition methodologies. This review provides a comprehensive overview of recent advancements in ADL recognition using wearable devices, with a particular focus on commercially available devices. We systematically analyzed 157 studies from six databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, narrowing our focus to 77 articles that utilized proprietary datasets. These studies revealed three main categories of wearables: prototype devices (40%), commercial research-grade devices (32%), and consumer-grade devices (28%) adapted for ADL recognition. Among the detection algorithms identified, 31% of studies utilized basic machine learning techniques, 40% employed advanced deep learning methods, and the remainder explored ensemble learning and transfer learning approaches. Our findings underscore the growing adoption of accessible, commercial devices for both research and clinical applications. Furthermore, we identified two key areas for future research: the development of user-centered data preparation techniques that account for variability in ADL performance, and the enhancement of wearable technologies to better align with the practical needs of healthcare systems. These advancements are expected to improve the usability and efficiency of wearables in patient care and healthcare management.