"Online continuous learning of users' suicidal risk on social media"
Lei Cao, Ling Feng, Yang Ding, Huijun Zhang, Xin Wang, Kaisheng Zeng, Yi Dai
Artificial Intelligence in Medicine, vol. 167, Article 103199. Published 2025-06-24. DOI: 10.1016/j.artmed.2025.103199

Abstract: Suicide is a tragedy for families and society. With social media now an integral part of people's lives, assessing suicidal risk from a person's social media behavior has drawn increasing research attention. Most prior work trains a machine learning model to classify a user's suicidal risk severity level in a batch learning setting over the entire training data. This is neither a timely nor a scalable solution for social media, where new data arrives sequentially as a stream. In this study, we formulate and address the continuous suicidal risk assessment problem through a three-layered joint memory network consisting of a short-term personal memory and long-term personal and global memories. Unlike existing methods that rely on static classification, our model supports real-time, continuous learning from users' emotional and behavioral dynamics without full retraining, allowing personalized and adaptive risk tracking over time. We also present a way to continuously capture users' personal features and integrate them into suicidal risk assessment. On a constructed dataset containing 95 suicidal and 95 non-suicidal social media users, the proposed method achieves 96% accuracy.
"Zero- and few-shot Named Entity Recognition and Text Expansion in medication prescriptions using large language models"
Natthanaphop Isaradech, Andrea Riedel, Wachiranun Sirikul, Markus Kreuzthaler, Stefan Schulz
Artificial Intelligence in Medicine, vol. 167, Article 103165. Published 2025-06-20. DOI: 10.1016/j.artmed.2025.103165

Abstract: Medication prescriptions in electronic health records (EHRs) are often free-text and may mix languages, local brand names, and a wide range of idiosyncratic formats and abbreviations. Large language models (LLMs) have shown a promising ability to generate text in response to input prompts. We use ChatGPT3.5 to automatically structure and expand medication statements in discharge summaries, making them easier to interpret for people and machines. Named Entity Recognition (NER) and Text Expansion (EX) are performed with different prompt strategies in zero- and few-shot settings. 100 medication statements were manually annotated and curated. NER performance was measured using strict and partial matching. For the EX task, two experts interpreted the results by assessing semantic equivalence between original and expanded statements. Model performance was measured by precision, recall, and F1 score. For NER, the best-performing prompt reached an average F1 score of 0.94 on the test set. For EX, the few-shot prompt showed superior performance, with an average F1 score of 0.87. Our study demonstrates good performance for NER and EX on free-text medication statements using ChatGPT3.5. Compared with a zero-shot baseline, the few-shot approach prevented the system from hallucinating, which is essential when processing safety-relevant medication data. We also tested the ChatGPT3.5-tuned prompts on other LLMs, including ChatGPT4o, Gemini 2.0 Flash, MedLM-1.5-Large, and DeepSeekV3; most of these models outperformed ChatGPT3.5 on the NER and EX tasks.
"A prior knowledge-supervised fusion network predicts survival after radiotherapy in patients with advanced gastric cancer"
Liang Sun, Yongxin Lan, Jian Sun, Pengfei Ji, Hongwei Ge, Ming Cui, Xin Yuan
Artificial Intelligence in Medicine, vol. 167, Article 103184. Published 2025-06-19. DOI: 10.1016/j.artmed.2025.103184

Abstract:
Background and objective: Predicting overall survival (OS) for advanced gastric cancer patients after radiotherapy is critical for developing individualized treatment plans. However, existing studies have focused on gastric cancer CT images containing a large amount of redundant information and have neglected the role of physicians' prior knowledge in guiding the use of that image information. We propose a multimodal fusion method based on prior knowledge to predict OS after radiotherapy in advanced gastric cancer patients and to assist physicians in clinical diagnosis and treatment.
Methods: A prior knowledge supervised fusion network (PKSFnet) is proposed. First, PKSFnet uses a novel sampling strategy that lets the model input cover the complete feature space by analyzing the entire patient data space. Then, under the guidance of the multi-domain feature fusion module (MdFF), multimodal patient information is adaptively fused and mined to improve prediction performance.
Results: The proposed model outperforms other unimodal and multimodal state-of-the-art methods. For the segmented survival time classification task, the AUC, specificity, sensitivity, and precision of the proposed model are 0.8397, 0.875, 0.7556, and 0.875, respectively. For the survival risk regression task, the C-index and HR are 0.8574 and 4.658, respectively. Ablation experiments further demonstrate the contribution of each module. Finally, applying the novel sampling strategy to other deep learning models also yields significant improvements.
Conclusion: The experimental results show that the proposed model can effectively predict OS after radiotherapy in patients with advanced gastric cancer and can facilitate the development and application of robust clinical treatment strategies.
"Interactive prototype learning and self-learning for few-shot medical image segmentation"
Yuhui Song, Chenchu Xu, Boyan Wang, Xiuquan Du, Jie Chen, Yanping Zhang, Shuo Li
Artificial Intelligence in Medicine, vol. 167, Article 103183. Published 2025-06-18. DOI: 10.1016/j.artmed.2025.103183

Abstract: Few-shot learning alleviates medical image segmentation's heavy dependence on large-scale labeled data, but it still shows large performance gaps on new tasks compared with traditional deep learning. Existing methods mainly learn class knowledge from a few known (support) samples and extend it to unknown (query) samples. However, large distribution differences between the support and query images cause serious deviations in the transfer of class knowledge, which can be summarized as two segmentation challenges: intra-class inconsistency with inter-class similarity, and blurred, confused boundaries. In this paper, we propose a new interactive prototype learning and self-learning network to address these challenges. First, we propose a deep encoding-decoding module that learns high-level features of the support and query images to build peak prototypes carrying the richest semantic information and to provide semantic guidance for segmentation. Second, we propose an interactive prototype learning module that improves intra-class feature consistency and reduces inter-class feature similarity by performing mean prototype interaction on mid-level features and peak prototype interaction on high-level features. Finally, we propose a query features-guided self-learning module that separates foreground and background at the feature level and combines low-level feature maps to complement boundary information. Our model achieves competitive segmentation performance on benchmark datasets and shows substantial improvement in generalization ability.
"Your turn: At home turning angle estimation for Parkinson's disease severity assessment"
Qiushuo Cheng, Catherine Morgan, Arindam Sikdar, Alessandro Masullo, Alan Whone, Majid Mirmehdi
Artificial Intelligence in Medicine, vol. 167, Article 103194. Published 2025-06-18. DOI: 10.1016/j.artmed.2025.103194

Abstract: People with Parkinson's Disease (PD) often experience progressively worsening gait, including changes in how they turn around, as the disease progresses. Existing clinical rating tools cannot capture hour-by-hour variations of PD symptoms, as they are confined to brief assessments within clinic settings, leaving gait performance outside these controlled environments unaccounted for. Measuring turning angles continuously and passively is a step towards using gait characteristics as sensitive indicators of disease progression in PD. This paper presents a deep learning-based approach to automatically quantify turning angles by extracting 3D skeletons from videos and calculating the rotation of the hip and knee joints. We apply advanced human pose estimation models, Fastpose and Strided Transformer, to 1386 turning video clips from 24 subjects (12 people with PD and 12 healthy control volunteers), trimmed from a PD dataset of unscripted free-living videos in a home-like setting (Turn-REMAP). We also curate a turning video dataset, Turn-H3.6M, from the public Human3.6M human pose benchmark with 3D groundtruth, to further validate our method. Previous gait research has primarily taken place in clinics or laboratories evaluating scripted gait outcomes, whereas this work focuses on free-living home settings with complexities such as baggy clothing and poor lighting. Because accurate groundtruth is difficult to obtain in a free-living setting, we quantise the angle into the nearest 45° bin based on the manual labelling of expert clinicians. Our method achieves a turning calculation accuracy of 41.6%, a Mean Absolute Error (MAE) of 34.7°, and a weighted precision (WPrec) of 68.3% on Turn-REMAP. On Turn-H3.6M, it achieves an accuracy of 73.5%, an MAE of 18.5°, and a WPrec of 86.2%. This is the first work to explore the use of single monocular camera data to quantify turns by PD patients in a home setting. All data and models are publicly available, providing a baseline for turning parameter measurement to promote future PD gait research.
"VAE-GANMDA: A microbe-drug association prediction model integrating variational autoencoders and generative adversarial networks"
Bo Wang, Yang He, Xiaoxin Du, Lei Zhu, Junqi Wang, Tongxuan Wang
Artificial Intelligence in Medicine, vol. 167, Article 103198. Published 2025-06-16. DOI: 10.1016/j.artmed.2025.103198

Abstract: Traditional biological experiments for identifying microbe-drug associations typically take weeks or even months, and a single experiment can cost hundreds or even thousands of dollars, making them expensive and time-consuming. To address this, we propose VAE-GANMDA, a model that integrates variational autoencoders (VAE) and generative adversarial networks (GAN) to predict microbe-drug associations. First, a heterogeneous microbe-drug network is constructed to enrich the association information. Second, by fusing the VAE and GAN, the model learns the manifold distribution of the data from association features, yielding nonlinear manifold features. The VAE generation module is further improved by integrating the Convolutional Block Attention Module (CBAM) and a Gaussian kernel function, enhancing the smooth perception of manifold features and giving the VAE stronger feature extraction capability. Singular value decomposition (SVD) is then employed to extract linear features of the data. Finally, combining the linear and nonlinear features, the k-means++ algorithm is used to select balanced, high-quality negative samples for training an MLP classifier. In performance evaluation, VAE-GANMDA reaches an area under the receiver operating characteristic curve (AUROC) of 0.9724 and an area under the precision-recall curve (AUPRC) of 0.9635, outperforming classical machine learning methods and the majority of deep learning methods. Case studies demonstrate that VAE-GANMDA accurately predicts candidate drugs related to SARS-CoV-2 and candidate microbes related to ciprofloxacin.
"An anxiety screening framework integrating multimodal data and graph node correlation"
Haimiao Mo, Hongjia Wu, Qian Rong, Zhijian Hu, Meng Yi, Peipei Chen
Artificial Intelligence in Medicine, vol. 167, Article 103189. Published 2025-06-16. DOI: 10.1016/j.artmed.2025.103189

Abstract: Anxiety disorders are a significant global health concern, profoundly impacting patients' lives and social functioning while imposing considerable burdens on families and economies. Current anxiety screening methods, however, face limitations due to cost constraints and cognitive biases, particularly their inability to model correlations among multidimensional features in depth; they often overlook crucial information inherent in the internal couplings of these features, limiting their accuracy and clinical applicability. To address these challenges, we propose an anxiety screening framework that integrates multimodal data, such as physiological, behavioral, audio, and textual signals, using a Graph Convolutional Network (GCN). While the framework draws on existing components such as GCNs, one-dimensional convolutional neural networks, and gated recurrent units, its distinctiveness lies in how these components are combined to capture complex spatiotemporal relationships and correlations among multimodal features. Experimental results demonstrate robust performance, with an accuracy of 93.48%, an Area Under the Curve of 94.58%, a precision of 90.00%, a sensitivity of 81.82%, a specificity of 97.14%, and an F1 score of 85.71%. Notably, the method remains effective even when questionnaire data are unavailable, underscoring its practicality and reliability. This anxiety screening approach provides a new perspective for early identification and intervention of anxiety symptoms, offering a scientific basis for personalized treatment and prevention through the analysis of multimodal data and graph structures.
"Predicting drug-drug interactions: A deep learning approach with GCN-based collaborative filtering"
Yeon Uk Jeong, Jeongwhan Choi, Noseong Park, Jae Yong Ryu, Yi Rang Kim
Artificial Intelligence in Medicine, vol. 167, Article 103185. Published 2025-06-13. DOI: 10.1016/j.artmed.2025.103185

Abstract: The use of combination drugs among patients is increasing because of their effectiveness compared with monotherapies. However, healthcare providers must remain alert to the patient-safety risks arising from drug-drug interactions (DDIs) when combination drugs are used. While direct physicochemical interactions account for some DDIs, the majority occur because one drug modulates enzymes, such as cytochrome P450, responsible for metabolizing another drug. Drugs that interact with the same family of drugs are therefore more likely to interact with each other through the mediation of specific enzymes. Adapting techniques used to make recommendations to users with similar interests, we introduce a recommendation model based on a graph convolutional network (GCN) and collaborative filtering that analyzes the connectivity of interacting drugs rather than their chemical structures. This approach differs from typical classification models in that it does not require sampling undefined interactions as negative examples, allowing the prediction of potential interactions for all unknown drug pairs and circumventing the challenges of selecting negative interactions and handling data imbalance. We used the DrugBank database (version 5.1.9, released on January 3, 2022), encompassing 4,072 drugs and 1,391,790 drug pairs with interactions. The robustness of the model was verified through 5-fold validation and external validation on TWOSIDES data. Notably, the model's efficacy is established solely through the exploitation of DDI reports, offering a versatile framework capable of accurately predicting interactions among diverse drug types. The source code for this project is distributed on GitHub (https://github.com/yeonuk-Jeong/DDI-OCF).
"Label-independent framework for objective evaluation of cosmetic outcome in breast cancer"
Sangjoon Park, Yong Bae Kim, Jee Suk Chang, Seo Hee Choi, Hyungjin Chung, Ik Jae Lee, Hwa Kyung Byun
Artificial Intelligence in Medicine, vol. 167, Article 103179. Published 2025-06-09. DOI: 10.1016/j.artmed.2025.103179

Abstract: With advancements in the field of breast cancer treatment, the assessment of postsurgical cosmetic outcomes has gained increasing significance owing to its substantial impact on patients' quality of life. However, evaluating breast cosmesis is challenging because of the inherently subjective nature of expert labeling. In this study, we present a novel automated approach, attention-guided denoising diffusion anomaly detection (AG-DDAD), designed to assess breast cosmesis following surgery. The model addresses the limitations of conventional supervised learning and existing anomaly detection models. Our approach leverages the attention mechanism of distillation with no labels and a self-supervised vision transformer, combined with a diffusion model, to achieve high-quality image reconstruction and precise transformation of discriminative regions. By training the diffusion model on unlabeled data, predominantly with normal cosmesis, we adopt an unsupervised anomaly detection perspective to automatically score cosmesis. Real-world data experiments demonstrate the effectiveness of the method, providing visually appealing representations and quantifiable scores for cosmesis evaluation. Compared with commonly used rule-based programs, our fully automated approach eliminates the need for manual annotations and offers an objective evaluation. Moreover, our anomaly detection model exhibits state-of-the-art performance, surpassing existing models in accuracy. Beyond the scope of breast cosmesis, our research represents a significant advancement in unsupervised anomaly detection within the medical domain, paving the way for future investigations.
"Probabilistic emotion and sentiment modelling of patient-reported experiences"
Curtis Murray, Lewis Mitchell, Jonathan Tuke, Mark Mackay
Artificial Intelligence in Medicine, vol. 167, Article 103178. Published 2025-06-08. DOI: 10.1016/j.artmed.2025.103178

Abstract: Patient feedback is necessary to assess the extent to which healthcare delivery aligns with public needs and expectations. Surveys provide structured feedback that is readily analysed; however, they are costly, infrequent, and constrained by predefined questions, limiting a comprehensive understanding of patient experience. In contrast, the unstructured nature of online reviews and social media posts can reveal unique insights into patient perspectives, yet that very lack of structure presents a challenge for analysis. In this study, we present a methodology for interpretable probabilistic modelling of patient emotions from patient-reported experiences. We employ metadata-network topic modelling to uncover key themes in 13,380 patient-reported experiences from Care Opinion (2012-2022) and reveal insightful relationships between these themes and labelled emotions. Our results show that positivity and negativity relate most strongly to aspects of patient experience, such as patient-caregiver interactions, rather than clinical outcomes. Patient educational engagement exhibits strong positivity, whereas dismissal and rejection are linked to suicidality and depression. We develop a context-specific probabilistic emotion recommender system that predicts both multi-label emotions and binary sentiments with a Naïve Bayes classifier using topics as predictors. We assess performance with nDCG and Q-measure and achieve an F1 of 0.921, significantly outperforming standard sentiment lexicons. This methodology offers a cost-effective, timely, and transparent approach to harnessing unconstrained patient-reported feedback, with the potential to augment traditional patient-reported experience collection. Our R package and interactive dashboard make the approach readily accessible for future research and clinical practice, enabling hospitals to integrate emotional insights into surveys and tailor care to patient needs. Overall, this study provides a new avenue for understanding and improving patient experience and the quality of healthcare delivery.