{"title":"Generalized multisensor wearable signal fusion for emotion recognition from noisy and incomplete data","authors":"Vamsi Kumar Naidu Pallapothula , Sidharth Anand , Sreyasee Das Bhattacharjee, Junsong Yuan","doi":"10.1016/j.smhl.2025.100571","DOIUrl":"10.1016/j.smhl.2025.100571","url":null,"abstract":"<div><div>Continual real-time monitoring of users’ health via noninvasive wearable devices (e.g., smartwatch, smartphone) demonstrates significant potential to enhance human well-being in everyday life. However, due to respective sampling rates, noise sensitivity, and data types, the inherent heterogeneity of the signals received from multiple sensors make the task of biosignal-based emotion recognition both complex and time-consuming. While how to optimally fuse multimode information (where each sensor produces a unique mode-specific input signal) to ensure a reliable inference performance remains difficult, the particular challenges in this problem setting is primarily threefold: (1) The data availability is limited due to several unique person/device-specific properties and high cost of labeling; (2) The acquired signals from wearable devices are often noisy or may as well be lossy due to users’ personal lifestyle choices or environmental interferences; (3) Due to several intra-individual and inter-individual signal variabilities, enabling model generalizability is always difficult. To this end, we propose a general-purpose multisensor fusion network, <em>GM-FuseNet</em> that can seamlessly integrate and transform multi-sensor signal information for a variety of tasks. Unlike a majority of existing works, which rely on a fundamental assumption that full multi-mode query information is present during inference, <em>GM-FuseNet</em>’s first-level preface multimodal transformer module is explicitly designed to enhance both unimodal and multimodal performance in the presence of partial modality details. We also utilize an effective <em>multimodal temporal correlation loss</em> that aligns the unimode signals pairwise in the temporal domain and encourages the model to learn the temporal correlation across multiple sensor-specific signals. Extensive evaluation using two public datasets WESAD and CASE reports outperformance (<span><math><mrow><mn>1</mn><mtext>–</mtext><mn>4</mn><mtext>%</mtext></mrow></math></span>) of the proposed <em>GM-FuseNet</em> against state-of-the-art supervised or self-supervised models while delivering a consistently robust generalization all-across. Additionally, by reporting another <span><math><mrow><mn>2</mn><mtext>–</mtext><mn>4</mn><mtext>%</mtext></mrow></math></span> improved accuracy and F1-scores, <em>GM-FuseNet</em> also demonstrates a significant promise in handling a variety of test environments including the missing and noisy multisensor query signals.</div></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"36 ","pages":"Article 100571"},"PeriodicalIF":0.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human AI Collaboration Framework for Detecting Mental Illness Causes from Social Media
Abm Adnan Azmee, Francis Nweke, Mason Pederson, Md Abdullah Al Hafiz Khan, Yong Pei
Smart Health, Pub Date: 2025-03-23, DOI: 10.1016/j.smhl.2025.100566 (Volume 36, Article 100566)
Abstract: Mental health is a critical aspect of our overall well-being. Mental illness refers to conditions that affect an individual's psychological state, resulting in considerable distress and limitations in performing day-to-day tasks. With the progress of technology, social media has emerged as a platform for individuals to share their thoughts and emotions, and the psychological state of individuals can be assessed with the help of data from these platforms. However, it is challenging for conventional machine learning models to analyze the diverse linguistic contexts of social media data, and effective analysis also requires the support of human experts. In this work, we propose a novel human-AI collaboration framework that leverages the strengths of human expertise and artificial intelligence (AI) to overcome these challenges. Our proposed framework utilizes multi-level data along with feedback from human experts to identify the causes behind mental illness. The efficacy and effectiveness of our proposed model are shown by extensive evaluation on Reddit data. Experimental results demonstrate that our proposed model outperforms the baseline model by a 3–17% performance improvement.
Advancing real-world applications: A scoping review on emerging wearable technologies for recognizing activities of daily living
Mustafa Elhadi Ahmed, Hongnian Yu, Michael Vassallo, Pelagia Koufaki
Smart Health, Pub Date: 2025-03-18, DOI: 10.1016/j.smhl.2025.100555 (Volume 36, Article 100555)
Abstract: Wearable technologies for Activities of Daily Living (ADL) recognition have emerged as a crucial area of research, driven by the global rise in aging populations and the increase in chronic diseases. These technologies offer significant benefits for healthcare by enabling continuous monitoring and early detection of health issues. However, the field of ADL recognition with wearables remains under-explored in key areas such as user variability and data acquisition methodologies. This review aims to provide a comprehensive overview of recent advancements in ADL recognition using wearable devices, with a particular focus on commercially available devices. We systematically analyzed 157 studies from six databases following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, narrowing our focus to 77 articles that utilized proprietary datasets. These studies revealed three main categories of wearables: prototype devices (40%), commercial research-grade devices (32%), and consumer-grade devices (28%) adapted for ADL recognition. Additionally, various detection algorithms were identified, with 31% of studies utilizing basic machine learning techniques, 40% employing advanced deep learning methods, and the remainder exploring ensemble learning and transfer learning approaches. Our findings underscore the growing adoption of accessible, commercial devices for both research and clinical applications. Furthermore, we identified two key areas for future research: the development of user-centered data preparation techniques to account for variability in ADL performance, and the enhancement of wearable technologies to better align with the practical needs of healthcare systems. These advancements are expected to enhance the usability and efficiency of wearables in improving patient care and healthcare management.
{"title":"Privacy-preserving LLM-based chatbots for hypertensive patient self-management","authors":"Sara Montagna , Stefano Ferretti , Lorenz Cuno Klopfenstein , Michelangelo Ungolo , Martino Francesco Pengo , Gianluca Aguzzi , Matteo Magnini","doi":"10.1016/j.smhl.2025.100552","DOIUrl":"10.1016/j.smhl.2025.100552","url":null,"abstract":"<div><div>Medical chatbots are becoming a basic component in telemedicine, propelled by advancements in Large Language Models (LLMs). However, LLMs’ integration into clinical settings comes with several issues, with privacy concerns being particularly significant.</div><div>The paper proposes a tailored architectural solution and an information workflow that address privacy issues, while preserving the benefits of LLMs. We examine two solutions to prevent the disclosure of sensitive information: <em>(i)</em> a filtering mechanism that processes sensitive data locally but leverage a robust OpenAI’s online LLM for engaging with the user effectively, and <em>(ii)</em> a fully local deployment of open-source LLMs. The effectiveness of these solutions is assessed in the context of hypertension management across various tasks, ranging from intent recognition to reliable and emphatic conversation. Interestingly, while the first solution proved to be more robust in intent recognition, an evaluation by domain experts of the models’ responses, based on reliability and empathetic principles, revealed that two out of six open LLMs received the highest scores.</div><div>The study underscores the viability of incorporating LLMs into medical chatbots. In particular, our findings suggest that open LLMs can offer a privacy-preserving, yet promising, alternative to external LLM services, ensuring safer and more reliable telemedicine practices. Future efforts will focus on fine-tuning local models to enhance their performance across all tasks.</div></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"36 ","pages":"Article 100552"},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143601759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated supplement correction in passive urine color measurement device for real-time hydration testing
Kelvin M. Frazier, Brian F. Bender
Smart Health, Pub Date: 2025-03-04, DOI: 10.1016/j.smhl.2025.100554 (Volume 36, Article 100554)
Abstract: Monitoring urine color as a means of assessing hydration status has long been a recommended technique for active populations such as athletes, military personnel, and outdoor laborers. Urine color correlates well with urine concentration and is a simple, non-invasive practice. However, the approach is currently subjective, and errors arise from variation in ambient lighting conditions, the comparator colors used, and individual perception. In addition, certain supplements such as riboflavin (vitamin B2) and beet juice are known to distort urine color and can confound hydration assessment. An automated urinalysis device (InFlow) was developed to measure urine color, an index of hydration status, in real time during urination in the presence of these supplements. Machine learning techniques were used to reduce the mean absolute hydration assessment error caused by riboflavin-derived color skew from 2.50 (±0.37) to 0.85 (±0.06) color units on a 7-point color chart scale, compared to a commercial colorimeter. In the absence of supplements and in samples spiked with beet juice, the InFlow device produced a mean absolute error of 0.48 (±0.06) color units. Finally, we demonstrate the feasibility of detecting myoglobinuria for potential future use in rhabdomyolysis screening. Our results show that the InFlow device provides a novel approach with appropriate accuracy for standardizing hydration assessment via urinalysis in environments with high testing-frequency demands, in the presence of common urine color interferents including riboflavin and beet juice.
Privacy-preserving polyp segmentation using federated learning with differential privacy
Md. Mahmodul Hasan, Mohammad Motiur Rahman
Smart Health, Pub Date: 2025-02-25, DOI: 10.1016/j.smhl.2025.100551 (Volume 36, Article 100551)
Abstract:
Background and Objective: Patient privacy is of paramount importance in the medical field, especially as data-driven medical applications gain popularity and the privacy of medical records becomes increasingly crucial. In this context, data-driven segmentation of polyps (a precancerous stage of colon cancer) is a critical area of ongoing research, aiming to improve automated segmentation. Accurate segmentation is essential for the complete removal of these overgrown cells from the gastrointestinal system. Although data-driven algorithms trained on large datasets have shown excellent performance in image segmentation, privacy concerns have limited the availability of such datasets for medical image segmentation tasks, including polyp segmentation. This research aims to develop an approach for polyp segmentation that combines data from multiple sources without compromising patient privacy.
Methods: We design a differentially private federated learning system to segment polyps without compromising privacy. Our approach employs the encoder-decoder architecture UNet 3+ with a deep supervision technique to achieve effective segmentation of polyps in a federated setup. The federated training process aims to find generalized global models for the entities participating in the federation. The study uses four public databases to train and evaluate the proposed method.
Results: The proposed privacy-protected technique demonstrates promising outcomes in polyp segmentation, achieving an average Intersection over Union (IoU) score of 0.90881 ± 0.00355 across the four publicly available datasets. Evaluation metrics include precision, sensitivity, and specificity values, indicating the effectiveness of our approach in accurately segmenting polyps.
Conclusions: Our differentially private federated learning system successfully segments polyps without compromising patient privacy. The promising results suggest that this approach can significantly contribute to the field of polyp segmentation, facilitating the use of large datasets while maintaining strict privacy standards.
PulseSight: A novel method for contactless oxygen saturation (SpO2) monitoring using smartphone cameras, remote photoplethysmography and machine learning
Kazi Zawad Arefin, Kazi Shafiul Alam, Sayed Mashroor Mamun, Nafi Us Sabbir Sabith, Masud Rabbani, Parama Sridevi, Sheikh Iqbal Ahamed
Smart Health, Pub Date: 2025-02-11, DOI: 10.1016/j.smhl.2025.100542 (Volume 36, Article 100542)
Abstract: Monitoring oxygen saturation (SpO2) levels is crucial for evaluating a person's current cardiac and respiratory condition, particularly in medical settings. Conventional pulse oximetry, while efficient, has drawbacks such as the requirement for physical contact and vulnerability to certain environmental influences. In this paper, we propose an innovative, contactless approach for estimating SpO2 levels using smartphone cameras and video-based photoplethysmography (PPG). Our framework consists of an Android mobile application that records 20-second face videos, which a cloud-based backend server then analyzes. The server utilizes deep-learning-based facial recognition and signal processing techniques to extract remote photoplethysmography (rPPG) signals from specific facial regions and predicts SpO2 levels using a Support Vector Regression (SVR) machine learning model. Signal noise and motion artifacts are mitigated by extracting relevant characteristics from the rPPG signal. The system was validated through experimental studies comprising 40 sets of videos collected from 10 participants under different illumination conditions, yielding a low RMSE of 1.45 (±0.1) and MAE of 0.92 (±0.01). Our system also shows high usability, as indicated by a System Usability Scale (SUS) score of 80.5. The results demonstrate that our method offers a dependable and contactless alternative for continuous SpO2 monitoring, with potential uses in telemedicine and remote patient monitoring.
Smart health practices: Strategies to improve healthcare efficiency through digital twin technology
Md. Armanul Hasan, Ridwan Mustofa, Niamat Ullah Ibne Hossain, Md. Saiful Islam
Smart Health, Pub Date: 2025-02-01, DOI: 10.1016/j.smhl.2025.100541 (Volume 36, Article 100541)
Abstract: A digital twin (DT) is a virtual representation of a real-world object with dynamic, bidirectional connections between the real-world object and its digital counterpart. With the advent of Industry 4.0, DT technology was initially applied in the engineering and manufacturing sectors, but recent research indicates that DTs may also be useful in the healthcare sector. The purpose of this study was to determine the potential applications of DT technology in healthcare and to offer suggestions for its effective implementation by healthcare institutions to increase service efficiency. Based on a review of the literature, we developed a model to demonstrate the applications of DTs to public and personal health. A questionnaire with a five-point Likert scale was then designed based on this model, and data were collected through an online survey of 306 participants. Structural equation modeling was used to verify the hypothesized correlations among the constructs. The findings suggest that explainable-artificial-intelligence-based early diagnosis, simulation-model-based vaccination, artificial intelligence location technology, sensor-based real-time health monitoring, and in silico personalized medicine are potential applications of DT that can increase healthcare efficiency. We also considered the moderating influence of (a) security and privacy and (b) certification and regulatory issues, acknowledging their pivotal roles in ensuring the successful implementation and widespread acceptance of DT technology in healthcare. This study contributes to the body of knowledge in academia and offers useful insights for technologists, policymakers, and healthcare professionals who want to fully utilize DT technology to build an effective healthcare system that can adapt to the changing needs of communities and individuals.
Human knowledge-based artificial intelligence methods for skin cancer management: Accuracy and interpretability study
Eman Rezk, Mohamed Eltorki, Wael El-Dakhakhni
Smart Health, Pub Date: 2025-01-23, DOI: 10.1016/j.smhl.2025.100540 (Volume 36, Article 100540)
Abstract: Skin cancer management, including monitoring and excision, involves sophisticated decisions that rely on several interdependent factors. This complexity leads to a scarcity of data useful for skin cancer management. Deep learning has achieved massive success in computer vision due to its ability to extract representative features from images, but deep learning methods require large amounts of data to develop accurate models, whereas machine learning methods perform well with small datasets. In this work, we compare the accuracy and interpretability of skin cancer management prediction (1) using deep learning and machine learning methods and (2) utilizing various inputs, including clinical images, dermoscopic images, and tabular lesion clinical features created by experts to represent lesion characteristics. We implemented two approaches: a deep learning pipeline for feature extraction and classification trained on different input modalities (images and lesion clinical features), and a machine learning approach that trains classifiers on lesion clinical features alone. The results show that the machine learning approach trained on clinical features achieves higher accuracy (0.80) and a higher area under the curve (0.92) than the deep learning pipeline trained on skin images and lesion clinical features, which achieves an accuracy of 0.66 and an area under the curve of 0.74. Additionally, the machine learning approach provides more informative and understandable interpretations of the results. This work emphasizes the significance of utilizing human knowledge in developing precise and transparent predictive models. In addition, our findings highlight the potential of machine learning methods for predicting lesion management in situations where the data size is insufficient to leverage deep learning capabilities.
SAFE: Sound Analysis for Fall Event detection using machine learning
Antony Garcia, Xinming Huang
Smart Health, Pub Date: 2025-01-06, DOI: 10.1016/j.smhl.2024.100539 (Volume 35, Article 100539)
Abstract: This study evaluates machine learning (ML) and deep learning (DL) algorithms for fall detection using sound signals. The work is supported by the Sound Analysis for Fall Events (SAFE) dataset, comprising 950 audio samples, including 475 fall events recorded with a grappling dummy to simulate realistic scenarios. Decision-tree-based ML algorithms achieved a classification accuracy of 93% at lower sampling rates, indicating that critical features are preserved despite reduced resolution. DL models using spectrogram-based feature extraction reached accuracies of up to 99%, surpassing traditional ML methods. Linear models also achieved high accuracy (up to 97%) across various spectrogram techniques, emphasizing the separability of the audio features. These results establish the viability of sound-based fall detection systems as efficient and accurate solutions.