Identifying Asthma-Related Symptoms From Electronic Health Records Using a Hybrid Natural Language Processing Approach Within a Large Integrated Health Care System: Retrospective Study
Fagen Xie, Robert S Zeiger, Mary Marycania Saparudin, Sahar Al-Salman, Eric Puttock, William Crawford, Michael Schatz, Stanley Xu, William M Vollmer, Wansu Chen
JMIR AI. 2025;4:e69132. Published May 2, 2025. doi:10.2196/69132. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12231518/pdf/

Background: Asthma-related symptoms are significant predictors of asthma exacerbation. Most of these symptoms are documented as free text in clinical notes, and effective methods for capturing them from unstructured data are lacking.

Objective: The study aims to develop a natural language processing (NLP) algorithm for identifying symptoms associated with asthma from clinical notes within a large integrated health care system.

Methods: We analyzed unstructured clinical notes written within 2 years before a visit with an asthma diagnosis in 2013-2018 and 2021-2022 to identify 4 common asthma-related symptoms. Related terms and phrases were initially compiled from publicly available resources and then refined through clinician input and chart review. A rule-based NLP algorithm was iteratively developed and refined via multiple rounds of chart review followed by adjudication. Subsequently, transformer-based deep learning algorithms were trained using the same manually annotated datasets. A hybrid NLP algorithm was then generated by combining the rule-based and transformer-based algorithms and applied to the implementation notes.

Results: A total of 11,374,552 eligible clinical notes containing 128,211,793 sentences were analyzed. After applying the hybrid algorithm to the implementation notes, at least 1 asthma-related symptom was identified in 1,663,450 of 127,763,086 (1.3%) sentences and 858,350 of 11,364,952 (7.55%) notes. Cough was the most frequently identified symptom at both the sentence (1,363,713/127,763,086, 1.07%) and note (660,685/11,364,952, 5.81%) levels, while chest tightness was the least frequent at both the sentence (141,733/127,763,086, 0.11%) and note (64,251/11,364,952, 0.57%) levels. The frequency of multiple symptoms ranged from 0.03% (36,057/127,763,086) to 0.38% (484,050/127,763,086) at the sentence level and from 0.10% (10,954/11,364,952) to 1.85% (209,805/11,364,952) at the note level. Validation against 1600 manually annotated clinical notes yielded a positive predictive value ranging from 96.53% (wheezing) to 97.42% (chest tightness) at the sentence level and from 96.76% (wheezing) to 97.42% (chest tightness) at the note level. Sensitivity ranged from 93.9% (dyspnea) to 95.95% (cough) at the sentence level and from 96% (chest tightness) to 99.07% (cough) at the note level. All 4 symptoms had F1-scores greater than 0.95 at both the sentence and note levels, regardless of the NLP algorithm used.

Conclusions: The developed NLP algorithms effectively captured asthma-related symptoms from unstructured clinical notes and could be used to facilitate early asthma detection and predict exacerbation risk.
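The hybrid design pairs a curated symptom lexicon with a learned sentence classifier. Below is a minimal Python sketch of one way such a combination could work at the sentence level; the regex patterns and the transformer_predict callable are illustrative placeholders, not the study's actual lexicon or model.

```python
import re

# Illustrative patterns only; the study's lexicon was compiled from public
# resources and refined by clinicians, and is not reproduced here.
SYMPTOM_PATTERNS = {
    "cough": [r"\bcough(?:ing|s|ed)?\b"],
    "wheezing": [r"\bwheez(?:e|es|ed|ing)\b"],
    "dyspnea": [r"\bdyspnea\b", r"\bshort(?:ness)? of breath\b", r"\bsob\b"],
    "chest tightness": [r"\bchest tight(?:ness)?\b"],
}

def rule_based_symptoms(sentence):
    """Return the set of symptoms whose patterns match the sentence."""
    hits = set()
    for symptom, patterns in SYMPTOM_PATTERNS.items():
        if any(re.search(p, sentence, re.IGNORECASE) for p in patterns):
            hits.add(symptom)
    return hits

def hybrid_symptoms(sentence, transformer_predict):
    """One plausible hybrid scheme: union the rule-based hits with the labels
    emitted by a transformer sentence classifier (a callable placeholder)."""
    return rule_based_symptoms(sentence) | set(transformer_predict(sentence))

# Example: rule_based_symptoms("Patient reports nighttime coughing and wheezing.")
# -> {"cough", "wheezing"}
```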
Fine-Grained Classification of Pressure Ulcers and Incontinence-Associated Dermatitis Using Multimodal Deep Learning: Algorithm Development and Validation Study
Alexander Brehmer, Constantin Seibold, Jan Egger, Khalid Majjouti, Michaela Tapp-Herrenbrück, Hannah Pinnekamp, Vanessa Priester, Michael Aleithe, Uli Fischer, Bernadette Hosters, Jens Kleesiek
JMIR AI. 2025;4:e67356. Published May 1, 2025. doi:10.2196/67356. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12223690/pdf/

Background: Pressure ulcers (PUs) and incontinence-associated dermatitis (IAD) are prevalent conditions in clinical settings, posing significant challenges due to their similar presentations but differing treatment needs. Accurate differentiation between PUs and IAD is essential for appropriate patient care, yet it remains a burden for nursing staff and wound care experts.

Objective: This study aims to develop and introduce a robust multimodal deep learning framework for the classification of PUs and IAD, along with fine-grained categorization of their respective wound severities, to enhance diagnostic accuracy and support clinical decision-making.

Methods: We collected and annotated a dataset of 1555 wound images, achieving consensus among 4 wound experts. Our framework integrates wound images with categorical patient data to improve classification performance. We evaluated 4 models (2 convolutional neural networks and 2 transformer-based architectures), each with approximately 25 million parameters. Various data preprocessing strategies, augmentation techniques, training methods (including multimodal data integration, synthetic data generation, and sampling), and postprocessing approaches (including ensembling and test-time augmentation) were systematically tested to optimize model performance.

Results: The transformer-based TinyViT model achieved the highest performance in binary classification of PU and IAD, with an F1-score (harmonic mean of precision and recall) of 93.23%, outperforming wound care experts and nursing staff on the test dataset. In fine-grained classification of wound categories, the TinyViT model also performed best for PU categories, with an F1-score of 75.43%, while ConvNeXtV2 showed superior performance in IAD category classification, with an F1-score of 53.20%. Incorporating multimodal data improved performance in binary classification but had less impact on fine-grained categorization. Augmentation strategies and training techniques significantly influenced model performance, with ensembling enhancing accuracy across all tasks.

Conclusions: Our multimodal deep learning framework effectively differentiates between PUs and IAD, achieving high accuracy and outperforming human wound care experts. By integrating wound images with categorical patient data, the model enhances diagnostic precision, offering a valuable decision-support tool for health care professionals. This advancement has the potential to reduce diagnostic uncertainty, optimize treatment pathways, and alleviate the burden on medical staff, leading to faster interventions and improved patient outcomes. The framework's strong performance suggests practical applications in clinical settings, such as integration into hospital electronic health record systems or mobile applications for bedside diagnostics. Future work should focus on validating real-world …
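As a rough illustration of the multimodal integration described above, the following PyTorch sketch fuses pooled image features with encoded categorical patient data by concatenation. ResNet-18 is an assumed stand-in backbone (the study used TinyViT, ConvNeXtV2, and other roughly 25-million-parameter architectures), and this late-fusion scheme is one plausible design, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalWoundClassifier(nn.Module):
    """Late-fusion sketch: concatenate pooled image features with a small
    embedding of categorical patient data, then classify."""

    def __init__(self, num_tabular_features: int, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights="DEFAULT")  # stand-in backbone
        feat_dim = backbone.fc.in_features             # 512 for ResNet-18
        backbone.fc = nn.Identity()                    # expose pooled features
        self.backbone = backbone
        self.tabular = nn.Sequential(nn.Linear(num_tabular_features, 32), nn.ReLU())
        self.head = nn.Linear(feat_dim + 32, num_classes)

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        img_feat = self.backbone(image)   # (batch, 512)
        tab_feat = self.tabular(tabular)  # (batch, 32)
        return self.head(torch.cat([img_feat, tab_feat], dim=1))

# Example shapes: images (B, 3, 224, 224); one-hot patient data (B, 12).
model = MultimodalWoundClassifier(num_tabular_features=12)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 12))
```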
Correction: Improving the Robustness and Clinical Applicability of Automatic Respiratory Sound Classification Using Deep Learning-Based Audio Enhancement: Algorithm Development and Validation
Jing-Tong Tzeng, Jeng-Lin Li, Huan-Yu Chen, Chun-Hsiang Huang, Chi-Hsin Chen, Cheng-Yi Fan, Edward Pei-Chuan Huang, Chi-Chun Lee
JMIR AI. 2025;4:e76150. Published April 29, 2025. doi:10.2196/76150. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12076023/pdf/

[This corrects the article DOI: 10.2196/67239.]
{"title":"Using Segment Anything Model 2 for Zero-Shot 3D Segmentation of Abdominal Organs in Computed Tomography Scans to Adapt Video Tracking Capabilities for 3D Medical Imaging: Algorithm Development and Validation.","authors":"Yosuke Yamagishi, Shouhei Hanaoka, Tomohiro Kikuchi, Takahiro Nakao, Yuta Nakamura, Yukihiro Nomura, Soichiro Miki, Takeharu Yoshikawa, Osamu Abe","doi":"10.2196/72109","DOIUrl":"10.2196/72109","url":null,"abstract":"<p><strong>Background: </strong>Medical image segmentation is crucial for diagnosis and treatment planning in radiology, but it traditionally requires extensive manual effort and specialized training data. With its novel video tracking capabilities, the Segment Anything Model 2 (SAM 2) presents a potential solution for automated 3D medical image segmentation without the need for domain-specific training. However, its effectiveness in medical applications, particularly in abdominal computed tomography (CT) imaging remains unexplored.</p><p><strong>Objective: </strong>The aim of this study was to evaluate the zero-shot performance of SAM 2 in 3D segmentation of abdominal organs in CT scans and to investigate the effects of prompt settings on segmentation results.</p><p><strong>Methods: </strong>In this retrospective study, we used a subset of the TotalSegmentator CT dataset from eight institutions to assess SAM 2's ability to segment eight abdominal organs. Segmentation was initiated from three different z-coordinate levels (caudal, mid, and cranial levels) of each organ. Performance was measured using the dice similarity coefficient (DSC). We also analyzed the impact of \"negative prompts,\" which explicitly exclude certain regions from the segmentation process, on accuracy.</p><p><strong>Results: </strong>A total of 123 patients (mean age 60.7, SD 15.5 years; 63 men, 60 women) were evaluated. As a zero-shot approach, larger organs with clear boundaries demonstrated high segmentation performance, with mean DSCs as follows: liver, 0.821 (SD 0.192); right kidney, 0.862 (SD 0.212); left kidney, 0.870 (SD 0.154); and spleen, 0.891 (SD 0.131). Smaller organs showed lower performance: gallbladder, 0.531 (SD 0.291); pancreas, 0.361 (SD 0.197); and adrenal glands-right, 0.203 (SD 0.222) and left, 0.308 (SD 0.234). The initial slice for segmentation and the use of negative prompts significantly influenced the results. By removing negative prompts from the input, the DSCs significantly decreased for six organs.</p><p><strong>Conclusions: </strong>SAM 2 demonstrated promising zero-shot performance in segmenting certain abdominal organs in CT scans, particularly larger organs. Performance was significantly influenced by input negative prompts and initial slice selection, highlighting the importance of optimizing these factors.</p>","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"4 ","pages":"e72109"},"PeriodicalIF":2.0,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12231515/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144562191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Diagnostic Performance of Large Language Models and Oral Medicine Consultants for Identifying Oral Lesions in Text-Based Clinical Scenarios: Prospective Comparative Study
Sarah AlFarabi Ali, Hebah AlDehlawi, Ahoud Jazzar, Heba Ashi, Nihal Esam Abuzinadah, Mohammad AlOtaibi, Abdulrahman Algarni, Hazzaa Alqahtani, Sara Akeel, Soulafa Almazrooa
JMIR AI. 2025;4:e70566. Published April 24, 2025. doi:10.2196/70566. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12223689/pdf/

Background: The use of artificial intelligence (AI), especially large language models (LLMs), is increasing in health care, including in dentistry. The diagnostic performance of LLMs in oral medicine has yet to be assessed.

Objective: We aimed to compare the effectiveness of ChatGPT (OpenAI) and Microsoft Copilot (integrated within the Microsoft 365 suite) with that of oral medicine consultants in formulating accurate differential and final diagnoses for oral lesions from written clinical scenarios.

Methods: Fifty comprehensive clinical case scenarios, each including patient age, presenting complaint, history of the presenting complaint, medical history, allergies, intra- and extraoral findings, lesion description, and any additional information such as laboratory investigations and specific clinical features, were given to 3 oral medicine consultants, who were asked to formulate a differential diagnosis and a final diagnosis. Specific prompts for the same 50 cases were designed and input into ChatGPT and Copilot to formulate both differential and final diagnoses. Diagnostic accuracy was compared between the LLMs and the oral medicine consultants.

Results: ChatGPT exhibited the highest accuracy, providing the correct differential diagnoses in 37 of 50 cases (74%). There were no significant differences between the AI models and the oral medicine consultants in the accuracy of providing the correct differential diagnoses. ChatGPT was as accurate as the consultants in making the final diagnoses, but Copilot was significantly less accurate than ChatGPT (P=.015) and one of the oral medicine consultants (P<.001) in providing the correct final diagnosis.

Conclusions: ChatGPT and Copilot show promising performance for diagnosing oral medicine pathology in clinical case scenarios to assist dental practitioners. ChatGPT-4 and Copilot are still evolving, but even now, they might provide a significant advantage in the clinical setting as tools to help dental practitioners in their daily practice.
{"title":"Trust, Anxious Attachment, and Conversational AI Adoption Intentions in Digital Counseling: A Preliminary Cross-Sectional Questionnaire Study.","authors":"Xiaoli Wu, Kongmeng Liew, Martin J Dorahy","doi":"10.2196/68960","DOIUrl":"https://doi.org/10.2196/68960","url":null,"abstract":"<p><strong>Background: </strong>Conversational artificial intelligence (CAI) is increasingly used in various counseling settings to deliver psychotherapy, provide psychoeducational content, and offer support like companionship or emotional aid. Research has shown that CAI has the potential to effectively address mental health issues when its associated risks are handled with great caution. It can provide mental health support to a wider population than conventional face-to-face therapy, and at a faster response rate and more affordable cost. Despite CAI's many advantages in mental health support, potential users may differ in their willingness to adopt and engage with CAI to support their own mental health.</p><p><strong>Objective: </strong>This study focused specifically on dispositional trust in AI and attachment styles, and examined how they are associated with individuals' intentions to adopt CAI for mental health support.</p><p><strong>Methods: </strong>A cross-sectional survey of 239 American adults was conducted. Participants were first assessed on their attachment style, then presented with a vignette about CAI use, after which their dispositional trust and subsequent adoption intentions toward CAI counseling were surveyed. Participants had not previously used CAI for digital counseling for mental health support.</p><p><strong>Results: </strong>Dispositional trust in artificial intelligence emerged as a critical predictor of CAI adoption intentions (P<.001), while attachment anxiety (P=.04), rather than avoidance (P=.09), was found to be positively associated with the intention to adopt CAI counseling after controlling for age and gender.</p><p><strong>Conclusions: </strong>These findings indicated higher dispositional trust might lead to stronger adoption intention, and higher attachment anxiety might also be associated with greater CAI counseling adoption. Further research into users' attachment styles and dispositional trust is needed to understand individual differences in CAI counseling adoption for enhancing the safety and effectiveness of CAI-driven counseling services and tailoring interventions.</p><p><strong>Trial registration: </strong>Open Science Framework; https://osf.io/c2xqd.</p>","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"4 ","pages":"e68960"},"PeriodicalIF":0.0,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12056427/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Conversational Agent Using Natural Language Processing for Postpartum Care for New Mothers: Development and Engagement Analysis
Kirstin Leitner, Clare Cutri-French, Abigail Mandel, Lori Christ, Nathaneal Koelper, Meaghan McCabe, Emily Seltzer, Laura Scalise, James A Colbert, Anuja Dokras, Roy Rosin, Lisa Levine
JMIR AI. 2025;4:e58454. Published April 22, 2025. doi:10.2196/58454. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12223682/pdf/

Background: The "fourth trimester," or postpartum period, remains a critical phase of pregnancy that significantly impacts parents and newborns. Care poses challenges due to complex individual needs and low attendance rates at routine appointments. A comprehensive technological solution could provide a holistic and equitable way to meet care goals.

Objective: This paper describes the development of a novel postpartum conversational agent that uses natural language processing to support patients post partum, and reports patient engagement data.

Methods: We report on the development of the conversational agent from concept to usable product, as well as patient engagement with the technology. Content for the program was developed using patient and provider input together with clinical algorithms. The program offered patients 2-way communication and details on physical recovery, lactation support, infant care, and warning signs of problems. It was iterated upon by our core clinical team and an external expert clinical panel before being tested with patients. Patients eligible for discharge around 24 hours after delivery who had delivered a singleton, full-term infant vaginally were offered use of the program. Patient demographics, accuracy, and patient engagement were collected over the first 6 months of use.

Results: A total of 290 patients used the conversational agent over the first 6 months, of whom 112 (38.6%) were first-time parents and 162 (56%) were Black. In total, 286 (98.6%) patients interacted with the platform at least once, 271 (93.4%) completed at least one survey, and 151 (52%) asked a question. First-time parents and those breastfeeding their infants had higher rates of engagement overall. Black patients were more likely to promote the program than White patients (P=.047). The overall accuracy of the conversational agent during the first 6 months was 77%.

Conclusions: It is possible to develop a comprehensive, automated postpartum conversational agent. The use of such a technology to support patients after discharge appears to be acceptable, with very high engagement and patient satisfaction.
Comparison of Deep Learning Approaches Using Chest Radiographs for Predicting Clinical Deterioration: Retrospective Observational Study
Mahmudur Rahman, Jifan Gao, Kyle A Carey, Dana P Edelson, Askar Afshar, John W Garrett, Guanhua Chen, Majid Afshar, Matthew M Churpek
JMIR AI. 2025;4:e67144. Published April 10, 2025. doi:10.2196/67144. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12223691/pdf/

Background: Early detection of clinical deterioration and timely intervention can improve outcomes for hospitalized patients. Existing early warning systems rely on variables from structured data, such as vital signs and laboratory values, and do not incorporate other potentially predictive data modalities. Because respiratory failure is a common cause of deterioration, chest radiographs are often acquired in deteriorating patients and may be informative for predicting their risk of intensive care unit (ICU) transfer.

Objective: This study aimed to compare and validate different computer vision models and data augmentation approaches using chest radiographs to predict clinical deterioration.

Methods: This retrospective observational study included adult patients hospitalized at the University of Wisconsin Health System between 2009 and 2020 with an elevated electronic cardiac arrest risk triage (eCART) score, a validated early warning score for clinical deterioration, on the medical-surgical wards. Patients with a chest radiograph obtained within 48 hours before the elevated score were included. Five computer vision architectures (VGG16, DenseNet121, Vision Transformer, ResNet50, and Inception V3) and 4 data augmentation methods (histogram normalization, random flip, random Gaussian noise, and random rotate) were compared using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) for predicting clinical deterioration (ie, ICU transfer or ward death in the following 24 hours).

Results: The study included 21,817 patient admissions, of which 1655 (7.6%) experienced clinical deterioration. The DenseNet121 model pretrained on chest radiograph datasets with histogram normalization and random Gaussian noise augmentation had the highest discrimination (AUROC 0.734 and AUPRC 0.414), while the Vision Transformer with 24 transformer blocks and random rotate augmentation had the lowest discrimination (AUROC 0.598).

Conclusions: The study shows the potential of chest radiographs in deep learning models for predicting clinical deterioration. The DenseNet121 architecture pretrained on chest radiographs performed better than the other architectures in most experiments, and adding histogram normalization with random Gaussian noise augmentation may enhance the performance of DenseNet121 and pretrained VGG16 architectures.
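As a sketch of the best-performing configuration's ingredients, the following assembles DenseNet121 with equalization and random Gaussian noise augmentations in torchvision. ImageNet weights and histogram equalization are stand-ins, since the paper's chest radiograph pretraining weights and exact histogram normalization step are not available in torchvision.

```python
import torch
from torchvision import models, transforms

def add_gaussian_noise(x: torch.Tensor, std: float = 0.01) -> torch.Tensor:
    """Random Gaussian noise augmentation applied to the image tensor."""
    return x + torch.randn_like(x) * std

train_tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomEqualize(p=1.0),      # equalization as a histogram-normalization stand-in
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),
])

# ImageNet weights only so the sketch runs; the study pretrained on radiographs.
model = models.densenet121(weights="DEFAULT")
model.classifier = torch.nn.Linear(model.classifier.in_features, 1)  # deterioration logit
```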
Correction: "Prompt Engineering an Informational Chatbot for Education on Mental Health Using a Multiagent Approach for Enhanced Compliance With Prompt Instructions: Algorithm Development and Validation"
Per Niklas Waaler, Musarrat Hussain, Igor Molchanov, Lars Ailo Bongo, Brita Elvevåg
JMIR AI. 2025;4:e75191. Published April 10, 2025. doi:10.2196/75191. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12022528/pdf/

[This corrects the article DOI: 10.2196/69820.]
Striking a Balance: Innovation, Equity, and Consistency in AI Health Technologies
Eric Perakslis, Kimberly Nolen, Ethan Fricklas, Tracy Tubb
JMIR AI. 2025;4:e57421. Published April 7, 2025. doi:10.2196/57421. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12223681/pdf/

With the explosion of innovation driven by generative and traditional artificial intelligence (AI) comes the necessity to understand and regulate products that often defy current regulatory classification. Tradition, and a lack of regulatory expediency, lead to force-fitting novel innovations into preexisting product classifications or into the essentially unregulated domains of wellness or consumer electronics. Further, regulatory requirements, levels of risk tolerance, and capabilities vary greatly across the spectrum of technology innovators. For example, currently unregulated information and consumer electronics suppliers set their own editorial and communication standards without extensive federal regulation, whereas biopharma companies are held to a higher standard in the same space, given direct-to-consumer regulations such as the Sunshine Act (also known as Open Payments), the federal Anti-Kickback Statute, the federal False Claims Act, and others. Clear and well-defined regulations not only reduce ambiguity but also facilitate scale, underscoring the importance of regulatory clarity in fostering innovation and growth. To keep highly regulated industries like health care and biopharma from being discouraged from developing AI to improve patient care, a specialized framework is needed for establishing regulatory evidence for AI-based medical solutions. In this paper, we review the current regulatory environment in light of current innovations and the preexisting legal and regulatory responsibilities of the biopharma industry, propose a novel, hybridized approach for the assessment of novel AI-based patient solutions, and elaborate on the proposed concepts via case studies. This paper explores the challenges posed by the current regulatory environment, emphasizing the need for a specialized framework for AI medical devices. By reviewing existing regulations and proposing a hybridized approach, we aim to ensure that the potential of AI in biopharmaceutical innovation is not hindered by uneven regulatory landscapes.