JAMIA Open | Pub Date: 2025-04-15 | eCollection Date: 2025-04-01 | DOI: 10.1093/jamiaopen/ooaf028

Correction to: Leveraging deep learning to detect stance in Spanish tweets on COVID-19 vaccination.

[This corrects the article DOI: 10.1093/jamiaopen/ooaf007.]

JAMIA Open. 2025;8(2):ooaf028. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11999061/pdf/
JAMIA Open | Pub Date: 2025-04-10 | eCollection Date: 2025-04-01 | DOI: 10.1093/jamiaopen/ooaf026

A deep learning model for clinical outcome prediction using longitudinal inpatient electronic health records.
Ruichen Rong, Zifan Gu, Hongyin Lai, Tanna L Nelson, Tony Keller, Clark Walker, Kevin W Jin, Catherine Chen, Ann Marie Navar, Ferdinand Velasco, Eric D Peterson, Guanghua Xiao, Donghan M Yang, Yang Xie

Objectives: Recent advances in deep learning show significant potential for analyzing continuously monitored electronic health record (EHR) data for clinical outcome prediction. We aimed to develop a Transformer-based, Encounter-level Clinical Outcome (TECO) model to predict mortality in the intensive care unit (ICU) using inpatient EHR data.

Materials and Methods: The TECO model was developed using multiple baseline and time-dependent clinical variables from 2579 hospitalized COVID-19 patients to predict ICU mortality, and was validated externally in an acute respiratory distress syndrome cohort (n = 2799) and a sepsis cohort (n = 6622) from the Medical Information Mart for Intensive Care IV (MIMIC-IV). Model performance was evaluated based on the area under the receiver operating characteristic curve (AUC) and compared with the Epic Deterioration Index (EDI), random forest (RF), and extreme gradient boosting (XGBoost).

Results: In the COVID-19 development dataset, TECO achieved a higher AUC (0.89-0.97) across various time intervals than EDI (0.86-0.95), RF (0.87-0.96), and XGBoost (0.88-0.96). In the 2 MIMIC testing datasets (EDI not available), TECO yielded a higher AUC (0.65-0.77) than RF (0.59-0.75) and XGBoost (0.59-0.74). In addition, TECO was able to identify clinically interpretable features that were correlated with the outcome.

Discussion: The TECO model outperformed proprietary metrics and conventional machine learning models in predicting ICU mortality among patients with COVID-19, widespread inflammation, respiratory illness, and other organ failures.

Conclusion: The TECO model demonstrates a strong capability for predicting ICU mortality using continuous monitoring data. While further validation is needed, TECO has the potential to serve as a powerful early warning tool across various diseases in inpatient settings.

JAMIA Open. 2025;8(2):ooaf026. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11984207/pdf/
JAMIA Open | Pub Date: 2025-04-09 | eCollection Date: 2025-04-01 | DOI: 10.1093/jamiaopen/ooaf021

A proof-of-concept study for patient use of open notes with large language models.
Liz Salmi, Dana M Lewis, Jennifer L Clarke, Zhiyong Dong, Rudy Fischmann, Emily I McIntosh, Chethan R Sarabu, Catherine M DesRoches

Objectives: The use of large language models (LLMs) is growing for both clinicians and patients. While researchers and clinicians have explored LLMs to manage patient portal messages and reduce burnout, there is less documentation about how patients use these tools to understand clinical notes and inform decision-making. This proof-of-concept study examined the reliability and accuracy of LLMs in responding to patient queries based on an open visit note.

Materials and Methods: In a cross-sectional proof-of-concept study, 3 commercially available LLMs (ChatGPT 4o, Claude 3 Opus, Gemini 1.5) were evaluated using 4 distinct prompt series (Standard, Randomized, Persona, and Randomized Persona), each with multiple patient-designed questions, in response to a single neuro-oncology progress note. LLM responses were scored by the note author (a neuro-oncologist) and a patient who receives care from the note author, using an 8-criterion rubric that assessed Accuracy, Relevance, Clarity, Actionability, Empathy/Tone, Completeness, Evidence, and Consistency. Descriptive statistics were used to summarize the performance of each LLM across all prompts.

Results: Overall, the Standard and Persona-based prompt series yielded the best results across all criteria, regardless of LLM. ChatGPT 4o using Persona-based prompts scored highest in all categories. All LLMs scored low on the use of Evidence.

Discussion: This proof-of-concept study highlighted the potential for LLMs to assist patients in interpreting open notes. The most effective LLM responses were achieved by applying Persona-style prompts to a patient's question.

Conclusion: Optimizing LLMs for patient-driven queries, together with patient education and counseling around the use of LLMs, has the potential to enhance patients' use and understanding of their health information.

JAMIA Open. 2025;8(2):ooaf021. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11980777/pdf/
JAMIA Open | Pub Date: 2025-04-03 | eCollection Date: 2025-04-01 | DOI: 10.1093/jamiaopen/ooaf023

Using artificial intelligence to expedite and enhance plain language summary abstract writing of scientific content.
David McMinn, Tom Grant, Laura DeFord-Watts, Veronica Porkess, Margarita Lens, Christopher Rapier, Wilson Q Joe, Timothy A Becker, Walter Bender

Objective: To assess the capacity of a bespoke artificial intelligence (AI) process to help medical writers efficiently generate quality plain language summary abstracts (PLSAs).

Materials and Methods: Three independent studies were conducted. In Studies 1 and 3, original scientific abstracts (OSAs; n = 48, n = 2) and corresponding PLSAs written by medical writers versus bespoke AI were assessed using standard readability metrics. Study 2 compared the time and effort of medical writers (n = 10) drafting PLSAs starting from an OSA (n = 6) versus the output of 1 bespoke AI (n = 6) and 1 non-bespoke AI (n = 6) process. These PLSAs (n = 72) were assessed by subject matter experts (SMEs; n = 3) for accuracy and by physicians (n = 7) for patient suitability. Lastly, in Study 3, medical writers (n = 22) and patients/patient advocates (n = 5) compared the quality of medical writer and bespoke AI-generated PLSAs.

Results: In Study 1, bespoke AI PLSAs were easier to read than medical writer PLSAs across all readability metrics (P < .01). In Study 2, bespoke AI output saved medical writers more than 40% of the time for PLSA creation and required less effort than unassisted writing. SME-assessed quality was higher for AI-assisted PLSAs, and physicians preferred bespoke AI-generated outputs for patient use. In Study 3, bespoke AI PLSAs were more readable and rated higher in quality than medical writer PLSAs.

Discussion: The bespoke AI process may enhance access to health information by helping medical writers produce PLSAs of scientific content that are fit for purpose.

Conclusion: The bespoke AI process can more efficiently create better quality, more readable first-draft PLSAs than medical writer-generated PLSAs.

JAMIA Open. 2025;8(2):ooaf023. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11967854/pdf/
{"title":"Computer-assisted prescription of erythropoiesis-stimulating agents in patients undergoing maintenance hemodialysis: a randomized control trial for artificial intelligence model selection.","authors":"Lee-Moay Lim, Ming-Yen Lin, Chan Hsu, Chantung Ku, Yi-Pei Chen, Yihuang Kang, Yi-Wen Chiu","doi":"10.1093/jamiaopen/ooaf020","DOIUrl":"10.1093/jamiaopen/ooaf020","url":null,"abstract":"<p><strong>Objective: </strong>Machine learning (ML) algorithms are promising tools for managing anemia in hemodialysis (HD) patients. However, their efficacy in predicting erythropoiesis-stimulating agents (ESAs) doses remains uncertain. This study aimed to evaluate the effectiveness of a contemporary artificial intelligence (AI) model in prescribing ESA doses compared to physicians for HD patients.</p><p><strong>Materials and methods: </strong>This double-blinded control trial randomized participants into traditional doctor (Dr) and AI groups. In the Dr group, doses of ESA were determined by following clinical guideline recommendations, while in the AI group, they were predicted by the developed models named Random effects (REEM) trees, Mixed-effect random forest (MERF), Long short-term memory (LSTM) networks-I, and LSTM-II. The primary outcome was the capability to maintain patients' hemoglobin (Hb) value near 11 g/dL with a margin of 0.25 g/dL after treating the suggested ESA, with the secondary outcome being Hb value between 10 and 12 g/dL.</p><p><strong>Results: </strong>A total of 124 participants were enrolled, with 104 completing the study. The mean Hb values were 10.8 and 10.9 g/dL in the AI and Dr groups, respectively, with 69.7% and 73.5% of participants in the respective groups maintaining Hb levels between 10 and 12 g/dL. Only the REEM trees model passed the non-inferiority test for the primary outcome with a margin of 0.25 g/dL and the secondary outcome with a margin of 15%. There was no difference in severe adverse events between the 2 groups.</p><p><strong>Conclusion: </strong>The REEM trees AI model demonstrated non-inferiority to physicians in prescribing ESA doses for HD patients, maintaining Hb levels within the therapeutic target.</p><p><strong>Clinicaltrialsgov identifier: </strong>NCT04185519.</p>","PeriodicalId":36278,"journal":{"name":"JAMIA Open","volume":"8 2","pages":"ooaf020"},"PeriodicalIF":2.5,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950923/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143755002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
JAMIA Open | Pub Date: 2025-03-26 | eCollection Date: 2025-04-01 | DOI: 10.1093/jamiaopen/ooaf018

Leveraging natural language processing and machine learning to characterize psychological stress and life meaning and purpose in pediatric cancer survivors: a preliminary validation study.
Jin-Ah Sim, Xiaolei Huang, Rachel T Webster, Kumar Srivastava, Kirsten K Ness, Melissa M Hudson, Justin N Baker, I-Chan Huang

Objective: To determine whether natural language processing (NLP) and machine learning (ML) techniques accurately identify interview-based psychological stress and meaning/purpose data in child and adolescent cancer survivors.

Materials and Methods: Interviews were conducted with 51 survivors (aged 8-17.9 years; ≥5 years post-therapy) from St Jude Children's Research Hospital. Two content experts coded 244 and 513 semantic units, focusing on attributes of psychological stress (anger, controllability/manageability, fear/anxiety) and attributes of meaning/purpose (goal, optimism, purpose). The specific attributes extracted from the interviews by the content experts were designated as the gold standard. Two NLP/ML methods, Word2Vec with extreme gradient boosting (XGBoost) and Bidirectional Encoder Representations from Transformers Large (BERT-Large), were validated using accuracy, area under the receiver operating characteristic curve (AUROCC), and area under the precision-recall curve (AUPRC).

Results: BERT-Large demonstrated higher accuracy, AUROCC, and AUPRC than Word2Vec/XGBoost in identifying all attributes of psychological stress and meaning/purpose. BERT-Large significantly outperformed Word2Vec/XGBoost in characterizing all attributes (P < .05) except the purpose attribute of meaning/purpose.

Discussion: These findings suggest that AI tools can help healthcare providers efficiently assess the emotional well-being of childhood cancer survivors, supporting future clinical interventions.

Conclusions: NLP/ML effectively identifies interview-based data for child and adolescent cancer survivors.

JAMIA Open. 2025;8(2):ooaf018. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11936487/pdf/
JAMIA Open | Pub Date: 2025-03-12 | eCollection Date: 2025-04-01 | DOI: 10.1093/jamiaopen/ooaf017

Standards and infrastructure for multisite deployment of the research participant perception survey.
Alex C Cheng, Eva Bascompte Moragas, Ellis Thomas, Lindsay O'Neal, Paul A Harris, Ranee Chatterjee, James Goodrich, Jamie Roberts, Sameer Cheema, Sierra Lindo, Daniel E Ford, Liz Martinez, Scott Carey, Ann Dozier, Carrie Dykes, Pavithra Panjala, Lynne Wagenknecht, Joseph E Andrews, Janet Shuping, Derick Burgin, Nancy S Green, Siddiq Mohammed, Sana Khoury-Shakour, Lisa Connally, Cameron Coffran, Adam Qureshi, Natalie Schlesinger, Rhonda G Kost

Objectives: To develop and disseminate a technical framework for administering the Research Participant Perception Survey (RPPS) and aggregating data across institutions using REDCap.

Materials and Methods: Six RPPS Steering Committee (RSC) member institutions met bi-weekly to achieve consensus on survey sampling techniques, data standards, participant and study descriptor variables, and dashboard design.

Results: RSC members implemented the infrastructure to send the RPPS to participants and shared data to the Empowering the Participant Voice Consortium Database. Two pilot sites used the tools generated by the RSC to implement the RPPS.

Discussion: The RSC created a REDCap project setup file, an external module visual analytics dashboard, an English/Spanish language file, and an implementation guide.

Conclusion: The technical setup materials created by the RSC were effective in aiding new sites in implementing the RPPS and could help future sites adopt the RPPS to better understand participant experiences and improve research recruitment and retention.

JAMIA Open. 2025;8(2):ooaf017. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11901591/pdf/
JAMIA Open | Pub Date: 2025-02-28 | eCollection Date: 2025-02-01 | DOI: 10.1093/jamiaopen/ooaf006

Exploring beyond diagnoses in electronic health records to improve discovery: a review of the phenome-wide association study.
Nicholas C Wan, Monika E Grabowska, Vern Eric Kerchberger, Wei-Qi Wei

Objective: The phenome-wide association study (PheWAS) systematically examines the phenotypic spectrum extracted from electronic health records (EHRs) to uncover correlations between phenotypes and exposures. This review explores methodologies, highlights challenges, and outlines future directions for EHR-driven PheWAS.

Materials and Methods: We searched the PubMed database for articles spanning 2010 to 2023 and collected data regarding exposures, phenotypes, cohorts, terminologies, replication, and ancestry.

Results: Our search yielded 690 articles. After applying exclusion criteria, we identified 291 articles published between January 1, 2010, and December 31, 2023. A total of 162 (55.6%) articles defined phenomes using phecodes, indicating that research is reliant on the organization of billing codes. Moreover, 72.8% of articles used exposures consisting of genetic data, and the majority (69.4%) of PheWAS lacked replication analyses.

Discussion: The existing literature underscores the need for deeper phenotyping, the variability in PheWAS exposure variables, and the absence of replication in PheWAS. Current applications of PheWAS mainly focus on cardiovascular, metabolic, and endocrine phenotypes; thus, applications of PheWAS in uncommon diseases, which may lack structured data, remain largely understudied.

Conclusions: With modern EHRs, future PheWAS should extend beyond diagnosis codes and consider additional data, such as clinical notes or medications, to create comprehensive phenotype profiles that account for severity, temporality, risk, and ancestry. Furthermore, data interoperability initiatives may help mitigate the paucity of PheWAS replication analyses. With the growing availability of data in EHRs, PheWAS will remain a powerful tool in precision medicine.

JAMIA Open. 2025;8(1):ooaf006. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11879097/pdf/
JAMIA Open | Pub Date: 2025-02-26 | eCollection Date: 2025-02-01 | DOI: 10.1093/jamiaopen/ooaf011

Toward digital caregiving network interventions for children with medical complexity living in socioeconomically disadvantaged neighborhoods.
Nicole E Werner, Makenzie Morgen, Anna Jolliff, Madeline Kieren, Joanna Thomson, Scott Callahan, Neal deJong, Carolyn Foster, David Ming, Arielle Randolph, Christopher J Stille, Mary Ehlenbach, Barbara Katz, Ryan J Coller

Background: To be usable, useful, and sustainable for families of children with medically complex conditions (CMC), digital interventions must account for the complex sociotechnical context in which these families provide care. CMC experience higher neighborhood socioeconomic disadvantage than other child populations, which has associations with CMC health. Neighborhoods may influence the structure and function of the array of caregivers CMC depend upon (ie, the caregiving network).

Objective: To explore the structures/functions and barriers/facilitators of caregiving networks for CMC living in socioeconomically disadvantaged neighborhoods, to inform the design of digital network interventions.

Methods: We conducted 6 virtual focus groups with caregivers of CMC living in socioeconomically disadvantaged neighborhoods from 6 sites. Three groups included "primary caregivers" (parent/guardian), and 3 groups included "secondary caregivers" (eg, other family member, in-home nurse). We analyzed transcripts using thematic analysis.

Results: Primary (n = 18) and secondary (n = 9) caregivers were most often female (81%) and reported a mean (SD) caregiving network size of 3.9 (1.60). We identified 4 themes to inform digital network intervention design: (1) families vary in whether they prefer to be the locus of network communication, (2) external forces may override caregivers' communication preferences, (3) neighborhood assets influence caregiving network structure, and (4) unfilled or unreliably filled secondary caregiver roles create vulnerability and greater demands on the primary caregiver.

Discussion and Conclusion: Our results provide a foundation from which digital network interventions can be designed, highlighting that caregiving networks for CMC living in socioeconomically disadvantaged neighborhoods are influenced by family preferences, external forces, and neighborhood assets.

JAMIA Open. 2025;8(1):ooaf011. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11878567/pdf/
JAMIA Open | Pub Date: 2025-02-25 | eCollection Date: 2025-02-01 | DOI: 10.1093/jamiaopen/ooaf016

Transforming appeal decisions: machine learning triage for hospital admission denials.
Timothy Owolabi

Objective: To develop and validate a machine learning model that helps physician advisors efficiently identify hospital admission denials likely to be overturned on appeal.

Materials: Analysis of 2473 appealed hospital admission denials with known outcomes, split 90:10 for training and testing.

Methods: Six binary classifier models were trained and evaluated using accuracy, precision, recall, and F1 score.

Results: An elastic net logistic regression model was selected based on computational efficiency and optimal performance, with 84% accuracy, 84% precision, 98% recall, and an F1 score of 0.9.

Discussion: The predictive model addresses the risk of physician advisors accepting inappropriate denials due to biased perceptions of appeal success. Model implementation improved denial screening efficiency and was a key feature of a more successful appeal strategy.

Conclusions: By addressing data quality problems inherent to electronic health data and expanding the feature space, machine learning can be an effective tool for healthcare providers.

JAMIA Open. 2025;8(1):ooaf016. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11854074/pdf/