{"title":"Special Issue on Informatics Education: Exploring the Impact of GitHub Copilot on Health Informatics Education.","authors":"Sanja Avramovic,Ivan Avramovic,Janusz Wojtusiak","doi":"10.1055/a-2414-7790","DOIUrl":"https://doi.org/10.1055/a-2414-7790","url":null,"abstract":"BACKGROUND: The use of artificial intelligence-driven code completion tools, particularly the integration of GitHub Copilot with Visual Studio, has potential implications for Health Informatics education, especially for students learning SQL and Python. OBJECTIVES: This study aims to evaluate the effectiveness of these tools in solving or assisting with the solution of problems found in Health Informatics coursework, ranging from simple to complex. METHODS: The study assesses the performance of GitHub Copilot in generating code for Health Informatics coding assignments from graduate classes, with a focus on the impact of detailed explanations on the tool's effectiveness. RESULTS: Findings reveal that GitHub Copilot can generate correct code for straightforward problems. The correctness and effectiveness of solutions decrease with problem complexity, and the tool struggles with the most challenging problems, although performance on complex problems improves with more detailed explanations. CONCLUSIONS: The study underscores the relevance of these tools to programming in Health Informatics education but also highlights the need for critical evaluation by students.
It concludes with a call for educators to adapt swiftly to this rapidly evolving technology.","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142265942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing PRISM: A Pragmatic Institutional Survey and Bench Marking Tool to Measure Digital Research Maturity of Cancer Centers","authors":"Carlos Berenguer Albiñana, Matteo Pallocca, Hayley Fenton, Will Sopwith, Charlie Van Eden, Olof Akre, Annika Auranen, François Bocquet, Marina Borges, Emiliano Calvo, John Corkett, Serena Di Cosimo, Nicola Gentili, Julien Guérin, Sissel Jor, Tomas Kazda, Alenka Kolar, Tim Kuschel, Maria Julia Lostes, Chiara Paratore, Paolo Pedrazzoli, Marko Petrovic, Jarno Raid, Miriam Roche, Christoph Schatz, Joelle Thonnard, Giovanni Tonon, Alberto Traverso, Andrea Wolf, Ahmed H. Zedan, Piers Mahon","doi":"10.1055/s-0044-1788331","DOIUrl":"https://doi.org/10.1055/s-0044-1788331","url":null,"abstract":"<p><b>Background</b> Multicenter precision oncology real-world evidence requires a substantial long-term investment by hospitals to prepare their data and align on common Clinical Research processes and medical definitions. Our team has developed a self-assessment framework to support hospitals and hospital networks to measure their digital maturity and better plan and coordinate those investments. From that framework, we developed PRISM for Cancer Outcomes: <b>PR</b>agmatic <b>I</b>nstitutional <b>S</b>urvey and bench<b>M</b>arking.</p> <p><b>Objectives</b> The primary objective was to develop PRISM as a tool for self-assessment of digital maturity in oncology hospitals and research networks; a secondary objective was to create an initial benchmarking cohort of >25 hospitals using the tool as input for future development.</p> <p><b>Methods</b> PRISM is a 25-question semiquantitative self-assessment survey developed iteratively from expert knowledge in oncology real-world study delivery. It covers four digital maturity dimensions: (1) Precision oncology, (2) Clinical digital data, (3) Routine outcomes, and (4) Information governance and delivery.
These reflect the four main data types and critical enablers for precision oncology research from routine electronic health records.</p> <p><b>Results</b> During piloting with 26 hospitals from 19 European countries, PRISM was found to be easy to use and its semiquantitative questions to be understood in a wide diversity of hospitals. Results within the initial benchmarking cohort aligned well with internal perspectives. We found statistically significant differences in digital maturity, with Precision oncology being the most mature dimension, and Information governance and delivery the least mature.</p> <p><b>Conclusion</b> PRISM is a light footprint benchmarking tool to support the planning of large-scale real-world research networks. It can be used to (i) help an individual hospital identify areas most in need of investment and improvement, (ii) help a network of hospitals identify sources of best practice and expertise, and (iii) help research networks plan research. With further testing, policymakers could use PRISM to better plan digital investments around the Cancer Mission and European Digital Health Space.</p> ","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Issue on Teaching and Training Future Health Informaticians: Partnering with Students to Develop a Capstone for a Graduate Health Informatics Program.","authors":"Rita Jezrawi,Stephanie Zahorka Derka,Elizabeth Warnick,Jasmine Foley,Vritti Patel,Neethu Pavithran,Thérèse Bernier,Nicole Wagner,Neil G Barr,Vincent Maccio,Margaret Leyland,Cynthia Lokker","doi":"10.1055/a-2412-3535","DOIUrl":"https://doi.org/10.1055/a-2412-3535","url":null,"abstract":"OBJECTIVE: To assess the desirability, feasibility, and sustainability of integrating a project-based capstone course with the course-based curriculum of an interdisciplinary MSc health informatics program, guided by a student-partnered steering committee and a student-centered approach. METHODS: We conducted an online cross-sectional survey (n=87) and three semi-structured focus groups (n=18) of health informatics students and alumni. Survey data were analyzed descriptively. Focus groups were audio-recorded, transcribed verbatim, and then analyzed using a general inductive and classic analysis approach. RESULTS: Most students were supportive of including a capstone project but desired an option to work independently or within a group. Students perceived several benefits to capstone courses but expressed concern over challenges in capstone implementation, evaluation, and the management of group processes. Themes identified were: 1) professional development, identity, and career advancement; 2) emulating the real world and learning beyond the classroom; 3) embracing new, full-circle learning; 4) anticipated course structure, delivery, and preparation; 5) balancing student choice, interests, and priorities; and 6) concerns over group dynamics, limitations, and support. CONCLUSIONS: This study demonstrates the value of having students as partners at each stage in the process, from methods conception to course curriculum design.
With the steering committee and the curriculum developer, we codeveloped a student-centered course that integrates foundational digital health-related project knowledge acquisition with an inquiry-based project that can be completed independently or in small groups. This study demonstrates the potential benefits and challenges that health informatics educators may consider when (re)designing capstone courses.","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Suicide Risk Prediction Models with Temporal Clinical Note Features.","authors":"Kevin Krause,Sharon Davis,Zhijun Yin,Katherine Schafer,Trent Rosenbloom,Colin Walsh","doi":"10.1055/a-2411-5796","DOIUrl":"https://doi.org/10.1055/a-2411-5796","url":null,"abstract":"OBJECTIVE: The objective of this study was to investigate the impact of enhancing a structured-data-based suicide attempt risk prediction model with temporal Concept Unique Identifiers (CUIs) derived from clinical notes. We aimed to examine how different temporal schemes, model types, and prediction ranges influenced the model's predictive performance. This research sought to improve our understanding of how the integration of temporal information and clinical variable transformation could enhance model predictions. MATERIALS AND METHODS: We identified modeling targets using diagnostic codes for suicide attempts within 30, 90, or 365 days following a temporally grouped visit cluster. Structured data included medications, diagnoses, procedures, and demographics, while unstructured data consisted of terms extracted with regular expressions from clinical notes. We compared models trained only on structured data (controls) to hybrid models trained on both structured and unstructured data. We used two temporalization schemes for clinical notes: fixed 90-day windows and flexible epochs. We trained and assessed random forests and hybrid LSTM neural networks using AUPRC and AUROC, with additional evaluation of sensitivity and PPV at 95% specificity. RESULTS: The training set included 2,364,183 visit clusters with 2,009 30-day suicide attempts, and the testing set contained 471,936 visit clusters with 480 suicide attempts. Models trained with temporal CUIs outperformed those trained with only structured data. The window-temporalized LSTM model achieved the highest AUPRC (0.056 ± 0.013) for the 30-day prediction range.
Hybrid models generally showed better performance compared to controls across most metrics. DISCUSSION AND CONCLUSION: This study demonstrated that incorporating EHR-derived clinical note features enhanced suicide attempt risk prediction models, particularly with window-temporalized LSTM models. Our results underscored the critical value of unstructured data in suicidality prediction, aligning with previous findings. Future research should focus on integrating more sophisticated methods to continue improving prediction accuracy, which will enhance the effectiveness of future interventions.","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
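The abstract above reports sensitivity and PPV at a fixed 95% specificity alongside AUPRC and AUROC. As a hedged illustration of what that metric involves (a minimal pure-Python sketch, not the authors' code; the function name and toy scores are invented for this example), one can scan candidate decision thresholds and keep the most sensitive one that still satisfies the specificity floor:

```python
def sensitivity_ppv_at_specificity(scores, labels, min_specificity=0.95):
    """Among thresholds meeting the specificity floor, return the
    (sensitivity, ppv, threshold) triple with the highest sensitivity.
    labels: 1 = positive outcome, 0 = negative."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    best = (0.0, 0.0, None)
    for t in sorted(set(scores), reverse=True):
        # classify score >= t as positive
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)
        specificity = (n_neg - fp) / n_neg
        if specificity < min_specificity:
            continue
        sensitivity = tp / n_pos
        ppv = tp / (tp + fp) if tp + fp else 0.0
        if sensitivity > best[0]:
            best = (sensitivity, ppv, t)
    return best
```

Note that in an imbalanced setting like the one reported (≈480 attempts among ≈472,000 visit clusters), even a 95% specificity floor admits many false positives, which is one reason such studies emphasize AUPRC over accuracy.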
{"title":"Special Issue on Informatics Education: Teaching Data Science through an Interactive, Hands-On Workshop with Clinically-Relevant Case Studies.","authors":"Alvin Dean Jeffery, Patricia Sengstack","doi":"10.1055/a-2407-1272","DOIUrl":"https://doi.org/10.1055/a-2407-1272","url":null,"abstract":"<p><strong>Background: </strong>In this case report, we describe the development of an innovative workshop to bridge the gap in data science education for practicing clinicians (and particularly nurses). In the workshop, we emphasize the core concepts of machine learning and predictive modeling to increase understanding among clinicians.</p><p><strong>Objective: </strong>Addressing healthcare providers' limited exposure to leveraging and critiquing data science methods, this interactive workshop aims to provide clinicians with foundational knowledge in data science, enabling them to contribute effectively to teams focused on improving care quality.</p><p><strong>Methods: </strong>The workshop focuses on meaningful topics for clinicians, such as model performance evaluation, and introduces machine learning through hands-on exercises using free, interactive Python notebooks. Clinical case studies on sepsis recognition and opioid overdose death provide relatable contexts for applying data science concepts.</p><p><strong>Results: </strong>Positive feedback from over 300 participants across various settings highlights the workshop's effectiveness in making complex topics accessible to clinicians.</p><p><strong>Conclusions: </strong>Our approach prioritizes engaging content delivery and practical application over extensive programming instruction, aligning with adult learning principles.
This initiative underscores the importance of equipping clinicians with data science knowledge to navigate today's data-driven healthcare landscape, offering a template for integrating data science education into healthcare informatics programs or continuing professional development.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Special Issue on Informatics Education: Integrating Diversity, Equity, Inclusion, and Accessibility into a Data Storytelling Model for Health Informatics Education.","authors":"Grace Gao, Christie Martin, Alvin Dean Jeffery","doi":"10.1055/a-2407-1329","DOIUrl":"https://doi.org/10.1055/a-2407-1329","url":null,"abstract":"<p><strong>Background: </strong>Health informatics education is pivotal in integrating diversity, equity, inclusion, and accessibility (DEIA) principles into curricula and leveraging data with equity considerations. Integrating clinically driven data with other datasets is crucial to a comprehensive understanding of patient care demographics, experiences, and outcomes to create equity-minded data storytelling. Publicly available Healthy People 2030 (HP2030) resources complement academic EHRs, supporting tailored learning activities in informatics education to enhance educational utility through a DEIA lens.</p><p><strong>Objectives: </strong>This case report describes the expansion of an existing DEI checklist to an updated DEIA checklist for preparing future informaticians to collect and critically evaluate DEIA features using this checklist in creating equity-minded data storytelling.</p><p><strong>Methods: </strong>An equity-minded data storytelling model and the HP2030 framework were utilized to develop the DEIA checklist. We employed an informal cognitive walkthrough to expand the DEIA checklist and evaluate the DEIA measures or characteristics within datasets from the 5 HP2030 social determinants of health (SDOH) topics using this checklist.</p><p><strong>Results: </strong>We reviewed 76 available SDOH-related datasets and added 6 measures to \"demographics\" and 7 to \"skills, abilities, & accessibility\" of the DEIA checklist.
Our evaluation of the DEIA checklist verified HP2030's inclusion of all measures, except \"religions/beliefs.\" All DEIA measures were linked to equity and accessibility, 1 to inclusion, along with 3 characteristics comprising the category \"language\" and 6 characteristics comprising the category \"images.\"</p><p><strong>Conclusion: </strong>Results highlighted the accessibility and comprehensiveness of HP2030 demographic data resources, considering SDOH factors and promoting inclusive data representation to address health disparities. The DEIA checklist provides a structured tool for facilitating unbiased data collection and visualization of SDOH-related data in data storytelling through an equity-informed lens. Integrating equity-minded data storytelling with frameworks like HP2030 enriches health informatics education, broadens students' understanding of health disparities, and supports evidence-based interventions for improved health outcomes.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Special Issue on Informatics Education: ChatGPT Performs Worse on USMLE-Style Ethics Questions Compared to Medical Knowledge Questions.","authors":"Tessa Louise Danehy, Jessica Hecht, Sabrina Kentis, Clyde Schechter, Sunit Jariwala","doi":"10.1055/a-2405-0138","DOIUrl":"https://doi.org/10.1055/a-2405-0138","url":null,"abstract":"<p><strong>Objectives: </strong>The main objective of this study is to evaluate the ability of the Large Language Model ChatGPT to accurately answer USMLE board-style medical ethics questions compared to medical knowledge-based questions. This study has the additional objectives of comparing the overall accuracy of GPT-3.5 to GPT-4 and assessing the variability of responses given by each version.</p><p><strong>Materials and methods: </strong>Using AMBOSS, a third-party USMLE Step Exam test prep service, we selected one group of 27 medical ethics questions and a second group of 27 medical knowledge questions matched on question difficulty for medical students. We ran 30 trials asking these questions on GPT-3.5 and GPT-4 and recorded the output. A random-effects linear probability regression model evaluated accuracy, and a Shannon entropy calculation evaluated response variation.</p><p><strong>Results: </strong>Both versions of ChatGPT demonstrated worse performance on medical ethics questions compared to medical knowledge questions. GPT-4 performed 18 percentage points (P < 0.05) worse on medical ethics questions than on medical knowledge questions, and GPT-3.5 performed 7 percentage points (P = 0.41) worse. GPT-4 outperformed GPT-3.5 by 22 percentage points (P < 0.001) on medical ethics and 33 percentage points (P < 0.001) on medical knowledge.
GPT-4 also exhibited an overall lower Shannon entropy for medical ethics and medical knowledge questions (0.21 and 0.11, respectively) than GPT-3.5 (0.59 and 0.55), which indicates lower variability in responses.</p><p><strong>Conclusion: </strong>Both versions of ChatGPT performed more poorly on medical ethics questions compared to medical knowledge questions. GPT-4 significantly outperformed GPT-3.5 on overall accuracy and exhibited significantly lower response variability in answer choices. This underscores the need for ongoing assessment of ChatGPT versions for medical education.</p><p><strong>Key words: </strong>ChatGPT, Large Language Model, Artificial Intelligence, Medical Education, USMLE, Ethics.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
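The Shannon entropy figures quoted in the record above measure answer-choice variability across the 30 repeated trials. A minimal sketch of that calculation (illustrative pure Python; the function name is invented and this is not the authors' analysis code):

```python
from collections import Counter
from math import log2

def answer_entropy(answers):
    """Shannon entropy in bits of a list of repeated answer choices.
    0.0 means the same answer was given on every trial; higher values
    mean the answer varied more across trials."""
    n = len(answers)
    return -sum((c / n) * log2(c / n) for c in Counter(answers).values())
```

For intuition: 30 trials split 28/2 between two choices yield about 0.35 bits, while an even four-way split yields the 2-bit maximum for four options, so low reported values correspond to near-deterministic answering.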
{"title":"Realizing the Full Potential of Clinical Decision Support: Translating Usability Testing into Routine Practice in Healthcare Operations.","authors":"Swaminathan Kandaswamy, Herb Williams, Sarah A Thompson, Thomas Dawson, Naveen Muthu, Evan Orenstein","doi":"10.1055/a-2404-2129","DOIUrl":"https://doi.org/10.1055/a-2404-2129","url":null,"abstract":"<p><strong>Background: </strong>Clinical Decision Support (CDS) tools have a mixed record of effectiveness, often due to inadequate alignment with clinical workflows and poor usability. While there is consensus that usability testing methods address these issues, in practice usability testing is generally applied only to selected projects (such as funded research studies). There is a critical need for CDS operations to apply usability testing to all CDS implementations.</p><p><strong>Objectives: </strong>In this State of the Art / Best Practice paper, we share challenges with scaling usability testing in healthcare operations, along with alternative methods and CDS governance structures that enable usability testing as a routine practice.</p><p><strong>Methods: </strong>We consolidate our experience and the results of applying guerilla in-situ usability testing to over 20 projects in a 1-year period into the proposed solution.</p><p><strong>Results: </strong>We demonstrate the feasibility of adopting \"guerilla in-situ usability testing\" in operations and its effectiveness in incorporating user feedback and improving design.</p><p><strong>Conclusion: </strong>Although some methodological rigor was relaxed to accommodate operational speed, the benefits outweighed the limitations.
Broader adoption of usability testing may transform CDS implementation and improve health outcomes.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nudging Towards Sleep-Friendly Hospitalizations: A Multifaceted Approach on Reducing Unnecessary Overnight Interventions.","authors":"Sullafa Kadura, Lauren Eisner, Samia Lopa, Alexander Poulakis, Hannah Mesmer, Nicole Willnow, Wilfred Pigeon","doi":"10.1055/a-2404-2344","DOIUrl":"https://doi.org/10.1055/a-2404-2344","url":null,"abstract":"<p><strong>Background: </strong>Choice architecture refers to the design of decision environments, which can influence healthcare decision-making. Nudges are subtle adjustments in these environments that guide decisions toward desired outcomes. For example, Computerized Provider Order Entry (CPOE) within Electronic Health Records (EHR) recommends frequencies for interventions such as nursing assessments and medication administrations, but these can link to around-the-clock schedules without clinical necessity.</p><p><strong>Objective: </strong>This study aimed to evaluate an intervention to promote sleep-friendly practices by optimizing choice architecture and employing targeted nudges on inpatient order frequencies.</p><p><strong>Methods: </strong>We employed a quasi-experimental interrupted time series analysis of a multifaceted, multiphase intervention to reduce overnight interventions in a hospital system. Our intervention featured EHR modifications to optimize the scheduling of vital sign checks, neurological checks, and medication administrations. Additionally, we used targeted secure messaging reminders and education on an inpatient neurology unit (INU) to supplement the initiative.</p><p><strong>Results: </strong>Significant increases in sleep-friendly medication orders were observed at the academic medical center (AMC) and community hospital affiliate (CHA), particularly for acetaminophen and heparin at the AMC. This led to a reduction in overnight medication administrations, with the most substantial decrease observed with heparin at all locations (CHA: 18%, AMC: 10%, INU: 10%, p<0.05). 
Sleep-friendly vital sign orders increased significantly at all sites (AMC: 6.7%, CHA: 4.3%, INU: 14%, p<0.05), and sleep-friendly neuro check orders increased significantly at the AMC (8.1%, p<0.05). There was also a significant reduction in overnight neurological checks at the AMC.</p><p><strong>Discussion: </strong>Tailoring EHR modifications and employing multifaceted nudging strategies emerged as promising approaches for reducing unnecessary overnight interventions. The observed shifts in sleep-friendly ordering translated into decreases in overnight interventions.</p><p><strong>Conclusion: </strong>Multifaceted nudges can effectively influence clinician decision-making and patient care. The varied impacts across nudge types and settings emphasize the importance of thoughtful nudge design and understanding local workflows.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
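The study above relies on a quasi-experimental interrupted time series analysis. As a hedged sketch of the core model (a segmented OLS fit in pure Python under simplified assumptions — equally spaced time points, a single intervention, no autocorrelation or seasonality adjustment; the function name and synthetic series are illustrative, not the authors' analysis), the outcome is regressed on a baseline trend plus post-intervention level-change and slope-change terms:

```python
def fit_interrupted_time_series(y, t0):
    """Fit y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t by ordinary
    least squares, where post_t = 1 for t >= t0. Returns [b0, b1, b2, b3]:
    baseline level, baseline trend, level change, trend change."""
    n, k = len(y), 4
    X = [[1.0, float(t), float(t >= t0), (t - t0) * float(t >= t0)]
         for t in range(n)]
    # normal equations A b = c with A = X'X, c = X'y
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    c = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for q in range(col, k):
                A[r][q] -= f * A[col][q]
            c[r] -= f * c[col]
    # back-substitution
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (c[r] - sum(A[r][q] * b[q] for q in range(r + 1, k))) / A[r][r]
    return b
```

A negative level-change coefficient b2 would correspond to the kind of post-intervention drop in overnight interventions the study reports.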
{"title":"\"It attracts your eyes and brain\": Refining visualizations for shared decision-making with heart failure patients.","authors":"Sabrina Mangal, Maryam Hyder, Kate Zarzuela, William McDonald, Ruth M Masterson Creber, Ian M Kronish, Stefan Konigorski, Mathew S Maurer, Monika M Safford, Mark S Lachs, Parag Goyal","doi":"10.1055/a-2402-5832","DOIUrl":"10.1055/a-2402-5832","url":null,"abstract":"<p><strong>Background: </strong>N-of-1 trials have emerged as a personalized approach to patient-centered care, where patients can compare evidence-based treatments using their own data. However, little is known about optimal methods to present individual-level data from medication-related N-of-1 trials to patients to promote decision-making.</p><p><strong>Objectives: </strong>We conducted qualitative interviews with patients with heart failure with preserved ejection fraction (HFpEF) undergoing N-of-1 trials to iterate, refine, and optimize a patient-facing data visualization tool for displaying results of N-of-1 medication trials. The goal of optimizing this tool was to promote patients' understanding of their individual health information, and to ultimately facilitate shared decision-making about continuing or discontinuing their medication.</p><p><strong>Methods: </strong>We conducted 32 semi-structured qualitative interviews with 9 participants over the course of their participation in N-of-1 trials. The N-of-1 trials were conducted to facilitate a comparison of continuing versus discontinuing a beta-blocker. Interviews were conducted in-person or over the phone after each treatment period to evaluate participant perspectives on a data visualization tool prototype. Data were coded using directed content analysis by two independent reviewers and included a third reviewer to reach consensus when needed. 
Major themes were extracted and iteratively incorporated into the patient-facing data visualization tool.</p><p><strong>Results: </strong>Nine participants provided feedback on how their data was displayed in the visualization tool. After qualitative analysis, three major themes emerged that informed our final interface. Participants preferred: 1) clearly stated individual symptom scores, 2) a reference image with labels to guide their interpretation of symptom information, and 3) qualitative language over numbers alone conveying the meaning of changes in their scores (e.g., better, worse).</p><p><strong>Conclusions: </strong>Feedback informed the design of a patient-facing data visualization tool for medication-related N-of-1 trials. Future work should include usability and comprehension testing of this interface on a larger scale.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142047367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}