2024 MCBK North American chapter meeting—Lightning talk and demonstration abstracts

Abstract

POSTERS

DEMONSTRATIONS

Saketh Boddapati, University of Michigan College of Literature, Science, and the Arts

[email protected]

Yongqun “Oliver” He, University of Michigan Medical School

[email protected]

Healthcare providers learn continuously as a core part of their work. However, as the rate of knowledge production in biomedicine increases, better support for providers' continuous learning is needed. Tools for learning from clinical data are widely available in the form of clinical quality dashboards and feedback reports. However, these tools seem to be frequently unused.

Making clinical data useful as feedback for learning appears to be a key challenge for health systems. Feedback can include coaching, evaluation, and appreciation, but systems developed for performance improvement do not adequately recognize these purposes in the context of provider learning. Moreover, providers have different information needs, motivational orientations, and workplace cultures, all of which affect the usefulness of data as feedback.

To increase the usefulness of data as feedback, we developed a Precision Feedback Knowledge Base (PFKB) for a precision feedback system. PFKB contains knowledge about how feedback influences motivation, to enable the precision feedback system to compute a motivational potential score for possible feedback messages. PFKB has four primary knowledge components: (1) causal pathway models, (2) message templates, (3) performance measures, and (4) annotations of motivating information in clinical data. We also developed vignettes about 7 diverse provider personas to illustrate how the precision feedback system uses PFKB in the context of anesthesia care. This ongoing research includes a pilot study that has demonstrated the technical feasibility of the precision feedback system, in preparation for a trial of precision feedback in an anesthesia quality improvement consortium.

Bruce Bray, University of Utah, on behalf of the HL7 Learning Health Systems Work Group

[email protected]

Data is the lifeblood of computable biomedical knowledge (CBK) and must adhere to standards to achieve the interoperability needed to generate virtuous learning cycles within a learning health system (LHS). The HL7 Learning Health System Work Group (HL7 LHS WG) conducted a scoping review to compile an initial list of standards that can support the LHS across “quadrants” of a virtuous learning cycle: (1) knowledge to action, (2) action to data, (3) data to evidence, and (4) evidence to knowledge. We found that few standards explicitly refer to an overarching framework that aligns interoperability and data standards across the phases of the LHS. We will describe our initial work to identify relevant gaps and overlaps in standards in this environment. Future work should address standards coordination and pilot testing within an LHS framework. These efforts can enhance collaboration among communities such as MCBK and HL7 to promote standards-based computable knowledge mobilization.

Allen Flynn, University of Michigan, on behalf of the Knowledge Systems Lab, Department of Learning Health Sciences

[email protected]

The Knowledge Object (KO) is a modular, extensible digital object designed to allow computable biomedical knowledge (CBK) to be managed as a resource and implemented as a service. This poster describes how the KO model has evolved to better support the FAIR principles by:

Enabling multiple services to increase interoperability and reuse.

KOs were originally developed to run with an Activator, which loaded and deployed KOs on request, exposed the service, and routed responses as a RESTful API. While the intent was to facilitate low-friction KO implementation, the Activator could limit interoperability and the potential for reuse. Expanding the model to reduce reliance on the Activator and allow for multiple services means KOs can meet a wider range of stakeholder needs. Current work includes developing updated specifications and a reference implementation for activation.

Enabling multiple implementations of knowledge and services to increase interoperability and reuse. The legacy KO model included one implementation of both the CBK payload and the service to activate it. Engineering a KO to contain multiple implementations of CBK and services means KOs can meet a wider range of stakeholder needs. We are updating our model and engineering KOs capable of carrying multiple implementations and services.

Improving findability, accessibility, interoperability, and reuse through an updated model and metadata. Metadata for the legacy KO model primarily described the service. Using the new model, we are now developing standards-based, extensible Linked Data metadata to describe the KO, the knowledge it contains, and the services that “activate” that knowledge.
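As a rough illustration of this metadata work, the sketch below shows what Linked Data metadata for a KO might look like when a single KO carries multiple implementations and services. It is a minimal, hypothetical example expressed as JSON-LD using generic schema.org terms; the actual KO metadata model and vocabulary are richer and still being specified.

```python
import json

# Minimal, illustrative JSON-LD metadata for a Knowledge Object (KO).
# The KO name, identifiers, paths, and use of schema.org terms are assumptions
# for this sketch; the real KO metadata model is more detailed.
ko_metadata = {
    "@context": {"@vocab": "https://schema.org/"},
    "@id": "https://example.org/ko/bmi-risk-calculator/v1.0",   # hypothetical identifier
    "@type": "SoftwareSourceCode",
    "name": "BMI risk calculator",                              # hypothetical CBK payload
    "description": "Computable knowledge packaged with metadata, code, and service descriptions.",
    "hasPart": [  # multiple implementations of the same knowledge
        {"@type": "SoftwareSourceCode", "programmingLanguage": "Python", "codeRepository": "src/python/"},
        {"@type": "SoftwareSourceCode", "programmingLanguage": "JavaScript", "codeRepository": "src/js/"},
    ],
    "potentialAction": [  # multiple services that "activate" the knowledge
        {"@type": "Action", "name": "REST activation service", "target": "service.yaml"},
        {"@type": "Action", "name": "Command-line execution", "target": "cli.md"},
    ],
}

print(json.dumps(ko_metadata, indent=2))
```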

Nicole Gauthreaux, NORC at the University of Chicago

[email protected]

Courtney Zott, NORC at the University of Chicago

[email protected]

Prashila Dullabh, NORC at the University of Chicago

[email protected]

This poster is relevant to clinicians, health system leaders, informaticians, and researchers interested in driving future mobilization of computable knowledge for patient-centered clinical decision support (PC CDS). We conducted a cross-cutting synthesis of real-world PC CDS projects to identify the types of measures used, measurement challenges and limitations, and action steps to advance PC CDS measurement. We reviewed research products from 20 PC CDS projects funded by the Agency for Healthcare Research and Quality (AHRQ) to gather information on their studies, and we conducted key informant interviews with Principal Investigators of nine projects to gather their experiences and challenges with PC CDS measurement.

Findings from the synthesis revealed a considerable focus on measuring the effectiveness of the PC CDS, primarily by collecting patient and clinician perspectives on the usability and acceptability of the tool and observing patient health outcomes from the intervention. Many projects incorporated patient perspectives in their studies, yet there were more process measures (e.g., patient satisfaction with the design) than outcome measures (e.g., patient activation to manage their health due to the PC CDS). Few projects measured safety, the technical performance of the PC CDS technology, or the information it presented. Finally, equity measures rarely extended beyond descriptive analyses of participant socio-demographics. Key informants described other evaluation challenges related to patient recruitment, technical limitations, and imprecision in data collection specific to PC CDS interventions. These findings provide a basis for guiding future development of measures that promote the adoption and use of knowledge for patient-centered care.

Pawan Goyal, American College of Emergency Physicians

[email protected]

Data is driving the future of medicine. The COVID-19 pandemic demonstrated the critical importance of real-time insights into new and emerging health threats, as well as into health care trends and patterns of resource utilization. With the new Emergency Medicine Data Institute (EMDI), the American College of Emergency Physicians (ACEP) is rapidly moving emergency medicine to the forefront of data-driven quality and practice innovation. This new initiative is poised to become a central source of intelligence and knowledge generation across all emergency medicine stakeholders. Harnessing the power of information that physicians are already recording, ACEP is synthesizing and standardizing data across multiple billing and EHR environments, enabling new research, and pursuing national-level grants, all while enhancing value for emergency physicians, patients, and the broader health care community.

Indika Kahanda, PhD, University of North Florida

[email protected]

In the quest to mobilize computable biomedical knowledge, inconsistencies and contradictions in the biomedical literature pose a significant barrier. Given the exponential growth of scientific information, researchers often face the daunting task of detecting contradictory statements on crucial health topics. This work proposes to develop a full-fledged, trustworthy automated pipeline for explainable contradiction detection, which will integrate an Information Retrieval (IR) system backed by a local data store, predictive models, and an Explainable AI (XAI) component. Users can input queries on medicine and health topics, and the system will identify top documents and sentences through syntactic analysis and refine results via semantic examination for relevant research claims. These sentences are forwarded to predictive models backed by Large Language Models, which will classify each pair of claims as contradictory or not. The XAI component will output visual explanations based on these predictions. We have used ManConCorpus, a popular biomedical contradiction corpus on cardiovascular diseases, to develop and evaluate our predictive models. The preliminary results demonstrate that PubMedBERT, with an F1 score of 97%, outperforms BioBERT, Bioformer, and DistilBERT in classifying a given pair of sentences relevant to cardiovascular disease as contradictory or not. Further investigation is necessary to ensure that the models perform similarly across other health and medical topics. In the future, these predictive models will be combined with the aforementioned IR and XAI components to develop the prototype pipeline. This study has implications for medical and healthcare practitioners, researchers, students, systematic review authors, and the biomedical text-mining community.
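For illustration, the sketch below shows the general shape of transformer-based sentence-pair contradiction classification used in this kind of pipeline. The checkpoint name, labels, and example claims are assumptions for the sketch, not the authors' exact setup; in practice the model would first be fine-tuned on ManConCorpus (or a similar corpus) before its predictions are meaningful.

```python
# Minimal sketch of sentence-pair contradiction classification with a PubMedBERT-style
# encoder, assuming Hugging Face Transformers. The base checkpoint below has no trained
# classification head, so it must be fine-tuned (e.g., on ManConCorpus) before use.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

claim_a = "Beta-blockers reduce mortality in patients with chronic heart failure."
claim_b = "Beta-blocker therapy showed no survival benefit in chronic heart failure."

# Encode the two claims as one sequence pair (NLI-style) and score the pair.
inputs = tokenizer(claim_a, claim_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
label = "contradictory" if probs[1] > probs[0] else "not contradictory"
print(f"{label} (p = {probs.max().item():.2f})")
```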

Zach Landis-Lewis, University of Michigan, Department of Learning Health Sciences

[email protected]

Healthcare providers learn continuously as a core part of their work. However, as the rate of knowledge production in biomedicine increases, better support for providers' continuous learning is needed. Tools for learning from clinical data are widely available in the form of clinical quality dashboards and feedback reports. However, these tools seem to be frequently unused.

Making clinical data useful as feedback for learning appears to be a key challenge for health systems. Feedback can include coaching, evaluation, and appreciation, but systems developed for performance improvement do not adequately recognize these purposes in the context of provider learning. Moreover, providers have different information needs, motivational orientations, and workplace cultures, all of which affect the usefulness of data as feedback.

To increase the usefulness of data as feedback, we developed a Precision Feedback Knowledge Base (PFKB) for a precision feedback system. PFKB contains knowledge about how feedback influences motivation, to enable the precision feedback system to compute a motivational potential score for possible feedback messages. PFKB has four primary knowledge components: (1) causal pathway models, (2) message templates, (3) performance measures, and (4) annotations of motivating information in clinical data. We also developed vignettes about 7 diverse provider personas to illustrate how the precision feedback system uses PFKB in the context of anesthesia care. This ongoing research includes a pilot study that has demonstrated the technical feasibility of the precision feedback system, in preparation for a trial of precision feedback in an anesthesia quality improvement consortium.
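As a rough sketch of how a precision feedback system might use PFKB content, the example below scores candidate message templates against one recipient's data and preferences. All names, fields, thresholds, and weights here are hypothetical; the actual PFKB causal pathway models, message templates, and scoring logic are considerably richer.

```python
from dataclasses import dataclass

# Illustrative sketch only: template names, causal pathways, and preference weights
# below are hypothetical stand-ins for PFKB content.

@dataclass
class MessageTemplate:
    name: str            # e.g., "reached_top_quartile"
    causal_pathway: str  # motivational pathway the message relies on
    preconditions: list  # annotations that must hold in the recipient's data

@dataclass
class Recipient:
    motivating_info: set  # annotations detected in the recipient's performance data
    preferences: dict     # weights reflecting the recipient's motivational orientation

def motivational_potential(template: MessageTemplate, recipient: Recipient) -> float:
    """Score one candidate feedback message for one recipient."""
    if not all(p in recipient.motivating_info for p in template.preconditions):
        return 0.0  # message is not applicable to this recipient's data
    return recipient.preferences.get(template.causal_pathway, 0.5)

templates = [
    MessageTemplate("reached_top_quartile", "social_approach", ["peer_benchmark_exceeded"]),
    MessageTemplate("performance_slipping", "loss_aversion", ["negative_trend"]),
]
clinician = Recipient({"peer_benchmark_exceeded"}, {"social_approach": 0.9, "loss_aversion": 0.2})

best = max(templates, key=lambda t: motivational_potential(t, clinician))
print(best.name)  # -> reached_top_quartile
```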

Siddharth Limaye, Carle Illinois College of Medicine

[email protected]

Understanding causation is an essential element of medical education. While a plethora of research exists on the use of knowledge graphs for topics such as drug discovery and personalized medicine, there is relatively little work on (1) the use of these graphs for medical education and (2) the use of causal graphs for these purposes. Pathology Graphs is a proof-of-concept application that uses a typed database to represent the causal model underlying several physiologic and pathophysiologic processes in the human body. Effectively, Pathology Graphs is an interactive, digital revision of the concept map.

While graph databases are suitable for building concept maps and causal graphs, Pathology Graphs instead uses a typed database to implement a higher-order graph structure. This change allows Pathology Graphs to represent certain types of relationships (e.g., an enzyme modifying a reaction) that would otherwise not be simply expressible in a graph database. It also allows inference of the downstream effects of a change in the pathology graph, such as predicting that the reaction product will decrease if an enzyme inhibitor is added to the system. This capability could support further functionality, such as automatically generating multiple-choice questions and answers for students or allowing students to build differential diagnoses by examining all the potential causes of a finding. Remaining challenges for this application include data input, the user interface, and output visualization, since the goal of this project is to be accessible to those without prior programming knowledge.
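A minimal sketch of the kind of qualitative inference described above, assuming a toy edge-typed causal graph; node and edge names are hypothetical, and the actual typed-database representation in Pathology Graphs is richer than a plain signed graph.

```python
# Toy qualitative propagation over a signed causal graph (illustrative only).
from collections import defaultdict

edges = defaultdict(list)  # node -> [(effect_node, +1 for "increases" / -1 for "decreases")]

def add_edge(cause, effect, sign):
    edges[cause].append((effect, +1 if sign == "increases" else -1))

add_edge("enzyme_inhibitor", "enzyme_activity", "decreases")
add_edge("enzyme_activity", "reaction_rate", "increases")
add_edge("reaction_rate", "product_level", "increases")

def propagate(node, direction=+1, seen=None):
    """Infer the qualitative downstream effects of perturbing `node`."""
    seen = seen or {}
    for effect, sign in edges[node]:
        if effect not in seen:
            seen[effect] = direction * sign
            propagate(effect, direction * sign, seen)
    return seen

# Adding (increasing) an enzyme inhibitor should predict a decrease in product.
effects = propagate("enzyme_inhibitor", +1)
print("product_level:", "decreases" if effects["product_level"] < 0 else "increases")
```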

Aswini Misro, YouDiagnose Limited

[email protected]

Background—AI is being incorporated into healthcare by major tech companies, but public acceptance remains challenging. The study aims to understand resistance to AI despite its increasing accuracy and potential to improve patient waiting times.

Method—In partnership with the University of Hull and the Academic Health Science Network (AHSN) in the UK, a study was conducted involving 111 adult patients or carers. Nine did not respond, and 28 demonstrated poor digital literacy and were therefore excluded. An interactive user page was designed for the remaining users (n = 74) to engage with a medical chatbot. Participants were asked whether they would use an AI-powered medical chatbot or a nurse for triage at their nearest A&E. Unstructured, open-ended interviews were conducted to understand the participants' reasoning behind their answers.

Findings—Participants ranged in age from 21 to 74, with a slight female majority (40 female vs. 34 male). The majority, 79.8% (n = 59) of respondents, expressed comfort with nurse-led care, while only 20.2% (n = 15) showed readiness to interact with a medical chatbot. Notably, the 20–40 age group was most open to the idea of consulting a chatbot. The four primary objections were: 90.5% (n = 67) believed that the chatbot fails to justify its decisions, 59.5% (n = 44) doubted its accuracy, 78.4% (n = 58) felt that the chatbot was inflexible, and 79.7% (n = 59) found it unemotional and detached.

Conclusion—Effective treatment relies on trust, respect, and understanding, all crucial elements when incorporating AI into medicine. AI should be introduced progressively, with consideration for patients' emotions and societal circumstances.

Jerome Osheroff, TMIT Consulting LLC/University of Utah/VA

[email protected]

Dave Little (Epic), Stephanie Guarino (Nemours, ChristianaCare), Teresa Cullen (Pima County Health Dept), Rosie Bartel (Patient Partner), and Joshua E. Richardson (RTI International), for the POLLC and SCD Learning Communities

Understanding gaps in care is essential for clinical quality improvement efforts. The Pain Management/Opioid Use LHS Learning Community (POLLC), an initiative that engages care delivery organizations (CDOs) and other stakeholders to collaboratively accelerate quality improvement efforts and results, has been working since 2022 to develop and implement a care gap report for this target. Participants identified 12 important potential care improvement opportunities to assess appropriate long-term opioid use for chronic pain, for example, high opioid doses, multiple emergency department visits, and no Prescription Drug Monitoring Program check. In March 2023, Epic released a pilot Care Gap Query that uses SQL code to identify patients meeting these criteria; it requires an Epic analyst with SQL expertise to implement and modify the query. In November 2023, Epic released a Care Gap Report (CGR) using a Reporting Workbench Template that enables Epic users without special expertise to run and configure the report. It enables users to take bulk action on report results, for example, placing orders and triggering communications. POLLC participants are exploring opportunities to aggregate care gap results from individual CDOs in a region into “population CGRs” that can be used to inform and guide public health interventions. A parallel learning community on sickle cell disease (SCD) that grew out of POLLC is developing analogous care gap reports for SCD in Epic and Oracle EHRs. Approaches are being explored to leverage interoperable, computable biomedical knowledge to make creating and deploying CGRs across targets, EHRs, and CDOs faster and more efficient.
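To make the criteria concrete, here is a minimal sketch of care gap logic applied to hypothetical patient records. Field names and thresholds are assumptions for illustration only; the actual Epic Care Gap Query and Care Gap Report are implemented against EHR data via SQL and Reporting Workbench.

```python
# Illustrative care gap check over hypothetical patient records (not the Epic implementation).

patients = [
    {"id": "p1", "daily_mme": 120, "ed_visits_6mo": 3, "pdmp_checked": False},
    {"id": "p2", "daily_mme": 40,  "ed_visits_6mo": 0, "pdmp_checked": True},
]

def care_gaps(patient):
    """Return the care-improvement opportunities that apply to one patient."""
    gaps = []
    if patient["daily_mme"] >= 90:     # hypothetical high-dose threshold (morphine mg equivalents/day)
        gaps.append("high opioid dose")
    if patient["ed_visits_6mo"] >= 2:  # hypothetical multiple-ED-visit threshold
        gaps.append("multiple emergency department visits")
    if not patient["pdmp_checked"]:
        gaps.append("no Prescription Drug Monitoring Program check")
    return gaps

report = {p["id"]: care_gaps(p) for p in patients if care_gaps(p)}
print(report)  # only p1 appears, flagged with all three example gaps
```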

Henrique Santos, Rensselaer Polytechnic Institute

[email protected]

Paulo Pinheiro, Instituto Piaget

[email protected]

Deborah L. McGuinness, Rensselaer Polytechnic Institute

[email protected]

Many countries conduct surveys to gather data from their populations to support decision-making and the development of public policies. Questionnaires are possibly the most widely used type of data acquisition instrument in surveys, although additional kinds may be employed (especially in health-related surveys). In the United States, the National Health and Nutrition Examination Survey (NHANES), conducted by the National Center for Health Statistics, is designed to collect data on the health and nutritional status of adults and children. Data is organized in several tables, each containing variables related to a specific theme, such as demographics or dietary information. In addition, data dictionaries are available to (sometimes partially) document the tables' contents. While data is mostly provided by survey participants, instruments may also collect data related to other entities (e.g., participants' households and families, as well as laboratory results from participants' blood and urine samples). All this complex knowledge can often only be elicited by humans analyzing and understanding the data dictionaries in combination with the data. Representing this knowledge in a machine-interpretable format could facilitate further use of the data. We detail how Semantic Data Dictionaries (SDDs) have been used to elicit knowledge about surveys, using the publicly available NHANES data and data dictionaries. In SDDs, we formalize the semantics of variables, including entities, attributes, and more, using terminology from relevant ontologies, and demonstrate how they are used in an automated process to generate a rich knowledge graph that enables downstream tasks in support of survey data analysis.
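As a rough sketch of the idea, the example below maps a single NHANES-style variable to typed graph statements with rdflib. The namespace, class and property names, and values are hypothetical placeholders; actual SDDs draw their terms from published ontologies and drive a more elaborate automated pipeline.

```python
# Illustrative sketch: turning one Semantic Data Dictionary (SDD)-style entry into RDF.
# All IRIs below use a hypothetical example namespace rather than real ontology terms.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/nhanes/")

# One SDD-style entry: the column "BMXBMI" records an attribute (body mass index)
# of an entity (the survey participant).
sdd_row = {"column": "BMXBMI", "entity": EX.Participant, "attribute": EX.BodyMassIndex}

g = Graph()
participant = EX["participant/12345"]
measurement = EX["measurement/12345/BMXBMI"]

g.add((participant, RDF.type, sdd_row["entity"]))
g.add((measurement, RDF.type, sdd_row["attribute"]))
g.add((measurement, EX.isAttributeOf, participant))
g.add((measurement, EX.hasValue, Literal(27.4, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```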

Deborah Swain, North Carolina Central University

[email protected]

Following MCBK pilot training in 2021–22 supported by an Institute of Museum and Library Services (IMLS) grant, we designed and developed an open educational resource (OER) platform to be accessible and sustainable for global users. Students and development partners included librarians in medical libraries and informatics graduate students in the United States and Canada. However, international applicants were unable to participate in the pilot; the OER collection itself provides full access.

The concept of an open pedagogy for resources supports open commons and sharing in the larger MCBK community. Policies for OER allow open content and open education practices. In the future, new material and potential courses and textbooks can become part of this MCBK OER collection of documents and slides.

Currently, the MCBK resources are hosted in the North Carolina Digital Online Collection of Knowledge (NC Docks): https://libres.uncg.edu/ir/nccu/clist.aspx?id=41690. Technical support is provided by UNC-Greensboro and NCCU Research and Instructional Services Librarian, Danielle Colbert-Lewis.

[See references and special contributors.]

Deborah Swain, North Carolina Central University

[email protected]

Amrit Vastrala, North Carolina Central University

[email protected]

Bias in AI (artificial intelligence) and ML (machine learning) refers to the systematic errors that can occur in training data, algorithms, or models and lead to discriminatory or unfair outcomes for groups of people. This bias may be deliberate or inadvertent, and it may result from a number of factors, including data collection, pre-existing social biases, improper algorithm design, and the ethical motivations behind explainable AI (XAI).

The effects of bias can range from perpetuating social injustices to producing unreliable or inaccurate predictions for patients. The methodologies and frameworks for practice that biomedical researchers and health practitioners have proposed include fairness metrics, pre-processing methods, post-processing strategies, and models designed to detect and mitigate bias. Both model-level explanations for providers and prediction-level explanations for users/patients have been researched. This poster summarizes evidence-based research and thoughtful recommendations for key stakeholders. Identifying recent research and reference publications is the primary objective of our literature review and interviews.
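For readers unfamiliar with fairness metrics, the small Python sketch below computes one common example, the demographic parity difference between two groups; the predictions and group labels are synthetic and purely illustrative.

```python
# Minimal sketch of one common fairness metric (demographic parity difference);
# data and group labels are synthetic and purely illustrative.
from typing import Sequence

def demographic_parity_difference(y_pred: Sequence[int], group: Sequence[str]) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    groups = sorted(set(group))
    assert len(groups) == 2, "sketch assumes exactly two groups"
    rates = []
    for gname in groups:
        idx = [i for i, gval in enumerate(group) if gval == gname]
        rates.append(sum(y_pred[i] for i in idx) / len(idx))
    return abs(rates[0] - rates[1])

# Example: a model flags 60% of group A but only 20% of group B for an intervention.
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
group = ["A"] * 5 + ["B"] * 5
print(demographic_parity_difference(y_pred, group))  # 0.4
```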

Bias in AI is a difficult problem to solve because of its complexity and nuance. There is still much work to be done to ensure that biomedical systems are fair and transparent. Everyone involved can help address bias in AI/ML and develop best practices for building responsible and trustworthy systems. Ongoing research and collaboration among experts in ethics, social science, and healthcare will be essential.

Yujia Tian, University of Michigan

[email protected]

Although the etiology of most mental illnesses remains unclear, it is believed that they could be caused by a combination of genetic and social factors and personal characteristics. The complex clinical manifestations of mental illnesses pose challenges for medical diagnosis. Many infrastructural models exist to support the classification of mental illnesses. The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), published by the American Psychiatric Association (APA), serves as the principal authority for psychiatric diagnoses. The International Statistical Classification of Diseases and Related Health Problems (ICD) also provides a taxonomic and diagnostic classification of mental illnesses. However, DSM-5 and ICD still struggle with accurate diagnosis in the presence of heterogeneity and comorbidity. In contrast, the Hierarchical Taxonomy of Psychopathology (HiTOP) system offers a more continuous perspective on the disease spectrum. The Research Domain Criteria (RDoC) framework focuses on dimensions of behavioral/psychological functioning and the neural circuits that implement them. Ontology brings a new direction to future research in psychiatry. The Human Phenotype Ontology (HPO) provides standardized terms for various mental illnesses and is increasingly integrated with genomic data, enhancing its utility in clinical diagnostics and personalized medicine. We plan to integrate the advantages of existing infrastructural models in psychiatry into HPO and fill in missing aspects of the ontology. Ultimately, we can apply the concept of a learning health system (LHS) to create a medical environment that continuously learns and adapts, using big data and machine learning technology to optimize treatment strategies and improve the level of precision medicine.

Arlene Bierman, AHRQ

[email protected]

David Carlson, Clinical Cloud Solutions

[email protected]

Jenna Norton, NIDDK

[email protected]

Evelyn Gallego, EMI Advisors

[email protected]

Stanley Huff, MD, Graphite Health

[email protected]

Part 1: I will demonstrate the new LOINC Ontology, which is being made available as a SNOMED CT extension under a new agreement between the LOINC team at Regenstrief Institute and SNOMED International to make all LOINC content available in this extension. The creation of the new ontology is proceeding in a stepwise fashion; its first content comprises 24,000 quantitative laboratory tests. Information about the new ontology can be found at https://loincsnomed.org/, and a browser for the content is available at https://browser.loincsnomed.org/.

Part 2: The new ontology allows semantic reasoning on LOINC content, a capability that has been lacking in previous releases of the LOINC terminology. For example, using the SNOMED Expression Constraint Language (ECL), you can easily identify the codes that represent fasting glucose levels regardless of the method used for the test. Future releases will allow deeper reasoning, for example, about what kinds of parenteral antibiotics are available for a particular kind of bacteria identified by culture.
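A minimal sketch of how such an ECL query might be issued is shown below, using the SNOMED-on-FHIR implicit value set convention with a ValueSet $expand request; the server endpoint and the concept identifiers inside the ECL string are placeholders rather than verified LOINC Ontology content.

```python
# Hypothetical sketch: building a ValueSet $expand request for an ECL expression
# against a FHIR terminology server hosting the LOINC Ontology extension. The
# endpoint and the concept identifiers in the ECL string are placeholders.
import urllib.parse

FHIR_BASE = "https://example.org/fhir"   # placeholder terminology server endpoint
# "Observable entities whose component is glucose" -- identifiers are illustrative.
ecl = "< 363787002 |Observable entity| : 246093002 |Component| = 67079006 |Glucose|"

# SNOMED-on-FHIR implicit value set for an ECL expression, expanded via $expand.
implicit_vs = f"http://snomed.info/sct?fhir_vs=ecl/{ecl}"
expand_url = f"{FHIR_BASE}/ValueSet/$expand?url=" + urllib.parse.quote(implicit_vs, safe="")
print(expand_url)
# A GET on expand_url against a real server hosting the extension would return a
# ValueSet whose expansion.contains lists the matching codes and display names.
```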

Preston Lee, PhD, MBA, FAMIA – Skycapp

[email protected]

Adela Grando, PhD, FACMI, FAMIA – Arizona State University

[email protected]

Part 1: This Skycapp software delivery platform demo will show how a complex NIH-funded CDS Hooks service developed at Arizona State University (ASU) can be widely disseminated and deployed in an automated manner to worldwide evaluators and adopters. The system and approach are generalized to all “level 4” CBK types through strict adherence to FHIR (and other) data standards and infrastructural interoperability.

Part 2: Skycapp's implementation of post-publication CBK delivery and deployment is based on the balloted HL7/Logica Marketplace 2 STU 2 specification. We have purposefully designed the platform to provide a “publish → deploy → adopt” model of dissemination enabling adopters to evaluate artifacts in local context prior to deciding to pursue their adoption, thus encouraging CDS experimentation by deferring any sizable commitments of time or money.
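To indicate the kind of artifact being disseminated, here is a minimal, generic CDS Hooks service sketch in Python (Flask); it is not the ASU service or any part of the Skycapp platform, and its service id, hook, and card content are invented for illustration.

```python
# Minimal, generic CDS Hooks service sketch (Flask). Purely illustrative: not the
# ASU CDS Hooks service and not part of the Skycapp platform.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/cds-services")                      # discovery endpoint
def discovery():
    return jsonify({"services": [{
        "id": "example-advice",
        "hook": "patient-view",
        "title": "Example advice",
        "description": "Returns an informational card when a patient record is opened.",
    }]})

@app.post("/cds-services/example-advice")      # service invocation
def example_advice():
    req = request.get_json(force=True)
    patient = req.get("context", {}).get("patientId", "unknown")
    return jsonify({"cards": [{
        "summary": f"Example guidance for patient {patient}",
        "indicator": "info",
        "source": {"label": "Example CDS service"},
    }]})

if __name__ == "__main__":
    app.run(port=8080)
```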

Eric Mercer, Brigham Young University, Computer Science

[email protected]

Bryce Pierson, Brigham Young University, Computer Science

[email protected]

Keith A. Butler, University of Washington

[email protected]

Part 2: FAIR Principles Relevance

Findable: Complex HIT designs could be indexed for search engine discovery by the cognitive work problem they were proven and certified to solve.

Accessible: Our overall aim is automated translation back and forth between the concepts and languages of the design community and those of model checking, thereby making model checking far more accessible and usable for provider participation in HIT design.

Interoperable: The BPMN standard [1] is widely adopted and available in dozens of commercially supported modeling products. BPMN models can be exported as XML files, thereby carrying forward the conceptual design requirement onto implementation platforms.

Reusable: The finite state machine for the cognitive work problem that a workflow design must solve can be reused for model checking multiple HIT designs that purport to solve the same cognitive problem. Certification means they each can solve the identical problem, yet may differ widely in their qualities of usability, function allocation to human vs. computing, cost to develop/deploy, timeliness, etc.

The important, related property of trustworthiness is also increased by model-checking certificates that verify that all sequences of an HIT design are correct.
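As a toy illustration of the underlying idea, the Python sketch below exhaustively enumerates every path through a small, invented workflow state machine and checks a safety property over all of them; real HIT designs would be modeled in BPMN and verified with a full model checker.

```python
# Toy illustration only: exhaustively explore every path of a small workflow state
# machine and check a safety property. The states and property are invented; real
# HIT designs would be modeled in BPMN and checked with a full model checker.
TRANSITIONS = {
    "draft": ["verified", "cancelled"],
    "verified": ["signed", "cancelled"],
    "signed": [],
    "cancelled": [],
}

def all_paths(state="draft", path=()):
    """Yield every terminal path (sequence of states) reachable from `state`."""
    path = path + (state,)
    successors = TRANSITIONS[state]
    if not successors:
        yield path
    for nxt in successors:
        yield from all_paths(nxt, path)

def property_holds(path):
    # Safety property: an order is never signed unless it was verified earlier.
    if "signed" not in path:
        return True
    return "verified" in path and path.index("verified") < path.index("signed")

violations = [p for p in all_paths() if not property_holds(p)]
print("all sequences satisfy the property" if not violations else f"violations: {violations}")
```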

Sabbir Rashid, Rensselaer Polytechnic Institute

[email protected]

Deborah McGuinness, Rensselaer Polytechnic Institute

[email protected]

A clinical decision support system (CDSS) can support physicians in making clinical decisions, such as differential diagnosis, therapy planning, or plan critiquing. To make such informed decisions, a physician may need to keep track of a large amount of medical data and literature, such as new research articles, pharmacological therapies, and updates to Clinical Practice Guidelines. A CDSS can therefore be designed to assist physicians by providing relevant, evidence-based clinical recommendations, reducing the mental overhead required to keep up to date with an evolving body of literature. We designed a CDSS that leverages Semantic Web technologies to create an AI system that reasons in a way similar to physicians. We base our abstraction of human reasoning on the Select and Test Model (ST-Model), which combines multiple forms of reasoning, such as abstraction, deduction, abduction, and induction, to arrive at and test hypotheses. Based on this framework, we perform ensemble reasoning, the integration and interaction of multiple types of reasoning. We apply our CDSS to the treatment of type 2 diabetes mellitus by designing a domain ontology, the Diabetes Pharmacology Ontology (DPO), that supports both deductive and abductive reasoning. DPO additionally provides a schema for our knowledge representation of hypothetical patients, where each patient is encoded in RDF as a Personalized Health Knowledge Graph (PHKG). We build our system using the Whyis knowledge graph framework, writing software agents to perform custom deductive reasoning and integrating abductive reasoning using an existing reasoning engine, the AAA Abduction Solver. We apply our approach to perform therapy planning for the hypothetical patients, which we will showcase as part of the demonstration of our system.
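The toy Python/rdflib sketch below illustrates the general pattern of a deductive rule applied to a small patient graph, using an invented mini-vocabulary and a single rule expressed as a SPARQL CONSTRUCT; it does not reproduce DPO, the Whyis agents, or the AAA Abduction Solver.

```python
# Toy sketch of the pattern only (not DPO, Whyis, or the AAA solver): a hypothetical
# patient encoded as a small RDF graph, plus one deductive rule expressed as a
# SPARQL CONSTRUCT that proposes a candidate therapy. All terms are invented.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/dm2/")   # invented mini-vocabulary
g = Graph()
g.bind("ex", EX)

# Personalized health knowledge graph (PHKG) for one hypothetical patient.
g.add((EX.patient1, EX.hasCondition, EX.Type2Diabetes))
g.add((EX.patient1, EX.hasLabValue, EX.hba1c1))
g.add((EX.hba1c1, EX.measures, EX.HbA1c))
g.add((EX.hba1c1, EX.hasValue, Literal(8.2)))

# One deductive rule: poorly controlled type 2 diabetes without a (hypothetical)
# severe renal contraindication suggests a candidate therapy.
rule = """
PREFIX ex: <http://example.org/dm2/>
CONSTRUCT { ?p ex:candidateTherapy ex:Metformin }
WHERE {
  ?p ex:hasCondition ex:Type2Diabetes ;
     ex:hasLabValue ?lab .
  ?lab ex:measures ex:HbA1c ;
       ex:hasValue ?v .
  FILTER (?v >= 7.0)
  FILTER NOT EXISTS { ?p ex:hasContraindication ex:SevereRenalImpairment }
}
"""
for triple in g.query(rule):
    g.add(triple)                  # materialize the inferred triple

print(list(g.triples((EX.patient1, EX.candidateTherapy, None))))
```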

The use of semantic technologies allows us to leverage existing reasoning engines for both deductive and abductive reasoning, to use formal knowledge representations such as clinical ontologies and vocabularies, and to incorporate existing techniques for capturing provenance, such as nanopublications. Additionally, our approach allows for the generation of justifications for the reasoning choices made. Furthermore, this work promotes the FAIR principles and guidelines that have been widely adopted and cited for publishing data and metadata on the web. To make the ontology findable, globally unique and persistent identifiers are created for each resource in the ontology. Concepts are directly accessible from their URLs, and the ontology itself is directly accessible via the resource URL defined in the ontology. To promote interoperability, we link concepts in our ontology to other standard vocabularies, including LOINC, ChEBI, the Symptom Ontology, and NCIT. Finally, to promote the reusability of our resource, we have published, made readily available, and adequately documented the ontology, PHKGs, and software that we use for our CDSS. The demonstration will show our hybrid-reasoning clinical decision support system in action in a diabetes setting.

Farid Seifi, Knowledge Systems Lab, University of Michigan

[email protected]

Anurag Bangera, Knowledge Systems Lab, University of Michigan

[email protected]

Additionally, we have improved on our original metadata model and will discuss how standards-based Linked Data metadata improves the Findability, Accessibility, Interoperability, and Reusability of CBK packaged as KOs.

Legacy KOs could only be run as part of a RESTful web service that can be called by other systems. Additionally, the legacy KO model allowed for only one service and implementation. In contrast, the enhanced model allows for multiple services within a single KO, and multiple implementations of the same knowledge and/or services, each in a different programming language. In this demo, we will show how the files, metadata, code, and other information needed to support a variety of technical paths can be packaged together inside a single compound digital Knowledge Object that has the potential to support reuse by a variety of people with different professional roles.

The enhanced KO model can package computable biomedical knowledge in a technically variform way so that a wider variety of stakeholders can more quickly and easily use it. By offering many technical paths to deploying and using the same computable knowledge, CBK artifacts can be used in multiple different contexts, increasing interoperability and reusability. Enabling various technical paths to using the same CBK provides different ways for application developers, system integrators, CBK evaluators, data analysts, and others to reuse CBK in ways that meet existing and emerging needs.
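As a purely hypothetical illustration of what packaging multiple implementations and services might look like, the Python sketch below describes one Knowledge Object as a small metadata structure and selects a technical path by stakeholder role; the structure and field names are invented and are not the Knowledge Systems Lab's actual specification.

```python
# Invented metadata structure (not the actual KO specification) for one Knowledge
# Object carrying two implementations of the same knowledge and two services.
knowledge_object = {
    "@id": "https://example.org/ko/bmi-risk",            # placeholder identifier
    "title": "BMI-based risk rule (hypothetical example)",
    "implementations": [
        {"language": "python", "entry": "code/bmi_risk.py"},
        {"language": "javascript", "entry": "code/bmi_risk.js"},
    ],
    "services": [
        {"type": "REST API", "spec": "service/openapi.yaml"},
        {"type": "CLI", "spec": "service/cli.md"},
    ],
}

def technical_paths(role):
    """Return the deployment artifacts most relevant to a stakeholder role (illustrative)."""
    if role == "system integrator":
        return [s["spec"] for s in knowledge_object["services"] if s["type"] == "REST API"]
    if role == "data analyst":
        return [i["entry"] for i in knowledge_object["implementations"] if i["language"] == "python"]
    return [i["entry"] for i in knowledge_object["implementations"]]

print(technical_paths("system integrator"))   # ['service/openapi.yaml']
print(technical_paths("data analyst"))        # ['code/bmi_risk.py']
```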

Mitchell Shiell, Ontario Institute for Cancer Research

[email protected]

Describe the computable biomedical knowledge (CBK) you will demonstrate:

Next-generation sequencing has made genomics datasets commonplace, posing new challenges for research groups who want to efficiently gather, store, and share their data, while maximizing its value and reuse. This creates a compelling case for new computational tools to mobilize these massive datasets at scale. Overture is a suite of open-source and free-to-use modular software that works in concert to build and deploy scalable genomics data platforms. These platforms streamline the gathering, organizing, and sharing of raw and interpreted data, making it accessible for both humans and machines to translate into knowledge.

Our MCBK demo will highlight how Overture creates data resources that broadly achieve FAIR data goals. We will show how our core microservices—Ego, Song, Score, Maestro, and Arranger—achieve these goals through a presentation and practical demonstration of the Overture platform.

Describe how your CBK promotes the FAIR principles and/or trust:

Overture comprises five core components that each provide a foundation for mobilizing discoverable, FAIR (Findable, Accessible, Interoperable, and Reusable) genomics data. (1) Ego, Overture's identity and permission management service, enables accessibility with appropriate authentication and authorization procedures using standard and free protocols. (2) Song and (3) Score work together to support findability with data submission, management, and retrieval methods. These services significantly increase data quality, findability, and interoperability with automated tracking and custom metadata validations. (4) Maestro indexes data from a distributed network of Song metadata repositories into a unified Elasticsearch index, and (5) Arranger then uses this index to produce a GraphQL search API that can be extended with a library of configurable search and portal UI components. Combining these services yields a comprehensive end-to-end data portal that broadly enables the secure, scalable reuse of genomics data. Overture aims to make large-scale genomics data FAIR and cost-effective for researchers worldwide, fostering data mobilization and collaboration globally.
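As an illustration of the kind of machine access such a portal exposes, the Python sketch below composes a GraphQL query in the style of an Arranger search API; the endpoint, field names, and filter are placeholders, since the real schema is generated from the configured index.

```python
# Illustrative only: what a client-side query against an Arranger-style GraphQL
# search API might look like. Endpoint, field names, and filter values are
# placeholders; the real schema is generated from the configured index mapping.
import json

GRAPHQL_ENDPOINT = "https://example.org/arranger/graphql"   # placeholder

query = """
query ($filters: JSON) {
  file {
    hits(filters: $filters, first: 10) {
      total
      edges { node { object_id data_type } }
    }
  }
}
"""
# SQON-style filter restricting results to one data type (field/value illustrative).
filters = {"op": "in", "content": {"field": "data_type", "value": ["Aligned Reads"]}}
payload = {"query": query, "variables": {"filters": filters}}
print(json.dumps(payload, indent=2))
# POSTing this payload to GRAPHQL_ENDPOINT on a running portal would return the
# matching file records from the Maestro-built Elasticsearch index.
```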
