Development of an Instrument for Assessing the Maturity of Citizens for Consumer Health Informatics in Developing Countries: The Case of Chile, Ghana, and Kosovo
Abubakari Yakubu, Fortuna Paloji, Juan Pablo Guerrero Bonnet, Thomas Wetter
Methods of Information in Medicine 2021;60(1-2):62-70. DOI: 10.1055/s-0041-1731389

Objective: We aimed to develop a survey instrument to assess the maturity level of consumer health informatics (ConsHI) in low- and middle-income countries (LMIC).
Methods: We derived items from the unified theory of acceptance and use of technology (UTAUT), UTAUT2, the patient activation measure (PAM), and ConsHI levels to constitute a pilot instrument. Using an iterative process, we proposed a total of 78 questions, consisting of 14 demographic and 64 maturity-related variables. We used a multistage convenience sampling approach to select 351 respondents across the three countries.
Results: Our results supported the earlier assertion that mobile devices and technology are more commonplace today than ever, confirming that mobile devices have become an essential part of human activities. We used the Wilcoxon signed-rank test (WSRT) and item response theory (IRT) to reduce the ConsHI-related items from 64 to 43. The final questionnaire consisted of 10 demographic questions and 43 ConsHI-relevant questions on the maturity of citizens for ConsHI in LMIC. The results also supported moderators such as age and gender. Additionally, further demographic items such as marital status, educational level, and location of respondents were validated using IRT and WSRT.
Conclusion: We contend that this is the first composite instrument for assessing the maturity of citizens for ConsHI in LMIC. Specifically, it aggregates multiple theoretical models from information systems (UTAUT and UTAUT2) and health (PAM) with the ConsHI levels.

{"title":"Smoothing Corrections for Improving Sample Size Recalculation Rules in Adaptive Group Sequential Study Designs.","authors":"Carolin Herrmann, Geraldine Rauch","doi":"10.1055/s-0040-1721727","DOIUrl":"https://doi.org/10.1055/s-0040-1721727","url":null,"abstract":"<p><strong>Background: </strong>An adequate sample size calculation is essential for designing a successful clinical trial. One way to tackle planning difficulties regarding parameter assumptions required for sample size calculation is to adapt the sample size during the ongoing trial.This can be attained by adaptive group sequential study designs. At a predefined timepoint, the interim effect is tested for significance. Based on the interim test result, the trial is either stopped or continued with the possibility of a sample size recalculation.</p><p><strong>Objectives: </strong>Sample size recalculation rules have different limitations in application like a high variability of the recalculated sample size. Hence, the goal is to provide a tool to counteract this performance limitation.</p><p><strong>Methods: </strong>Sample size recalculation rules can be interpreted as functions of the observed interim effect. Often, a \"jump\" from the first stage's sample size to the maximal sample size at a rather arbitrarily chosen interim effect size is implemented and the curve decreases monotonically afterwards. This jump is one reason for a high variability of the sample size. In this work, we investigate how the shape of the recalculation function can be improved by implementing a smoother increase of the sample size. The design options are evaluated by means of Monte Carlo simulations. Evaluation criteria are univariate performance measures such as the conditional power and sample size as well as a conditional performance score which combines these components.</p><p><strong>Results: </strong>We demonstrate that smoothing corrections can reduce variability in conditional power and sample size as well as they increase the performance with respect to a recently published conditional performance score for medium and large standardized effect sizes.</p><p><strong>Conclusion: </strong>Based on the simulation study, we present a tool that is easily implemented to improve sample size recalculation rules. The approach can be combined with existing sample size recalculation rules described in the literature.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":"60 1-02","pages":"1-8"},"PeriodicalIF":1.7,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1055/s-0040-1721727","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25417773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why Is the Electronic Health Record So Challenging for Research and Clinical Care?
John H Holmes, James Beinlich, Mary R Boland, Kathryn H Bowles, Yong Chen, Tessa S Cook, George Demiris, Michael Draugelis, Laura Fluharty, Peter E Gabriel, Robert Grundmeier, C William Hanson, Daniel S Herman, Blanca E Himes, Rebecca A Hubbard, Charles E Kahn, Dokyoon Kim, Ross Koppel, Qi Long, Nebojsa Mirkovic, Jeffrey S Morris, Danielle L Mowery, Marylyn D Ritchie, Ryan Urbanowicz, Jason H Moore
Methods of Information in Medicine 2021;60(1-2):32-48. DOI: 10.1055/s-0041-1731784

Background: The electronic health record (EHR) has become increasingly ubiquitous. At the same time, health professionals have been turning to this resource for access to the data needed for the delivery of health care and for clinical research. There is little doubt that the EHR has made both of these functions easier than in the earlier days when we relied on paper-based clinical records. Yet coupled with modern database and data warehouse systems, high-speed networks, and the ability to share clinical data with others comes a large number of challenges that arguably limit the optimal use of the EHR.
Objectives: Our goal was to provide an exhaustive reference for those who use the EHR in clinical and research contexts, but also for health information systems professionals as they design, implement, and maintain EHR systems.
Methods: This study includes a panel of 24 biomedical informatics researchers, information technology professionals, and clinicians, all of whom have extensive experience in the design, implementation, and maintenance of EHR systems, or in using the EHR as clinicians or researchers. All members of the panel are affiliated with Penn Medicine at the University of Pennsylvania and have experience with a variety of EHR platforms and systems and how they have evolved over time.
Results: Each of the authors has shared their knowledge and experience in using the EHR in a suite of 20 short essays, each representing a specific challenge and classified according to a functional hierarchy of interlocking facets such as usability and usefulness, data quality, standards, governance, data integration, clinical care, and clinical research.
Conclusion: We provide here a set of perspectives on the challenges posed by the EHR to clinical and research users.

Analysis of Not Structurable Oncological Study Eligibility Criteria for Improved Patient-Trial Matching
Julia Dieter, Friederike Dominick, Alexander Knurr, Janko Ahlbrandt, Frank Ückert
Methods of Information in Medicine 2021;60(1-2):9-20. DOI: 10.1055/s-0041-1724107

Background: Higher enrolment rates of cancer patients into clinical trials are necessary to increase cancer survival. As a prerequisite, an improved semiautomated matching of patient characteristics with clinical trial eligibility criteria is needed. This depends on the computer interpretability, i.e., the structurability, of eligibility criteria texts. To increase structurability, the common content, phrasing, and structuring problems of oncological eligibility criteria need to be better understood.
Objectives: We aimed to identify oncological eligibility criteria that could not be structured by our manual approach and to categorize them by the underlying structuring problem. Our results shall contribute to improved criteria phrasing in the future as a prerequisite for increased structurability.
Methods: The inclusion and exclusion criteria of 159 oncological studies from the Clinical Trial Information System of the National Center for Tumor Diseases Heidelberg were manually structured and grouped into content-related subcategories. Criteria identified as not structurable were analyzed further and manually categorized by the underlying structuring problem.
Results: The structuring of criteria resulted in 4,742 smallest meaningful components (SMCs) distributed across seven main categories (Diagnosis; Therapy; Laboratory; Study; Findings; Demographics and Lifestyle; Others). A proportion of 645 SMCs (13.60%) could not be structured due to content- and structure-related issues. Of these, a subset of 415 SMCs (64.34%) was considered not remediable, as supplementary medical knowledge would have been needed or the linkage among the sentence components was too complex. The main categories Diagnosis and Study contained the largest shares of these two subsets and were thus the least structurable. In the inclusion criteria, the reasons for lacking structurability varied, while missing supplementary medical knowledge was the largest factor within the exclusion criteria.
Conclusion: Our results suggest that further improvement of eligibility criterion phrasing contributes only marginally to increased structurability. Instead, physician-based confirmation of the matching results and the exclusion of factors harming the patient or biasing the study are needed.

MAGICPL: A Generic Process Description Language for Distributed Pseudonymization Scenarios
Galina Tremper, Torben Brenner, Florian Stampe, Andreas Borg, Martin Bialke, David Croft, Esther Schmidt, Martin Lablans
Methods of Information in Medicine 2021;60(1-2):21-31. DOI: 10.1055/s-0041-1731387

Objectives: Pseudonymization is an important aspect of projects dealing with sensitive patient data. Most projects build their own specialized, hard-coded solutions. However, these overlap in many aspects of their functionality. As any re-implementation binds resources, we would like to propose a solution that facilitates and encourages the reuse of existing components.
Methods: We analyzed already-established data protection concepts to gain an insight into their common features and the ways in which their components were linked together. We found that we could represent these pseudonymization processes with a simple descriptive language, which we have called MAGICPL, plus a relatively small set of components. We designed MAGICPL as an XML-based language to make it human-readable and accessible to nonprogrammers. Additionally, a prototype implementation of the components was written in Java. MAGICPL makes it possible to reference the components using their class names, making it easy to extend or exchange the component set. Furthermore, there is a simple HTTP application programming interface (API) that runs the tasks and allows other systems to communicate with the pseudonymization process.
Results: MAGICPL has been used in at least three projects, including the re-implementation of the pseudonymization process of the German Cancer Consortium, clinical data flows in a large-scale translational research network (National Network Genomic Medicine), and our own institute's pseudonymization service.
Conclusions: Putting our solution into productive use at both our own institute and at our partner sites facilitated a reduction in the time and effort required to build pseudonymization pipelines in medical research.

{"title":"Semi-automated Conversion of Clinical Trial Legacy Data into CDISC SDTM Standards Format Using Supervised Machine Learning.","authors":"Takuma Oda, Shih-Wei Chiu, Takuhiro Yamaguchi","doi":"10.1055/s-0041-1731388","DOIUrl":"https://doi.org/10.1055/s-0041-1731388","url":null,"abstract":"<p><strong>Objective: </strong> This study aimed to develop a semi-automated process to convert legacy data into clinical data interchange standards consortium (CDISC) study data tabulation model (SDTM) format by combining human verification and three methods: data normalization; feature extraction by distributed representation of dataset names, variable names, and variable labels; and supervised machine learning.</p><p><strong>Materials and methods: </strong> Variable labels, dataset names, variable names, and values of legacy data were used as machine learning features. Because most of these data are string data, they had been converted to a distributed representation to make them usable as machine learning features. For this purpose, we utilized the following methods for distributed representation: Gestalt pattern matching, cosine similarity after vectorization by Doc2vec, and vectorization by Doc2vec. In this study, we examined five algorithms-namely decision tree, random forest, gradient boosting, neural network, and an ensemble that combines the four algorithms-to identify the one that could generate the best prediction model.</p><p><strong>Results: </strong> The accuracy rate was highest for the neural network, and the distribution of prediction probabilities also showed a split between the correct and incorrect distributions. By combining human verification and the three methods, we were able to semi-automatically convert legacy data into the CDISC SDTM format.</p><p><strong>Conclusion: </strong> By combining human verification and the three methods, we have successfully developed a semi-automated process to convert legacy data into the CDISC SDTM format; this process is more efficient than the conventional fully manual process.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":"60 1-02","pages":"49-61"},"PeriodicalIF":1.7,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39164925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Clinical Data Sharing Framework Based on Blockchain Technology
Karamo Kanagi, Cooper Cheng-Yuan Ku, Li-Kai Lin, Wen-Huai Hsieh
Methods of Information in Medicine 2020;59(6):193-204. DOI: 10.1055/s-0041-1727193

Background: While electronic health records have been collected for many years in Taiwan, their interoperability across different health care providers has not yet been fully achieved. The exchange of clinical data is still inefficient and time consuming.
Objectives: This study proposes an efficient, patient-centric framework based on blockchain technology that makes clinical data accessible to patients and enables transparent, traceable, secure, and effective data sharing between physicians and other health care providers.
Methods: Health care experts were interviewed for the study, and medical data were collected in collaboration with the Ministry of Health and Welfare (MOHW) Chang-Hua Hospital. The proposed framework was designed based on a detailed analysis of this information. The framework includes smart contracts in an Ethereum-based permissioned blockchain to secure and facilitate clinical data exchange among different parties such as hospitals, clinics, patients, and other stakeholders. In addition, the framework employs the Logical Observation Identifiers Names and Codes (LOINC) standard to ensure the interoperability and reuse of clinical data.
Results: A prototype of the proposed framework was deployed in Chang-Hua Hospital to demonstrate the sharing of health examination reports with many clinics in suburban areas. The framework was found to reduce the average access time to patient health reports from the existing next-day service to a few seconds.
Conclusion: The proposed framework can be adopted to achieve health record sharing among health care providers with higher efficiency and better protected privacy than the client-server-based system currently used in Taiwan.

Evidence-Based Health Informatics as the Foundation for the COVID-19 Response: A Joint Call for Action
Luis Fernandez-Luque, Andre W Kushniruk, Andrew Georgiou, Arindam Basu, Carolyn Petersen, Charlene Ronquillo, Chris Paton, Christian Nøhr, Craig E Kuziemsky, Dari Alhuwail, Diane Skiba, Elaine Huesing, Elia Gabarron, Elizabeth M Borycki, Farah Magrabi, Kerstin Denecke, Linda W P Peute, Max Topaz, Najeeb Al-Shorbaji, Paulette Lacroix, Romaric Marcilly, Ronald Cornet, Shashi B Gogia, Shinji Kobayashi, Sriram Iyengar, Thomas M Deserno, Tobias Mettler, Vivian Vimarlund, Xinxin Zhu
Methods of Information in Medicine 2020;59(6):183-192. DOI: 10.1055/s-0041-1726414

Background: As a major public health crisis, the novel coronavirus disease 2019 (COVID-19) pandemic demonstrates the urgent need for safe, effective, and evidence-based implementations of digital health. The urgency stems from the frequent tendency in times of crisis to focus attention on seemingly promising digital health interventions that are poorly validated.
Aim: In this paper, we describe a joint call for action to use and leverage evidence-based health informatics as the foundation for the COVID-19 response and public health interventions. Tangible examples are provided of how the working groups and special interest groups of the International Medical Informatics Association (IMIA) are helping to build an evidence-based response to this crisis.
Methods: The leaders of the 26 working and special interest groups of the IMIA were contacted via e-mail to provide a summary of their science-based efforts to combat the COVID-19 pandemic and to participate in the discussion leading to this manuscript. A total of 13 groups contributed.
Results: The efforts of IMIA members included (1) developing evidence-based guidelines for the design and deployment of digital health solutions during COVID-19; (2) surveying clinical informaticians internationally about key digital solutions deployed to combat COVID-19 and the challenges faced when implementing and using them; and (3) offering necessary resources for clinicians on the use of digital tools in clinical practice, education, and research during COVID-19.
Discussion: Rigor and evidence need to be taken into consideration when designing, implementing, and using digital tools to combat COVID-19 in order to avoid delays and unforeseen negative consequences. It is paramount to employ a multidisciplinary approach to the development and implementation of digital health tools that have been rapidly deployed in response to the pandemic, bearing in mind human factors, ethics, data privacy, and the diversity of context at the local, national, and international levels. The training and capacity building of front-line workers is crucial and must be linked to a clear strategy for the evaluation of ongoing experiences.

Accuracy of Asthma Computable Phenotypes to Identify Pediatric Asthma at an Academic Institution
Mindy K Ross, Henry Zheng, Bing Zhu, Ailina Lao, Hyejin Hong, Alamelu Natesan, Melina Radparvar, Alex A T Bui
Methods of Information in Medicine 2020;59(6):219-226. DOI: 10.1055/s-0041-1729951

Objectives: Asthma is a heterogeneous condition with significant diagnostic complexity, including variations in symptoms and temporal criteria, and the disease can be difficult for clinicians to diagnose accurately. Properly identifying asthma patients from the electronic health record is consequently challenging, as current algorithms (computable phenotypes) rely on diagnostic codes (e.g., International Classification of Diseases, ICD) in addition to other criteria (e.g., inhaler medications) but presume an accurate diagnosis. As such, there is no universally accepted or rigorously tested computable phenotype for asthma.
Methods: We compared two established asthma computable phenotypes: the Chicago Area Patient-Centered Outcomes Research Network (CAPriCORN) and Phenotype KnowledgeBase (PheKB) algorithms. We established a large-scale consensus gold standard (n = 1,365) from the University of California, Los Angeles Health System's clinical data warehouse for patients 5 to 17 years old. Results were manually reviewed and the predictive performance (positive predictive value [PPV], sensitivity/specificity, F1-score) was determined. We then examined the classification errors to gain insight for future algorithm optimizations.
Results: As applied to our final cohort of 1,365 expert-defined gold standard patients, the CAPriCORN algorithm performed with a balanced PPV of 95.8% (95% CI: 94.4-97.2%), sensitivity of 85.7% (95% CI: 83.9-87.5%), and F1-score of 90.4% (95% CI: 89.2-91.7%). The PheKB algorithm performed with a balanced PPV of 83.1% (95% CI: 80.5-85.7%), sensitivity of 69.4% (95% CI: 66.3-72.5%), and F1-score of 75.4% (95% CI: 73.1-77.8%). Four categories of errors were identified, related to method limitations, disease definition, human error, and design implementation.
Conclusion: The performance of the CAPriCORN and PheKB algorithms was lower than previously reported when applied to pediatric data (PPV of 97.7% and 96%, respectively). There is room to improve the performance of current methods, including through targeted use of natural language processing and clinical feature engineering.

Health-Enabling Technologies for Telerehabilitation of the Shoulder: A Feasibility and User Acceptance Study
Bianca Steiner, Lena Elgert, Birgit Saalfeld, Jonas Schwartze, Horst Peter Borrmann, Axel Kobelt-Pönicke, Andreas Figlewicz, Detlev Kasprowski, Michael Thiel, Ralf Kreikebohm, Reinhold Haux, Klaus-Hendrik Wolf
Methods of Information in Medicine 2020;59(S2):e90-e99. DOI: 10.1055/s-0040-1713685

Background: After discharge from a rehabilitation center, the continuation of therapy is necessary to secure the healing progress already achieved and to sustain (re-)integration into working life. To this end, home-based exercise programs are frequently prescribed. However, many patients do not perform their exercises as frequently as prescribed, or perform them with incorrect movements. The telerehabilitation system AGT-Reha was developed to support patients with shoulder diseases during their home-based aftercare rehabilitation.
Objectives: The pilot study AGT-Reha-P2 presented here evaluates the technical feasibility and user acceptance of the home-based telerehabilitation system AGT-Reha.
Methods: A nonblinded, nonrandomized exploratory feasibility study was conducted over a 2-year period in patients' homes. Twelve patients completed a 3-month telerehabilitation exercise program with AGT-Reha. Primary outcome measures were satisfactory technical functionality and user acceptance, assessed by technical parameters, structured interviews, and a four-dimensional questionnaire. Secondary endpoints were medical rehabilitation success, measured by the active range of motion and by shoulder function (pain and disability), assessed with the Neutral-0 Method and the standardized "Shoulder Pain and Disability Index" (SPADI) questionnaire, respectively. To prepare an efficacy trial, various standardized questionnaires were included in the study to measure ability to work, capacity to work, and subjective prognosis of work capacity. Participants were assessed at three measurement points: prebaseline (admission to the rehabilitation center), baseline (discharge from the rehabilitation center), and posttherapy.
Results: Six participants used the first version of AGT-Reha, while the other six used an improved version. Despite minor technical problems, all participants successfully trained on their own with AGT-Reha at home. On average, participants trained at least once per day during their training period. Five of the 12 participants showed clinically relevant improvements in shoulder function (SPADI score improved by more than 11 points). The work-related parameters suggested a positive impact. All participants would recommend the system, ten would likely reuse it, and seven would have wanted to continue using it after the 3 months.
Conclusion: The findings show that home-based training with AGT-Reha is feasible and well accepted. The SPADI outcomes indicate the effectiveness of aftercare with AGT-Reha. A controlled clinical trial with a larger number of participants will be conducted to test this hypothesis.