Bayesian latent class analysis produced diagnostic accuracy estimates that were more interpretable than composite reference standards for extrapulmonary tuberculosis tests
E. MacLean, Mikashmi Kohli, Lisa Köppel, Ian Schiller, Surendra K. Sharma, M. Pai, C. Denkinger, N. Dendukuri
Diagnostic and Prognostic Research, published 16 June 2022. DOI: 10.1186/s41512-022-00125-x

A scoping methodological review of simulation studies comparing statistical and machine learning approaches to risk prediction for time-to-event data
Hayley Smith, Michael Sweeting, Tim Morris, Michael J. Crowther
Diagnostic and Prognostic Research, published 2 June 2022; article 10. DOI: 10.1186/s41512-022-00124-y
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9161606/pdf/

Background: There is substantial interest in the adaptation and application of so-called machine learning approaches to prognostic modelling of censored time-to-event data. These methods must be compared and evaluated against existing methods in a variety of scenarios to determine their predictive performance. A scoping review of how machine learning methods have been compared with traditional survival models is important for identifying which comparisons have been made and where they are lacking, biased towards one approach, or misleading.

Methods: We conducted a scoping review of research articles published between 1 January 2000 and 2 December 2020 using PubMed. Eligible articles used simulation studies to compare statistical and machine learning methods for risk prediction with a time-to-event outcome in a medical or healthcare setting. We focus on the data-generating mechanisms (DGMs), the methods compared, the estimands of the simulation studies, and the performance measures used to evaluate them.

Results: Ten articles were identified as eligible for the review. Six evaluated a method developed by the authors themselves, four of which were machine learning methods, and the results almost always stated that the newly developed method performed as well as or better than the comparators. Comparisons were often biased towards the novel approach: most compared only against a basic Cox proportional hazards model, and in scenarios where it was clear that model would not perform well. In many of the articles reviewed, key information was unclear, such as the number of simulation repetitions and how performance measures were calculated.

Conclusion: It is vital that method comparisons are unbiased and comprehensive, and this should be the goal even if realising it is difficult. Fully assessing how newly developed methods perform, and how they compare with a variety of traditional statistical methods for prognostic modelling, is imperative because these methods are already being applied in clinical contexts. Evaluations of the performance and usefulness of recently developed methods for risk prediction should continue, and reporting standards should improve, as these methods become increasingly popular.

Quality and transparency of reporting derivation and validation prognostic studies of recurrent stroke in patients with TIA and minor stroke: a systematic review
K. Abdulaziz, J. Perry, K. Yadav, D. Dowlatshahi, I. Stiell, G. Wells, M. Taljaard
Diagnostic and Prognostic Research, published 19 May 2022. DOI: 10.1186/s41512-022-00123-z

Does poor methodological quality of prediction modeling studies translate to poor model performance? An illustration in traumatic brain injury
I. Helmrich, A. Mikolić, D. Kent, H. Lingsma, L. Wynants, E. Steyerberg, D. van Klaveren
Diagnostic and Prognostic Research, published 5 May 2022. DOI: 10.1186/s41512-022-00122-0

Examining the effect of evaluation sample size on the sensitivity and specificity of COVID-19 diagnostic tests in practice: a simulation study
C. Sammut-Powell, C. Reynard, Joy A. Allen, J. McDermott, Julian Braybrook, R. Parisi, D. Lasserson, R. Body, G. Hayward, P. Buckle, P. Dark, Kerrie Davis, Eloïse Cook, A. Gordon, Anna Halstead, A. Lewington, Brian Nicholson, R. Perera-Salazar, J. Simpson, Philip Turner, Graham Prestwich, Be Riley, Valerie Tate, Mark A. Wilcox
Diagnostic and Prognostic Research, published 25 April 2022. DOI: 10.1186/s41512-021-00116-4

{"title":"Quantitative prediction error analysis to investigate predictive performance under predictor measurement heterogeneity at model implementation","authors":"K. Luijken, Jiaolei Song, R. Groenwold","doi":"10.1186/s41512-022-00121-1","DOIUrl":"https://doi.org/10.1186/s41512-022-00121-1","url":null,"abstract":"","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48348478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multivariable prediction models for health care spending using machine learning: a protocol of a systematic review
Andrew W. Huang, Martin Haslberger, Neto Coulibaly, Omar Galárraga, Arman Oganisian, Lazaros Belbasis, Orestis A. Panagiotou
Diagnostic and Prognostic Research, published 24 March 2022; article 4. DOI: 10.1186/s41512-022-00119-9
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8943988/pdf/

Background: With rising cost pressures on health care systems, machine learning (ML)-based algorithms are increasingly used to predict health care costs. Despite their potential advantages, the successful implementation of these methods could be undermined by biases introduced in the design, conduct, or analysis of studies seeking to develop and/or validate ML models. The utility of such models may also be negatively affected by poor reporting. In this systematic review, we aim to evaluate the reporting quality, methodological characteristics, and risk of bias of ML-based prediction models for individual-level health care spending.

Methods: We will systematically search PubMed and Embase to identify studies developing, updating, or validating ML-based models that predict an individual's health care spending for any medical condition, over any time period, and in any setting. We will exclude prediction models of aggregate-level health care spending, models used to infer causality, models using radiomics or speech parameters, models based on non-clinically validated predictors (e.g., genomics), and cost-effectiveness analyses that do not predict individual-level health care spending. We will extract data based on the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies (CHARMS), previously published research, and relevant recommendations. We will assess the adherence of ML-based studies to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement and examine the inclusion of transparency and reproducibility indicators (e.g., statements on data sharing). To assess the risk of bias, we will apply the Prediction model Risk Of Bias Assessment Tool (PROBAST). Findings will be stratified by study design, ML methods used, population characteristics, and medical field.

Discussion: Our systematic review will appraise the quality, reporting, and risk of bias of ML-based models for individualized health care cost prediction. The review will provide an overview of the available models and give insight into the strengths and limitations of using ML methods for the prediction of health spending.

The comparative interrupted time series design for assessment of diagnostic impact: methodological considerations and an example using point-of-care C-reactive protein testing
T. Fanshawe, P. Turner, Marjorie M. Gillespie, G. Hayward
Diagnostic and Prognostic Research, published 2 March 2022. DOI: 10.1186/s41512-022-00118-w

Comparison of methods for predicting COVID-19-related death in the general population using the OpenSAFELY platform
Elizabeth J. Williamson, John Tazare, Krishnan Bhaskaran, Helen I. McDonald, Alex J. Walker, Laurie Tomlinson, Kevin Wing, Sebastian Bacon, Chris Bates, Helen J. Curtis, Harriet J. Forbes, Caroline Minassian, Caroline E. Morton, Emily Nightingale, Amir Mehrkar, David Evans, Brian D. Nicholson, David A. Leon, Peter Inglesby, Brian MacKenna, Nicholas G. Davies, Nicholas J. DeVito, Henry Drysdale, Jonathan Cockburn, William J. Hulme, Jessica Morley, Ian Douglas, Christopher T. Rentsch, Rohini Mathur, Angel Wong, Anna Schultze, Richard Croker, John Parry, Frank Hester, Sam Harper, Richard Grieve, David A. Harrison, Ewout W. Steyerberg, Rosalind M. Eggo, Karla Diaz-Ordaz, Ruth Keogh, Stephen J. W. Evans, Liam Smeeth, Ben Goldacre
Diagnostic and Prognostic Research, vol. 6, article 6, published 24 February 2022. DOI: 10.1186/s41512-022-00120-2
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8865947/pdf/

Background: Obtaining accurate estimates of the risk of COVID-19-related death in the general population is challenging in the context of changing levels of circulating infection.

Methods: We propose a modelling approach to predict 28-day COVID-19-related death that explicitly accounts for COVID-19 infection prevalence, using a series of sub-studies from new landmark times that incorporate time-updated proxy measures of infection prevalence. This was compared with an approach ignoring infection prevalence. The target population was adults registered at a general practice in England in March 2020. The outcome was 28-day COVID-19-related death. Predictors included demographic characteristics and comorbidities. Three proxies of local infection prevalence were used: model-based estimates, the rate of COVID-19-related attendances in emergency care, and the rate of suspected COVID-19 cases in primary care. We used data within the TPP SystmOne electronic health record system linked to Office for National Statistics mortality data, via the OpenSAFELY platform, working on behalf of NHS England. Prediction models were developed in case-cohort samples with 100-day follow-up. Validation was undertaken in 28-day cohorts from the target population. We assessed predictive performance (discrimination and calibration) in geographical and temporal subsets of data not used to develop the risk prediction models. Simple models were contrasted with models including a full range of predictors.

Results: Prediction models were developed on 11,972,947 individuals, of whom 7999 experienced COVID-19-related death. All models discriminated well between individuals who did and did not experience the outcome, including simple models adjusting only for basic demographics and number of comorbidities (C-statistics 0.92-0.94). However, absolute risk estimates were substantially miscalibrated when infection prevalence was not explicitly modelled.

Conclusions: Our proposed models allow absolute risk estimation in the context of changing infection prevalence, but predictive performance is sensitive to the choice of proxy for infection prevalence. Simple models can provide excellent discrimination and may simplify implementation of risk prediction tools.

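As a rough illustration of the landmarking idea described in the abstract, the sketch below stacks 28-day risk sets from successive landmark dates, each carrying a time-updated infection-prevalence proxy, and fits one pooled logistic model for 28-day death. It is a simplified, hypothetical sketch on synthetic data; the real analysis used case-cohort sampling, a much richer predictor set, and the OpenSAFELY platform, and the column names, coefficients, and stacking shortcut here are illustrative assumptions only.

```python
# Simplified landmarking sketch: pool 28-day risk sets from successive
# landmark dates, each with a time-updated infection-prevalence proxy,
# and fit a single logistic model for 28-day COVID-19-related death.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, n_landmarks = 20_000, 5
age = rng.integers(20, 95, size=n)
comorbidities = rng.poisson(1.0, size=n)

frames = []
for k in range(n_landmarks):
    # Synthetic local-prevalence proxy at landmark k (cases per 1,000 people).
    prevalence_per_1000 = 2.0 * (1 + np.sin(k))
    logit = -13 + 0.09 * age + 0.4 * comorbidities + 0.6 * prevalence_per_1000
    p_death_28d = 1 / (1 + np.exp(-logit))        # synthetic "true" 28-day risk
    frames.append(pd.DataFrame({
        "age": age,
        "comorbidities": comorbidities,
        "prevalence_per_1000": prevalence_per_1000,
        "death_28d": rng.binomial(1, p_death_28d),
    }))

stacked = pd.concat(frames, ignore_index=True)
model = LogisticRegression(max_iter=1000).fit(
    stacked[["age", "comorbidities", "prevalence_per_1000"]], stacked["death_28d"]
)

# Absolute risk for a new landmark period comes from plugging in the current
# value of the prevalence proxy alongside the individual's characteristics.
new_patient = pd.DataFrame({"age": [70], "comorbidities": [2], "prevalence_per_1000": [4.0]})
print(f"Predicted 28-day risk: {model.predict_proba(new_patient)[0, 1]:.4f}")
```

The paper's central point survives this simplification: if the prevalence term is dropped, ranking of individuals (discrimination) can remain good while absolute risk estimates become miscalibrated as circulating infection changes.
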
Diagnosing ventilator-associated pneumonia (VAP) in UK NHS ICUs: the perceived value and role of a novel optical technology
W. S. Jones, J. Suklan, A. Winter, K. Green, T. Craven, A. Bruce, J. Mair, K. Dhaliwal, T. Walsh, A. J. Simpson, S. Graziadio, A. J. Allen
Diagnostic and Prognostic Research, published 10 February 2022; article 5. DOI: 10.1186/s41512-022-00117-x
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8830125/pdf/

Background: Diagnosing ventilator-associated pneumonia (VAP) in an intensive care unit (ICU) is a complex process. Our aim was to collect, evaluate, and represent information on current clinical practice for the diagnosis of VAP in UK NHS ICUs, and to explore the potential value and role of a novel diagnostic for VAP that uses optical molecular alveoscopy to visualise the alveolar space.

Methods: This was a qualitative study using semi-structured interviews with clinical experts. Interviews were recorded, transcribed, and thematically analysed. A flow diagram of the VAP patient pathway was elicited and validated with the expert interviewees. Fourteen clinicians from a range of UK NHS hospitals were interviewed: 12 ICU consultants, one professor of respiratory medicine, and one professor of critical care.

Results: Five themes were identified, relating to (1) current practice for the diagnosis of VAP, (2) current clinical need in VAP diagnostics, (3) the potential value and role of the technology, (4) the barriers to adoption, and (5) the evidence requirements for the technology to facilitate successful adoption. These themes indicated that diagnosing VAP is extremely difficult, as is the decision to stop antibiotic treatment. The analysis revealed a clinical need for a diagnostic that provides accurate and timely identification of the causative pathogen, without the long delays associated with the return of culture results and without endangering the patient. The technology was judged to satisfy important aspects of this clinical need for diagnosing VAP (and pneumonia more generally), but further evidence on safety and efficacy in the patient population would be required to facilitate adoption.

Conclusions: The care pathway analysis performed in this study was judged by relevant clinical experts to be accurate and representative of current practice for diagnosing VAP in UK ICUs. The study also explored the value and role of a novel optical diagnostic technology that could streamline the diagnostic pathway for VAP and other pneumonias.