Measurement Methodology
S. Shirmohammadi
IEEE Instrumentation & Measurement Magazine, October 1, 2023
DOI: 10.1109/MIM.2023.10238361
Abstract
From the onset of the COVID-19 pandemic, many researchers rushed to design Machine Learning (ML)-assisted diagnostic tools that could, supposedly, detect COVID-19 quickly and reliably. ML seemed perfect for this job since we had access to many COVID-19 datasets, so a data-driven approach should have quickly yielded such diagnostic tools that could then be distributed to the masses. Unfortunately, the reality fell well short of expectations. In an extensive study, Wynants and colleagues screened 126,978 relevant titles in the literature and found 412 studies describing 731 such ML-based COVID-19 diagnostic tools, but their conclusion was that “most published prediction model studies were poorly reported and at high risk of bias such that their reported predictive performances are probably optimistic” [1]. Only 29 models had low risk of bias and “should be validated before clinical implementation.” This was confirmed by another study that identified 2,212 such tools, of which 415 were included after initial screening, and 62 were systematically reviewed. The result? “Our review finds that none of the models identified are of potential clinical use due to methodological flaws and/or underlying biases” [2]. There were several problems with the proposed tools, but the one that relates to our article is summarized in the following remedial recommendation of the authors: “When reporting results, it is important to include confidence intervals to reflect the uncertainty in the estimate, especially when training models on the small sample sizes commonly seen with COVID-19 data.”
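The recommendation quoted above — report a confidence interval on the performance estimate, not just a point value — can be sketched with a percentile bootstrap. This is an illustrative example only, not the method used in [1] or [2]; the 42-correct-of-50 test outcomes below are invented to show how wide the interval becomes at the small sample sizes common in COVID-19 datasets:

```python
import random

def bootstrap_ci(correct, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for accuracy from per-sample 0/1 outcomes."""
    rng = random.Random(seed)
    n = len(correct)
    accs = []
    for _ in range(n_resamples):
        # Resample the test outcomes with replacement and recompute accuracy
        resample = [correct[rng.randrange(n)] for _ in range(n)]
        accs.append(sum(resample) / n)
    accs.sort()
    lo = accs[int((alpha / 2) * n_resamples)]
    hi = accs[int((1 - alpha / 2) * n_resamples) - 1]
    point = sum(correct) / n
    return point, lo, hi

# Hypothetical small test set: 42 of 50 samples classified correctly
outcomes = [1] * 42 + [0] * 8
point, lo, hi = bootstrap_ci(outcomes)
print(f"accuracy = {point:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

With only 50 test samples, the interval spans roughly twenty percentage points around the 0.84 point estimate, which is exactly the uncertainty that a bare "84% accurate" claim hides.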
Journal Description:
IEEE Instrumentation & Measurement Magazine is a bimonthly publication, appearing in February, April, June, August, October, and December of each year. The magazine covers a wide variety of topics in instrumentation, measurement, and the systems that measure or instrument other equipment and systems. Through articles, tutorials, columns, and departments, it aims to provide readable introductions and overviews of instrumentation and measurement technology to a broad engineering audience, crossing disciplines to encourage further research and development in instrumentation and measurement.