{"title":"揭示机器学习驱动科学中的过度乐观和发表偏见。","authors":"Pouria Saidi, Gautam Dasarathy, Visar Berisha","doi":"10.1016/j.patter.2025.101185","DOIUrl":null,"url":null,"abstract":"<p><p>Machine learning (ML) is increasingly used across many disciplines with impressive reported results. However, recent studies suggest that the published performances of ML models are often overoptimistic. Validity concerns are underscored by findings of an inverse relationship between sample size and reported accuracy in published ML models, contrasting with the theory of learning curves where accuracy should improve or remain stable with increasing sample size. This paper investigates factors contributing to overoptimism in ML-driven science, focusing on overfitting and publication bias. We introduce a stochastic model for observed accuracy, integrating parametric learning curves and the aforementioned biases. We construct an estimator that corrects for these biases in observed data. Theoretical and empirical results show that our framework can estimate the underlying learning curve, providing realistic performance assessments from published results. By applying the model to meta-analyses of classifications of neurological conditions, we estimate the inherent limits of ML-driven prediction in each domain.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":"6 4","pages":"101185"},"PeriodicalIF":6.7000,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12010447/pdf/","citationCount":"0","resultStr":"{\"title\":\"Unraveling overoptimism and publication bias in ML-driven science.\",\"authors\":\"Pouria Saidi, Gautam Dasarathy, Visar Berisha\",\"doi\":\"10.1016/j.patter.2025.101185\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Machine learning (ML) is increasingly used across many disciplines with impressive reported results. However, recent studies suggest that the published performances of ML models are often overoptimistic. Validity concerns are underscored by findings of an inverse relationship between sample size and reported accuracy in published ML models, contrasting with the theory of learning curves where accuracy should improve or remain stable with increasing sample size. This paper investigates factors contributing to overoptimism in ML-driven science, focusing on overfitting and publication bias. We introduce a stochastic model for observed accuracy, integrating parametric learning curves and the aforementioned biases. We construct an estimator that corrects for these biases in observed data. Theoretical and empirical results show that our framework can estimate the underlying learning curve, providing realistic performance assessments from published results. 
By applying the model to meta-analyses of classifications of neurological conditions, we estimate the inherent limits of ML-driven prediction in each domain.</p>\",\"PeriodicalId\":36242,\"journal\":{\"name\":\"Patterns\",\"volume\":\"6 4\",\"pages\":\"101185\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2025-02-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12010447/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Patterns\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.patter.2025.101185\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/4/11 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Patterns","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.patter.2025.101185","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/4/11 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Unraveling overoptimism and publication bias in ML-driven science.
Machine learning (ML) is increasingly used across many disciplines with impressive reported results. However, recent studies suggest that the published performances of ML models are often overoptimistic. Validity concerns are underscored by findings of an inverse relationship between sample size and reported accuracy in published ML models, contrasting with the theory of learning curves where accuracy should improve or remain stable with increasing sample size. This paper investigates factors contributing to overoptimism in ML-driven science, focusing on overfitting and publication bias. We introduce a stochastic model for observed accuracy, integrating parametric learning curves and the aforementioned biases. We construct an estimator that corrects for these biases in observed data. Theoretical and empirical results show that our framework can estimate the underlying learning curve, providing realistic performance assessments from published results. By applying the model to meta-analyses of classifications of neurological conditions, we estimate the inherent limits of ML-driven prediction in each domain.
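The mechanism sketched in the abstract, that small-sample studies yield noisier accuracy estimates and that selective publication of the favorable ones produces the observed inverse relationship between sample size and reported accuracy, can be illustrated with a short simulation. The sketch below is not the authors' stochastic model or bias-corrected estimator; the power-law learning curve, the noise scale, and the acceptance-threshold publication filter are all hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def learning_curve(n, a=0.90, b=1.5, c=0.5):
    # Hypothetical power-law learning curve: true accuracy rises toward the
    # asymptote `a` as the sample size n grows. The functional form and the
    # parameter values are illustrative assumptions, not the paper's fit.
    return a - b * np.asarray(n, dtype=float) ** (-c)

def published_accuracy(n, sigma=0.05, accept_threshold=0.70, max_redraws=20):
    # One simulated *published* accuracy at sample size n:
    #   1) finite-sample/overfitting noise is larger when n is small;
    #   2) a crude publication filter keeps only results above the acceptance
    #      threshold, redrawing a few times (a stand-in for the file drawer).
    true_acc = float(learning_curve(n))
    noise_scale = sigma * np.sqrt(100.0 / n)
    obs = true_acc
    for _ in range(max_redraws):
        obs = float(np.clip(true_acc + rng.normal(0.0, noise_scale), 0.0, 1.0))
        if obs >= accept_threshold:
            break
    return obs

sample_sizes = np.array([30, 100, 300, 1000, 3000])
true_acc = learning_curve(sample_sizes)
published = np.array([published_accuracy(int(n)) for n in sample_sizes])
print("sample size:        ", sample_sizes)
print("true accuracy:      ", np.round(true_acc, 3))
print("published accuracy: ", np.round(published, 3))
```

Under these assumptions, the gap between published and true accuracy is largest at small sample sizes and shrinks as n grows, which is the qualitative pattern the paper's framework is designed to model and correct for when estimating the underlying learning curve from published results.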