Supervised machine learning for microbiomics: Bridging the gap between current and best practices

Natasha Katherine Dudek, Mariami Chakhvadze, Saba Kobakhidze, Omar Kantidze, Yuriy Gankin
{"title":"Supervised machine learning for microbiomics: Bridging the gap between current and best practices","authors":"Natasha Katherine Dudek ,&nbsp;Mariami Chakhvadze ,&nbsp;Saba Kobakhidze ,&nbsp;Omar Kantidze ,&nbsp;Yuriy Gankin","doi":"10.1016/j.mlwa.2024.100607","DOIUrl":null,"url":null,"abstract":"<div><div>Machine learning (ML) is poised to drive innovations in clinical microbiomics, such as in disease diagnostics and prognostics. However, the successful implementation of ML in these domains necessitates the development of reproducible, interpretable models that meet the rigorous performance standards set by regulatory agencies. This study aims to identify key areas in need of improvement in current ML practices within microbiomics, with a focus on bridging the gap between existing methodologies and the requirements for clinical application. To do so, we analyze 100 peer-reviewed articles from 2021 to 2022. Within this corpus, datasets have a median size of 161.5 samples, with over one-third containing fewer than 100 samples, signaling a high potential for overfitting. Limited demographic data further raises concerns about generalizability and fairness, with 24% of studies omitting participants' country of residence, and attributes like race/ethnicity, education, and income rarely reported (11%, 2%, and 0%, respectively). Methodological issues are also common; for instance, for 86% of studies we could not confidently rule out test set omission and data leakage, suggesting a strong potential for inflated performance estimates across the literature. Reproducibility is a concern, with 78% of studies abstaining from sharing their ML code publicly. Based on this analysis, we provide guidance to avoid common pitfalls that can hinder model performance, generalizability, and trustworthiness. An interactive tutorial on applying ML to microbiomics data accompanies the discussion, to help establish and reinforce best practices within the community.</div></div>","PeriodicalId":74093,"journal":{"name":"Machine learning with applications","volume":"18 ","pages":"Article 100607"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning with applications","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666827024000835","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Machine learning (ML) is poised to drive innovations in clinical microbiomics, such as in disease diagnostics and prognostics. However, the successful implementation of ML in these domains necessitates the development of reproducible, interpretable models that meet the rigorous performance standards set by regulatory agencies. This study aims to identify key areas in need of improvement in current ML practices within microbiomics, with a focus on bridging the gap between existing methodologies and the requirements for clinical application. To do so, we analyze 100 peer-reviewed articles from 2021 to 2022. Within this corpus, datasets have a median size of 161.5 samples, with over one-third containing fewer than 100 samples, signaling a high potential for overfitting. Limited demographic data further raises concerns about generalizability and fairness, with 24% of studies omitting participants' country of residence, and attributes like race/ethnicity, education, and income rarely reported (11%, 2%, and 0%, respectively). Methodological issues are also common; for instance, for 86% of studies we could not confidently rule out test set omission and data leakage, suggesting a strong potential for inflated performance estimates across the literature. Reproducibility is a concern, with 78% of studies abstaining from sharing their ML code publicly. Based on this analysis, we provide guidance to avoid common pitfalls that can hinder model performance, generalizability, and trustworthiness. An interactive tutorial on applying ML to microbiomics data accompanies the discussion, to help establish and reinforce best practices within the community.
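The abstract highlights test set omission and data leakage as the most common methodological issues. A minimal sketch of one standard safeguard is shown below, assuming a scikit-learn workflow on tabular microbiome data; the feature matrix, labels, and model choice are hypothetical placeholders, not taken from the paper or its tutorial.

```python
# Minimal sketch (assumed scikit-learn workflow, not the authors' code):
# hold out a test set up front and keep all preprocessing inside a Pipeline
# so it is fit only on training folds, avoiding the leakage the study flags.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 50))          # placeholder: relative abundances of 50 taxa
y = rng.integers(0, 2, size=200)   # placeholder: case/control labels

# Set aside a test set before any model development and never touch it again
# until the final evaluation (addresses "test set omission").
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Scaling is part of the Pipeline, so cross_val_score refits it on each
# training fold rather than on the full dataset (addresses "data leakage").
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])

cv_scores = cross_val_score(
    model, X_train, y_train,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    scoring="roc_auc",
)
print("CV ROC AUC: %.2f +/- %.2f" % (cv_scores.mean(), cv_scores.std()))

# Only after model selection: fit on the full training set and report a single
# evaluation on the held-out test set.
model.fit(X_train, y_train)
print("Held-out test accuracy: %.2f" % model.score(X_test, y_test))
```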