{"title":"Towards equitable AI in oncology","authors":"Vidya Sankar Viswanathan, Vani Parmar, Anant Madabhushi","doi":"10.1038/s41571-024-00909-8","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) stands at the threshold of revolutionizing clinical oncology, with considerable potential to improve early cancer detection and risk assessment, and to enable more accurate personalized treatment recommendations. However, a notable imbalance exists in the distribution of the benefits of AI, which disproportionately favour those living in specific geographical locations and in specific populations. In this Perspective, we discuss the need to foster the development of equitable AI tools that are both accurate in and accessible to a diverse range of patient populations, including those in low-income to middle-income countries. We also discuss some of the challenges and potential solutions in attaining equitable AI, including addressing the historically limited representation of diverse populations in existing clinical datasets and the use of inadequate clinical validation methods. Additionally, we focus on extant sources of inequity including the type of model approach (such as deep learning, and feature engineering-based methods), the implications of dataset curation strategies, the need for rigorous validation across a variety of populations and settings, and the risk of introducing contextual bias that comes with developing tools predominantly in high-income countries. Artificial intelligence (AI) has the potential to dramatically change several aspects of oncology including diagnosis, early detection and treatment-related decision making. However, many of the underlying algorithms have been or are being trained on datasets that do not necessarily reflect the diversity of the target population. For this, and other reasons, many AI tools might not be suitable for application in less economically developed countries and/or in patients of certain ethnicities. In this Perspective, the authors discuss possible sources of inequity in AI development, and how to ensure the development and implementation of equitable AI tools for use in patients with cancer.","PeriodicalId":19079,"journal":{"name":"Nature Reviews Clinical Oncology","volume":null,"pages":null},"PeriodicalIF":81.1000,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Reviews Clinical Oncology","FirstCategoryId":"3","ListUrlMain":"https://www.nature.com/articles/s41571-024-00909-8","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ONCOLOGY","Score":null,"Total":0}
Abstract
Artificial intelligence (AI) stands at the threshold of revolutionizing clinical oncology, with considerable potential to improve early cancer detection and risk assessment, and to enable more accurate personalized treatment recommendations. However, a notable imbalance exists in the distribution of the benefits of AI, which disproportionately favour those living in specific geographical locations and in specific populations. In this Perspective, we discuss the need to foster the development of equitable AI tools that are both accurate in and accessible to a diverse range of patient populations, including those in low-income to middle-income countries. We also discuss some of the challenges and potential solutions in attaining equitable AI, including addressing the historically limited representation of diverse populations in existing clinical datasets and the use of inadequate clinical validation methods. Additionally, we focus on extant sources of inequity, including the type of model approach (such as deep learning and feature engineering-based methods), the implications of dataset curation strategies, the need for rigorous validation across a variety of populations and settings, and the risk of introducing contextual bias that comes with developing tools predominantly in high-income countries.

Artificial intelligence (AI) has the potential to dramatically change several aspects of oncology, including diagnosis, early detection and treatment-related decision making. However, many of the underlying algorithms have been or are being trained on datasets that do not necessarily reflect the diversity of the target population. For this, and other reasons, many AI tools might not be suitable for application in less economically developed countries and/or in patients of certain ethnicities. In this Perspective, the authors discuss possible sources of inequity in AI development, and how to ensure the development and implementation of equitable AI tools for use in patients with cancer.
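The abstract's call for rigorous validation across a variety of populations can be made concrete with a subgroup-stratified evaluation, in which model performance is reported separately for each demographic subgroup rather than only in aggregate. The sketch below is not from the article; it is a minimal, hypothetical Python example using synthetic data, in which the column names, group labels and model are all assumptions chosen for illustration.

```python
# Illustrative sketch only: subgroup-stratified validation of a binary classifier,
# in the spirit of the population-level validation the authors call for.
# All data, column names and group labels here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic cohort: two predictive features plus a demographic attribute ("group")
# used only for stratified reporting, never as a model input.
df = pd.DataFrame({
    "feature_1": rng.normal(size=n),
    "feature_2": rng.normal(size=n),
    "group": rng.choice(["A", "B", "C"], size=n, p=[0.6, 0.3, 0.1]),
})
logit = 1.2 * df["feature_1"] - 0.8 * df["feature_2"]
df["label"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

train, test = train_test_split(
    df, test_size=0.3, random_state=0, stratify=df["group"]
)
model = LogisticRegression().fit(train[["feature_1", "feature_2"]], train["label"])
test = test.assign(
    score=model.predict_proba(test[["feature_1", "feature_2"]])[:, 1]
)

# An overall AUC can mask poor discrimination in under-represented subgroups,
# so report performance separately for each group.
print(f"Overall AUC: {roc_auc_score(test['label'], test['score']):.3f}")
for name, sub in test.groupby("group"):
    auc = roc_auc_score(sub["label"], sub["score"])
    print(f"Group {name} (n={len(sub)}): AUC = {auc:.3f}")
```

In practice the same stratified reporting would be applied to external cohorts drawn from different institutions, countries and income settings, with calibration and decision-threshold metrics reported alongside discrimination.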
About the journal
Nature Reviews publishes clinical content authored by internationally renowned clinical academics and researchers, catering to readers in the medical sciences at postgraduate level and beyond. Although the journal is targeted at practising doctors, researchers and academics within specific specialties, it aims to remain accessible to readers across the medical disciplines. It features in-depth Reviews offering authoritative and current information that contextualizes topics within the history and development of a field. Perspectives, News & Views articles and the Research Highlights section provide topical discussion, opinion and filtered primary research from diverse medical journals.