Equity, autonomy, and the ethical risks and opportunities of generalist medical AI
Reuben Sass
AI and Ethics, vol. 5, no. 1, pp. 567–577
DOI: 10.1007/s43681-023-00380-8
Published: 2023-12-05 (Journal Article)
https://link.springer.com/article/10.1007/s43681-023-00380-8
Citations: 0
Abstract
This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance on the ethical risks and opportunities of novel AI systems in health care. The ethics of two applications of GMAI are examined: decision aids that inform and educate patients about certain treatments and conditions, and expanded AI-driven diagnosis and treatment recommendation. Emphasis is placed on the potential of GMAI to improve shared decision-making between patients and providers, which supports patient autonomy. Another focus is health equity, understood as the reduction of disparities in health outcomes and access to care facing underserved populations. Although GMAI presents opportunities to improve patient autonomy, health literacy, and health equity, premature or inadequately regulated adoption of GMAI could compromise both health equity and patient autonomy. On the other hand, significant risks to health equity and autonomy may also arise from failing to adopt GMAI that has been thoroughly validated and tested. A careful balancing of these risks and benefits will be required to secure the best ethical outcome if GMAI is ever employed at scale.