Mitigated deployment strategy for ethical AI in clinical settings
Sahar Abdulrahman, Markus Trengove
BMJ Health & Care Informatics, 32(1), published 2025-07-13
DOI: 10.1136/bmjhci-2024-101363
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12258279/pdf/
Citations: 0
Abstract
Clinical diagnostic tools can disadvantage subgroups because of poor model generalisability, which can be caused by unrepresentative training data. Practical deployment solutions that mitigate harm to subgroups from models with differential performance have yet to be established. This paper builds on existing work on a selective deployment approach, in which subgroups for whom the model performs poorly are excluded from deployment. As an alternative, the proposed 'mitigated deployment' strategy builds safety nets into clinical workflows to safeguard under-represented groups within a universal deployment. This approach relies on human-artificial intelligence collaboration and postmarket evaluation to continually improve model performance across subgroups using real-world data. A real-world case study is used to explore the benefits and limitations of mitigated deployment. This adds to the tools available to healthcare organisations considering how to safely deploy models with differential performance across subgroups.
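The abstract's core mechanism, postmarket evaluation that identifies subgroups with differential performance so that workflow safety nets (such as mandatory human review) can be targeted at them, can be illustrated with a minimal sketch. This is not from the paper; the function names, data layout, and the 0.80 sensitivity floor are all hypothetical assumptions chosen for illustration.

```python
# Illustrative sketch of subgroup-stratified postmarket monitoring.
# All names and thresholds here are hypothetical, not from the paper.
from collections import defaultdict

def subgroup_sensitivity(records):
    """Compute per-subgroup sensitivity (true-positive rate) from
    (subgroup, y_true, y_pred) records collected after deployment."""
    tp = defaultdict(int)   # true positives per subgroup
    pos = defaultdict(int)  # actual positives per subgroup
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

def flag_for_safety_net(sensitivities, floor=0.80):
    """Return subgroups whose sensitivity falls below a performance floor,
    i.e. candidates for a clinical-workflow safety net such as
    mandatory human review in a mitigated (universal) deployment."""
    return sorted(g for g, s in sensitivities.items() if s < floor)

# Toy real-world data: (subgroup, true label, model prediction)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 1, 1),              # A: 3/4 = 0.75
    ("B", 1, 1), ("B", 1, 1), ("B", 1, 1), ("B", 1, 1), ("B", 1, 0), # B: 4/5 = 0.80
]
sens = subgroup_sensitivity(records)
print(flag_for_safety_net(sens))  # ['A']
```

Under this sketch, subgroup A falls below the floor and is routed through the safety net, while the model continues to serve all subgroups; contrast this with selective deployment, which would exclude subgroup A entirely.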