Future healthcare journal | Pub Date: 2024-09-19 | eCollection Date: 2024-09-01 | DOI: 10.1016/j.fhj.2024.100165
Democratising artificial intelligence in healthcare: community-driven approaches for ethical solutions.
Ceilidh Welsh, Susana Román García, Gillian C Barnett, Raj Jena
Future healthcare journal, vol. 11, issue 3, article 100165. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452836/pdf/
Abstract: The rapid advancement and widespread adoption of artificial intelligence (AI) have ushered in a new era of possibilities in healthcare, ranging from clinical task automation to disease detection. AI algorithms have the potential to analyse medical data, enhance diagnostic accuracy, personalise treatment plans and predict patient outcomes, among other applications. With the surge in AI's popularity, its development is outpacing policy and regulatory frameworks, leading to concerns about ethical considerations and collaborative development. Healthcare faces its own ethical challenges, including biased datasets, under-representation and inequitable access to resources, all of which contribute to mistrust in medical systems. To address these issues in the context of AI healthcare solutions, and to avoid perpetuating existing inequities, it is crucial to involve communities and stakeholders in the AI lifecycle. This article discusses four community-driven approaches for co-developing ethical AI healthcare solutions: understanding and prioritising needs, defining a shared language, promoting mutual learning and co-creation, and democratising AI. These approaches emphasise bottom-up decision-making that reflects and centres the needs and values of impacted communities, and they offer actionable considerations for creating equitable AI solutions in healthcare, fostering a more just and effective healthcare system that serves patient and community needs.
Future healthcare journal | Pub Date: 2024-09-19 | eCollection Date: 2024-09-01 | DOI: 10.1016/j.fhj.2024.100183
Artificial intelligence in the NHS: Moving from ideation to implementation.
Anmol Arora, Tom Lawton
Future healthcare journal, vol. 11, issue 3, article 100183. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452829/pdf/
Future healthcare journal | Pub Date: 2024-09-19 | eCollection Date: 2024-09-01 | DOI: 10.1016/j.fhj.2024.100177
Fairness in AI for healthcare.
Siân Carey, Allan Pang, Marc de Kamps
Future healthcare journal, vol. 11, issue 3, article 100177. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452831/pdf/
Abstract: Artificial intelligence (AI) is a technology that enables computers to simulate human intelligence, and it has the potential to improve healthcare in a multitude of ways. However, there is also a risk that it may perpetuate, or exacerbate, current disparities. We discuss the problem of bias in healthcare and AI, and highlight some of the ongoing and future solutions being researched in this area.
Future healthcare journal | Pub Date: 2024-09-19 | eCollection Date: 2024-09-01 | DOI: 10.1016/j.fhj.2024.100162
How should we train clinicians for artificial intelligence in healthcare?
Rohan Misra, Pearse A Keane, Henry David Jeffry Hogg
Future healthcare journal, vol. 11, issue 3, article 100162. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452832/pdf/
Future healthcare journal | Pub Date: 2024-09-19 | eCollection Date: 2024-09-01 | DOI: 10.1016/j.fhj.2024.100171
Explaining decisions without explainability? Artificial intelligence and medicolegal accountability.
Melissa D McCradden, Ian Stedman
Future healthcare journal, vol. 11, issue 3, article 100171. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452834/pdf/
(Graphical abstract only; no text abstract available.)
Future healthcare journal | Pub Date: 2024-09-19 | eCollection Date: 2024-09-01 | DOI: 10.1016/j.fhj.2024.100180
Two paths for health AI governance: paternalism or democracy.
Cori Crider
Future healthcare journal, vol. 11, issue 3, article 100180. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452825/pdf/
Abstract: This article assesses the cyclical failures of NHS data modernisation programmes and considers that they fail because they proceed from a faulty, excessively paternalistic governance model. Bias in algorithmic delivery of healthcare, a demonstrated problem with many existing health applications, is another serious risk. To regain trust and move towards better use of data in the NHS, we should democratise the development of these systems and de-risk operational systems from issues such as automation bias. For comparison, the essay explores two approaches to trust and bias problems in other contexts: Taiwan's digital democracy, and American Airlines' struggles to overcome automation bias among its pilots.
Future healthcare journal | Pub Date: 2024-09-19 | eCollection Date: 2024-09-01 | DOI: 10.1016/j.fhj.2024.100179
Moving beyond the AI sales pitch - Empowering clinicians to ask the right questions about clinical AI.
Ibrahim Habli, Mark Sujan, Tom Lawton
Future healthcare journal, vol. 11, issue 3, article 100179. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452827/pdf/
Abstract: We challenge the dominant technology-centric narrative around clinical AI. To realise the true potential of the technology, clinicians must be empowered to take a whole-system perspective and assess the suitability of AI-supported tasks for their specific, complex clinical setting. Key factors include the AI's capacity to augment human capabilities, evidence of clinical safety beyond general performance metrics, and equitable clinical decision-making by the human-AI team. Proactively addressing these issues could pave the way for accountable clinical buy-in and a trustworthy deployment of the technology.
Future healthcare journal | Pub Date: 2024-09-19 | eCollection Date: 2024-09-01 | DOI: 10.1016/j.fhj.2024.100181
Accidental injustice: Healthcare AI legal responsibility must be prospectively planned prior to its adoption.
Kit Fotheringham, Helen Smith
Future healthcare journal, vol. 11, issue 3, article 100181. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452828/pdf/
Abstract: This article contributes to the ongoing debate about legal liability and responsibility for patient harm in scenarios where artificial intelligence (AI) is used in healthcare. We note that, due to the structure of negligence liability in England and Wales, it is likely that clinicians would be held solely negligent for patient harms arising from software defects, even though AI algorithms will share the decision-making space with clinicians. Drawing on previous research, we argue that the traditional model of negligence liability for clinical malpractice cannot be relied upon to offer justice for clinicians and patients. There is a pressing need for law reform to consider the use of risk pooling, alongside detailed professional guidance for the use of AI in healthcare spaces.