Francesca Aurora Sacchi, Fidelia Cascini, Noemi Conditi, Alice Ravizza, Margherita Daverio, Francesco Andrea Causio, Vittorio De Vita, Alessio Pivetta, Pierpaolo Maio, Luigi De Angelis, Francesco Baglivo, Giacomo Diedenhofen, Marcello Di Pumpo, Alessandro Belpiede, Diana Ferro, Luca Bolognini
{"title":"通往可信赖的医疗人工智能之路:可解释性的演变作用。","authors":"Francesca Aurora Sacchi, Fidelia Cascini, Noemi Conditi, Alice Ravizza, Margherita Daverio, Francesco Andrea Causio, Vittorio De Vita, Alessio Pivetta, Pierpaolo Maio, Luigi De Angelis, Francesco Baglivo, Giacomo Diedenhofen, Marcello Di Pumpo, Alessandro Belpiede, Diana Ferro, Luca Bolognini","doi":"10.1701/4573.45774","DOIUrl":null,"url":null,"abstract":"<p><p>The integration of artificial intelligence (AI) in medicine has applications across several clinical domains, spanning from disease prevention and diagnosis through treatment and long-term care, as well as remote care. However, many AI systems are inherently characterized by limited explainability, meaning the processes behind their outcomes cannot be clearly understood or communicated to humans, whether developers or end users. This viewpoint explores the importance of AI explainability in medicine by first tracing its evolution from a primarily ethical concern to a legal requirement. It then examines the connection between explainability and the trustworthiness of AI systems. 
Finally, it considers how explainability is approached from a technical standpoint and its inherent tension with achieving high accuracy.</p>","PeriodicalId":20887,"journal":{"name":"Recenti progressi in medicina","volume":"116 10","pages":"546-550"},"PeriodicalIF":0.0000,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The path to trustworthy medical AI: the evolving role of explainability.\",\"authors\":\"Francesca Aurora Sacchi, Fidelia Cascini, Noemi Conditi, Alice Ravizza, Margherita Daverio, Francesco Andrea Causio, Vittorio De Vita, Alessio Pivetta, Pierpaolo Maio, Luigi De Angelis, Francesco Baglivo, Giacomo Diedenhofen, Marcello Di Pumpo, Alessandro Belpiede, Diana Ferro, Luca Bolognini\",\"doi\":\"10.1701/4573.45774\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The integration of artificial intelligence (AI) in medicine has applications across several clinical domains, spanning from disease prevention and diagnosis through treatment and long-term care, as well as remote care. However, many AI systems are inherently characterized by limited explainability, meaning the processes behind their outcomes cannot be clearly understood or communicated to humans, whether developers or end users. This viewpoint explores the importance of AI explainability in medicine by first tracing its evolution from a primarily ethical concern to a legal requirement. It then examines the connection between explainability and the trustworthiness of AI systems. 
Finally, it considers how explainability is approached from a technical standpoint and its inherent tension with achieving high accuracy.</p>\",\"PeriodicalId\":20887,\"journal\":{\"name\":\"Recenti progressi in medicina\",\"volume\":\"116 10\",\"pages\":\"546-550\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Recenti progressi in medicina\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1701/4573.45774\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Medicine\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Recenti progressi in medicina","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1701/4573.45774","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}
The path to trustworthy medical AI: the evolving role of explainability.
The integration of artificial intelligence (AI) in medicine has applications across several clinical domains, spanning from disease prevention and diagnosis through treatment and long-term care, as well as remote care. However, many AI systems are inherently characterized by limited explainability, meaning the processes behind their outcomes cannot be clearly understood or communicated to humans, whether developers or end users. This viewpoint explores the importance of AI explainability in medicine by first tracing its evolution from a primarily ethical concern to a legal requirement. It then examines the connection between explainability and the trustworthiness of AI systems. Finally, it considers how explainability is approached from a technical standpoint and its inherent tension with achieving high accuracy.
About the journal:
Now in its sixtieth year, Recenti Progressi in Medicina continues to be a reliable point of reference and a fundamental working tool for broadening the cultural horizon of the Italian physician. Recenti Progressi in Medicina is a journal of internal medicine. This means recovering a global, integrated perspective, suited to avoiding both the narrow focus of specialist information and the fragmentation of generalist information.