Michael Rigby, Elisavet Andrikopoulou, Mirela Prgomet, Stephanie Medlock, Zoie Sy Wong, Kathrin Cresswell

Studies in health technology and informatics, vol. 332, pp. 52-56. Published 2025-10-02. DOI: 10.3233/SHTI251494
Validation and Evaluation as Essentials to Ensuring Safe AI Health Applications.
Artificial Intelligence (AI) is a rapidly growing technology within health informatics, but it is not subject to the rigor of scientific and safety validation required for all other new health techniques. Moreover, some functions of health AI can not only introduce biases but also reinforce and spread them by building on them. Thus, while health AI may bring benefit, it can also pose risks to safety and efficiency, as end users cannot rely on rigorous pre-implementation evidence or in-use validation. This review aims to revisit the principles and techniques already developed in health informatics, to build scientific principles for AI evaluation and the production of evidence. The Precautionary Principle provides further justification for such processes, and continuous quality improvement methods can add assurance. Developers should be expected to provide a robust evidence and evaluation trail, and clinicians and patient groups should expect this to be required by policy makers. This needs to be balanced with the need to develop pragmatic and agile evaluation methods in this fast-evolving area, to deepen knowledge and to guard against the risk of hidden perpetuation of errors.