Artificial intelligence in clinical decision support and the prediction of adverse events

S P Oei, T H G F Bakkes, M Mischi, R A Bouwman, R J G van Sloun, S Turco

Frontiers in Digital Health 7:1403047 (30 May 2025). DOI: 10.3389/fdgth.2025.1403047. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12162700/pdf/
This review examines the integration of artificial intelligence (AI) into healthcare, particularly for predicting adverse events, an application that holds promise for clinical decision support (CDS) but also presents significant challenges. Biases introduced during data acquisition, such as population shifts and data scarcity, threaten the generalizability of AI-based CDS algorithms across healthcare centers. Techniques such as resampling and data augmentation are crucial for addressing these biases, and external validation helps mitigate population bias. Biases can also emerge during model training, leading to underfitting or overfitting; regularization techniques are therefore needed to balance model complexity against generalizability. The lack of interpretability in AI models raises trust and transparency concerns, which argue for transparent algorithms and for rigorous testing on the specific hospital population before deployment. In addition, preserving human judgment alongside AI integration is essential to mitigate the risk of deskilling healthcare practitioners. Ongoing evaluation and adjustments to regulatory frameworks are crucial for the ethical, safe, and effective use of AI in CDS, underscoring the need for meticulous attention to data quality, preprocessing, model training, interpretability, and ethical considerations.
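The abstract names resampling and regularization only at a high level. As a minimal sketch of those two ideas, and not a method taken from the review itself, the Python example below oversamples a rare adverse-event class and fits an L2-regularized logistic regression with scikit-learn; the cohort, features, and parameter values are all synthetic placeholders.

```python
# Minimal, hypothetical sketch of two techniques the review names:
# resampling against class imbalance and regularization against overfitting.
# All data below are synthetic; nothing here comes from the reviewed studies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Synthetic cohort: 1,000 patients, 5 features, adverse events as the rare class.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=1000) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Resampling: oversample the minority (adverse-event) class in the training
# split only, so the test split keeps the original event prevalence.
minority = y_train == 1
X_up, y_up = resample(
    X_train[minority], y_train[minority],
    replace=True, n_samples=int((~minority).sum()), random_state=0,
)
X_bal = np.vstack([X_train[~minority], X_up])
y_bal = np.concatenate([y_train[~minority], y_up])

# Regularization: smaller C means a stronger L2 penalty, trading model
# complexity for generalizability.
clf = LogisticRegression(penalty="l2", C=0.1).fit(X_bal, y_bal)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```

Oversampling is applied to the training split only, so the held-out evaluation still reflects the deployment prevalence; the regularization strength C is the knob the abstract alludes to when it describes balancing model complexity against generalizability.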