Patient-Specific Explanations for Predictions of Clinical Outcomes.

ACI Open · Pub Date: 2019-07-01 · Epub Date: 2019-11-10 · DOI: 10.1055/s-0039-1697907
Mohammadamin Tajgardoon, Malarkodi J Samayamuthu, Luca Calzoni, Shyam Visweswaran
Citations: 3

Abstract

Background: Machine learning models used to predict clinical outcomes can be made more useful by augmenting each prediction with a simple, reliable, patient-specific explanation.

Objectives: This article uses physician reviewers to evaluate the quality of patient-specific explanations of predictions. The predictions are obtained from a machine learning model developed to predict dire outcomes (severe complications including death) in patients with community-acquired pneumonia (CAP).

Methods: Using a dataset of patients diagnosed with CAP, we developed a model to predict dire outcomes. On a set of 40 patients, who were predicted to be either at very high risk or at very low risk of developing a dire outcome, we applied an explanation method to generate patient-specific explanations. Three physician reviewers independently evaluated each explanatory feature in the context of the patient's data and were instructed to disagree with a feature if they did not agree with the magnitude of support, the direction of support (supportive versus contradictory), or both.
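
The abstract does not specify the explanation method used, so as a purely illustrative sketch, one common way to score per-feature support for an individual prediction is a leave-one-feature-out perturbation: replace each feature with a reference value and measure how the predicted risk changes. All data, model choices, and function names below are hypothetical stand-ins, not the paper's method.

```python
# Illustrative sketch only: per-feature attribution for one patient's
# prediction via mean-replacement perturbation. Not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 200 patients, 5 clinical features.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(model, X_train, x):
    """Score each feature by how much replacing it with its training mean
    changes the predicted risk; the sign gives the direction of support
    (supportive vs. contradictory), the size gives the magnitude."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(x.size):
        x_pert = x.copy()
        x_pert[j] = X_train[:, j].mean()
        scores.append(base - model.predict_proba(x_pert.reshape(1, -1))[0, 1])
    return np.array(scores)  # positive = feature supports the prediction

scores = explain(model, X, X[0])
```

In this framing, each score plays the role of one "explanatory feature" that a reviewer could agree or disagree with on magnitude and direction.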

Results: The model used for generating predictions achieved an F1 score of 0.43 and an area under the receiver operating characteristic curve (AUROC) of 0.84 (95% confidence interval [CI]: 0.81-0.87). Interreviewer agreement between two reviewers was strong (Cohen's kappa coefficient = 0.87) and fair to moderate between the third reviewer and the others (Cohen's kappa coefficients = 0.49 and 0.33). Agreement rates between reviewers and generated explanations, defined as the proportion of explanatory features with which the majority of reviewers agreed, were 0.78 for actual explanations and 0.52 for fabricated explanations, and the difference between the two agreement rates was statistically significant (chi-square = 19.76, p-value < 0.01).
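
As a hedged illustration of how the reported quantities can be computed, the sketch below uses scikit-learn and SciPy on small made-up arrays; none of the numbers are the paper's data, and the 2x2 agreement table is invented solely to demonstrate the chi-square comparison of actual versus fabricated explanations.

```python
# Illustration only: computing F1, AUROC, Cohen's kappa, and a chi-square
# test on hypothetical data, mirroring the metrics reported in the abstract.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, cohen_kappa_score
from scipy.stats import chi2_contingency

# Hypothetical outcome labels and predicted risks.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7])
y_pred = (y_prob >= 0.5).astype(int)

f1 = f1_score(y_true, y_pred)
auroc = roc_auc_score(y_true, y_prob)

# Interreviewer agreement between two reviewers' agree/disagree ratings.
reviewer_a = [1, 1, 0, 1, 0, 1, 1, 0]
reviewer_b = [1, 1, 0, 1, 1, 1, 1, 0]
kappa = cohen_kappa_score(reviewer_a, reviewer_b)

# Chi-square test comparing agreement counts for actual vs. fabricated
# explanations (counts fabricated here for illustration).
table = np.array([[78, 22],   # actual: agreed, disagreed
                  [52, 48]])  # fabricated: agreed, disagreed
chi2, p, _, _ = chi2_contingency(table)
```

Note that `chi2_contingency` applies Yates' continuity correction to 2x2 tables by default; the abstract does not state which chi-square variant was used.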

Conclusion: There was good agreement among physician reviewers on patient-specific explanations that were generated to augment predictions of clinical outcomes. Such explanations can be useful in interpreting predictions of clinical outcomes.
